In light of growing interest in deep learning algorithms and the Kingdom of Saudi Arabia's plans to rely increasingly on artificial intelligence techniques, Saudi specialists have begun issuing warnings. Among them, Saudi doctoral researcher Ibrahim Al-Muslim has cautioned against the bias of artificial intelligence.

Al-Muslim published the first Saudi study on artificial intelligence governance and on reducing its risks and negative effects. The study holds that artificial intelligence rests on simulating the human mind and its working patterns, such as the ability to learn, infer, and react to situations not explicitly programmed into the machine; that is, the machine is programmed to learn. The study divides artificial intelligence into levels, some real and some still fictional: weak AI, such as a self-driving car; general AI, which is not limited to a specific field, such as an automated theorem prover used across research fields; strong AI, able to simulate the functions of the human brain; and superintelligent AI, surpassing human intelligence in various fields, like the robots of science fiction films.

The study summarized the risks of artificial intelligence as arising from three stages in the technology's life cycle: development, where errors occur while building systems and measuring their performance and security; application, with its risks of misuse; and the risks that follow from widespread adoption.

The study highlighted the bias of artificial intelligence, classifying bias as a human trait. Because artificial intelligence simulates human intelligence by learning from the data fed to it, it inherits that data's flaws: if the data encodes racism or discrimination by class or gender, artificial intelligence systems will reinforce those ideas.
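The mechanism the study describes can be illustrated with a minimal sketch (not taken from the study itself): a toy model that learns average scores per group from fabricated, historically biased hiring data, and therefore reproduces the very gap present in its training data.

```python
# Illustrative sketch: a model trained on biased historical data
# inherits the bias. All data below is fabricated for illustration.

def train_group_means(records):
    """Learn the average past 'hire score' per group -- a stand-in
    for any model that fits patterns present in its training data."""
    totals, counts = {}, {}
    for group, score in records:
        totals[group] = totals.get(group, 0.0) + score
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

# Hypothetical history: group B was systematically scored lower
# for reasons unrelated to ability.
history = [("A", 0.90), ("A", 0.80), ("A", 0.85),
           ("B", 0.50), ("B", 0.45), ("B", 0.55)]

model = train_group_means(history)
print(model)  # the learned scores reproduce the historical gap
```

Nothing in the training step is malicious; the discrimination enters purely through the data, which is the point the study makes.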

The study recommended a set of measures against bias: avoiding legally protected attributes, such as race, religion, or gender, in the data used; requiring a bias study whenever financial support is requested for research proposals or government projects; granting accreditation certificates from government institutions or associations that confirm companies or products follow best practices and standards; enacting mandatory laws that create an appropriate regulatory framework, so that artificial intelligence systems can be held accountable for any exclusionary or biased practices; educating developers in the ethics of artificial intelligence, so they can recognize their personal biases and keep them out of the systems they build; and ensuring fair and equal representation within developer teams to reduce individual biases.

To address the unexpected risks of autonomous systems, the study sees a need to impose compulsory insurance on the manufacturers of products that include such systems; to require a risk study whenever financial support is requested for research proposals or government projects; to adopt technologies gradually so they can first be tested on a small scale; to draw on the expertise of aviation-safety professionals and of sensitive fields such as cybersecurity; to test autonomous systems extensively; and to study the consequences of every decision they produce.

The study also points to vulnerabilities in artificial intelligence applications, among them data poisoning, in which injected data deflects the system's learning during the application phase.
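To make the idea concrete, here is a minimal sketch (not from the study, all values fabricated) of data poisoning against a system that keeps learning while deployed: a naive anomaly detector tracks a running average of inputs, and an attacker feeds it gradually larger values until a genuinely anomalous value no longer stands out.

```python
# Illustrative sketch: poisoning an online learner during deployment.
# The detector flags values far from an exponentially weighted running
# mean; crafted inputs slowly drag that mean toward the attack value.

class OnlineDetector:
    def __init__(self, alpha=0.4):
        self.alpha = alpha   # weight given to each new observation
        self.mean = None     # running estimate of "normal"

    def update(self, x):
        if self.mean is None:
            self.mean = float(x)
        else:
            self.mean = (1 - self.alpha) * self.mean + self.alpha * x

    def is_anomalous(self, x, tolerance=10.0):
        return abs(x - self.mean) > tolerance

det = OnlineDetector()
for x in [50, 52, 49, 51]:      # legitimate traffic around 50
    det.update(x)

print(det.is_anomalous(100))    # True: the attack value stands out

for x in range(55, 101, 5):     # poisoning: gradually larger inputs
    det.update(x)

print(det.is_anomalous(100))    # False: the same attack now slips through
```

The defense implications match the study's recommendations: gradual adoption and extensive testing give an opportunity to notice such drift before it matters.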

