Artificial intelligence applications are expanding constantly, across all sectors and at every level. They open new horizons for humanity and new ways of working and living through innovative technical solutions, pointing toward a digitally connected future in which machines and humans work together to achieve impressive results that were not possible before.

But this prosperous future requires, first of all, an ethical strategy for dealing with artificial intelligence, one that maximizes the benefits and reduces the negatives that may accompany the application of this advanced technology.

In fact, developing an ethical strategy for dealing with artificial intelligence is one thing, and implementing that strategy on the ground is quite another. Major technology companies have announced such strategies and recruited scientists toward this goal, but when it came to application the situation was different, and these companies did not adhere to the strategies they set for themselves. Perhaps the greatest example of this is Google.

Google has worked for many years to present itself as an ethically responsible organization (Reuters)

Google between theory and practice

Google has worked for many years to present itself as an institution that deals with artificial intelligence responsibly and ethically, taking into account the interests of its customers around the world. It has employed distinguished scientists and academics, acclaimed in this field, in its research centers and laboratories, published research on ethical ways of dealing with artificial intelligence, and participated in the largest international conferences specialized in the field.

With all that said, the company's reputation has been seriously damaged in the recent period, and it is now struggling to convince people and governments of its good, "ethical" handling of the huge amount of data it holds, according to the technology news site The Verge.

The company's decision to fire Timnit Gebru and Margaret Mitchell, two of its top researchers in the field of artificial intelligence ethics, who were studying the downsides of the technology behind Google's popular search engine, sparked large waves of protest inside and outside the giant company.

Scientists and academics registered their strong dissatisfaction with this arbitrary decision in various ways: two of them withdrew from a workshop organized by Google, a third refused a $60,000 grant from the company, and a fourth pledged not to accept any funding from it in the future.

Two of the company's top engineers also resigned in protest at the treatment of Gebru and Mitchell, and they were recently followed by a senior Google AI figure, Samy Bengio, a research director who oversaw hundreds of employees working in this field at the company, according to the same report by The Verge.

Google's dismissal of Gebru and Mitchell prompted thousands of company employees to protest. The two scientists had earlier called for more diversity and inclusion among Google's research staff, and expressed concern that the company had begun to censor papers critical of its products.

"What happened makes me deeply concerned about Google's commitment to ethics and diversity within the company, and what's even more worrying is that they have shown a willingness to crack down on science that is inconsistent with their business interests," said Scott Nikom, an assistant professor at the University of Texas who works in robotics and machine learning.

“It certainly hurts their credibility in the field of fairness and AI ethics,” said Deb Raji, a fellow at the Mozilla Foundation who works on AI ethics.

Many questions have previously been raised about the ethics with which Google handles the huge amount of data it collects from billions of people around the world, the way it collects this data, and how it uses it to earn billions of dollars in profits every year at users' expense, in addition to the many monopoly and abuse-of-power cases brought against the company in many countries.

All of this raises anew the question of ethics in dealing with the artificial intelligence that Google and other tech giants are using to gain ever more power and authority.

In theory, Google has a comprehensive system for managing the ethics of dealing with artificial intelligence, and it was among the most prominent companies in the world to adopt such a system, establishing a specialized department for this purpose in 2018, as reported by the American newspaper The Washington Post.

Google has a comprehensive system for managing the ethics of dealing with artificial intelligence (Shutterstock)

Google has set out a list of goals that it says it seeks to uphold in its dealings with artificial intelligence; we relay them here as stated on the company's website:

To be socially useful

The wide spread of new technologies increasingly touches society as a whole, and advances in artificial intelligence will have implications across a wide range of areas, including healthcare, security, energy, transportation, manufacturing, and entertainment.

In considering the potential uses of artificial intelligence technologies, the company says it will take a wide range of social and economic factors into account as its work continues, and that it believes the expected benefits substantially exceed the potential risks and downsides. It adds that it will strive to provide high-quality, accurate information using artificial intelligence, while continuing to respect the cultural, social, and legal norms of the countries in which it operates.

Anti-bias

AI algorithms and data sets can reflect, reinforce, or reduce unfair biases, and the company says it understands that distinguishing between fair and unfair biases is not easy, and that the distinction varies across cultures and societies.

It says it will seek to avoid unfair effects on people, particularly effects related to sensitive characteristics such as race, gender, nationality, income, sexual orientation, ability, and political or religious belief.
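
To make "measuring unfair bias" concrete, here is a minimal illustrative sketch in Python, not drawn from Google's actual tooling, that computes the demographic parity gap of a model's decisions across groups; all names and data are hypothetical:

    from collections import defaultdict

    def demographic_parity_gap(decisions):
        # decisions: list of (group, outcome) pairs, outcome 1 = positive decision
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            positives[group] += outcome
        rates = {g: positives[g] / totals[g] for g in totals}
        # Gap between the best- and worst-treated groups; 0 means equal rates.
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical decisions from a screening model, tagged by group label.
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(sample)
    print(rates)               # group A approved ~67%, group B ~33%
    print(f"gap = {gap:.2f}")  # 0.33, a large gap that flags possible unfair bias

A single number like this cannot settle what counts as fair, which is precisely the point above about the distinction varying across cultures, but it does make a disparity visible and discussable.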

Safety and security

Google asserts that it will continue to develop and apply robust practices that respect safety and security principles and avoid unintended consequences that could lead to harm. It adds that it will design its AI systems to be appropriately cautious, develop them according to best practices in safety research, test AI technologies in restricted environments, and monitor their operation after deployment.
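
As a purely illustrative sketch of what "test in a restricted environment, then monitor in operation" can mean in practice, the snippet below gates a model behind offline checks and watches its confidence at runtime; the thresholds, metrics, and function names are hypothetical, not Google's actual pipeline:

    def safe_to_promote(eval_results, thresholds):
        # Gate a model behind offline checks before it leaves the sandbox.
        return all(eval_results.get(metric, 0.0) >= minimum
                   for metric, minimum in thresholds.items())

    def too_many_low_confidence(prediction_scores, alert_share=0.2):
        # Crude runtime monitor: alert if low-confidence outputs pile up.
        low = sum(1 for score in prediction_scores if score < 0.5)
        return (low / len(prediction_scores)) > alert_share

    # Offline evaluation inside the restricted environment:
    results = {"accuracy": 0.94, "worst_group_accuracy": 0.88}
    if safe_to_promote(results, {"accuracy": 0.90, "worst_group_accuracy": 0.85}):
        print("promote to limited release")

    # After deployment, keep watching the model's behavior:
    if too_many_low_confidence([0.91, 0.34, 0.88, 0.42, 0.97]):
        print("alert: unusual share of low-confidence predictions")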

Responsibility towards people

Google says that it will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and a right of appeal for users, and that the company's AI technologies will be subject to appropriate human direction and control.

Privacy Guarantee

The company asserts that it respects privacy principles when developing and using its AI technologies: it will give users the opportunity to consent to data collection, respect their privacy, and provide appropriate transparency and control over the use of the data collected.
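
As an illustration of what consent plus transparency can look like in code, here is a hypothetical sketch, not Google's actual implementation, in which data is recorded only when the user has consented to that specific purpose and every collection event lands in an auditable log:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class UserProfile:
        user_id: str
        consented_purposes: set = field(default_factory=set)

    audit_log = []  # transparency: every collection event is recorded here

    def collect(user, purpose, data):
        # Store data only if the user consented to this specific purpose.
        if purpose not in user.consented_purposes:
            return False  # no consent, nothing is collected
        audit_log.append({
            "user": user.user_id,
            "purpose": purpose,
            "fields": sorted(data),
            "at": datetime.now(timezone.utc).isoformat(),
        })
        # ... persist `data` to storage here ...
        return True

    alice = UserProfile("alice", consented_purposes={"analytics"})
    print(collect(alice, "analytics", {"page": "/home"}))    # True, logged
    print(collect(alice, "advertising", {"page": "/home"}))  # False, refused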

Responsible AI requires creating systems that adhere to basic guidelines distinguishing legitimate from illegitimate uses (Getty Images)

Adhering to the highest standards of scientific excellence

Technological innovation is rooted in a commitment to the scientific method, intellectual rigor, integrity and cooperation.

Artificial intelligence provides tools that have the potential to open up new areas of scientific research in many important fields, such as biology, chemistry, medicine, and environmental science.

Google aspires to high levels of scientific excellence in its work on the development of artificial intelligence, and confirms that it will responsibly share the knowledge it gains by publishing educational materials, best practices, and research that enable more people to develop useful applications of artificial intelligence.

Availability for beneficial use

Many technologies have multiple uses. Google says that, for its part, it will work to limit applications that may be harmful or abusive, and will evaluate the likely uses of its various artificial intelligence technologies to ensure they are useful to users.

Steps to build an ethical strategy to deal with artificial intelligence

All of the above is commendable, and no one can say otherwise; yet when it came to implementation, the result was often different. So the question is: how can one ensure that AI remains aligned with the business models and core values that such companies claim to follow?

Responsible AI entails creating systems that adhere to basic guidelines distinguishing legitimate from illegitimate uses; to be considered responsible, AI systems must be transparent, human-centred, interpretable, and socially useful.
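
One widely cited practice behind "transparent and interpretable" systems is the model card, a structured summary published alongside a model, an approach associated with Margaret Mitchell, one of the researchers named above, and her co-authors. The sketch below shows the idea as a plain data structure; every field name and value here is hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        # A transparency record intended to ship alongside a model.
        name: str
        intended_use: str
        out_of_scope_uses: list = field(default_factory=list)
        training_data: str = ""
        known_limitations: list = field(default_factory=list)
        fairness_evaluations: dict = field(default_factory=dict)

    card = ModelCard(
        name="toy-sentiment-v1",
        intended_use="Ranking customer feedback by tone for human review.",
        out_of_scope_uses=["Employment, credit, or medical decisions"],
        training_data="Public product reviews, English only.",
        known_limitations=["Unreliable on sarcasm and non-English text"],
        fairness_evaluations={"demographic_parity_gap": 0.04},
    )
    print(card.intended_use)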

The researcher and writer Prangya Pandab identifies five basic steps for building and implementing an ethical strategy for dealing with artificial intelligence, in an article published by the EnterpriseTalk platform:

Start at the top

Many corporate managers are still unaware of how to build and implement responsible AI within their companies and organizations. Leaders must therefore be educated about the principles of trustworthy AI, so that they can take a clear stand on its ethics and ensure compliance with applicable laws and regulations.

Risk assessment

It is necessary to understand the risks that may accompany applications of this new technology. Because artificial intelligence is still emerging, the laws, guidelines, and standards for dealing with it have not been settled definitively across the world, and the risks and threats it may pose are hard to pin down; ongoing assessment of the risks created by applying this technology is therefore essential and critical.

Define a baseline

Trustworthy AI processes must be integrated into the company's management system. Company policies must be updated to ensure that the application of artificial intelligence at work does not produce negative consequences for human rights inside or outside the organization, and to resolve any problems that may arise in this context in the future. This requires adopting a reliable compliance policy that combines technical and non-technical safeguards to ensure the best results.

Increase awareness at the company level

Companies must educate their employees about the legal, social, and ethical implications of engaging with AI, explain the risks associated with it, and show how those risks can be reduced.

In this context, holding training workshops on the ethics of dealing with artificial intelligence will matter more than the rigid compliance rulebooks that companies distribute to their employees.

Third parties

An artificial intelligence system is rarely built by a single company or organization; other parties are almost always involved in the process. It is therefore of great importance that these external parties and organizations adhere to the company's ethical strategy for dealing with artificial intelligence. There must be mutual commitments among the various institutions working on the system to ensure the technology's reliability, including audits of suppliers that cover how they deal with potential adverse human rights impacts.