The Guardian newspaper warned of the dangers of developing smart devices with capabilities comparable to those of the human mind.
In its editorial, the paper said the risks of programming computers and other smart devices to behave like human beings are numerous, chief among them the inability of the inventors of those machines to explain the knowledge the machines acquire.
Software engineers and developers of artificial intelligence should take the ethical implications of their work seriously, the paper said.
It cited Brad Smith, Microsoft's president, as saying that technology companies should stop acting as if everything that is legal is also acceptable, and that even if technology itself could be considered ethically neutral, the makers of that technology cannot.
The paper pointed out that awareness of the moral responsibility involved in using artificial intelligence is sometimes clearest when its direct impact on people is visible.
It was easy to see the ethical problem in Microsoft selling its facial recognition technology to US Immigration and Customs Enforcement (ICE) while the Trump administration was separating immigrant children from their parents at the southern border of the United States.
The newspaper praised the moral stand of the more than 3,000 Google employees who objected to a deal between the company and the Pentagon to use artificial intelligence for military purposes, including drone technology, prompting the company to abandon the deal.
Simulating the subconscious
According to the newspaper, polls show that Americans do not support the development of artificial intelligence for military purposes, but this view may change if the United States' adversaries begin to use artificial intelligence militarily.
It also pointed out that the subconscious, the product of thousands of years of evolution, is the most difficult and complex faculty of the human brain to simulate.
The problem with simulating the subconscious is that AI engineers cannot identify the defects that may affect these machines, a serious problem given how difficult it is to predict the capabilities of their inventions.
Humans have been able to build smart devices that learn, but programming experts are still unable to understand the knowledge those machines acquire.