Artificial intelligence has invaded every aspect of modern life, from “smart” vacuum cleaners to self-driving vehicles and advanced technologies for diagnosing diseases.

While its promoters promise it will revolutionize human life, critics warn that the technology risks handing fateful, life-altering decisions over to machines.

That concern is shared by regulators in Europe and North America.

The European Union is likely to pass its Artificial Intelligence Act next year, legislation intended to rein in the age of the algorithm.

The United States recently published a blueprint for an AI bill of rights, while Canada is weighing legislation in the same field.

China is often cited as a cautionary example for its use of biometric data, facial recognition and other technologies to build a powerful system of control.

Gry Hasselbalch, a Danish researcher who advises the EU on the controversial technology, warns that the West also risks creating "totalitarian infrastructures".

"I see this move as a big threat, whatever the benefits," she told AFP.

But before taking action, regulators face the daunting task of defining the exact concept of AI.

"fool's errand"

Suresh Venkatasubramanian, co-author of the AI bill of rights blueprint, said trying to define AI was a "fool's errand".

He said in a tweet that any technology affecting people's rights should fall within the blueprint's scope.

The 27-nation European Union, however, is taking the more tortuous route of attempting to define this sprawling field.

Its draft law lists the approaches it regards as AI, including any computer system that incorporates automation.

But the problem stems from the shifting meaning of the term AI.

For decades, the term described attempts to create machines that mimic human thinking.

In the early 2000s, funding for this research largely dried up.

When the giant Silicon Valley companies emerged, they revived the term artificial intelligence as a catchy label for their programs and algorithms.

That rebranding allowed the companies to target users with advertising and content, helping them earn hundreds of billions of dollars.

"Artificial intelligence was a way for these companies to make more use of their surveillance data and to obscure what was going on," Meredith Whittaker, a former Google employee and co-founder of New York University's AI Now Institute, told AFP.

Both the European Union and the United States concluded that any definition of AI should be as comprehensive as possible, but from this point on, the two Western powers took different paths.

The EU draft law on artificial intelligence is over 100 pages long.

Among its most striking proposals is a complete ban on certain "high-risk" technologies, such as the biometric surveillance tools used in China.

The draft also proposes heavily restricting the use of AI tools by immigration officials, police forces and judges.

Hasselback notes that some of the technologies "were very problematic in terms of basic rights."

The American blueprint, by contrast, is a brief set of principles couched in aspirational language, offering advice such as the need for people "to be protected from unsafe or ineffective systems".

The blueprint, which builds on existing laws, came from the White House.

Experts believe a US law on artificial intelligence is unlikely before 2024 at the earliest, because Congress is deadlocked on the issue.


excessive regulation

Opinions differ on the merits of the two approaches.

"The subject of artificial intelligence desperately needs a law," Gary Marcus of New York University told AFP.

He notes that "large language models", the AI behind chatbots, translation tools and predictive-text software, can be used to spread disinformation.

Whittaker is skeptical of the value of laws that target AI itself rather than the "surveillance models" on which it is based.

"If the law does not tackle the issue in a substantive way, I think it will amount to a temporary fix, like putting a band-aid on a wound," she says.

On the other hand, other experts welcomed the American approach.

Researcher Sean McGregor, who logs AI failures in the AI Incident Database, says AI is an easier target for regulators than the more nebulous concept of privacy.

He warns, however, against laws that over-regulate the technology.

"Existing authorities can regulate artificial intelligence," he tells AFP, citing as examples the US Federal Trade Commission and the Department of Housing and Urban Development.

One point the experts agree on is the need to dispel the hype and mystery that surround AI.

McGregor insists it is "not magic", likening artificial intelligence to a very sophisticated Microsoft Excel spreadsheet.