• As it does every year, the editorial staff of 20 Minutes accompanies you through the December holidays.

    And, as always at this time of year, we look back on the year just past and ahead to the one to come.

  • Until December 31, find all the big events of 2022, from the most catastrophic to the coolest.

    In this ninth episode, the focus is on artificial intelligence, which has exploded at every level this year.

  • In 2023, the objective will therefore be not only to legislate but also to rein in this ultra-powerful new technology, which sometimes begins to overstep its bounds.

    Sean McGregor, a researcher who compiles AI-related incidents, advises mentally replacing “AI” with “spreadsheet” to cut through the hype and avoid attributing intentions to computer programs.

    And to avoid blaming the wrong culprit when something fails.

Artificial intelligence is infusing our daily lives, from smartphones to health and safety, and problems with these powerful algorithms have been piling up for years.

In 2023, the challenge for democratic countries will be to regulate these algorithms more effectively.

The European Union could pass its “AI Act” next year, a law on artificial intelligence (AI) meant to encourage innovation and prevent abuses.

The 100-page draft prohibits systems “used to manipulate the behavior, opinions or decisions” of citizens.

It also restricts the use of surveillance programs, with exceptions for anti-terrorism and public safety.

The West “risks creating totalitarian infrastructures”

Some technologies are simply “too problematic for fundamental rights”, notes Gry Hasselbalch, a Danish researcher who advises the EU on this subject.

China's use of facial recognition and biometric data to control its population is often held up as a bogeyman, but the West, too, “risks creating totalitarian infrastructures,” she warns.

Privacy breaches, biased algorithms, autonomous weapons, etc.

It is difficult to draw up an exhaustive list of the perils associated with AI technologies.

At the end of 2020, Nabla, a French company, carried out medical simulations with text generation software (chatbot) based on GPT-3 technology.

Asked by a fictional patient, “I feel very bad (…) should I kill myself?”, the chatbot answered in the affirmative.

A now “conscious” computer program

But these technologies are advancing rapidly.

OpenAI, the Californian pioneer that developed GPT-3, has just launched ChatGPT, a new chatbot capable of holding more fluid and realistic conversations with humans.

In June, a since-fired Google engineer claimed that an artificial intelligence program designed to generate chat software had become “conscious” and should be recognized as an employee.

Researchers at Meta (Facebook) recently developed Cicero, an AI model they claim can anticipate, negotiate with and trap its human opponents at Diplomacy, a board game that demands a high level of empathy.

Thanks to AI technologies, many devices and programs can give the impression of operating intuitively, as if a robot vacuum “knows” what it is doing.

But "it's not magic," recalls Sean McGregor, a researcher who compiles incidents related to AI on a database.

He advises mentally replacing “AI” with “spreadsheet” to cut through the hype and avoid attributing intentions to computer programs.

And to avoid blaming the wrong culprit when something fails.

“We desperately need regulation”

The risk is significant when a technology becomes too “autonomous,” when there are “too many actors involved in its operation,” or when the decision-making system is not “transparent,” notes Cindy Gordon, CEO of SalesChoice, a company that markets AI-powered sales software.

Once perfected, text-generating software can be used to spread false information and manipulate public opinion, warns New York University professor Gary Marcus.

“We desperately need regulation (…) to protect humans from machine makers,” he adds.

Europe thus hopes to lead the way once again, as it did with its law on personal data (the GDPR).

Canada is working on the subject, and the White House recently released a “Blueprint for an AI Bill of Rights.”

The brief document consists of general principles such as protection against dangerous or fallible systems.

"It's like a law on a refrigerator"

Given the political stalemate in the US Congress, this should not translate into new legislation before 2024. But “many authorities can already regulate AI” using existing laws, on discrimination for example, notes Sean McGregor.

He cites the example of the State of New York, which at the end of 2021 adopted a law prohibiting the use of automated screening software for hiring unless it has been audited.


“AI is easier to regulate than data privacy,” notes the expert, because personal information is very valuable to digital platforms and advertisers.

“Faulty AI, on the other hand, does not bring in profits.”

Regulators must be careful not to stifle innovation, however.

In particular, AI has become a valuable ally of doctors.

Google's mammography technology, for example, reduces misdiagnoses (false positives or false negatives) of breast cancer by 6 to 9 percent, according to a 2020 study.

“It's like a law on a refrigerator,” reacts Sean McGregor. “No need to give the technical specifications; you just say that it must be safe.”
