
January 19, 2020. "The use of facial recognition technologies by the public or private sector should be prohibited for a period of time (three to five years), during which a methodology for assessing the impact of these technologies, and possible risk-mitigation measures, can be identified and developed."

It is the most significant passage, and the one most widely highlighted by the media, in the draft of the European Commission's 'White Paper' on artificial intelligence.

The text, which is expected to be presented in late February, is the result of a public discussion on how to address the challenges posed by this technology as a whole.

The High Level Group on AI
In a report presented in June 2019, the European Commission's high-level expert group on AI (composed of 52 experts, including Luciano Floridi, Stefano Quintarelli and Andrea Renda) indicated that the EU should take seriously the need for rules to protect against the negative impact of biometric identification (such as facial recognition), autonomous lethal weapons systems (such as military robots) and the profiling of children with artificial intelligence systems, as well as the impact of AI on fundamental human rights.

The draft document that has come to light (published by Euractiv) is made up of 18 pages. The full version, which the Commission is expected to release in late February, presents five regulatory options for artificial intelligence:
1. Voluntary labeling
2. Sectoral requirements for public administration and facial recognition
3. Mandatory requirements for high risk applications
4. Security and responsibility
5. Governance
The Commission is likely to formally adopt a 'mix' of options 3, 4 and 5, Euractiv notes.

The options for the new rules
Here is what they entail. On point 3, the document states that "the risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake", for example health care, transport, policing and the judiciary.

Security and accountability, including cyber threats
Point 4, on the other hand, concerns the safety and liability issues that could arise from the future development of artificial intelligence, and provides for "targeted changes" to EU legislation such as the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive and the rules on product liability. Cyber threats, personal security, privacy and personal data protection risks should also be identified.

On the liability front, "adjustments may be needed to clarify the responsibility of AI developers and to distinguish it from the manufacturer's responsibilities". First, it will be necessary to determine whether artificial intelligence systems should be considered "products".

Finally, as regards governance, the Commission states that an effective and robust public oversight system, with the involvement of national authorities and cooperation between Member States, is essential.

The Vatican and AI ethics
"RenAIssance. For a Humanistic Artificial Intelligence": the Pontifical Academy for Life also takes care of it, with a conference that, in February, will see the participation of Brad Smith, president of Microsoft, John Kelly III, deputy executive director of IBM , the President of the European Parliament David Sassoli, the director general of the Fao Qu Dongyu.

On this occasion, Microsoft and IBM will sign a 'Call for Ethics' to involve companies in assessing the effects of technologies connected to artificial intelligence, the risks they entail and possible regulatory paths, including at the educational level.

"We are engaged in this sector", explains Monsignor Vincenzo Paglia, president of the Pontifical Academy for Life, because "with the development of Artificial Intelligence the risk is that access and processing will become selectively reserved for large economic holding companies, public security systems, to the actors of political governance. In other words, equity is at stake in the search for information or in maintaining contact with others, if the sophistication of the services will be automatically taken away from those who do not belong to privileged groups or does not have any special skills. "