
Artificial intelligence: "This is not a TÜV like the one for cars"

2019-11-18T12:35:01.227Z

Will the newly launched "AI Observatory" keep the power and dangers of artificial intelligence in check? That would be necessary in any case, says the responsible State Secretary Björn Böhning.



Robots already operate on humans, cars drive themselves, algorithms grant loans. How much power should artificial intelligence have? What happens when these systems make mistakes, cause accidents, discriminate against people or monitor them? Promoting the technology while upholding ethics and data protection: that is what the Ministry of Labor wants to do, explains State Secretary Björn Böhning.

ZEIT ONLINE: Last week it was reported that a TÜV for artificial intelligence is coming. What is meant by that?

Björn Böhning: In 2018, in the Federal Government's AI strategy, we set out our intention to establish a dedicated observatory for applications of artificial intelligence. Its task: to observe and analyze the technological development of AI and its consequences in a structured process. We are starting now.

ZEIT ONLINE: How exactly?

Björn Böhning is State Secretary in the Federal Ministry of Labor and Social Affairs, where the AI Observatory is located. © J. Konrad Schmidt / BMAS

Böhning: We will look at where the opportunities and potential of artificial intelligence lie, where we can expand them and how we can speed up the transfer, for example to companies. One important question: what risks does this technology carry, and how do we deal with them in the future?

ZEIT ONLINE: Many people imagine a TÜV to be a testing body that certifies something, just like for cars. In fact, you are starting by looking at what regulation AI would need and what a suitable framework would be. Is that correct?

Böhning: We are proceeding in several steps. First, we examine how far artificial intelligence has spread in this country, what it can already do and where it is used. From that we derive proposals for a regulatory framework. This is not a TÜV like the one for cars. But when I look, for example, at the Data Ethics Commission's proposal for a risk pyramid, which provides for different levels of regulation, I increasingly come to the conclusion that we will need something like that. An AI TÜV with a corresponding institution would go in that direction.

ZEIT ONLINE: It is said that you are starting with eight employees. That does not sound like much.

Böhning: The eight people are being hired these days. Federal Minister Hubertus Heil will open the observatory early next year.

ZEIT ONLINE: How long will it take for a kind of German testing laboratory to emerge that sorts concrete AI applications, in companies, in voice-controlled devices, in nursing and medicine, in traffic and ultimately in all areas of the economy and society, into different risk classes or even certifies them?

Böhning: We do not want to set up a new authority from one day to the next. In recent years, ethical guidelines for AI, such as those of the OECD or the EU, have focused on questions of transparency, robustness and non-discrimination. From that altitude we now need to come a long way down to the ground and answer what these guidelines mean in concrete terms, for example for integrating the technology into the working world.

ZEIT ONLINE: But what does that mean in concrete terms? Where do you begin to think about, for example, how to make an online retailer's artificial intelligence more transparent and what regulation of it would look like?

Böhning: Well, such developments are not new. In other areas we know institutions that can demand transparency about what technology is being used, in the energy market, for example, or in telecommunications. With artificial intelligence, however, the point is that it is a fluid technology: it adapts, it evolves, it updates itself. That means we will not be able to write an AI law that defines in advance every state artificial intelligence can take. What we can do, however, is pay attention to questions such as: do employees have a say in how AI is used in the workplace, and in which areas?

ZEIT ONLINE: For example?

Böhning: If AI is used on an automobile production line, works councils could negotiate works agreements on exactly what happens there. One could then agree, for example, that monitoring the behavior or performance of each individual worker at each workstation is excluded. We know from our projects that such co-determination rights and agreements lead to greater trust and to greater use and acceptance of AI in the workplace. And that is our goal.

Source: zeit
