iBorderCtrl is the name of a completed research project funded by the EU.

Nobody knows how the money was used.

It's classified.

"Trade secrets" is the motivation.

Even though the project was financed with tax money.

What is known is that it was intended to expose liars at border controls with the help of artificial intelligence.

A virtual border guard asks you questions and assesses whether your answers are credible.

This was tested at three border crossings in the EU.

At one of them, a journalist managed to get herself selected as a test subject.

Four of her sixteen answers were judged to be lies, even though she answered all questions honestly.

According to German MEP Patrick Breyer (Pirate Party), this shows that the technology is unreliable and its use dangerous.

- There is a great risk that people will be wrongly accused.

The United States tested this a couple of years ago and shut it down after two years.

I'm sure they had their reasons for shutting it down.

Research agency taken to court

Breyer is outraged at the secrecy that is allowed to surround EU-funded research projects like this.

He questions why tax money should fund research that taxpayers are not allowed to see, and has therefore sued the EU research agency REA before the European Court of Justice.

- I am requesting access to the documents and a legal review of whether this is compatible with fundamental rights in the EU.

The outcome will set a precedent for how the EU handles its future research projects.

Parts of the technology behind iBorderCtrl continue to be explored in the ongoing Tresspass project.

There is the "stress revealer", a kind of lie detector and a risk assessment which, among other things, is based on whether you paid for the ticket with cash.

Regulation of AI

The European Commission has recently proposed regulation of the use of AI, under which applications such as this are classified as "high risk".

But the responsible commissioner, Margrethe Vestager, does not see it as a problem that such projects are financed at the same time as the Commission raises warning flags about them.

- On the contrary, it reinforces our message: embrace artificial intelligence, but be careful when it is used in situations where bias can be built in and human rights are put at risk, she tells SVT.