To cope with a colossal flow of content, YouTube wants to use artificial intelligence to scan every video and determine whether children should be able to see it on the platform.

If a video is flagged, viewers may be asked for proof of their age.

This is a real problem for many parents: children quietly watching cartoons may, among the videos suggested at the end of the program, come across content that is not intended for them at all, such as news footage or certain music videos.

While such videos have to be flagged by hand by moderators today, that could change: videos inappropriate for kids will be filtered out automatically on YouTube.

Application of a European directive

While the platform receives more than 500 hours of video every minute, the American streaming giant will use artificial intelligence to scan each piece of content and decide whether it can be shown to everyone.

If not, you will have to log in to verify that you are over 18 years old.

In some cases, YouTube may even ask for proof, such as a bank card or an identity document.

This has not been required so far.

This is not a sudden awakening on YouTube's part.

It turns out that a European directive, the Audiovisual Media Services Directive (AVMSD), obliges platforms to protect minors on the internet.

This directive was notably transposed into the well-known law that will require proof of age on pornographic sites.

Humans as a last resort

Can we be sure that artificial intelligence will not censor totally benign content?

There will inevitably be errors, and YouTube is aware of this.

That is why creators will be notified and will be able to appeal.

At that point, it will no longer be machines but humans who decide whether the video can be watched by children.