In his testimony before the Senate earlier this year, Facebook CEO Mark Zuckerberg said he would use artificial intelligence programs to detect false news and distinguish it from reliable information on the social networking platform, an idea that lawmakers seemed to accept even though it is far from reality, according to an article in The New York Times.

The article's authors, Gary Marcus, a professor of psychology and neuroscience, and Ernest Davis, a professor of computer science, said that the kind of artificial intelligence Zuckerberg is counting on works today at the level of keywords: it detects patterns of words and looks for statistical links between those words and their sources. Statistically, this can be useful, since some language patterns are indeed associated with dubious stories.

For example, for a long time most articles containing the words Brad, Angelina and Divorce were dubious tabloid stories, and some sources are statistically more or less reliable: the same claim carries more weight when it appears in The Wall Street Journal than when it appears in The National Enquirer.
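To make that approach concrete, here is a minimal sketch, in Python, of the kind of keyword-and-source scoring the authors describe; the keyword list, reliability values, weights and function names are hypothetical illustrations, not anything taken from the article or from Facebook's systems.

```python
# Hypothetical sketch: score a story as "suspect" from keyword statistics and a
# source-reliability prior. All values below are illustrative, not real data.

SUSPECT_KEYWORDS = {"brad", "angelina", "divorce"}   # words historically tied to dubious stories
SOURCE_RELIABILITY = {                               # illustrative priors per source
    "wsj.com": 0.9,
    "nationalenquirer.com": 0.2,
}

def suspicion_score(text: str, source: str) -> float:
    """Crude score in [0, 1]; higher means more likely to be a dubious story."""
    words = set(text.lower().split())
    keyword_hits = len(words & SUSPECT_KEYWORDS) / len(SUSPECT_KEYWORDS)
    source_prior = 1.0 - SOURCE_RELIABILITY.get(source, 0.5)   # unknown sources get a neutral prior
    return 0.5 * keyword_hits + 0.5 * source_prior

# The same (true) story scores very differently depending only on its source,
# which is exactly the weakness the authors point out.
print(suspicion_score("Brad and Angelina heading for divorce", "nationalenquirer.com"))  # 0.9
print(suspicion_score("Brad and Angelina heading for divorce", "wsj.com"))               # 0.55
```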

But none of these kinds of associations can reliably separate truth from fakery. After all, the film stars Brad Pitt and Angelina Jolie really did split up. Keywords that are helpful one day may mislead you the next, according to the authors.

The authors argue that causal relationships are where modern machine learning techniques begin to falter. To label an article as misleading, an artificial intelligence program would have to grasp causal relationships of the form "what follows from what?", recognize that a writer has reached a conclusion by wrongly connecting pieces of otherwise correct information, know how to search for relevant information that is missing from the facts presented in the article (or post or tweet), and understand the multiple perspectives presented in the story.

The authors say they are not aware of any artificial intelligence system, or even a prototype, that can weigh the various facts bearing on a particular story, let alone sort out its underlying ambiguities.

Even the most modern artificial intelligence systems that deal with language are built around a different class of problems. Translation programs, for example, are concerned first and foremost with matching: which French term best corresponds to a given English term? But recognizing that someone is hinting, through a chain of fact-based reasoning, at an incorrect conclusion is not a simple question that can be checked against a database of facts.
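As a rough illustration of that contrast, a matching-style system can be caricatured as a lookup table; the word pairs below are invented for the example and stand in for associations a real translation system would learn from data.

```python
# Toy caricature of translation-as-matching: look up the French term most strongly
# paired with an English term. The pairs are invented for illustration only.
EN_TO_FR = {"news": "nouvelles", "false": "fausses", "story": "histoire"}

def translate_term(term: str) -> str:
    # Fall back to the original term when no pairing is known.
    return EN_TO_FR.get(term.lower(), term)

print(translate_term("news"))    # -> "nouvelles"
print(translate_term("hinted"))  # -> "hinted" (no match); and no lookup of this kind can
                                 #    tell whether a hinted conclusion follows from the facts
```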

Current artificial intelligence systems built to comprehend news stories are also very limited. They may be able to scan the text of a story and answer a question whose answer is stated directly and explicitly in it, but such systems rarely go further, lacking any robust mechanism for drawing inferences or for connecting with a broader body of knowledge.
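A minimal sketch of such a surface-level reader, assuming a simple word-overlap heuristic (our illustration, not a system described in the article), shows both what it can and cannot do.

```python
# Hypothetical sketch of surface-level "reading comprehension": answer a question by
# returning the sentence of the story that shares the most words with it.

def answer(question: str, story: str) -> str:
    q_words = set(question.lower().replace("?", "").split())
    sentences = [s.strip() for s in story.split(".") if s.strip()]
    # Pick the sentence with the largest word overlap with the question.
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

story = ("The senators questioned Zuckerberg in April. "
         "He said artificial intelligence tools would flag false news.")

print(answer("When did the senators question Zuckerberg?", story))
# -> "The senators questioned Zuckerberg in April": stated explicitly, so overlap finds it.
print(answer("Do such tools exist today?", story))
# -> returns whichever sentence happens to overlap most; the question needs inference
#    and outside knowledge that word overlap cannot provide.
```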

The authors conclude that getting to where Zuckerberg wants to go will require fundamentally different artificial intelligence models, built not to detect statistical trends but to represent ideas and the relationships between them. Only then will such promises about artificial intelligence become reality rather than science fiction.