Three scientists who specialize in deep learning, one of the most advanced branches of artificial intelligence, have announced progress on two new techniques, "self-supervision" and "capsule networks", capable of detecting deliberate or accidental manipulation and falsification in data of any size, and of restoring that data to its proper state almost instantly as it moves through the information systems and networks of institutions and companies. The work lays the foundation for a new, advanced and high-performance generation of information systems built on deep learning.

Artificial intelligence

This came during the 34th annual conference of the Association for the Advancement of Artificial Intelligence (AAAI), which concluded its work yesterday in New York, USA. The announcement was broadcast in a live video on the conference website, followed by "Emirates Today".

It is noteworthy that the three are among the world's foremost researchers in "deep learning", and last year they shared the prestigious Turing Award for their body of work in computer science. They are: Yoshua Bengio, professor at the Mila institute in Canada; Geoffrey Hinton, professor at the University of Toronto and the most prominent member of Google's artificial intelligence team; and Yann LeCun, head of the artificial intelligence team at Facebook.

Problems

The scientists said that despite the great progress deep learning techniques have made in several fields since they were first put into practice, they still suffer from problems that are hard to solve, the most prominent being "adversarial examples" and "lack of proper understanding". The first means that deep learning algorithms can be deceived and confused by adding data that acts as a disturbance, or noise, to the original data being analysed, understood and perceived. The second refers to the deliberate "blinding" of the algorithms by concealing part of the original data, which causes them to deviate from a proper understanding while drawing inferences and producing results.
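The "adversarial examples" problem described above can be illustrated with a toy sketch. The numbers and classifier here are hypothetical, not from the article: a fixed linear classifier scores an input positively, and a tiny, targeted perturbation of the input, in the spirit of the fast-gradient-sign method, is enough to flip its decision.

```python
import numpy as np

# Hypothetical toy illustration (weights and inputs are invented):
# a fixed linear classifier accepts an input, then a small adversarial
# perturbation makes the same classifier reject it.

w = np.array([1.0, -0.5, 2.0, -1.5])   # assumed classifier weights
x = 0.1 * w                             # an input the classifier accepts
score = float(w @ x)                    # positive score -> class "accept"

# For a linear score w.x the gradient with respect to x is w, so shifting
# x by -eps * sign(w) lowers the score the most per unit of change in
# each feature (the "fast gradient sign" idea).
eps = 0.3
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv)            # score is now negative -> flipped

print(score > 0, adv_score < 0)
```

The perturbation changes each feature by at most 0.3, yet the decision reverses, which is exactly the kind of deception the scientists describe.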

In this context, Dr. Yann LeCun said these two problems were the focus of the discussion, which addressed many of the criticisms levelled at deep learning in recent years. The discussions made clear that both problems affect the deep learning techniques and tools currently available, including recurrent neural networks, known as "RNNs", and convolutional neural networks, known as "CNNs".

He added that, because of these two problems, the science of deep learning and the scientists who champion it have faced continuous waves of criticism, attack and skepticism over the past nine years from researchers working in other branches of artificial intelligence.

Repair

The three scientists emphasized that deep learning is now able to repair itself and overcome these problems through two new, interconnected techniques, "self-supervision" and "capsule networks". This development means that anything added to big data in order to confuse the deep learning systems that extrapolate, explore and analyse it will be identified, exposed and isolated, and anything hidden, deliberately or accidentally, to push deep learning systems into faulty operation and misreading of the data will be recognised and regenerated with high accuracy by components and models dedicated to that task.
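The core idea behind self-supervision, recovering hidden parts of the data from the visible parts, can be sketched with a minimal, invented example. Here ordinary least squares stands in for a deep network, and the data, masking scheme and variable names are assumptions for illustration only:

```python
import numpy as np

# Hypothetical sketch of the self-supervised idea: learn to predict a
# hidden feature from the visible ones, then use that model to
# regenerate values that were masked out.

rng = np.random.default_rng(1)
a, b = rng.normal(size=(2, 100))
data = np.column_stack([a, b, a + b])   # third column depends on the first two

# Pretend the third column of the last 20 rows was concealed ("masked").
visible, masked = data[:80], data[80:]

# Fit a predictor of column 2 from columns 0 and 1 on the visible rows
# (a linear model here; a deep network in practice).
coef, *_ = np.linalg.lstsq(visible[:, :2], visible[:, 2], rcond=None)
recovered = masked[:, :2] @ coef

err = float(np.max(np.abs(recovered - masked[:, 2])))
print(err)   # near zero: the concealed values are regenerated
```

Because the visible data fully determines the hidden column, the model regenerates the concealed values almost exactly, which mirrors the "recognised and regenerated with high accuracy" claim at a toy scale.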

"Deep learning" and its uses

Deep learning is a branch of artificial intelligence and machine learning concerned with theories and algorithms that let a machine learn on its own by simulating the way neurons work in the human brain. The term "deep" is used because the neural networks in this type of artificial intelligence have multiple deep layers that enable learning, reasoning and prediction, extracting highly abstract representations from huge data sets through linear and non-linear transformations. This means that any problem that requires "thinking" is, in principle, a problem deep learning can be taught to solve, even with very diverse, unstructured and interconnected data, and the more data deep learning algorithms are trained on, the better they perform.
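The "multiple deep layers" of linear and non-linear transformations mentioned above can be sketched in a few lines. The weights here are random placeholders, not a trained model; the sketch only shows the layered structure:

```python
import numpy as np

# Minimal sketch of a deep network's structure: each layer applies a
# linear transformation followed by a non-linear one (ReLU here).
# Weights are random placeholders, not learned values.

rng = np.random.default_rng(42)

def layer(x, w, b):
    # Linear step (x @ w + b), then the non-linearity.
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 16))             # one input with 16 features
w1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

h = layer(x, w1, b1)                     # first layer: 16 -> 8 features
out = layer(h, w2, b2)                   # second layer: 8 -> 4 features

print(out.shape)    # (1, 4)
```

Stacking more such layers is what makes the network "deep"; training then adjusts the weights so the final layer's output solves the task at hand.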

Deep learning techniques are currently used in eight areas: digital voice assistants, translation, autonomous-vehicle operating systems, service and chat platforms, image colorization, face recognition, medicine, and marketing and entertainment.