The saturation of the number of components that can be built into chips, known as the 'electronic bottleneck', will soon limit improvements in the performance of artificial neural networks.


  • Artificial intelligence applications are supported by artificial neural networks (ANNs), which are very energy-intensive, according to our partner The Conversation.

  • Research is therefore being carried out to make these ANNs more “economical” and more efficient.

  • This analysis was carried out by Roberto Morandotti, professor of nonlinear optics at the National Institute for Scientific Research (INRS).

We unlock our phones with facial recognition, our “smart” refrigerators help us manage our food stocks, and our cars will soon be driving themselves.

Our everyday objects are constantly “learning”.

But the amount of knowledge they can accumulate is limited by current technology.

They need new “neurons” that perform better and consume less energy.

Science is in the process of finding them.

Applications that enable machine learning, the basis of artificial intelligence, are supported by artificial neural networks (ANNs).

ANNs are organized collections of interconnected artificial neurons.

They are designed to perform complex operations and solve difficult problems through a learning mechanism similar to the way the brain works.

A bottleneck

These networks have contributed to the advent of the Internet of Things or “connected objects” and have revolutionized the way we consume many services in the financial sector, transport, telecommunications and healthcare.

Nowadays, artificial neural networks are mainly implemented in software that requires ever more computing power.

This power is supplied by gigantic servers, which are very energy-intensive and have a considerable carbon footprint.

In addition, the number of electronic components that can be integrated into artificial neural networks will soon reach a saturation point, limiting the possibilities of improving their performance.

This phenomenon, known as a “bottleneck”, causes significant transmission delays that adversely affect real-time mobile services.

At the National Institute for Scientific Research - Energy Materials Telecommunications (INRS-EMT), the nonlinear photonics research group that I lead is trying to develop intelligent, low-energy micro-photonic devices (which use photons rather than electrons) in order to increase the capacity of current artificial neural networks.

Harness the light

Such devices, powered by learning algorithms, are made of integrated photonic components, which exploit the intrinsic properties of light to achieve extremely high performance (especially speed), with a small environmental footprint.

In a recent study published in the journal Nature, carried out in collaboration with Professor David J. Moss, director of the Center for Optical Sciences at Swinburne University of Technology, we tested a very powerful type of artificial neural network called a convolutional neural network (CNN).

This type of network can perform 10 trillion operations per second.

Artificial neural networks detect the presence of simple patterns in an image, such as shape or color, and gradually identify the content of the entire image by association. Here, a man uses a facial recognition system to unlock the door of an office building. © Shutterstock (via The Conversation)

Convolutional neural networks work on the same principle as the visual cortex of mammals.

They make it possible to detect the presence of simple patterns in an image, such as shape or color, and to gradually identify the content of the entire image by association and cross-checking.

This type of network is used in particular for facial recognition and computer vision applications, but also for voice recognition and medical diagnostics.

Convolutional networks break an image down into dominant features, such as edges, colors, and gradient orientations (a step called “filtering”), which are easier to process.

Then, these features are assigned to successive layers of the network (called hidden layers), building up the acquisition of the image in stages of “convolution”.
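The “filtering” step above can be illustrated in plain software (this is not the photonic hardware from the study, just a minimal sketch of what a single convolution filter does). Here a hypothetical vertical-edge filter slides over a tiny synthetic image and responds strongly wherever brightness changes from left to right:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the weighted sum of one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A classic vertical-edge filter (Sobel): negative weights on the left,
# positive on the right, so it fires where brightness increases rightward.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Tiny test image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

feature_map = convolve2d(image, sobel_x)
# The feature map is large only near the dark-to-bright boundary.
```

In a real CNN, many such filters are learned rather than hand-written, and their outputs feed the hidden layers described above.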

A breakthrough for autonomous vehicles

My collaborators and I have developed a convolutional neural network capable of processing images of up to 250,000 pixels, at a speed high enough for facial recognition applications.

Comparative analyses have shown successful recognition of digital images with 88% accuracy.

To achieve this result, our network used a layer of 10 fully connected neurons that we had tested previously.

The scalability of this device and its compatibility with standard electronic hardware offer interesting prospects for big data training in real-time, very high-speed applications, such as autonomous vehicles and real-time video recognition.

Like the human brain

My team and I develop our devices based on recurrent neural network architectures.

These networks, which are inspired by the circuits of the brain (made up of neurons and synapses), have “short-term” memory essential for processing dynamic sequences of data (for example, to improve the performance of telecommunications channels).

[How it works] The human brain and neurons © CEA Recherche

A fundamental aspect of this system is that the number of neurons needed in the network is reduced thanks to time-division multiplexing (interleaving multiple signals into distinct time slots on a single channel).

This allows us to create virtual neurons from a single physical neuron.

The number of virtual neurons created varies according to the applications.

This technique reduces the design complexity and the number of components required compared with other artificial neural networks.
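The idea of virtual neurons can be sketched in software in the style of time-multiplexed reservoir computing (a common way this trick is realized; the number of virtual neurons, the input mask, and the feedback strength below are all assumed values, not parameters from the study). One nonlinear node is applied repeatedly, and each time slot behaves as a separate virtual neuron coupled to the previous slot through delayed feedback:

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIRTUAL = 20   # virtual neurons per physical neuron (assumed value)
FEEDBACK = 0.5   # delayed-feedback strength (assumed value)

# Fixed random input mask: one weight per time slot, so each virtual
# neuron sees the same input sample scaled differently.
mask = rng.uniform(-1, 1, N_VIRTUAL)

def step(u, state):
    """Process one input sample u through the time-multiplexed loop.

    The single nonlinear node (tanh here) fires N_VIRTUAL times in
    sequence; each firing is one virtual neuron, coupled to the
    previous slot's output via the feedback term."""
    new_state = np.empty(N_VIRTUAL)
    prev = state[-1]  # delayed feedback wraps around from the last slot
    for k in range(N_VIRTUAL):
        new_state[k] = np.tanh(mask[k] * u + FEEDBACK * prev)
        prev = new_state[k]
    return new_state

state = np.zeros(N_VIRTUAL)
for u in [0.2, -0.5, 0.9]:  # a short input sequence
    state = step(u, state)
```

The state carries a fading trace of past inputs, which is the “short-term” memory that makes such networks suited to dynamic data sequences; a trained linear readout over the virtual-neuron states would then produce the output.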

The photonic recurrent neural networks studied at INRS can potentially accomplish a wider range of complex and time-consuming learning tasks such as speech recognition, financial forecasting and medical diagnosis.


My research team is committed to advancing the state of knowledge on artificial neural networks for the advent of emerging technologies, such as 6G, which will require the transmission and processing of data at ultra-high rates that are inconceivable with current networks.

The last impact of our research, and not the least, is that it will significantly reduce the carbon footprint of deep learning applications and their effect on the environment.


This analysis was written by Roberto Morandotti, professor of nonlinear optics at the National Institute for Scientific Research (INRS).

The original article was published on The Conversation website.
