
Google already has its answer to Dall·E 2, the engine capable of drawing anything described in text.

It is called Imagen and, like Dall·E, it can imitate different pictorial styles and produce strikingly realistic results in response to almost any description.

A dog in a shed made of sushi? A raccoon playing guitar on top of a mountain? A cactus with a straw hat and sunglasses in the desert?

No problem.

The engineers responsible for Imagen claim that it draws with greater precision and more realism than other engines such as VQ-GAN, LDM or Dall·E 2 itself. To prove it, they built a battery of tests with more than 200 descriptive texts and asked the different engines to draw them.

A panel of human judges then rated the results not only on accuracy and clarity, but also on how well each engine handled complex descriptions and how the different elements were arranged in the image.
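As a rough illustration of how such a side-by-side evaluation can be scored, the minimal sketch below tallies human judges' preferences between two engines over a list of prompts. The data format, prompts and function name are hypothetical, for illustration only; they are not taken from either team's published evaluation code.

```python
from collections import Counter

def tally_preferences(votes):
    """Count which engine the human judges preferred across all prompts.

    `votes` maps a prompt to a list of per-judge choices:
    "A", "B", or "tie". Returns each option's share of all judgments.
    (Hypothetical format, for illustration only.)
    """
    counts = Counter()
    total = 0
    for prompt, judgments in votes.items():
        for choice in judgments:
            counts[choice] += 1
            total += 1
    return {option: counts[option] / total for option in ("A", "B", "tie")}

# Toy example: three prompts, three judges each (made-up data).
example_votes = {
    "A raccoon playing guitar on top of a mountain": ["A", "A", "tie"],
    "A cactus with a straw hat and sunglasses": ["A", "B", "A"],
    "A dog in a shed made of sushi": ["B", "A", "A"],
}

print(tally_preferences(example_votes))
# -> roughly {'A': 0.67, 'B': 0.22, 'tie': 0.11}
```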

These types of text-to-image engines have become the new frontier for artificial intelligence researchers.

They not only have to understand natural language, a complex task in itself, but also work out what image counts as an acceptable result.

They are not perfect.

The examples selected by both the Dall·E 2 and Imagen teams omit the many attempts in which the AI failed to understand the description correctly, but the systems are becoming more and more accurate.

This, although it may not seem like it, is good news for artists and designers, who will soon have more advanced tools with which to make the first sketches of an illustration, for example, or to explore different creative avenues without investing a great deal of time and effort.

But it also opens new ethical fronts that can have a profound impact on society.

These engines can create highly realistic images that could easily pass for real photographs, and the researchers fear they could be used in disinformation campaigns.

"The potential risks of misuse pose problems, so for now we have decided not to publish code or make a public demonstration," explain those responsible for Image.

The creators of Dall·E 2, the OpenAI lab, have opted for a similar policy.

Access to its engine is currently restricted to a small group of academics, engineers and researchers to ensure that it is not used for harmful purposes.
