On July 9, the cross-modal general artificial intelligence (AI) platform "Zidong Taichu", developed by the Institute of Automation of the Chinese Academy of Sciences, was officially released to the public. Built around a multi-modal large model and running on a fully domestically developed full-stack software and hardware platform, it can support AI applications across all scenarios.

  On the same day, the Institute of Automation of the Chinese Academy of Sciences also demonstrated "Xiaochu", a virtual human built on "Zidong Taichu". Through a human-machine dialogue demonstration of the general-purpose multi-modal large model, it showed the mutual conversion and generation between different modalities.

  In the human-computer dialogue, the virtual human "Xiaochu", making its first public appearance, responded fluently, covering functions such as video description, intelligent question answering, image retrieval, poetry writing, Chinese text continuation, bilingual translation, and speech recognition.

(Reporter: Sun Zifa; Production: Wang Jiayi; Video source: Institute of Automation, Chinese Academy of Sciences)

Editor in charge: [Liu Xian]