A startup originating from the University of Tokyo has completed development of Japan's largest domestically produced generative AI.

While American companies lead the field, the company aims to compete with a strategy specializing in the Japanese language.

The announcement came from Eliza, a startup originating from the University of Tokyo, which held a press conference on the 12th to outline the system.

The generative AI has 70 billion parameters, a figure that indicates the scale of training of the underlying large language model, making it the largest domestically produced generative AI any company has developed to date.



Using openly published open-source technology, the company was able to develop the model in a short period beginning in December last year, drawing on data centers including ABCI, which is operated by the National Institute of Advanced Industrial Science and Technology.



According to the company, it offers Japanese-language processing ability on par with the generative AI of leading American companies.



In addition to making the chat-style generative AI available to the general public, the company plans to gradually begin providing it to companies and local governments.

Yuya Soneoka, President of Eliza, said, "As of the end of last year, Japan's AI models were not on par with global models such as those of OpenAI and Google. With this, we can finally stand at the starting line, and I want to demonstrate Japan's presence."

Background to the growing competition in generative AI development

Behind the intensifying competition in generative AI development is a race to win over client companies that will adopt the technology in their businesses and services.



As Microsoft, which partners with the U.S. firm OpenAI at the forefront of development, and Google step up their push into the Japanese domestic market, Japanese companies developing domestically produced generative AI are pursuing a strategy of keeping costs down while focusing on the Japanese language and specific fields.



Among them, NTT and NEC will each launch services for businesses this month.



The large language model that underpins generative AI has the characteristic that processing power rises with the number of parameters, but larger models also require large data centers and drive up costs significantly.



The number of parameters in "GPT-3", released by OpenAI four years ago, was 175 billion, and the latest model, "GPT-4", is believed to exceed 1 trillion.



Google's Gemini, whose new model was released last week, is also thought to have a similar number of parameters.



By contrast, NTT keeps its models to 600 million and 7 billion parameters, and NEC to 13 billion, which the companies say reduces deployment and operating costs for client companies.



The generative AI developed by Eliza, meanwhile, has 70 billion parameters; while specializing in Japanese, the company has set out a strategy centered on a large-scale, general-purpose model with high processing power.



In this race to win customers, client companies that adopt generative AI will demand gains in efficiency and profitability in their own operations and services, and the ability to meet those needs is likely to have a major impact on the outcome of the competition.