November 30 marks one year since the release of ChatGPT, a generative AI that produces answers in natural sentences when you type in a question. While the use of generative AI is rapidly expanding in companies and schools around the world, a major challenge is how to respond to the risks of threatening human employment and spreading misinformation.

table of contents

  • One year after the release of ChatGPT

  • Market size: Expected to be nearly nine times larger by 2032

  • OpenAI and Altman's dismissal

  • ◆ Generative AI: Various risks pointed out

  • ◇Regulatory trends in each country regarding AI


One year after the release of ChatGPT

On November 30, 2022, the American venture company OpenAI released the generative AI ChatGPT.

When you enter a question, it produces an answer in natural sentences, as if written by a person, and it can also generate programming code in a short time, so its use has rapidly expanded in companies, schools, medical care, and government offices around the world.

In addition, companies such as the American IT giants Google and Amazon, and Meta, formerly Facebook, have begun developing generative AI services.

In October, the American research firm Gartner predicted that more than 30% of companies worldwide will have introduced generative AI software by 2026.

On the other hand, generative AI has negative aspects, such as the risk of threatening employment, spreading misinformation, and being used for fraud.

Governments are considering regulations to minimize these downsides, and how to respond is a major challenge for the future.

Market size: Expected to be nearly nine times larger by 2032

According to Precedence Research, a research firm based in Canada and India, generative AI is already being used in a wide range of fields such as finance, robotics, and healthcare, and the global market size in 2022 is estimated at $13.7 billion, or about 2 trillion Japanese yen.

From there, the global market is expected to grow by an average of about 27% a year, reaching more than $118 billion, or about 17 trillion yen, by 2032, nearly nine times its current size.

Use is also spreading among American companies. According to data released in November by the American research company "Demand Surge", more than 80% of the Fortune 500, the top 500 U.S. companies ranked by sales, are already using ChatGPT.


OpenAI and Altman's dismissal

OpenAI is a venture company headquartered in San Francisco, California, that develops AI (artificial intelligence). It was founded in 2015 by Sam Altman and entrepreneur Elon Musk as a non-profit organization, with the goal of developing safe AI that broadly benefits humanity.

Then, in 2019, it set up a for-profit subsidiary under the non-profit and entered into a strategic alliance with the American IT giant Microsoft, which invested in the company. Microsoft's investment in OpenAI has reportedly reached $13 billion so far, or about 1.9 trillion Japanese yen.

The company has expanded its business rapidly since launching ChatGPT on November 30 last year. At launch it had just over 300 employees; that number has now more than doubled to about 770. Having outgrown its current office, it has decided to lease two new office buildings in San Francisco, more than doubling its office space.

On the other hand, the rapid expansion of the business caused friction within the company. On November 17, CEO Sam Altman was abruptly dismissed by the board of directors.

American media report that disagreements intensified between Mr. Altman, who sought to rapidly expand generative AI as a business, and board members who emphasized AI safety, leading to the decision to dismiss him.

After the dismissal, major investors reportedly lobbied the company for Mr. Altman's return. Microsoft revealed it would hire Mr. Altman, and more than 90% of all employees signed a letter saying that if he did not return, they would leave the company and move to Microsoft, throwing the company into turmoil.

On November 22, it was decided that Mr. Altman would return as CEO of OpenAI. Whether he can normalize the company's management and accelerate development while keeping generative AI safe is a major challenge for Mr. Altman and OpenAI.

◆ Generative AI: Various risks pointed out

Various risks have been pointed out regarding generative AI.

[Spread of disinformation]
The risk of the spread of false information and fake images due to the use of generative AI is higher than ever.

In May, a fake image claiming that an explosion had occurred near the U.S. Department of Defense circulated online. The image is believed to have been created by AI.
It was also posted by an account impersonating Bloomberg, the American media company that distributes global financial news, and the Dow Jones Industrial Average on the New York stock market briefly fell by about 80 points.

In the United States, just before former President Trump was indicted, fake images of him being dragged away by police officers spread on what was then Twitter, and concerns about malicious use are growing.

[Impact on the election]
The impact on elections is a particular concern.
Fake images and videos used in election ads could influence voters' voting behavior.

[Copyright infringement]
There have also been troubles over copyright infringement.
In many cases, it is ambiguous where the image data to be trained in advance by the image generation AI software is collected from, or there are cases where permission is not obtained to use it, and in the United States, if your art is learned by AI without permission and the image is generated, There is also a movement for artists to file a class-action lawsuit against a company that operates AI software on the grounds of copyright infringement.

[Privacy and discrimination]
It has also been pointed out that generative AI trained on large amounts of data risks infringing on privacy, for example by identifying individuals through combinations of personal information, and that it may promote discrimination by learning content that contains discrimination or prejudice.

[Impact on employment]
In the United States, there are also concerns that generative AI will threaten jobs by doing work in place of humans. Some people, such as copywriters and counselors, have already lost their jobs.

A labor union of Hollywood screenwriters and others went on strike from early May to late September, demanding among other things that their work not be encroached on by AI.

Challenger, Gray & Christmas, an American outplacement firm, compiled a survey finding that thousands of people had lost their jobs because of AI by October.

◇Regulatory trends in each country regarding AI

Here are the moves each country is making toward regulating AI.


Europe has taken the lead in creating rules for AI, with the European Parliament and member states in talks with the aim of agreeing on the world's first legislation to regulate generative AI by the end of this year.

However, at the end of October, just as the talks were nearing a conclusion, the major countries France and Germany expressed a cautious stance toward regulating generative AI, saying that "Europe's competitiveness depends on the development of excellent original AI." On November 10, French President Emmanuel Macron said that what should be regulated is not the technological development of AI but its operation, and that "penalties should not be set."

In Europe, there are calls for strict regulation of generative AI to protect personal information and copyrights, but there is also a strong sense of crisis, from the standpoint of the economy and industrial development, about the spread of generative AI led by major American IT companies. France and Germany are believed to be cautious about regulation because they intend to support their own companies and researchers developing Europe's own generative AI.

In the United States, President Biden signed an executive order on October 30 that includes setting new standards for AI safety.
The executive order calls on government agencies to set strict testing standards that AI must meet before it is released to the public, and to create a certification mechanism indicating that content is AI-generated. It also includes expanded funding for AI research in key areas such as healthcare, aiming to foster innovation while managing risk.

President Biden has emphasized his intention to lead the world in the field of AI, and has called on Congress to take immediate action to pass new regulatory legislation to protect privacy.

The battle for leadership among IT companies intensifies

American IT companies are entering the development and services of generative AI one after another, and the competition for leadership is intensifying.

Google released its generative AI "Bard" to the public in February, aiming to compete with ChatGPT, which launched first. In addition to English, it supports more than 40 languages, including Japanese and Korean, and is characterized by the ability to draw answers from the latest information on the Internet; it has been released as a trial service. In September, Google also announced integration with services such as Gmail, making it possible to summarize the contents of emails.

Microsoft, which has invested heavily in OpenAI, is also focusing on this area. In addition to announcing a search engine that incorporates generative AI, it is developing a service that makes the generative AI "Copilot", built on OpenAI's technology, available in the document software "Word" and the spreadsheet software "Excel".

Meta, formerly Facebook, released a conversational AI a step ahead of ChatGPT, but withdrew it almost immediately after it produced incorrect content. Since then, however, the company has released "Llama", a foundational technology for generative AI, and made it available for a wide range of software development.

Claude, a generative AI developed by Anthropic, a venture company launched by former employees of OpenAI, is characterized by its emphasis on AI safety, and Amazon has announced that it will invest up to $4 billion in the company.


Entrepreneur Elon Musk announced in November the generative AI "Grok", developed by his newly founded company "xAI", explaining that it will be available through the $16-per-month paid service of X, formerly Twitter.