Huge investments are being poured into 'artificial general intelligence (AGI)' and 'artificial superintelligence (ASI)', concepts said to go beyond today's artificial intelligence (AI), even though neither has a clear definition.

As AI develops rapidly, AI companies including Google and OpenAI are aiming to realize AGI. The Financial Times examines why the industry cannot agree on what AGI actually means.

Why Big Tech cannot agree on artificial general intelligence
https://www.ft.com/content/d20e8c22-bc03-4404-ac93-f7886525d8d6

AGI stands for 'artificial general intelligence' and refers to AI that exceeds human intellectual capabilities in all fields. While some believe AGI is impossible to achieve, Microsoft has invested more than 100 billion yen in OpenAI to develop it, and Google co-founder Sergey Brin has said, 'If we work on development in the office for 60 hours a week on weekdays, Google can develop AGI and lead the industry.' The companies leading AI development are working in earnest to realize AGI.
Google DeepMind defines AGI as 'AI with capabilities at least equivalent to an experienced adult in most cognitive tasks.' However, it is not clearly specified how capable 'an experienced adult' is, or how broad 'most cognitive tasks' should be. The concept therefore remains nebulous, and any estimate of 'when AGI will be realized' depends heavily on how one defines it.
OpenAI, for example, defines AGI in financial terms: AI will be considered to have achieved AGI once it generates a profit of 15 trillion yen. This threshold, however, comes from an agreement with Microsoft, which invested in OpenAI to support AGI development, so it is far from a technical definition. Still, Mark Chen, Chief Research Officer of OpenAI, likewise views AGI through an economic lens, saying, 'We aim to develop highly autonomous systems that have the ability to outperform humans in many economically valuable tasks.'
Microsoft and OpenAI have agreed that 'if OpenAI develops an AI that generates 15 trillion yen in profits, it will be considered to have achieved general artificial intelligence (AGI)' - GIGAZINE

The lack of an agreed-upon definition has also led some technology leaders to lower the bar for AGI's arrival: OpenAI CEO Sam Altman said in December 2024, 'My guess is that AGI will come sooner than most people in the world think, and it will be much less important.'
Margaret Mitchell, a researcher and chief ethics scientist at the AI development platform Hugging Face, argued in a paper (PDF) published in February 2025 that AGI should not serve as a guide or 'North Star' goal. In an interview with the Financial Times, Mitchell attributed the vagueness of AGI's definition to the vagueness of the concept of 'intelligence' itself: 'The term AGI is used in a way that gives the illusion that there is a common understanding of what the term means and a shared will about what we need to do. This state of affairs can be very harmful.'
Some argue that large language models (LLMs) are already general-purpose AI, since they can handle a wide range of fields compared with conventional AI specialized for particular tasks. However, AGI does not merely mean AI that can perform general tasks, as the name 'general artificial intelligence' suggests; the term is also used to mean 'strong AI' that surpasses not only previous AI but even the capabilities of humans who specialize in a given field.
For this reason, Yann LeCun, Meta's chief AI scientist and one of the 'godfathers' of AI technology, dislikes the term AGI, arguing that human intelligence is not all that general, so any AI with some degree of generality could already be said to exceed it. Instead, some experts use the term 'artificial superintelligence (ASI)' for AI that greatly exceeds human intelligence.

Further AI development requires huge amounts of water and energy to run training in large data centers, imposing a heavy environmental burden, and AGI raises concerns about ethical issues and social harm. Despite this, the companies leading AI development are pushing hard toward AGI: hundreds of millions of yen are being invested, and OpenAI has announced a partnership with a US national research institute to build AGI, all without a clear shared definition of either AGI or ASI.
Nick Frosst, co-founder of AI startup Cohere, warned against the overheated race, saying of the AGI debate, 'It's mostly a bubble filled with people trying to raise funds for the idea. Someone has been saying for years that AGI is imminent; it seems there's a financial incentive to say it, but you have to wonder why it hasn't happened for so long.' Some researchers likewise argue that the AI industry's obsession with 'crude claims about AGI' hinders the creation of meaningful ways to measure and evaluate AI technology in the real world, leaving the term itself without a clear definition.
in Software, Posted by log1e_dh