When talking about artificial intelligence, there are many perspectives on what AI is and what it encompasses. We associate AI with terms like algorithms, machine learning, natural language processing, deep learning, robotics and many other things. AI covers a wide range of different technologies. Today let’s talk a little bit about ANI, AGI and ASI – what they are and how they differ.
Current AI technologies all fall under the Artificial Narrow Intelligence (ANI) category, which means they are very good at only one task, or a few closely related tasks. This type of AI has a limited range of abilities and is designed for a narrow use. It can match or even exceed human performance, but only within the limited field that is its specialty. Examples of ANI include everything from Siri, Face ID and the Google Assistant to self-driving cars and DeepMind’s board-game-playing programs. This is the only form of AI we have been able to develop so far: any form of AI that exists today is ANI.
The next step after ANI that people are trying to achieve is Artificial General Intelligence (AGI), which would be good at a vast range of tasks rather than focused on a specific one – much closer to human intelligence. In theory, an AGI should be able to think and function like a human mind: making sense of varied content, understanding problems and deciding what is best in a complex situation. This breadth is exactly why AGI hasn’t been achieved yet. We are not technically capable of building something that complex, and we don’t fully understand how the human brain works either. AGI is a logical next step, though, and it could be attained at some point if humans develop their knowledge, understanding and technical skills to a high enough level.
If AGI is achieved and computers become able to learn independently at a very quick rate, improving exponentially on their own without human intervention, the final step AI could hypothetically reach is Artificial Super Intelligence (ASI). At this stage, AI would be capable of vastly outperforming the best human brains in practically every field. The transition from AGI to ASI would in theory be much faster than the one from ANI to AGI, since an AGI could “think” and improve itself once it can truly learn from experience and by trial and error. The runaway exponential growth expected at this point, if a transition to ASI ever happens, is often called an “intelligence explosion”.
ASI is very far away – we are not even close to this level of development. But as a potential scenario that could one day happen, we need to consider the implications of a future where ASI exists: how to ensure it develops in the right way, remains safe and beneficial for us, and enables optimal cooperation between humans and machines. The question of ethics and morals is not only relevant once AI develops into superintelligence; it matters even now, in data collection and security, online presence and communication, and every other aspect of narrow AI technologies. We should make the safe and ethical functioning of AI in all fields a priority in further development.