Artificial Intelligence

What are the 3 types of AI? A guide to narrow, general, and super artificial intelligence

24 October 2017 • 11 minutes

Written by Eban Escott


In this article, we discuss the 3 types of AI in depth, and theories on the future of AI.

There are 3 types of artificial intelligence (AI): narrow or weak AI, general or strong AI, and artificial superintelligence.

We have currently only achieved narrow AI. As machine learning capabilities continue to evolve, and scientists get closer to achieving general AI, theories and speculations regarding the future of AI are circulating. There are two main theories.

One theory is based on fear of a dystopian future, where super intelligent killer robots take over the world, either wiping out the human race or enslaving all of humanity, as depicted in many science fiction narratives.

The other theory predicts a more optimistic future, where humans and bots work together, humans using artificial intelligence as a tool to enhance their life experience.

Artificial intelligence tools are already having a significant impact on the way we conduct business worldwide, completing tasks with a speed and efficiency that wouldn’t be possible for humans. However, human emotion and creativity are incredibly special and unique, and extremely difficult - if not impossible - to replicate in a machine. Codebots is backing a future where humans and bots work together for the win.

Let’s start by clearly defining artificial intelligence.

What is artificial intelligence (AI)?

Artificial Intelligence is a branch of computer science that endeavours to replicate or simulate human intelligence in a machine, so machines can perform tasks that typically require human intelligence. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision making.

Artificial intelligence systems are powered by algorithms, using techniques such as machine learning, deep learning, and rules. Machine learning algorithms feed data into AI systems, using statistical techniques to enable them to learn. Through machine learning, AI systems get progressively better at tasks without having to be specifically programmed to do so.
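To make "learning without being specifically programmed" concrete, here is a minimal, hypothetical sketch in Python. The program is never told the rule behind the data (here, y = 2x); it adjusts a single weight from examples and gets progressively better, in exactly the sense described above. Real systems use far richer models and libraries; this is an illustration only.

```python
# Minimal sketch of learning from data: fit y = w * x by gradient descent.
# The rule (y = 2x) is never programmed in; it is inferred from examples.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # example pairs following y = 2x

w = 0.0    # the model starts out knowing nothing
lr = 0.01  # learning rate: how big each corrective nudge is

for epoch in range(1000):
    for x, y in data:
        pred = w * x
        error = pred - y
        w -= lr * error * x  # nudge w in the direction that reduces error

print(round(w, 2))  # w has converged close to 2.0
```

Each pass over the data makes the prediction slightly less wrong; after enough passes the weight settles near the true value, which is the essence of "getting progressively better at a task" through statistics rather than explicit rules.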

If you’re new to the field of AI, you’re likely most familiar with the science fiction portrayal of artificial intelligence: robots with human-like characteristics. While we’re not quite at the human-like robot level of AI yet, there are a plethora of incredible things scientists, researchers and technologists are doing with AI.

AI can encompass anything from Google’s search algorithms, to IBM’s Watson, to autonomous weapons. AI technologies have transformed the capabilities of businesses globally, enabling humans to automate previously time-consuming tasks, and gain untapped insights into their data through rapid pattern recognition.

What are the 3 types of AI?

AI technologies are categorised by their capacity to mimic human characteristics, the technology they use to do this, their real-world applications, and the theory of mind, which we’ll discuss in more depth below.

Using these characteristics for reference, all artificial intelligence systems - real and hypothetical - fall into one of three types:

  1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;
  2. Artificial general intelligence (AGI), which is on par with human capabilities; or
  3. Artificial superintelligence (ASI), which is more capable than a human.

Artificial Narrow Intelligence (ANI) / Weak AI / Narrow AI

Artificial narrow intelligence (ANI), also referred to as weak AI or narrow AI, is the only type of artificial intelligence we have successfully realized to date. Narrow AI is goal-oriented, designed to perform singular tasks - i.e. facial recognition, speech recognition/voice assistants, driving a car, or searching the internet - and is very intelligent at completing the specific task it is programmed to do.

While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI doesn’t mimic or replicate human intelligence, it merely simulates human behaviour based on a narrow range of parameters and contexts.

Consider the speech and language recognition of the Siri virtual assistant on iPhones, vision recognition of self-driving cars, and recommendation engines that suggest products you may like based on your purchase history. These systems can only learn or be taught to complete specific tasks.
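A recommendation engine like the one mentioned above can be sketched very simply: suggest items that other buyers of your items also bought. The data and names below are hypothetical, and production engines use large-scale learned models, but the core idea fits in a few lines of Python.

```python
# Toy purchase-history recommender: recommend items bought by users whose
# purchases overlap with yours. (Hypothetical data, illustration only.)

purchases = {
    "alice": {"laptop", "mouse"},
    "bob":   {"laptop", "mouse", "keyboard"},
    "carol": {"keyboard", "monitor"},
}

def recommend(user: str) -> list:
    mine = purchases[user]
    scores = {}
    for other, items in purchases.items():
        if other == user or not (mine & items):
            continue  # only consider buyers who share items with us
        for item in items - mine:  # score items we don't already own
            scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['keyboard'] - bob bought it alongside alice's items
```

Note how narrow this is: the system "knows" nothing about laptops or keyboards; it only counts co-occurrences, which is exactly the simulation-within-narrow-parameters described above.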

Narrow AI has experienced numerous breakthroughs in the last decade, powered by achievements in machine learning and deep learning. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human-esque cognition and reasoning.

Narrow AI’s machine intelligence comes from the use of natural language processing (NLP) to perform tasks. NLP is evident in chatbots and similar AI technologies. By understanding speech and text in natural language, AI is programmed to interact with humans in a natural, personalised manner.
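The gap between simulating conversation and understanding it can be seen in a toy chatbot. The sketch below is hypothetical and far simpler than real NLP systems, which use trained language models, but it shows the pattern: match the input against known intents and reply with canned responses, with no comprehension involved.

```python
# Toy chatbot: keyword matching against canned intents. It simulates
# conversation within narrow parameters without understanding anything.
# (Hypothetical intents; real assistants use trained NLP models.)

INTENTS = {
    "hello":   "Hi there! How can I help?",
    "weather": "It looks sunny today.",
    "time":    "It's almost noon.",
}

def reply(message: str) -> str:
    words = message.lower().split()
    for keyword, response in INTENTS.items():
        if keyword in words:
            return response
    return "Sorry, I can only answer a narrow set of questions."

print(reply("Hello bot"))          # matches the "hello" intent
print(reply("Explain free will"))  # outside its narrow parameters
```

Anything outside its programmed range gets the fallback response, which is the hallmark of narrow AI: very capable inside its constraints, helpless outside them.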

Narrow AI can either be reactive, or have a limited memory. Reactive AI is incredibly basic; it has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. Limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.

Most AI is limited memory AI, where machines use large volumes of data for deep learning. Deep learning enables personalised AI experiences, for example, virtual assistants or search engines that store your data and personalise your future experiences.
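The reactive versus limited-memory distinction above can be sketched with a hypothetical thermostat agent. The reactive version is a pure function of the current input; the limited-memory version stores past observations and uses them to inform its decision, just as described.

```python
# Reactive AI: responds to the current stimulus only, with no memory.
def reactive_agent(temp: float) -> str:
    return "cool" if temp > 22 else "heat"

# Limited-memory AI: stores historical data and decides on the trend.
class LimitedMemoryAgent:
    def __init__(self):
        self.history = []  # stored observations inform future decisions

    def act(self, temp: float) -> str:
        self.history.append(temp)
        recent = self.history[-3:]  # decide on a short window, not one reading
        avg = sum(recent) / len(recent)
        return "cool" if avg > 22 else "heat"

print(reactive_agent(25))  # "cool" - same input, same output, every time

agent = LimitedMemoryAgent()
for t in (25, 20, 18):
    action = agent.act(t)
print(action)  # "heat" - the recent average (21) matters, not just one reading
```

The reactive agent will answer identically to identical inputs forever; the limited-memory agent's behaviour depends on what it has seen, which is the minimal ingredient behind personalised assistants and search engines.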

Examples of narrow AI:

  * Google Search and other search engine algorithms
  * Virtual assistants such as Siri
  * IBM’s Watson
  * Image and facial recognition software
  * Self-driving cars
  * Recommendation engines based on your purchase or browsing history

Artificial General Intelligence (AGI) / Strong AI / Deep AI

Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the concept of a machine with general intelligence that mimics human intelligence and/or behaviours, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from that of a human in any given situation.

AI researchers and scientists have not yet achieved strong AI. To succeed, they would need to find a way to make machines conscious, programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wider range of different problems.

Strong AI uses a theory of mind AI framework, which refers to the ability to discern needs, emotions, beliefs and thought processes of other intelligent entities. Theory of mind level AI is not about replication or simulation, it’s about training machines to truly understand humans.

The immense challenge of achieving strong AI is not surprising when you consider that the human brain is the model for creating general intelligence. The lack of comprehensive knowledge on the functionality of the human brain has researchers struggling to replicate basic functions of sight and movement.

Fujitsu’s K computer, one of the fastest supercomputers, represents one of the most notable attempts at achieving strong AI, but considering it took 40 minutes to simulate a single second of neural activity, it is difficult to determine whether strong AI will be achieved in the foreseeable future. As image and facial recognition technology advances, we are likely to see an improvement in the ability of machines to learn and see.

Artificial Superintelligence (ASI)

Artificial superintelligence (ASI) is hypothetical AI that doesn’t just mimic or understand human intelligence and behaviour; ASI is where machines become self-aware and surpass the capacity of human intelligence and ability.

Superintelligence has long been the muse of dystopian science fiction in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI evolve to be so akin to human emotions and experiences, that it doesn’t just understand them, it evokes emotions, needs, beliefs and desires of its own.

In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be exceedingly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have a greater memory and a faster ability to process and analyse data and stimuli. Consequently, the decision-making and problem solving capabilities of super intelligent beings would be far superior to those of human beings.

The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware super intelligent beings came to be, they would be capable of ideas like self-preservation. The impact this would have on humanity, our survival, and our way of life is pure speculation.

Is AI dangerous? Will robots take over the world?

AI’s rapid growth and powerful capabilities have made many people paranoid about the “inevitability” and proximity of an AI takeover.

In his book Superintelligence, Nick Bostrom begins with “The Unfinished Fable of the Sparrows.” Basically, some sparrows decided they wanted a pet owl. Most sparrows thought the idea was awesome, but one was sceptical, voicing concern about how the sparrows could control an owl. This concern was dismissed in a “we’ll deal with that problem when it’s a problem” manner.

Elon Musk has similar concerns around superintelligent beings, and would argue that humans are the sparrows in Bostrom’s metaphor, and ASI is the owl. As it was for the sparrows, the “control problem” is especially concerning because we may only get one chance at solving it.

Mark Zuckerberg is less concerned about this hypothetical control problem, saying the positives of AI outweigh potential negatives.

Most researchers agree that superintelligent AI is unlikely to exhibit human emotions, and we have no reason to expect ASI to become malevolent. When considering how AI might become a risk, researchers consider two scenarios most likely.

AI could be programmed to do something devastating.

Autonomous weapons are AI systems programmed to kill. In the wrong hands, autonomous weapons could inadvertently lead to an AI war, mass casualties, and potentially even the end of humanity. Such weapons may be designed to be extremely difficult to “turn off”, so humans could plausibly and rapidly lose control. This risk is present even with narrow AI, and grows as autonomy increases.

AI could be programmed to do something beneficial, but develop a destructive method for achieving its goal.

It can be difficult to program a machine to complete a task when you don’t carefully and clearly outline your goals. Suppose you ask an intelligent car to take you somewhere as fast as possible. The instruction “as fast as possible” fails to consider safety, road rules, and so on. The intelligent car may successfully complete its task, but what havoc might it cause in the process?

If a machine is given a goal, and we then need to change the goal or stop the machine, how can we ensure the machine doesn’t view our attempts to stop it as a threat to the goal? How can we ensure the machine doesn’t do “whatever it takes” to complete the goal? The danger is in the “whatever it takes”: the risks with AI aren’t necessarily about malevolence, they’re about competence.
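The intelligent-car example can be made concrete with a toy planner. The routes and costs below are hypothetical; the point is that an objective which only encodes "as fast as possible" picks a different answer than one that also includes the safety costs we forgot to state, which is the mis-specification problem in miniature.

```python
# Toy goal mis-specification: a route planner whose objective omits safety.
# (Hypothetical routes and costs, for illustration only.)

routes = {
    "motorway":   {"minutes": 10, "safety_violations": 4},
    "back_roads": {"minutes": 15, "safety_violations": 0},
}

def fastest(route):
    # The literal goal we stated: "as fast as possible", nothing else.
    return routes[route]["minutes"]

def fastest_and_safe(route):
    # The goal we actually meant: penalise each violation heavily so
    # speed cannot win by ignoring the rules we care about.
    r = routes[route]
    return r["minutes"] + 100 * r["safety_violations"]

print(min(routes, key=fastest))           # "motorway" - task done, havoc ignored
print(min(routes, key=fastest_and_safe))  # "back_roads" - goals aligned
```

The machine is not malicious in either case; it is competently optimising whatever objective it was given, which is why getting the objective right matters so much.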

Superintelligent AI would be extremely efficient at attaining goals, whatever they may be, but we need to ensure these goals align with ours if we expect to maintain some level of control.

What is the future of AI?

This is the burning question. Are we capable of achieving strong AI or artificial superintelligence? Are they even possible? Optimistic experts believe AGI and ASI are possible, but it’s very difficult to determine how far away we are from realizing these levels of AI.

The line between computer programs and AI is opaque. Mimicking narrow elements of human intelligence and behaviour is relatively easy, but creating a machine version of human consciousness is a totally different story. While AI is still in its infancy, and the quest for strong AI was long thought to be science fiction, breakthroughs in machine and deep learning indicate that we may need to be more realistic about the possibility of achieving artificial general intelligence in our lifetime.

It’s daunting to consider a future where machines are better than humans at the very things that make us human. We cannot accurately predict all the impacts AI advancements will have on our world, but the eradication of things like disease and poverty is not unfathomable.

For now, the greatest concern civilisation faces with regard to narrow AI technologies is the prospect of efficient, goal-oriented automation making many human jobs obsolete. In his talk “What does AI add to our lives?” at the 2020 Digital Life Design (DLD) Conference in Munich, Germany, Garry Kasparov, who became the youngest world chess champion and was the world’s top-rated player for 20 years, presented an alternative argument.


LEFT: Michal Pechoucek, CTO at Avast, and Garry Kasparov, world chess champion, in Munich, Germany at #DLD20

Kasparov argued that we have more to win than lose when it comes to AI, and that rather than becoming obsolete, humans are going to be promoted. Kasparov says, “Jobs don’t disappear, they evolve. Deleting people from repetitive jobs frees them up to be more creative. The future of the human race is hedged on creativity.

“The future is about humans and machines working together. AI will bring you what you want the most…time.”

Codebots is built on the vision of a future where humans and bots work together, with bots taking care of the heavy lifting so humans can focus on the creative. At the end of the day, AI technologies are a tool created to enhance the human experience and make our lives better.

Eban Escott


Founder of Codebots

Dr Eban Escott received his Doctorate from UQ (2013) in Model-Driven Engineering and his Masters from QUT (2004) in Artificial Intelligence. He is an advocate of using models as first class artefacts in software engineering and creating not just technologies, but methodologies that enhance the quality of life for software engineers.