
What Is AI Technology?

From the book: Generative AI For Dummies | Updated: 2023-05-26
The first concept that’s important to understand is that artificial intelligence (AI) doesn’t really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that’s what it is: a simulation.

When asking "What is artificial intelligence?" notice the interplay among goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal.

AI technology relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

  • Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when you can't differentiate it from a human. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test).

    The original Turing test didn't include any physical contact. The newer Total Turing test does include physical contact, in the form of perceptual-ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed.

    Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright Brothers didn’t succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics that eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

  • Thinking like a human: When a computer thinks like a human, it performs tasks that require human intelligence (as contrasted with rote procedures) to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This approach relies on three techniques:
    • Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one’s own thought processes.
    • Psychological testing: Observing a person’s behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).
    • Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

    After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

  • Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation. A computer that thinks rationally relies on the recorded behaviors to create a guide as to how to interact with an environment based on the data at hand.

    The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, the solving of a problem in principle is often different from solving it in practice, but you still need a starting point.

  • Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal. (A minimal code sketch of such an agent appears after this list.)
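
To make the idea of acting rationally concrete, here is a minimal Python sketch of an agent that chooses its next action from recorded outcomes rather than reasoning from first principles. Everything in it (the condition labels, the actions, and the success rates) is invented for illustration; no real system is implied.

    # Recorded actions: for each observed condition, the fraction of past
    # trials in which each action led to success. All values are made up.
    recorded_outcomes = {
        "wet_road": {"brake_early": 0.9, "brake_late": 0.4},
        "dry_road": {"brake_early": 0.7, "brake_late": 0.8},
    }

    def act_rationally(condition):
        """Choose the action with the highest recorded success rate."""
        actions = recorded_outcomes[condition]
        return max(actions, key=actions.get)

    print(act_rationally("wet_road"))  # brake_early
    print(act_rationally("dry_road"))  # brake_late

The agent's "rationality" here is nothing more than picking the action its recorded data says works best: a solution in principle that may still disappoint in practice.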

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and not distinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well).

The problem with strong AI is that it doesn't perform any task well, while weak AI is too specific to perform tasks independently. Even so, two classifications aren't enough to describe AI, even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:

  • Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.
  • Limited memory: A self-driving car or autonomous robot can't afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation again, it can rely on experience to reduce reaction time and to free resources for decisions it hasn't encountered before. This is an example of the current level of strong AI. (The sketch after this list contrasts reactive and limited-memory behavior.)
  • Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.
  • Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren’t even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
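
The practical difference between the first two Hintze categories is whether past computation is kept around. The following toy Python sketch, in which evaluate_position is just a made-up stand-in for an expensive game-tree search, contrasts a reactive machine that recomputes every answer with a limited-memory machine that caches prior experience:

    from functools import lru_cache

    def evaluate_position(position):
        # Hypothetical stand-in for an expensive search over possible moves.
        return sum(ord(ch) for ch in position)

    # Reactive machine: no memory, so every decision is recreated from scratch.
    def reactive_move(position):
        return evaluate_position(position)

    # Limited memory: cache prior evaluations so repeated situations are instant.
    @lru_cache(maxsize=1024)
    def limited_memory_move(position):
        return evaluate_position(position)

    reactive_move("e2e4")        # full computation on every call
    limited_memory_move("e2e4")  # computed once...
    limited_memory_move("e2e4")  # ...then answered from stored experience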

Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don't really understand what AI is all about, or even what it should accomplish.

A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish.

Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it’s just a collection of characters. Defining the idiom (a term whose meaning isn’t clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways.

The term artificial intelligence doesn’t really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:
  • Learning: Having the ability to obtain and process new information
  • Reasoning: Being able to manipulate information in various ways
  • Understanding: Considering the result of information manipulation
  • Grasping truths: Determining the validity of the manipulated information
  • Seeing relationships: Divining how validated data interacts with other data
  • Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
  • Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily grow quite long, and even this short list is open to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation (a minimal code sketch follows the steps):
  1. Set a goal based on needs or wants.
  2. Assess the value of any currently known information in support of the goal.
  3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
  4. Manipulate the data such that it achieves a form consistent with existing information.
  5. Define the relationships and truth values between existing and new information.
  6. Determine whether the goal is achieved.
  7. Modify the goal in light of the new data and its effect on the probability of success.
  8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).
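
As a rough illustration only, this Python sketch walks through Steps 2 through 8. The target value, the observation list, and the rule for revising the goal are arbitrary stand-ins; the point is the shape of the loop, not the particulars.

    def pursue_goal(goal, observations, max_rounds=10):
        known = []                             # Step 2: currently known information
        for _ in range(max_rounds):            # Step 8: repeat as needed
            if not observations:
                return False                   # possibilities exhausted: found false
            known.append(observations.pop(0))  # Step 3: gather more information
            best = max(known)                  # Steps 4-5: relate new data to old
            if best >= goal:                   # Step 6: is the goal achieved?
                return True                    # found true
            goal = min(goal, best + 2)         # Step 7: modify the goal in light of data
        return False

    print(pursue_goal(goal=10, observations=[3, 4, 7, 9]))  # True, after the goal adapts
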
Even though you can create algorithms and provide access to data in support of this process within a computer, a computer’s capability to achieve intelligence is severely limited.

For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can't easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities in the intelligence list above.

As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don’t use just one type of intelligence, but rather rely on multiple intelligences to perform tasks.

Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (see the table below for a modified version of these intelligences with additional description).

The Kinds of Human Intelligence and How AIs Simulate Them

  • Visual-spatial
    Simulation potential: Moderate
    Human tools: Models, graphics, charts, photographs, drawings, 3-D modeling, video, television, and multimedia
    Physical-environment intelligence used by people like sailors and architects (among many others). To move at all, humans need to understand their physical environment — that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but the capability is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).

  • Bodily-kinesthetic
    Simulation potential: Moderate to High
    Human tools: Specialized equipment and real objects
    Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It's essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

  • Creative
    Simulation potential: None
    Human tools: Artistic output, new patterns of thought, inventions, new kinds of musical composition
    Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, and writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is really just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

  • Interpersonal
    Simulation potential: Low to Moderate
    Human tools: Telephone, audio conferencing, video conferencing, writing, computer conferencing, email
    Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, and manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.

  • Intrapersonal
    Simulation potential: None
    Human tools: Books, creative materials, diaries, privacy, and time
    Looking inward to understand one's own interests and then setting goals based on those interests is currently a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn't aware of anything that it does, nor does it understand anything that it does.

  • Linguistic (often divided into oral, aural, and written)
    Simulation potential: Low for oral and aural; None for written
    Human tools: Games, multimedia, books, voice recorders, and spoken words
    Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. In many cases, computers can barely parse input into keywords, can't actually understand the request at all, and output responses that may not be understandable. In humans, oral, aural, and written linguistic intelligence come from different areas of the brain, which means that even in humans, someone who has high written linguistic intelligence may not have similarly high oral linguistic intelligence. Computers don't currently separate aural and oral linguistic ability; one is simply input and the other output. A computer can't simulate written linguistic capability because this ability requires creativity.

  • Logical-mathematical
    Simulation potential: High (potentially higher than humans)
    Human tools: Logic games, investigations, mysteries, and brain teasers
    Calculating a result, performing comparisons, exploring patterns, and considering relationships are all areas in which computers currently excel. When you see a computer beat a human on a game show, this is the only one of the seven kinds of intelligence that you're actually seeing. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn't a good idea.

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination.

However, the importance of artificial intelligence to the future of technology can't be overstated. It is already helping people in everyday technologies and has great potential in everything from customer service to health care to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which underlies many of the claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning.

To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the seven kinds of intelligence discussed earlier.

Here are the five tribes of learning:

  • Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.
  • Connectionists: This tribe’s origin is in neuroscience, and the group relies on backpropagation to solve problems.
  • Evolutionaries: The evolutionaries tribe originates in evolutionary biology, relying on genetic programming to solve problems.
  • Bayesians: This tribe's origin is in statistics, and the group relies on probabilistic inference to solve problems (a small worked example appears after this list).
  • Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.
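
To give a taste of one tribe's toolkit, here is a tiny Python example of the Bayesians' probabilistic inference: Bayes' rule updating the belief that a message is spam after observing a single word. All of the probabilities are made-up values chosen only to show the mechanics.

    prior = 0.01          # P(spam): belief before seeing any evidence
    p_word_spam = 0.60    # P("winner" appears | spam)
    p_word_ham = 0.05     # P("winner" appears | not spam)

    # Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
    evidence = p_word_spam * prior + p_word_ham * (1 - prior)
    posterior = p_word_spam * prior / evidence
    print(f"P(spam | 'winner') = {posterior:.3f}")  # about 0.108, up from 0.01
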
The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal.

To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity.

At this point, you should be amazed at just how much people don’t know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea of what AI is all about, except perhaps from a sci-fi novel they read once. So, it’s not just movies or television that cause problems with AI hype; it’s all sorts of other media sources as well.

You can often find news reports presenting AI as being able to do something that it can’t possibly do because the reporter doesn’t understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested much more thoroughly before being placed on the market. The "2020 in Review: 10 AI Failures" article at SyncedReview.com discusses ten products that were hyped by their developers but fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole.

However, something to consider with a few of these failures is that people may have interfered with the device running the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, AI will continue to fail to perform as expected whenever people fiddle with the software in an attempt to make it fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn’t a good idea. Yet, many stories appear with people like these as the information source. To discover the future direction of AI, it’s best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph. However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure.

However, you need not be speeding down a highway at 90 mph to encounter user overestimation. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner.

The article “How to Solve the Most Annoying Robot Vacuum Cleaner Problems” at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason — the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that combines large amounts of data with fast, iterative algorithms to enable computers to solve complex problems and complete complex tasks.
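
To make "large amounts of data" plus "fast, iterative algorithms" slightly more tangible, here is a deliberately tiny Python sketch that uses gradient descent, one common iterative technique, to fit a slope to a few invented data points. The data, the learning rate, and the step count are all arbitrary choices for illustration.

    # Invented (x, y) observations that roughly follow y = 2x.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

    w = 0.0                           # initial guess for the slope
    for _ in range(200):              # the "iterative" part: repeat small corrections
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.01 * grad              # step against the error gradient

    print(f"learned slope: {w:.2f}")  # settles near 2.0

Real AI systems differ from this sketch in scale rather than in spirit: far more data, far more parameters, and far more iterations.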

To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base. For artificial intelligence, the computers could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications.

Of course, if you’re Amazon and you want to provide advice on a particular person’s next buying decision, the smartphone won’t do — you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you’re a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task.

On the other hand, if you’re a customer and want to find products on Amazon to go with your current purchase items, the application doesn’t even reside on your computer; you access it through a web-based application located on Amazon’s servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important. A network connection affords you access to a large knowledge base online but costs you in time because of the latency of network connections. However, localized databases, while fast, tend to lack details in many cases.
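
The location-versus-time trade-off is easy to sketch in code. In the hypothetical Python example below, the local store, the remote lookup, and the simulated latency are all invented; the common compromise it illustrates is answering from a small local store when possible and caching remote answers after the first slow request.

    import time

    local_kb = {"paris": "capital of France"}   # fast but sparse local knowledge

    def remote_lookup(term):
        time.sleep(0.2)                         # stand-in for network latency
        return f"detailed entry for {term}"     # richer but slower online answer

    def lookup(term):
        if term in local_kb:                    # fast path: local, fewer details
            return local_kb[term]
        answer = remote_lookup(term)            # slow path: online, more detail
        local_kb[term] = answer                 # remember it for next time
        return answer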

About the book authors:

John Paul Mueller is a freelance author and technical editor. He has writing in his blood, having produced 100 books and more than 600 articles to date. The topics range from networking to home security and from database management to heads-down programming. John has provided technical services to both Data Based Advisor and Coast Compute magazines.

Luca Massaron is a data scientist who specializes in organizing and interpreting big data and transforming it into smart data by means of the simplest and most effective data mining and machine learning techniques. Through his work as a quantitative marketing consultant and marketing researcher, he has been involved in quantitative data since 2000, serving different clients in various industries, and he is one of the top 10 Kaggle data scientists.