Articles From John Paul Mueller
Article / Updated 10-28-2024
Bayes' theorem can help you deduce how likely something is to happen in a certain context, based on the general probabilities of the fact itself and the evidence you examine, combined with the probability of the evidence given the fact. Seldom will a single piece of evidence diminish doubts and provide enough certainty in a prediction to ensure that it will happen. As a true detective, to reach certainty, you have to collect more evidence and make the individual pieces work together in your investigation. Noticing that a person has long hair isn't enough to determine whether that person is female or male. Adding data about height and weight could help increase confidence.

The Naïve Bayes algorithm helps you arrange all the evidence you gather and reach a more solid prediction with a higher likelihood of being correct. Each piece of evidence considered on its own couldn't save you from the risk of predicting incorrectly, but all the evidence summed together can reach a more definitive resolution.

The following example shows how things work in a Naïve Bayes classification. This is an old, renowned problem, but it represents the kind of capability that you can expect from an AI. The dataset is from the paper "Induction of Decision Trees," by John Ross Quinlan. Quinlan is a computer scientist who contributed in a fundamental way to the development of another machine learning algorithm, decision trees, but his example works well with any kind of learning algorithm. The problem requires that the AI guess the best conditions to play tennis, given the weather conditions. The set of features described by Quinlan is as follows:

Outlook: Sunny, overcast, or rainy
Temperature: Cool, mild, or hot
Humidity: High or normal
Windy: True or false

The following table contains the database entries used for the example:

Outlook    Temperature  Humidity  Windy  PlayTennis
Sunny      Hot          High      False  No
Sunny      Hot          High      True   No
Overcast   Hot          High      False  Yes
Rainy      Mild         High      False  Yes
Rainy      Cool         Normal    False  Yes
Rainy      Cool         Normal    True   No
Overcast   Cool         Normal    True   Yes
Sunny      Mild         High      False  No
Sunny      Cool         Normal    False  Yes
Rainy      Mild         Normal    False  Yes
Sunny      Mild         Normal    True   Yes
Overcast   Mild         High      True   Yes
Overcast   Hot          Normal    False  Yes
Rainy      Mild         High      True   No

The decision to play tennis depends on the four arguments shown here. The result of this AI learning example is a decision as to whether to play tennis, given the weather conditions (the evidence). Using just the outlook (sunny, overcast, or rainy) won't be enough, because the temperature and humidity could be too high or the wind might be strong. These arguments represent real conditions that have multiple causes, or causes that are interconnected. The Naïve Bayes algorithm is skilled at guessing correctly when multiple causes exist.

The algorithm computes a score, based on the probability of making a particular decision, multiplied by the probabilities of the evidence connected to that decision. For instance, to determine whether to play tennis when the outlook is sunny but the wind is strong, the algorithm computes the score for a positive answer by multiplying the general probability of playing (9 played games out of 14 occurrences) by the probability of the day's being sunny (2 out of 9 played games) and of having windy conditions when playing tennis (3 out of 9 played games). The same rule applies to the negative case, which has different probabilities for not playing given those conditions:

likelihood of playing: 9/14 * 2/9 * 3/9 = 0.05
likelihood of not playing: 5/14 * 3/5 * 3/5 = 0.13

Because the score for not playing is higher, the algorithm decides that it's safer not to play under such conditions. It converts the scores into probabilities by dividing each score by their sum:

probability of playing: 0.05 / (0.05 + 0.13) = 0.278
probability of not playing: 0.13 / (0.05 + 0.13) = 0.722
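The short sketch below reproduces this scoring in plain Python, computing the counts directly from the table above. It's a minimal illustration of the computation just described, not a production classifier, and the records list and score function are simply illustrative names.

# A minimal sketch of the Naive Bayes scoring described above, using plain
# Python (no machine learning library) and Quinlan's play-tennis table.
records = [
    # (outlook, temperature, humidity, windy, play)
    ("Sunny",    "Hot",  "High",   False, "No"),
    ("Sunny",    "Hot",  "High",   True,  "No"),
    ("Overcast", "Hot",  "High",   False, "Yes"),
    ("Rainy",    "Mild", "High",   False, "Yes"),
    ("Rainy",    "Cool", "Normal", False, "Yes"),
    ("Rainy",    "Cool", "Normal", True,  "No"),
    ("Overcast", "Cool", "Normal", True,  "Yes"),
    ("Sunny",    "Mild", "High",   False, "No"),
    ("Sunny",    "Cool", "Normal", False, "Yes"),
    ("Rainy",    "Mild", "Normal", False, "Yes"),
    ("Sunny",    "Mild", "Normal", True,  "Yes"),
    ("Overcast", "Mild", "High",   True,  "Yes"),
    ("Overcast", "Hot",  "Normal", False, "Yes"),
    ("Rainy",    "Mild", "High",   True,  "No"),
]

def score(play, outlook, windy):
    # Prior probability of the outcome times the conditional probabilities
    # of the two pieces of evidence (outlook and windy), given that outcome.
    matching = [r for r in records if r[4] == play]
    prior = len(matching) / len(records)
    p_outlook = len([r for r in matching if r[0] == outlook]) / len(matching)
    p_windy = len([r for r in matching if r[3] == windy]) / len(matching)
    return prior * p_outlook * p_windy

playing = score("Yes", "Sunny", True)      # 9/14 * 2/9 * 3/9, about 0.048
not_playing = score("No", "Sunny", True)   # 5/14 * 3/5 * 3/5, about 0.129

# Keeping full precision gives roughly 0.27 and 0.73; the 0.278 and 0.722
# in the text come from rounding the two scores to 0.05 and 0.13 first.
print(round(playing / (playing + not_playing), 3))      # probability of playing
print(round(not_playing / (playing + not_playing), 3))  # probability of not playing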
You can further extend Naïve Bayes to represent relationships that are more complex than a series of factors hinting at the likelihood of an outcome by using a Bayesian network, which is a graph showing how events affect one another. Bayesian graphs have nodes that represent the events and arcs showing which events affect others, accompanied by tables of conditional probabilities that show how each relationship works in terms of probability.

A famous example of a Bayesian network comes from a 1988 academic paper, "Local computations with probabilities on graphical structures and their application to expert systems," by Steffen L. Lauritzen and David J. Spiegelhalter, published in the Journal of the Royal Statistical Society. The network it depicts is called Asia. It shows possible patient conditions and what causes what. For instance, if a patient has dyspnea, it could be an effect of tuberculosis, lung cancer, or bronchitis. Knowing whether the patient smokes, has been to Asia, or has anomalous x-ray results (thus giving certainty to certain pieces of evidence, a priori in Bayesian language) helps infer the real (posterior) probabilities of having any of the pathologies in the graph.

Bayesian networks, though intuitive, have complex math behind them, and they're more powerful than a simple Naïve Bayes algorithm because they mimic the world as a sequence of causes and effects based on probability. Bayesian networks are flexible enough to represent almost any situation. They have varied applications, such as medical diagnoses, the fusing of uncertain data arriving from multiple sensors, economic modeling, and the monitoring of complex systems such as a car. For instance, because driving in highway traffic may involve complex situations with many vehicles, the Analysis of MassIve Data STreams (AMIDST) consortium, in collaboration with the automaker Daimler, devised a Bayesian network that can recognize maneuvers by other vehicles and increase driving safety.
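The full Asia network is too large to reproduce here, but a minimal hand-rolled sketch of a single cause-and-effect link shows how a conditional probability table plus Bayes' theorem turns evidence into a posterior probability. The smoking/bronchitis pairing echoes the network described above, but the numbers are invented for illustration and don't come from the Lauritzen and Spiegelhalter paper.

# A toy two-node Bayesian network: Smoking -> Bronchitis.
# The probabilities below are made up for the sake of the example.
p_smoker = 0.5                                # prior probability of smoking
p_bronchitis_given = {True: 0.6, False: 0.3}  # conditional probability table

# Evidence: the patient has bronchitis. What is the posterior probability
# that the patient smokes? Apply Bayes' theorem over the two cases.
joint_smoker = p_smoker * p_bronchitis_given[True]
joint_nonsmoker = (1 - p_smoker) * p_bronchitis_given[False]

posterior_smoker = joint_smoker / (joint_smoker + joint_nonsmoker)
print(round(posterior_smoker, 3))  # 0.667: the evidence raises the prior from 0.5

A real Bayesian network chains many such tables together, which is where the complex math (and the power) comes from.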
Article / Updated 09-24-2024
Both linear and logistic regression see a lot of use in data science but are commonly used for different kinds of problems. You need to know and understand both types of regression to perform a full range of data science tasks. Of the two, logistic regression is harder to understand in many respects because it necessarily uses a more complex equation model. The following information gives you a basic overview of how linear and logistic regression differ.

The equation model

Any discussion of the difference between linear and logistic regression must start with the underlying equation model. The equation for linear regression is straightforward:

y = a + bx

You may see this equation in other forms and you may see it called ordinary least squares regression, but the essential concept is always the same. Depending on the source you use, some of the equations used to express logistic regression can become downright terrifying unless you're a math major. However, the start of this discussion can use one of the simplest views of logistic regression:

p = f(a + bx)

The output, p, is equal to the logistic function, f, applied to two model parameters, a and b, and one explanatory variable, x. When you look at this particular model, you see that it really isn't all that different from the linear regression model, except that you now feed the result of the linear regression through the logistic function to obtain the required curve. The output (dependent variable) is a probability ranging from 0 (not going to happen) to 1 (definitely will happen), or a categorization that says something is either part of the category or not part of the category. (You can also perform multiclass categorization, but focus on the binary response for now.) The best way to view the difference between linear regression output and logistic regression output is to say the following:

Linear regression is continuous. A continuous value can take any value within a specified interval (range) of values. For example, no matter how closely the height of two individuals matches, you can always find someone whose height fits between those two individuals. Examples of continuous values include:
Height
Weight
Waist size

Logistic regression is discrete. A discrete value has specific values that it can assume. For example, a hospital can admit only a specific number of patients in a given day. You can't admit half a patient (at least, not alive). Examples of discrete values include:
Number of people at the fair
Number of jellybeans in the jar
Colors of automobiles produced by a vendor

The logistic function

Of course, now you need to know about the logistic function. You can find a variety of forms of this function as well, but here's the easiest one to understand:

f(x) = e^x / (e^x + 1)

You already know about f, which is the logistic function, and x stands for the value you feed to that function, which is a + bx in this case. That leaves e, the base of the natural logarithm, which has an irrational value of approximately 2.718. Another way you see this function expressed is:

f(x) = 1 / (1 + e^-x)

Both forms are correct, but the first form is easier to use. Consider a simple problem in which a, the y-intercept, is 0, and b, the slope, is 1. The example uses x values from –6 to 6.
Consequently, the first f(x) value would look like this when calculated (all values are rounded):

f(-6) = e^-6 / (1 + e^-6)
      = 0.00248 / (1 + 0.00248)
      = 0.002474

As you might expect, an x value of 0 would result in an f(x) value of 0.5, and an x value of 6 would result in an f(x) value of 0.9975. Obviously, a linear regression would show different results for precisely the same x values. If you calculate and plot all the results from both logistic and linear regression using the following code, you receive a plot comparing the two curves.

import matplotlib.pyplot as plt
%matplotlib inline
from math import exp

x_values = range(-6, 7)

# Normalized straight line and the corresponding logistic curve
lin_values = [(0 + 1*x) / 13 for x in range(0, 13)]
log_values = [exp(0 + 1*x) / (1 + exp(0 + 1*x)) for x in x_values]

plt.plot(x_values, lin_values, 'b-^')
plt.plot(x_values, log_values, 'g-*')
plt.legend(['Linear', 'Logistic'])
plt.show()

This example relies on list comprehension to calculate the values because it makes the calculations clearer. The linear regression uses a different numeric range because you must normalize the values to appear in the 0 to 1 range for comparison. This is also why you divide the calculated values by 13. The exp(x) call used for the logistic regression raises e to the power of x, e^x, as needed for the logistic function.

The model discussed here is simplified, and some math majors out there are probably throwing a temper tantrum of the most profound proportions right now. The Python or R package you use will actually take care of the math in the background, so really, what you need to know is how the math works at a basic level so that you can understand how to use the packages. This section provides what you need to use the packages. However, if you insist on carrying out the calculations the old way, chalk on chalkboard, you'll likely need a lot more information.

The problems that logistic regression solves

You can separate logistic regression into several categories. The first is simple logistic regression, in which you have one dependent variable and one independent variable, much as you see in simple linear regression. However, because of how you calculate the logistic regression, you can expect only two kinds of output:

Classification: Decides between two available outcomes, such as male or female, yes or no, or high or low. The outcome is dependent on which side of the line a particular data point falls.

Probability: Determines the probability that something is true or false. The values true and false can have specific meanings. For example, you might want to know the probability that a particular apple will be yellow or red based on the presence of yellow and red apples in a bin.

Fit the curve

As part of understanding the difference between linear and logistic regression, consider this grade prediction problem, which lends itself well to linear regression. In the following code, you see the effect of trying to use logistic regression with that data:

x1 = range(0, 9)
y1 = (0.25, 0.33, 0.41, 0.53, 0.59, 0.70, 0.78, 0.86, 0.98)

plt.scatter(x1, y1, c='r')

# Hand-picked linear and logistic fits for the grade data
lin_values = [0.242 + 0.0933*x for x in x1]
log_values = [exp(0.242 + .9033*x) / (1 + exp(0.242 + .9033*x)) for x in range(-4, 5)]

plt.plot(x1, lin_values, 'b-^')
plt.plot(x1, log_values, 'g-*')
plt.legend(['Org Data', 'Linear', 'Logistic'])
plt.show()

The example has undergone a few changes to make it easier to see precisely what is happening. It relies on the same data that was converted from questions answered correctly on the exam to a percentage.
If you have 100 questions and you answer 25 of them correctly, you have answered 25 percent (0.25) of them correctly. The values are normalized to produce values between 0 and 1 (that is, between 0 percent and 100 percent). In the resulting plot, the linear regression follows the data points closely. The logistic regression doesn't. However, logistic regression often is the correct choice when the data points naturally follow the logistic curve, which happens far more often than you might think. You must use the technique that fits your data best, which means using linear regression in this case.

A pass/fail example

An essential point to remember is that logistic regression works best for probability and classification. Consider that points on an exam ultimately predict passing or failing the course. If you get a certain percentage of the answers correct, you pass, but you fail otherwise. The following code considers the same data used for the example above, but converts it to a pass/fail list. When a student gets at least 70 percent of the questions correct, success is assured.

# Convert the continuous percentages into discrete pass (1) / fail (0) values
y2 = [0 if x < 0.70 else 1 for x in y1]

plt.scatter(x1, y2, c='r')

lin_values = [0.242 + 0.0933*x for x in x1]
log_values = [exp(0.242 + .9033*x) / (1 + exp(0.242 + .9033*x)) for x in range(-4, 5)]

plt.plot(x1, lin_values, 'b-^')
plt.plot(x1, log_values, 'g-*')
plt.legend(['Org Data', 'Linear', 'Logistic'])
plt.show()

This is an example of how you can use list comprehensions in Python to obtain a required dataset or data transformation. The list comprehension for y2 starts with the continuous data in y1 and turns it into discrete data. Note that the example uses precisely the same equations as before. All that has changed is the manner in which you view the data. Because of the change in the data, linear regression is no longer the option to choose. Instead, you use logistic regression to fit the data. Take into account that this example really hasn't done any sort of analysis to optimize the results. The logistic regression would fit the data even better if you did.
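In practice, you let a package perform that optimization. As a hedged illustration (the original example does the math by hand; the use of scikit-learn here is an assumption), the following sketch fits a logistic regression to the same pass/fail data and reports the fitted probabilities:

# A sketch of letting a package optimize the logistic fit, assuming
# scikit-learn is installed; x1, y1, and y2 hold the same values as above.
import numpy as np
from sklearn.linear_model import LogisticRegression

x1 = np.arange(0, 9).reshape(-1, 1)      # exam scores as a column vector
y1 = [0.25, 0.33, 0.41, 0.53, 0.59, 0.70, 0.78, 0.86, 0.98]
y2 = [0 if x < 0.70 else 1 for x in y1]  # pass (1) or fail (0)

model = LogisticRegression()
model.fit(x1, y2)

# Probability of passing for each score; the fitted curve replaces the
# hand-picked intercept and slope used earlier.
print(model.predict_proba(x1)[:, 1].round(3))
print(model.predict(x1))                 # predicted pass/fail labels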
Cheat Sheet / Updated 04-12-2024
A wide range of tools is available to help businesses both big and small take advantage of the data science revolution. Among the most essential of these tools are Microsoft Power BI, Tableau, SQL, and the R and Python programming languages.
Cheat Sheet / Updated 10-03-2023
Python is an incredible programming language that you can use to perform data science tasks with a minimum of effort. The huge number of available libraries means that the low-level code you normally need to write is likely already available from some other source. All you need to focus on is getting the job done. With that in mind, this Cheat Sheet helps you access the most commonly needed reminders for making your programming experience fast and easy.
Article / Updated 09-13-2023
Many organizations are using Python these days to perform major tasks. You don't necessarily hear about them because organizations are usually reserved about giving out their trade secrets. However, Python is still there making a big difference in the way organizations work and toward keeping the bottom line from bottoming out. Following are some major ways in which Python is used commercially that will make it easier to argue for using Python in your own organization. (Or you can read about some Python success stories.)

Corel: PaintShop Pro is a product that many people have used over the years to grab screenshots, modify their pictures, draw new images, and perform a lot of other graphics-oriented tasks. The amazing thing about this product is that it relies heavily on Python scripting. In other words, to automate tasks in PaintShop Pro, you need to know Python.

D-Link: Upgrading firmware over a network connection can be problematic, and D-Link was encountering a situation in which each upgrade was tying up a machine — a poor use of resources. In addition, some upgrades required additional work because of problems with the target device. Using Python to create a multithreaded application to drive updates to the devices allows one machine to service multiple devices, and a new methodology allowed by Python reduces the number of reboots to just one after the new firmware is installed. D-Link chose Python over other languages, such as Java, because Python provides easier-to-use serial communication code.

Eve-Online: Games are a major business because so many people enjoy playing them. Eve-Online is a Massively Multiplayer Online Role Playing Game (MMORPG) that relies heavily on Python for both the client and server ends of the game. It actually relies on a Python variant named Stackless Python, which is important because you encounter these variants all the time when working with Python. Think of them as Python on steroids. These variants have all the advantages of Python, plus a few extra perks. The thing to take away from this particular company is that running an MMORPG takes major horsepower, and the company wouldn't have chosen Python unless it were actually up to the task.

ForecastWatch.com: If you have ever wondered whether someone reviews the performance of your weatherman, look no further than ForecastWatch.com. This company compares the forecasts produced by thousands of weather forecasters each day against actual climatological data to determine their accuracy. The resulting reports are used to help improve weather forecasts. In this case, the software used to make the comparisons is written in pure Python because it comes with standard libraries useful in collecting, parsing, and storing data from online sources. In addition, Python's enhanced multithreading capabilities make it possible to collect the forecasts from around 5,000 online sources each day. Most important of all, the code is much smaller than would have been needed in other languages, such as Java or PHP.

Frequentis: The next time you fly somewhere, you might be relying on Python to get you to the ground safely again. It turns out that Frequentis is the originator of TAPTools, a software product that is used for air traffic control in many airports. This particular tool provides updates on the weather and runway conditions to air traffic controllers.

Honeywell: Documenting large systems is expensive and error prone. Honeywell uses Python to perform automated testing of applications, but it also uses Python to control a cooperative environment between applications used to generate documentation for those applications. The result is that Python helps generate the reports that form the documentation for the setup.

Industrial Light & Magic: In this case, you find Python used in the production process for scripting complex, computer graphic-intensive films. Originally, Industrial Light & Magic relied on Unix shell scripting, but it found that this solution just couldn't do the job. Python was compared to other languages, such as Tcl and Perl, and chosen because it's an easier-to-learn language that the organization could implement incrementally. In addition, Python can be embedded within a larger software system as a scripting language, even if the system is written in a language such as C/C++. It turns out that Python can successfully interact with these other languages in situations in which some languages can't.

Philips: Automation is essential in the semiconductor industry, so imagine trying to coordinate the effort of thousands of robots. After considering a number of solutions, Philips decided to go with Python for the sequencing language (the language that tells what steps each robot should take). The low-level code is written in C++, which is another reason to use Python, because Python works well with C++.
Article / Updated 07-07-2023
You don't need to understand absolutely every detail about how permanent storage works with Python in order to use it. For example, just how the drive spins (assuming that it spins at all) is unimportant. However, most platforms adhere to a basic set of principles when it comes to permanent storage. These principles have developed over a period of time, starting with mainframe systems in the earliest days of computing. Data is generally stored in files (with pure data representing application state information), but you could also find it stored as objects (a method of storing serialized class instances).

You probably know about files already because almost every useful application out there relies on them. For example, when you open a document in your word processor, you're actually opening a data file containing the words that you or someone else has typed. Files typically have an extension associated with them that defines the file type. The extension is generally standardized for any given application and is separated from the filename by a period, such as MyData.txt. In this case, .txt is the file extension, and you probably have an application on your machine for opening such files. In fact, you can likely choose from a number of applications to perform the task because the .txt file extension is relatively common. Internally, files structure the data in some specific manner to make it easy to write and read data to and from the file. Any application you write must know about the file structure in order to interact with the data the file contains. File structures can become quite complex.

Files would be nearly impossible to find if you placed them all in the same location on the hard drive. Consequently, files are organized into directories. Many newer computer systems also use the term folder for this organizational feature of permanent storage. No matter what you call it, permanent storage relies on directories to help organize the data and make individual files significantly easier to find. To find a particular file so that you can open it and interact with the data it contains, you must know which directory holds the file.

Directories are arranged in hierarchies that begin at the uppermost level of the hard drive. For example, when working with the downloadable source code for this book, you find the code for the entire book in the BPPD directory within the user folder on your system. On a Windows system, that directory hierarchy is C:\Users\John\BPPD. However, Mac and Linux systems have a different directory hierarchy to reach the same BPPD directory, and the directory hierarchy on your system will be different as well. Notice that you use a backslash (\) to separate the directory levels. Some platforms use the forward slash (/); others use the backslash. The book uses backslashes when appropriate and assumes that you'll make any required changes for your platform.

A final consideration for Python developers (at least for this book) is that the hierarchy of directories is called a path. You see the term path in a few places in this book because Python must be able to find any resources you want to use based on the path you provide. For example, C:\Users\John\BPPD is the complete path to the source code on a Windows system. A path that traces the entire route that Python must search is called an absolute path. An incomplete path that traces the route to a resource using the current directory as a starting point is called a relative path.
To find a location using a relative path, you commonly use the current directory as the starting point. For example, BPPD\__pycache__ would be the relative path to the Python cache. Note that it has no drive letter or beginning backslash. However, sometimes you must add to the starting point in specific ways to define a relative path. Most platforms define these special relative path character sets:

\: The root directory of the current drive. The drive is relative, but the path begins at the root, the uppermost part, of that drive.

.\: The current directory. You use this shorthand for the current directory when the current directory name isn't known. For example, you could also define the location of the Python cache as .\__pycache__.

..\: The parent directory. You use this shorthand when the parent directory name isn't known.

..\..\: The parent of the parent directory. You can proceed up the hierarchy of directories as far as necessary to locate a particular starting point before you drill back down the hierarchy to a new location.
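Here's a minimal sketch of how these ideas look from Python itself, using the standard library's pathlib module. The BPPD directory comes from the book's downloadable source code, and the exact output depends on your platform and current directory:

# A minimal sketch of absolute versus relative paths using the standard library.
from pathlib import Path

absolute = Path(r"C:\Users\John\BPPD")   # complete route, starting at the drive root
relative = Path("BPPD") / "__pycache__"  # route starting at the current directory

print(absolute.is_absolute())            # True on Windows, where this path style applies
print(relative.is_absolute())            # False

# Resolving a relative path prepends the current working directory,
# producing the absolute path Python uses to locate the resource.
print(Path.cwd())
print(relative.resolve())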
Article / Updated 05-26-2023
The first concept that's important to understand is that artificial intelligence (AI) doesn't really have anything to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that's what it is: a simulation. When asking "what is artificial intelligence?" notice an interplay between goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI technology relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:

Acting like a human: When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when differentiation between the computer and a human isn't possible. This category also reflects what the media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning (all four of which must be present to pass the test). The original Turing test didn't include any physical contact. The newer Total Turing Test does include physical contact in the form of perceptual ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed. Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright Brothers didn't succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to aerodynamics, which eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.

Thinking like a human: When a computer thinks as a human does, it performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on three techniques:

Introspection: Detecting and documenting the techniques used to achieve goals by monitoring one's own thought processes.

Psychological testing: Observing a person's behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other things).

Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.

Thinking rationally: Studying how humans think using some standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation.
A computer that thinks rationally relies on the recorded behaviors to create a guide for interacting with an environment based on the data at hand. The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, solving a problem in principle is often different from solving it in practice, but you still need a starting point.

Acting rationally: Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data. As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Hintze's AI classifications

The categories used to define AI offer a way to consider various uses for or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and not distinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well). The problem with strong AI is that it doesn't perform any task well, while weak AI is too specific to perform tasks independently. Even so, just two type classifications won't do the job even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:

Reactive machines: The machines you see beating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience upon which to base a decision. Instead, it relies on pure computational power and smart algorithms to recreate every decision every time. This is an example of a weak AI used for a specific purpose.

Limited memory: A self-driving car or autonomous robot can't afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and to provide more resources for making new decisions that haven't yet been made. This is an example of the current level of strong AI.

Theory of mind: A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this level of AI must be fully developed. A self-driving car would not only need to know that it must go from one point to another, but also intuit the potentially conflicting goals of drivers around it and react accordingly.

Self-awareness: This is the sort of AI that you see in movies. However, it requires technologies that aren't even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, instead of merely intuiting the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
Problems defining AI

Artificial intelligence has had several false starts and stops over the years, partly because people don't really understand what AI is all about, or even what it should accomplish. A major part of the problem is that movies, television shows, and books have all conspired to give false hopes about what AI could accomplish. In addition, the human tendency to anthropomorphize (give human characteristics to) technology makes it seem as if AI must do more than it can hope to accomplish. Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently.

Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it's just a collection of characters. Defining the idiom (a term whose meaning isn't clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways. The term artificial intelligence doesn't really tell you anything meaningful, which is why there are so many discussions and disagreements about it. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous.

Discerning intelligence

People define intelligence in many different ways. However, you can say that intelligence involves certain mental activities composed of the following:

Learning: Having the ability to obtain and process new information
Reasoning: Being able to manipulate information in various ways
Understanding: Considering the result of information manipulation
Grasping truths: Determining the validity of the manipulated information
Seeing relationships: Divining how validated data interacts with other data
Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid

How does AI work?

The list above could easily get quite long, but even this list is relatively prone to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation:

1. Set a goal based on needs or wants.
2. Assess the value of any currently known information in support of the goal.
3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal, rather than information that you know will support the goal.
4. Manipulate the data such that it achieves a form consistent with existing information.
5. Define the relationships and truth values between existing and new information.
6. Determine whether the goal is achieved.
7. Modify the goal in light of the new data and its effect on the probability of success.
8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).

Even though you can create algorithms and provide access to data in support of this process within a computer, a computer's capability to achieve intelligence is severely limited.
For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can't easily separate truth from mistruth. In fact, no computer can fully implement any of the mental activities described in the list that describes intelligence.

As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don't use just one type of intelligence, but rather rely on multiple intelligences to perform tasks. Howard Gardner of Harvard has defined a number of these types of intelligence, and knowing them helps you to relate them to the kinds of tasks that a computer can simulate as intelligence (the following modified version of these intelligences adds a description of each).

The Kinds of Human Intelligence and How AIs Simulate Them

Visual-spatial
Simulation potential: Moderate
Human tools: Models, graphics, charts, photographs, drawings, 3-D modeling, video, television, and multimedia
Description: Physical-environment intelligence used by people like sailors and architects (among many others). To move at all, humans need to understand their physical environment — that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but the capability is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).

Bodily-kinesthetic
Simulation potential: Moderate to High
Human tools: Specialized equipment and real objects
Description: Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It's essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

Creative
Simulation potential: None
Human tools: Artistic output, new patterns of thought, inventions, new kinds of musical composition
Description: Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, and writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is really just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

Interpersonal
Simulation potential: Low to Moderate
Human tools: Telephone, audio conferencing, video conferencing, writing, computer conferencing, email
Description: Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, and manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.
Intrapersonal
Simulation potential: None
Human tools: Books, creative materials, diaries, privacy, and time
Description: Looking inward to understand one's own interests and then setting goals based on those interests is currently a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn't aware of anything that it does, nor does it understand anything that it does.

Linguistic (often divided into oral, aural, and written)
Simulation potential: Low for oral and aural; None for written
Human tools: Games, multimedia, books, voice recorders, and spoken words
Description: Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. In many cases, computers can barely parse input into keywords, can't actually understand the request at all, and output responses that may not be understandable at all. In humans, oral, aural, and written linguistic intelligence come from different areas of the brain, which means that even with humans, someone who has high written linguistic intelligence may not have similarly high oral linguistic intelligence. Computers don't currently separate aural and oral linguistic ability — one is simply input and the other output. A computer can't simulate written linguistic capability because this ability requires creativity.

Logical-mathematical
Simulation potential: High (potentially higher than humans)
Human tools: Logic games, investigations, mysteries, and brain teasers
Description: Calculating a result, performing comparisons, exploring patterns, and considering relationships are all areas in which computers currently excel. When you see a computer beat a human on a game show, this is the only form of intelligence that you're actually seeing, out of seven kinds of intelligence. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn't a good idea.

The reality vs. hype

There is a lot of hype about AI out there. If you watch movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application like those shown in the movies is the creative output of an overactive imagination. However, the importance of artificial intelligence to the future of technology cannot be overstated. It is already helping people in everyday technologies, and has great potential in everything from customer service to health care, to outer space exploration.

The five tribes and the master algorithm

You may have heard of something called the singularity, which is responsible for the potential claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the seven kinds of intelligence discussed earlier. Here are the five tribes of learning:

Symbolists: The origin of this tribe is in logic and philosophy. This group relies on inverse deduction to solve problems.

Connectionists: This tribe's origin is in neuroscience, and the group relies on backpropagation to solve problems.
Evolutionaries: The evolutionaries tribe originates in evolutionary biology, relying on genetic programming to solve problems.

Bayesians: This tribe's origin is in statistics, and the group relies on probabilistic inference to solve problems.

Analogizers: The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.

The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are currently working toward that goal. To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how much people don't know about how they think or why they think in a certain manner. Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.

Considering sources of hype

There are many sources of AI hype. Quite a bit of the hype comes from the media and is presented by people who have no idea of what AI is all about, except perhaps from a sci-fi novel they read once. So, it's not just movies or television that cause problems with AI hype; it's all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something that it can't possibly do because the reporter doesn't understand the technology. Oddly enough, many news services now use AI to at least start articles for reporters.

Some products should be tested a lot more before being placed on the market. The "2020 in Review: 10 AI Failures" article at SyncedReview.com discusses ten products hyped by their developer but which fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device using the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.

Another cause of problems comes from asking the wrong person about AI. Not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn't a good idea. Yet, many stories appear with people like these as the information source. To discover the future direction of AI, it's best to ask a computer scientist or data scientist with a strong background in AI research.

Understanding user overestimation

Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph.
However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure. Still, you need not be speeding down a highway at 90 mph to encounter user overestimation. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner. The article "How to Solve the Most Annoying Robot Vacuum Cleaner Problems" at RobotsInMyHome.com discusses troubleshooting techniques for various robotic vacuums for a good reason — the robots still need human intervention. The point is that most robots need human intervention at some point because they simply lack the knowledge to go it alone.

What is AI technology?

Artificial intelligence is a sub-discipline of computer science that works by combining large amounts of data with fast, iterative algorithms, with the goal of enabling computers to solve complex problems and complete complex tasks. To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base.

For artificial intelligence, the computers could be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for some applications. Of course, if you're Amazon and you want to provide advice on a particular person's next buying decision, the smartphone won't do — you need a really big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.

The application can also vary in size, complexity, and even location. For example, if you're a business and want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you're a customer and want to find products on Amazon to go with your current purchase items, the application doesn't even reside on your computer; you access it through a web-based application located on Amazon's servers.

The knowledge base varies in location and size as well. The more complex the data, the more you can obtain from it, but the more you need to manipulate it as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important. A network connection affords you access to a large knowledge base online but costs you in time because of the latency of network connections. However, localized databases, while fast, tend to lack details in many cases.
Article / Updated 05-25-2023
You can hardly avoid hearing about artificial intelligence (AI) today. You see AI in the movies, in the news, in books, and online. It's been in the news a lot lately with all of the frenzy surrounding ChatGPT (see more about that below). AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in so many ways.

Some people have come to trust AIs so much that they fall asleep while their self-driving cars take them to their destination — illegally, of course. Many pundits are burying you in information (and disinformation) about AI, too. Some see AI as cute and fuzzy; others see it as a potential mass murderer of the human race. The problem with being so loaded down with information in so many ways is that you struggle to separate what's real from what is simply the product of an overactive imagination. Just how far can you trust your AI, anyway? Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. This article helps you understand some of the history and evolution of artificial intelligence.

The ChatGPT controversy

The latest media storm around AI came in early January 2023, when OpenAI's free preview of its ChatGPT chatbot (released in November 2022) reached 100 million users. OpenAI then released a subscription service called ChatGPT Plus, and an upgraded version of its product, ChatGPT-4, in March 2023.

A chatbot is a computer program designed to simulate human conversation. ChatGPT (GPT stands for generative pretrained transformer) is a particularly powerful chatbot able to produce natural, human-like writing through its use of 570GB of data from the Internet. Representing one of the latest achievements in the development of artificial intelligence, ChatGPT can answer questions and write articles, poems, emails, and research papers; it can also write programming code, translate languages, and perform other tasks related to language. ChatGPT's possible real-world uses include:

Customer service
Ecommerce
Research
Education and training
Computer code writing and debugging
Scheduling and booking
Entertainment
Health care information and assistance

However, while many people are excited about the possibilities for ChatGPT and other similar technologies being developed, there are plenty of concerns about how it can be used in bad ways, too — for example, to cheat in school by having it write essays and research papers. It's difficult to discern whether a piece of writing has been generated by ChatGPT or a human. In addition, the technology is far from perfect; the text it produces is often inaccurate and biased, and therefore can spread false and even harmful information.

AI can, and does, serve us well in many ways, but it's important to understand its limitations. AI will never be able to engage in certain essential activities and tasks, and won't be able to do others until far into the future. For example, while it can produce a piece of music with the data you've entered and in the style of a particular musician, say Beethoven, it cannot actually create anything. AI doesn't have an imagination or original ideas.

The history of AI, starting with Dartmouth

Looking at artificial intelligence history begins with the earliest computers, which were just that: computing devices. They mimicked the human ability to manipulate symbols in order to perform basic math tasks, such as addition.
Logical reasoning later added the capability to perform mathematical reasoning through comparisons (such as determining whether one value is greater than another value). However, for artificial intelligence to evolve, humans still needed to define the algorithm used to perform the computation, provide the required data in the right format, and then interpret the result.

During the summer of 1956, various scientists attended a workshop held on the Dartmouth College campus in Hanover, New Hampshire, to do something more. They predicted that machines that could reason as effectively as humans would require, at most, a generation to come about. They were wrong. Only now have we realized machines that can perform mathematical and logical reasoning as effectively as a human (which means that computers must master at least six more kinds of intelligence before reaching anything even close to human intelligence).

The stated problem with the Dartmouth College workshop and other endeavors of the time relates to hardware — the processing capability to perform calculations quickly enough to create a simulation. However, that's not really the whole problem. Yes, hardware does figure in to the picture, but you can't simulate processes that you don't understand. Even so, the reason that AI is somewhat effective today is that the hardware has finally become powerful enough to support the required number of calculations. The biggest problem with these early attempts (and still a considerable problem today) is that we don't understand how humans reason well enough to create a simulation of any sort — assuming that a direct simulation is even possible.

Consider the issues surrounding the accomplishment of manned flight by the Wright brothers. They succeeded not by simulating birds, but rather by understanding the processes that birds use, thereby creating the field of aerodynamics. Consequently, when someone says that the next big AI innovation is right around the corner and yet no concrete dissertation exists of the processes involved, the innovation is anything but right around the corner.

Continuing with expert systems

Expert systems first appeared in the 1970s and again in the 1980s as an attempt to reduce the computational requirements posed by AI using the knowledge of experts. A number of expert system representations appeared, including:

Rule based: These use "if … then" statements to base decisions on rules of thumb.
Frame based: These use databases organized into related hierarchies of generic information called frames.
Logic based: These rely on set theory to establish relationships.

The advent of expert systems is important in the background of artificial intelligence because they represent the first truly useful and successful implementations of AI. You still see expert systems in use today, although they aren't called that any longer. For example, the spelling and grammar checkers in your application are kinds of expert systems. The grammar checker, especially, is strongly rule based. It pays to look around to see other places where expert systems may still see practical use in everyday applications.

A problem with expert systems is that they can be hard to create and maintain. Early users had to learn specialized programming languages, such as List Processing (Lisp) or Prolog. Some vendors saw an opportunity to put expert systems in the hands of less experienced or novice programmers. However, the products they offered generally provided extremely limited functionality and worked with only small knowledge bases.
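To make the rule-based idea concrete, here's a toy sketch of the "if … then" approach in Python. The rules mimic the grammar-checker example mentioned above, but they're invented for illustration and don't come from any actual expert system shell:

# A toy rule-based system: each rule pairs a condition ("if") with advice
# ("then"), echoing the rule-of-thumb structure described above.
rules = [
    (lambda s: s and not s[0].isupper(),
     "Start the sentence with a capital letter."),
    (lambda s: s.rstrip() and s.rstrip()[-1] not in ".!?",
     "End the sentence with punctuation."),
    (lambda s: "  " in s,
     "Remove the double space."),
]

def check(sentence):
    # Fire every rule whose condition matches and collect its advice.
    return [advice for condition, advice in rules if condition(sentence)]

print(check("the quick brown fox jumps over the lazy dog"))
# ['Start the sentence with a capital letter.', 'End the sentence with punctuation.']

Real expert systems hold thousands of such rules, plus an inference engine that decides which rules to apply and in what order, which is why they became hard to create and maintain.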
In the 1990s, the phrase expert system began to disappear. The idea that expert systems were a failure did appear, but the reality is that expert systems were simply so successful that they became ingrained in the applications that they were designed to support. Using the example of a word processor, at one time you needed to buy a separate grammar checking application, such as RightWriter. However, word processors now have grammar checkers built in because they proved so useful (if not always accurate).

Overcoming the AI winters

The term AI winter refers to a period of reduced funding in the development of AI. In general, AI has followed a path on which proponents overstate what is possible, inducing people with no technology knowledge at all, but lots of money, to make investments. A period of criticism then follows when AI fails to meet expectations, and finally, the reduction in funding occurs. A number of these cycles have occurred over the years — all of them devastating to true progress.

AI is currently in a new hype phase because of machine learning, a technology that helps computers learn from data. Having a computer learn from data means not depending on a human programmer to set operations (tasks), but rather deriving them directly from examples that show how the computer should behave. Machine learning is like educating a baby by showing it how to behave through example. This technology has pitfalls because the computer can learn how to do things incorrectly through careless teaching.

At this time, the most successful solution is deep learning, which is a technology that strives to imitate the human brain. Deep learning is possible because of the availability of powerful computers, smarter algorithms, large datasets produced by the digitalization of our society, and huge investments from businesses such as Google, Facebook, Amazon, and others that take advantage of this AI renaissance for their own businesses. People are saying that the AI winter is over because of deep learning, and that's true for now. However, when you look around at the ways in which people are viewing AI, you can easily figure out that another criticism phase will eventually occur unless proponents tone the rhetoric down.
A brief artificial intelligence timeline
1942: First electronic digital computer built by John Vincent Atanasoff and Clifford Berry at Iowa State University
1950: Alan Turing publishes the paper "Computing Machinery and Intelligence"; his proposal later becomes known as the Turing Test, which measures machine intelligence
1958: Perceptron computer, built by Cornell University Professor Frank Rosenblatt, regarded as the first artificial neural network
1966: First "chatterbot" (later shortened to chatbot), created by Joseph Weizenbaum, a German-American computer scientist, uses natural language processing to converse with humans
1971: First commercial microprocessor released by Intel
1988: Jabberwacky, a chatbot created by British computer scientist Rollo Carpenter, provides interesting and entertaining conversation to humans
1990s: Early days of the Internet
1992: TD-Gammon, developed by Gerald Tesauro of IBM, an artificial neural network trained by temporal-difference learning to play high-level backgammon
1997: IBM's Deep Blue chess computer defeats Russian chess grandmaster Garry Kasparov; Dragon Systems releases speech recognition software for Windows
2012: AlexNet, a convolutional neural network architecture, primarily designed by Alex Krizhevsky, a Ukrainian-born Canadian computer scientist
2020: OpenAI beta tests GPT-3, which uses deep learning to create code, poetry, and other language and writing tasks; it's the first such model that can create content almost indistinguishable from human-created content
2022: In November, OpenAI releases a free preview of its ChatGPT chatbot to the public
2023: In March, OpenAI releases the upgraded GPT-4 model
AI in our everyday lives
You're using AI in some way today; in fact, you probably rely on AI in many different ways. You just don't notice it because it's so mundane. A smart thermostat for your home may not sound very exciting, but it's an incredibly practical use for a technology that has some people running for the hills in terror. As the development of AI has continued, there are now some really cool uses for it. For example, you may not know it, but a medical monitoring device exists that can actually predict when you might have a heart problem. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used today in all sorts of space applications, and the evolution of artificial intelligence figures prominently in all the space adventures humans will have tomorrow. The potential uses for AI number in the millions, all safely out of sight even when they're quite dramatic in nature. Here are some of the ways in which you might see AI used (a short code sketch after this list illustrates the fraud-detection case):
Fraud detection: You get a call from your credit card company asking whether you made a particular purchase. The credit card company isn't being nosy; it's simply alerting you to the fact that someone else could be making a purchase using your card. The AI embedded within the credit card company's code detected an unfamiliar spending pattern and alerted someone to it.
Resource scheduling: Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine where to put a patient based on the patient's needs, the availability of skilled experts, and the amount of time the doctor expects the patient to be in the hospital.
Complex analysis: Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms could indicate more than one problem.
A doctor or other expert might need help making a diagnosis in a timely manner to save a patient's life.
Automation: Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation today is that an unexpected event, such as an object in the wrong place, can actually cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.
Customer service: The customer service line you call today may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. With good voice inflection (provided by AI as well), you may not even be able to tell that you're talking with a computer.
Safety systems: Many of the safety systems found in vehicles and other machines today rely on AI to take over in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs that a vehicle can provide, such as the direction of a skid. Computerized ABS is actually relatively old technology, dating back about 40 years.
Machine efficiency: AI can help control a machine in such a manner as to obtain maximum efficiency. The AI controls the use of resources so that the system doesn't overshoot speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
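As a concrete, heavily simplified illustration of the fraud-detection case, the following Python sketch flags a purchase whose amount falls far outside a card's usual spending pattern. The purchase history, cutoff, and is_unusual function are made up for illustration; real systems weigh many more signals than the amount alone.

# Toy spending-pattern check: flag an amount that sits many standard
# deviations away from the cardholder's typical purchases.
from statistics import mean, stdev

def is_unusual(history, amount, z_cutoff=3.0):
    """Return True if the amount is more than z_cutoff standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

past_purchases = [23.50, 41.00, 18.75, 36.20, 29.90, 44.10]
print(is_unusual(past_purchases, 32.00))   # False: fits the usual pattern
print(is_unusual(past_purchases, 950.00))  # True: triggers the alert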
Article / Updated 05-09-2023
Artificial intelligence (AI) is great at automation, which can make it ideal for tasks in health care. It never deviates from the procedure, never gets tired, and never makes mistakes as long as the initial procedure is correct. Unlike humans, AI never needs a vacation or a break or even an eight-hour day (not that many in the medical profession have that, either). Consequently, the same AI that interacts with a patient at breakfast will do so at lunch and dinner as well. So, at the outset, AI has some significant advantages if viewed solely on the basis of consistency, accuracy, and longevity.
Working with medical records
The major way in which an AI helps in medicine is in managing medical records. In the past, everyone used paper records to store patient data. Each patient might also have a blackboard that medical personnel used to record information daily during a hospital stay. Various charts contained patient data, and the doctor might also have notes. Having all these sources of information in so many different places made it hard to keep track of the patient in any significant way. Using an AI, along with a computer database, helps make information accessible, consistent, and reliable. Products such as Google DeepMind Health enable personnel to mine patient information to see patterns in data that aren't obvious. Doctors don't necessarily interact with records in the same way that everyone else does. The use of products such as IBM's WatsonPaths helps doctors interact with patient data of all sorts in new ways to make better diagnostic decisions about patient health. You can see a video on how this product works. Medicine is about a team approach, with many people of varying specialties working together. However, anyone who watches the process for a while soon realizes that these people don't communicate among themselves sufficiently because they're all quite busy treating patients. Products such as CloudMedX take all the input from all the parties involved and perform risk analysis on it. The result is that the software can help locate potentially problematic areas that could reduce the likelihood of a good patient outcome. In other words, this product does some of the talking that the various stakeholders would likely do if they weren't submerged in patient care.
Predicting the future
Some truly amazing predictive software based on medical records includes CareSkore, which uses algorithms to determine the likelihood of a patient's requiring readmission to the hospital after a stay. Because the software performs this task, hospital staff can review the reasons for potential readmission and address them before the patient leaves the hospital, making readmission less likely. (A simplified sketch of how such a risk score might be computed appears after this section.) Along with this strategy, Zephyr Health helps doctors evaluate various therapies and choose those most likely to result in a positive outcome, again reducing the risk that a patient will require readmission to the hospital. This video tells you more about Zephyr Health. In some respects, your genetics form a map of what will happen to you in the future. Consequently, knowing about your genetics can increase your understanding of your strengths and weaknesses, helping you to live a better life. Deep Genomics is discovering how mutations in your genetics affect you as a person. Mutations need not always produce a negative result; some mutations actually make people better, so knowing about mutations can be a positive experience, too. Check out this video for more details.
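To show what a readmission-risk estimate might look like in miniature, here is a hypothetical Python sketch. The features, weights, and logistic formula are invented for illustration and are not CareSkore's (or any other vendor's) actual model.

import math

# Toy readmission-risk score: a logistic function turns a weighted sum of
# record-derived features into a probability between 0 and 1.
WEIGHTS = {"age_over_65": 0.8, "prior_admissions": 0.6, "lives_alone": 0.4}
BIAS = -2.0  # baseline log-odds for a patient with no risk factors

def readmission_risk(patient):
    score = BIAS + sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # squash the score into a probability

patient = {"age_over_65": 1, "prior_admissions": 2, "lives_alone": 1}
print(f"estimated readmission risk: {readmission_risk(patient):.0%}")

In a real product, the weights come from training on historical records rather than being typed in by hand; the point here is only that the output is a probability that staff can act on before discharge.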
Making procedures safer
Doctors need lots of data to make good decisions. However, with data spread out all over the place, doctors who lack the ability to analyze that disparate data quickly often make imperfect decisions. To make procedures safer, a doctor needs not only access to the data but also some means of organizing and analyzing it in a manner that reflects the doctor's specialty. One such product is Oncora Medical, which collects and organizes medical records for radiation oncologists. As a result, these doctors can deliver the right amount of radiation to just the right locations to obtain a better result with a lower potential for unanticipated side effects. Doctors also have trouble obtaining necessary information because the machines they use tend to be expensive and huge. An innovator named Jonathan Rothberg has decided to change all that by using the Butterfly Network. Imagine an iPhone-sized device that can perform both an MRI and an ultrasound. The picture on the website is nothing short of amazing.
Creating better medications
Everyone complains about the price of medications today. Yes, medications can do amazing things for people, but they cost so much that some people end up mortgaging homes to obtain them. Part of the problem is that testing takes a lot of time. Performing a tissue analysis to observe the effects of a new drug can take up to a year. Fortunately, products such as 3Scan can greatly reduce the time required to obtain the same tissue analysis to as little as one day. Of course, it would be better still if drug companies had a good idea of which drugs are likely to work and which aren't before investing any money in research. Atomwise uses a huge database of molecular structures to perform analyses of which molecules will answer a particular need (a simplified sketch of this kind of screening appears after this section). In 2015, researchers used Atomwise to create medications that would make Ebola less likely to infect others. The analysis that would have taken human researchers months or possibly years to perform took Atomwise just one day to complete. Imagine this scenario in the midst of a potentially global epidemic. If Atomwise can perform the analysis required to render the virus or bacteria noncontagious in one day, the potential epidemic could be curtailed before becoming widespread. Drug companies also produce a huge number of drugs. The reason for this impressive productivity, besides profitability, is that every person is just a little different. A drug that performs well and produces no side effects for one person might not perform well at all and could even harm a different person. Turbine enables drug companies to perform drug simulations so that they can locate the drugs most likely to work with a particular person's body. Turbine's current emphasis is on cancer treatments, but it's easy to see how this same approach could work in many other areas. Medications can take many forms. Some people think they come only in pill or shot form, yet your body produces a wide range of medications in the form of microbiomes. Your body actually contains ten times as many microbes as it does human cells, and many of these microbes are essential for life; you'd quickly die without them. Whole Biome is using a variety of methods to make these microbiomes work better for you so that you don't necessarily need a pill or a shot to cure something. Check out this video for additional information. Some companies have yet to realize their potential, but they're likely to do so eventually.
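As a rough illustration of what screening a molecular database involves, the following Python sketch ranks candidate molecules by how closely a made-up structural "fingerprint" matches a target profile. The fingerprints, molecule names, and simple Tanimoto scoring are illustrative only; they are not how Atomwise or any other commercial system actually works.

# Toy virtual screening: score each candidate by the overlap between its
# set of structural features and the features the target requires.
def tanimoto(a, b):
    """Tanimoto similarity between two sets of structural features."""
    return len(a & b) / len(a | b)

target = {"ring", "hydroxyl", "amine", "hydrophobic_tail"}
candidates = {
    "molecule_A": {"ring", "hydroxyl", "amine"},
    "molecule_B": {"ring", "carboxyl"},
    "molecule_C": {"ring", "hydroxyl", "amine", "hydrophobic_tail", "halogen"},
}

ranked = sorted(candidates.items(), key=lambda kv: tanimoto(kv[1], target), reverse=True)
for name, features in ranked:
    print(f"{name}: similarity {tanimoto(features, target):.2f}")

Scoring millions of stored structures this way, and sending only the best matches to the lab, is what turns months of trial-and-error testing into a day of computation.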
One such company is Recursion Pharmaceuticals, which employs automation to explore ways to use known drugs, bioactive drugs, and pharmaceuticals that didn't previously make the grade to solve new problems. The company has had some success in helping to solve rare genetic diseases, and it has a goal of curing 100 diseases in the next ten years (obviously, an extremely ambitious goal).
Article / Updated 04-14-2023
Many of the current techniques for extending the healthy range of human life (the segment of life that contains no significant sickness), rather than just increasing the number of years of life, depend on making humans more capable of improving their own health in various ways. You can find any number of articles that tell you 30, 40, or even 50 ways to extend this healthy range, but often it comes down to a combination of eating right, exercising enough and in the right way, and sleeping well. Of course, figuring out just which food, exercise, and sleep technique works best for you is nearly impossible. The following sections discuss ways in which an AI-enabled device might make the difference between having 60 good years and 80 or more good years. (In fact, it's no longer hard to find articles that discuss human life spans of 1,000 or more years in the future because of technological changes.)
Using games for therapy
A gaming console can make a powerful and fun physical therapy tool. Both the Nintendo Wii and the Xbox 360 see use in many different physical therapy venues. The goal of these games is to get people moving in certain ways. Just as it does for any other player, the game automatically rewards proper movements, so the patient receives therapy in a fun way. Because the therapy becomes fun, the patient is more likely to actually do it and get better faster. Of course, movement alone, even when working with the proper game, doesn't ensure success. In fact, someone could develop a new injury when playing these games. The Jintronix add-on for the Xbox Kinect hardware standardizes the use of this game console for therapy, increasing the probability of a good outcome.
Considering the use of exoskeletons
One of the most complex undertakings for an AI is to provide support for an entire human body. That's what happens when someone wears an exoskeleton (essentially a wearable robot). An AI senses movements (or the need to move) and provides a powered response to that need (a simple control-loop sketch appears at the end of this section). The military has excelled in the use of exoskeletons. Imagine being able to run faster and carry significantly heavier loads as a result of wearing an exoskeleton. This video gives you just a glimpse of what's possible. Of course, the military continues to experiment, which actually feeds into civilian uses. The exoskeleton you eventually see (and you're almost guaranteed to see one at some point) will likely have its origins in the military. Industry has also gotten in on exoskeleton technology. Factory workers currently face a host of illnesses because of repetitive stress injuries. In addition, factory work is incredibly tiring. Wearing an exoskeleton not only reduces fatigue but also reduces errors and makes workers more efficient. People who maintain their energy levels throughout the day can do more with far less chance of being injured, damaging products, or hurting someone else. The exoskeletons in use in industry today reflect their military beginnings. Look for the capabilities and appearance of these devices to change in the future to look more like the exoskeletons shown in movies such as Aliens. The real-world examples of this technology are a little less impressive but will continue to gain in functionality. As interesting as it is to use exoskeletons to make able-bodied people even more capable, what exoskeletons can enable people to do that they can't do now is downright amazing.
For example, scientists at the National Institutes of Health Clinical Center in Bethesda, Maryland, have helped children with cerebral palsy learn how to walk more effectively by using an exoskeleton. Not all exoskeletons used in medical applications provide lifetime use, however. For example, an exoskeleton can help a stroke victim walk normally again. As the person becomes more able, the exoskeleton provides less support until the wearer no longer needs it. Some users have even coupled their exoskeleton to other products, such as Amazon's Alexa. The overall purpose of wearing an exoskeleton isn't to make you into Iron Man. Rather, it's to cut down on repetitive stress injuries and help humans excel at tasks that currently prove too tiring or just beyond the limits of their bodies. From a medical perspective, using an exoskeleton is a win because it keeps people mobile longer, and mobility is essential to good health.
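The control-loop sketch mentioned earlier gives a highly simplified sense of the sense-and-assist cycle: read what the wearer's limb is trying to do, then supply a proportional amount of motor help. The gain, safety cap, and sensor readings below are invented for illustration and bear no relation to any real exoskeleton's control software.

# Highly simplified exoskeleton assist loop (hypothetical values): measure
# the torque the wearer applies at a joint and add a proportional boost,
# capped so the motor never overpowers the wearer.
ASSIST_GAIN = 0.5      # fraction of the wearer's effort to add back
MAX_ASSIST_NM = 20.0   # safety cap on motor torque, in newton-meters

def motor_assist(wearer_torque_nm):
    """Return the motor torque to apply for a sensed wearer torque."""
    assist = ASSIST_GAIN * wearer_torque_nm
    return max(-MAX_ASSIST_NM, min(MAX_ASSIST_NM, assist))

# Simulated sensor readings from one stride (newton-meters at the knee).
for reading in [5.0, 12.0, 30.0, 55.0, 8.0]:
    print(f"wearer: {reading:5.1f} Nm -> motor adds: {motor_assist(reading):5.1f} Nm")

The rehabilitation use described above amounts to lowering ASSIST_GAIN over time so the wearer gradually takes back the work until no assistance is needed at all.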