College Student's Guide to Computers in Education/Chapter 4: Human and Artificial Intelligence





'''Links to the chapters of the book. You are currently reading Chapter 4.'''

Title Page

Preface

Chapter 1: Introduction

Chapter 2: Inventing Your Future

Chapter 3: Expertise and Problem Solving

Chapter 4: Human and Artificial Intelligence

Chapter 5: Computer-Assisted and Distance Learning

Chapter 6: Learning and Learning Theory

Chapter 7: Increasing Your Expertise in ICT

Chapter 8: Brief Introductions to a Number of Key Ideas

Chapter 9: On the Lighter Side

References

Beginning of Chapter 4: Human and Artificial Intelligence

 * “Did you mean to say that one man may acquire a thing easily, another with difficulty; a little learning will lead the one to discover a great deal; whereas the other, after much study and application, no sooner learns than he forgets?” (Plato, 4th century B.C.)


 * “If we understand the human mind, we begin to understand what we can do with educational technology.” (Herbert A. Simon)

Right now, computers are not very smart. However, steady progress is occurring in making them smarter. Here is an amusing and/or thought-provoking pair of statements:


 * The typical car has an engine rated at approximately 1,000 “person-power.” When it comes to physical strength, machines are much stronger than people.
 * The typical modern microcomputer might be rated as approximately 0.01 “human-brainpower.” When it comes to brainpower, people are much smarter than computers. However, computers have by no means reached their upper intellectual limits. Some futurists suggest that microcomputers will exceed 1.0 “human-brainpower” sometime in the next 30 years (Kurzweil, 2001).

Measuring computer capacities in terms of “human-brainpower” is suggestive but misleading. In some areas, such as brute force computation, computers are a billion times as capable as human brains. However, it is still a far-out, wild prediction to suggest we may have artificially intelligent computers and robots equivalent to a 5-year-old human within 15 to 20 years or so.

This chapter explores educational implications of the cognitive capabilities and limitations of humans and machines. You are very good at some things that computers are not good at, and vice versa. This observation leads us to explore the general idea of humans and machines working together to solve cognitively challenging problems.

Definitions of Intelligence
Ray Kurzweil (2001), a well-known and highly respected futurist, says this about technological change:


 * An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense "intuitive linear" view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence. [Italics added for emphasis]

One possible beginning point for exploring this thought-provoking and controversial forecast is to examine various widely used definitions of human and machine intelligence.

Human Intelligence
Notice the quote from Plato given at the beginning of this chapter. Attempts to define and measure intelligence have a long and somewhat acrimonious history. Four definitions of intelligence are given in the following quotations.


 * Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of “intelligence” are attempts to clarify and organize this complex set of phenomena. (Neisser et al., 1995)

- - - - - - - - - - -


 * Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do. (Gottfredson et al., 1994)

- - - - - - - - - - -

Howard Gardner is well known for his work on a theory of Multiple Intelligences (Gardner, 2003). He describes intelligence this way:


 * To my mind, a human intellectual competence must entail a set of skills of problem solving—enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product—and must also entail the potential for finding or creating problems—and thereby laying the groundwork for the acquisition of new knowledge. [Italics added for emphasis]

- - - - - - - - - - -

Jeff Hawkins founded Palm Computing, Handspring, Numenta, and the non-profit Redwood Neuroscience Institute, a scientific research institute focused on understanding how the human neocortex works. Quoting from the book On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines (Hawkins and Blakeslee, 2004):


 * The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions about future events. It is the ability to make predictions about the future that is the crux of intelligence. [Italics added for emphasis]

Hawkins is particularly interested in Artificial Intelligence, and his definition reflects this interest. From Hawkins’ point of view, intelligence is the ability to make predictions about the future and to take actions that produce desired outcomes. The prediction process requires a continual comparison between what is occurring and what we expect to occur. Intelligence depends on having a memory of past events so that one can compare predictions against past events and the outcomes of past events.

- - - - - - - - - - -

As mentioned earlier in this book, Robert Sternberg is a world-class expert in human intelligence. He defines intelligence as “your skill in achieving whatever it is you want to attain in your life within your sociocultural context by capitalizing on your strengths and compensating for, or correcting, your weaknesses” (Sternberg, 2007). His three-part theory of intelligence focuses on practical intelligence (often called street smarts), school smarts, and creativity.

Sir Ken Robinson is author of the report, Out of Our Minds: Learning to be Creative, and a leading expert on innovation and human resources. He defines creativity as “the process of having original ideas that have value.” He argues that our educational system at all levels is designed to stifle some important forms of creativity. To learn more about Ken Robinson’s insights into creativity:


 * View his 19-minute presentation given in 2006, available at http://ted.com/tedtalks/tedtalksplayer.cfm?key=ken_robinson.


 * Read the report Out of Our Minds: Learning to be Creative at http://www.dfes.gov.uk/naccce/index1.shtml.

In brief, human intelligence is a combination of the abilities to:


 * Learn. This includes all kinds of informal and formal learning via any combination of experience, education, and training.
 * Pose problems. This includes recognizing problem situations and transforming them into more clearly defined problems. (Here, I am using a very general definition of the term problem, such as is given in Chapter 3.)
 * Solve problems. This includes solving problems, accomplishing tasks, and fashioning products—ideas discussed in Chapter 3.
 * Be a futurist. This includes accurately forecasting likely outcomes and consequences of one’s possible activities. That is, be good at following the dictum, “look before you leap.”
 * Be creative in doing all of the above.

Turing (Imitation Game) Test for Machine Intelligence
Alan Turing (1912-1954) was a very good mathematician and a pioneer in the field of electronic digital computers. In 1936, he published a math paper that provides theoretical underpinnings for the capabilities and limitations of computers. During World War II, he helped develop computers in England that played a significant role in England’s war efforts. In 1950, Turing published a paper discussing ideas of current and potential computer intelligence, and describing an imitation game that is now known as the Turing Test for AI (Turing, 1950). Quoting from Turing’s 1950 paper:


 * The new form of the problem can be described in terms of a game which we call the 'imitation game'. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:




 * We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"

The Loebner Prize is an annual contest to determine the best current Turing Test software. Although Turing in his 1950 article predicted that computers would pass the test by 2000, that has not proven to be the case. Ray Kurzweil believes that this event will occur in 2029. The Association for the Advancement of Artificial Intelligence (http://www.aaai.org/AITopics/html/welcome.html) is a good starting point if you want to learn more about the Turing Test and other aspects of AI.

g, Gf, and Gc in Humans and Machines
There is substantial evidence gathered through many years of research that humans possess a general intelligence factor. As noted in Wikipedia:


 * Charles Spearman [1863-1945] pioneered the use of factor analysis in the field of psychology and is sometimes credited with the invention of factor analysis. He discovered that school children’s scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a general mental ability, or g, underlies and shapes human cognitive performance. His postulate now enjoys broad support in the field of intelligence research, where it is known as the g theory.

Research on the general intelligence factor has led to a nature-and-nurture theory that divides intelligence into fluid intelligence (the nature component) and crystallized intelligence (the nurture component). McArdle et al. (2002) describe these concepts this way:


 * The theory of fluid and crystallized intelligence … proposes that primary abilities are structured into two principal dimensions, namely, fluid (Gf) and crystallized (Gc) intelligence. The first common factor, Gf, represents a measurable outcome of the influence of biological factors on intellectual development (i.e., heredity, injury to the central nervous system), whereas the second common factor, Gc, is considered the main manifestation of influence from education, experience, and acculturation. Gf-Gc theory disputes the notion of a unitary structure, or general intelligence.

The human brain grows considerably during a person’s childhood, with full maturity being reached in the mid to late 20s for most people. Both Gf and Gc increase during this time. While a person’s level of fluid intelligence tends to peak in the mid to late 20s, growth in crystallized intelligence may continue well into the 50s.

Since the rate of decline in fluid intelligence over the years tends to be relatively slow, a person’s total cognitive capabilities can remain high over a long lifetime. Current research strongly supports the idea of “use it or lose it” for the brain or mind, as well as the rest of one’s body (Goldberg, 2005; McArdle et al., 2002).

One can draw a weak analogy between Gf and Gc for humans and the artificial intelligence of machines. Think of a computer system in terms of hardware and software. The hardware provides the memory, processing speed, and connectivity, somewhat akin to Gf. The software provides content—data and information to be processed—as well as the instructions for processing. This is somewhat akin to Gc. Computers get “smarter” through improvements in hardware and improvements in software.

The Gc of humans comes through life experiences and through formal and informal education and training. Each individual faces the challenge of gaining such knowledge and skill. Contrast this with computer software. A large software development project may involve hundreds of designers, programmers, testers, and other support staff. Once a piece of software is completed, it can be installed in millions of computers, thus giving all these computers the same level of software capability.

Measuring Intelligence of People and Machines
Human intelligence and machine intelligence (artificial intelligence) are not the same thing. Both are hard to adequately measure, and it is quite difficult to compare these two kinds of intelligence.

Measuring Human Intelligence
Over the past century, there has been considerable research on how to measure intelligence. A wide variety of intelligence measures have been developed and tested for validity, reliability, and fairness. A person’s Intelligence Quotient (IQ) was originally defined as the person’s mental age divided by the person’s chronological age, multiplied by 100. IQ tests are usually normed with a mean of 100 and a standard deviation of 15 (most common) or 16.

I hope you did not turn your brain off when you encountered the terms mean and standard deviation in the previous paragraph. If you have not yet achieved a fluent understanding of these terms, look them up on the Web and work to build your understanding of these widely used components of descriptive statistics.

Figure 4-1 is a short table of data from a normal distribution. (See, for example, http://www.iqcomparisonsite.com/IQtable.aspx.) This table indicates that 68.26 percent of the area under a normal curve lies between –1 and +1 standard deviations. From this table you can deduce that 2.28 percent of the area lies to the left of –2 standard deviations and 2.28 percent lies to the right of +2 standard deviations. Thus, for example, on an IQ test with a standard deviation of 15, only about 0.13 percent of people will score 145 or above (3 standard deviations above the mean).
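These normal-curve areas can be checked directly with Python's standard statistics module; here is a quick sketch using the usual IQ norming of mean 100 and standard deviation 15:

```python
from statistics import NormalDist

# IQ scores are commonly modeled as a normal distribution with
# mean 100 and standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Fraction of people between -1 and +1 standard deviations (IQ 85 to 115).
within_one_sd = iq.cdf(115) - iq.cdf(85)

# Fraction of people at or above IQ 145 (+3 standard deviations).
at_least_145 = 1 - iq.cdf(145)

print(f"{within_one_sd:.4f}")  # about 0.6827
print(f"{at_least_145:.4f}")   # about 0.0013
```

Running this confirms the 68.26 percent figure for the middle band and shows that roughly 0.13 percent of people fall at or above three standard deviations.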

IQ tests are useful and widely used. As Gottfredson (1998) explains:
 * The debate over intelligence and intelligence testing focuses on the question of whether it is useful or meaningful to evaluate people according to a single major dimension of cognitive competence. Is there indeed a general mental ability we commonly call “intelligence,” and is it important in the practical affairs of life? The answer, based on decades of intelligence research, is an unequivocal yes. No matter their form or content, tests of mental skills invariably point to the existence of a global factor that permeates all aspects of cognition. And this factor seems to have considerable influence on a person’s practical quality of life. Intelligence as measured by IQ tests is the single most effective predictor known of individual performance at school and on the job. [Italics added for emphasis]

However, IQ is only one measure of a person’s intelligence, and many people argue about the value of this measure. One obvious flaw is that different IQ tests emphasize somewhat different components of IQ. From the work of Howard Gardner (2003) and others, it is clear that a person might have substantially different intelligence scores in different areas of intelligence. Put another way, if two different IQ tests place different weights (for example, by using different numbers of questions) on various types of IQ, then a person might well score quite differently on the two tests.

IQ tests do not measure persistence, drive, passion, intrinsic motivation, emotional intelligence, social intelligence, and other traits that make a huge difference in learning, problem solving, and other human activities. Moreover, some people use their intellectual gifts much more effectively than do other people with similar IQs.

The Speed and Quality of Human Learning
The Science of Teaching and Learning provides us with some insights into how to help students learn better and faster. We also know that, on average, people with higher IQs learn quite a bit faster and better than people with lower IQs. For the range of students who can effectively participate in the regular classroom environment in precollege education, the lower 5 percent probably learn half as fast, and not as well, as those with IQs in the mid range, while the upper 5 percent probably learn twice as fast or more, and quite a bit better, than the mid-range students.

Gottfredson (1998) provides information about the rate of learning of slow learners versus fast learners:


 * High-ability students also master material at many times the rate of their low-ability peers. Many investigations have helped quantify this discrepancy. For example, a 1969 study done for the U.S. Army by the Human Resources Research Office found that enlistees in the bottom fifth of the ability distribution required two to six times as many teaching trials and prompts as did their higher-ability peers to attain minimal proficiency in rifle assembly, monitoring signals, combat plotting and other basic military tasks. Similarly, in school settings the ratio of learning rates between "fast" and "slow" students is typically five to one. [Italics added for emphasis]

The findings about slower learners versus faster learners are applicable to students in higher education. The typical four-year undergraduate degree program is based on students taking an average of about 15 credits per term. The general expectation is that one credit corresponds to an hour of class meeting per week (actually, 50 minutes) plus two hours of study outside of class for each hour in class. Thus, for an average student getting average grades, 15 credits correspond to about 45 hours per week. Students who learn quite a bit faster and better than average are able to carry a significantly heavier course load and earn better than average grades.

Measuring Machine Intelligence
Consider an inexpensive solar-powered calculator. It can add, subtract, multiply, divide, and calculate a square root. It takes hundreds of hours of instruction and practice for an average human to learn to carry out these operations reasonably accurately and reasonably rapidly in a paper-and-pencil environment. Moreover, humans are somewhat error prone in using the low-tech paper-and-pencil calculation technology.

Thus, one might claim that in doing eight-digit calculations, the inexpensive calculator has more intelligence than a well-educated or well-trained human. I have a calculator that only cost me a dollar. In well under a second, it can tell me that the (positive) square root of 235 is 15.329709.

Of course, my calculator has no “understanding” of what a square root is. It does not know about mathematical functions and that a square root is a mathematical function with a wide range of uses. One of the uses is in determining the length of one edge of a right triangle, given the lengths of the other two edges. This involves knowing the Pythagorean theorem. My calculator does not know about surveying land to determine the corners of a plot of land after a flood. It does not know about the Nile River, the ancient Egyptians, and so on.

The point is that computer hardware and software can be developed to solve or help solve a wide range of problems that previously required human intelligence to solve. However, this does not give us much insight into the intelligence of a computer system or how to measure machine intelligence.

Some researchers in AI have put a tremendous amount of effort into developing and studying chess-playing computer systems. IBM even designed computer hardware that would be especially fast in analyzing chess moves. In 1997, this computer system (a combination of hardware and software named Deep Blue) defeated Garry Kasparov, the reigning human world chess champion.

Wow! That certainly impressed some people. Of course, Deep Blue did not know what a human being is, what a game is, the thrill of victory and the agony of defeat, why people enjoy learning how to play a game such as chess, why it took Garry Kasparov tens of thousands of hours of effort to become world champion, and so on.

Some people continue to put considerable effort into developing better computer chess programs. The best of such programs, playing on a typical modern microcomputer, are playing at roughly the same level as the top human chess players in the world (Andrews, 2007).

Consider a simple personal computer equipped with computer-assisted learning (CAL) software designed to help a human get better at fast keyboarding. The software provides instruction that has been carefully designed by one or more humans. The computer system can keep track of the performance of each finger of the human keyboarder. If a particular finger or combination of fingers is somewhat slow or inaccurate, it can adjust the training to give specific help to the finger or combination of fingers. It can provide an appropriate combination of speed drills and accuracy drills to help the user gain both speed and accuracy.

In summary, it is possible to develop keyboarding CAL that can outperform a good human tutor in certain areas, and that is as good as an individual human tutor in other areas. Do you have pity for all of the humans who used to teach typing? Perhaps you should. If you do a Google search on free typing tutor, you will see that part of what typing teachers used to do for a living can now be done by software that is available free on the Web. The AI of keyboarding CAL has proven sufficient to substantially change the jobs of humans who teach typing (keyboarding).

Do you know your current keyboarding speed? Free software is available on the Web that can assess your keyboarding skills. See, for example, http://www.northcanton.sparcc.org/~technology/keyboarding/freeware.html. If you are unsatisfied with your speed, select one of the free self-instruction tutorials and spend a few hours, or a few dozen hours, practicing.

To carry the keyboarding example further, consider voice input systems. Voice input has proven to be a challenging AI problem. However, this task has been solved well enough that many people now use voice input. When the user detects a voice recognition error, he or she can correct it using voice input and/or a keyboard and mouse. Voice input systems get better (smarter, more intelligent) through continued research and the development of better hardware and software. A system can also improve by training itself to a particular user’s voice. Voice input is now good enough that it has wide commercial uses.

All of the examples given above fall into what is called weak AI. Contrast this with strong AI. As explained in Wikipedia:


 * In the philosophy of artificial intelligence, strong AI is the supposition that some forms of artificial intelligence can truly reason and solve problems; strong AI supposes that it is possible for machines to become sapient, or self-aware, but may or may not exhibit human-like thought processes. The term strong AI was originally coined by John Searle, who writes: “according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind.”

Ray Kurzweil and others see the steady progress in weak AI and agree that it will continue. Kurzweil believes we will have strong AI that far exceeds human intelligence before the year 2050. Others believe that we will never have strong AI that in any sense rivals the intelligence of humans.

The educational implications of this issue are immense. Pick any academic discipline and the problems it addresses. Now, divide these problems into two categories. Into the first category, place the problems where a properly educated and trained person working with the best of current AI systems can substantially outperform an equally well-educated and well-trained person who does not have access to the computer systems. The second category of problems contains those where current levels of AI do not make an appreciable difference.

Now think about the first category of problems steadily increasing in size through a combination of steady progress in weak (and possibly strong) AI and through improved education for people who will work in this collaborative environment. That describes the world as it is today and in the future. You are living during a time of continual and substantial increase in the capabilities of computer systems.

Our educational system is struggling with what it should be doing about this change. You, personally, face the challenge of getting an education that helps prepare you for a future of steady increase in machine intelligence.

Human and Machine Memory
It is clear that memory is an important aspect of intelligence. Here is a repeat of the quote from Hawkins and Blakeslee (2004) given earlier in this chapter:


 * The brain uses vast amounts of memory to create a model of the world. Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions about future events. It is the ability to make predictions about the future that is the crux of intelligence.

One of the advantages that computers have over humans is their ability to quickly store and quickly retrieve large amounts of information. We are impressed by a person who can memorize a book or a musical performer who can memorize music that totals tens of thousands of notes in length. Such memorization takes considerable time and effort. Contrast this with storing information on a 300-gigabyte computer disk at the rate of many millions of bytes per second. Such a hard disk and its disk drive can now be purchased for under $100—and can store the equivalent of 300,000 full-length novels.

In 1956, George A. Miller published a seminal paper about human memory: “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” The unifying theme in this paper is that for a typical person, short-term working memory can only store about five to nine chunks of information. This means, for example, that a typical person can read or hear a seven-digit telephone number and remember it long enough to immediately key it into a telephone keypad.

Here are descriptions of three different types of human memory:


 * Sensory memory stores data from one’s senses, and for only a short time. For example, for most people visual sensory memory stores an image for less than a second and auditory sensory memory stores aural information for less than four seconds.
 * Working memory (short-term memory) can store and consciously, actively process a small number of chunks of information. It retains these chunks for less than 20 seconds.
 * One can draw a parallel between the central processing unit (CPU) of a computer and the working memory of a human. The CPU has some storage used in processing pieces of information, so it can be thought of as both a storage and a processing unit.


 * Long-term memory has large capacity and stores information for a long time. Human long-term memory is both a storage and a processing system. Millions of neurons may be working together (in parallel) to carry out a task.


 * The analogy between computer long-term memory (such as disk memory) and human long-term memory is not a good one. It is not correct to think of a specific neuron as storing a specific chunk of information or one byte of data. Moreover, the processing done by a human brain is not like the processing done by a digital computer. A human brain stores and processes patterns, using a large number of neurons to store a pattern and a large number of neurons when it is processing a pattern or group of patterns.

Working memory has a quite limited capacity. However, the research on expertise by Ericsson (n.d., in press) and others indicates that experts train their long-term memory in their areas of expertise so that it has some of the characteristics of working memory. One way to think about this is that working memory processes information at a conscious level and long-term memory processes information at a subconscious level. By dint of thousands of hours of study and practice, one can gain a certain amount of conscious control over parts of one’s long-term memory.

Artificial Intelligence
As noted earlier in this chapter, many AI experts like to distinguish between weak AI and strong AI. Some see us moving toward AI systems that can far exceed the mental capabilities of people. It is important that educators and students understand some of the mechanisms for increasing AI and areas of weakness of AI relative to human intelligence.

Rote Memory
The discussion of computer memory and CPUs gives us part of the foundation needed to discuss ways of increasing the AI of computer systems. Suppose, for example, that IQ depended mostly on having a large memory that can quickly memorize material, regurgitate what it has memorized, and not forget over a period of many years. By that measure, computers beat people hands down. Moreover, we are still in a technology development phase in which the cost effectiveness of computer storage devices is improving very rapidly. In addition, the Internet makes it possible for hundreds of millions of people to access the Web, a huge and rapidly growing virtual library.

Rote memory is an important component of increasing expertise in a variety of disciplines. For example, a world-class chess player must memorize many thousands of sequences of opening moves and end game moves. However, the number of different sequences of possible moves in a chess game overwhelms a human’s rote-memory approach to achieving a high level of expertise. Indeed, it overwhelms a computer’s rote-memory approach. A computer can easily memorize a trillion different sequences of chess moves. However, that is a very small number relative to the number of different sequences of moves possible in a chess game.
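A rough back-of-the-envelope calculation shows why rote memory is hopeless here. Assuming Claude Shannon's classic estimates of roughly 35 legal moves per position and games of roughly 80 half-moves (these particular numbers are illustrative assumptions, not taken from this chapter), even a trillion memorized games is a vanishingly small fraction of the possibilities:

```python
import math

# Shannon-style estimate of the chess game tree:
# ~35 legal moves per position, ~80 half-moves (plies) per game.
moves_per_position = 35
plies_per_game = 80

possible_games = moves_per_position ** plies_per_game  # roughly 10**123

memorized = 10 ** 12  # a trillion memorized move sequences

print(f"possible games  ~ 10^{math.log10(possible_games):.0f}")
print(f"memorized share ~ 10^{math.log10(memorized / possible_games):.0f}")
```

However the estimates are tweaked, the exponent gap is so large that memorization alone cannot cover more than a negligible sliver of the game.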

Algorithmic and Heuristic Procedures
An algorithmic procedure is a step-by-step procedure that can be proven to succeed in the task it is designed to accomplish. For example, you know an algorithm for multiplying multi-digit numbers. To increase the weak AI of a computer, problem situations can be analyzed and new or better algorithms can be developed.

A heuristic procedure (often called a rule of thumb) is also a step-by-step procedure, but not one that can be proven to always work. We all frequently use heuristic procedures. Before I walk across a one-way street, I look in the direction from which the traffic is coming, in order to avoid getting hit by a car or a bicycle. However, this heuristic procedure fails if a car or bicycle is driving the wrong way on the road. It may also fail me if a car pulls out of a nearby parking lot immediately after I have checked for traffic.

Much of the recent progress in weak AI has been in developing better heuristics. A typical AI program contains a combination of algorithms and heuristics. A spelling checker uses an algorithmic procedure to see if a word is in its spelling dictionary. If a word is not in the dictionary, a heuristic procedure is used to suggest alternative words or spellings. I increase the intelligence of my spelling checker when I add words to my custom dictionary.
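The algorithm-versus-heuristic split in a spelling checker can be sketched in a few lines of Python. The tiny dictionary here is purely illustrative; real spelling checkers use far larger word lists and more sophisticated suggestion heuristics:

```python
import difflib

# A toy spelling dictionary (illustrative only).
dictionary = {"heuristic", "algorithm", "procedure", "problem", "computer"}

def check_word(word):
    # Algorithmic step: a dictionary lookup is guaranteed to answer
    # "is this word in the list?" correctly.
    if word in dictionary:
        return []
    # Heuristic step: suggest near matches. This usually helps, but
    # nothing guarantees the intended word is among the suggestions.
    return difflib.get_close_matches(word, dictionary, n=3)

print(check_word("algorithm"))  # [] -- word is in the dictionary
print(check_word("algoritm"))   # likely suggests 'algorithm'
```

The lookup either succeeds or fails with certainty; the suggestion step, like any rule of thumb, can miss the word the writer intended.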

Grammar checking is a far harder problem than spelling checking. At the current time, grammar checking is based on a combination of algorithms and heuristics, and the results are only of modest quality. Current levels of weak AI are not well suited to the task of grammar checking.

Voice input to computers is a relatively difficult AI problem. It has gradually gotten better as better heuristics have been developed.

Language translation is a really hard problem. Current levels of success are still rather modest, but significant progress is occurring (Tanner, 2007). The language translation problem gives us some interesting insights into the power of a human brain. Material to be translated comes into one’s brain through reading or listening. The brain translates this into meaningful ideas. Then the brain translates these meaningful ideas into a target language and produces the output. The key is that the brain understands the meaning of the input materials. Such understanding by a computer would require strong AI. Current computerized language translation systems do not have an understanding of the meaning of what they are translating.

Machine Learning
Still another way to increase the intelligence of a computer system is to have the computer system actively involved in learning on its own. For example, to develop a better chess-playing program, the program can use the analyses that have been carried out by chess masters, and can also “study” thousands of games that have been played by chess masters. A somewhat similar approach can be used in developing computer systems that can carry out medical diagnostic tasks. In both the chess-playing and medical diagnostic work, the computer studies what actually occurred (the game or the actual diagnosis, along with the medical data that was gathered) and then uses the outcomes (who won, whether the treatment based on the diagnosis worked).
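As a toy illustration of this learning-from-records idea (not how modern chess programs actually work), a program can tally outcomes from a collection of recorded games and learn which opening moves have historically succeeded:

```python
from collections import defaultdict

def learn_openings(games):
    """'Study' recorded games: for each opening move, tally how
    often the player using it went on to win.

    games: list of (opening_move, won) pairs from past games.
    Returns a dict mapping each opening move to its win rate."""
    tallies = defaultdict(lambda: [0, 0])  # move -> [wins, games played]
    for move, won in games:
        tallies[move][1] += 1
        if won:
            tallies[move][0] += 1
    return {move: wins / played for move, (wins, played) in tallies.items()}
```

A program that then prefers moves with higher learned win rates has, in a limited sense, improved itself by studying games rather than by being reprogrammed.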

Computer-Based Chunks
An earlier part of this chapter mentioned George Miller’s 1956 seminal paper about human memory: The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Chunking—organizing information into chunks—is an essential aspect of dealing with this limitation.

My long-distance University of Oregon phone number used to be 1-541-346-3564. I chunked it as “long distance” (1), “my local area code” (541), “my university prefix” (346), and the last four digits 3 5 6 4. In total, this is just seven chunks.

Pencil and paper are a powerful aid against cognitive overload (that is, an overload of working memory). However, training and experience are also powerful aids. Quoting George Miller:


 * A man just beginning to learn radio-telegraphic code hears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases.

Each academic discipline develops its own special vocabulary and notation in a manner that facilitates chunking. A physicist might make use of the expression E = mc2 as a single chunk, and a high-level chess player might consider a particular placement of a half dozen pieces on a chessboard as one chunk. One aspect of gaining a high level of expertise in a discipline is learning its chunks and learning to think using those chunks.

An inexpensive six-function calculator contains keys labeled with symbols such as ×, +, and √. Each can be thought of as a chunk—in this case, a symbol representing an arithmetic algorithm that the calculator can carry out. A scientific calculator provides many more chunks, while a computer provides still more chunks. For example, if one is familiar with the graphing capabilities of spreadsheet software such as Microsoft Excel, then a graph such as a bar graph, pie chart, or scatter plot is a single chunk. In some sense, learning calculator or computer chunks is somewhat like learning the words in a new language. Learning to think using such chunks is like learning to think in a new language.

Expert Systems
Earlier parts of this book have discussed expertise and islands of expertise. Computer scientists use the term expert system in discussing the development of AI-based islands of expertise. Quoting from Wikipedia:


 * An expert system, also known as a knowledge based system, is a computer program that contains some of the subject-specific knowledge, and contains the knowledge and analytical skills of one or more human experts. This class of program was first developed by researchers in artificial intelligence during the 1960s and 1970s …

Some of the early success of expert systems was in medical diagnosis. A narrow problem area is selected, such as the diagnosis of infectious diseases. One or more human experts work with knowledge engineers to develop a computerized diagnostic system. The system is then tried out using data from many different patients who were previously diagnosed by human experts. The successes and failures of the computer system are used to modify the program so that it is more accurate. This process can also be somewhat automated, by having the computer system study case histories. The results in various narrow fields of medicine have been quite good—equaling or exceeding the expertise of well-qualified human doctors. However, such systems have seen only limited use in the actual world of medical practice. One reason is liability risks. Even well-prepared medical doctors make mistakes—but in that case, there is a human being to whom to assign blame and perhaps to sue.
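A minimal sketch can show the shape of such a rule-based system. The condition names and conclusions below are invented for illustration and are not real medical knowledge; a genuine expert system encodes hundreds of rules elicited from human experts, often with certainty factors attached.

```python
# Each rule: (set of required findings, conclusion). Illustrative only.
RULES = [
    ({"fever", "stiff_neck"}, "consider meningitis; urgent referral"),
    ({"fever", "cough"}, "consider respiratory infection"),
    ({"sneezing", "itchy_eyes"}, "consider seasonal allergy"),
]

def diagnose(findings):
    """Forward chaining: fire every rule whose conditions
    are all present among the reported findings."""
    present = set(findings)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= present]
```

Refining such a system amounts to adjusting the rule base as case histories reveal wrong or missing conclusions, which is how the try-out-and-modify cycle described above improves accuracy.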

There is less concern in areas such as deciding whether to make a loan to a bank customer and in thousands of other situations where the islands of expertise provided by expert systems are now in routine use. My Google search using the quoted expression “expert system” produced 191 million hits.

Summary and Self-Assessment
You can maintain and even increase your intelligence by keeping yourself physically and mentally fit, and by your explicit efforts to increase your crystallized (Gc) intelligence. You can also improve your overall physical and cognitive capabilities by learning to make use of effective aids. This book focuses specifically on encouraging you to learn to make effective use of ICT systems. This chapter focuses specifically on the auxiliary brain/mind aspects of artificial intelligence.

Weak AI systems can solve a variety of cognitive problems and accomplish a variety of cognitive tasks. In some of these problems and tasks, AI systems readily outperform humans. However, the weak AI we currently have is substantially different from human intelligence.

At the current time, there are thousands of researchers and programmers working to improve the software and underlying theory of weak and strong AI. When any significant progress occurs, it can be widely disseminated and readily implemented on millions of different microcomputers.

It is evident that weak AI will continue to improve. However, weak AI tends to be domain-specific, in that a particular AI system deals with a narrow range of problems. It also tends to be quite limited in problem areas that require understanding of human language and of what it means to be a human being.

ICT presents both threats and opportunities. The development of intelligent machines is still in its infancy. By and large, even the people of the best-educated nations of the world are having difficulty dealing with this steady (and increasing) pace of change. My advice to you is to think carefully about (that is, self-assess) how well you are doing in each of the following:


 * Know your own physical and cognitive capabilities and limitations. Work to maintain and improve your physical and cognitive capabilities, and learn to make accommodations for your limitations.
 * Know the physical and AI machine capabilities and limitations of computer systems and computerized machines. Pay particular attention to the increasing capabilities of search engines on the Web and of software that can solve problems and accomplish tasks that previously required human intelligence.
 * Plot a lifelong path that is consistent with your own interests, drives, intrinsic and extrinsic motivations, and opportunities, and that takes into consideration your insights into the first two items above. Be fully aware that during your life, both you and machines will change.

Reader's Comments and Suggestions
Remember, all readers of this Wiki version of the book are free to edit the book. Please be considerate of others as you add, delete, and modify text.

Please use this Reader's Comments and Suggestions section to make general comments and suggestions.
