Artificial Intelligence
Introduction

 * "We must welcome the future, remembering that soon it will be the past; and we must respect the past, remembering that it was once all that was humanly possible." (George Santayana; Spanish citizen raised and educated in the United States, generally considered an American man of letters; 1863-1952.)


 * "The real problem is not whether machines think but whether men do." (B. F. Skinner; American psychologist; 1904-1990.)


 * "In times of change, the learner will inherit the earth while the learned are beautifully equipped for a world that no longer exists." (Eric Hoffer; American social writer and philosopher; 1902-1983.)

This introductory section is designed to awaken your brain to the progress that is occurring in developing smarter and smarter computers. A major goal in the field of machine intelligence/artificial intelligence (AI) is to develop computers that can solve problems and accomplish tasks that are currently challenging to humans or even beyond the intellectual capabilities of humans.

So, let's start with a quite modern and future-looking example. Nowadays, it is common to read about progress in developing self-driving cars. Think about the human knowledge, skills, and experience it takes to be a safe driver. The developers of driverless cars believe they can produce vehicles that are ten times as safe as human-driven cars.

As of 2016, quite good progress has occurred in this endeavor. My record as a forecaster of what the future will bring us in computer technology is not very good. However, I read a lot of forecasts that 10 to 15 years from now driverless cars will be reasonably common.

It is fun to look backwards at some of the progress that has occurred in developing artificial intelligence. For example, every once in a while I receive an email message in a language that I don't even recognize, much less know how to read. I copy the message into Google Translate and it provides me with a free translation. The translation is not as good as a professional bilingual translator could provide, but usually it is adequate. And such translation systems are improving year by year. Do you know any humans who have a working repertoire of 100 languages?

In my youth and college days, I liked to play chess. I read some chess books and played the game from time to time. I learned that some reading was a very valuable aid—and that there were some people who were both faster learners and much better players than I could hope to be.

But, with this background, I enjoyed following the literature as people began writing computer programs that could play chess and began to hold contests between these programs. As time went on, the computer programs became better and better, and began to play at a competitive level in chess tournaments. By 1997, IBM had designed, built, and programmed a special computer named Deep Blue specifically to play chess. Deep Blue beat the reigning (human) world chess champion Garry Kasparov in a 1997 match.

To me, and to many people throughout the world, this seemed like an amazing computer achievement. That was now many years ago. Today's desktop computers play a better chess game than the multi-million dollar Deep Blue computer system.

After its chess achievement, IBM built a computer named Watson and programmed it to play Jeopardy, the popular television game show. In 2011, this computer beat two world-class human Jeopardy players. This was considered to be a truly amazing achievement in developing a computer program that could "understand" English and quickly draw on a huge database of information to respond to the questions.

Since 2011, IBM has been steadily improving Watson's hardware and has large teams of researchers developing software to solve or help to solve problems in medical diagnosis, science, business, and a host of other cognitively challenging areas. Many of these problems require a type of "intelligence" and memory that goes far beyond that of a human being. Just think about being able to memorize and readily draw on the many millions of articles that have been published in medical research journals.

Some Older History
The accomplishments listed above are impressive. However, they are a long way from the overall accomplishments of a human brain. Quoting David Deutsch, author of Creative Blocks: The Very Laws of Physics Imply that Artificial Intelligence Must be Possible (10/3/2012):


 * But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

This article includes some of the history of Information and Communication Technology (ICT) that has led up to current AI capabilities. For example, the 19th-century mathematician Charles Babbage designed a "difference engine" that could automatically carry out computations. Quoting again from Deutsch's article:


 * Thinking about how he could enlarge that repertoire, Babbage first realised that the programming phase of the Engine’s operation could itself be automated: the initial settings of the cogs could be encoded on punched cards. And then he had an epoch-making insight. The Engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory. If it could run for long enough — powered, as he envisaged, by a steam engine — and had an unlimited supply of blank cards, its repertoire would jump from that tiny class of mathematical functions to the set of all computations that can possibly be performed by any physical object. That’s universality.


 * For humans, that difference in outcomes — the different error rate — would have been caused by the fact that computing exactly the same table with two different algorithms felt different. But it would not have felt different to the Difference Engine. It had no feelings. Experiencing boredom was one of many cognitive tasks at which the Difference Engine would have been hopelessly inferior to humans. Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately. Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general. In fact, its repertoire was confined to evaluating a tiny class of specialised mathematical functions (basically, power series in a single variable).

Here is a little more history. Quoting from James Gaskin's article, What Ever Happened to Artificial Intelligence? (6/24/2008):


 * Stanford University computer science professor John McCarthy coined the phrase in 1956 to mean "the science and engineering of making intelligent machines." In the early years of the artificial intelligence movement, enthusiasm ran high and artificial intelligence pioneers made some bold predictions.


 * In 1965, artificial intelligence innovator Herbert Simon said, "Machines will be capable, within 20 years, of doing any work a man can do."


 * Two years later, MIT researcher Marvin Minsky predicted, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."


 * Popular culture jumped onto the artificial intelligence bandwagon and gave us Rosie the Robot from the Jetsons, HAL from the movie 2001 and R2-D2 from Star Wars.


 * Yet, here we are, decades later, and what has artificial intelligence done for us lately? If you define artificial intelligence as self-aware, self-learning, mobile systems, then artificial intelligence has been a huge disappointment.


 * On the other hand, every time you search the Web, get a movie recommendation from NetFlix or speak to a telephone voice recognition system, tools developed chasing the great promise of intelligent machines do the work. In other words, we may not have full-functioning robots that cater to our every need, but artificial intelligence is embedded in our everyday lives.

Humans as Tool-using Animals
The first Homo sapiens were descendants of a long line of tool-using proto-humans. Quoting from the Wikipedia article on human evolution (https://en.wikipedia.org/wiki/Human_evolution):


 * The earliest documented representative of the genus Homo is Homo habilis, which evolved around 2.8 million years ago, and is arguably the earliest species for which there is positive evidence of the use of stone tools... During the next million years a process of rapid encephalization occurred, and with the arrival of Homo erectus and Homo ergaster in the fossil record, cranial capacity had doubled to 850 cm3... It is believed that Homo erectus and Homo ergaster were the first to use fire and complex tools, and were the first of the hominin line to leave Africa, spreading throughout Africa, Asia, and Europe between 1.3 to 1.8 million years ago.

As anatomically modern humans developed perhaps a hundred thousand years ago, they used a variety of stone and wooden tools as well as fire. While it is difficult (if not impossible) to determine when spoken language developed, it could well be that this occurred at about the time anatomically modern humans emerged. Spoken language is a very, very powerful tool!

For the next (approximately) 90,000 years, humans developed tools for hunting, gathering, and fighting that aided in their survival. Then, about 12,000 years ago, humans developed agriculture and gradually weaned themselves from being primarily hunter-gatherers.

I like to think of the agricultural developments as a combination of aids to humans' physical and mental abilities. People who were gathering fruits, vegetables, roots, and grains located some places where they were more abundant and perhaps grew better. Perhaps they scattered some of the seeds and left some of the roots, and discovered that this was a good thing to do.

About 9,000 years ago, this trial-and-error genetic engineering that went on year after year led to better crops (maize or corn, and grains) and animals (sheep, goats, cattle, and pigs). People developed villages and towns based on the availability of suitable land and water, and on the local weather.

The next huge leap forward was the development of reading and writing a little over 5,000 years ago. This greatly improved human capabilities to preserve and communicate knowledge and skills over distance and time. By then we were making some progress in developing the discipline and language of mathematics, as well as progress in astronomy and medicine.

Thousands of years of improvements in aids to our physical and mental capabilities eventually led to the Industrial Revolution which began in about 1760. The steam engine was the key to supplementing human physical power. A 10 horsepower steam engine could provide sustained "brute force" physical energy equivalent to a team of about 10 horses or 50 to 60 humans.

The Industrial Revolution produced and combined better machines using water power and steam power. This led to major improvements in the manufacturing and distribution of a wide variety of products.

Less than 80 years ago, electronic digital computers were developed as aids to storing, processing, and retrieving information. The scale of today's data is staggering: a single “run” of the Large Hadron Collider now produces about 30 petabytes of data—the equivalent of about 30 billion books. In 2016, the world's fastest supercomputer could perform more than 90 quadrillion (that is, 90 thousand million million) arithmetic computations per second. (Compare that number with how long it takes you to do a multiplication or division problem with two multidigit numbers.)

Photography, telephones, television, electronic storage and playback devices, and computers are all predecessors to today’s Smartphone. The first telephones combining the concepts of intelligence, data processing, and visual display screens became commercially available in 1993. In each of the years 2013-2015, total worldwide production of Smartphones was about 1 billion—that is, roughly one new Smartphone for every seven people on earth each year.

The “smartness” of Smartphones is quite impressive and is increasing year to year. Some of the smartness features are a Global Positioning System, a voice input and output system, and access to increasingly smart Web search engines. Some of the artificially intelligent smartness is built into a Smartphone, and some comes from access to and use of the steadily growing accumulation of human knowledge stored on the Web and in other digital libraries.

That is, if you own a Smartphone it will get better over time through upgrades to its software and through upgrades to applications that you can access from the Smartphone. I find it interesting to think about buying a product and having it get better (at no cost to me) over time.

Information Age
In the United States in 1956, "white collar" employment overtook "blue collar" employment. The Industrial Age was coming to an end and the Information Age was beginning. A steadily increasing number of employers were hiring employees to handle paperwork or to work directly with customers. While computers were becoming important, they were certainly not a major cause of this shift toward more and more white collar jobs and fewer and fewer blue collar jobs.

By 1956, the commercial production of computers had been going on for about five years. A computer could rapidly and accurately carry out a step-by-step set of instructions written in a language the machine could interpret. In terms of performing arithmetic operations and other routine, repetitious tasks, the earliest electronic digital computers could far outproduce a human. While their overall level of "intelligence" was low, computers proved to be very valuable and cost-effective aids to human intelligence.

Today's computers are more than a billion times "better" in terms of speed, reliability, and cost effectiveness. I find it hard to imagine taking a tool that was useful and cost effective in its original use, and then making it a billion times better!

Increased speed and reliability, along with decreased cost, were key aspects of being "better." In addition, computers became steadily "smarter." Eventually, this aspect of computers came to be called artificial intelligence or machine intelligence.

The academic discipline of Computer and Information Science (CIS) arose from work in engineering and physics, work in business, and work in mathematics. As Computer and Information Science departments began to be developed by universities starting in about 1960, some departments came from engineering, some from math, and some from business.

Artificial Intelligence
Artificial Intelligence (AI) is an increasingly important component of the discipline of Computer and Information Science. It also is of growing importance in education. As machines grow "smarter," our educational systems need to be in a process of continually rethinking what students should be learning.

One way to think about AI is by comparing and contrasting the capabilities and limitations of a human brain with that of an artificially intelligent computer. Both a human brain and a computer "brain" can input, store, process, and output information. Both can contain declarative and procedural knowledge. Put simply, a human brain memorizes facts (declarative knowledge) and learns procedures for processing declarative knowledge.
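
For readers who program, here is a minimal sketch of the distinction: declarative knowledge represented as stored facts, and procedural knowledge represented as a stored procedure that processes those facts. This is my own Python illustration, not a model of how a brain actually works, and the particular facts and function are made up for the example.

```python
# Declarative knowledge: stored facts that can simply be looked up.
facts = {
    "capital_of_france": "Paris",
    "boiling_point_of_water_C": 100,
}

# Procedural knowledge: a memorized procedure for processing facts.
def to_fahrenheit(celsius):
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(facts["capital_of_france"])                        # recall a fact
print(to_fahrenheit(facts["boiling_point_of_water_C"]))  # apply a procedure: 212.0
```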

Right now, a human brain is far, far better at thinking, understanding, and knowing what it is like to be human than is a computer brain. However, a computer brain can carry out a variety of procedures that humans consider to be important much faster and more accurately than can a human brain.

Thus, we need an educational system that prepares humans to work in an environment in which both human brains and computer brains are valued and useful. This general idea underlies the field of Computational Thinking.

Overview of AI in Education
I have long been interested in applications and implications of Artificial Intelligence (AI) in education. While the following free book is out of date, it provides a good introduction to the topic.


 * Moursund, D.G. (2005, 2006). Introduction to Educational Implications of Artificial Intelligence. Eugene, OR: Information Age Education. Download PDF file from http://i-a-e.org/downloads/free-ebooks-by-dave-moursund/6-introduction-to-educational-implications-of-artificial-intelligence-1/file.html. Download Microsoft Word file from http://i-a-e.org/downloads/free-ebooks-by-dave-moursund/5-introduction-to-educational-implications-of-artificial-intelligence/file.html.

The following quote from the abstract of the book provides a good starting point for further exploration of the field of AI in education.

Abstract


 * This book is designed to help preservice and inservice teachers learn about some of the educational implications of current uses of Artificial Intelligence as an aid to solving problems and accomplishing tasks. Humans and their predecessors have developed a wide range of tools to help solve the types of problems that they face. Such tools embody some of the knowledge and skills of those who discover, invent, design, and build the tools. Because of this, in some sense a tool user gains in knowledge and skill by learning to make use of tools.


 * This document uses the term “tool” in a very broad sense. It includes the stone ax, the flint knife, reading and writing, arithmetic and other math, the hoe and plow, the telescope, microscope, and other scientific instruments, the steam engine and steam locomotive, the bicycle, the internal combustion engine and automobile, and so on. It also includes the computer hardware, software, and connectivity that we lump together under the title Information and Communication Technology (ICT).


 * Artificial intelligence (AI) is a branch of the field of computer and information science. It focuses on developing hardware and software systems that solve problems and accomplish tasks that—if accomplished by humans—would be considered a display of intelligence. The field of AI includes studying and developing machines such as robots, automatic pilots for airplanes and space ships, and “smart” military weapons. Europeans tend to use the term machine intelligence (MI) instead of the term AI.


 * The theory and practice of AI is leading to the development of a wide range of artificially intelligent tools. These tools, sometimes working under the guidance of a human and sometimes without external guidance, are able to solve or help solve a steadily increasing range of problems. Over the past 50 years, AI has produced a number of results that are important to students, teachers, our overall educational system, and to our society.


 * This short book provides an overview of AI from K-12 education and teacher education points of view. It is designed specifically for preservice and inservice teachers and school administrators. However, educational aides, parents, school site council members, school board members, and others who are interested in education will find this booklet to be useful.


 * This book is designed for self-study, for use in workshops, for use in a short course, and for use as a unit of study in a longer course on ICT in education. It contains a number of ideas for immediate application of the content, and it contains a number of activities for use in workshops and courses. An appendix contains suggestions for Project-Based Learning activities suitable for educators and students.

The next eight sections contain brief summaries of the eight chapters of the book.

Chapter 1: Intelligence and Other Aids to Problem Solving
Many of us are now routinely using artificial intelligence (AI; also known as machine intelligence) as an aid to solving problems and accomplishing tasks. The book places specific emphasis on educational applications and implications of AI.

The first chapter provides background needed in the remainder of the book. The background includes:


 * Several definitions of artificial intelligence.


 * A discussion of human intelligence.


 * A brief introduction to problem solving.

Chapter 2: Goals of Education
Each person has their own ideas on what constitutes appropriate goals for education. Thus, this topic can lead to heated debate and is currently a major political issue. Curriculum content, instructional processes, and assessment are all controversial issues. What constitutes a “good” education or a “good” school?

David Perkins' 1992 book, Smart Schools: Better Thinking and Learning for Every Child, contains an excellent overview of education and of a wide variety of attempts to improve our educational system. He analyzes these attempted improvements in terms of how well they have contributed to accomplishing the following three major goals of education:


 * 1) Acquisition and retention of knowledge and skills.
 * 2) Understanding of one's acquired knowledge and skills.
 * 3) Active use of one's acquired knowledge and skills. (Transfer of learning. Ability to apply one's learning to new settings. Ability to analyze and solve novel problems.)

These three general goals—acquisition & retention, understanding, and use of knowledge & skills—help guide formal educational systems throughout the world. They are widely accepted goals that have endured over the years. They provide a solid starting point for the analysis of any existing or proposed educational system. We want students to have a great deal of learning and application experience—both in school and outside of school—in each of these three goal areas.

Chapter 3: Computer Chess and Chesslandia
In an interview quoted in chapter 1, Marvin Minsky noted that it is much easier to program a computer to play chess than it is to develop a computerized robot that can do routine household work. Still, developing a computer program with a high level of chess expertise proved to be a challenging AI task. This chapter explores this effort and some of its educational implications. In addition, it introduces Alan Turing and the Turing Test for computer intelligence.

Alan Turing (1912-1954) was a very good mathematician and a pioneer in the field of electronic digital computers. In 1936, he published a math paper that provides theoretical underpinnings for the capabilities and limitations of computers. During World War II, he helped to develop computers in England that played a significant role in England’s war efforts. In 1950, Alan Turing published a paper discussing ideas of current and potential computer intelligence, and describing what is now known as the Turing Test for AI (Loebner Prize, 2016).

In essence, the Turing Test involves humans communicating in natural (written) language with someone or something (a human or a computer) and trying to decide whether they are communicating with a human or a computer. Developing a computer system that can pass this test has proven to be far more difficult than Turing imagined. Now, nearly 70 years later, such a computer system has not yet been developed.

Chapter 4: Algorithmic and Heuristic Procedures
At some time in your life, you learned and/or memorized procedures for multi-digit multiplication and long division, for looking up a word in a dictionary or a name in a telephone book, for alphabetizing a list, and for accomplishing many other routine tasks.

A procedure is a detailed step-by-step set of directions that can be interpreted and carried out by a specified agent. Our focus here is on procedures designed to solve or help to solve a specified category of problems. Remember, our definition of problem includes accomplishing tasks, making decisions, answering questions, and so on. We are particularly interested in procedures that humans can carry out and in procedures that computers can carry out.

Here are two types of procedures.


 * 1) Algorithm. An algorithm is a procedure that is guaranteed to solve the problem or accomplish the task for which it is designed. You know a paper-and-pencil algorithm for multiplying multi-digit numbers. If you carry out the procedure (the algorithm) without error, you will solve the multiplication problem.
 * 2) Heuristic. A heuristic is a procedure that is designed to solve a problem or accomplish a task, but that is not guaranteed to solve the problem or accomplish the task. A heuristic is often called a rule of thumb. You know and routinely use lots of heuristics. They work successfully often enough for you so that you continue to use them. For example, perhaps you have a heuristic that guides your actions as you try to avoid traffic jams or try to find a parking place. Perhaps you use heuristics to help prepare for a test or for making friends. Teachers make use of a variety of heuristics for classroom management. (A short code sketch contrasting these two types of procedures appears after this list.)
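
To make the distinction concrete, here is a short Python sketch. It is my own illustration, not from the book. The first procedure is an algorithm: the paper-and-pencil method of multi-digit multiplication, guaranteed to give the correct product if carried out without error. The second is a heuristic: the familiar nearest-neighbor rule for planning a short route through several stops, which usually gives a reasonable route but is not guaranteed to give the shortest one.

```python
import math

def multiply(a, b):
    """Algorithm: paper-and-pencil multiplication of nonnegative integers.
    Carried out without error, it is guaranteed to produce the product."""
    result, shift = 0, 0
    while b > 0:
        digit = b % 10                        # next digit of b, right to left
        result += digit * a * (10 ** shift)   # add one partial product
        b //= 10
        shift += 1
    return result

def nearest_neighbor_route(points):
    """Heuristic: starting at the first point, always visit the closest
    unvisited point next. This usually yields a decent route, but it is
    NOT guaranteed to yield the shortest one -- which is exactly what
    makes it a heuristic rather than an algorithm for shortest routes."""
    unvisited = set(range(1, len(points)))
    route = [0]
    while unvisited:
        here = points[route[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, points[i]))
        route.append(nearest)
        unvisited.remove(nearest)
    return route

print(multiply(123, 456))                                        # 56088
print(nearest_neighbor_route([(0, 0), (5, 5), (1, 1), (6, 5)]))  # [0, 2, 1, 3]
```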

Chapter 5: Procedures Used by a Word Processor
The chances are that you make substantial use of a word processor. This tool is certainly useful to a person who needs to write a document and produce a final product with both high-quality content and a good appearance. This chapter explores the algorithmic and heuristic intelligence of a word processor.

This chapter explores roles of computers in implementing the six steps in process writing.


 * 1) Decide upon audience and purpose, and brainstorm possible content ideas.
 * 2) Organize the brainstormed content ideas into a tentative appropriate order. Attempts to do this may lead back to step 1.
 * 3) Develop a draft of the document. Attempts to do this may lead back to steps 1 or 2.
 * 4) Obtain feedback from self, peers, teacher, etc.
 * 5) Revise the document to reflect the feedback. This may require going back to steps 1, 2, and/or 3.
 * 6) Polish and publish the document.

This six-step outline of process writing can be thought of as a procedure to be carried out by a person. However, it is evident that it takes a great deal of instruction, learning, and practice to develop a useful level of expertise in carrying out this heuristic procedure. Moreover, there is no guarantee that even when a person diligently follows this six-step procedure the result will be good writing. Thus, this six-step procedure is a heuristic procedure.

Chapter 6: Procedures Used in Game Playing
This chapter introduces the development of computer procedures that play chess, checkers, bridge, and other games that people have enjoyed playing over the years. Such game-playing computer programs typically make use of a combination of algorithmic and heuristic procedures.

To begin, explore a very simple game, tic-tac-toe (TTT). TTT is a two-player game, with players taking turns. It is fun to watch young children play the game before they develop strategies for playing well. Their moves are rather random.

The chapter presents simple TTT-playing procedures that a human can carry out against an opponent who makes random moves. (You might want to think of the latter as a simulation of a computer program that makes random moves.)

The chapter continues with an exploration of how a computer can be programmed to play a good game of TTT as well as other more challenging games.
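
As a concrete illustration, here is a compact Python sketch of one standard technique: exhaustive game-tree search, often called minimax. This is my own sketch of the general idea, not the book's code. TTT is small enough that a computer can examine every possible continuation of a game and therefore play perfectly.

```python
# Board: a list of 9 cells, each 'X', 'O', or None (empty).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Exhaustively search every continuation. Returns (score, move):
    score is +1 if `player` can force a win from here, 0 if best play
    leads to a draw, and -1 if the opponent can force a win."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                        # board full: a draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best = -2, None
    for m in moves:
        board[m] = player                     # try a move...
        score, _ = best_move(board, opponent)
        board[m] = None                       # ...then undo it
        if -score > best_score:               # the opponent's loss is our gain
            best_score, best = -score, m
    return best_score, best

# Example: X has taken the center square; ask the search for O's best reply.
board = [None] * 9
board[4] = 'X'
print(best_move(board, 'O'))                  # (0, 0): a draw, by taking a corner
```

For games such as chess, the full game tree is astronomically large, so programs must combine limited search of this kind with heuristic evaluation of positions—the combination of algorithmic and heuristic procedures discussed above.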

Chapter 7: Machine Learning
In chapters 1 and 6, we noted that an artificially intelligent ICT system can make use of a combination of:


 * Human knowledge that has been converted into a format suitable for use by an AI system; and


 * Knowledge generated by an AI system, perhaps by analyzing data, information, and knowledge at its disposal. This might be done, for example, by practicing on problems that have been handled by humans in the past and comparing its performance to that of the humans. In a computer game setting, a computer might learn by analyzing games that it plays against itself.

In addition, a computer can take a trial-and-error approach to learning. By use of its immense speed, it can do a huge number of trials and "figure out" pathways to solve certain types of problems and accomplish certain types of tasks.
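
Here is a minimal Python sketch of this brute-force, trial-and-error idea. It is my own illustration with made-up numbers: the program is given three strategies whose success rates it does not know, and it "figures out" the best one simply by trying each many thousands of times and keeping statistics on the outcomes.

```python
import random

# Hypothetical task: three strategies with hidden success probabilities.
TRUE_SUCCESS_RATES = {'A': 0.30, 'B': 0.55, 'C': 0.40}  # unknown to the learner

def try_strategy(name):
    """Run one trial; the learner sees only success (True) or failure."""
    return random.random() < TRUE_SUCCESS_RATES[name]

wins = {s: 0 for s in TRUE_SUCCESS_RATES}
plays = {s: 0 for s in TRUE_SUCCESS_RATES}

for _ in range(30_000):                          # a computer can afford many trials
    s = random.choice(list(TRUE_SUCCESS_RATES))  # pick a strategy at random
    plays[s] += 1
    wins[s] += try_strategy(s)

for s in TRUE_SUCCESS_RATES:
    print(s, wins[s] / plays[s])                 # observed success rates
best = max(plays, key=lambda s: wins[s] / max(plays[s], 1))
print("Learned best strategy:", best)            # almost always 'B'
```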

One of the major goals in AI research has been to develop computer programs that can learn on their own. By this we mean developing computer programs that learn how to solve problems and accomplish tasks without human programmers writing detailed step-by-step programs telling the computer how to solve the problems or accomplish the tasks.

Machine learning is currently one of the "leading edge" areas of research in AI.

Chapter 8: Summary and Conclusions
Artificial Intelligence is a loaded expression, evoking strong negative emotions from many people. The book stresses the differences between human intelligence and AI.

When teaching this subject to preservice and inservice teachers, I like to bring up the idea of Artificial Muscle (AM) versus human or animal muscle. The Industrial Age was based on developing machines that had AM and could outperform humans and animals in many different tasks. Since we have all grown up making use of AM, we don't think much about it.

Today's children accept AI in much the same way that their parents and grandparents accepted AM. Probably many children think, "That's just the way the world is. It's no big deal." Smart tools and smart games are just everyday routine parts of the way their world is.

Overall, the book focuses on the use of ICT to enhance the capabilities of tools. Although we have talked about human intelligence and machine intelligence, our emphasis has been on the steadily growing capabilities of smart tools as aids to solving problems and accomplishing tasks. We have avoided getting embroiled in arguing whether computers have or will ever have consciousness in the sense that people have consciousness.

Instead we have stuck to the thesis that tools embody some of the knowledge and skills of their developers, and that this empowers users of the tools. AI can be viewed as an area of research and development that strives to increase the knowledge and skills that are embodied in tools. As such tools are widely distributed and used, they change the societies of our world.

We have examined a number of AI-related tools that augment or extend mental capabilities. In addition, we have come to understand that many of the tools that augment or extend physical capabilities now make use of AI and other aspects of ICT. That is, we are seeing a merger of the two general categories of tools.

Thus, people now have available a steadily growing number of these mental and physical tools that can “just do it” for them. That is, the mental and physical tools are sufficiently automated (have appropriate levels of AI) so that they can automatically solve or help substantially in solving an increasingly wide variety of problems.

The (Possibly) Coming Singularity
What will happen if/when AI systems become "smarter" than people? People who think about this possibility use the word singularity to describe this (possible) event. Ray Kurzweil is a leader in this field, and his 2005 book, The Singularity is Near, has received wide attention. Quoting Ray Kurzweil from his 2005 book:


 * What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one’s view of life in general and one’s own particular life.

There is now a Singularity Institute for Artificial Intelligence. Quoting from the Singularity Institute website:


 * The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this direction. The most commonly mentioned is probably Artificial Intelligence, but there are others: direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity – several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.


 * The concept was solidified by mathematician and computer scientist Vernor Vinge, who coined the term “technological singularity” in an article for Omni magazine in 1983, followed by a science fiction novel, Marooned in Realtime, in 1986. Seven years later, Vinge presented his seminal paper, “The Coming Technological Singularity,” at a NASA-organized symposium. Vinge wrote:


 * What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities – on a still-shorter time scale.

Robots
We all know about outsourcing jobs to countries that have low labor costs. Perhaps we are less aware of another type of "outsourcing," in which industrial robots in our country and in many other countries take over jobs formerly performed by humans. This second type of "outsourcing" is decreasing the number of industrial manufacturing jobs performed by humans in the United States—a large and rapidly growing change.

A 2015 report from The Boston Consulting Group provides current information and projections for the next ten years (BCG, 2/10/2015). Quoting from the reference:


 * Investment in industrial robots will increase to about 10% a year over the next decade, bringing down manufacturing labor costs and boosting productivity, a new report predicts.


 * Robots have been used in manufacturing for decades but Boston Consulting Group notes in the report that they currently perform, on average, only around 10% of manufacturing tasks that can be done by machines. By 2025, BCG estimates, the portion of “automatable tasks” performed by robots will near 25% for all manufacturing industries worldwide.


 * “The use of advanced industrial robots is nearing the point of takeoff,” the report says.


 * According to BCG, this growth is being fueled in part, by declining costs. The total cost of owning and operating an advanced robotic spot welder, for example, has plunged 27%, from an average of $182,000 in 2005 to $133,000 in 2014 — and is forecast to drop by a further 22% by 2025. In addition, the performance of robotics systems is likely to continue improving by around 5% each year.


 * Annual growth in investment in industrial robots is currently averaging 2% to 3%. Boston Consulting sees that investment increasing to 10% over the next 10 years and, as a result, the total cost of manufacturing labor in 2025 could be 16% lower, on average, in the world’s 25 largest goods-exporting nations than it would be otherwise.


 * Moreover, depending on the industry and country, output per worker could rise by an estimated 10% to 30% over and above productivity gains that typically come from other measures.

I find the second-to-last paragraph particularly interesting. What does "the total cost of manufacturing labor in 2025 could be 16% lower, on average" mean for people working in industrial manufacturing?

You need to realize that the use of robots in industrial manufacturing is only part of the huge wave of change being brought on by Information and Communication Technology (ICT), and that this has been going on (and increasing) for many years. I am reminded of this every time I try to use my telephone to get some help from a company, and I first have to communicate with a computerized telephone answering system. I am reminded of this when I go shopping, and see the high level of ICT used in the check-out process. I am reminded of this when I make an online purchase. I still think of it as a modern miracle when I can purchase a “special sale” online book for $.99 and have it delivered to my tablet computer in a few seconds. No human worker is involved in this process.

Finally, think about how ATMs have affected employment in the banking industry. This is an example of training customers so that the customer and the machine together can do what a bank employee did in the past.

Educational and Job Implications
The number of middle class jobs has been declining in the U.S. and other industrialized countries for many years. Quoting from the article, Job Polarisation and the Decline of Middle-class Workers’ Wages (Boehm, 2/8/2014):


 * The decline of the middle class has come to the forefront of debate in the US and Europe in recent years. This decline has two important components in the labour market. First, the number of well-paid middle-skill jobs in manufacturing and clerical occupations has decreased substantially since the mid-1980s. Second, the relative earnings for workers around the median of the wage distribution dropped over the same period, leaving them with hardly any real wage gains in nearly 30 years.


 * The major growth in jobs has been at the lower pay levels—there are still lots of jobs that require only a modest amount of education and offer only a modest amount of pay. For example, you might want to analyze the requirements for a high school diploma versus the skills needed to be a clerk in a fast food outlet.


 * Employers also have job openings at a much higher level—jobs that require good problem-solving skills and good abilities to make use of modern technology to aid in solving problems and accomplishing tasks. Indeed, employers complain about a shortage of qualified job applicants at this level, and this is one indication that our educational system is not doing nearly as well as it should.

John Markoff's 2012 article, Skilled Work, Without the Worker, provides a good overview of robotics in manufacturing (Markoff, 8/18/2012). Quoting from the article:


 * At a sister factory here in the Dutch countryside, 128 robot arms do the same work [as in the Chinese factory] with yoga-like flexibility. Video cameras guide them through feats well beyond the capability of the most dexterous human.


 * One robot arm endlessly forms three perfect bends in two connector wires and slips them into holes almost too small for the eye to see. The arms work so fast that they must be enclosed in glass cages to prevent the people supervising them from being injured. And they do it all without a coffee break — three shifts a day, 365 days a year.


 * All told, the factory here has several dozen workers per shift, about a tenth as many as the plant in the Chinese city of Zhuhai.


 * This is the future. A new wave of robots, far more adept than those now commonly used by automakers and other heavy manufacturers, are replacing workers around the world in both manufacturing and distribution. Factories like the one here in the Netherlands are a striking counterpoint to those used by Apple and other consumer electronics giants, which employ hundreds of thousands of low-skilled workers.

My advice to the students I talk with can be summarized as follows:


 * Develop your “people” and communication skills. Become fluent in face-to-face, written, and computer communication. If you have the opportunity to do so, become bilingual and bicultural.


 * Focus your education on gaining higher-order, creative thinking, understanding, and problem-solving knowledge and skills in whatever areas you decide to study.


 * Learn about current and near-term capabilities and limitations of computers and robots. Plan your education and develop your abilities so that you do not end up in head-to-head competition with computers and robots in areas where they are already quite good and are getting better.


 * Make very sure that you learn to make effective and fluent use of ICT, both in general use and in the discipline areas you choose to study. Remember, the combination of a human brain and a computer brain can increasingly outperform either one working alone.


 * If you are “really into” computers, continue to develop your knowledge and skills in this area, but also work toward gaining a high level of expertise in one or more other career fields. This will help prepare you for many of the jobs currently held by people who are not keeping up with changes in ICT, and for new jobs requiring a combination of ICT and “traditional” knowledge and skills.


 * Develop learning skills and habits of mind that will serve you throughout your lifetime.


 * Think about what you want in your future. What informal and formal education do you need to help ensure that you will achieve your goals and attain a decent quality of life?

References and Resources
BCG (2/10/2015). Industrial robots ‘nearing point of takeoff.’ The Boston Consulting Group. Retrieved 6/23/2016 from http://ww2.cfo.com/applications/2015/02/industrial-robots-nearing-point-takeoff/.

Boehm, M. (2/8/2014). Job polarisation and the decline of middle-class workers' wages. VOX. Retrieved 6/23/2016 from http://voxeu.org/article/job-polarisation-and-decline-middle-class-workers-wages.

Charles Sturt University (7/30/2012). Learning patterns to aid computers. ScienceAlert. Retrieved 6/23/2016 from http://www.sciencealert.com/learning-patterns-to-aid-computers. Quoting from the article:


 * Patterns needed to help computers think better have been investigated by an international research group including a Charles Sturt University (CSU) expert, with the results reported in the latest issue of the international journal Nature Scientific Reports.


 * The results come from ten years of collaboration between the Director of CSU’s Centre for Research in Complex Systems, Professor Terry Bossomaier, and colleagues at the University of Sydney led by Professor Allen Snyder. Quoting Professor Bossomaier:


 * The great cognitive scientist Herbert Simon, who won the Nobel Prize for Economics, recognised this as needing to build up chunks of little patterns, and needing at least 50,000 of these to reach expert level at anything. We now think it is more than 100,000 patterns.

Comment from David Moursund: The number "100,000 patterns" helps to explain why it takes so many years of study and practice for a person to develop a world-class level of skill in games such as chess and Go.

Deutsch, D. (10/3/2012). Creative blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up? Aeon Magazine. Retrieved 6/23/2016 from http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/. (Quoted earlier in this article.)

Gaskin, J.E. (6/23/08). What ever happened to artificial intelligence? The grand promise of intelligent machines underestimated the complexity of reproducing human cognition. Network World. Retrieved 6/23/2016 from http://www.computerworld.com/article/2534413/business-intelligence/what-ever-happened-to-artificial-intelligence-.html. (Quoted earlier in this article.)

Comment from David Moursund: AI has made a lot of progress, but the progress tends to be in much smaller areas than the bold forecasts quoted from this article in the Introduction. The article contains a number of examples of such successes.

Hardesty, L. (9/9/2013). Artificial-intelligence research revives its old ambitions. MIT News. Retrieved 6/23/2016 from http://web.mit.edu/newsoffice/2013/center-for-brains-minds-and-machines-0909.html. Quoting from the document:


 * The birth of artificial-intelligence research as an autonomous discipline is generally thought to have been the month-long Dartmouth Summer Research Project on Artificial Intelligence in 1956, which convened 10 leading electrical engineers — including MIT’s Marvin Minsky and Claude Shannon — to discuss “how to make machines use language” and “form abstractions and concepts.” A decade later, impressed by rapid advances in the design of digital computers, Minsky was emboldened to declare that “within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved.”


 * The problem, of course, turned out to be much more difficult than AI’s pioneers had imagined. In recent years, by exploiting machine learning — in which computers learn to perform tasks from sets of training examples — artificial-intelligence researchers have built special-purpose systems that can do things like interpret spoken language or play Jeopardy with great success. But according to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT, “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. NY: Viking Press.

Loebner Prize (2016). The Loebner Prize. Retrieved 6/23/2016 from http://www.aisb.org.uk/events/loebner-prize. Quoting from the website:


 * The Loebner Prize is the oldest Turing Test contest, started in 1991 by Hugh Loebner and the Cambridge Centre for Behavioural studies. Since then, a number of institutions across the globe have hosted the competition including recently, the Universities of Reading, Exeter and Ulster. From 2014, the contest will be run under the aegis of the AISB, the world’s first AI society (founded 1964) at Bletchley Park where Alan Turing worked as a code-breaker during World War 2.


 * The 2015 contest was run in a similar way to those in previous years. The contest consists of 4 rounds where in each round, the 4 judges will each interact with two entities using a computer terminal. One of these entities will be a human ‘confederate’ and the other an AI system. After 25 minutes of questioning, the judge must decide which entity is the human and which is the AI. If a system can fool half the judges that it is human under these conditions, a Silver Medal and $25,000 will be awarded to the creator of that AI system.

Markoff, J. (8/18/2012). Skilled work, without the worker. Business Day. Retrieved 6/16/2016 from http://www.nytimes.com/2012/08/19/business/new-wave-of-adept-robots-is-changing-global-industry.html?pagewanted=all&_r=0.

Moursund, D. (2016). What the future is bringing us. IAE-pedia. Retrieved 6/23/2016 from http://iae-pedia.org/What_the_Future_is_Bringing_Us.

Moursund, D. (2015). Two brains are better than one. IAE-pedia. Retrieved 6/23/2016 from http://iae-pedia.org/Two_Brains_Are_Better_Than_One.

Moursund, D. (5/16/2015). Technology-based mini-singularities. IAE Blog. Retrieved 6/23/2016 from http://i-a-e.org/iae-blog/entry/technology-based-mini-singularities.html.

Moursund, D. (3/5/2015). Education for the coming technological singularity. IAE Blog. Retrieved 6/23/2016 from http://i-a-e.org/iae-blog/entry/education-for-the-coming-technological-singularity.html.

Moursund, D. (2/5/2015). The coming technological singularity. IAE Blog. Retrieved 6/23/2016 from http://i-a-e.org/iae-blog/entry/the-coming-technological-singularity.html.

Perkins, D. (1992). Smart schools: Better thinking and learning for every child. London: Free Press.

Author or Authors
The initial version of this page was written by David Moursund.