From the May 2018 Issue.
One of the hot areas of emerging technology is Artificial Intelligence, or AI. You know a topic is hot when products use the terminology in their names or marketing materials. We’ve seen this done with “ease of use”, Cloud Computing, and Blockchain. Artificial Intelligence is such a hot topic among the development community that marketing teams are claiming products have AI when, in fact, they do not. Because of this, I’ve been using the approach of calling this “artificial” Artificial Intelligence or “fake” AI. Facts matter, and many products that claim to have AI simply do not. It is truly buyer beware right now in this area.
As a computer scientist by training, I admire products that have developed solutions that leverage AI. Program development in this area is not easy, and products are only now starting to perform meaningful tasks. Some of the goals of AI are quite lofty, and the promises and risks of AI in computing are quite large. Consider the following:
On the positive side:
- Machines mimic cognitive functions associated with human minds such as learning and problem solving.
- As AI becomes more capable, tasks once considered AI are reclassified as ordinary computing as soon as they are solved; optical character recognition (OCR) is a classic example.
- Today, AI developments include understanding human speech, driving autonomous cars, and interpreting complex data such as images and video.
- Algorithms can learn from data and provide insights and actionable items with minimal human intervention, as the brief sketch after this list illustrates.
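To make that last point concrete, here is a minimal sketch in Python. The expense history, dollar figures, and the learn_threshold helper are all invented for illustration; the point is only that the review cutoff is learned from labeled data rather than hard-coded by a person:

```python
# A toy "learning from data" example: given past expense amounts
# labeled as routine or needing review, learn a dollar threshold
# from the data instead of hand-coding one. All figures are made up.

def learn_threshold(examples):
    """examples: list of (amount, needs_review) pairs."""
    routine = [amt for amt, flag in examples if not flag]
    review  = [amt for amt, flag in examples if flag]
    # Place the cutoff midway between the largest routine amount
    # and the smallest amount that required review.
    return (max(routine) + min(review)) / 2

history = [(120.00, False), (340.50, False), (980.00, False),
           (5200.00, True), (7450.75, True), (12000.00, True)]

cutoff = learn_threshold(history)          # learned, not hand-set
print(f"Flag anything over ${cutoff:,.2f}")
print("Flag $6,100?", 6100.00 > cutoff)    # True: route for review
```

If the labeled history changes, the cutoff changes with it; no human has to revisit the rule.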
On the downside:
- For difficult problems, algorithms require enormous computational resources.
- “The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” warned Stephen Hawking. Other critics include Bill Gates, Elon Musk, and Peter Thiel.
- Devaluation of humanity.
- Decrease in demand for human labor.
- Artificial moral agents.
- Machine ethics.
- Malevolent and friendly AI.
- Machine consciousness.
- Robot rights.
- Superintelligence.
AI has been under development for a long time, starting at Dartmouth in 1956. I was using LISP in 1975, and that tool was broadly used until 1987, when it was replaced by Smalltalk/Medley. AI is routinely divided into sub-fields such as robotics (a future topic in this why and how series of columns) and machine learning (third in the series), although we are treating these emerging technologies as separate articles in the series. Traditional goals include reasoning, knowledge, planning, learning, natural language processing, perception, and explainability. Tools include versions of search and mathematical optimization, neural networks, and methods based on statistics, probability, and economics. Stuart Shapiro divides AI research into three traditions, which he calls computational psychology, computational philosophy, and computer science. Together, human-like behavior, mind, and actions make up AI.
We’ve seen the results of AI in public relations stunts, such as when:
- Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on May 11, 1997.
- In a February 2011 Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.
- At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who at the time had held the world No. 1 ranking for two years.
Why?
AI is a broad collection of topics and approaches, spanning fields of study that each have considerable depth. Purists, however, are after the last bullet in the list below, general intelligence. This is not around the corner, as computer scientists of the 1950s believed, but a decade or more into the future, even with the rapid progress being made today. There are a number of problems AI is trying to solve:
- Reasoning – AI has progressed using “sub-symbolic” problem solving; statistical approaches mimic the human ability to make fast, accurate guesses.
- Knowledge representation – a representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them.
- Planning – intelligent agents must be able to set goals and achieve them, modifying inputs as needed.
- Learning – the study of computer algorithms that improve automatically through experience (a toy illustration follows this list).
- Natural language processing – the ability to read and understand human language.
- Perception – the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar, and others) to deduce aspects of the world – think digital exhaust and IoT.
- Motion and manipulation – robots need to handle tasks such as object manipulation and navigation, with sub-problems such as localization, mapping, and motion planning.
- Social intelligence – affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects (emotions), needed for two reasons:
o predicting the actions of others, such as in self-driving vehicles.
o facilitating human–computer interaction by showing emotions.
- Creativity – theoretical and/or practical generation of novel and useful outputs including music and art.
- General intelligence – many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, while a few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
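Before moving on to the how, here is a toy illustration of the learning problem above: a one-variable model that improves with experience. The hours-to-fee data, the learning rate, and the model itself are all invented for illustration; the point is that the average error shrinks as the algorithm repeatedly works through the data:

```python
# A minimal sketch of "learning": a model that improves with
# experience. It fits hours-worked -> billable-fee using simple
# gradient descent; the data and rates are made up.

data = [(1, 52), (2, 99), (3, 151), (4, 203), (5, 249)]  # (hours, fee)

w = 0.0            # model: fee is approximately w * hours
rate = 0.01        # learning rate (step size)

for epoch in range(1, 201):
    for hours, fee in data:
        error = w * hours - fee
        w -= rate * error * hours     # nudge w to shrink the error
    if epoch in (1, 10, 200):
        avg = sum(abs(w * h - f) for h, f in data) / len(data)
        print(f"epoch {epoch:3d}: w={w:6.2f}, avg error={avg:6.2f}")
```

No one tells the program the billing rate is roughly $50 per hour; it works that out from experience, and the printed error drops accordingly.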
So how do Artificial Intelligence approaches work? They use:
- Cybernetics and brain simulation – connection to neurology.
- Traditional symbolic AI – John Haugeland named these approaches to AI “good old fashioned AI,” or “GOFAI,” exploring the possibility that human intelligence could be reduced to symbol manipulation.
- Cognitive simulation – Economist Herbert Simon and Allen Newell studied human problem-solving skills through psychological experiments, work that led to the Soar architecture in the 1980s.
- Logic-based – John McCarthy and his laboratory at Stanford (SAIL) used formal logic; this work led to the Prolog language and the science of logic programming.
- Anti-logic or “scruffy” – Marvin Minsky and Seymour Papert found that solving difficult problems in vision and natural language processing required ad hoc solutions.
- Knowledge-based – led to the development in the 1970s of expert systems, introduced by Edward Feigenbaum of Stanford.
- Sub-symbolic – when traditional symbolic AI stalled in the 1980s, unable to solve problems in perception, robotics, learning, and pattern recognition, researchers turned to approaches that did not encode knowledge explicitly.
- Embodied intelligence – researchers in robotics, such as Rodney Brooks, reintroduced the use of control theory and the embodied mind.
- Computational intelligence – neural networks and “connectionism” were revived by David Rumelhart, leading to soft computing approaches including fuzzy systems, evolutionary computation, and statistical tools.
- Statistical methods – sophisticated mathematical tools to solve specific subproblems that are truly scientific, in the sense that their results are both measurable and verifiable.
- Intelligent agent – a system that perceives its environment and takes actions that maximize its chances of success (a minimal perceive-decide-act sketch follows this list).
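To illustrate that last approach, here is a minimal perceive-decide-act loop in Python. The invoice queue, the dollar limits, and the perceive and decide helpers are all hypothetical; the sketch only shows the agent pattern of observing the environment and choosing the action most likely to succeed:

```python
# A minimal sketch of an intelligent agent: perceive the environment,
# decide on an action, act. The "environment" is an invented invoice
# queue; the policy numbers are purely illustrative.

def perceive(invoice):
    """Observe the relevant features of one piece of the environment."""
    return {"amount": invoice["amount"], "vendor_known": invoice["known"]}

def decide(percept):
    """Pick the action expected to maximize success: approve quickly
    when the risk is low, otherwise escalate to a human reviewer."""
    if percept["vendor_known"] and percept["amount"] < 1000:
        return "auto-approve"
    return "escalate"

queue = [{"amount": 250, "known": True},
         {"amount": 4800, "known": True},
         {"amount": 90, "known": False}]

for invoice in queue:                 # the agent's run loop
    action = decide(perceive(invoice))
    print(invoice, "->", action)
```

Real agents add learning and feedback on top of this loop, but the perceive-decide-act cycle is the core of the definition above.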
Read my column next month (June 2018) to find out “What AI Means for the Practice of Accounting and to Accounting Professionals.”