Artificial Intelligence: A Very Short Introduction (by Margaret A. Boden)
42.cx Book Review, Kurt Hornburg, January 27th 2020
In the mid-1830s, Charles Babbage was credited with designing the first digital computer, and his colleague Ada Lovelace with developing a prototype of a computer program. Since many of the pioneers of AI were trained not only as mathematicians but also as neurologists, psychologists, philosophers, and anthropologists, Margaret Boden is well qualified to give insights into the beginnings of AI. A Professor at the University of Sussex, UK, she holds multiple degrees in the natural sciences and incorporates these disciplines into her AI research. The book requires no mathematical knowledge or understanding of algorithms; it is rather an introduction to the very broad, yet increasingly specialized, field of AI.
As in many scientific fields, there have been two approaches since the beginning of AI, sometimes amicable and at other times arch-rivals; the divide deepened especially when professional prestige and funding were involved. Cooperation was still good at the 1956 Dartmouth Workshop, where the organizer, John McCarthy, chose the term Artificial Intelligence as the name of the new field.
In 1958, Frank Rosenblatt published a description of neuron-like computing, or connectionism: a parallel distributed processing (PDP) system that enabled self-organized learning. This fractured the field, and an intellectual dissent developed between symbolic AI on one side and connectionist AI (also called cybernetic or neural-network AI) on the other. In short, advocates of symbolic AI attacked the connectionist/neural-network supporters, effectively discrediting them.
The result was that funding for neural network research dried up for the next two decades. This 'winter' of network research came to an end in the late 1980s, when the approach came back into fashion, swinging the pendulum toward neural network research.
While the section on the history of AI can be difficult to follow, Ms. Boden's description of the key strategies for classical AI is succinct and useful. Among the approaches pioneered for Good Old-Fashioned AI (GOFAI) is heuristic search. 'Heuristic' derives from the Greek word for 'discover' or 'find' (the root of 'Eureka'), and heuristic search is essentially the study of the methods and rules of discovery. Innovative heuristic search, when applied to a very narrow domain, may produce considerable benefits. Planning in AI may seem mundane; however, it plays an important role in building complex hierarchies of instructions. The techniques of 'forward-chaining' and 'backward-chaining' provide human transparency into how the program arrived at its answer or recommendation, addressing an issue raised by AI critics.
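The transparency of rule chaining can be illustrated with a minimal sketch. The rules, facts, and the medical scenario below are invented for demonstration, not taken from the book; the point is that the returned trace shows every rule fired on the way to the conclusion.

```python
# Hypothetical if-then rules: (set of premises, conclusion).
RULES = [
    ({"has_fever", "has_rash"}, "measles_suspected"),
    ({"measles_suspected", "unvaccinated"}, "refer_to_doctor"),
]

def backward_chain(goal, facts, trace=None):
    """Try to prove `goal` from `facts`, recording each rule fired."""
    if trace is None:
        trace = []
    if goal in facts:
        return True, trace
    for premises, conclusion in RULES:
        if conclusion != goal:
            continue
        # Recursively prove every premise before firing the rule.
        if all(backward_chain(p, facts, trace)[0] for p in premises):
            trace.append(f"{sorted(premises)} -> {conclusion}")
            return True, trace
    return False, trace

ok, steps = backward_chain(
    "refer_to_doctor", {"has_fever", "has_rash", "unvaccinated"})
# `steps` is a human-readable record of the reasoning chain.
```

The trace is what gives a human observer insight into why the system reached its recommendation; forward chaining would instead start from the facts and fire rules until the conclusion appears.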
Knowledge representation is the process of presenting the problem to be solved in a manner the system understands. There are creative, tailored approaches to knowledge representation that address problems in highly specialized domains, including medicine and law. A semantic network is one type of knowledge representation: it logically links concepts through semantic relationships, enabling a computer to 'understand' them. Semantic networks are widely used in Natural Language Processing (NLP) applications such as speech recognition and search.
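A toy sketch can show how a semantic network links concepts. The triples and relation names below are illustrative examples I have invented, not the book's; the interesting part is that properties can be inherited along 'is_a' links, which is how such a network supports inference.

```python
# Toy semantic network: (subject, relation, object) triples.
TRIPLES = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("canary", "has_color", "yellow"),
]

def holds(subject, relation, obj):
    """True if the fact is stored directly, or inherited via 'is_a'."""
    if (subject, relation, obj) in TRIPLES:
        return True
    # Inherit properties from more general concepts.
    for s, r, parent in TRIPLES:
        if s == subject and r == "is_a" and holds(parent, relation, obj):
            return True
    return False
```

With this, `holds("canary", "can", "fly")` succeeds even though no triple states it directly: the network 'understands' that a canary is a bird, and birds can fly.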
Machine learning, now pervasive, often utilizes symbolic AI/GOFAI combined with sophisticated statistical algorithms. This is why some professionals in the field believe GOFAI is closer to statistics or computer science than to AI. However, some machine learning (ML) employs neural networks; there is no clear border.
The book serves as an excellent reference for the many abbreviations and terms used in AI. The three main types of machine learning, which for the most part utilize symbolic AI, are supervised learning, unsupervised learning, and reinforcement learning. There is a multitude of algorithms that may be utilized in ML, and only after fully understanding the data sets and the context of the learning can ML professionals select the algorithms to be used. Off-the-shelf algorithms are frequently used, rather than bespoke developments.
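The difference between two of these paradigms can be sketched in a few lines. The data and the specific algorithms below (a 1-nearest-neighbor classifier and a crude one-dimensional two-means clustering) are my own illustrative choices, not methods the book prescribes.

```python
def nearest_neighbor(train, query):
    """Supervised learning: labeled examples -> predict a label.
    `train` is a list of (numeric_feature, label) pairs."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

def two_means_1d(points, iters=10):
    """Unsupervised learning: unlabeled points -> two cluster centers."""
    a, b = min(points), max(points)  # initial guesses for the centers
    for _ in range(iters):
        left = [p for p in points if abs(p - a) <= abs(p - b)]
        right = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(left) / len(left)
        b = sum(right) / len(right)
    return a, b
```

The supervised learner needs labels supplied by a human; the unsupervised one discovers structure on its own. Reinforcement learning, the third type, instead learns from delayed rewards for actions, which does not reduce to a few lines as neatly.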
Deep learning is a broad family of machine learning based on multilayer networks: "… deep learning discovers a multilevel knowledge representation…". This combination of technologies is able to make assumptions, test them, and even reevaluate the model in a self-organizing feedback loop. The chapter on Artificial Neural Networks (ANNs) gives an excellent overview of many types of neural networks: ANNs are parallel-processing virtual machines implemented on a conventional computer, and PDP (parallel distributed processing) networks are a category of ANN.
Examining the major strengths of PDPs gives insight into why deep learning is so powerful: first, the capability to learn patterns without being programmed by a human; second, the ability to tolerate ambiguity in cluttered and chaotic evidence, without humans developing exact definitions and required conditions; third, the ability to recognize patterns in data with a great deal of variance, so-called 'dirty data'.
The author's explanation, "PDP involves distributed representation, for each concept is represented by the state of the entire network", leads to the fourth advantage: a PDP network keeps working even when some nodes are down; its performance deteriorates gradually, i.e. it has 'graceful degradation'.
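Graceful degradation can be demonstrated with a minimal sketch of distributed representation. The concepts and random patterns below are invented for illustration: each concept is stored as a pattern spread across many units, and recall still succeeds when a sizable fraction of the units is knocked out, because no single unit carries the concept.

```python
import random

random.seed(0)
N = 64  # number of units in the network

# Each concept is a pattern of +1/-1 activations across all N units.
patterns = {name: [random.choice([-1, 1]) for _ in range(N)]
            for name in ["cat", "dog", "car"]}

def recall(probe, dead_units=()):
    """Return the stored concept whose pattern best matches `probe`,
    ignoring any 'dead' units."""
    live = [i for i in range(N) if i not in dead_units]
    def score(pat):
        return sum(probe[i] * pat[i] for i in live)
    return max(patterns, key=lambda name: score(patterns[name]))
```

Probing with the 'cat' pattern retrieves 'cat' even with the first 20 of the 64 units disabled; accuracy falls off gradually as more units die, rather than failing outright as a symbolic lookup table would.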
Hybrid systems in AI are those that utilize both symbolic and neural/connectionist programs. To develop hybrid systems, teams need expertise in both methodologies, the logical and the probabilistic.
Hybrid systems share some aspects with how our brain works, both sequentially and in parallel. If hybrid systems can more closely emulate the subtle cooperation between symbolic and neural programs, they will bring us a step closer to the elusive goal of Artificial General Intelligence (AGI).