Hello World: How to Be Human in the Age of the Machine (by Hannah Fry)

Book Review by Kurt Hornburg, November 13th, 2019

In his new book “Narrative Economics”, Nobel Prize winner Robert Shiller argues that ‘stories do still matter’: narratives, whether based on fact or fiction, can go viral and influence the actions of consumers and firms. Hence, economists should pay more attention to people’s narratives and become better storytellers themselves. Hannah Fry, a professor of mathematics at University College London, uses stories effectively in her entertaining and accessible book about data and algorithms.

Ms. Fry gives an example from her research on the mathematics of cities. Robert Moses was one of the most influential city planners of his era; one might say he was the master builder and power broker of New York City. One of his creations was the beautiful, world-famous Jones Beach State Park, built in the 1920s. The beach is less than 20 miles from New York City and is reached by roads that pass under a series of extremely low bridges. These bridges, with about 9 feet of clearance, were allegedly planned by Moses to keep out buses, which require roughly 12 feet. The design made bus access difficult for minority and poor visitors, many of whom did not own cars. Thus, the extremely low bridges acted as a filter, preserving the park for white and wealthy visitors who could more easily reach the beach by car.

Ms. Fry uses this real-world example to illustrate a point: algorithms can be designed, inadvertently or deliberately, to filter access, and they can have unintended consequences. Algorithms can concentrate power, and there are always humans behind them.

In her chapter on data, she provides a brief overview of the trade-off of free services in exchange for our data, and of the implications of this asymmetrical relationship with Google, Apple, Facebook, and Amazon (GAFA). Readers interested in this topic will find a comprehensive examination of how our behavioral data is used to predict, and potentially control, our behavior in the recently published book “The Age of Surveillance Capitalism” by Shoshana Zuboff.

The collection and exchange of personal data by data brokers is controversial. A new law in Vermont, USA, requires all data brokers to register with the state. An article in Fast Company, ‘Landmark law nudges over 120 data brokers out of the shadows’, documents the extent of data collection. In Europe, the GDPR principle of ‘purpose limitation’ requires a clearly stated purpose at the time the data is collected. However, the initially stated purpose of processing often changes as new insights emerge from AI analysis. How this principle is interpreted and enforced by the EU could have significant implications for IoT and AI services.

Furthermore, the book explains the paradox of algorithm aversion: “…we have a tendency to over-trust anything we don’t understand…however, as soon as we know the algorithm can make mistakes, we revert to our own flawed judgement”. This is despite evidence that evidence-based algorithms make fewer errors and more accurate predictions than human forecasters. People want algorithms to be perfect: they are much more critical of machine errors, yet more forgiving of human errors. Research on why people don’t trust algorithms, by Professors Massey and Simmons at Wharton, shows an interesting result: giving people some element of control increases trust and acceptance.

Ms. Fry, as a mathematics professor, is enthusiastic about AI. The latter part of the book uses examples to illustrate both the potential benefits and the challenges of AI in crime, cars, medicine and art. The concepts of sensitivity (avoiding false negatives) and specificity (avoiding false positives) are explained for the non-technologist in the context of medical cancer screening: “The problem is that developing an algorithm often means making a choice between sensitivity and specificity”. When sensitivity is improved, specificity decreases, and these trade-offs have real implications for individuals.
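The sensitivity/specificity trade-off described above can be made concrete with a small sketch. This is an illustrative example with made-up screening scores, not code or data from the book: lowering the decision threshold catches every case (higher sensitivity) at the cost of flagging more healthy people (lower specificity), and raising it does the reverse.

```python
# Illustrative sketch of the sensitivity/specificity trade-off in screening.
# All scores and labels below are hypothetical, invented for this example.

def sensitivity_specificity(scores, labels, threshold):
    """Classify score >= threshold as positive; return (sensitivity, specificity)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening scores: label 1 = has the disease, 0 = healthy.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1, 0.1]

# A low threshold misses no cases but raises a false alarm:
print(sensitivity_specificity(scores, labels, 0.35))  # sensitivity 1.0, specificity 0.8
# A high threshold raises no false alarms but misses a case:
print(sensitivity_specificity(scores, labels, 0.65))  # sensitivity 2/3, specificity 1.0
```

Moving the single threshold cannot improve both numbers at once on the same scores, which is exactly the choice the book says algorithm designers face.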

Another challenge for AI applications in medicine is that the stakeholders can have multiple objectives, which in turn raises fundamental questions of AI ethics. The parties in medicine include private insurers, national insurers, hospitals, and individuals, and each has a slightly different agenda. If AI is used to recommend treatment, what is the objective? Ostensibly it is to give individuals the best treatment; however, other parties want to minimize the cost of treatment.

The section on autonomous vehicles includes another discussion of ethics, posing the question, ‘So should your driverless car hit a pedestrian to save your life?’ A Mercedes spokesperson answered, “If you know you can save at least one person, at least save that one”. This is a choice between two evils, and the answer generated much controversy. The author reflects on driver-assisted cars: “At the heart of this new technology – as with almost all algorithms – are questions about power, expectation, control and delegations of responsibility.”

The conclusion returns to the book’s subtitle, ‘How to Be Human in the Age of the Machine’. The author offers this thought-provoking vision: “(…) this is the future I’m hoping for…one where we stop seeing machines as objective masters and start treating them as we would any other source of power. By questioning their decisions; scrutinizing their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent.”