It is as hard to understand a technological revolution while it is happening as to know what a hurricane will do while the winds are still gaining speed. Through the emergence of technologies now regarded as basic elements of modernity — electric power, automobiles and airplanes, the internet — people have tried, with hit-and-miss success, to assess their future impact.
The most persistent and touching error has been the ever-dashed hope that, as machines are able to do more work, human beings will be freed to do less, and will have more time for culture and contemplation. The greatest imaginative challenge seems to be foreseeing which changes will arrive sooner than expected (computers outplaying chess grandmasters), and which will be surprisingly slow (flying cars). The tech-world saying is that people chronically overestimate what technology can do in a year, and underestimate what it can do in a decade and beyond.
Depending on how you count, the AI revolution began about 60 years ago, dating to the dawn of the computer age, or has just barely begun. Its implications range from utilities routinised into daily life (like real-time updates on traffic flow), to ominous steps toward “1984”-style perpetual-surveillance states (like China’s facial recognition system).
Looking back, it’s easy to recognise the damage done by waiting too long to face important choices about technology — or leaving those choices to whatever a private interest might find profitable. These range from the role of the automobile in creating America’s sprawl-suburb landscape to the role of Facebook and other companies in fostering the disinformation society.
Genius Makers and Futureproof, both by experienced technology reporters now at The New York Times, are part of a rapidly growing literature attempting to make sense of the AI hurricane we are living through.
Genius Makers is about the people who have built the AI world — scientists, engineers, linguists, gamers — more than about the technology itself, or its good and bad effects. The fundamental technical debates and discoveries on which AI is based are a background to the individual profiles and corporate-drama scenes Cade Metz presents. The longest-running, most consequential debate is between proponents of two approaches to increasing computerised “intelligence,” which can be oversimplified as “thinking like a person” versus “thinking like a machine.”
The first boils down to using “neural networks” (software systems loosely modelled on the brain’s web of neurons) that run endless trial-and-error experiments, improving their accuracy as they match their conclusions against real-world data. The second boils down to equipping a computer with detailed sets of rules: rules of syntax and semantics for language translation, rules matching symptoms to syndromes for medical diagnosis. Much of Mr Metz’s story runs from excitement about neural networks in the early 1960s, to an “A.I. winter” in the 1970s, when that era’s computers proved too limited to do the job, to the recent revival of the neural-network approach under the banner of “deep learning”.
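The contrast between the two approaches can be made concrete with a toy task. The sketch below is purely illustrative (it appears in neither book, and all the names in it are invented): it decides whether both of two binary inputs are on, first with an explicit hand-written rule, then with a single artificial "neuron" that learns the same answer by trial and error against example data.

```python
# Illustrative sketch, not from either book: the same toy task (logical AND)
# solved by an explicit rule versus a single neuron learning from examples.

def rule_based(x1, x2):
    """'Thinking like a machine': the answer is written in as an explicit rule."""
    return 1 if (x1 == 1 and x2 == 1) else 0

def train_neuron(examples, epochs=20, lr=0.1):
    """'Thinking like a person': a lone artificial neuron adjusts its weights
    by trial and error, nudging them whenever its guess disagrees with the data."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            guess = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = target - guess
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# Real-world data stands in here for four labelled examples of AND.
examples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_neuron(examples)

def learned(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Both approaches end up agreeing with the data -- but only one wrote the rule.
for x1, x2, target in examples:
    assert rule_based(x1, x2) == target
    assert learned(x1, x2) == target
```

The neuron never sees the rule; it only sees examples and its own mistakes, which is the essence of the trial-and-error approach the book describes.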
Mr Metz tells the story of more than a dozen of the world’s AI pioneers, of whom two come across most vividly. One is Geoffrey Hinton, an English-born computer scientist now in his mid-70s, who is introduced in the prologue as “The Man Who Didn’t Sit Down.” Because of a back condition, Hinton finds it excruciating to sit in a chair — and he has not done so since 2005. This means, among other things, that he cannot take commercial airplane flights. In one crucial scene of Mr Metz’s tale he is placed on a makeshift bed on the floor of a Gulfstream, and then strapped down for the flight across the Atlantic to an AI meeting in London.
The other most prominent figure in Mr Metz’s book is Demis Hassabis, who grew up in London and is now in his mid-40s. He is a former chess prodigy and electronic-games entrepreneur and designer who founded a company called DeepMind, now a leading force in the quest for the grail of AGI, or artificial general intelligence.
“Superintelligence was possible and he believed it could be dangerous, but he also believed it was still many years away,” Mr Metz writes. “‘We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,’ he said. ‘The time we have now is valuable, and we need to make use of it.’”
Making use of that time is the entire theme of Futureproof. Kevin Roose’s book has two sections: “The Machines,” about the surprising potential and equally surprising limits of automated intelligence, and “The Rules,” which offers nine maxims for how people and organisations can best respond.
In the book’s first section, Mr Roose lays out distinctions between jobs and industries in which AI is likely to dominate, and those where it still disappoints. Computers are unmatchable in speed and complexity within known boundaries — the rules of chess, even the waypoints an airplane must follow through the sky. But the more fluid the setting, the greater the difficulties. “Most AI is built to solve a single problem, and fails when you ask it to do something else,” he writes. “And so far, AI has fared poorly at what is called ‘transfer learning’ — using information gained while solving one problem to do something else.”
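The brittleness Mr Roose describes can be caricatured in a few lines of code. The sketch below is invented for illustration (it is not from his book): a "single-problem" learner that simply memorises input-output pairs performs perfectly inside its training boundaries, but has nothing to transfer when asked about anything it has not seen.

```python
# Invented illustration of the 'single problem' limitation: a learner that
# memorises its training pairs and cannot generalise beyond them.

def memorising_learner(training_pairs):
    table = dict(training_pairs)
    # Anything unseen gets a shrug: the learner holds no general concept
    # it could carry over to a new problem.
    return lambda x: table.get(x, "unknown")

# The one task it was built for: labelling a handful of chess terms.
chess_terms = [("rook", "piece"), ("bishop", "piece"), ("castling", "move")]
classify = memorising_learner(chess_terms)

print(classify("rook"))        # -> "piece" (within its known boundaries)
print(classify("en passant"))  # -> "unknown" (a near-neighbour it never saw)
```

A human who knows chess would label "en passant" effortlessly; the memoriser cannot, which is the transfer-learning gap in miniature.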
Technology’s effects are driven by technology itself, but even more by human choice. Mr Roose warns against treating “technological change as a disembodied natural force that simply happens to us, like gravity or thermodynamics.” Instead we all should realise that “none of this is predetermined. … Regulators, not robots, decide what limits to place on emerging technologies like facial recognition and targeted digital advertising.” The message from both of these books is that the sky is not falling — but it could. There is time to make a choice.
©2021 The New York Times News Service