Are Humans and Machines at War?
With robots and artificial intelligence destined to take over the economy, does anyone have an idea of how a truce might end the war between man and machine?
Wired features an article with the title, “Inside the Amazon Warehouse Where Humans and Machines Become One.” The author, Matt Simon, reveals in careful detail how Amazon’s robotic warehouses work today, as seen from the point of view of a human operative. He speculates on how these robotic facilities are likely to evolve in the future. The article ends by raising the question of the future of human work in an increasingly roboticized economy: “Is this kind of automation bound to replace human jobs entirely, or replace parts of those jobs?”
Simon seeks to reassure his readers with this observation: “For quite some time, humans will need to (nearly) literally hold these robots’ hands.” The trope “quite some time” may not quite reassure the younger members of the human tribe who have heard of the “singularity” predicted to take place sometime before the end of their lifetime. It’s the moment when some people suspect artificial intelligence (AI) will take over the world. The older ones will take comfort from the fact that “quite some time” has come to mean counting in decades, by which time they may have retired or even left the hyper-intelligent but ecologically unstable earth in peace.
Anti-technology doomsayers not only worry that robots will take over the dreary jobs humans are so attached to; they also suspect that increasingly intelligent robots may acquire an independence that transforms them into the enemy of the human race. In response, Simon sees a harmonious perspective opening up, in which machines and people will be best friends forever. At the close of the article, he exclaims, “If only the Luddites could see our codependency now.”
Here is today’s 3D definition:
Codependency: The mutually and reciprocally felt requirement of assistance between individuals or groups to guarantee their survival or well-being in a shared environment, even when one of those individuals or groups has no capacity to feel anything
A former Obama administration adviser, R. David Edelman, now director of MIT’s Project on Technology, Economy, and National Security, has supplied the answer to the question of employment: “Most of the research seems to suggest that the direction that automation is moving in is the displacement of skills, not jobs.”
The obvious uncertainty of the rhetorical formula, “Most of the research seems to suggest…” might inspire some to cast doubt on the assertion that follows. They could legitimately ask: What about the other research not included in the “most”? Does it say the opposite? And why does the research he cites only “seem to suggest” rather than affirm? What merely seems to suggest points to unfounded possibilities that are likely to be far less reassuring.
Having established the abstract principle but not necessarily the facts on the ground, Edelman follows up with speculation about what Amazon will do in an undefined future. “That suggests those individuals can, by Amazon, be reskilled or leverage other skills they already have in the same job,” he says. That’s putting a lot of confidence into the power of suggestion.
This is good news for one human profession. Teachers and trainers will apparently have their work cut out for them, at least until such time as the machines take over the training of the trainers. Edelman seems to believe that future generations of skilled human trainers will always be needed to train the next generation. Humanity will thus always remain one step ahead of the machines. A pessimist might object to Edelman’s optimism by pointing out that enticing people to become teachers and trainers may not be so easy, given the low status attributed to them in an economy that traditionally regards them as non-productive expenses.
For the first time in history, progress does not appear as an extension of the present, but as a rupture with both the present and the past. This may help to explain why, in the West at least, rates of opioid addiction and suicide have been steadily increasing. It also explains why the mantra of “learn to code” has become everyone’s obsession in the realm of education, with little reflection about the consequences. Ivan Ruby and Ann-Louise Davidson write in The Conversation: “Governments, corporations, associations in the computer science field and trend-setters all assert that learning to code will play a key role in the future. In this context, learning to code is often presented as a panacea to the job market problems of the 21st century.”
A 2017 article in The Atlantic paints a different picture: “It’s been said that software is ‘eating the world.’ More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code.” People were ready to believe that “a software developer’s proper role was to create tools that removed the need for software developers.” Human society will thus be summed up by its code.
To oppose the tyranny of code, the article highlights the importance of something called “model-based design,” which appears to be a skill only humans possess, as it constitutes the necessary step before any coding is attempted. “[B]efore you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy.”
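The approach The Atlantic describes can be sketched informally. What follows is a minimal illustration, not anything from the article itself: the scenario, the function name and the constraints are all hypothetical, invented here to show what “a concise outline of your program’s logic, along with the constraints” might look like when written down before the implementation.

```python
# Model-based design, informally: state the program's logic and its
# constraints BEFORE writing any implementation, then check the
# implementation against them.
#
# Spec (the "concise outline"), for a toy inventory transfer:
#   transfer(stock, src, dst, qty) moves qty units from bin src to bin dst.
# Constraints it must satisfy:
#   1. qty must be positive and no larger than the stock in src.
#   2. No bin's count may ever go negative.
#   3. Total stock across all bins is conserved.

def transfer(stock: dict, src: str, dst: str, qty: int) -> dict:
    """Move qty units from src to dst, enforcing the constraints above."""
    if qty <= 0 or stock.get(src, 0) < qty:
        raise ValueError("constraint 1 violated: invalid quantity")
    new_stock = dict(stock)
    new_stock[src] -= qty
    new_stock[dst] = new_stock.get(dst, 0) + qty
    # Constraints 2 and 3, checked explicitly against the spec:
    assert all(count >= 0 for count in new_stock.values())
    assert sum(new_stock.values()) == sum(stock.values())
    return new_stock

bins = {"A": 10, "B": 0}
print(transfer(bins, "A", "B", 4))  # → {'A': 6, 'B': 4}
```

The point the article makes survives even in a toy like this: the constraints exist in a human’s head, in ambiguous natural language, before a single line of code does, and writing them down is the design step no amount of coding can replace.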
No one can predict whether we will morph into a culture of design rather than code. But could this intuition provide an ethical rule to prevent the central danger of the singularity: code that writes itself? Could this lead us toward a definition of optimal codependency?
Computer vision will never resemble human vision because human vision includes perception, imagination and memory that function simultaneously and interactively (not just codependently). Furthermore, human vision implicitly contains an ethical component based on the pressure and value of decision-making within a social context.
One expert cited in The Atlantic article explains: “The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”
Coding, by definition, eliminates ambiguity. Today’s pragmatic obsession with coding as the new raison d’être of education and the key to future employment pulls us further away from the idea of model-based design, which, we should notice, is already practiced in the following professions: philosophy, painting, music, architecture, playwriting, scientific research, linguistics, the writing of history and… teaching, when it isn’t transformed into the soul-crushing activity of preparing learners for standardized tests.
Before we celebrate our codependency with machines, we need to understand not only what codependency means, but what it looks and feels like.
*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.