Until Its Time Comes

Human-Machine interaction, image by PhonlamaiPhoto

In his March 31, 2021 article in The New Yorker called “Why Computers Won’t Make Themselves Smarter”, Ted Chiang argues that computers—left to their own devices, I would add—won’t make themselves smarter. Paradoxically, I agree with most of his arguments except for one simple assumption that—unfortunately—deflates his whole essay, and his take on understanding human consciousness (more about that later). As the old saying—derived from the even older “all roads lead to Rome”—goes: “there is more than one way to Rome”(1).

I mostly agree with him that ‘recursive self-improvement takes place—not at the level of individuals but at the level of human civilization as a whole’. Already in the 12th century, Bernard of Chartres said that we were ‘standing on the shoulders of giants’.

What I don’t understand is why Chiang thinks that today’s A.I. researchers are looking for a programme that can magically recompile itself in such a way that each new iteration is both more efficient and smarter—while all this takes place in complete isolation. While that may have been a theme—a meme, even—in science fiction and for some philosophers, I certainly hope they now realise it is not just a cliché but a wrong assumption as well, as Chiang clearly demonstrates in his piece. A.I. research has moved well beyond that, and both Max Tegmark and Stuart Russell (quoted in Chiang’s article as ‘singularity’ proponents) most certainly realise that(2).

As Chiang mentions in his article, ‘Innovation doesn’t happen in isolation; scientists draw from the work of other scientists.’ Now I truly wonder what makes Chiang think that today’s A.I. systems work in total isolation. To give the most obvious examples: Google’s search algorithm is in constant contact with the people using it. The same goes for the algorithms of other search engines like Bing, Yahoo, DuckDuckGo and Ecosia.

Then there are neural networks—Artificial Neural Networks (ANNs), algorithms inspired by biological neural networks and typically running on cloud computing infrastructure—that are also constantly drawing data from every possible source they can get their electronic tentacles on. They are most definitely not operating in complete isolation; quite the contrary.

Chiang touches—albeit inadvertently—on them: ‘Now, alternatively, suppose that you’re writing an A.I. program and you have no advance knowledge of what type of inputs it can expect or what form a correct response will take’. (That is almost exactly the situation most neural networks are built for.) ‘In that situation, it’s hard to optimize for performance, because you have no idea what you’re optimizing for’. Yet neural networks are typically trained to find a particular kind of solution: they are explicitly told what to optimise for, in the form of a loss function, and that objective is the very core of their programming.
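To make that concrete, here is a minimal sketch (in PyTorch, with made-up toy data; the network shape, layer sizes and learning rate are purely illustrative and not taken from any particular production system). The loss function is the explicit, programmed-in statement of what the network must optimise for:

```python
# Minimal illustrative sketch: the loss function *is* the stated objective.
import torch
from torch import nn

# A toy network: predict one value from ten input features (hypothetical sizes).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.MSELoss()                      # "what to optimise for": mean squared error
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Made-up training data, standing in for whatever the designers chose.
inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # how far we are from the stated goal
    loss.backward()                         # gradients of that explicit objective
    optimizer.step()                        # nudge the weights toward it
```

Nothing in that loop invents its own goal; the objective is handed to the network by its designers, which is precisely why ‘you have no idea what you’re optimizing for’ does not describe how these systems are actually built.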

And, indeed, quite often they find that solution. A few examples: protein folding, data mining, image classification, face recognition, text generation, language translation, and more, as this list is far from exhaustive. Sometimes, some of the more cynical amongst us would say, they mimic humanity all too well, as in the cases of Microsoft’s Tay and Scatterlab’s Luda chatbots, which both began to deliver offensive, racist and homophobic posts, and Amazon’s A.I. recruitment tool that displayed gender bias (all three were quickly shut down).

To the best of my knowledge, there is nobody in A.I. research who would isolate their programmes from the rest of the world, mostly because they agree with Chiang that it doesn’t work, and have taken a different approach in their efforts to improve the efficiency of an A.I. To repeat what Chiang said: ‘This is how recursive self-improvement takes place—not at the level of individuals but at the level of human civilization as a whole’. And in today’s society nothing has its finger more firmly on the pulse of human civilisation than Big Tech’s search algorithms and neural networks.

Consciousness image by Andriy Onufriyenko

Also, I think Chiang—purposely or not—skipped the next step in A.I. and jumped to the final one. Initially—a few decades ago—there was a distinction between weak A.I. and strong A.I., but this has since been refined into seven types of A.I.:

  1. Reactive—machines that emulate the ability to respond to different kinds of stimuli, but do not learn from their experience. Example: IBM’s Deep Blue;

  2. Limited Memory—machines that not only react, but are able to learn from experience and from referencing huge volumes of training data (using tools such as backpropagation, deep learning and machine learning). Examples: image recognition AI, chatbots, virtual assistants and self-driving vehicles;

  3. Theory of Mind—machines that will try to understand humans by discerning their needs, emotions, beliefs and thought processes. These exist only as concepts or as Works in Progress (WiP);

  4. Self-aware—machines that have developed self-awareness. The nightmare scenario for most SF novels, as it is assumed that once an A.I. is self-aware, it will develop self-preservation and then either destroy or enslave humanity (why would it want to destroy the ones who created it?). Exists only hypothetically;

  5. Artificial Narrow Intelligence (ANI)—machines that can perform only what they are programmed to do. Encompasses types 1 and 2, even the most complex ones that use machine learning and deep learning;

  6. Artificial General Intelligence (AGI)—machines that can perceive, learn and understand like a human being. Exist only as concepts or as Works in Progress;

  7. Artificial Super Intelligence (ASI)—machines that can perform all of the above, but faster, thanks to overwhelmingly greater memory, faster data processing and analysis, and quicker decision-making capabilities(3). Supposedly the onset of the technological singularity. Exists only hypothetically.

Nobody in his right mind believes we are anywhere near stage 7. Similarly, precious few believe stage 7 will be reached through recursive self-improvement in isolation. And since stage 7 sounds—at least to me—more like a quantitative rather than a qualitative step ahead, I doubt whether it would lead to a technological singularity (I do agree with most of the points that Ted Chiang makes, I just disagree with the arguments that lead to that conclusion). Improvement, though—and even self-improvement—is possible.

Current A.I.—as mentioned above—is in constant interaction with humanity as a whole through the internet. Researchers and A.I. work together to find and implement improvements. And while this may not seem like an ‘intelligence explosion’ to us, imagine what people from one hundred years ago—a mere blip in evolutionary terms—would encounter if they were placed in our time by means of a time machine. The most obvious thing they would notice is that everybody—and I mean literally almost everybody—uses a small gadget called a ‘smartphone’ that can be connected to almost all of humanity’s knowledge, almost instantly, almost all the time (that we mostly use it for the latest gossip is a different matter). If that’s not the manifestation of an ‘intelligence explosion’, then please tell me what is.

Mr Universe

Therefore, I suspect that there will be more ‘intelligence explosions’ in the future. But whether one of those will eventually lead to a technological singularity—the ultimate godhead for true nerds—is impossible to say. Eventually, resources might be the most limiting factor—see also Arik Kershenbaum’s article ‘How Intelligent Could Life Be Without Natural Selection?’ in Nautilus Magazine, in which he describes Anatoly Dneprov’s story Crabs on the Island, where self-replicating, mechanical crabs spread exponentially across an entire island. In reality, though, such crabs would run out of metal, because resources on that island are limited. Similarly, even if an ASI dismantled all the planets in the solar system to turn them into one giant Matrioshka Brain (as depicted in Charles Stross’s fixup novel Accelerando), it might still find that abundance of computronium too limiting, and choose to travel to places with more resources (say: our Galactic Core).

In any case, many researchers seem to agree that the (self-)awareness and (self-)consciousness stage is essential for developing truly intelligent A.I. (AGI), yet Chiang seems to say that humans may be unable—by definition—to understand the human brain, meaning there is no way for us to develop consciousness in a machine, either.

With regard to the human brain and consciousness, I’d like to counter Emerson Pugh’s ‘If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.’ with ‘Because we are powered by a mind running on a flexible and immensely complex brain, we might have the tools to understand it.’ In other words, humanity as a whole might develop enough knowledge to eventually understand the brain. As with all current science, this is not an effort of single geniuses, but a massive group effort across the globe. And a new breakthrough will happen when its time has arrived. Chiang mentions the Manhattan Project as an example: basically, ‘it took the combined populations of the US and Europe(4) in 1942 to put together the Manhattan Project’. I agree, and I will be the last to argue that 8 billion A.I.s running an emulation of human civilisation will create a technological singularity.

But what Chiang—again, purposely or not—overlooks is that current A.I. is developing in concert with human civilisation. As it is, creating a human-equivalent A.I.—an AGI—may very well be today’s Manhattan Project, given the resources invested in it. All the Big Tech (and many, many smaller tech) companies across the world are chasing it in one way or another.

Simultaneously, there’s also a huge amount of research into the workings of the human brain. Again, while a single human might not be able to fully comprehend the workings of the human brain, the whole of humanity might. Unravelling how consciousness works might be the most important step in that endeavour.

On top of that there’s a qualitative difference between inventing the atom bomb and understanding consciousness: we thought the former was impossible, while with the latter we have proof of concept. Or, more precisely, more than eight billion living proofs of concept. So it’s not impossible. Therefore I think humanity—quite possibly with the help of A.I.—will eventually unravel the mystery of consciousness.

Because that is, I strongly suspect, the main component needed to develop AGI—Artificial General Intelligence. Consciousness, in combination with the development of language (which came first? Or did they go hand in hand?), acted as an evolutionary accelerator. Now, benevolent behaviour (for the species) could not only be taught directly from one generation to the next (instead of over many generations through ‘survival of the fittest’), but improved upon as well. This combination of consciousness and language provided humanity with an insurmountable advantage as we began to change our environment to our needs rather than the other way around(5).

As such, computers won’t make themselves smarter because they lack the trait(s) needed: (self-)consciousness and a flexible language. Will they become (self-)aware—the first step on the gradual consciousness curve—by understanding human language as they massively interact with humans (see: Google Translate, digital assistants and a plethora of chatbots)? I suspect not. Awareness and self-awareness are deeply ingrained evolutionary characteristics that arose in an environment where such qualities would thrive (if that sounds like a Catch-22, then I have a surprise: it’s how evolution works).

Machines did not arise in an environment where such qualities would be beneficial—they were created. If machines develop a simulacrum of self-awareness purely through their interactions with humanity, then it will be strange at best and quite alien at worst (not that an alien type of consciousness would necessarily be bad in itself, but it would be bad for our understanding of it). They might develop a modicum of self-awareness if they were in regular contact with other machines, and recognised them as such. But right now I suspect the best way to imbue an A.I. with a human-like consciousness is to explicitly program it to emulate humans, so well that it becomes indistinguishable from them(6).

It might also be a good way to instil a strong sense of ethics in A.I., which in turn might benefit humanity, so that when an ASI arises, it might view humans as a species worth keeping.

There are voices saying that we should not try to develop AGI at all, because—here comes another deep-seated science fiction cliché—the machines will take over, become ASI and then obliterate or enslave us. Now, even if these voices are correct, suppressing research into AGI is not the right thing to do.

For one, because such a diktat is not globally enforceable. If we—fill in your version of ‘we’—do not try to develop AGI, someone else will. And—I strongly believe this—if the time is right for such a breakthrough, it will happen(7). Better to be the inventor than the copier.

As such, AGI won’t happen—until its time comes. And that’s why it is good foresight that the afore-mentioned Max Tegmark co-founded the Future of Life Institute(8)—on whose scientific advisory board the equally afore-mentioned Stuart Russell sits—one of whose main aims is to prevent misalignment between the goals of A.I. and those of humanity. Instead of insisting that AGI will never happen, we should be preparing for it. As with actively mitigating climate change, our children will be thankful.

Footnotes:

  1. As the old adage ‘there is more than one way to skin (Schrödinger’s) cat’ is considered politically incorrect;

  2. Meaning they don’t think a technological singularity is impossible, but that it’s not going to happen through ‘recursive self-improvement’;

  3. Which to me sounds like a quantitative improvement, not a qualitative one;

  4. Minus the part of Europe occupied by the Axis Alliance, which, in 1942, was almost all of it;

  5. A gross oversimplification, but not far from the truth;

  6. And I have written a duology encapsulating and demonstrating that idea: “The Replicant in the Refugee Camp” and “The Replicant, the Mole & the Impostor”, which I am pitching to agents and publishers as you read this;

  7. Inventions like radio, the atom bomb and smartphones happened when the circumstances were right. The very first one is hard, but once proof of concept is delivered, the technological breakthrough will be copied across the world;

  8. Not to be confused with the Future of Humanity Institute;