From the Editor's Desk - Winter 2026

Printed in the Winter 2026 issue of Quest magazine.
Citation: Smoley, Richard. "From the Editor's Desk." Quest 114:1, pg. 2.

Artificial intelligence (AI) is both the hero and the villain of the current scene. It has given virtually anyone access to an enormous amount of information accumulated over centuries.

But AI is also a source of anxiety, much of it justified. An article in The Wall Street Journal of October 28, 2025, bears the headline "Tens of Thousands of White-Collar Jobs Are Disappearing as AI Starts to Bite." The white-collar world is suffering from the effects of automation, much as blue-collar employment was hollowed out before it.

I’m not qualified to comment on the effects of AI from a socioeconomic standpoint. Instead, I would like to discuss some of the fears that AI is arousing on a deeper level.

People fear that AI will soon overtake human intelligence, if it has not already done so. This is a dismaying thought, but in my opinion for very different reasons than are usually imagined.

Over recent decades, psychologists have concluded that there are many types of human intelligence. Daniel Goleman popularized one of them in his 1995 book Emotional Intelligence: Why It Can Matter More Than IQ. The psychologist Howard Gardner has delineated nine different types of intelligence: bodily-kinesthetic; existential; interpersonal; intrapersonal; linguistic; logical-mathematical; musical; naturalist; and spatial. (Douglas Keene goes further into these types in this issue's "Viewpoint.")

It is very far from clear to what extent AI will be able to reproduce all these forms of intelligence. No doubt it can create a kind of simulacrum of interpersonal intelligence, for example, but it would be foolish, I think, to overestimate its capacities even here. If, as we commonly hear, 93 percent of communication is nonverbal, how exactly is AI going to simulate that?

Gardner’s list, however comprehensive, leaves out the most important form of intelligence, which in the old days was called the common sense: the ability to integrate all of these sensory inputs into a coherent view of the world.

In a 1950 paper, the computer scientist Alan Turing asked, "Can machines think?" He proposed what has come to be known as the Turing test: a set of questions is posed to both a human being and a computer, and their answers are compared. Is it possible, Turing asked, for a machine to give answers that are indistinguishable from those of a human being? He believed it was: arguably, then, the machine is capable of what could be called consciousness.

Since then, any number of computer programs have passed the Turing test admirably. Are these machines capable of thinking in the way a human is? Turing might have thought so. I leave you to answer the question for yourself.

Turing published his paper when behaviorism was at the peak of its influence in Anglo-American thought. Behaviorism, in certain forms, characterizes all of human consciousness in purely exterior terms: your feeling of anxiety is nothing more than your disposition to express certain forms of external behavior, verbal and nonverbal. An extreme version would say that your having a dream consists of nothing more than your disposition to say you have had that dream after you awaken.

Contrary to this purely exterior concept of human consciousness, I would say that it is complex and mysterious enough to be beyond mechanical reproduction, at least in any technological form that we can presently imagine.

All the same, there is something disturbing about the fear itself. Are human beings so mechanized at this point that they sincerely believe their minds are nothing more than giant computers? Certain technocrats may well believe this, but that would say more about them than about the reality.

Another, deeper fear is that of AI as a sinister, evil intelligence that could overwhelm and enslave us all. I have my doubts in this regard as well: for the foreseeable future, humans will still have the power to pull the plug on the machine.

But I believe this fear of domination by mechanical intelligence (expressed, for example, in Fritz Lang’s film Metropolis, as Ray Grasse points out in this issue) is a displacement of a much greater and more pressing fear: the realization that human beings are capable of unconscious and machine-like behavior, which we do not ultimately understand and which can express itself in horrifying ways. To take a random example, one historian says of the battle of Antietam in 1862: “There was something fearless and primitive and elemental in the combat that morning, a kind of madness or possession, as soldiers left their humanity behind and became mere feral killing machines.”

Ultimately, I do not think that people fear machines, however supposedly intelligent. They unconsciously fear the sudden and ruinous eruption of their own mechanicality, not knowing whence or how it arises or when it may burst out—in others or in oneself.

Richard Smoley