Doug Weiss

Artificial Intelligence

Scientist and science fiction author Arthur C. Clarke famously wrote that "any sufficiently advanced technology is indistinguishable from magic." It is a prescient observation as applied to Artificial Intelligence. Unless you have been living under a rock or in extreme isolation, you will have noticed that Artificial Intelligence, or AI as it is commonly known, enjoys near-daily discussion in the media and especially online. In part this is due to the recent public introduction of several new examples of AI in the realms of both art and general knowledge. ChatGPT, a general-knowledge AI, in particular has captured a great deal of attention, as well as controversy, of late.

If you have not experienced ChatGPT yourself, you may wish to give it a try by visiting OpenAI's website. Those who advocate for ChatGPT extol its benefits as a successor to search engines, capable of providing detailed and sophisticated explanations of virtually any topic, with the 'intelligence' to excel at tests of superior knowledge such as bar exams and the MCAT, among others. Critics warn that it is also capable of getting things wrong, that it sometimes invents answers out of whole cloth, that it is capable of bias, and that it is, in the strictest sense, incapable of recognizing its own capacity for error.

Whether AI such as ChatGPT and its successors are dangerous developments, as some are inclined to argue, or tools for extending human knowledge, as others say, remains to be seen. One thing is certain: they are not at present intelligent, hence the distinction I drew in placing the word in single quotes. Nevertheless, an AI arms race appears to be upon us despite warnings and concerns expressed by highly reputable thinkers and policy makers. So, as my modest contribution to the debate over whether the public should facilitate, much less endorse, further development of this tool, I hope to set some things straight about just what AI actually is and why it is not, and may never become, truly intelligent.

Before we delve into the science of AI and its limitations, allow me to introduce a relevant subtopic: what constitutes intelligence. In the simplest sense, intelligence is the ability to acquire and make use of information and skills in order to make decisions for one's benefit. By this definition, dogs, cats, chimps and crows are intelligent. However, we would not place them in charge of making decisions for our benefit, and we might well criticize the limits of their reasoning even for their own welfare. Whether we may find ourselves willing to cede decision making to a computer is yet another question, and a terribly important one in view of our history of placing trust in technologies we imperfectly understand.

We also recognize that intelligence can take many forms. Harvard psychologist Howard Gardner advanced a theory of multiple intelligences in his 1983 book, Frames of Mind: The Theory of Multiple Intelligences. Examples include musical, spatial, linguistic and interpersonal intelligence, among others. Interpersonal intelligence, more commonly known as Emotional Intelligence, has emerged as a critical faculty, one that governs to the greatest extent our ability to function in any group setting or endeavor. Popularized by another Harvard psychologist, Daniel Goleman, in the 1990s, EI is commonly defined as the ability to understand and manage one's own emotions and to recognize and influence those of others. Hold that thought for a moment while I briefly digress on how AI actually works.

Precursors to AI as we know it have been around for some time; early chess- and game-playing programs are good examples. In very simple terms, they leverage the ability of computers to store and rapidly access large amounts of data. Since complex games are nothing more or less than structured encounters between two or more players, it is possible to collect examples of all, or nearly all, possible encounters and their outcomes. A computer can access this knowledge in a way, and to an extent, that even chess grandmasters are hard pressed to emulate, and for this reason such programs generally enjoy success when pitted against their human opponents. Notice I said all or nearly all encounters; therein lies one limitation, and sometimes humans can and do win against their machine counterparts.
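To make the idea concrete, here is a minimal sketch of the brute-force approach behind those early programs, using a game small enough to search completely (tic-tac-toe rather than chess; the function names are my own, for illustration only): the program simply enumerates every possible continuation and scores the outcomes.

```python
# Exhaustive game-tree search for tic-tac-toe: enumerate every possible
# sequence of moves and score the final outcomes. This is the "store all
# encounters and their outcomes" idea in its simplest form.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with `player` to move:
    +1 if X can force a win, -1 if O can, 0 if best play is a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: draw
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None  # undo the move before trying the next one
    return max(values) if player == 'X' else min(values)

# With both sides playing perfectly, tic-tac-toe is a draw:
print(minimax([None] * 9, 'X'))  # 0
```

Tic-tac-toe has only a few hundred thousand move sequences, so the whole tree fits in a blink; chess does not, which is why real chess programs prune the search and lean on stored opening and endgame knowledge, and why the coverage is "nearly all" rather than all.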

If we scale up this simple idea, providing a program with access to an even wider array of information, linguistic skills, and data about how humans act and react in given circumstances, we can create a model that appears very human. Good enough, in fact, to pass the famous Turing test, conceived by the scientist and cryptographer Alan Turing in 1950. The Turing test is a method for judging a program's ability to exhibit intelligent behavior: a human judge puts questions to both the program and a human correspondent and tries to tell from the replies which is which. And indeed, ChatGPT and other recently developed AI appear to pass this test with flying colors. Even more, they exhibit what looks like self-awareness and emotional context. But does this mean they can actually think and feel?

One definitional challenge to the question of whether computers can think rests on the ability to formulate knowledge that has not been explicitly provided. In the chess example there is a large but finite limit to the permutations of the pieces' movements. There is no such limit to human knowledge. So a truly intelligent AI needs to be able to develop new and original ideas from the data it stores. ChatGPT and its peers appear able to do so up to a point, but for now they remain limited, which is why they make mistakes and sometimes invent answers where their knowledge base is thin. You may say in their defense that this is a challenge that can be remediated in time; all it requires is more information. To a degree that is certainly true, but let us return to the earlier topic of emotional intelligence.

Computers and computer programs are very good at doing repetitive tasks, calculating impossibly large numbers and screening equally huge volumes of raw data at speeds no human could possibly replicate. What they do not do is feel. By feel I do not mean process sensory input, as in see or hear; they lack emotion. In a very real sense, the brightest AI is nothing more than an electronic golem. Imbued with sufficient examples of how humans act and react, they are more than capable of giving the appearance of feeling, but they do not feel, at least in part because their actions and reactions have no consequence for them.

Remember that scene in Clarke’s 2001: A Space Odyssey where the HAL computer pleads with the human astronaut, Dave, not to dismantle its brain? That was Clarke revealing the magic trick. HAL had a directive but no sense of consequence. It could reason and conjecture that this or that action was inimical to the success of its mission, but it could not relate to the consequence of its own actions. Even in its death, as it expresses something akin to fear, HAL lacks that ineffable spark that makes us human.

It is not just a sense of consequence that HAL and its real-life imitator, ChatGPT, lack, but a whole raft of human traits: intuition, compassion, morality, empathy, instinct and more, abilities that humans possess not solely by virtue of thought but as endowments of the human spirit. No doubt, given sufficient time, we could teach programs to mimic such attributes, but how could we ever trust them to make decisions for our welfare based on mimicry?

For this reason, my modest proposal is that we realign our objectives and focus less on AI as a substitute for human intelligence and more on tools to aid it. Use computers to do what they do best: conduct rapid data analysis, survey the body of known information, assess the probabilities of success for alternate courses of action; advise, but do not determine. Humans are imperfect and will continue to make imperfect decisions with or without computer assistance, but they will be humankind's decisions and not the impersonal, inanimate reckonings of a machine we mistake for a god.
