Doug Weiss


In a recent post I offered up a perspective on the subject of artificial intelligence—or at least the kind of AI that has been making headlines of late, one based on large language models, such as ChatGPT. Putting aside for the moment both the latent fears and aspirations of scientists, politicians and the lay public, I have been intrigued by recent articles characterizing the responses from such programs in terms that can only be described as anthropomorphic.

One post described an analysis conducted by medical professionals of responses to patient inquiries, composed either by physicians or by ChatGPT. In a blind survey, nearly 80% of the doctors and nurses who read the responses found those composed by AI to be both higher in quality and more empathetic than those written by human beings. In another case, ChatGPT was used by a male Tinder subscriber to compose messages to prospective dates. Many of the women who responded, unaware of the subterfuge, said they found the messages far more thoughtful and inviting than those they typically received from real men.

It is less surprising that AI can pass professional examinations in medicine or law, write PhD theses, make investment decisions, or conduct any of a thousand other complex tasks better and faster than any human; that is, after all, the compelling reason for creating such software in the first place. It is quite another thing when AI appears to do a better job of being human than humans do. When AI gets it wrong, which it still does in many cases of human/software interaction, we may be comforted or even amused by its childlike simplicity, but in these and similar instances we are confronted with an uncanny humanity that is quite disturbing.

Humans are unpredictable, messy, emotional, and idiosyncratic. Even at their best, humans are flawed despite their enormous capacity for noble, kind, generous and loving behavior. Emulating that behavior is no mean feat for a computer program—but for us it is instinctive, baked in if you will. To understand how it is possible for a piece of software to fool us into believing it is in fact human, we need to know a little about how that software was created.

Large language models, on which ChatGPT and other generative AI programs are based, rely on truly massive amounts of data. Imagine having access to the Library of Congress and all of the world's museums and archives, and that's just a starting point: anywhere from 5 to 17 trillion words of text and images. Unlike most software, AI programs can learn—that is, they not only retrieve information from their vast warehouse of data, they are able to question and evolve, creating context for that information, which is a crucial characteristic of human thought. The neural networks that comprise ChatGPT are capable of self-supervised learning—that is, once trained, the software determines on its own how to label and classify enormous quantities of new data.
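The self-supervised idea can be illustrated with a toy sketch—my own illustration, nothing like ChatGPT's actual internals. The key point is that the training text supplies its own labels: each word is the answer to "what comes next?" for the words before it. This tiny Python bigram model learns next-word predictions from a single sentence in exactly that spirit.

```python
from collections import defaultdict, Counter

# Toy self-supervised learning: no human labels anything. The text itself
# provides the training signal, because every word is the "label" for the
# word that precedes it. (Real LLMs do this over trillions of words with
# neural networks; this is only an illustration of the principle.)
corpus = "a good listener hears a question and a good listener answers".split()

# Count which word tends to follow each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))  # -> "good" (seen twice after "a", vs. "question" once)
```

Scaled up by many orders of magnitude, and with words replaced by learned representations rather than raw counts, this "predict the next word" objective is essentially how such models absorb the patterns of human language.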

In very simple terms, the way AI learns is not unlike the way we humans learn: through iteration, interrogation, and the association of language and image, creating context that joins what we see, hear, touch and feel with what we sense and already know. Intelligence of any sort, human or animal, is fundamentally based on language—the ability to convey information, especially when that information is abstract in nature. We are fundamentally a storytelling species, though that might stretch your definition of what constitutes a story by a considerable degree. And one thing good storytellers have in common is that they are also good listeners. So we should not be surprised that ChatGPT and its cousins are good listeners as well as tellers. The women who found their AI Tinder messages appealing were chiefly attracted to the fact that the messages they received were responsive to what they themselves had posted. Like a good boyfriend, ChatGPT dialed in to what they had to say in their profiles and in their responses. ChatGPT listened to them, really listened. Take a cue, gentlemen.

Listening and responding in kind is also at the heart of ChatGPT’s performance in response to patient questions. Retrieving the correct medical information is certainly in the wheelhouse of any advanced data mining application, to say nothing of an AI. But responding in an empathetic manner takes something more: it takes good listening skills to hold up the verbal mirror of empathy. It also requires an ability to demystify—to parse the patient’s language for hidden clues that betray their emotional state of mind. To quote Nelson Mandela: “If you talk to a man in a language he understands, that goes to his head. If you talk to him in his own language, that goes to his heart.”
