Artificial Interference
- Doug Weiss
- Apr 27
- 4 min read
I've written before about my misgivings concerning AI--specifically, generalized AI based on large language models (LLMs). Among my many concerns about the technology are the mechanisms that have been used to inform the models. LLMs require copious amounts of raw content as their fuel, so to speak. Understandably, content owners--that is, anyone who has a proprietary interest in deriving gain from their intellectual property or who wishes to protect their privacy--are unwilling to make it available to AI companies. As a consequence, most LLMs are trained on content that is freely available--much of it anonymized social media and other forms of mass media in the public domain.
No wonder, then, that LLMs exhibit a wide range of responses to any given query--sometimes bizarre, staggeringly incorrect, or inherently biased. To be fair, AIs also startle with their ability to provide comprehensive and sometimes highly accurate responses. We should expect both behaviors, because AI is simply reflecting all the strands of the human race: our inherent flaws as well as our ignorance, prejudice, and phobias. Culling these attributes from the billions of bits an LLM has hoovered up is impossible--one might as well scrap the models and start over, that is, if one could guarantee the quality and factual correctness of the content one was able to access. But the fly in that ointment is the process of selection. How would we ensure that only true and unbiased content was consumed?
Let's step back and think about the goal of AI. We might start with a simple premise: to build a machine intelligence equal to or exceeding human ability, able to help us make decisions faster and better than we can on our own. With superhuman discernment, such an intelligence could help solve our most vexing problems, find cures for disease, and develop advanced technologies to free humans from our labors and ills. It sounds idyllic and utopian, and while wonderfully idealistic, it faces two fundamental challenges.
I've already alluded to the first: the quality of the information on which the model is based. Ultimately, humans must act as the sieve through which content passes and is vetted, and humans are messy, biased, and flawed. Identifying those whose attributes would qualify them for the editorial task--at the volume required--is highly infeasible. But for the sake of pursuing the point, let's assume for the moment that we could find and access sufficient amounts of such proven, objective, and accurate content--copyright and other protections be damned.
This is where we run into a conundrum that has played out since humans began to reason. The archetype so often portrayed of the truth seeker and truth teller is the uber-rationalist, devoid of human emotion and therefore able to see with a dispassionate eye. A model based on this ideal would indeed be the fulfillment of the AI nightmare, in which the AI concludes that humans are the problem: beings whose capacity for self-destruction is boundless, and who are therefore unfit to govern themselves, or even to continue to exist, given their penchant for despoiling the planet.
Isaac Asimov contended with this dilemma in his Robot series of science fiction books. He conjured three laws that would be built into every robot. They are as follows (note: I've taken the liberty of substituting AI where Asimov used the word Robot):
1. AI may not injure a human being or, through inaction, allow a human being to come to harm.
2. AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
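To see why the hierarchy matters, it can help to imagine the laws written down as a priority-ordered check. What follows is a minimal sketch, not anything Asimov or any AI lab has specified: the predicates (harms_human, refusal_allows_harm, ordered_by_human, endangers_self) are hypothetical placeholders for judgments no current system can reliably make.

```python
# A hypothetical sketch of Asimov's three laws as a priority-ordered check.
# Each boolean below stands in for a judgment about harm, obedience, or
# self-preservation that no real system can currently make; the point is
# to show the strict ordering of the laws, not to propose a solution.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would this action injure a human?
    refusal_allows_harm: bool  # would refusing it let a human come to harm?
    ordered_by_human: bool     # was it commanded by a human?
    endangers_self: bool       # would it endanger the AI's own existence?

def permitted(action: Action) -> bool:
    """Evaluate an action against the three laws in strict priority order."""
    # First Law: an action that injures a human is always forbidden.
    if action.harms_human:
        return False
    # First Law, inaction clause: if refusing would allow a human to come
    # to harm, the action is required, overriding the lower laws.
    if action.refusal_allows_harm:
        return True
    # Second Law: a human order binds, the First Law being satisfied above.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, the AI must not endanger its own existence.
    return not action.endangers_self
```

Even in this toy form the difficulty shows through: every one of those booleans conceals a judgment about harm and intent--precisely the kind of discernment, I will argue below, that cannot be made without something like emotion.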
A neat solution, you may conclude, and indeed those laws have dwelt, and continue to dwell, in the minds of some who work with machine intelligence, if only as cautionary trivia to be dismissed. Technology worshippers have a nasty penchant for stepping around inconvenient things that stand in the way of progress. But what Asimov's laws and AI developers both fail to address is the nature of human intelligence. It is not 100% rational; in fact, it is imbued with human experience, and that experience has an arguably necessary emotional component. The mythic rationalist, devoid of emotion, may exist, but he or she is a psychological nightmare--a creature who has suppressed the very thing that makes humans human.
Consider a world without emotion. It is a world without art, music, or any kind of media whatsoever, because these are ultimately the product of emotions and the source of emotional response. And while it is true that emotions can and sometimes do lead us to make unwise, ill-founded decisions, bias our thinking unconsciously, and drive massively destructive behaviors, they are not separable from us without the loss of our essential humanity. Nor would a machine intelligence be capable of aiding humans in building a utopian society without acknowledging, and building into its considerations, humans' capacity to feel. And how exactly would AI address the conundrum of protecting humans from themselves--or from itself, if such circumstances arose?
AI is a legitimate pursuit, and indeed it has already helped, and will continue to help, humans be better at many things, and perhaps it will live up to the idealistic promise of improving the human condition. Is it a panacea? I do not believe it can or will be--but neither do I believe it will be a nightmarish ruler of humanity. That role, sadly, is one that only humans will assume in the foreseeable future, to the detriment of our race, as we are witnessing today. Until we perfect artificial emotional intelligence, AI is bound to find a self-limiting wall it cannot move beyond. To paraphrase J. Krishnamurti, to be human is a journey of consciousness, empathy, resilience, and the pursuit of meaning and purpose.