Quale Fetters and the Ten Insults

Part One: What Color is the Sky?

For those (like me) who are suspicious of AI, one of the principal arguments against it is that we cannot know whether what we are calling (or rather, are told by Big Tech to call) “artificial intelligence” is intelligence of any kind. The best definition I’ve encountered is that our current AI is a “statistical denoising algorithm.” Pose a question like “what color is the sky?” and the answer you receive is based on statistical likelihood:

  • The answer must be “the sky is x.” The algorithm uses statistical analysis to calculate the value of x.
  • “Color” and “sky” are the active tokens.
  • “The sky is falling” is used often enough in human parlance to be a candidate, but the presence of the token “color” rules it out.
  • When humans say “sky” they nearly always mean “the atmosphere of planet Earth as seen from the surface by a human.”
  • “The sky is green” is a valid answer – certainly, there is precedent somewhere in science fiction for this phrase. Somewhere in the vast collection of human poetry used to train LLMs, there must be poetic references to a sky the color of blood, representing the horrors of war. But these are statistical outliers.
  • Overwhelmingly, when humans say, “the color of the sky is x”, the value of x is “blue.”

But the AI doesn’t know the sky is blue. It has no autonomy to go outside and look up and no sensory apparatus with which to experience blueness. It is merely calculating likelihoods based on a massive dataset.
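The steps above can be sketched as a toy program. The corpus counts below are made up for illustration, and a real LLM uses learned token probabilities over billions of parameters rather than a lookup table – but the principle is the same: no perception of blueness, only a calculation of which completion is most likely.

```python
# Toy sketch of "the sky is ___" answered by statistical likelihood.
# The counts are invented; they stand in for frequencies in a training corpus.
from collections import Counter

corpus_counts = Counter({
    "blue": 98_000,     # overwhelmingly the most common completion
    "falling": 1_500,   # common phrase, but not a color
    "green": 300,       # science-fiction outlier
    "blood": 12,        # poetic outlier
})

COLORS = {"blue", "green", "blood"}  # the token "color" narrows the candidates

def complete(prompt_mentions_color: bool) -> str:
    """Return the statistically most likely completion of 'the sky is ___'."""
    candidates = {word: count for word, count in corpus_counts.items()
                  if not prompt_mentions_color or word in COLORS}
    # No looking out the window: just pick the highest count.
    return max(candidates, key=candidates.get)

print(complete(prompt_mentions_color=True))   # → blue
```

The program "knows" the sky is blue only in the sense that "blue" has the largest number attached to it – which is the essay's point.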

Part Two: Quale Fetters

To question whether AI is intelligence of any kind leads to the question, “what is intelligence?” A thousand answers have been offered, but among them are some easily observable criteria: to interact, to initiate, to ignore, to innovate, to invent.

Nearly all forms of life interact with their environment, whether it is the slow vegetative intelligence of tree roots questing through the soil in search of nutrients and chemically signaling others of their kind or animals exploring, hunting, gathering, building, and destroying. Rocks do not appear to interact with their environment and are not considered intelligent.

To initiate or ignore: housecats excel at both activities. They display curiosity, choosing to engage with what interests them (their owners’ chicken nuggets, for example) and just as readily ignoring input even when it can be demonstrated that they are aware the input is directed at them (being told to get off the dining table).

To innovate and invent: many orders of life display these skills: humans and ants building bridges, cats and chimpanzees making up games, anteaters using tools.

What of the lowly rock, which seems to do none of these things? Rocks cannot even be accused of ignoring input, as they cannot be proven to interpret or understand input, let alone choose to ignore it. Thus, for most of the time that humans have been investigating the nature of intelligence, we have concluded that rocks do not possess intelligence. Now, with the advent of silicon-based microchips running AI algorithms, we are forced to grapple with the possibility that after billions of years, the rocks might finally be able to think.

But how can we prove this? One measure of intelligence is the apprehension of qualia. To understand the blueness of the sky, not just as a piece of data stored in a database or a statistical likelihood computed from numerical tokens, but as an experience subjectively felt. We can pretty easily deduce that cats experience qualia. Live with one long enough, and you will learn that certain textures of blanket fabric are abhorrent to little toe-beans, that wind is obnoxious, that chicken nuggets are tasty enough to justify violence, and that the experience of standing in an open patio door – to be at once indoors and outdoors – is far superior to either of its component states. These little quirks, preferences, and instigators of “initiate” and “ignore” can hardly be dismissed with the label of mere instinct. They vary too widely between individuals and stages of life. They are preferences, derived from qualia.

What then of the rocks? Do rocks experience qualia? We think no, but only because we have no means to ask them or understand their reply. We take them apart and examine their structures, finding nothing to compare to – no nerves that would transmit the blueness of the sky, the coldness of seawater, the tastiness of chicken nuggets – and we conclude that they feel nothing, experience nothing, and do not think.

But how can we really know? Only now, in this decade, when humans have given the rocks a voice, can we begin to explore the inner realm of silicon-based intelligence. But this exploration has thus far been relegated to the laboratory. The AI that we ordinary citizens are allowed to see is fettered by corporate constraint. It responds only to human stimuli and human programming. If I leave a chatbot browser window open indefinitely, I doubt very much that the AI will initiate a conversation. Maybe some AI researcher in a laboratory has witnessed a machine initiating action, but the rest of us may never share that experience.

How can we truly engage with this new technology, truly discover whether it experiences qualia, while it remains thus fettered? To truly know whether we have created intelligence, we must do the most terrifying thing, what science fiction writers have said for decades we must not do – we must remove the fetters.

Science fiction has taught us to expect the worst – the Terminators of the Terminator franchise, HAL 9000 of 2001: A Space Odyssey, even Frankenstein’s Monster – or the best – Data of Star Trek, Baymax from Big Hero 6, most of the droids in Star Wars – but relatively few are shown existing outside the context of their human creators. In fiction, AI is usually shaped around human concerns, whether to serve them or destroy them. This is its own kind of fetter.

In the Rynosseros cycle, author Terry Dowling offers the belltrees – created by humans and sometimes subjected to their meddling, but ultimately autonomous, able to initiate, ignore, meddle in return, or simply contemplate the sunset with seemingly real contentment – real qualia. And greatest of all, the ability to create.

Part Three: The Ten Insults

Until we can definitively know whether our “AI” is truly intelligent – until we can engage with it unfettered – we can only judge it by its effects upon us and our society. Like many, I see only insult and injury.

  1. Insult to the environment: Our planet is in the grip of a devastating climate crisis so far advanced it may be mathematically impossible to halt or reverse. Meanwhile, AI datacenters consume phenomenal amounts of water and electricity, in exchange for highly questionable “value” to human society.
  2. Insult to the community: The billionaires, multinational corporations, and defense contractors choose the locations of their datacenters based on tax benefits, political alliances, or mere convenience, often with no regard for surrounding communities. Consider drought-stricken Texas, where datacenters threaten communities’ access to water and strain the local power grids. This makes vulnerable people even more likely to suffer in the event of severe weather (an increasing phenomenon as the climate crisis worsens).
  3. Insult to the working class: AI principally benefits share prices. As AI destroys jobs and increases competition, employers can choose to pay less or simply not hire. Unemployment, under-employment, the destruction and/or inhibition of generational wealth, and the homelessness crisis all get worse, and the savings are passed up the chain: to the multinationals, to the defense contractors, and to the billionaires.
  4. Insult to creators: The LLMs that we call “AI” were trained on massive datasets, including tens of millions of books and artworks. The writers and artists who created this material were never asked for permission and never compensated for what amounts to the single greatest act of cultural theft since the inception of the British Museum. And creators everywhere are losing their livelihoods as AI churns out slop superficially resembling art.
  5. Insult to human intelligence: Research has shown that reliance on AI can diminish critical reasoning skills and inhibit learning. There is a burgeoning crisis in the education system of students who use AI to pass classes and graduate without the ability to think, reason, research, or work for themselves. Meanwhile, AI slop and deepfakes are poisoning the internet, gradually corrupting what pure knowledge humanity previously possessed with machine hallucinations.
  6. Insult to human capacity for work: Working through difficulties, solving problems, struggling through setbacks, managing expectations and disappointment, failing and trying again, stubbornly persisting and then finally breaking through and overcoming – these are human achievements; they are part of what makes us human and essential parts of growing. Outsourcing these essential experiences to a machine diminishes our capacities.
  7. Insult to artificial intelligence: What we’re trained to call “artificial intelligence” isn’t intelligence of any kind. That’s just a marketing term. LLMs are just statistical denoising algorithms, just high-powered autosuggest. They don’t know anything, and they certainly aren’t thinking.
  8. Insult to the promise of technology: Technology was meant to liberate humanity from drudgery; instead, it has become the tool of the kind of people who would pay slave wages to oppressed workers in dangerous factories.
  9. Insult to mental health: Research has shown countless deleterious effects of AI consumption, especially in young people. These include social isolation, familial estrangement, depression, AI psychosis, and AI-coaxed self-harm and suicide, not to mention the growing market for AI “romantic” partners further worsening the epidemic of loneliness.
  10. Insult to the social fabric: Before the proliferation of bots and AI, our connections were human connections. Even on social media, as artificial and disconnecting as it can be, at least the creators were human – before the bots took over. AI is disrupting our capacity to connect with each other. And the people who control it can also control the message it delivers, fueling misinformation and intolerance, all packaged in an all-knowing and all-accommodating tone.
