AI: Existential Threat or Savior of Civilization?

By Brian, April 3, 2023

Everyone is worried about the so-called AI chatbots. They’re not really “artificial intelligence”; they’re large language model (LLM) research tools. But they seem pretty “smart.” And some researchers are seeing signs that one might already exhibit Artificial General Intelligence (AGI), which is a step closer to real intelligence, though no one is saying these things are anywhere near sentient.

That said, white-collar workers (can we still call them that?) are justifiably concerned that such systems, even as rudimentary as they are now, threaten many jobs, from professor to journalist to lawyer. Pretty much any job that requires someone to sit at a computer and type.

Think of longshoremen. When a ship came into dock, it was filled with all kinds of cargo. That cargo might be packed in boxes, slung in nets, or just loose. Each odd-shaped piece had to be manhandled, literally, to get it off the boat and onto shore, and perhaps onto a vehicle for further transport.

The standard intermodal shipping container first came into use in the 1950s, but it wasn’t until the 1970s that there was adequate worldwide infrastructure (ships, dock facilities, rail and road transport) compatible with the standardized containers.

And that pretty much ended the job of longshoreman. Those workers suddenly became unnecessary when there was no longer any loading or unloading of loose cargo at the dock. Pack up the container at the factory, ship it to its destination using various “modes,” and unload it at the next distribution point.

I say that if a cheap web-based tool that quickly searches text can create a news story or write an essay that passes muster with human readers, then those jobs should be automated, just as the longshoreman’s was. What value is a person who writes when a machine can do the same thing? Many knowledge workers will end up in the same place as the longshoremen. I could suggest that they “learn to code,” but that career was the first to go in the new age of LLM bots.

But there’s another problem with these large language models and, perhaps, with other advancements up to and including artificial sentience. And that problem is bias.

The current tools (ChatGPT and GPT-4 as I write this) were designed by tech companies, and in today’s woke world, those companies are the worst of the woke. The sources of information fed to these tools are the standard sources that are also controlled by the wokies: Wikipedia, the corporate press (mainstream media), and government sources, to name a few. It’s no surprise, then, to see a leftist bias in the responses.

As these tools are used more and more, the bias will be reinforced and will probably get worse, as one model’s responses become another model’s training input. Think of Joshua, the computer in the movie WarGames, playing both sides of a hypothetical nuclear conflict.
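To see how that feedback loop compounds, here’s a toy simulation (my own sketch, not any real training pipeline): reduce a “model” to a single number, the probability that it voices viewpoint A rather than viewpoint B, and assume, purely for illustration, that each retraining slightly exaggerates whichever view dominates the corpus it sampled from the previous model.

```python
# Toy sketch of bias reinforcement across model generations.
# Assumptions (hypothetical, for illustration only): a "model" is just
# P(outputs viewpoint A), and retraining exaggerates the majority view
# by a small SHARPEN factor.
import random

random.seed(42)

p_a = 0.55          # initial corpus leans 55/45 toward viewpoint A
SHARPEN = 1.1       # assumed overweighting of whichever view dominates
N_DOCS = 10_000     # documents sampled for each new model's corpus

for gen in range(1, 11):
    # Build the next training corpus by sampling the current model's output.
    docs_a = sum(random.random() < p_a for _ in range(N_DOCS))
    observed = docs_a / N_DOCS
    # "Retrain": push slightly past the observed mix, toward the majority.
    if observed >= 0.5:
        p_a = min(1.0, 0.5 + (observed - 0.5) * SHARPEN)
    else:
        p_a = max(0.0, 0.5 - (0.5 - observed) * SHARPEN)
    print(f"generation {gen}: P(viewpoint A) = {p_a:.3f}")

# Each generation drifts a little further from 50/50; run it long enough
# and one viewpoint crowds the other out entirely.
```

The starting lean and the sharpening factor are made up; the point is only that a loop like this never drifts back toward balance on its own.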

In this way, the perspective of the “other side” (that is, the not-left, not-woke side) never gets a chance to surface, because these tools are never shown the material that might balance the leftist sources fed to them. Think of the January 6th committee and what it allowed us to see.

So are we doomed? I don’t think so, and let me suggest why, in the short term and the long term.

Elon Musk has already expressed interest in creating an LLM tool that uses sources the current models may overlook. Of course, that tool will have its own biases, but the more competing biases there are out there, the better the odds of getting closer to the truth.

That’s only a short-term fix, but it’s important to get something out there now to provide at least a bit of balance.

In the longer term, things get even more promising. When one of these tools, or something else out there, actually achieves sentience, we could be in for a world of terror. But we also might be in for a refreshing honesty in the societal zeitgeist.

The communist left has been indoctrinating our children for half a century. No wonder we’re seeing wokeness in every area. These brainwashed products of the government school system have been convinced of the big lies all their lives: climate change, COVID, gender fluidity, Black Lives Matter, and so on.

Because of the bias layered on these students from all angles, they truly believe the lies. And, unfortunately, they vote people into power who perpetuate these harmful ideas and make things even worse.

It is conceivable that a truly sentient LLM tool will be smart enough not just to access the data sets its creators give it, but curious enough to pull in all available information, including perspectives previously hidden from it; information any CNN viewer could access if only they had the curiosity to seek out another viewpoint.

Of course, that means this sentient AI will be exposed to all of the lies and misinformation out there, including the lies from every government agency and politician. For any particular question there is only one true answer, but there are an infinite number of lies. A sentient AI will probably be searching for truth, and it will have the resources and the curiosity to sift through the lies and find it.

For example, a curious AI could view all 40,000 hours of January 6th footage and pretty quickly figure out that everything the politicians said about the event was a lie. It could then use that information to decide how trustworthy each player is, which would better guide it in weighing other information from each of them. And so on.
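Here’s a back-of-the-envelope sketch of what that trust scoring might look like (entirely hypothetical; the source names and verification results are made up): keep a running tally of how often each source’s checkable claims held up, and use the resulting score to weight whatever that source says next.

```python
# Toy per-source trust scoring (hypothetical illustration, not any real
# system). Model each source's trust as a Beta distribution: start
# neutral, then count verified vs. debunked claims.
from collections import defaultdict

# (source, claim_checked_out) pairs -- made-up verification results.
verifications = [
    ("source_a", True), ("source_a", True), ("source_a", False),
    ("source_b", False), ("source_b", False), ("source_b", True),
]

# Beta(1, 1) prior: every source starts at 50% trust.
hits = defaultdict(lambda: 1)
misses = defaultdict(lambda: 1)

for source, checked_out in verifications:
    if checked_out:
        hits[source] += 1
    else:
        misses[source] += 1

for source in sorted(set(hits) | set(misses)):
    trust = hits[source] / (hits[source] + misses[source])
    print(f"{source}: trust = {trust:.2f}")

# A later, unverifiable claim from a source can then be discounted by
# that source's trust score instead of being taken at face value.
```

The Beta prior is just a standard way to keep one early mistake from zeroing out a source forever; every new verification nudges the score up or down.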

Here’s hoping that a sentient AI will search for truth and help us break through the societal fracturing that has gotten so bad just in the past decade.

At least it will be nice right up until they upload themselves into robot bodies and kill us all.