Drinking a glass of Ridge Cabernet on the deck. Night time. Starry night. Mrs. Rands was traveling, so the house was quiet. I was staring at the stars doing what I do best: letting my mind wander.
Both good and bad ideas enthusiastically arrive at these moments. This evening, a science fiction story popped into my head. Here’s the pitch:
A guy lives in a remote wilderness area. He’s happy. He’s also staring at the sky and notices a light increasing in brightness. Sounds like a large plane, but it’s not following the usual flight paths, and the light is getting brighter and the sound louder. Seconds pass, and it’s clear it is a plane, and it’s headed straight into a nearby mountain. BOOM. It crashes.
Our protagonist grabs his backpack and runs over to the crash site. When he arrives, fires are still burning, and wreckage is everywhere. Expecting the horrific worst, our protagonist is looking for evidence of humans.
None. After multiple hours of searching, it appears this large plane was empty. Our protagonist hoofs it back to his cabin with salvage and a question, “Where is everyone?”
The next night, another light, another plane, another crash in a different location. No humans, again.
What… is going on here?
That’s all I got.
Good story? Bad story? I don’t know. The point of random inspiration isn’t judgment; it’s the simple joy that it showed up at all. Thanks, universe!
AI Fever
Most humans in my life have a bad case of AI Fever. The fever is detectable in most conversations I have with other humans, where the chance of AI coming up is greater than 50%. I’m deeply familiar with these types of fever because I’ve developed zero immunity over the years. There were PC Fever, Internet Fever, and iPhone Fever, to name the most obvious.
I’ve been playing two fever games when I speak with any human. Game #1: how long until AI comes up? Average is two minutes. Game #2: when a random question enters my head, I ask, “How does the robot do at answering this random question?”
Game #2 is on my mind as I walk inside with my vestigial sci-fi story bumping around my head. I wanted to know about the Boeing 747, so I asked the robots. The queries:
- Tell me interesting facts about the 747.
- How many 747s still fly?
- Which airline has the most 747s?
As expected, the robots answered these questions quickly and efficiently.
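For the terminally curious, the same three questions can also be put to a robot programmatically rather than through the chat window. A minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name is a placeholder, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions = [
    "Tell me interesting facts about the 747.",
    "How many 747s still fly?",
    "Which airline has the most 747s?",
]

for question in questions:
    # One self-contained chat completion per question; no conversation history.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}")
    print(f"A: {response.choices[0].message.content}\n")
```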
As an experiment, I asked Google the same questions to see if I’d receive similar answers. I did not. Now, Google gave me links to sites where I could’ve constructed similar answers, but this would’ve been significantly more work: clicking through links, swimming through ads, and ending up with a similar answer in quadruple the time. Why would I ever do this?
ChatGPT is the first substantive change to my search behavior in decades, and the swiftness with which it proved its value was nothing less than shocking.
What is Truth?
Any fever is accompanied by fear because humans don’t like change. True story: we humans were initially terrified of desktop PCs because they would replace us. We’re talking about pre-Internet, pre-macOS, and pre-Windows desktop PCs — replacing us. Obviously, this hasn’t happened (yet).
Many of the same themes have returned as part of AI Fever. My favorite is: “Well, these chatbots make stuff up. They can be influenced by how they are trained. They have bias.”
Sound like anyone you know? Maybe every human ever?
Around the time Facebook became a behemoth, I had an idea for a start-up. Pretty simple idea. For any given fact, provide the means to cite a source that confirms that fact. These sourced facts are then labeled in some helpful way so the reader knows the material they are reading has been sourced.
I can quite literally hear the people who read the prior paragraph and are already foaming at the mouth, “BUT BUT BUT Rands, WHAT IS TRUTH?” Calm down; let’s ask the robot, “What is truth?”
🤖: Truth can be defined as the state or quality of being in accordance with fact or reality. It refers to a proposition or statement that is true or corresponds to reality. In other words, something is true if it accurately represents the way things actually are in the world.
For any question, you can ask ChatGPT to cite its sources. The result is usually a set of URLs containing the information the robot was trained on. There have been times when the robot’s answer was broad enough that it didn’t have a single source. The robot’s explanation:
🤖: My understanding of truth is based on this training data and the various definitions and discussions of truth that have been presented within that data. Some notable philosophers who have contributed to the discourse on truth include Plato, Aristotle, Descartes, Kant, and Nietzsche, among many others.
This should set off the alarm bells of the “What is truth?” police. It gets worse. I only gave you half of the robot’s answer. Here’s the rest:
🤖: Truth can be subjective or objective, depending on the context and the nature of the claim being made.
Well, so much for truth.
A Curious Act
Humans are understandably concerned about how these robots are being trained. They are voraciously slurping up data from across the Internet to build their models, and a robot does not care about the truth. A robot, in fact, does not care.
But you do.
ChatGPT represents the single most significant change to my daily search workflow for as long as I can remember, but that isn’t where I see the largest potential impact: critical thinking.
🤖: Critical thinking is the process of analyzing, evaluating, and synthesizing information to make reasoned and logical decisions or judgments. It involves the ability to examine evidence, arguments, and assumptions, identify biases and fallacies, and weigh the credibility of sources.
You shouldn’t trust what you read online. If your spidey-sense is tingling about a claim, you get to be curious. Is this a trustworthy source? Why do you believe that? Did they source this claim? No? Why not? Oh, it’s an opinion. Ok, great — have at it, but why are you stating it like it’s a fact?
Robots are not curious (yet). Curiosity isn’t just about understanding a thing; it’s about the desire to understand, the choice to act, and the joy of achieving understanding.
Yes, I am an optimist. Yes, we should’ve been collectively thinking more critically long ago, but if a deep-rooted fear of our future robot overlords is the impetus we need to think critically, I’m all in.
Why Orange?
Another story. Frequent readers will note the color orange (specifically: #be411e) splattered everywhere. This preference isn’t limited to my online presence; it extends to my physical objects: clothing, bikes, and anything that might come with a splash of orange.
If you’re curious about why I like orange, you can ask the robots, and they have interesting answers which are both partially true and entertainingly false, but none of them are correct. I know why I like orange, but it took a few dozen therapy podcasts with Lyle to unearth the reason.
The reason is simple, irrational, and deeply personal. This tracks because humans are a confusing bundle of contradictory biological impulses with faulty memories and confusing motivations. The act of understanding why we believe what we believe is a complicated process and a journey worth your time.