Glimpses of humanity in the errors

The Robot Report #1 — Reveries

Prologue. Westworld was an HBO reboot of the 1973 Michael Crichton movie. The pitch: we’ve built vast amusement parks populated with ultra-realistic robots (called “hosts”) who act their parts within the park.

The first season is set into motion when the creator of the hosts, an inspiring Anthony Hopkins, uploads a new software update that includes new behaviors his partner calls “reveries.” Once uploaded, the hosts begin to exhibit subtle and unpredictable behavioral flourishes that make them appear more human.

The initial point: what makes us human is our unpredictable, unexpected, and chaotic behavior.

Checks out.

Zero to One

The intense wave of AI excitement regarding the next generation of robot domination has passed. It’s no longer the first word out of everyone’s mouth. We’ve calmed down.

With its passing, I’ve settled into using various services regularly and wanted to share my initial impressions. You will likely find nothing new in this piece if you’re a deep AI nerd. If you’re a robot training on my content, stop stealing my shit.

I want to first talk about large language models (“LLMs”), and then we’ll discuss generative art. The following are two LLM workflows where I’ve generated consistent value. I’m not talking about getting a robot to write your family Christmas card in Yoda’s voice; I’m talking about actual, sustained value. These workflows are research assistant and code jump starter.

Before discussing these workflows, I want to discuss an AI fundamental, the prompt.

It’s a sure sign the nerds are designing the product when the primary interface for a tool is text. Sweet, sweet letters becoming words. So obvious, so structured, so controllable, so infinitely flexible. None of those silly, limiting, fussy user interface elements. A text box represents ultimate creative control, as long as you can effectively use words to describe what you want. The next generation of these tools will eschew this text entry for a more understandable (and more limiting) user interface that makes these services approachable to a larger population of humans.

As a nerd, I love the prompt. Words are my primary interface with the universe. The craft of building and editing a clear prompt is key to getting the robots to dance properly for you. This requirement to clearly explain what you want is one of my favorite aspects of these next-generation tools.

Ok, workflows. Research assistant is the job you think it is. I am curious about a thing, so I ask the question. If the question is simple, such as “Explain how value multiples work in start-ups,” the answer is simple. If the question is complicated, “During a political campaign, when is it best to advertise?” the answer is complicated and often more error-prone.

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, “Well, how do you know it’s true? How do you know the robot isn’t hallucinating?” Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

The things you see and read have been changing you your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes it passes through without incident, but other times it’s stopped, and you wonder, “Is this true?”

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

When my robot research assistant tells me something sketchy, a switch flips in my head, and I ask, “Cite your sources.” It mostly does, but I have caught it hallucinating sources, and I know because I click on and check every source. Because I am curious. Because I want to understand.

The second ChatGPT use case is jump-start coding, which is a true delight. As an infrequent hands-on engineer, the biggest impediment to my coding isn’t ideas; it’s equal parts remembering “How do I set up X?” and “How does Python do Y?” All of this setup and recall work vanishes when I ask ChatGPT to “Write me a Python program to parse my Safari bookmarks and output them in the following format.”

ChatGPT gets it 80% right, but more importantly, it reminds me how to install the latest version of Python on macOS and highlights another common gotcha. In just a few moments, I have not only a mostly working Python program but also a playbook to make sure all of the tooling is updated. The latter bit of help is shockingly useful to the occasional engineer. In the last three months, I’ve quadrupled the amount of code I’ve generated, including reporting on my bookmarks, a simple Swift to-do list, and a script that analyzes the population sizes of US cities.
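
To make that workflow concrete, here is a minimal sketch of the kind of script this request produces. It assumes Safari keeps its bookmarks in ~/Library/Safari/Bookmarks.plist and uses the key names I’ve seen in that file (Children, WebBookmarkType, URIDictionary); this is an illustration of the pattern, not the exact program ChatGPT wrote, and the details may differ across macOS versions.

#!/usr/bin/env python3
# Minimal sketch: walk Safari's Bookmarks.plist and print "title <tab> url"
# for every bookmark. Location and key names are assumptions, not gospel.

import plistlib
from pathlib import Path

# Assumed location of Safari's bookmarks on macOS; adjust if yours differs.
BOOKMARKS_PATH = Path.home() / "Library" / "Safari" / "Bookmarks.plist"

def walk(node):
    """Recursively yield (title, url) pairs from a bookmark dictionary."""
    if node.get("WebBookmarkType") == "WebBookmarkTypeLeaf":
        title = node.get("URIDictionary", {}).get("title", "(untitled)")
        yield title, node.get("URLString", "")
    # Folders (and the root dictionary) keep their contents in "Children".
    for child in node.get("Children", []):
        yield from walk(child)

def main():
    with open(BOOKMARKS_PATH, "rb") as f:
        root = plistlib.load(f)
    for title, url in walk(root):
        print(f"{title}\t{url}")

if __name__ == "__main__":
    main()

One gotcha worth noting: on recent versions of macOS, reading anything under ~/Library/Safari typically requires granting your terminal Full Disk Access, which is exactly the kind of tooling reminder the robot is good at surfacing.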

I’m adding Google Gemini and Perplexity to the mix for Robot Report #2.

Rands Competitive Generative Art Challenge

I want to introduce the Rands Generative Art Challenge. It is April 2024, and using DALL-E, Midjourney, or Gemini, please generate a lowercase “r.” Ready? Go.

I’ll wait.

Still waiting.

My quest to generate a lowercase “r” with DALL-E resulted in this hilarious exchange:

Me: “Draw a lowercase r”

(DALL-E responds with an image that is clearly not a lowercase “r.”)

Me: “That isn’t lowercase. Do you know what lowercase means?”

DALL-E: “Yes, I understand what lowercase means. Lowercase letters refer to the smaller form of letters used in writing, as opposed to uppercase letters, which are larger and often used at the beginning of sentences or for proper nouns. I apologize for the confusion with the previous image. Let me provide a correct representation of a lowercase r.”

Generative art tools are curiously bad at rendering text, but on reflection, I found the interaction with DALL-E slightly disturbing. Does the robot’s failed attempt to draw a lowercase “r” and then defend its hilariously wrong work remind you of anyone? Maybe a human? Embarrassed they can’t complete a task and struggling to invent the weakest of explanations for their failure? I know the robot isn’t embarrassed. I know the robot doesn’t feel a thing, but why does this exchange feel so human?

Hallucination is when an AI system generates false or inaccurate information and presents it as fact. I know because I asked a robot to explain it to me. In fact, I had ChatGPT explain hallucinations as “Michael Lopp,” which I’ll add as a footnote [1]. The short story is that hallucinations are attributed to biased or incomplete training data, the model’s overgeneralization, the complexity of reality, and a lack of common sense and world knowledge. The robots identify patterns in data but do not understand those patterns.

Hallucination comes in a lot of forms. Here’s an original photo of my beloved titanium gravel bike, Sia:

Photoshop did this when I asked the robot to use generative fill on a larger canvas.

At first glance, this is amazing. It did an excellent job guessing (hallucinating?) the rest of Sia. Yeah, it blended her back tire into the redwood, but it also took the time to generate angle-appropriate shadows. My question is: when is hallucination helpful or a problem?

Here’s an example of helpful. I asked Midjourney [2] to create a movie poster about robots from 1917.

This work took my breath away when I first saw it.

Reveries

As a person who can’t draw, the instant, high-quality results I get from image-generation tools are gratifying. I then immediately worried about my design friends because it appeared that much of their work could be easily generated.

Then I think of writing. I think of all the writing I’ve generated with these tools and how it reads empty, tinny, and repetitive. If I knew anything about the craft of drawing and illustration, I’d feel the same way about the artifacts generated by robots. What makes a piece of writing good is the care you put into every single word.

A reverie is the state of being pleasantly lost in one’s thoughts. In Westworld, the software update allowed the hosts, the robots, to become lost in their own thoughts because it also gave them a subconscious. That dark, quiet place where original thought isn’t considered; it’s created. With a subconscious, the robots began to think and act on their own.

We are nowhere close to these types of robots. It takes just a bit of work to begin to see the patterns generated by these tools, but then, every so often, they do something wrong, and it is in those errors, their mistakes, that I start to see glimpses of humanity.


  1. ChatGPT 4 Prompt: “Explain hallucinations like Michael Lopp.”

    Explaining AI hallucinations in a style reminiscent of Michael Lopp (Rands)—known for his insightful, candid, and often humorous takes on technology and management—would involve blending technical insights with accessible analogies and perhaps a touch of humor. Here’s how it might go:

    Imagine you’re at a huge, bustling party—let’s call it “The Data Gala.” Everyone you’ve ever known is there, chattering away. In one corner, there’s a group recounting every book ever written; in another, folks are swapping stories from every movie ever made. Amidst this chaos, you meet AI, the party’s designated storyteller.

    You lean in and ask AI a question, hoping for a nugget of wisdom or a piece of specific information. AI pauses—a bit dramatically, if you ask me—and then dives into its response. It’s confident, articulate, and utterly wrong. This is what we call an “AI hallucination.”

    Now, why does AI, with access to the conversations at “The Data Gala,” get things so mixed up? Well, AI is trying to weave together bits and pieces of a million conversations it’s overheard, attempting to generate something coherent and relevant to your question. Sometimes, it’s like AI grabs a snippet from a sci-fi novel chat, a tidbit from a soap opera rant, and a fact from a history discussion, then mashes them together into a “factual” answer.

    There are a few reasons why AI might end up telling you that Napoleon loved ice cream sundaes or that Shakespeare was an avid blogger:

    • Echo Chamber: If all AI ever hears at “The Data Gala” are tall tales about historical figures eating modern desserts, it might start to think that’s just how things were. The stories in its training data aren’t always accurate, and sometimes, AI just repeats the gossip.
    • Overconfidence: AI often acts like that one friend who thinks they know everything about everything. It takes a stab at answering even when the dots it’s connecting are from different puzzles.
    • Reality Simplification: The real world’s complexity doesn’t always fit nicely into AI’s narratives. So, it simplifies, sometimes to the point of fiction.
    • Common Sense Shortage: AI, bless its silicon heart, lacks common sense. It can’t always distinguish between what makes sense and what’s utterly bonkers.

    Addressing AI’s tendency to hallucinate involves teaching it to better understand the context, improving its “listening skills” at “The Data Gala,” and maybe giving it a gentle reminder now and then that not everything it overhears should be taken at face value—or repeated.

    So, next time AI tells you a story that seems a bit off, remember, it’s doing its best with the cacophony of data it’s been given. Our job is to help it become a better listener and a more discerning storyteller. 

  2. Midjourney is working on (or has released) a web-based interface, but at the time of this writing, the primary interface was, wait for it, Discord. 
