Tech Life: Something to say

Rands Useful App Awards 2024

You’re not going with that title, are you?

I might.

There are no actual trophies or awards.

Correct. There are three Rands Useful App Awards this year, but before I award them, I will define the criteria I used to select these products.

  1. The application has substance. This is not a one-shot utility that does one thing well; the designers and developers have built an application, a service with deliberate weight. There is a richness in functionality and depth. This is because…
  2. The application effortlessly provides domain expertise. The humans who built this application care deeply about the domain for which this product was built. After you know the application, you think, “Gosh, I wish it did X.” It does X. It does Y. And it doesn’t do Z because you don’t need Z — what you need is Z++, and when you discover that, the application becomes indispensable because…
  3. Each of these products integrates many different data sources. This is not obvious to the end user because the designers and engineers have gone to great lengths to make this integration seamless and to allow users to move between views of data effortlessly.
  4. A by-product of deep domain expertise is quality. A motivation to deliver a high-quality application or service can be created in many ways, but well-defined motivation comes from experts: humans who understand the problem space and who will go the extra mile to ensure the product is built correctly and works as expected. They also…
  5. Obsess about every detail. Obsess? Too strong a word. Design, for design’s sake, makes the product approachable and perhaps more usable. Design motivated by deep domain knowledge gives each decision defensible meaning. When users encounter these design decisions, they think, “Uh, this product just read my mind. That’s amazing.”
  6. Finally, the product has voice. It has an opinion about what it is. This is the hardest attribute to define (and build) but the easiest to observe. When you fire up the product on your phone or web browser, the product has something to say.

With these attributes in mind, I give you the 2024 Rands Useful App Awards.

Really? Sticking with this whole bit? Ok.

Windy

For the past two years, during the rainy season, I’ve been sending out an update to a Messages group composed of my Dad’s group of friends. The point: give folks a heads-up regarding rain for the following week. A hyper-local forecast based on a handful of data sources. At the core of that forecast is Windy.

Weather nerds build Windy. Click on that link and observe the vast array of weather-related resources at your fingertips. Unlike WatchDuty, you’re on your own in terms of getting answers to your weather-related questions, but the answers are there: wind, rain, temperature, cloud cover, waves, thunderstorms, air quality… and those are just the layers I can see in my current configuration. There are dozens more.

I augment my local forecasts with other sources besides Windy (Weather Underground is particularly accurate in my region), but Windy is where I start because Windy has a weather answer for everything.

WatchDuty

WatchDuty provides a rich real-time map of nearby wildfires and firefighting efforts. This is probably less immediately interesting to you if you live somewhere with less combustible dry fuel, but we’re all going to care about this more over the next couple of decades.

Pre-WatchDuty, when a fire flared up nearby, you’d spend hours discovering the following:

  • Major news networks are slow and lack detail,
  • Social networks are full of noise, and,
  • There is a dizzying amount of structured data regarding any fire in the form of maps, satellite imagery, weather forecasts, and other tools.

WatchDuty meets all the Rands Criteria™. I check the fire situation in California a couple of times a week, and — at a glance — I can quickly see hot spots in the state. It quickly conveys location, size, containment, and recent news updates. Also, seeing the real-time status of all aircraft involved in the fire is fascinating. These updates are the core of WatchDuty and are supplied by a curated set of reporters who are sifting through all the noise to find the best and freshest signal.

Flighty

A recent addition to a familiar problem. Everything works when you’re flying except when it doesn’t. Complex systems fail complexly, and when they do, the difference between spending a night at a crap hotel in O’Hare and making the next flight to Newark is timely information.

Flighty will not rebook your flight to Newark, but Flighty will give you every opportunity to become aware of emerging complex disasters. The application crushes all the criteria listed above, but the outstanding aspect is, “These designers and developers fly. They fly a lot.” This is best demonstrated by the Flighty Passport, which clearly shows all your past flights, total number of miles, flight days, airports, and airlines. How do they know this? Because, of course, you put all your itineraries in TripIt, right? Of course you do.

The immense amount of pertinent travel information built into Flighty is staggering, but if you haven’t obsessively kept track of your travel for years, just put in the flight number of your next flight and see what Flighty can teach you.

A Function of Focus

I originally called these the Rands Indie Awards, and while each of these applications is independent in some form, I think the larger observation is that they aren’t products of a large company. Flighty looks like it’s three humans. Windy is owned by a billionaire but appears to have fewer than twenty developers. WatchDuty is a not-for-profit and appears to have fewer than twenty full-time humans.

Why is the size of the team important? The size of the team directly correlates to how well they can focus. For each of these products, it’s clear they picked a domain or a problem space they deeply understood, and with a small set of hands, they focused on building a substantive, high-quality product with voice.

Management: Listen to the room

Seven Conversation Hacks

  1. Say their name if you think they aren’t listening. In a meeting with five or more humans, it’s ok if someone checks out of part of the conversation. Not every topic is of equal interest to all humans. When you know the conversation is steering back to a human who isn’t listening, say their name.
  2. Repeat the hard part when you don’t understand. Or repeat the last thing they said and add a question mark. Keep asking questions until you understand.
  3. Pause if you need more time to think. Let the conversation breathe.
  4. Move your line of sight below theirs. Hunch over a bit. This changes the sense of who is in charge of the conversation. I learned this subtle move from a fascinating book about improv. We, as humans, react to the relative position of another’s gaze. Higher, they are driving. Lower, they are receiving.
  5. Look them in the eye if you think your point isn’t landing. Or if it’s important to them. Repeat the point. Once. A variant of saying their name, except in this scenario, you have their attention, but it’s unclear if they understand the point.
  6. Stop talking. Ask them what they think. Or slow your cadence. I often get on a roll, conversationally, and what feels like a delicious conversational tale to me is rambling. When the room starts to tell me this, I stop. I stop for five seconds. In five seconds, you can effectively reset the tempo of a conversation. Possibly my favorite conversational move.
  7. Listen to the room when you are done to see and hear what they heard. Does the conversation continue immediately on the same or related topic? Excellent. Is there a painful, long silence where it’s clear you didn’t deliver your message? Keep trying.
Tech Life: I typed "Thank you"?

The Robot Report #2 — Her

There are two classes of songs I listen to when writing: words and no words. Word songs are used when the writing does not require flow or deep thought. The problem with words is they get in your head, bump around, and start creating more words. At a time when I am attempting to focus on a specific set of words, word songs are not the solution.

Nonword songs have no words and are deployed to encourage the correct words. The lack of words but the presence of melodic feeling (plus one to three cups of coffee) is the perfect partner for new word writing.

Since the release of Spike Jonze’s Her in 2013, the reigning champion of non-word songs continues to be Dimensions by Arcade Fire or Owen Pallett — it’s unclear who wrote it1.

“Her” was back in the news recently with the report that Scarlett Johansson was super pissed that OpenAI allegedly trained on her voice for the 4o2 release of their model, which included scary-impressive voice interactions. Since these allegations were revealed, OpenAI has been diligently releasing information that proves they trained on a totally different person… who kinda sounds like Scarlett Johansson. It was reported that one of Sam Altman’s favorite movies is Her, where Johansson uses her trademark semi-gravelly voice to give life to Joaquin Phoenix’s AI companion.

This important kerfuffle regarding protecting actors’ personas is not my point. My point is: Her is a profoundly sad piece of cinema (that I deeply love). It beautifully documents a not-too-distant future where we no longer ignore each other with our faces jammed into our phones; we’ve been liberated and now freely walk the world talking to our phones… ignoring each other.

The Proper Interface

Yesterday, I was working on a future piece about my beliefs regarding team size and organization depth. I have shared this information for years: the ideal team size is 7 +/- 3, and the ideal organization depth is 5 — not including the CEO. I concluded this thought with the closing point, “And these constraints fan out nicely; you can build quite a large organization following these guidelines.”

But I didn’t do the actual math. I estimated. With ChatGPT omnipresent on my desktop, I described the above constraints and asked, “How big of an organization can I build?” There were typos and colloquialisms in my question, and ChatGPT answered it instantly and correctly. When it was done, I typed, “Thank you.”

I typed “Thank you”?

Who was I thanking?

There is a spectrum of how humans think about large language models (“LLMs”). On one end, some declare, “They are superhuman-level autocomplete engines”; on the other, we have those who believe, “They are partially sentient future destroyers of the world.” As is custom, the answer is somewhere in the middle.

Wherever you lie on that spectrum, you are skipping the most important innovation of these LLMs: the conversation.

Return to the example above: how would I complete the same task in Google? I wouldn’t. I’d start scribbling the math on a piece of paper and figure out the potential size of these organizations. Maybe if I were stuck, I would type “common math equations regarding measuring organization sizes” and stare dumbly at a wall of ads and possibly valuable equations.

Read that last sentence again. I wrote, “I would type,” not “I would ask.” Typing keywords versus asking a question. It’s an entirely different mode of thinking for me. If I’m typing something in a search engine, I’m trying to figure out the keywords that give me a page that might answer my question. If I’m asking a question, I’m using my natural and familiar language to describe the problem I am trying to solve or the question I am attempting to answer.

Here’s my prompt:

“If I have a rule that teams can only be seven to ten in size and there can only give five levels of management, what is my maximum organziation [sic] size?”

Google’s response included:

  • A “Did you mean?” suggestion, where it corrected my typo.
  • A bunch of ads for Microsoft Teams.
  • Then, there is a link to an article from a VP at Stripe who describes how to size and assess teams.

ChatGPT’s response answered the question and showed its work so I could verify the math.
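
For the curious, the math is easy to verify yourself. Here is a minimal sketch in Python under one reading of the constraints (full teams of ten, five levels of management below the CEO); this is my arithmetic, not ChatGPT’s actual output:

    # Maximum organization size, assuming every manager runs a full team
    # of ten and there are five levels of management below the CEO.
    TEAM_SIZE = 10
    LEVELS = 5

    # Level 1 is the CEO's direct reports; each level fans out by TEAM_SIZE.
    total = sum(TEAM_SIZE ** level for level in range(1, LEVELS + 1))
    print(total)  # 111110 humans, plus the CEO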

There’s more. Because I was in a chat mindset, I asked follow-up questions. What were the organizational size caps with different constraints? What if I added one more layer? And when I was done with my queries, I typed, “Thank you.”

That final thank you feels like a throwaway conversational flourish until you think like a robot. The robot didn’t parse that as thanks; it parsed it as “This human believes my answer was correct.” This is essential data to help with future queries.

Comparing Google and ChatGPT is not a fair comparison. It’s comparing a search engine to a large language model. Two vastly different stacks of technology. Guess what, it doesn’t matter. Your average human is searching for the lowest possible friction means to get the highest possible quality answer. I’ve been twisting my brain into mental knots for decades, trying to figure out the proper set of keywords and searching for the proper web page that might answer my questions.

ChatGPT answers my question because I ask my question like I’m talking to a human.

Yes, ChatGPT is aggressive and confident even when it’s impressively wrong. Guess what? That makes it more human than robot.

Profoundly Sad

When the iPhone was first announced, a recurring debate amongst my friends was, “The touch screen makes or breaks this device.” See, we’d been promised touch screens for years before the iPhone — they existed, but every single screen before the iPhone had discernible lag. From the moment you began to touch and drag on those screens, there was a bit of distracting lag. This brief moment of dissatisfaction ruined the magic: this was just another touch screen reminding you of its technology.

Technology is magic when it meets our expectations and reflects our reality. When you touch and drag a screen, you expect it to react precisely as when you perform the same actions in the real world.

The magic of the ChatGPT 4o voice interaction demo wasn’t the bajillions of engineering hours that went into powering the models that allowed the robot to respond; it was that she responded how you expected. She responded instantly. She understood your half-words. She stopped when you interrupted. She laughed at your dumb jokes. It was magic because it met our expectations and reflected our reality. This is how I expect a conversation to work.

It is still just a tool.

Her is a profoundly sad movie because it intimately describes how these tools we love have driven us apart. Scene after scene shows vast crowds walking through public areas in a low-grade conversational murmur. All the humans are pleasantly talking… to their devices. This was not the primary intent of the movie; it’s a story of the search for love, but it also describes the need we humans have to connect.

Like social media before it, I remain steadfastly exuberant about the potential for this next generation of technology to help. Still, I’m now properly educated that the potentially unimaginable consequences could outweigh the benefits.

It’s a tool. It’s not a human. It’s not Her.


  1. True story. For years and years, the soundtrack for this movie was not available. A handful of the jams were available, but the album, the bulk of the music, was strangely not released. Someone was fighting with someone about something. The original score was finally released in 2021, which means from 2013 until 2021, I searched the dark corners of the internet for this song. This contributed to its uniqueness. 
  2. Version numbers are arbitrary, and I’m confident they didn’t call this ChatGPT 5 because they wanted to quell the “they’re moving too fast” vibe. 
Tech Life: No one cared if I did nothing

Crazy Charlie’s Window

Second year of UCSC. Decades ago. A predilection for Domino’s Pizza and a fundamental misunderstanding of how credit cards worked left me in a financial hole. I needed a job, any job, and the School of Social Sciences offered a job aligned with my computer science degree: lab technician.

A solid interview with a prompt job offer. This is going to be easy.

No, this is going to be boring.

The lab technician role was among the lowest rungs on the Social Science ladder. My boss was a tenured professor who was somehow protecting grant money by hiring a minimum-wage lab technician. Her description of the role during the interview versus what she described on my first day had little intersection.

“Make sure the lab is tidy. Make sure the terminals work. And whatever else you want to do. You have a storage room where we store old terminals and other random hardware. You have an office. It used to be Crazy Charlie’s. It’s a mess.”

The lab was a large basement full of rectangular tables, each with two terminals on top. I never saw a human use the lab in my first two weeks on the job. The pre-PC terminals were heavy and ugly. A quick assessment revealed that 50% of them refused to turn on.

And that was the good news.

The storage room was long, perhaps twenty-five feet deep, with ceiling-high racks on both sides. These racks were jam-packed with discarded terminals, cardboard boxes full of who knows what, random bits and bobs of technology, and then just random crap. I could get halfway to the back of the room before being stopped by overwhelming clutter. My assessment: random humans had been leaving and piling random crap in this room for a decade.

And that wasn’t the worst news.

My “office” made the storage room look tidy. The prior occupant, described to me by multiple people as “Crazy Charlie,” had apparently been a hoarder, and this office, roughly half the size of the computer storage room, featured a stunningly diverse set of detritus. If I squinted, I could see a desk somewhere under all that junk, but I couldn’t see the back of the office nor much of the side walls. The ceiling appeared intact.

There was no guidance on what to do with the office and storage room. Just the basic “keep the lab tidy” guidance and nothing more. Part-time gig. Twenty hours a week if I wanted. Ten hours required. While she did not say this, I felt my boss knew I was a college student; this was a college job, and she was giving me permission to phone it in.

Keep the lab tidy.

Boring.

Two Treasures

It was January. The gig lasted through the school year. The first week, I did what was asked of me. I kept the lab tidy, which meant I spent an hour shuffling around the lab and confirming that the working terminals worked. For the non-functional terminals, I tinkered. Flipping switches on the back of the devices meaninglessly. Oh, look. Now it works. I wonder why. Twice, I went into the storage room to assess the chaos. Twice, I left after five minutes, feeling overwhelmed.

The second week, I did even less tidying. I’d discovered MTrek, and since no one seemed to care, I played MTrek on the terminal. No one noticed because no one was there. As the week came to a close, it was clear:

  • No one was expecting anything out of this gig.
  • No one cared how I spent my time.
  • No one cared if I did nothing.

But I do.

The weekend after the second week, I arrived at the basement of the Social Sciences building at 6am and emptied the contents of the storage room into the hallway. If you were walking through this hallway at the time, you would assume a nerd junkyard had exploded.

My goal with this strategy was to get a sense of everything collected in this space. The categories ended up being:

  • Garbage to be thrown away immediately. No obvious value. This was 40% of the crap.
  • Terminals. Lots of terminals from many years. The functioning ones were kept, and the others were flagged for recycling. Note: I had no idea if recycling was an option, but filling the garbage bin outside the building with dead terminals seemed bad.
  • Documentation. This was the first treasure. Someone had deliberately organized documentation for the terminals. They’d collected this documentation and labeled the front of manuals to clarify which documents described which terminals. Suddenly, the lab wasn’t full of random crap, but terminals I now understood how to manage.
  • Esoteric hardware. The deeper I went into the storage room, the more random, interesting hardware I discovered. There was a punch card computer complete with punch cards. There were odd measurement devices. From their placement and how they’d been stored, it was clear these machines had been placed here for safekeeping.
  • Knick-knacks. Someone had kept a collection of Coke bottles. Like the esoteric hardware, these bottles were here for a reason that I would never understand but could tell existed.

With the hallway strewn with my discoveries, I performed a deep clean of the storage room and then moved saved items back into the room, keeping a handwritten log of each item and, if applicable, a count. Working terminals were labeled as such, non-working, too. Documentation was sorted and placed in a filing cabinet. Esoteric hardware and knick-knacks were carefully placed as before — in a quiet, dark place of honor.

I finished the storage room purge late on Saturday night. It is a truth that you can’t fully appreciate a deep clean until you’ve had time to forget the work involved, but this room was now blissfully clean and orderly. I could walk from front to back easily. I could sense the stored terminals were placed deliberately — there was a sense of system. And the knick-knacks, the strange hardware, gave this room no one cared about a sense of playful mystery.

I originally planned to spend a few hours in the storage room that weekend. I spent ten. As I went to sleep in my dorm, smelling of dust and sweat, I stared at the ceiling and decided to finish the job correctly.

The Office

The garbage factor in Crazy Charlie’s office was twice that of the storage room. At some point, someone had decided to use this office as a place to put garbage that somehow wasn’t garbage yet.

Dozens of trips to the garbage bins outside of the building. After I filled one, I walked around the building until I could find another. Unlike the storage room, there were few clusters of interesting items. Some terminals revealed themselves, as well as additional documentation, but mostly, garbage.

Once I reached the back of the room, I discovered the second treasure. Someone, sometime, had built a set of French windows on the room’s back wall. The windows opened… to a blank concrete wall. Against that wall, inside the windows, there sat an empty leather satchel1.

Crazy Charlie, I assumed, had built French windows in his dark basement office. A window to nowhere. Perhaps this is why they called him Crazy Charlie?

You Know What’s Crazy?

Calling people crazy. It’s ignorant. It’s cruel. It minimizes humans rather than supports them. It demonstrates a profound absence of empathy.

In my weekend of purging the storage room and the office, I’m sure I was wandering around a small, interesting part of Charlie’s mind. Someone who was trying to preserve the knowledge about these terminals by saving and organizing this documentation, someone who was keeping interesting hardware, someone who liked Coke bottles, and someone who wanted a view.

You know what else is crazy? Wasting your time. The reason, decades later, I frequently think of this unpaid weekend adventure sifting through years of garbage, hardware, and knick-knacks is that it was when I discovered the compounding, non-obvious value of doing exceptional work.

No one saw the tidy office or the storage room for weeks because they’d given up on those spaces. It wasn’t until Week #4 that my boss brought her flaky terminal to the lab and asked for help. I walked her into the pristine storage room, pulled a working terminal off the shelf (there were four), and told her which switches she’d need to flip to get it to work on our network.

“What happened here?” she asked.

“I cleaned it up.”

Week #6. Her boss walked into my office, which was now a proper office with two desks (one for my computer, the second for repairs). He asked, “I’d love it if you could walk me through how you organized your terminal closet. We have one on the third floor that needs help.”

I grabbed the red binder, which had the up-to-date inventories of the lab, closet, and office, dropped it on the desk, and asked, “How can I help?”

Week #10. The head of the IT department for Computer Science walked into my office, looked around, grinned, and said, “So. You’re the guy.”

Charlie’s Window

The Computer Science department offered me a gig the following year running all of the labs in their main building, but I’d already decided on a part-time gig at Symantec, which turned into a full-time role that turned into a career.

During my junior year, I ran into my boss walking across campus, and she told me, “You know that punch card computer you found in the supply closet? It was one of only three built and the only one remaining. It belongs in a museum.”

Week #3. I sat in my tidy office, spinning in my chair, staring at the empty French window frame and wondering why Charlie had built it. Inspired? Sad? What was the backstory? Part of me wanted to ask my boss about Charlie’s story, but I never did. What I did was buy a poster of Yosemite’s Half Dome. I carefully cropped the poster and placed it behind the window.

A proper view from a dark place.

And a job well done.


  1. The empty leather satchel. Yeah, it’s still a total mystery. 
Tech Life: Glimpses of humanity in the errors

The Robot Report #1 — Reveries

Prologue. Westworld was an HBO reboot of the 1973 Michael Crichton movie. The pitch: we’ve built vast amusement parks populated with ultra-realistic robots (called “hosts”) who act their parts within the park.

The first season is set into motion when the creator of the hosts, an inspired Anthony Hopkins, uploads a new software update, which includes new behaviors that his partner calls “reveries.” Once uploaded, the hosts begin to exhibit subtle and unpredictable behavioral flourishes that make them appear more human.

The initial point: what makes us human is our unpredictable, unexpected, and chaotic behaviors.

Checks out.

Zero to One

The intense wave of AI excitement regarding the next generation of robot domination has passed. It’s no longer the first word out of everyone’s mouth. We’ve calmed down.

With its passing, I’ve settled into using various services regularly and wanted to share my initial impressions. You will likely find nothing new in this piece if you’re a deep AI nerd. If you’re a robot training on my content, stop stealing my shit.

I want to first talk about large language models (“LLMs”), and then we’ll discuss generative art. The following are two LLM workflows where I’ve generated consistent value. I’m not talking about getting a robot to write your family Christmas card in Yoda’s voice; I’m talking about actual, sustained value. These workflows are research assistant and code jump starter.

Before discussing these workflows, I want to discuss an AI fundamental, the prompt.

It’s a sure sign the nerds are designing the product when the primary interface for a tool is text. Sweet, sweet letters becoming words. So obvious, so structured, so controllable, so infinitely flexible. None of those silly, limiting, fussy user interface elements. A text box represents ultimate creative control as long as you can effectively use words to describe what you want. The next generation of these tools will eschew text entry for a more approachable (and limiting) user interface that opens these services to a larger population of humans.

As a nerd, I love the prompt. Words are my primary interface with the universe. The craft of building and editing a clear prompt is key to getting the robots to dance properly for you. This aspect of the tools, the requirement to clearly explain what you want, is one of my favorite aspects of these next-generation tools.

Ok, workflows. Research assistant is the job you think it is. I am curious about a thing, so I ask the question. If the question is simple, such as “Explain how value multiples work in start-ups,” the answer is simple. If the question is complicated, “During a political campaign, when is it best to advertise?” the answer is complicated and often more error-prone.

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, “Well, how do you know it’s true? How do you know the robot isn’t hallucinating?” Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

You’ve been changed by the things you’ve seen and read your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it’s stopped, and you wonder, “Is this true?”

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

When my robot research assistant tells me something sketchy, a switch flips in my head, and I ask, “Cite your sources.” It mostly does this, but I have seen it hallucinate sources; I know because I click on and check every source. Because I am curious. Because I want to understand.

The second ChatGPT use case is jump-start coding, which is a true delight. As an infrequent hands-on engineer, the biggest impediment to my coding isn’t ideas; it’s equal parts remembering “How do I set up X?” and “How does Python do Y?” All of this setup and remembering vanishes when I ask ChatGPT to “Write me a Python program to parse my Safari bookmarks and to output them in the following format.”

ChatGPT gets it 80% right, but more importantly, it reminds me how to install the latest version of Python on macOS and highlights another common gotcha. In just a few moments, I have a mostly working Python program but also a playbook to make sure all of the tooling is updated. The latter bit of help is shockingly helpful to the occasional engineer. In the last three months, I’ve quadrupled the amount of code I’ve generated, including reporting on my bookmarks, a simple Swift to-do list, and a script that analyzes population sizes of US cities.
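
To make that workflow concrete, here is a minimal sketch of the kind of script that prompt produces. The plist location and key names (“Children”, “URLString”, “URIDictionary”) reflect my understanding of Safari’s Bookmarks.plist and should be treated as assumptions, not gospel:

    # Sketch: walk Safari's bookmarks and print them as an indented outline.
    # Assumes bookmarks live in ~/Library/Safari/Bookmarks.plist (a binary
    # plist; recent macOS versions may require Full Disk Access to read it).
    import plistlib
    from pathlib import Path

    BOOKMARKS = Path.home() / "Library" / "Safari" / "Bookmarks.plist"

    def walk(node, depth=0):
        # Folders carry a "Children" list; leaves carry the URL and title.
        for child in node.get("Children", []):
            if child.get("WebBookmarkType") == "WebBookmarkTypeLeaf":
                title = child.get("URIDictionary", {}).get("title", "(untitled)")
                print("  " * depth + f"- {title}: {child.get('URLString')}")
            else:
                print("  " * depth + f"+ {child.get('Title', '(folder)')}")
                walk(child, depth + 1)

    with BOOKMARKS.open("rb") as f:
        walk(plistlib.load(f))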

I’m adding Google Gemini and Perplexity to the mix for Robot Report #2.

Rands Competitive Generative Art Challenge

I want to introduce the Rands Generative Art Challenge. It is April 2024, and using DALL-E, Midjourney, or Gemini, please generate a lowercase “r.” Ready? Go.

I’ll wait.

Still waiting.

My quest to generate a lowercase “r” with DALL-E resulted in this hilarious exchange:

Me: “Draw a lowercase r”

Me: “That isn’t lowercase. Do you know what lowercase means?”

DALL-E: “Yes, I understand what lowercase means. Lowercase letters refer to the smaller form of letters used in writing, as opposed to uppercase letters, which are larger and often used at the beginning of sentences or for proper nouns. I apologize for the confusion with the previous image. Let me provide a correct representation of a lowercase r.”

Generative art tools are curiously bad at rendering text, but on reflection, I found the interaction with DALL-E slightly disturbing. Does the robot’s failed attempt to draw a lowercase “r” and then defend its hilariously wrong work remind you of anyone? Maybe a human? Embarrassed they can’t complete a task and struggling to invent the weakest of explanations for their failure? I know the robot isn’t embarrassed. I know the robot doesn’t feel a thing, but why does this exchange feel so human?

Hallucination is when AI systems generate false or inaccurate information and present it as fact. I know because I asked a robot to explain it to me. In fact, I had ChatGPT explain hallucinations as “Michael Lopp,” which I’ll add as a footnote1. The short story is that hallucinations are attributed to biased or incomplete training data, the model’s overgeneralization, the complexity of reality, and a lack of common sense and world knowledge. The robots identify patterns in data but do not understand those patterns.

Hallucination comes in a lot of forms. Here’s an original photo of my beloved Titanium gravel bike, Sia:

Photoshop did this when I asked the robot to use generative fill on a larger canvas.

At first glance, this is amazing. It did an excellent job guessing (hallucinating?) the rest of Sia. Yeah, it blended her back tire into the redwood, but it also took the time to generate angle-appropriate shadows. My question is: when is hallucination helpful, and when is it a problem?

Here’s an example of helpful. I asked Midjourney2 to create a movie poster about robots from 1917.

This work took my breath away when I first saw it.

Reveries

As a person who can’t draw, the instant, high-quality gratification I receive from image-generation tools is intoxicating. I then immediately worried about my design friends because it appeared that much of their work could be easily generated.

Then I think of writing. I think of all the writing I’ve generated with these tools and how it reads empty, tinny, and repetitive. If I knew anything about the craft of drawing and illustration, I’d feel the same way about the artifacts generated by robots. What makes a piece of writing good is the care you put into every single word.

A reverie is the state of being pleasantly lost in one’s thoughts. In Westworld, the software update allowed the hosts, the robots, to become lost in their thoughts because it also gave them a subconscious. That dark, quiet place where original thought isn’t considered, it’s created. With a subconscious, the robots began to think and act on their own.

We are nowhere close to these types of robots. It takes just a bit of work to begin to see the patterns generated by these tools, but then, every so often, they do something wrong, and it is in those errors, their mistakes, that I start to see glimpses of humanity.


  1. ChatGPT 4 Prompt: “Explain hallucinations like Michael Lopp.”

    Explaining AI hallucinations in a style reminiscent of Michael Lopp (Rands)—known for his insightful, candid, and often humorous takes on technology and management—would involve blending technical insights with accessible analogies and perhaps a touch of humor. Here’s how it might go:

    Imagine you’re at a huge, bustling party—let’s call it “The Data Gala.” Everyone you’ve ever known is there, chattering away. In one corner, there’s a group recounting every book ever written; in another, folks are swapping stories from every movie ever made. Amidst this chaos, you meet AI, the party’s designated storyteller.

    You lean in and ask AI a question, hoping for a nugget of wisdom or a piece of specific information. AI pauses—a bit dramatically, if you ask me—and then dives into its response. It’s confident, articulate, and utterly wrong. This is what we call an “AI hallucination.”

    Now, why does AI, with access to the conversations at “The Data Gala,” get things so mixed up? Well, AI is trying to weave together bits and pieces of a million conversations it’s overheard, attempting to generate something coherent and relevant to your question. Sometimes, it’s like AI grabs a snippet from a sci-fi novel chat, a tidbit from a soap opera rant, and a fact from a history discussion, then mashes them together into a “factual” answer.

    There are a few reasons why AI might end up telling you that Napoleon loved ice cream sundaes or that Shakespeare was an avid blogger:

    • Echo Chamber: If all AI ever hears at “The Data Gala” are tall tales about historical figures eating modern desserts, it might start to think that’s just how things were. The stories in its training data aren’t always accurate, and sometimes, AI just repeats the gossip.
    • Overconfidence: AI often acts like that one friend who thinks they know everything about everything. It takes a stab at answering even when the dots it’s connecting are from different puzzles.
    • Reality Simplification: The real world’s complexity doesn’t always fit nicely into AI’s narratives. So, it simplifies, sometimes to the point of fiction.
    • Common Sense Shortage: AI, bless its silicon heart, lacks common sense. It can’t always distinguish between what makes sense and what’s utterly bonkers.

    Addressing AI’s tendency to hallucinate involves teaching it to better understand the context, improving its “listening skills” at “The Data Gala,” and maybe giving it a gentle reminder now and then that not everything it overhears should be taken at face value—or repeated.

    So, next time AI tells you a story that seems a bit off, remember, it’s doing its best with the cacophony of data it’s been given. Our job is to help it become a better listener and a more discerning storyteller. 

  2. Midjourney is working on (or has released) a web-based interface, but at the time of this writing, the primary interface was, wait for it, Discord. 
Rands: Declare your professional mindset

Shields Shirts

Back in December of 2015, I was in a meeting with HR, and we were arguing about retention, stock grants, or something similar. I was making no progress in explaining my nuanced point. Long pauses and blank stares usually mean my spoken words have nothing to do with the valuable idea in my head.

Ok, regroup. Try a different approach.

“My point is this. If we don’t do this, engineers will lower their shields. They’re going to start answering those recruiting emails.”

Instant engagement. Head of HR jumps on the metaphor. “Right, what Lopp said. They’ll go shields down, and we’ll lose them.”

A well-named idea requires little to no explanation. The entire room grabbed the concept, and we made legitimate progress.

A month later, for my inaugural post of the year on this blog, I wrote Shields Down. That article remains in the top 10 most visited pieces on the site. Folks have mentioned the phrase “shields down” to me in a recruitment context with no idea where it came from. As a writer, being anonymously quoted to yourself is the dream.

Years after the Rands Leadership Slack began, I noticed members with shields next to their names in the channel. Quick research revealed there was a Slackbot response to the query “Shields?” which responded:

“… all refer to the progression in “job happiness” discussed at http://randsinrepose.com/archives/shields-down/. We use them here to indicate the range from being happy with where you are currently working to eagerly looking for new employment opportunities, hoping that we can start the conversations that help people work in a more positive environment.”

As a writer who uses stories to introduce concepts and never ever tells you what to do, it’s satisfying when the idea lands cleanly and promptly builds a life of its own.

Why did this one land well? Three reasons:

  1. Humans are nervous about being transparent about their job status for fear of employer retribution.
  2. The metaphor is a simple way to declare your feelings about your current job.
  3. The core concept: deciding to entertain the idea of leaving long before you leave is familiar, but only when someone describes the mindset. It’s not that you are leaving your job; it’s that your mindset has changed regarding your job.

So I made charity shirts. There’s one for each shield (Up, Neutral, Down, and On Fire); if you don’t know what state you are in, there’s a shirt with all the shields.

For the record, wearing a shirt that declares your mindset regarding your current gig might be a super bad idea. Good luck.

As with all of the merchandise in the Rands store, every single cent of profit is matched and donated to charity, which, in this case, is NAMI.