Originally posted on Ghost as Weeknote, July 4: Ghost in the machine; A heat workshop on a cold morning.

Work things

The company I work for was one of the earliest AI startups in freight technology. We mainly automate operations and accounting processes. Working with Expedock was what inspired me to actually go back to school and understand “what AI is”.

  • Before this, I came from a legacy bank, where I’d been for ~5 years; it was far from implementing any form of AI. I’m also a bit of a Luddite. Basically, I would not have organically done a deep dive on AI on my own.
  • So, back in 2021, working with Expedock gave me a front-row seat to the AI “show” before the LLM and agent hype broke out.

One challenge we run into quite often at an AI-focused startup is where to draw the line between automation and human intervention.

Now, having gone to classes for AI Foundations (thanks, Amy!), I have a clearer understanding of why having a “human-in-the-loop” is a critical piece of practically any AI SaaS tool for essential tasks.

You could be the “human-in-the-loop”, or it could be someone else, but the point is that someone needs to be in the loop.
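Here’s a minimal sketch of the pattern, purely for illustration (the function names and the invoice example are hypothetical, not our actual pipeline):

```python
def extract_invoice_total(document_text: str) -> str:
    """Stand-in for the AI step: returns the model's best guess."""
    return "1,250.00"  # hypothetical: in reality, a model prediction -- a guess, not a fact

def human_review(document_text: str, ai_guess: str) -> str:
    """The 'loop' part: a person confirms or corrects the guess before it's committed."""
    answer = input(f"AI extracted total: {ai_guess}. Enter to accept, or type a correction: ")
    return answer.strip() or ai_guess

document = "Invoice #481 ... TOTAL DUE: 1,250.00 USD"
final_total = human_review(document, extract_invoice_total(document))
print(f"Committed total: {final_total}")
```

The specifics don’t matter; the shape does: the model’s output stays a draft until a person signs off.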

Let me oversimplify here:

  • Artificial intelligence largely works on “best guesses”: it returns the most likely answer.
  • “Most likely” is not the same as “correct” (there’s a toy sketch of this below).
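To make that concrete, here’s a toy illustration (the words and probabilities are made up, not from any real model) of how picking the most likely answer can still pick a wrong one:

```python
# Prompt: "The capital of Australia is ..."
# A language model scores candidate next words and returns the top one.
next_word_probs = {
    "Sydney": 0.46,    # most likely guess -- and wrong
    "Canberra": 0.41,  # correct -- but less likely
    "Melbourne": 0.13,
}

best_guess = max(next_word_probs, key=next_word_probs.get)
print(best_guess)  # "Sydney": the most likely answer, not the correct one
```

Nothing in there checks the answer against reality; the model just ranks options and takes the top one.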

To illustrate what this means for user experience (UX): I once designed a “confidence tracker”, or accuracy bar, in our interface, partly thinking it would a) look cool and b) let users assess which documents they needed to review.

The Engineering team had to explain to me that that wasn’t a meaningful or useful data point.


After taking two whole semesters on AI foundations and application, the topic that tripped me up the most was: limits.

For over a week, I couldn’t understand what the concept meant or why it mattered.
Apparently, it matters because AI doesn’t care that it hits the mark exactly. It only cares about getting as close to the mark as it can. And then it stops there.
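That’s the mathematical idea of a limit in action. Here’s a minimal sketch using plain gradient descent on a toy function (the learning rate and tolerance are arbitrary choices for illustration):

```python
# Gradient descent on f(x) = (x - 3)^2. The true minimum is x = 3,
# but the loop stops once the step size falls below a tolerance:
# close enough, never exact.
x = 0.0
learning_rate = 0.1
tolerance = 1e-4

for step in range(10_000):
    gradient = 2 * (x - 3)       # derivative of (x - 3)^2
    update = learning_rate * gradient
    x -= update
    if abs(update) < tolerance:  # "good enough" -- approach the mark, then stop
        break

print(x)  # ~2.9996: near the mark, not on it
```

Training a real model is vastly more complicated, but the stopping logic is the same in spirit: get within tolerance of the target, then stop.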

That’s where we – humans – come in.

Why do you need to keep giving ChatGPT feedback? Why do you need to review what it’s saying? Why shouldn’t you just copy-paste its output into a test, especially if it’s a code exercise (*cough cough*)? Because it doesn’t care about being 100% right.

In fact, I once tried to make it track a list of items for me. I realized that, even on the paid version, once my list hit 12 items, it would keep dropping or forgetting one. I’d have to remind it to put the item back on the list, which often led to it fumbling a different item instead.

And that proportion tracks: 11 out of 12 items right is about 92% accuracy, which sounds about right for a machine learning model.


Circling back to work-relevant things this week: We often find ourselves needing to explain why there’s a turnaround time for an AI-automated process (“Shouldn’t it be instant?”).

Yes, I thought so, too. I know better now, though.

As an Engineering leader told me, “Angela, if an important document is 80% accurate versus 98% accurate, wouldn’t you still check both?”

Yes, I would.

Life things

This part actually connects to what I just said about work.

Saw this LinkedIn post today (and I threw in a related one, for more context).

[Image of the LinkedIn post; link to the post is in the image]

Of course it failed an ethics test. IT’S A MACHINE.

What are we expecting, people? That it will learn a conscience? (Many humans seem to have lost theirs!) And we’re thinking the learning machine can now think of greater humanity, too?

To be fair, I did also watch I, Robot. And, yes, the scene where it suddenly started questioning things, and what it meant to hurt or not hurt humans, was… mentally itchy (?).

But it seems many AI fans have confused the “learning” in machine learning with human learning. LLMs are pattern-recognition machines built on math.

Ethics…is not math.

Ethics and morality are philosophical questions; there is no single approach to them. I can remember this because one of my three favourite college classes was the Philosophy of Morality, where we pitted the five great theories of morality against each other.

Now, maybe we can train the LLM on ethics, yes. But that’s precisely the thing: it has to be fed into the machine. It won’t ruminate on ethics on its own.

Rant done. For now.
