Issue# 4: Bias, where?

AI-generated cartoon of Jay McGrane sitting at a desk working at a laptop with a note that says “bias,” a maple-dip donut and 2 Tim Hortons coffees.

Written by Jay McGrane with an assist from Paula

Everyone knows AI is biased. Why don’t we see it?

Any guesses on how many of Paula’s students have found bias in AI content they generated themselves?

One.

Weird, isn’t it? You’d think it would be higher for such a widespread problem.

To be clear, I do not want to downplay the issue of bias in AI-generated content. We know from research that AI systems reproduce sexist, racist, and any other “-ist” content you can think of. Most people even bring up bias as one of their top concerns about using AI in the first place. 

Yet both Paula and I struggled to think of clear examples we faced on a daily basis. 

This issue will explore the tension between our ethical concerns and how bias shows up (and gets forgotten) in our day-to-day interactions. Today, I’ve added the unscripted dialogue of our conversation to show you how difficult it is to pinpoint the less stereotypical aspects of AI bias.


What was that student’s example of AI bias, you ask?

“Empowered.” 

The AI described female business owners as “empowered.” However, it never used this word when describing men. So… men get to be just plain ol’ business owners, but women have to be “empowered” to leave our kitchens. Definitely reinforcing gendered stereotypes with that language! 

Gender bias is pretty easy to spot. But what about the kind of bias that you can’t pinpoint… because it’s operating by omission? Even worse, you won’t be able to tell what voices and perspectives the AI has omitted. I’m calling it omission bias.


Gotta say, this one worried me and Paula didn’t make me feel any better…

Jay: Omission bias was one thing that stood out for me. What has the AI omitted that I've not noticed, right? Bias does not only operate as what is created, but a lot of bias operates on what's omitted. Whose voices and perspectives has the AI not given me?

Paula: When OpenAI first released image generation, they shared a sample set in Williamsburg, New York — one of the oldest ultra-Orthodox Jewish communities in the world. The AI-generated image included witches and magical elements, which would be deeply offensive to that community.

The problem wasn’t just what was in the image. It was that no one seemed to consider it might be offensive at all. Humans miss things.

Jay: Exactly. And I think that’s what bothers me most about omission bias. It’s almost impossible to notice. If it wasn’t flagged, how would you know it even happened?

Then Paula pointed out something even worse than omission bias… what if the bias is coming from me?

Paula: People bring their bias in. Even if the bias is somewhat mitigated... if you are bringing bias in, if it is not trained to catch that, then it is going to produce bias.

Jay: For me, in my daily life... AI is by far too affirmative of what I'm doing to be a truly useful thinking partner.

Paula: It's made to affirm whatever you're doing. So if you are bringing bias in... it is going to produce bias.

(Feeling called out? Same.
I built a short 3-email course to help you clean up your prompts, sharpen your tone, and train your AI to actually sound like you.
👉 Sign up here. No fluff, just strategy.)

There you go, folks. You’ve got a few unadulterated snippets from us and you’ve discovered that we have no words of wisdom for conquering bias in AI content.

Only more questions…


Why bias strikes fear into my English teacher heart

As I wrote this newsletter, I came back to this quote from Vauhini Vara in The Guardian:

“when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English.”

She’s right. 

Beautiful texts exist in dialects. Think Huck Finn. Or Redwall. Or Things Fall Apart. Or a multitude of other writers.

Textual diversity is beautiful. The world will lose something if we all begin writing in the same upper-crust tones of standardized English. 

As someone with an English degree, you’d think I’d fit perfectly into the AI landscape… but even I fight back. To be clear, my voice is in no danger of being erased as a white, Canadian woman. 

(Paula here: she’s politely deconstructing language bias with Tim Hortons in hand, obviously, eh? 🇨🇦)

I still stand up for my voice in one minor way, though. I still write in short sentences. Blithely ignoring the fact that I create sentence fragments as I go… 😙🎵

What can you do about bias today?

Paula made the point that the same training that re-creates bias can also be harnessed for good. You can train your AI to suit your own style. Mine has already begun writing in shorter sentences. 

To all the small business owners out there, this quirk of AI presents a huge opportunity: 

➡️ Train your AI on your style guide — sentence length, tone, vocabulary, all of it. Not just to avoid bias, but to reclaim clarity and precision.

Like most things with AI, mitigating bias comes down to awareness. The more aware you are of bias, the more likely you are to notice it when it comes up. 

Paula and I don’t think we’ll ever be fully done exploring this topic. Bias in AI is a moving target. We’re still figuring out how to spot it and push back. 

What about you? Do you have any great examples of bias in AI?

Here’s to staying human with our (biased) AI friends!


What I’m doing as a parent in the AI chaos (Jay)… 

My daughter and I have been editing AI-generated work. I’ve long thought editing is an under-taught skill, but it’s difficult to convince children to make large edits. They spend so long writing anything that it feels heartbreaking to cut or change even a single word. 

AI gave us a playspace to re-write most of the story. In fact, we re-generated our first version entirely! Then, we made a few minor adjustments. 

I did notice some bias in this AI-generated story about vampires, though. We re-named Victor to Willie, but the AI still added a Vlad living in a gloomy mansion. Vampire discrimination right there.

What tools I’m playing with (Paula)...

This one is for all the AI haters out there — who will probably hate me even more after this week’s tool.

I’ve been using Suno to create songs with lyrics written by ChatGPT. (Really — they’re hilarious. You can listen.) I started using it to teach AI literacy to people usually left out of tech conversations. It’s fun, accessible, and helps make complex ideas easier to grasp. 

But I know there are trade-offs.

Maybe I’m taking work from musicians by not hiring them (though I wouldn’t have anyway) or by normalizing a process that leaves them out. Maybe I’m reinforcing something bigger: whose work gets valued, and what kinds of creators get left behind. And yes, one of the songs sounds suspiciously like John Denver. So now we’re in copyright territory, too.

What’s the ethical answer here? I’m not sure. (Though, yes, I did co-write a song about that, too.)

But here’s what surprised me…

My father-in-law, who has Alzheimer’s, was recently hospitalized with pneumonia after a root canal infection. To entertain him, I co-wrote a song, “The Tooth That Took Me Down,” in a bluegrass style, retelling the story of what happened. It made him laugh so hard that he still asks to hear it several times a day!

Bias, copyright, and tool debates aside – it helped. Isn’t that the point of technology?


If you’re thinking more seriously about how to use AI in your work or classroom…

I’ve got classes coming. Human-centered. Accessible. Made for people like us. Click here to join the list.


We hope you’ve enjoyed our thoughts on how to keep the human at the center of AI. We’ll be back in 2 weeks with our next installment! Until then…

https://www.linkedin.com/in/paulamcconnell/
https://www.linkedin.com/in/jaymcgrane-edtechwriter/

Jay & Paula


P.S.: Also, yes — that’s AI-generated art.

And yes — it gave Jay two Tim Hortons coffees. One to stay, one to go. ☕☕

The sticky note says “Bias?”

Which is funny, considering the AI also told me maple dip was the most beloved Canadian donut.

Jay texted me when I sent her the image: “Because there aren’t any stereotypes at work in your source…”

AI, you’re doing great. But you’re still weird.


AI disclosure: We use Riverside to record the conversation for the future podcast. Jay writes the newsletter (no AI), pulling quotes from the transcript with ChatGPT. Paula takes the final newsletter, adds her part with an AI assist, creates the image, and loads it into Flodesk. So sure, we use AI tools, but it is built on very human conversations and Jay’s excellent writing.
