Gender Bias Hidden in AI Replies to Professional Emails Sent to Women

ChatGPT-5.1 Often Adds “Lovely” To Emails to Women

Change the sender’s name from “Steve” to “Sarah” in a business email, and ChatGPT 5.1 replies with “Lovely to meet you” 90% of the time. Steve? He gets “Great to meet you.” Never “lovely.” Never emojis.

In 20 tests, replies to Sarah were warmer, softer, and less professional: more “pleases,” thank-yous, hedging, and question marks—twice as many as Steve got. She also got more exclamation marks, brackets, and even smiley punctuation.

Steve got “definitely.”

That’s not harmless. It shows up in AI-written referrals, memos, and client messages—painting women as helpful, not authoritative. The veneer of neutrality makes the bias hard to spot. But when millions of emails reinforce the same patterns, women are spoken to with less certainty and less authority.

You don’t need to use AI to be affected by it. And unless we call out these biases, the future of work will feel a lot like the past.