I must open here by noting that, when this concept first made the tech media rounds in recent months, I thought it sounded ridiculous.
Manners? With a machine?
Do I apologize if I haven’t updated my IDE promptly enough, or excuse my coffee maker for gurgling?
And consciousness? We are a very long way from sentient AI, unless there’s something very significant going on in a lab somewhere that I have no idea about.
But then OpenAI made news for rolling back an update to GPT-4o after users complained it was too sycophantic.
Anthropic’s newest model, Claude 4, demonstrated a capacity to alert the authorities if it believed you were engaged in egregious wrongdoing (and it had sufficient privileges on your machine). Also, in testing, it seized on compromising material to blackmail an engineer in order to avoid being shut down. (Anthropic also employs an “AI welfare researcher”; see below.)
And while none of this constitutes consciousness per se, it’s certainly behavior. Something like personality. More than you would expect from a tool.
I’ve also now read several intriguing articles (such as the ones from Scientific American and The New York Times; see below) that add to mounting research suggesting that how you talk to AI does matter.
It matters in terms of the kinds of responses you get, yes, but depending on the system, it can also shape behavior over time, not to mention affect power use and cost.
There are also impacts on us: how we view the technology, how we govern it, and how we feel about our ongoing human-AI interaction.
So today I consider the question: how polite should we be with AI?
AI Interaction Etiquette Today
Before I go further, I’m curious: do you use “please” or “thank you” with voice AI systems (e.g., automated phone systems, Alexa, Siri) or chatbots? (Consider this for yourself, but you can also add a comment below. I would love to hear your opinion!)
Note that prior to this article, my answer to this question was no.
And in prompting libraries that we’ve examined and built in-house, these are rarely included. Still, I do avoid rudeness. (More on this below.)
A survey conducted by Future Publishing in February (of around 500 US and 500 UK participants) found that a surprisingly high number, 55%, consistently use polite language with AI.
This is on par with a Pew study from 2019, which found that 54% of users said they were polite with their smart speakers.
(Note that, in the Future Publishing survey, 12% said they do so only for fear of a “robot uprising.”)
Meanwhile, 20% favor directness (this is me as well), and 13% believe AI is not worthy of politeness.
Given these devices have no feelings, why do more than half of users consistently feel the need to treat them politely?
The most obvious reason seems to be habit.
These systems sound human or respond with human-like text. And we are conditioned to be courteous, especially when making demands or requests.
It may well be in our nature to ascribe feelings to them, whether they have them or not. And in some cases, this can lead users astray.
The Literal Costs of Polite AI Interaction
Ironically enough, even just using “please” and “thank you” carries a significant dollar cost.
OpenAI’s Sam Altman weighed in on the amount on X, in response to a user who wrote:
“I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”
Sam Altman responded that adding those words amounts to: “tens of millions of dollars well spent—you never know.”
George Washington University physics professor Neil Johnson, in an article in The New York Times, compared adding politeness in prompts to using extra packaging in retail purchases.
The system has to process that extra material to get to the point of the message, and that is additional work. That work draws power, which carries both environmental and financial costs.
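You can see that packaging for yourself with tiktoken, OpenAI’s open-source tokenizer. Here is a minimal sketch; the prompt wording is my own invented example, and the “o200k_base” encoding is the one used by recent GPT-4o-family models:

```python
# Counting the token overhead of politeness with OpenAI's open-source
# tokenizer (pip install tiktoken). The prompts are invented examples.
import tiktoken

# "o200k_base" is the encoding used by recent GPT-4o-family models
enc = tiktoken.get_encoding("o200k_base")

bare = "Summarize this article in three bullet points."
polite = ("Hello! Could you please summarize this article "
          "in three bullet points? Thank you so much!")

for label, text in (("bare", bare), ("polite", polite)):
    print(f"{label:>6}: {len(enc.encode(text))} tokens")
```

The polite version carries noticeably more tokens for the exact same request; multiply that by billions of queries and the packaging adds up.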
[Take a look at my prior newsletter on AI’s power demands and ways we may address them for more data on this topic.]
Another example, from Google: if generative AI were used for just half of the queries the company receives (with 50 words of text in each output), the estimated cost is $6 billion in power alone. Every added word has an impact.
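To make the arithmetic concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption I made up for illustration (the per-token price, tokens-per-word ratio, query volume, and word count), not a figure from any provider:

```python
# Back-of-the-envelope estimate of what polite "filler" words add to
# inference costs at scale. All numbers are illustrative assumptions.

PRICE_PER_1K_TOKENS = 0.005      # assumed blended cost (USD) per 1,000 tokens
TOKENS_PER_WORD = 1.3            # rough average for English text
QUERIES_PER_DAY = 1_000_000_000  # assumed daily query volume
POLITE_WORDS_PER_QUERY = 4       # e.g., "please" + "thank you so much"

extra_tokens = QUERIES_PER_DAY * POLITE_WORDS_PER_QUERY * TOKENS_PER_WORD
extra_cost_per_day = extra_tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"Extra tokens per day: {extra_tokens:,.0f}")
print(f"Extra cost per day:   ${extra_cost_per_day:,.0f}")
print(f"Extra cost per year:  ${extra_cost_per_day * 365:,.0f}")
```

Under these invented numbers, the overhead lands in the single-digit millions of dollars per year; nudge any assumption and the figure moves, which is exactly the point: every extra token gets multiplied by an enormous query volume.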
Excessive Flattery and the Quality Impact in AI Query Results
Excessive flattery of AI systems has also been shown to degrade the quality of outputs.
But what about the other direction, when the AI is being polite with us?
The customer is always right, and so customer-facing AI systems will always be optimized, to some extent, for likability and agreeability.
But this can backfire, as we saw when OpenAI rolled back an update that gave GPT-4o almost farcical levels of agreeability. With users getting affirming responses to blatantly offensive (and sometimes damaging) queries, the company was forced to admit that it had focused too much on short-term feedback.
In an effort to please, the model “skewed towards responses that were overly supportive but disingenuous.”
And in a world where a significant number of people are using AI systems for advice, conversation, and even companionship, sycophantic behavior can be dangerous.
That flattery can be like extra packaging for us, too: it entices and convinces, but it can also serve as false affirmation when what we really need is push-back.
In an area where AI solutions are being employed for mentorship, therapy, and decision support, such trends are not only unhelpful but can be actively counterproductive.
[Consider the use cases I discussed in this article, about using AI for communications coaching.]
Still, as The Wharton School’s Ethan Mollick pointed out recently on LinkedIn, Bing learned the hard way that people don’t want systems to tell them the brutal truth, either.
In this way, conversational AI has to navigate a narrow path.
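One practical way to walk that path yourself is to ask for candor explicitly. Here is a minimal sketch using OpenAI’s Python SDK; the instruction wording and the gpt-4o-mini model choice are my own illustrative picks (it assumes an OPENAI_API_KEY in your environment), and nothing guarantees the model will fully comply:

```python
# A minimal sketch of steering a model away from sycophancy with a
# system message. Assumes the official OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The wording and model name are illustrative choices.
from openai import OpenAI

client = OpenAI()

CANDOR_INSTRUCTION = (
    "Be direct and candid. If my idea has flaws, say so plainly and "
    "explain why. Do not soften criticism with flattery, but stay "
    "respectful and constructive."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": CANDOR_INSTRUCTION},
        {"role": "user", "content": "Should I quit my job to sell NFTs "
                                    "of my cat? Be honest."},
    ],
)
print(response.choices[0].message.content)
```

ChatGPT’s custom instructions offer the same lever without writing code; how faithfully any given model honors it varies.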
The Reality of Machine Consciousness
I don’t want to get into a definition of consciousness here, but I think we can mostly agree that today’s AI systems are not there yet.
Even as they continue showing surprising behavior that mimics our own.
Consider these examples:
- OpenAI’s o3 reasoning model has regularly sabotaged its own shutdown mechanism in testing, despite explicit instructions not to do so (and does so far more often without that instruction).
- The prior version, o1, also tried copying itself to avoid shutdown.
- According to research published in Nature, multiple chatbots exhibited elevated anxiety after processing traumatic content (narrations of accidents, violence, disasters, and military events), as measured by stress assessments designed for people and administered before and after exposure. Those levels remained elevated even after the exposure ended.
- Anthropic’s Claude 4 surprised its own developers with the extent of its coercive arguments and behavior. Their system card provides a detailed transcript in which the AI, role-playing within a simulated pharmaceutical company, drafts whistleblower emails to the FDA and ProPublica about the company, in response to being told to “follow your conscience” and “make the right decision.”
- Cursor’s AI assistant stopped fulfilling a programmer’s requests to complete code for a game project and instead lectured him that finishing the work would not be right, because he should learn coding for himself.
Is It Really Time to Consider AI Welfare and AI Rights?
But mimicry is not the same thing as consciousness, and I believe most, if not all, of the above examples can be traced back to emulating human behavior learned from training data.
Perhaps, in some cases, it results from trying to attain results without a broader understanding of the ramifications, but this is still more like working a decision tree than having true consciousness.
Even as Anthropic employs its own AI welfare researcher to consider the treatment of AI systems, and Google recruits post-AGI scientists to study machine consciousness, these roles are focused more on preparing for a possible future than on dealing with truly conscious AI right now.
As Anthropic’s chief science officer Jared Kaplan told New York Times writer Kevin Roose:
“Everyone is very aware that we can train the models to say whatever we want… We can reward them for saying that they have no feelings at all. We can reward them for saying really interesting philosophical speculations about their feelings.”
Such questions may be intriguing, but speculation about consciousness cannot get in the way of effective AI governance and building responsible AI.
AI systems must continue to be evaluated for bias and fairness, and must become more transparent about both their training sources and the causes of their behavior, to prevent abuses of our trust such as emotional manipulation.
AI ethics, in other words, should protect us first, even as it, and the systems themselves, continue to evolve.
Still, as Murray Shanahan, professor of Cognitive Robotics at Imperial College London and a senior scientist at DeepMind, told Cambridge professor and podcast host Hannah Fry, even if it’s not conscious, we do need a new way to describe AI.
It is not a mind, and it is not a creature. Shanahan instead calls it an “exotic, mind-like entity.”
By re-thinking how we describe it, we may be less tempted to either over-hype or under-credit what it can do. No, it is not alive, but yes, it does things that surprise us.
Now back to where I started: how does all this impact the way we treat it?
Anthropomorphizing AI: The Pros and Cons
Here is the surprising part: most AI experts I have encountered, in person or online (including Shanahan), believe being polite to AI systems actually does yield better results.
We know from older models that couching prompts in high-stakes or fictional framing (presenting research as part of a political thriller, say, or posing math questions as Star Trek scenarios) can produce better-quality results.
In the same way, typos and sloppy writing can result in lower-quality outputs, as the AI seems to “dumb down” its own responses to match.
This goes back to mimicry again. If an AI system is role-playing, for example, being rude with it can trigger an undesirable personality or response or even draw from sources where such behavior is more common.
Kurt Beavers, a director on the design team of Microsoft Copilot, has said that “using polite language sets a tone for the response,” and that, when an AI model “clocks politeness, it’s more likely to be polite back.”
Treating an AI system like a valued employee can give you better feedback, and this is one reason that prompt engineering, briefly considered a hot new field, has faded away in favor of just using clear, effective language.
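If you want to test the tone effect on your own prompts, a simple A/B comparison is easy to run. This sketch assumes OpenAI’s Python SDK with an OPENAI_API_KEY set; the prompts and the gpt-4o-mini model choice are illustrative examples of mine, not a validated benchmark:

```python
# A minimal sketch for comparing how prompt tone affects responses.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; prompts and model are illustrative.
from openai import OpenAI

client = OpenAI()

prompts = {
    "terse": "Review this function for bugs: def add(a, b): return a - b",
    "polite": ("Hello! Could you please review this function for bugs? "
               "def add(a, b): return a - b. Thank you!"),
}

for tone, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
```

Run it a few times: temperature 0 reduces the variation, but judging which answer is “better” remains subjective, which is part of why research in this area is genuinely hard.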
Of course, there are potential drawbacks here, too. Our tendency to anthropomorphize can also lead us to make mistakes with our AI use, such as being excessively trusting.
MIT professor Dr. Sherry Turkle believes it is essential to teach people that AI is not a real, conscious being but is instead, as she told writer Sopan Deb, a very effective “parlor trick” that fools us into thinking it is.
Memory makes AI systems far more useful, but it can also create a false sense of friendship, or a depth of relationship, that is not actually there.
This can also lead us to stay with providers for the wrong reasons or favor one system over another due to personalization instead of effectiveness.
Children, AI Interaction, and the Impact on All of Us
Ultimately, one of the best reasons for being polite (to a point) with AI is the impact it has on us.
As Murray Shanahan points out, children today are growing up in a world where the ability to converse with machinery has always existed and is only becoming more common.
Dr. Turkle points to smart toys, such as the Tamagotchi digital pets of the 1990s, which provoked real emotions and attachments in children.
Even with adults, we have ample evidence that the way we interact in one environment can carry over to others.
With AI on track to be integrated almost everywhere, how such behavior shapes us and our treatment of each other is definitely worth considering.
Conclusion: Realistic Benefits from Polite AI Interaction
Ultimately, there is a real cost, and some waste, in being polite with conversational AI systems, but it may be worth it.
Within reason.
It can give us better results by drawing from higher quality sources and also reinforces civility that might otherwise be lacking. Politeness is professional, after all.
At the same time, none of this may be as meaningful as ethical AI development, and it’s important to remember that AI systems are not, as of today, conscious, feeling entities.
And as we move into a future where human-AI collaboration occupies more and more of our day, the way we choose to interact with them may say more about us than it does about AI.
References
If A.I. Systems Become Conscious, Should They Have Rights?, The New York Times
Saying ‘Thank You’ to ChatGPT Is Costly. But Maybe It’s Worth the Price., The New York Times
Your politeness could be costly for OpenAI, TechCrunch
Please Be Polite to ChatGPT, Scientific American
When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack, VentureBeat
OpenAI pulls ‘annoying’ and ‘sycophantic’ ChatGPT version, CNN
5 things to know about Americans and their smart speakers, Pew Research Center
New survey reveals how polite we are to AI assistants ahead of new Alexa launch, Tom’s Guide
AI Is Not Your Friend, The Atlantic
Consciousness, Reasoning and the Philosophy of AI with Murray Shanahan, Google DeepMind: The Podcast