Thursday, August 28, 2025

ChatGPT: Should you say ‘thank you’ to ChatGPT?


=====

*"Politeness, courtesy and treating others with dignity are not just moral choices, they are matters of habit."*

=====

https://www.straitstimes.com/opinion/should-you-say-thank-you-to-chatgpt

2025-08-26

Stephen Bush


During one Passover visit to see my American cousins, I was taken aback when, having asked Alexa to play music, her response to me saying “please” was a surprisingly flirtatious commentary on my good manners.

I don’t say “please” to a smart speaker because I think the sound system’s feelings will be hurt otherwise. I do it because the rule that you should say please when you ask for something has been so ingrained in me that it has become muscle memory.

So how polite should we be to machines?

One answer is contained within my accidental flirting with Amazon’s smart assistant: Politeness, courtesy and treating others with dignity are not just moral choices, they are matters of habit. We should practise them at all times because if we grow accustomed to barking demands at computers, we will soon start doing it to human beings, too.

I say please when I ask ChatGPT something (usually for help in coding) in part because I try to say please and thank you when I write to a person, and that is as much a habit as the rather strange little flourish I make with my little finger when I hit the space bar on my keyboard.

I don’t think these are things that I should seek to unlearn; if I taught myself to stop saying please to the automated help desk that my bank first connects me to, I would stop being polite when I am forwarded on to a human for a more complex case.

I don’t know if Mr William MacAskill, the philosopher and a key proponent of “longtermism”, says please and thank you to Alexa, but he recently announced on X that when a large language model (LLM) has “done a particularly good job”, he gives it a reward: “I say it can write whatever it wants”.

This isn’t because of an ingrained habit; it is about rewarding the LLM as if it were a person. Longtermists, who think that we should care about future generations as much as we do the present, are also preoccupied with the arrival of general-purpose artificial intelligence with the capacity to reason and think as well as or better than any human being.

This is a good example of how longtermism contains one very good idea and a number of mad ones. The concern for the future is the good idea. But in practice longtermism often means pontificating about things that might happen in the future, which we can neither control nor understand, while turning a blind eye to real problems in the present.

It is legitimate to ask such questions as: “If the machine is smarter than the human, shouldn’t we let it make some choices for itself?”

But we should be asking with greater urgency: “Given that many people believe their chatbot can do things that it cannot and are taking great risks, what can we do to protect them?”

It is not clear whether we will ever have intelligent machines that are capable of general reasoning, or that have genuine desires and wants as a human being does. Meanwhile, we do have real problems of people doing actual harm to themselves because they convince themselves that the chatbot they are talking to is a real person.

A 76-year-old man, Mr Thongbue Wongbandue, left his home to “meet” a chatbot he’d become infatuated with and died in an accident making the journey.

A 35-year-old with bipolar disorder fell in love with a chatbot, became convinced that ChatGPT had “killed” her and ended up in a fatal stand-off with police.


Where Mr MacAskill has the seed of a good idea is that the moment at which a machine becomes intelligent enough for us to be concerned about how we treat it may not be obvious.

The whole of human history shows that our willingness to deny rights and dignity to other people is terrifyingly powerful; smart machines are unlikely to fare much better.

But in worrying about how we should seek to “reward” a machine intelligence which may never emerge, and about which we understand little, we distract ourselves from real problems facing human beings in the here and now.

We are much better placed to address these human problems today than to spend time and energy on the potential plight of machines tomorrow.

Part of what would allow us to treat machines respectfully, and allow people to stop doing mad things at the urging of chatbots, is to see intelligent machines as just that: machines, not weird proto-humans.

Or, at the very least, to programme them with the ability to tell users to leave them alone and stop bothering them when they ask a question that no machine should ever have to answer. FINANCIAL TIMES
