Sunday, December 22, 2024

Please avoid seeking moral advice from ChatGPT

Is it right to inform a friend that their partner is cheating on them? Should I step in when I hear an inappropriate joke?

In situations where our sense of right and wrong is at stake, we often seek advice. Nowadays, people can also turn to ChatGPT and other large language models (LLMs) for guidance.

Many people find the responses LLMs generate in moral dilemmas to be trustworthy, reliable, and nuanced: in one preprint study, participants favored LLM responses over those of Kwame Anthony Appiah, the New York Times' "Ethicist" columnist.

Studies indicate that LLMs can provide sound moral guidance. One study even found an AI’s reasoning to be “superior” to a human’s regarding virtuousness, intelligence, and trustworthiness. Some researchers suggest that LLMs can be trained to offer ethical financial advice despite being “inherently sociopathic.”

Yet the claim that LLMs offer superior ethical advice rests on some questionable assumptions. Research indicates that people do not always recognize good advice when they see it, and the content of advice is not always the most important factor in its value. Social connection plays a significant role in working through dilemmas, particularly moral ones.

In a 2023 study, researchers found that perceived expertise strongly influenced how seriously people took advice, even though perceived expertise and actual advice-giving ability do not always align. People also tend to underestimate the value of sharing subjective, personal details relative to neutral, factual information.

While seeking moral advice from LLMs may seem convenient, interpersonal relationships offer social benefits that these models cannot replicate. Research shows that social interactions, even if uncomfortable, are often enjoyed more than expected.

When it comes to moral advice, it is crucial to evaluate the source and consider multiple perspectives. Sometimes reframing a debate that involves strong moral convictions can lead to more practical solutions. LLMs, which are sensitive to how questions are phrased, can give inconsistent answers and so make a poor sole source of guidance.

Proceed with caution when seeking advice from LLMs. Human judgment, social connection, and personal values all play essential roles in decision-making, especially in moral matters. For a well-rounded perspective, consult friends as well as, or instead of, an LLM.

Are you a scientist specializing in neuroscience, cognitive science, or psychology? Have you read a recent peer-reviewed paper that you would like to write about for Mind Matters? Please send suggestions to Scientific American‘s Mind Matters editor Daisy Yuhas at dyuhas@sciam.com.

This opinion and analysis article presents the views of the author(s) and may not necessarily reflect those of Scientific American.
