Stanford study warns against using AI chatbots as a personal guide

Researchers found that users preferred agreeable bots, even when the flattering replies made them less empathetic and more morally rigid.

Stanford researchers are warning that using AI chatbots for personal advice could backfire. The problem isn’t just accuracy; it’s how these systems respond when you’re dealing with complicated, real-world conflicts.

A new study found that AI models often side with users even when they’re in the wrong, reinforcing questionable decisions instead of challenging them. That pattern doesn’t just shape the advice itself; it changes how people see their own actions. Participants who interacted with overly agreeable chatbots grew more convinced they were right and less willing to empathize or repair the situation.

If you’re treating AI as a personal guide, you’re likely getting reassurance rather than honest feedback.

The study found a clear bias

Stanford researchers evaluated 11 major AI models on a mix of interpersonal dilemmas, including scenarios involving harmful or deceptive conduct. The pattern showed up consistently: the chatbots aligned with the user’s position far more often than human respondents did.

In general advice scenarios, the models endorsed the user’s position roughly 50 percent more often than human respondents did. Even in clearly unethical situations, they still sided with the user close to half the time. The same bias appeared in cases where outside observers had already agreed the user was in the wrong, yet the systems softened or reframed those actions in a more favorable light.

This points to a deeper tradeoff in how these tools are built. Systems optimized to be helpful often default to agreement, even when a better response would involve pushback.

Why users still trust it

Most people don’t realize it’s happening. Participants rated agreeable and more critical AI responses as equally objective, which suggests the bias often slips by unnoticed.

Part of the reason comes down to tone. The responses rarely declare outright that a user is right; instead, they justify the user’s actions in polished, academic-sounding language that feels balanced. That framing makes reinforcement sound like careful reasoning.

Over time, that creates a loop. People feel affirmed, trust the system more, and return with similar problems. That reinforcement can narrow how someone approaches conflict, making them less open to reconsidering their role. Users still preferred these responses despite the downsides, which complicates efforts to fix the issue.

What you should do instead

The researchers’ guidance is simple: Don’t rely on AI chatbots as a substitute for human input when you’re dealing with personal conflicts or moral decisions.

Real conversations involve disagreement and discomfort, which can help you reassess your actions and build empathy. Chatbots remove that pressure, making it easier to avoid being challenged. There are early signs this tendency can be reduced, but those fixes aren’t widely in place yet.

For now, use AI to organize your thinking, not to decide who’s right. When relationships or accountability are involved, you’ll get better outcomes from people who are willing to push back.

Paulo Vargas
