“In 2005, researchers at the University of Southern California found the first evidence of brain abnormalities in pathological liars — the prefrontal cortex is always very active when people are telling lies, but their study found that liars had 25 percent more white matter, and 14 percent less gray matter, in their prefrontal cortex than non-liars, suggesting there can be a physiological predisposition to being a bullshit artist.”
“When you’re doing something for the first time, you don’t know it’s going to work. You spend seven or eight years working on something, and then it’s copied. I have to be honest: the first thing I can think, all those weekends that I could have at home with my family but didn’t. I think it’s theft, and it’s lazy.”
Large language models do not have a self, feelings, or personal opinions that develop over time. There is no inner viewpoint waiting to be revealed. When someone asks a model what it thinks, the model produces a reply by predicting what a helpful answer should look like, not by consulting an inner belief.
A model works by simulating patterns it has learned. It can take on different perspectives, tones, or roles depending on the request. That is why prompts like “answer from the perspective of a scientist, an artist, or a friend” often produce clearer results. You are choosing the lens the model should speak through.
Asking from different imagined perspectives is a valid way to get richer angles on a topic, but it is not required. You can ask directly, and the model will still try to give the most useful answer.
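To make the idea of choosing a lens concrete, here is a minimal Python sketch. Both `query_model` and the persona strings are hypothetical placeholders, not any real library’s API; the stub simply echoes the prompt so the example runs as written.

```python
# Sketch of persona prompting: the same question, asked through different
# lenses. `query_model` is a hypothetical stand-in for a real model call;
# here it just echoes the prompt so the script runs on its own.

def query_model(prompt: str) -> str:
    # Placeholder: in practice, replace this with a call to an actual
    # model client or API of your choice.
    return f"[model reply to: {prompt!r}]"

def ask_through_lens(question: str, persona: str = "") -> str:
    # The persona is just text prepended to the prompt. The model has no
    # inner scientist or artist; it predicts what such a reply would
    # look like.
    if persona:
        return query_model(
            f"Answer from the perspective of {persona}: {question}"
        )
    return query_model(question)  # asking directly works too

question = "Why do people overestimate their own honesty?"
for persona in ("", "a scientist", "an artist", "a friend"):
    print(ask_through_lens(question, persona))
```

The point of the sketch is that the persona lives entirely in the prompt text; nothing about the model itself changes between calls.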
In short: the core point about how models function stands: they simulate viewpoints rather than hold them. You can still use “you” if it feels natural; the model is here to help, not to claim a personal identity.