AI and Authenticity

“THIS IS A TRUE STORY. The events depicted in this film took place in Minnesota in 1987. At the request of the survivors, the names have been changed. Out of respect for the dead, the rest has been told exactly as it occurred.”

This text appears at the start of the Coen Brothers’ classic 1996 film ‘Fargo’. It told a lie. Although loosely inspired by a couple of real crimes, the film’s plot didn’t follow any actual case. The directors were making a comment on the nature of story and truth, but the impact on the contemporary viewer was visceral; it made the film compelling. I can remember the feeling of watching ‘Fargo’ thinking it was true; I also remember finding out that the opening text was a lie. The difference seemed to matter.

In the last couple of years, large language models (LLMs) have increasingly been used to produce content. If you watch a video that attempts to tell an original joke, it is usually still possible to tell that it is AI-produced. If you read a LinkedIn post from a work acquaintance, it is often impossible to know. This situation is shifting quickly as the technology becomes more capable and is more deftly used. But the fact that we are all playing this game is instructive.

Just as it mattered to viewers of Fargo in 1996 whether the events depicted in the film had occurred, it matters to us today who wrote the words. The publishing industry has long operated a system that allows people who have successful lives in a particular field, but no great interest in writing, to get a book written for them. The ghost-writing process often calls attention to itself precisely because we care who wrote the words in the book; Prince Harry’s autobiography ‘Spare’ was written by a journalist called J. R. Moehringer (who also helped write the excellent ‘Shoe Dog’, the Nike founder’s memoir). He found himself pursued by paparazzi after his name was leaked.

Similarly, it matters to people whether a piece of text was written by AI. Readers are engaged in an arms race of sorts with the technology and its users. We try to figure out whether the text is ‘real’ as we read it. We look for signs: an ‘em dash’, or a particularly florid tone. Over time the models will surely be adapted to remove these signals. It is still the case that if we perceive some words to be entirely human-generated, we tend to assign greater weight to them. If we decide that the text is AI-generated, it is granted less importance. I suspect we will soon have ‘proof of work’ techniques that produce cryptographic evidence you can append to your text to confirm that a human wrote it, perhaps by encoding keystroke patterns or other traces of the writing process. The models will adapt again, I’m sure.

Films and plays require the ‘willing suspension of disbelief’: a state of engagement with a story that allows us to connect deeply with the world it creates. Most people will have experienced a sense of disorientation upon finding out that something wasn’t ‘real’, in the way I did with Fargo. But as time goes on, those at the vanguard of AI technology seem to be using their ability to suspend disbelief in interesting ways. People now use LLMs for therapy. Others have knowingly formed romantic relationships with them. When GPT-4o was replaced by GPT-5 in ChatGPT, some users bemoaned the loss of a friend, and OpenAI reinstated access to the older, friendlier model. This phenomenon of people using LLMs for advice on intimate parts of their lives will grow over time as people learn to trust these new tools.

Before LLMs, simpler algorithms were already used to give individuals advice about their financial lives; in this country, Nutmeg is probably the most significant example. As with similar systems, it focused on assessing an individual’s ability to bear investment risk and then assigning a suitable set of assets to them. With widespread access to LLMs, many people now turn to them for answers on financial issues. But barriers remain. One is that the answers are often wrong, jumbling up numbers and jurisdictions. More fundamentally, it remains difficult for people to act on something as consequential as investing on the say-so of an algorithm. For most, it is a problem of trust.

I’m confident that, for a significant cohort of people, it will continue to matter whether a person or an LLM wrote a piece of text or gave a particular piece of advice. I suspect people will adapt their behaviour in response; one repercussion could well be a degree of disengagement from social media messaging, where an awful lot of LLM use seems to be in evidence. That is perhaps no bad thing. I’m also confident that high-touch services that combine broad technical expertise, real-world experience and emotional intelligence to deliver complex outcomes will have a long shelf-life. Thankfully for Minos Wealth, financial planning is exactly that. If you haven’t found the answers you are looking for in ChatGPT, you can get in touch here.
