Content by no one for no one: Surviving the Gen AI bubble

“The creative spirit of humanity is an incredibly important thing. And we want to build tools that lift that up, that make it so new people can create better art—better content, write better novels that we all enjoy. I do believe that humans will be at the center of that.

“I also believe that we need to figure out some sort of new model on the economics of creative output. I think there are incredible new business models that we and others are excited to explore…”

Thus spoke Sam Altman, CEO of OpenAI, at TED earlier this year. The interview has been watched 1.7 million times at this writing.

OpenAI, you will recall, is a company founded by Sam Altman, Elon Musk and other online technology experts in 2015 “with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity”.

The OpenAI founders are all pretty smart people. Neither Sam Altman nor OpenAI President Greg Brockman graduated from university, which is certainly evidence of just how smart they are.

Smart people graduate from Stanford. Really, really smart people drop out. Apparently. Brockman actually dropped out of two elite universities, so that may make him extra extra smart.

In 2023, Altman and Brockman also both dropped out of their own company when they were offered a sweet deal to leave OpenAI for a new AI team at Microsoft. The two were quickly lured back by shareholders to their corporate alma mater, where they remain, for now.

It would be churlish not to admit that Gen AI has positive qualities. But fidelity and integrity are not among them.

It gives the appearance of performing great feats mostly because we can’t see what goes on behind the scenes. We can’t see what’s happening in the kitchen.

So here is a quick recap of how your favorite Gen AI tool actually works (spoiler: there is no kitchen): …

How Gen AI actually works

Gen AI works like a wacky restaurant from a Dr. Seuss story.

Imagine you’re sitting at your eat-a-ma-thing. You type out what you would like to eat. You describe it in detail.

What then happens is your fancy robot waiter zips downstairs into a deep vault containing every imaginable thing that could be eaten.

The owners of the restaurant have travelled the world and stolen every version of every sandwich and pie and salad and crab fried rice and stored them in a temperature-controlled facility as big as Ireland.

Speeding through the vault, your robot waiter picks a meal off the shelf which looks closest to the meal you described. It zips back up to the restaurant and presents that meal to you.

You are delighted. You say “My compliments to the chef. He’s a genius. Just like Sam Altman.”

Of course the chef is not a genius. There is no chef. There is only a vault filled with every meal that anyone has ever made.

You decide you would like your meal with pepper on it. But the robot waiter doesn’t add pepper. What it does is zip back down to the vault and find a version of your meal with pepper on it, which has also been waiting in the vault, all this time, ready to go.

If you would like even more pepper, the robot waiter does not add even more pepper. It goes back and finds yet another version of the meal, with even more pepper, that has also been stored in the vault all this time.

If you decide you want to add ice cream, the waiter doesn’t go and retrieve ice cream. It goes and retrieves the version of your meal (with the extra extra pepper) with ice cream this time.

The reason your waiter can bring back your meal so quickly is it uses a jetpack powered by truffula trees.

Alright, so Gen AI doesn’t actually work like that.

And, yes, Gen AI actually does work like that.
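Taken literally, the vault-and-waiter story is just nearest-match retrieval. Here is a toy sketch of the analogy itself, with a made-up vault of meals; it illustrates the article’s metaphor only, not how transformer models actually generate text (which happens token by token rather than by fetching whole stored responses):

```python
# Toy model of the robot waiter: pick the stored "meal" whose
# description shares the most words with the customer's order.
# (Made-up vault contents; a sketch of the metaphor, nothing more.)

VAULT = [
    "cheese sandwich",
    "cheese sandwich with pepper",
    "cheese sandwich with extra pepper",
    "crab fried rice",
]

def overlap(order, meal):
    """Crude similarity score: number of shared words."""
    return len(set(order.split()) & set(meal.split()))

def robot_waiter(order):
    # Zip into the vault and grab the closest-looking meal.
    return max(VAULT, key=lambda meal: overlap(order, meal))
```

Ask for more pepper and the waiter does not add pepper; `robot_waiter("a cheese sandwich with extra pepper on it")` simply fetches the pre-stored `"cheese sandwich with extra pepper"`.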

The main point to ingest here is that, in order for a chatbot to appear to cater casually to your every whim, it has had to steal and store an awful lot of versions of an awful lot of meals, and use an awful lot of natural resources, to give you the impression that it’s serving you on the fly, and to ensure you give the restaurant a five-star review.

It doesn’t always get it right. When you ask for KFC, you can always tell there are only 8 or 9 secret herbs and spices in there.

Stolen property

“In my first encounter with generative AI, my feelings were that this is magical. How on Earth does it work?” Gen AI expert Graham Lovelace told the ¡AU! Content & IP Defense Summit. Lovelace has become a top analyst of the effects of Gen AI on the media and entertainment sector.

“Then it very quickly turned to ‘This is going to take everyone’s job’. The more I looked into it, the more angry I became. It quickly became apparent— which we all know now—that the large language models behind the well known chatbots have been trained on stolen material.”

The Content & IP Defense Summit looked at the traditional culprits in content theft—pirates and hackers—but there is a new genus of content thieves operating globally.

They are the biggest content pirates in history.

And governments seem to be falling over themselves to collaborate with them.

“They have been trained on the world’s intellectual property,” Lovelace said. “They have scooped up that IP without consent, without any offer of compensation, and now in an act of organized crime, are creating the most capitalized companies on Earth ever. And yet the content creator industry is now going through some very, very hard times.”

THE BIG THREE GEN AI TOXINS

Lovelace explained that the threats posed by Gen AI to content businesses fall roughly into three main categories:

I. EROSION OF TRUST

The ability for anyone, anywhere to instantaneously produce highly convincing text, image, audio and video content interferes with our ability to tell what is true.

Gen AI content can be very convincing—you could say its primary purpose is to be convincing. Even when we intellectually know what we’re seeing isn’t true, our brain may still register it as a real event, something we have an opinion and feelings about. Remember the clip of Volodymyr Zelensky knocking out Donald Trump with a right hook—and the subsequent surge of well-being one felt?

We react to what we see viscerally, before the rational mind can step in to explain that what we’ve seen isn’t real.

Knowing this, and knowing that realistic AI imagery is running loose in the media ecosystem, we may start to experience doubt about what we see. Even blatantly obvious attempts to fool people can do a lot of damage before they are debunked. The week of this writing, a deep fake video of Alexandria Ocasio-Cortez, in which she appeared to criticize a controversial American Eagle jeans ad to absurd extremes, was amplified by television host Chris Cuomo. On social media, Cuomo used the video, which he thought was real, as an opportunity to attack AOC.

Cuomo was ridiculed online, by AOC among others, for falling for the deep fake. He delivered an apology on his show, which included: “I was wrong…but what is right?”

With deep fakes and AI-generated content now part of regular discourse, it becomes equally easy to dismiss genuine footage as fake AI content. The technique of labelling your opponent’s assertions as a hoax or “fake news” is already in regular use. When the pervasive attitude becomes “who knows what’s real anyway”, social contracts, democracies, and even markets start to weaken.

Untrustworthiness is built into the DNA of Gen AI itself. The purpose of Gen AI is not to deliver something true, but to deliver something you feel to be true. A chatbot’s entire goal is to make you satisfied with its response.

“The technology can’t help but give you an answer to a question,” Lovelace said. “No chatbot will ever say ‘I don’t know the answer to that.’ They will always have a go, based on the corpus of data they’ve been trained on, and try to infer an answer.”
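Lovelace’s point, that a chatbot will always “have a go”, follows from how these systems emit output: at each step the model turns raw scores into a probability distribution over tokens, and decoding always selects something. A minimal sketch, using hypothetical logits rather than any real model:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical, near-uniform logits: the model is maximally unsure,
# yet softmax still yields a valid distribution, so a token is
# always emitted. There is no built-in "I don't know" outcome.
logits = [0.01, 0.02, 0.00, 0.01]
probs = softmax(logits)
next_token = probs.index(max(probs))  # greedy decoding: always picks one
```

Even with the scores this close, `next_token` comes out as a definite choice; uncertainty never halts the answer.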

II. DAMAGE TO INTEGRITY

When companies use AI to create content they might be playing with fire as far as brand integrity and public image are concerned.

The point that needs to be stressed repeatedly is that Gen AI is designed to satisfy. And it’s easy for us to confuse feeling satisfied with getting the truth. Fun fact: The truth often leaves us feeling unsatisfied and uncomfortable. If you’re feeling satisfied all the time, you may not be getting enough truth in your diet.

Every week there is a new example of AI making someone look ridiculous.

One recent AI self-own was perpetrated by Vogue, a brand that has spent decades honing an image of quality, exclusivity, and elite glamour. But for an internet moment this summer, it was a laughing stock—along with its advertiser Guess—for running an ad featuring an AI-generated model.

Unrealistic body proportions have been a staple of fashion since ancient times. The fact that the photorealistic model in the ad is about 10 heads tall, though weird, is in keeping with tradition. The laughable part is the rest of its anatomy, which includes an unnerving mermaid-like waist, a suggestion of more than two legs beneath the dress, arms of different lengths, and a gigantic hand that looks like it should be covering John Hurt’s face. At a glance, most people are convinced by the ad. Their eyes take in the image, and their brain says “Yep, an attractive woman.”

But designers, brands, and others who know quality—Vogue advertisers, for example—will care.

Vogue’s brand is about adorning real human bodies, while AI is a tool that makes it easier to avoid human bodies—with all their whining and their need for money, food, and time.

The AI fashion company behind the ad, Seraphinne Vallora, is run by two London architects. Its website says “We want to harness the incredible power of AI to revolutionize marketing images. We realized that AI offered a hassle-free path to design brilliance.”

The company has created AI campaigns for Elle, Grazia, WSJ and the Financial Times.

Of course, we only have the word of Seraphinne Vallora that this is an entirely original AI creation. The training data is likely to remain forever obscure. Could it be, in part, based on the likenesses of real people? When you use Gen AI, you are exposing yourself to potential lawsuits later.

Lovelace explains: “We will very soon have some massive scandals where we see the interior of people’s homes or hear things that have been said in private moments, which will appear for everyone to see because of what these models have been trained on. Identifiable images of our children will start to be available.”

III. FINANCIAL INSTABILITY

As a “disruptor” technology, AI is luring media companies into jettisoning current business models, perhaps without thinking through the consequences.

In a very short time, Google has ceased to be useful as a search engine, its AI pushed forward to answer questions directly—based on other people’s content—to keep you from clicking away from the main Google search page.

“News publishers in particular are dependent on search for between 30% and half of their traffic,” says Lovelace. “Now AI scrapes those publishers’ content to create AI-generated snippets. They would argue that they provide the links and citations, but it’s clearly the aim of the developers to keep you in the Google AI overview environment.”

ChatGPT Search, Perplexity and other tech companies are exploring the same model. Why be a gateway to other information sources, when you can scrape those sources and become the sole destination yourself?

AI is often touted as a way for smaller companies to do more with less, a tool to empower and rejuvenate a variety of sectors. This doesn’t appear to be the case with news and media production. In fact, it’s still unknown how much AI can add to the bottom line.

Right now, companies are laying off staff—or dismantling government departments—based on the projection that AI will be able to take over those jobs much more efficiently. But these decisions are based on virtually zero real-world data. IBM boldly laid off thousands of employees, assuming AI would somehow take over their jobs, and ended up hiring almost as many back.

Big changes in tech generally fuel changes in society. A historian of Europe will tell you that the printing press helped launch religious wars which killed millions of people. One difference between AI and the printing press though is the printing press reproduced information faithfully.

Gen AI can create an infinite amount of content, but for whom and for what reason is an unanswered question.

Are billions of dollars that could be funding local news, journalism schools, arts education for kids, performance venues, and community media ultimately being spent on infrastructure to create content by no one, for no one, for no useful purpose?

This article originally appeared in the Autumn 2025 issue of ¡AU! Journal.

Watch Graham Lovelace discuss the pitfalls of Gen AI at the Content & IP Defense Summit