Over the course of the 23 years I worked at the Edmonton Journal I wrote about all kinds of important things, from murders to forest fires to constitutional crises. But the two “investigations” that got me the most attention were among the least “important” stories of my career.
In 2004 I broke the story that a paper Ralph Klein had tabled in the legislature, an essay on the Chilean Revolution that he’d written for a class at Athabasca University, was a collection of quotes lifted from other sources without citations. (Our lawyers advised me not to use the term plagiarism. Other news outlets were less punctilious.) How did I score this scoop? I’d simply typed various phrases from Klein’s paper into Google—which was still something of a novelty—and found the sources he’d borrowed from so liberally.
Five years later I broke a story about the Stelmach government’s multimillion-dollar rebranding campaign, which featured a photo of two blond children, in cable-knit sweaters, frolicking on a beach. Except the kids weren’t Albertan. Nor was the beach. It was a stock photo of English children in Northumberland, playing near a castle that, according to legend, had been home to a knight of the Round Table. The Journal story, headlined “Sir Lancelot wants his beach back,” included a ludicrous claim from the government that there had been no intent to deceive. “The children are a symbol of the future,” I was told. “They symbolize that Albertans are a worldly people.”
From the vantage of 2026 it may be hard to understand why these two stories were front-page scandals. With ChatGPT and other large language models running amok in high school and university classrooms, Klein’s cut-and-paste sloppiness seems almost quaint. At least the premier went to the effort of finding sources and copying them himself. Today artificial intelligence has automated plagiarism and made a mockery of our long-held taboos about passing off other people’s work as our own. Intellectual property? Original thought? What do they matter, when with a few keystrokes a program can stitch together a plausible paper—or column? Never mind that the AI is stealing or, worse, making stuff up, adding citations and quotations it has simply manufactured.
And oh, how I long for a time when stock photos were the worst of our problems. In 2026 AI doesn’t just plunder bits and pieces from original work to concoct believable fakes, stealing the livelihoods of artists and photographers and graphic designers. AI has made it simple to strip everyday photos of women and girls, turning them into instant fake porn.
Indeed, AI can manipulate photos and videos so that we no longer know what’s true or false, or when to believe the evidence of our own eyes. When Nekima Levy Armstrong, a Black civil rights activist from Minneapolis, was arrested this past January for protesting, the White House shared an AI-altered photo of her arrest. The doctored version darkened her skin, removed her lipstick, made her look heavier, and turned her proud, stoic expression into a flood of messy tears. Amidst all the horrors in Minnesota that month, the incident didn’t get much attention. But the racism, the misogyny and the petty malice chilled me in a different way than the murders of Renee Good and Alex Pretti—the White House was willing to trample the truth with an ease that would have made Goebbels pea-green with envy and Orwell sob into his beer.
I’m no Luddite. I’ve toured the cutting-edge AI labs at the University of Alberta and the Alberta Machine Intelligence Institute. Alberta researchers are world leaders, doing exciting work in reinforcement learning and developing AI to do everything from improving the diagnosis and treatment of cancerous tumours to running power plants more efficiently. Nor am I worried about AI becoming self-aware and leading a robot rebellion. What worries me isn’t smart computers—it’s stupid humans, using the crudest energy-sucking generative AI tools to make themselves dumber, debase creativity and undermine the very concept of truth. (And I haven’t even touched on the horror stories of people who fall in love with their chatbots, or turn to them for dangerously bad advice, without seeming to understand they’re caught in a narcissistic feedback loop, speaking to their own reflection.)
When I see governments and businesses pushing people to use AI to answer every email, goose every online search or even draft official documents, I can only hope this fever dream will pass, that it will become fashionable again to write and create and think for ourselves. Until then, I’ll be boycotting anything that blurs the line between reality and faked fantasy. Maybe I am a latter-day Luddite. After all, they weren’t afraid of machines. They were afraid of what the Industrial Revolution would do to their communities and societies. Before our own post-industrial AI revolution overwhelms us, let’s decide which human values we’re willing to sacrifice—and which we’re not.
Paula Simons is an Alberta senator and a member of the Standing Senate Committee on Transport and Communications.