From left to right: Dr. Kamal Al-Solaylee, Dr. Alfred Hermida, Dr. Saranaz Barforoush, Nicola Jones.
Reported by Helen Wu
“Don’t let the chatbot be your first thought or your last thought,” said science journalist Nicola Jones, offering advice to the audience. “Start yourself, let your brain do some thinking, because that’s how you learn, and then consult a chatbot to expand your thoughts and find new perspectives.”
Jones graduated from UBC in 2000 as one of the first students in the Master of Journalism program. In celebration of the 25th anniversary of UBC’s School of Journalism, Writing, and Media (JWAM), she returned to the UBC campus at Robson Square for a panel discussion with Dr. Alfred Hermida and Dr. Saranaz Barforoush, both professors in the journalism program, on how artificial intelligence (AI) is shaping media, education, and science.
Dr. Kamal Al-Solaylee, director of the school and moderator of the event, started the presentation by looking back at the program’s history. The turn of the century marked the early years of digitization; 25 years later, we stand at the dawn of AI.
When encountering new technology, it’s natural to feel both curiosity and fear. Barforoush recalled showing her students a 1994 clip from the Today Show in which the hosts discussed the Internet as a phenomenon. The “@” symbol, so commonplace today that we barely notice it, was a peculiar sign back then. The younger generation might laugh at this anecdote, and yet, as Barforoush reminded the audience, what AI is to us now is what the Internet was to people three decades ago.
“What begins as alien can become second nature,” she said, “and it can transform and threaten journalism in ways we cannot predict.”
A veteran journalist and teacher, Hermida believes human decision-making is more critical than ever before. Instead of replacing workers with AI to make journalism “cheaper,” publications and media outlets should try to make it “better.” As Hermida pointed out, human labor remains irreplaceable in journalism, where physical presence, in-person communication, ethical decisions, and unique perspectives all play important roles.
Whether it’s the internet or AI, technology can be a savior or a killer; the choice lies in how we use it and what we use it for. “We can use it to create fake videos with crocodiles, or we can use it to tell interesting 10-second stories,” Hermida said.
A similar duality exists in science. Jones acknowledged the benefits of embracing AI in scientific research for collecting and analyzing large datasets, but she was also concerned by the inaccuracies it produces in many areas. Among them, illogically generated content presents a peculiar challenge.
“I once asked the generative AI to draw a ball bouncing around in a theater,” Jones said. “The ball bounces, and it goes down, and then it bounces back up; it goes around, and then it hits the ceiling. It doesn’t know what gravity is.”
Although AI seems knowledgeable and powerful in many ways, it remains fundamentally “stupid” at this stage. Its tendency to produce so-called “AI hallucinations” is a major issue rooted in its lack of a fundamental understanding of the world, with serious implications for real-world applications such as law and healthcare. And yet, these systems are designed to give an answer even when they don’t know one.
“The thing with these systems is that they try to please, which is why they’re hallucinating,” Hermida said. “It’s like those students in your class: they desperately want to give you an answer. They don’t know that it’s okay to not have an answer and that they can fail.”

However, the panelists are not trying to keep people away from AI, especially the educators responsible for training the next generation of journalists in their classrooms. Hermida, who leads a journalism course on AI in the second term, encouraged people to play and explore, as AI has already become an integral part of people’s daily lives. It just has to be done with caution.
“A lot of discussions about generative AI are driven by fear,” Hermida said. “Fear is not your friend; caution is.”
In the Q&A session, a local startup founder from outside the media industry asked what happens when factual truth, manufactured content, and synthesized information coexist. It’s evident that the public is deeply concerned about the accuracy of the media content they consume daily.
Barforoush, who specializes in media ethics, emphasized that the issue of “trust” between the media and the public has long existed and has only worsened since the boom of AI and fake media content. Journalism education therefore needs to not only prepare young journalists to work with AI but also raise their awareness of credibility and media literacy.
“I think it’s been a long time since students came to higher education just for knowledge transfer,” she said. “Students come to classes for skills like critical judgment and ethical reasoning. They come for the discussions that can only happen through human interaction inside a classroom, where complicated, complex topics like ethics are discussed, inviting people with different perspectives.”
On the other hand, truth isn’t always at the top of media consumers’ priorities. Stories can provoke thought and stir emotion even when they’re not true, especially when audiences are searching for sentimental value. According to Hermida, research has found that many people in populist movements evaluate news less by its accuracy and more by how much it aligns with their worldview. Today, the powerful influence once held by myths and fables can easily be found in manipulated content that caters to confirmation bias.
To reshape the media landscape, we’ll need “credibility builders” among the younger generation: those who can help audiences tell truth from fabrication. Meanwhile, media professionals still have a long road ahead. In the age of AI, journalism programs like UBC’s will continue their pursuit of truth over the next 25 years.
