Out of the (Black) Box

Utah and its flagship university are diving headfirst into artificial intelligence, come what may.

June 25, 2025 | News » Cover Story

Cover by Freepik

Patrick Abbott loves cinema and would very much like to be a filmmaker. As a University of Utah student, he initially studied screenwriting before turning to editing, with the pioneering editor and sound designer Walter Murch (The Godfather, Return to Oz) as his inspiration.

"There's a lot of creativity left to the editor to make things their own," said Abbott, who asked to be identified by a pseudonym. "It highlights the ability to so drastically change something."

Looking for upper-division credits to complete his degree, Abbott was intrigued by an experimental course listed in the U's catalog for spring 2024: "AI Filmmaking."

"I took it initially thinking it would broaden my horizons in terms of the subject," he recalled. "I figured that since it was from the university that it would have more research and thought put into it. I was left unimpressed."

The stated goal of the class—according to its syllabus—was to familiarize students with generative AI tools in a workshop atmosphere as they produced "synthetic media" projects, and to "give them a competitive advantage as contemporary artists in the 21st century." Other listed objectives included understanding the role AI "will" play in future filmmaking, identifying "disruptive innovations" allegedly wrought by the technology, and reflecting on the changing role of the artist "in a world where AI can generate new art."

AI generators—such as Kuaishou's Kling (from China) and OpenAI's ChatGPT (from San Francisco)—are introduced to students in the first few weeks, leaving the rest of the semester for them to develop their synthetic media. While an AI photo program is available with the Adobe package provided through their tuition, students are advised by their instructor to use as many other platforms as possible, necessitating paid subscriptions. And multiple AI prompts are typically required to produce a desired shot, adding to the out-of-pocket costs.

But expenses aside, Abbott said he was bothered by two central features of the course: the premise that AI was "creating" art at all, and the fact that AI's many costs and drawbacks went unmentioned.

“I’ve always tried to plug in the next new thing.” —Kenneth Collins (courtesy photo)

"I think it's harmful to the students to introduce AI and treat it as if it's on equal ground with traditional filmmaking," Abbott remarked. "To act like prompting something is equal to art is irresponsible. I can't think of a worse thing to tell young creatives than that anything they get right is because of the machine and anything that comes out poorly is due to user error."

Kenneth Collins, the course instructor, takes such criticisms in stride. An affable man in his second year of teaching at the U, Collins is a professional artist with a background in live performance.

"I've always integrated technology in my work," Collins explained. "Over the years, I've always tried to plug in the next new thing."

Jumping the Gun
It wasn't until the social distancing of COVID that Collins developed an interest in incorporating AI into his work. His subsequent graduate studies at the University of Iowa advanced that interest even further.

Today, with the technology improving from week to week—or so he insists—Collins believes that AI will be used either to produce entire films or to augment traditional ones.

"I think it's becoming more user friendly and accessible as a technology," he said. "It's going to become increasingly incorporated into people's workflows."

He acknowledged how little even he grasps about AI. "Most of us don't understand how any of these programs really work," Collins said. "There's a little bit of a black box phenomenon happening."

Collins noted that students who take his course hold differing perspectives on AI: some eventually warm to it, while others sour on it. But he emphasized that demand for the course has been "consistently strong." Consequently, he has organized campus events like the Deepfake Festival and Live/Wired to showcase his students' synthetic work.

Art, to Collins, is more an "intellectual exercise" than a matter of ability. As for public concerns about AI—its devouring of copyrighted material to train models, its enormous use of power—such issues are not his "sandbox."

"I don't know that it's entirely clear to me what it means to make something original aside from AI," he says. "These are issues that are worthy to be addressed and there are numerous people in the world doing their best to solve them."

He also asserted that the power demands of AI data centers will decline over time. "I think its environmental impact will hopefully be mitigated by its ability to figure out solutions to its environmental impact," Collins contended.

“Our ability to use [AI] as a tool is going to evolve.” —Manish Parashar (courtesy photo)

Such was the optimistic tone of Manish Parashar and James Agutter, representatives of the university's One-U Responsible AI Initiative (RAI) and the Center for Teaching Excellence, respectively, with whom Abbott met to discuss his experience in Collins' class.

"AI has gone from zero to 100 very, very quickly," Parashar told Abbott during a video call (which City Weekly also attended), admitting that AI has ethical, environmental and security issues as well as bias.

Parashar said that he and his colleagues at RAI seek to build the "right guardrails" for AI's responsible use by students, faculty and administrators.

"We're going to come up with new concepts—formulations that are more energy-efficient, that use less data," Parashar said. "I think the technology itself is going to evolve and our ability to use it as a tool is going to evolve."

Launched last year, RAI proposes to build a cyberinfrastructure of "computational resources, data, testbeds, algorithms, software, services, networks and user training and expertise" across university fields, per a 2023 press release.

As for raising awareness of AI's risks, materials and workshops are available to instructors—but whether they actually use these resources is another matter.

"I think we might have jumped the gun a bit. All of those AI programs have immense costs and are not vetted," Abbott said. "Over the whole semester, I didn't even know that the Responsible AI Initiative existed at the U. What is there really to teach if not preparing students for the ramifications and risks of AI?"

Agutter responded that individual course reviews were available to students, but conceded that RAI's literacy outreach to faculty could use some work.

"Can we do a better job? Yes," he said during the video call.

But like Collins, RAI frames its mission as serving students, ensuring they are, in the words of Parashar, "best prepared for a future where AI is going to be part of it." And its members are among the many well-positioned to put the machinery of such a future in place, whether one wants it or not.

RAI's external advisory board includes strategists, academics and industrialists as well as Margaret Busse of the Utah Department of Commerce.

Busse leads the state's Office of Artificial Intelligence Policy, whose primary purpose, according to its website, is to ease regulations on AI-using companies (relieving them of what the department deems "unnecessary restrictions") by offering exemptions, capping penalties for violations and tailoring mitigation agreements.

“There’s little space for resistance, for critical thinking, about this technology.” —Audrey Watters (courtesy photo)

Machine Learning
Audrey Watters is a New York-based writer who has studied education technology extensively and is a consistent critic of AI's proliferation into classroom settings.

"Much of what gets sold to schools as 'AI Literacy' is really 'tool training,'" Watters told City Weekly via email. "There's little space for resistance, for critical thinking about this technology."

Watters noted that AI research has been around for roughly 70 years, and in that time, the field has gone through high-profile "winters," when seemingly promising breakthroughs ran into dead ends.

She said generative AI and large language models may be headed for their own winter after an explosion in public use. "I'm not sure one can really make the argument that AI is getting better every day—not with a straight face," Watters said.

But even if these models did continue to improve (rather than merely receive upgrades), Watters asserted that people should still question what "improvement" actually means and whether it is desirable. AI, after all, is wrong a not-insignificant amount of the time and has been trained on a "massive corpus" of stolen work, one that also encompasses social media posts, web pages, court proceedings and so on, in order to generate statistically likely images and sentences.

It is not an innovation, "disruptive" or otherwise, Watters stressed, but a management tool that fuels "growing economic inequality."

Other studies lend further depth to the problem. Adam Zewe, reporting for MIT News in January, found that generative AI models "demand a staggering amount of electricity" to train, "which leads to increased carbon dioxide emissions and pressures on the electric grid."

That doesn't even account for the power expended by individual users of these platforms once the short-lived models are trained, the millions of gallons of water continually required to cool the hardware, and the toxic chemicals used to fabricate processors that can handle generative AI workloads.

Data centers across the globe, by themselves, Zewe continued, currently rank as "the 11th largest electricity consumer in the world," between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours). AI power consumption is expected to raise data centers to a combined fifth-place ranking worldwide by 2026.

With every synthetic film generation, internet meme, email summary and forced AI addition to one's device, more energy is expended, at great cost to ourselves as human beings, as this and other lines of research suggest.

In another January study, professor Michael Gerlich found a "significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading." And with AI's proliferation into education, as one teacher told reporter James Walsh for a May 2025 Intelligencer story, the learning process has been compromised: it no longer prioritizes effort or creativity but rather proper input commands, producing students who are "essentially illiterate."

"The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT," Walsh added. "The combination of high costs and a winner-take-all economy had already made it feel transactional, a means to an end. ... In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core."

Indeed, as Watters asked in a 2014 publication, "when we see signs of thinking or teaching in machines, what does that really signal? Is it that our machines are becoming more 'intelligent,' more human? Or is it that humans are becoming more mechanical?"

Reworking the Narrative
AI is not "intelligent" at all, but rather something designed to simulate knowledge according to mathematical algorithms, observed Theodore Roszak in The Cult of Information. "The mind thinks, not with data, but with ideas whose creation and elaboration cannot be reduced to a set of predictable rules," he wrote.

The mysterious moment of inspiration, the dream or flash of intuition cannot be replicated artificially, he added, and any institution or industry that foists mechanical substitutes upon the young is stifling them before they have a chance to truly develop.

"What do we gain from any point of view by convincing children that their minds are inferior to a machine that dumbly mimics a mere fraction of their native talents?" Roszak asked.

For students like Abbott, the answer to that question, whether from a sociological, ethical, environmental or creative standpoint, is resoundingly less than zero.

"People are entitled to their opinions, and it's ok if they're excited about AI," Abbott remarked, "But as someone who doesn't like AI, I wish it wasn't being adopted by the U and the state. I think people are getting ahead of themselves."

He stressed the importance of getting laws and regulations in place to address the problems of AI before it is fully adopted.

"We just don't know enough about it as a public—where the money's going, how our information is being used, and how much AI already 'knows' about its users," Abbott said.

Despite the confidence of its many boosters and technicians, AI remains a dubious and corrosive force. So why is it being pushed so strenuously?

"As with climate change," James Bradley wrote for The Guardian in 2024, "we have been tricked into thinking there are no alternatives, and that the economic systems we inhabit are natural, and arguing with them makes about as much sense as arguing with the wind. In fact the opposite is true. Companies like Meta and Alphabet and, more recently, OpenAI, have only achieved their extraordinary wealth and power because of very specific regulatory and economic conditions. These arrangements can be altered."

We can expect this familiar tale to play out until enough people decide to drastically change course. Such a course correction, critics and scholars suggest, requires many long hours of revision and, in many cases, a complete restructuring of the systems in which we operate and live.

It's not unlike the artful adjustments that any skilled editor can lend to a particularly troubled film production. The real question, at this point, is whether the tech industry and its powerful allies will allow such a reworked narrative to play.


About the Author
Wes Long's writing first appeared in City Weekly in 2021. In 2023, he was named Listings Desk manager and then Contributing Editor in 2024. Long majored in history at the University of Utah and enjoys a good book or film, an excursion into nature or the nearest historic district, or simply basking in the...
