AI literacy course prepares ASU students to set cultural norms for new technology

For many people, the first encounter with artificial intelligence came on social media, when AI-generated self-portraits started popping up across multiple platforms.

At first it seemed like harmless fun, but then artists started weighing in: artists whose work had been “scraped” by AI and added to its toolbox, so that anybody could make work in a particular artist’s style without ever involving the artist.

Thousands of artists and organizations have signed a statement opposing AI companies’ training their tools on copyrighted and unlicensed works. Artists, authors, musicians, actors and other creators signed the statement, including Amoako Boafo, Margaret Drabble, Kate Bush, Thom Yorke, Kate McKinnon and Rosario Dawson.

In 2023, a group of visual artists filed one of the first major lawsuits against a generative AI company, bringing a case against Stability AI, maker of the text-to-image model Stable Diffusion, and arguing that AI image generators infringe on artists’ rights by training on their work and then producing derivative works. Corporate copyright owners, including Getty Images and Universal Music Group, have also filed lawsuits against AI companies.

Shawn Lawson, animation program director and professor in the School of Art, told the State Press that while he sees potential for AI to be harnessed in beneficial ways, there are also concerns with the information that AI tools pull from when learning. 

“That text information is biased, based on the person who is putting in that information in terms of what they’re tagging that data as,” he said.

Herberger Institute Professor Wanda Dalla Costa also shares concerns over obstacles to inclusiveness when the data used for training AI is from a limited population.

“Whose sentiments are you aligning to?” Dalla Costa asked during a panel at ASU’s Smart Region Summit in March.

She said that although engineers may make use of alignment — a process of encoding human values into large language models — there is invisible data, such as oral traditions, that is not counted. 

“This perpetuates … layers of systems that work against many communities of color and voices that have not been recorded,” Dalla Costa said.  

Understanding both the limitations and the possibilities of AI is something Lance Gharavi hopes ASU can teach students. As the use of artificial intelligence spreads rapidly to every discipline at the university, it’s essential for students to understand how to ethically wield this powerful technology.

Gharavi, a professor in the School of Music, Dance and Theatre in the Herberger Institute, has been teaching a new course this semester called “AI Literacy in Design and the Arts,” which covers the benefits, challenges and ethics surrounding AI.

Gharavi has been working with the provost’s office since 2022 on how to teach AI literacy, and he collaborated with curriculum designers and content creators to build the course, refining it as the semester proceeds.

The course, which will be offered again in the spring, is designed to serve as a template for AI literacy courses in other disciplines.

 

What IS AI? And how can artists, designers and others in creative fields use it rather than being used by it?

“In all this material we’ve created, the lessons are discipline-agnostic,” he said. “The specific discipline content will come from whoever is teaching the course.”

The course is divided into modules; the module on AI ethics, for example, can be used with any subject matter.

The goal is to have the content ready for any college at ASU to be able to teach its own version of the course by next year, he said.

Gharavi said that learning AI literacy is critical for students.

“I think there are three possible reactions that ASU could have to AI as an institution. We could try to play defense and say, ‘Let’s keep this out of higher education.’ Or we could pretend it’s not happening and sit around and wait for some new consensus to evolve in higher education. That’s about the most un-ASU thing I could imagine,” he said.

“The other option is to embrace it and say, ‘We need to find ways of using this technology that is in line with our values as an institution.’

“And we have very specific values literally carved in stone on our campus. So how can we harness these technologies in a way that advances those values?”

Gharavi answered some questions from ASU News about the course.

Question: Who is the class geared toward?

Answer: The interesting thing about this is that it doesn’t really matter if you’re a sophomore in college or a PhD student or even if you’re faculty. Everybody is at the same level and everybody’s going to be able to engage with the material through their own level of skills, knowledge and development in their field.

Q: What do you mean by “AI literacy”?

A: AI is, if nothing else, a very important new technology, very powerful and potentially transformative of a lot of different fields and a lot of different sectors.

I worked with the provost’s office and got input from probably like 50 faculty from across the university in developing a definition of AI literacy: “AI literacy is the evolving set of knowledge and skills necessary to understand, critically evaluate and use AI responsibly and effectively. AI literacy enables informed, ethical decision-making to equitably advance the well-being of the communities ASU serves.”

Q: What does the course teach?

A: One is a fundamental understanding of what generative AI is and the technology. This is not at the level of a PhD in computer science. This is basic layperson’s non-expert literacy. If you don’t understand how these technologies work, you’re not going to have a reliable way to think about, for instance, the legal issues around AI.

Then we introduce various AI tools, not just large language models but also generators for images, music, sound and video. We teach students how to get the output they want, including prompt engineering, and how to evaluate the output.

And we go over a variety of use cases. What are all the different kinds of things you can do with this technology?

We have a rather large module for students to learn to think critically about the technology and not just blindly accept whatever Silicon Valley is shoving into their faces. This covers ethical, legal, safety, security and sustainability issues around generative AI.

Then there’s a unit on how to use this technology responsibly as end users. We’ll talk a little bit about the future.

Q: So the course is neither a warning nor a promotion of AI?

A: Thinking critically is not, “We’re a cheerleader for it” or “We’re a doomsayer.”

You may arrive at one of those, but wherever you arrive is through a process of thinking critically and engaging. I think of myself as someone who is engaging with the technology in a spirit of curiosity to see, “What should we do with this and what shouldn’t we do with this?”

Students are not just consumers of these different perspectives — they’re going to be the people who will shape how we pursue these technologies in the future. They’re hopefully going to be leaders.

Q: Why is the timing of this course so important?

A: What’s happening now is that we’re creating new social and cultural norms around this technology. And we as an institution, but also as a collection of individuals, need to take agency in shaping those new norms and values.

Norms, once established, are very difficult to unestablish. If we don’t seize agency and responsibility in helping to shape those norms, and just allow big corporations to do that for us, we will be left with whatever they provide for us.

Q: How have you had to adjust for the rapidly changing technology?

A: Everything is moving so quickly that the course is going to have to be revised significantly, maybe not just every year but every semester. Because regularly I’ll go into class with a lesson plan that I developed a week ago with the team, and I’ll throw out half of it to bring up something that just emerged in the last week.

Several new frontier models, or updates to frontier models, have come out in just the past three months, along with countless new products, features and tools.

Q: Some people are familiar with how AI can create text content. What can AI do in design and the arts?

A: In music, for example, there are platforms where you can enter text describing the kind of song you want, and it’ll produce really remarkable output. You can record a piece of music, or take an existing recording, and run it through AI software that will separate all of the instruments into different tracks perfectly, with no noise and no overlap.

For fashion, there’s a tool where you take a photo of yourself, describe the outfit you want, and it will put that outfit on the photo of you. What a remarkable tool for a fashion designer to brainstorm or create prototypes or proofs of concept.

Q: How do the arts have an advantage in embracing AI?

A: There’s a lot of fear among many artists about job replacement. There’s discussion about whether a product of AI is even art, whether AI can produce art at all.

But I think that when it comes to accepting AI, the arts have a particular advantage, because the issues and challenges AI presents are ones we’ve already dealt with through the late 19th century, throughout the 20th and into the early 21st.

The idea that art has to be beautiful and the idea that art has to be a product of the skill of the artist — we dismissed that long ago. If you give a professional artist an interesting tool, they will find things to do with it and ways of working with it that you never expected.

I think there’s great potential for AI not to be a replacement but to be a kind of prosthesis or creativity partner or collaborator. And I think it’s exciting; it’s also terrifying.

Q: Do you use AI?

A: I do. I use large language models for text development and for brainstorming. I recently used AI to develop a logo for a project that I’m working on. And I’m working on a project now that will involve AI in almost every aspect, from writing to design to ideation and even performance.

Further reading: AI in Design and the Arts

A group of ASU researchers, including some from the Herberger Institute, received a $2.8 million grant from the National Science Foundation for a project to expand the use of AI in everyday devices such as phones, smartwatches and health sensors.

“This grant leverages many interdisciplinary strengths at ASU, including media arts and sciences, computing sciences and health sciences,” said Pavan Turaga, director and professor with the School of Arts, Media and Engineering. “We are very excited at the possibilities from such a radical fusion and expansion of AI capacity at ASU.”

The grant is part of the NSF’s ExpandAI initiative, an effort to bring Hispanic-Serving Institutions (such as ASU), minority-serving institutions and historically Black colleges and universities more opportunities to do high-level research in the field, and it demonstrates the growth of AI across all fields and sectors of society, including design and the arts. 

In fall 2023, The Design School hosted an AI workshop on the emerging field of AI-generated architecture. The School of Arts, Media and Engineering launched an artificial intelligence and digital media certificate this year. Jorge Costa, an assistant professor in the School of Music, Dance and Theatre who teaches classes in sound engineering and music, has been exploring an AI beatmaker tool. The Sidney Poitier New American Film School recently kicked off its new speaker series with a discussion on the impact of AI on the entertainment industry, led by industry futurist and entertainment executive Ted Schilowitz. Pooyan Fazli, assistant professor in the School of Arts, Media and Engineering, is leading a project to develop artificial intelligence–driven tools that generate descriptions for online videos, making them accessible to blind and low-vision individuals.