

We invited California community college educators to experiment with video-based GenAI tools and consider their potential role and impact on diverse students in open-access institutions. Those who participated (without compensation) agreed to meet six times for 90 minutes to critically examine the risks and opportunities of these tools, particularly in relation to online teaching and learning. The group was composed of online education faculty and staff who brought a mix of skeptical curiosity and suspicious reluctance about synthetic media’s role in teaching and learning.
Creating Our Space
As we began our three-month journey together, the group co-created a community agreement to clarify its collective needs: an environment that would be supportive and promote the psychological safety needed to try new things and share questions, concerns, and successes. I learn a lot from these community agreements. Patterns emerged that reaffirmed a simple and very important idea: college educators are just like students, because we are all human. One of my favorite contributions to this particular agreement was, “Embrace experimentation without any pressure to participate.” As the facilitator (and community member/learner), I took that to heart by warmly encouraging experimentation and creating spaces to share work and/or experiences without expectations or judgments. My role was to support all participants as they worked through their concerns and hesitations, or to let them engage and learn without creating content if they didn’t “get there.”
We began our first session by defining what it means to take an “identity conscious” (Costino, 2018) approach to our work. To center the notions of intersectionality and power/privilege from step one, we all engaged in a reflective activity about our social identities and examined the Wheel of Power and Privilege before moving into breakout rooms for small group discussions. Having these personal conversations set the tone for all of our discussions: exploring AI technologies through the lens of different human experiences, as opposed to positioning AI-generated video as a neutral tool to simply plop into our toolkit.
Engaging as Learners and then Creators
Early on, we all completed the (brief and free) course by Leon Furze, This Course is AI Generated, which uses AI-generated content including a “digital double” of Furze in the instructional videos. I asked our community members to step into the role of learner in the course and to be critically aware of how they felt as they watched and listened to the videos. In our next session, we went into small breakout rooms and shared our experiences about the course and then came together to debrief as a whole group. It was quite fascinating to hear the varied responses — from “I felt creeped out the whole time” to “the avatar didn’t faze me at all” to “I got used to it pretty quickly and would expect our students to do the same.” Hearing these diverse perspectives was important because people often assume their experience represents everyone’s experience. And that is never the case.
From there, some of us began creating and sharing our own digital doubles, creating them with a free HeyGen account. Each time we met, more and more people began to experiment, but they were never required to create an account if they did not feel comfortable doing so. We discussed the tool’s privacy policies and learned how to opt out of contributing to its training.
Power Norms at Play
By using the tools with a critical eye and making space to share experiences, it didn’t take long for some of the biases within the algorithms to surface. I was reminded of Maha Bali’s thoughts about the need to engage critically with AI to teach students to see the forces of dominant culture at play and, at the same time, to critique them. And, of course, educators need to develop this critical AI literacy first, in order to apply the skills to their practice.
During one of our sessions, we discussed the uncanny valley in relation to how it felt to watch a digital double of yourself deliver words you never spoke. It’s hard to translate this feeling into words. Some of us laughed, others cringed. I think everyone agreed that the experience was plain creepy. But there were also some powerful and critical takeaways. One of our members, Jane Lê Skaife, who teaches Ethnic Studies at Sierra College, began by describing her experience as, “It’s like me, but not like me” and then continued to unpack this experience further. Like most participants, Jane had her family and a friend or two watch her digital double video and closely observed their reactions. In our group discussion, Jane shared that several people responded by saying, “It sounds like you, but white.” Jane asked us, “What does that even mean?”
Intrigued by this question, Jane chose to (temporarily) upgrade to a premium account to gain access to a few more features, including the ability to apply different accents to her digital double videos. One of those accents was “American.” When she translated her authentic voice into what HeyGen describes as an “American accent,” she observed that she sounded different. Here is Jane’s written reflection about how this experience resonated with her (shared with permission):
“This ultimately sent a very powerful message to me that I did not fit the idea/ideal perception of what an American was supposed to sound like. This was quite disconcerting for me considering that I was born here, and I do consider myself very much American by ‘jus soli’; that is, American by ‘right by the soil’ or ‘American by birthplace.’ For me, this experience reinforced the dominant narrative in the U.S. that American equals white, which is problematic and emphasizes the need to put more diverse people in the development and design of synthetic media tools and AI tools in general.”
Jane’s observation, as well as her courage in surfacing this important conversation in our predominantly white space, engaged other members in a reflective conversation about how it felt to have their accents altered by the tools. One person noted, “It was like part of myself was being erased.” Those of us who, like me, did not notice a change in our accents recognized our privilege.
Practical Application of Synthetic Media
Below is a video that I contributed to the CoP as a possible use case for synthetic media in instruction. I call it a synthetic media mash-up. The video provides a summary of an openly shared research article, Is Deepfake Diversity Real? (Kaate et al., 2025), that examined the diversity of AI avatars in synthetic media tools. While the content of the article is compelling and important, the video offers a peek at one way to use brief AI-generated videos to summarize research studies or other openly shared written content while avoiding copyright violation. The video took me about 20 minutes to create.
Videos like this aren’t inspirational or transformative. Leon Furze has insightfully inquired about whether they’re anything more than “digital plastic.” Furze writes, “As the layer of digital plastic builds up in the digital ecosystem, it becomes more compressed and more homogenous, eventually flattening out to a sickly looking smear of uniform data. The rise of GAI content spells the end of the richer, human epoch of online data.” But could they provide different ways for students to access content and create more streamlined workflows for faculty?
As teachers considering how/if to use GenAI, we must recognize the critical role that our authentic presence plays in creating spaces where students want to lean in and learn: spaces in all modalities where students feel seen, supported, and challenged. Those are feelings that make us human, and they’re feelings each of us craves. Without experiencing these feelings of connection, we wither. Humans need interactions with other humans to thrive. Synthetic media can relay information, but it won’t inspire or motivate.
About the video above:
- The script was generated by Claude Sonnet 3.5 (Prompts: 1) Summarize the research questions, findings, and discussion from this article about diversity in deep fakes. What should college instructors using synthetic media to support the learning needs of diverse students take away from this study? 2) Write an engaging script for a 2-minute video overview of this study summarizing research questions, the number of avatars included in the sample, its main findings and takeaways.).
- The slides were generated using Gamma using the script, reviewed for accuracy, and downloaded.
- The slides were loaded into HeyGen with the script. An AI avatar of me was used for the opening video, then my voice clone was used for the narrative over the slides.
- The closing video is my authentic self, recorded by me in my backyard.
Risks and Guidance
As our CoP came to a close, I asked our community members to reflect on what they learned and to identify some risks and guidance for colleagues to consider. Here are our contributions to this conversation. Please take these as recommendations for getting started.
What risk(s) of synthetic media are at the forefront of your mind?
- Losing connection and trust with students by using synthetic media as a replacement for our authentic selves online
- Proliferating identity stereotypes by leaving out neurodiverse people, those who are older, those who are disabled, and intersectional identities and reinforcing language norms
- Teaching Evaluation Conflicts – This is a real story: a colleague who teaches online is using digital double videos for lectures and is getting pushback in their evaluation (“It is not acceptable; this is not a human.”) Who decides?
- Disengagement may increase for some students
- Future generations – What concerns surface if young people are raised in an environment that uses digital avatars all the time?
- Environmental risks – Synthetic media uses large amounts of water and energy (although how much is unclear). Leon Furze’s theory of “digital plastic” resonates
- Copyright – who owns the copyright of these videos?
What guidance would you give to community college educators about using synthetic media?
- Be intentional and transparent if you use synthetic media in your teaching
- Think about this: “What is the intent of this video? Is it to deliver content or to develop trust with students?” And use those answers to guide your use
- Guide your use of SM with a critical focus on using it to improve the experiences of diverse students
- Use it in addition to your human presence
- Maybe the better use is for students to use synthetic media to adapt content to meet their individualized needs
- There may be moments/opportunities where a “shallowfake” might be better than a “deepfake” (e.g., an obvious cartoon seems more authentic than a hyper-realistic version of ourselves)
- Use it in small doses – don’t rush to convert to full synthetic video right away
What would you add to these lists?
Participants who chose to create accounts in this CoP started with a free HeyGen account and a few also experimented with a free Synthesia account.
In June 2025, Saša Stojić-Ito and I presented a session at the Online Teaching Conference in Long Beach, CA about our CoP. Below is a link to our resource site, which includes our slides and more video examples and resources.
Many thanks to my colleagues who participated in this CoP and for their willingness to share their experiences more broadly.