Bugs, Bots, and Algorithmic Bias: Examining the Power and Peril of AI in Education (21CSLA Webinar)

Speakers

Tiera Tanksley

Faculty Fellow, Critical Race Technology Studies, UCLA Center for Critical Internet Inquiry

Jabari Mahiri

Professor and Faculty Director, BSE Leadership Programs; Chair, 21CSLA Leadership Board

Rebecca Cheung

Assistant Dean, Leadership Development Programs, Berkeley School of Education

Speaker bios

Dr. Tiera Tanksley is a Faculty Fellow in Critical Race Technology Studies at the UCLA Center for Critical Internet Inquiry. Her scholarship, which theorizes a critical race technology theory (CRTT) in education, extends conventional education research to include socio-technical and techno-structural analyses of artificially intelligent (AI) technologies. Specifically, her research examines anti-Blackness as “the default setting” of AI and traces the socio-emotional, mental health, and educational consequences of algorithmic racism in the lives and schooling experiences of Black youth. Her work simultaneously recognizes Black youth as digital activists and civic agitators, and examines the complex ways they subvert, resist, and rewrite racially biased technologies to produce more just and joyous digital experiences for Communities of Color across the diaspora.

Jabari Mahiri (Professor and Faculty Director, BSE Leadership Programs; Chair, 21CSLA Leadership Board) is the author of Deconstructing Race: Multicultural Education Beyond the Color-Bind and the host of the podcast Equity Leadership Now!

Webinar Highlights

Research Perspective

With the proliferation of Artificial Intelligence (AI) across educational contexts, Dr. Tiera Tanksley outlines and debunks four myths about technology:

  1. Technology is inherently neutral, and its results are objective, trustworthy, and valid.
  2. Because technology is neutral, it can be trusted to make unbiased decisions.
  3. AI is the “silver bullet” to remediating inequity in general, and educational inequity in particular.
  4. Racist outcomes in AI are harmless “bugs” or “glitches” that are unavoidable and unforeseeable.

Dr. Tanksley argues that “the anticipated relationship between our communities and technology is one of coercion, control, extraction, and containment.” Her research identifies several concerning “bugs” (defined as “small, seemingly mundane errors in an otherwise effective or efficient codified system”) in educational technology:

  • Chatbots like "Harriet Tubman AI" and other historical figure simulations frequently produce historically inaccurate and harmful information that perpetuates anti-Blackness.
  • Automated grading systems score Black students and speakers of African American Vernacular English (AAVE) significantly lower than other demographic groups (a simple audit of this kind of gap is sketched after this list).
  • Anti-cheating software has failed to recognize Black students' faces as human, resulting in locked exams and disciplinary actions.
  • Safety technologies like Geolitica (formerly known as PredPol) disproportionately identify Black and Brown names as likely future criminals.
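
To ground the automated-grading finding above, here is a minimal, hypothetical sketch of the kind of group-level score audit a school could run on its own grading data. The scores, group labels, and the mean_score_by_group helper are invented for illustration, not drawn from Dr. Tanksley's research.

    import statistics

    # Hypothetical scores from an automated essay grader, tagged with each
    # writer's demographic group. All values are invented for illustration.
    scores = [
        {"group": "A", "score": 88}, {"group": "A", "score": 91},
        {"group": "B", "score": 74}, {"group": "B", "score": 70},
        {"group": "A", "score": 85}, {"group": "B", "score": 77},
    ]

    def mean_score_by_group(rows):
        """Return the mean automated score for each demographic group."""
        by_group = {}
        for row in rows:
            by_group.setdefault(row["group"], []).append(row["score"])
        return {g: statistics.mean(vals) for g, vals in by_group.items()}

    means = mean_score_by_group(scores)
    gap = max(means.values()) - min(means.values())
    print(means)                          # {'A': 88.0, 'B': 73.67 (approx.)}
    print(f"group score gap: {gap:.1f}")  # a large gap warrants human review

An audit like this cannot say why a gap exists, but it turns a vague suspicion of bias into a concrete number that leaders can bring back to a vendor.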

Research shows that schools with high-tech surveillance infrastructures place more students on disciplinary tracks and create environments where all students experience higher levels of chronic stress and anxiety, decreased feelings of belonging, and lower math scores.

Dr. Tanksley uses the term algorithmic microaggressions—“the subtle, seemingly innocuous racial assaults or put-downs that are automated and unconsciously encoded”—to describe a technologically codified system of white supremacy. She argues that "anti-Black outputs are neither a bug nor a glitch, but a foundational design feature" of our technologies, schools, and society.

Intersection of Research and Practice

Dr. Tanksley’s framework for critical AI literacy is grounded in the dual nature of abolition: destruction followed by new growth, critique paired with freedom dreaming, and hope-building. The three tenets that undergird her approach to critical AI literacy are:

  1. Fostering socio-technical consciousness: Make sense of everyday experiences with digital and algorithmically-mediated racism.
  2. Developing socio-technical resistance: Critically navigate, resist, and subvert algorithmic racism in everyday technologies.
  3. Encouraging socio-technical freedom dreaming: Reimagine and dream up counter-technologies that protect and sustain Black life, joy, and wellness on a techno-structural and sociotechnical level.

The Race, Abolition, and AI program at UCLA is an example of how youth are invited to take a more critical stance on technology and learn from abolitionist practices. Dr. Tanksley centers an ecological and systems-conscious approach that considers historical knowledge, algorithmic consciousness, experiential insights, sociopolitical context, and environmental consciousness. Through activities like analyzing the "if-then" logic behind Trayvon Martin's killing or examining grocery store aisle organization, students recognize racial algorithms in everyday life. They experiment with Google searches (comparing results for "professional hair" versus "unprofessional hair") and learn to fix biased algorithms by retraining them with more diverse data.
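
The closing idea of that paragraph, fixing a biased algorithm by retraining it with more diverse data, can be illustrated with a toy model. The sketch below is an invented example rather than the program's actual activity: a one-nearest-neighbor classifier trained only on a skewed sample wrongly flags an input from an underrepresented style, and adding diverse training examples corrects the output.

    # Toy 1-nearest-neighbor classifier; all data points and labels are
    # invented for illustration.
    def nearest_neighbor(train, x):
        """Classify x with the label of the closest training example."""
        return min(train, key=lambda ex: abs(ex[0] - x))[1]

    # Skewed training set: "acceptable" examples come from only one style.
    skewed = [(1.0, "acceptable"), (1.2, "acceptable"), (5.0, "flagged")]
    print(nearest_neighbor(skewed, 4.0))   # -> "flagged" (a false positive)

    # Retraining: add examples of the underrepresented style.
    diverse = skewed + [(4.0, "acceptable"), (4.2, "acceptable")]
    print(nearest_neighbor(diverse, 4.0))  # -> "acceptable"

The fix works only because the new examples change what the model treats as normal, which is exactly the students' insight: the bias lives in the training data, not in the input being judged.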

Educational leaders must approach AI implementation with careful consideration rather than rushing to adopt technologies without proper evaluation. Dr. Tanksley cautions against the "move fast and break things" mentality and highlights two strategies:

  1. Develop procurement protocols that include critical questions about what datasets were used to train the AI, carceral connections, data privacy, environmental impacts, and potential bias (a minimal checklist sketch follows this list).
  2. Include families and young people in conversations about whether and how to adopt technology.
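
As one hypothetical illustration of the first strategy, a procurement protocol can be encoded as a simple checklist that flags unanswered questions before a tool is adopted. The field names and question wording below are assumptions for illustration, not an official 21CSLA or district protocol.

    # Hypothetical procurement checklist; keys and questions are invented.
    PROCUREMENT_QUESTIONS = {
        "training_data": "What datasets trained this AI, and who is represented in them?",
        "carceral_links": "Does the vendor share data or contracts with policing or carceral systems?",
        "data_privacy": "What student data is collected, where is it stored, and who can access it?",
        "environment": "What are the environmental costs (energy, water) of running this system?",
        "bias_evidence": "Has the tool been independently audited for racial or linguistic bias?",
    }

    def unanswered(responses):
        """Return the questions a vendor has not yet answered."""
        return [q for key, q in PROCUREMENT_QUESTIONS.items() if not responses.get(key)]

    # Example vendor review: three questions are still open.
    vendor = {"training_data": "Public web corpus", "data_privacy": "FERPA-compliant storage"}
    for question in unanswered(vendor):
        print("OPEN:", question)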

Discussion/Reflection Questions

  1. How might you develop a comprehensive evaluation framework grounded in equity for AI technologies to assess bias, data privacy, environmental impact, and connections to surveillance systems?
  2. What structures could you implement to meaningfully include student and family voices in decisions about AI adoption? How might you prioritize the voices of those most likely to be negatively impacted by these technologies?
  3. How can you balance teaching students practical AI skills while fostering their critical consciousness about algorithmic bias and providing spaces to reimagine AI?
