
Rethinking AI in Learning: Inclusive Design, Responsible Impact

August 13, 2025 (updated August 29, 2025)

Later, Ding’s experience as an instructional designer at a learning technology startup exposed her to several real-world challenges in digital learning design. One major issue was that some designers lacked direct access to student performance data, making it difficult to iteratively improve digital learning materials based on actual outcomes. She also observed how unclear stakeholder expectations could lead to misalignment during the design process. In addition, designing for differentiation became especially challenging when working with educators who had low tech proficiency, underscoring the need for more inclusive, intuitive tools.

This experience directly informs her current research with EarSketch, a creative coding platform co-created by Georgia Tech’s Professor Brian Magerko. EarSketch teaches programming through music: users write Python or JavaScript code to mix beats and songs, learning computer science concepts along the way. Backed by a nearly $3 million NSF grant, Ding is working to adapt EarSketch to be accessible for blind or visually impaired (BVI) students, using sonification to turn interface cues into sound.
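The article does not detail how sonification is implemented in EarSketch. As a minimal illustrative sketch, the idea of turning interface cues into sound can be reduced to a mapping from interface events to tones; the event names, frequencies, and durations below are invented for illustration, not the project's actual design:

```python
# Sonification sketch: map interface cues to audible tones so a blind
# user can hear what the interface is doing. All event names and tone
# values here are illustrative assumptions.

UI_CUES = {
    "menu_opened":   (440.0, 0.10),  # A4, short blip
    "item_selected": (660.0, 0.10),  # higher pitch confirms a selection
    "error":         (220.0, 0.30),  # low, longer tone signals a problem
    "run_complete":  (880.0, 0.20),  # bright tone marks success
}

def cue_to_tone(event: str) -> tuple[float, float]:
    """Return (frequency_hz, duration_s) for an interface event."""
    if event not in UI_CUES:
        raise KeyError(f"no sonification defined for {event!r}")
    return UI_CUES[event]
```

In a real system, the returned frequency and duration would drive an audio engine; keeping the mapping in one table makes the audio vocabulary easy to adjust with user feedback.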

At Georgia Tech, Ding focuses on integrating generative AI and accessibility tools into EarSketch, which traditionally relied on visual elements. While these features work for many users, they are challenging for BVI learners who often rely on screen readers. Ding’s goal is to create an assistant “like an Alexa for coding” to improve accessibility for students with visual impairments.

By grounding her designs in real student data and collaborating with teachers, she is creating more inclusive learning environments that offer real-time, personalized support to address gaps she experienced as both a teacher and an instructional designer.

Her perspective was shaped by earlier experiences studying education at Minzu University in Beijing, where she saw the disconnect between technology and teaching. In applied linguistics classes, she observed professors struggling to use basic digital tools to engage learners in studying foreign languages. “Learning foreign languages is difficult, just like programming languages,” she notes. “When students aren’t properly engaged—whether in Mandarin, English, or Python—successful outcomes become nearly impossible.” This disconnect between teaching methods and digital-native learners would later define her research mission.

Many teachers were uneasy about integrating new tools, even when their students often outpaced them in terms of tech know-how. “Students have more tech experience and proficiency than teachers oftentimes, but teachers have a hard time integrating new tech into their curriculum,” Ding observes. The pandemic only underscored these issues, as many teachers struggled to adapt to online learning.

Determined to close this gap, Ding set her sights on Georgia Tech. In 2023, she joined Professor Brian Magerko, a pioneer in AI-driven education, and his team in the Expressive Machinery Lab. “I specifically chose Georgia Tech because of Dr. Magerko’s background in AI and education, AI literacy, and creative learning technology,” Ding notes. Now a doctoral researcher in Georgia Tech’s Digital Media program, she focuses on AI chatbot design and AI literacy, helping both students and teachers better understand and use AI, and on creating equitable tools that ensure no learner is left behind. “We are not designing or building a smart tool. We’re building AI partners that understand teachers’ real workflows and students’ diverse needs to make the AI systems accessible and responsible for all learners,” Ding states.

Co-Designing an AI Assistant for Blind Coders

Rather than design solutions in isolation, Ding engaged the experts who mattered most: blind students and their teachers. She organized co-design sessions in partnership with the California School for the Blind, where students, instructors, and researchers reimagined EarSketch. This participatory approach, treating users as co-designers, was crucial. Engaging BVI teens, a group often overlooked in design research, ensured the project focused on their day-to-day challenges and creative ambitions.

The co-design process spotlighted pain points. For example, students found the interface’s Content Manager cumbersome with a screen reader, forcing them to listen to excessive information and navigate long menus. One student suggested a “universal search” to instantly pull up desired sounds or sections. Others wanted smarter navigation—if they selected a category like “Sound Library,” the content should auto-load and be announced, and the system should provide filters to narrow results.

Teachers also highlighted significant challenges. Switching between the code editor, curriculum lessons, and audio workspace proved difficult for BVI users. One teacher envisioned a chatbot that could provide step-by-step guidance and keyboard shortcuts for tasks while using a screen reader. In practice, blind students often required one-on-one help for even simple tasks, such as writing a line of code. This process turned coding into a series of rote steps, rather than fostering creative, independent exploration.

How Large Language Models Can Help

Ding’s research focuses on a promising solution: integrating a large language model (LLM) like ChatGPT directly into the EarSketch environment. These AI models can understand natural language and generate context-aware responses. Ding and her colleagues identified several ways an LLM assistant could transform the learning experience for BVI coders:

  • Seamless Search & Navigation: An AI assistant could serve as a smart search bar for EarSketch, fetching relevant sounds or sections based on natural language queries.

  • Guidance Between Coding and Curriculum: LLM integration offers a virtual teaching assistant to guide learners through multi-step tasks such as using EarSketch functions.

  • Suggestions, Not Solutions: AI should empower learning, not do the learning for students. By prioritizing guidance over answers, AI helps students build their own problem-solving skills.
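One way to encode the “suggestions, not solutions” principle is in the assistant's system prompt, so every turn of the conversation inherits the guardrail. Here is a minimal sketch using an OpenAI-style chat message format; the prompt wording and function name are assumptions for illustration, not the project's actual implementation:

```python
# Sketch: compose chat messages that steer an LLM toward hints rather
# than finished code. The prompt text is an illustrative assumption.

SYSTEM_PROMPT = (
    "You are a coding tutor inside a screen-reader-friendly music "
    "programming environment. Guide the student with step-by-step hints "
    "and keyboard-shortcut reminders. Never write the full solution; "
    "suggest the next single step and ask a guiding question."
)

def build_messages(student_question: str, code_context: str) -> list[dict]:
    """Assemble messages for a chat-completion-style API call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                f"My code so far:\n{code_context}\n\n"
                f"Question: {student_question}"
            ),
        },
    ]
```

The resulting list could be passed to any chat-completion endpoint; keeping the pedagogical constraint in the system role, rather than in each user turn, is what makes “guidance over answers” a property of the assistant instead of a request the student has to remember to make.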

This research has been recognized at the Artificial Intelligence in Education (AIED) 2025 conference, where Ding’s paper was accepted for publication.

Building Human-Centered AI

After months of co-design and prototyping, Ding’s project is taking shape as a model of what she calls “human-centered AI.” The team has developed a preliminary GPT-4-powered chatbot integrated with EarSketch’s curriculum content. Early tests have shown promise: students were able to work through coding and music tasks with improved workflow and more immediate feedback. One student tester, for example, used voice input to ask the AI for help and received spoken hints in return, a glimpse of what a voice-interactive “Alexa for coding” could be. There were, unsurprisingly, a few hiccups: the prototype sometimes disrupted the interface or gave inconsistent responses. But each issue informs the next iteration. Ding is now exploring specialized plugins to enhance the AI’s reliability in searching the sound library, navigating code, and giving feedback.

For Ding, the technical advances are always tied to the bigger picture of AI literacy and fairness in education. AI literacy isn’t just about knowing how AI works; it’s about ensuring access to AI-augmented learning for all students and teaching them to use these tools critically and responsibly. By bringing blind and visually impaired students into the design process, she is also demystifying AI for them. These learners are not just consumers of an AI helper; they have actively shaped it. In doing so, they’ve learned how AI can amplify human capabilities and what its limitations are.

This research follows Magerko’s philosophy of designing “toward the margins” to include the most underserved users, ultimately capturing everyone in between. By focusing on those with the greatest accessibility needs, the solutions tend to improve usability for a much broader audience as well. Features like voice-assisted search or better error explanations benefit any novice coder who might be struggling, not only those with disabilities. In this way, equitable design and AI literacy go hand in hand. When you build technology that different kinds of people can understand and engage with, you empower more people to understand and engage with technology.