Practitioner Voices Summit—Shaping the Future of AI in Math Education
All fourteen tables around the room buzzed with educators deep in conversation—a vivid reminder of what happens when teachers are given sufficient time, space, and support to collaborate. Throughout the EduNLP Lab’s Practitioner Voices Summit at Stanford this June, we centered educator voices, creating space to discover, share, and feel empowered. “There was so much care in the room. It reminded me how many of us are holding a deep commitment to students and education, even if we’re coming at it from different angles,” Vanessa Hernandez, a teacher from Oakland, shared.
When we first publicized this summit on language technologies in math education, we expected a few dozen applications; we received more than 400. The math educators we invited to participate taught across grade levels and came from twenty-two states. Forty-six were classroom teachers, and fifteen were instructional specialists or school and district leaders. Some were very skeptical of AI tools, some were enthusiastic, and many were in between—cautious optimists. For some, this was their first time logging into ChatGPT, while others were daily users of tools powered by large language models (LLMs).
Across two days, educators participated in small-group conversations—grounding discussions in their values and challenges, exploring LLM-powered tools, and developing criteria for evaluating AI tools and their outputs for teaching. Teachers informed these criteria by reflecting critically on LLM outputs: “Are they mathematically accurate? Do they encourage complex mathematical thinking? Do they reflect students’ lived experiences?” They also weighed the usability of the tools themselves: “Is the tool adaptive? Does it understand educator language? Is it user-friendly? Does it protect student privacy?”
Attendees learned about LLM-powered tools together, sharing—and sometimes shifting—their stances. Laura DuMont, an elementary school math specialist in San Jose, reflected: “I was honestly terrified to take a chance on an unknown tool. Now I feel like I can play around with tools to see what fits my classroom and the needs of my students.” This kind of grounded curiosity was echoed across the room, as educators tested, questioned, and reimagined how AI might serve their work rather than distract from it.
Participants joined breakout groups led by members of the EduNLP Lab, each exploring a distinct dimension of math education: student talk (Math Out Loud), teacher talk (M-Powering Teachers), feedback on student work (Feedback Footprint), and curriculum adaptation (ScaffGen). Educators engaged with the behind-the-scenes structures of these LLM-powered tools and dreamed expansively about how these technologies could best support math instruction and learning.
One thing that struck us was how easily teachers moved between stances. In the span of a single discussion, educators would transition between critical and hopeful, maintaining curiosity throughout. Educators handle nuance every period of every school day; it shows when they dissect technology, too.
That flexibility is exactly why we need to involve teachers at the front end of AI research and development, not just after a tool is built. Too often the cart—an almost‑finished product—shows up first, and the horse—educators—gets asked for late‑stage feedback. Our summit took the opposite approach, engaging teachers in projects that are still emerging and in their early stages. Gwen Faulkner, an elementary math specialist from Cook County, IL, reflected on this process: “It was invigorating to feel like a part of the construction process. Similar to how my students grow in class when they feel like co-authors of our learning environment.” Our goal is to learn and share what deeper engagement with educators in R&D can look like and what these partnerships can elicit. Moe Htet Kyaw, a high school math teacher in San Francisco, offered a helpful reminder: “The AI is only as strong/useful as the educator themselves. At the core of this practice, there should be an educator or practitioner who is very aware and intentional about their practice and the use of the tool.”
Our broader goal is to build a sustained platform to amplify teacher perspectives, give educators the opportunity to represent their communities, and shape future research directions. The conversations captured here are being coded and analyzed now; findings will be shared this fall in an open‑access report and will inform new prototypes co‑designed with the same educators who sparked them. If we keep the horse in front of the cart—letting practitioner insight steer technical work—AI tools stand a much better chance of helping the students who need them most.