Assistant Professor Bertrand Schneider, who will arrive at HGSE in January 2017, is excited to be joining the community. “When I visited the HGSE last January, I was amazed by the depth of the discussion we had on my research. Each interaction that I had with the faculty left me energized, bubbling with questions and new project ideas,” he says. “I’m looking forward to making this a daily experience.”
Currently a postdoctoral researcher at Stanford University’s Graduate School of Education, where he recently earned a Ph.D. in the Learning Sciences and Technology Design Program, Schneider focuses on how emerging technologies can help educators design better learning environments.
“I am especially interested in environments that can facilitate access to abstract concepts in STEM through physical interactions like having students build their own physical artifacts,” he says, pointing to his work with Tangible User Interfaces and Fabrication Labs.
How and why is the development of new educational interfaces a significant part of the field?
Research in educational technologies is going through an exciting time. There has been this realization that it’s not enough to put computers in schools; you actually need interdisciplinary teams of educators, researchers, and engineers to build compelling learning experiences. Historically, there has always been this idea that technology can create interfaces that make complex ideas easier to comprehend. Papert, for instance, argued that tangible programming could be a great entry point for learning math. Engelbart had this grandiose view that technology could augment human intellect. We are at a turning point where technology is becoming cheap and mature enough to realize those visions. I am very optimistic that those efforts will produce ways to learn that we have not experienced before, and I’m excited to be part of this endeavor.
What is your latest research project?
My group at Stanford has been at the forefront of a new research methodology called multi-modal learning analytics (MMLA). Basically, the idea is to use high-frequency sensors (eye-trackers, motion sensors, video streams, GSR sensors, etc.) to collect massive datasets on students’ behaviors, and use data mining techniques to make sense of them.
One motivation for this approach is that "we teach what we can measure." If we can measure more complex forms of learning, we are taking the first step toward helping students acquire those new skills. Collaboration is one example. It’s a multi-faceted construct that's hard to measure. But it turns out that you can use synchronized eye-trackers to capture a group's visual coordination, and that those patterns are a really good proxy for students’ quality of collaboration. That’s a great starting point. The hope is that we can extend this approach to other (sub)skills, and build tools that provide real-time feedback to scaffold students’ learning.
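To make the eye-tracking idea concrete: one common measure in this line of work is the proportion of time two students look at roughly the same location within a short time window of each other. The sketch below is illustrative only, not Schneider's actual pipeline; it assumes each student's gaze is an array of (x, y) screen coordinates sampled at the same rate, and the radius and lag thresholds are made-up values for the example.

```python
import numpy as np

def joint_attention_ratio(gaze_a, gaze_b, radius=50.0, lag=30):
    """Fraction of samples where student A's gaze point falls within
    `radius` pixels of student B's gaze at some moment no more than
    `lag` samples earlier or later. Higher values suggest the pair is
    visually coordinating around the same part of the screen."""
    gaze_a = np.asarray(gaze_a, dtype=float)
    gaze_b = np.asarray(gaze_b, dtype=float)
    n = min(len(gaze_a), len(gaze_b))
    hits = 0
    for t in range(n):
        # Compare A's gaze at time t to B's gaze in a small window around t.
        lo, hi = max(0, t - lag), min(n, t + lag + 1)
        dists = np.linalg.norm(gaze_b[lo:hi] - gaze_a[t], axis=1)
        if np.any(dists <= radius):
            hits += 1
    return hits / n
```

A real analysis would work from much richer data (fixations, areas of interest, task events), but even this toy ratio shows how a high-frequency sensor stream can be reduced to a single interpretable indicator of a group's coordination.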
What courses will you be teaching in the upcoming year?
I will be teaching a class on MMLA in the spring. This is a huge technical challenge, in the sense that handling massive multimodal datasets requires years of programming experience, and I will be teaching this class to students who have limited programming experience. But democratizing access to these new methods is crucial: students in education need to be part of this discussion, because they are going to be the educational leaders of tomorrow.