The Baker Learning Analytics Prizes | Alelo
  • Oct 10, 2019

The Baker Learning Analytics Prizes

By Ryan Baker
Associate Professor, University of Pennsylvania and Director of the Penn Center for Learning Analytics

Ryan will be presenting a webinar on Wednesday, October 16, 1:00 PM-2:00 PM Eastern Time and Thursday, October 17, 1:00 PM-2:00 PM Eastern Time.

A field only moves forward if it chooses the right problems: big goals that make a difference. Many, perhaps most, of the adaptive learning systems in use today are based on the same technology developed in the 1990s. Much of the research published today consists of tiny optimizations on small problems. Incremental progress can be valuable, but it shouldn't be the primary focus of a field.

In this talk, I present a vision for some directions I believe the field of artificial intelligence and education should go: toward greater interpretability, generalizability, transferability, and applicability, and with clearer evidence of effectiveness. I pose these potential directions as a set of six contests, with concrete criteria for what would represent successful progress in each area: the Baker Learning Analytics Prizes (BLAP).

To briefly summarize these challenges:

  • BLAP1: Build a model in one system and have it transfer information to another system in a way that makes that system’s predictions and behavior better. We learn so much about students in AI-based educational technologies… and then we forget it when the student moves on to the next educational technology.
  • BLAP2: Demonstrate that learners who received an individually-targeted intervention have better outcomes in the longer term than learners who didn’t. Also demonstrate that the individual targeting matters: it’s not simply that everyone benefits from the intervention.
  • BLAP3: Build a (good) explanation of a complex model that educational practitioners can understand as well as data scientists do. If teachers and school leaders don't understand the technology, they don't trust it, and often they don't use it.
  • BLAP4: Get the same quality of knowledge tracing for learning that occurs in groups, or in classrooms, as for learning that occurs with one student using one computer. Only a small proportion of learning happens with one student and one computer, but that is where the technology works best. We need to spread the benefits of AI in education to a broader range of learning situations.
  • BLAP5: Build a detector of boredom that "just works" in an entirely new system. I choose boredom because it is associated with several-year differences in student outcomes (e.g., San Pedro et al., 2013), but the big idea here is that emotion matters, and adapting to it seems to be beneficial; right now, though, detection research is focused on single systems. If we could break out of this limitation, the benefits of affective computing would spread much more widely.
  • BLAP6: Build a model that "just works" for an entirely new population. Right now, learning system developers don't pay enough attention to whether their models work across populations, and when researchers have looked at this, we often find models that work well on the suburban and small-city populations of convenience for which the systems were developed, but not more broadly.
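For readers less familiar with the knowledge tracing mentioned in BLAP4: the classic formulation is Bayesian Knowledge Tracing, which maintains a probability that a student has mastered a skill and updates it after each observed response. The sketch below is a minimal illustration of that update; the parameter values (slip, guess, learn rates) are illustrative placeholders, not taken from any particular system.

```python
# Minimal Bayesian Knowledge Tracing (BKT) update, for illustration only.
# Parameter values are placeholders, not fitted to any real system.

def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Update P(student knows the skill) after one observed response."""
    if correct:
        # P(known | correct): a known student answers correctly unless they slip;
        # an unknowing student answers correctly only by guessing.
        cond = (p_known * (1 - slip)) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        # P(known | incorrect): a known student errs only by slipping.
        cond = (p_known * slip) / (
            p_known * slip + (1 - p_known) * (1 - guess))
    # Account for the chance of learning between practice opportunities.
    return cond + (1 - cond) * learn

# Trace estimated mastery over a short sequence of responses.
p = 0.3  # prior probability the skill is already known
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

The challenge in BLAP4 is precisely that this kind of per-student, per-response update does not directly apply when learning happens in groups or whole classrooms, where individual responses are harder to attribute.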

In my upcoming webinar I will provide more detail on these challenges, and on how solving them will bring the field closer to achieving its full potential: using data and artificial intelligence to benefit learners and transform education for the better.