Recently, the Bill & Melinda Gates Foundation posted a new resource at coursewarechallenge.org with findings from its most recent round of NGCC grant funding. Acrobatiq was part of that round, and I found a quote from myself on the site. I thought I would expand on it a bit. The quote:
“If students don’t provide enough data in the form of practice, then courseware providers can write an adaptive experience, but it won’t actually adapt to the students. That’s a trickier thing you can’t necessarily expect instructors to just know.”
Bill Jerome, Senior Product Manager at Acrobatiq
To expand a little bit:
In any adaptive learning platform (assuming the most generally accepted definition, where the platform adapts instruction without intervention from instructors), data is required. Without knowing what a student knows, there is no basis for adapting their instruction. That would be like asking a human tutor to “adapt instruction” to a student without letting them ask the student any questions. So we know we need at least one question. Few people would argue that one question is enough to really know what a student knows or doesn’t know, even for a human tutor, and certainly not for the kinds of questions we typically deliver online (multiple choice, maybe an entered word). So now we have made the case that the answer is “at least a couple.” Practically speaking, you need a number of questions related to a single learning objective to really understand where a student is, especially if those questions allow gaming strategies or luck, such as multiple-choice questions.
All of which I think makes sense when talked through like this, but it is not necessarily obvious to an instructor writing content. And what is the right number? That gets even more complicated, because it depends on things such as the difficulty of the question, the level of Bloom’s taxonomy you are trying to assess, the “gameability” of the item, and so on.
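To make the luck problem concrete, here is a small back-of-the-envelope sketch. Under the simplifying assumption that a student who has not mastered an objective guesses uniformly at random on independent four-option multiple-choice questions, we can compute how often pure guessing would clear a mastery threshold. This is purely illustrative (the function name, the 80% threshold, and the guessing model are my own assumptions, not any platform’s actual mastery model):

```python
from math import ceil, comb

def lucky_pass_probability(n_questions: int, threshold: float,
                           p_guess: float = 0.25) -> float:
    """Probability that a student who knows nothing clears the mastery
    threshold purely by guessing.

    Assumes independent four-option multiple-choice items, so each guess
    succeeds with probability p_guess. Sums the binomial tail from the
    minimum number of correct answers needed up to n_questions.
    """
    min_correct = ceil(threshold * n_questions)
    return sum(
        comb(n_questions, k) * p_guess**k * (1 - p_guess) ** (n_questions - k)
        for k in range(min_correct, n_questions + 1)
    )

# With a single question, a guesser "demonstrates mastery" 25% of the time;
# with five questions and an 80% threshold, that drops to about 1.6%.
for n in (1, 3, 5, 8):
    print(n, round(lucky_pass_probability(n, 0.8), 4))
```

The point of the sketch is just that the false-positive rate falls quickly as you add questions per objective, which is exactly the kind of reasoning an adaptive platform does for an instructor, and exactly the kind of thing an instructor writing content can’t be expected to just know.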
This issue is what I was trying to convey in that short quote.