Khan in the classroom?


If the idea of “blended learning” — combining elements of traditional classroom instruction with software-based supplements — sounds appealing in theory, it’s worth asking how it plays out in practice. There are a few dozen pilots in progress right now, and the first findings have been trickling in. One of these was shared last month in the form of a white paper, “Lessons Learned from a Blended Learning Pilot”, summarizing key findings from a pilot project at Envision Academy in Oakland, CA in which Khan Academy was incorporated into an Algebra I summer school course. While the report authors — Brian Greenberg, Leonard Medlock, and Darri Stephens — do present some preliminary quantitative analysis from a small controlled study, the sample is said to be too small (although, strangely, its size is never actually specified in the paper) to support any meaningful conclusions about learning gains. Stephen Downes, Seb Schmoller, and Alfred Essa have focused on the reported non-effect, but since it’s really a non-test, I’m not as concerned with the results. The real value I find in this report is the qualitative feedback presented from teachers, students, and partners (from Stanford’s d.school and Google’s Chromebook team). Here’s why: the ways in which students used (and misused) the Khan Academy software highlight opportunities for how “blended learning” software could (and perhaps should) work in the future. Edtechies, take note.

Below are a few of the passages that I found most interesting. <disclaimer>What I find interesting is, of course, colored by my own research focus, so the highlights for you may be entirely different from these.</disclaimer>

By observing the data screens, a teacher can easily see that a group of three or four students are all struggling with the same concept. The teacher can call these students together and provide a targeted mini-lesson. Even better, the teacher can call over a student who has proven mastery on the topic, and ask the student to provide the instruction to his/her peers.

Absolutely. In the context of a classroom, the actionable feedback would be suggestions for rearranging where students sit and who they are encouraged to talk with. There’s a relatively new startup doing just that: Learning Catalytics (Eric Mazur, Gary King, and Brian Lukoff) looks to have built a system that does this quite nicely, and their approach is rooted in Mazur’s extensive work on Peer Instruction. I’ve been kicking around designs for an entirely-online venue for peer instruction with peer assessment for the past few years, so stay tuned for announcements on that front.
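As a sketch of what that teacher-facing suggestion could look like in code (all names, types, and thresholds here are my own invention, not Khan Academy’s or Learning Catalytics’ API): given per-student mastery data, group the students who are struggling with the same skill and attach a peer who has already demonstrated mastery of it.

```typescript
type Mastery = "struggling" | "practicing" | "mastered";

interface StudentRecord {
  name: string;
  skills: Record<string, Mastery>; // skill id -> current state
}

interface MiniLessonGroup {
  skill: string;
  strugglers: string[];
  peerTutor?: string; // a student who has mastered the skill, if any
}

// Suggest mini-lesson groups: students stuck on the same skill, plus
// (when available) a peer who has already proven mastery of that skill.
function suggestGroups(students: StudentRecord[], minGroupSize = 3): MiniLessonGroup[] {
  const groups: MiniLessonGroup[] = [];
  const skillIds = new Set(students.flatMap((s) => Object.keys(s.skills)));

  for (const skill of skillIds) {
    const strugglers = students
      .filter((s) => s.skills[skill] === "struggling")
      .map((s) => s.name);
    if (strugglers.length < minGroupSize) continue; // not worth a mini-lesson yet

    const peerTutor = students.find((s) => s.skills[skill] === "mastered")?.name;
    groups.push({ skill, strugglers, peerTutor });
  }
  return groups;
}
```

The design choice worth noticing is that the output is a suggestion for the teacher, not an automatic action: the dashboard proposes groupings, and the teacher decides whether to convene the mini-lesson or deputize the peer tutor.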

A related (and encouraging) result of the blended learning model was that students began to work together much more collaboratively than usually observed in high school classrooms. Although the instructional videos were present, most students preferred to work through the practice problems themselves, with the help of the teacher, or by soliciting peer assistance. Students were surprisingly comfortable asking each other for help. One student told us, “Because we are all working on different things, it’s easier to ask for help.”

Reading these observations is encouraging. We recently introduced a new mode of study in parts of Grockit, dubbed “study hall”, that lets each student work through their own adaptive/personalized problem-solving study session while having access to a real-time public chat channel shared with all of the other students. As soon as a student gets stuck, they can click a button to signal to the other students that they want assistance. Others can then hop into that student’s personalized session to work through the problem with them. Complementing personalized study sessions with affordances for on-demand peer assistance is one of many ways to bridge the personalization-collaboration divide. A paper describing our first set of experiments using this model is headed to the Intelligent Tutoring Systems conference for peer review later this month.
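Here’s a toy sketch of that interaction pattern, with invented names rather than Grockit’s actual internals: a stuck student broadcasts a help request to the shared channel, and any peer can claim it and hop into that student’s session.

```typescript
interface HelpRequest {
  studentId: string;
  sessionId: string; // the personalized session a helper would join
  problemId: string;
  claimedBy?: string;
}

class StudyHallChannel {
  private requests: HelpRequest[] = [];
  private listeners: Array<(req: HelpRequest) => void> = [];

  // Called when a stuck student clicks the "I want help" button.
  requestHelp(req: HelpRequest): void {
    this.requests.push(req);
    this.listeners.forEach((notify) => notify(req)); // surface it in the public chat
  }

  // Called when a peer offers to help; returns the session to hop into.
  claim(problemId: string, helperId: string): string | undefined {
    const open = this.requests.find((r) => r.problemId === problemId && !r.claimedBy);
    if (!open) return undefined;
    open.claimedBy = helperId;
    return open.sessionId;
  }

  // Lets the chat UI react whenever a new help request is broadcast.
  onRequest(notify: (req: HelpRequest) => void): void {
    this.listeners.push(notify);
  }
}
```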

In one example, we observed two students working on the same section at the same time. They worked individually but conferred before submitting their answers. If they disagreed on a solution, they tried to convince one another or they looked for possible errors together. Other times students would tutor each other on different sections as needed. We see tremendous potential in this peer-coaching model and are interested in thinking about ways for students to signal to peers that they need additional help or to identify themselves as coaches on given topics.
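One way the kind of signaling the authors describe could work, sketched with hypothetical names: students flag themselves as coaches on specific topics, and a student asking for help on a topic is matched to an available coach.

```typescript
class CoachRegistry {
  private coaches = new Map<string, Set<string>>(); // topic -> coach ids

  // A student identifies themselves as a coach on a given topic.
  volunteer(studentId: string, topic: string): void {
    if (!this.coaches.has(topic)) this.coaches.set(topic, new Set());
    this.coaches.get(topic)!.add(studentId);
  }

  // Find a coach for a topic, excluding the asker themselves.
  findCoach(topic: string, askerId: string): string | undefined {
    for (const id of this.coaches.get(topic) ?? []) {
      if (id !== askerId) return id;
    }
    return undefined;
  }
}
```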

There’s a great study from a team of physics education researchers at the University of Colorado, Boulder that looked at the effects of peer discussions (in a lecture hall) among students who disagree on the correct answer to a clicker-style question: Why Peer Discussion Improves Student Performance on In-Class Concept Questions. Definitely worth reading if you’re interested in the topic.

Most people are drawn to Khan based on its massive video library and Sal’s own charming and engaging teaching style. Like many, we assumed the videos would be the predominant learning mechanism for students tackling new material. In fact, the students rarely watched the videos. This result is consistent with some of the observations in the Los Altos pilot. The students greatly preferred working through the problem sets to watching the videos. Students turned to their peers, the hints, and the classroom teacher much more often than they did the linked Khan video.

There’s a difference between learning something (for the first time) and reviewing/reinforcing what you’ve previously learned. Video lessons, as a form of direct instruction, are a good fit for students who are trying to learn something that they do not yet know. Problem sets, as a mechanism for practice and assessment, are a good fit for reviewing and reinforcing what a student knows. When a student starts a new topic, it makes sense to me to start with video and then move to the exercises (as is traditionally done in a textbook). If a student already knows a topic, skipping directly to exercises makes sense. Once you’re in an exercise, a topic-focused video feels less relevant. A progression of prepared hints, along with assistance from peers, can help a student work directly through the challenge in front of them in a way that a topical video cannot.
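To make that sequencing argument concrete, here’s a minimal sketch (the framing and names are mine, not any product’s actual logic): instruction first for brand-new topics, straight to practice for familiar ones, and a hints-then-people escalation once a student is inside an exercise.

```typescript
type TopicState = "new" | "known";
type Resource = "video-lesson" | "exercise" | "hint" | "peer-or-teacher-help";

function nextResource(
  topic: TopicState,
  insideExercise: boolean,
  hintsExhausted: boolean
): Resource {
  if (!insideExercise) {
    // Entering a topic: new material calls for direct instruction first;
    // review/reinforcement goes straight to practice.
    return topic === "new" ? "video-lesson" : "exercise";
  }
  // Stuck mid-exercise: work through the prepared hints before escalating
  // to a person; a topic-level video is deliberately absent at this step.
  return hintsExhausted ? "peer-or-teacher-help" : "hint";
}
```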

In a paper that we published last summer, “A Comparison of the Effects of Nine Activities within a Self-Directed Learning Environment on Skill-Grained Learning”, the opportunity to watch a topic-focused video after answering a question incorrectly was both unpopular and ineffective. Again, this is not to say that video lessons themselves are not valuable, just that the middle of a practice session on a known skill isn’t the best spot for a topical video. As a result, Grockit has moved toward offering videos focused on instruction in one part of the application, and offering videos focused on solving specific questions as screencasts available after a student answers that question. These worked-solution videos are immediately relevant at the moment they become available, much like question-specific hints or raising a hand to ask a teacher for targeted help. (In this case, since there is no classroom teacher to ask, we developed Grockit Answers to fill the void; needless to say, YouTube comments don’t cut it. Answers now powers every video lesson and question explanation, so students can ask about specific confusing moments in a video and get answers from other learners.)

So I’m arriving at a different conclusion than the white paper authors about the students’ lack of interest in viewing Sal’s videos from within the exercises: the videos aren’t necessarily too long or insufficiently interesting; they may just be presented in the wrong place, or at the wrong time, in the context of the exercises. Timing and context often matter a lot more than we realize.
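To make the “right place, right time” point concrete, here’s a small sketch of timestamp-anchored Q&A in the spirit of Grockit Answers. The shapes and function names below are my guesses for illustration, not the product’s real data model: questions attach to a specific moment in a video, so help is asked for and delivered in context.

```typescript
interface AnchoredQuestion {
  videoId: string;
  atSecond: number; // the confusing moment in the video
  text: string;
  answers: string[];
}

const questions: AnchoredQuestion[] = [];

// A learner asks a question pinned to the moment where they got confused.
function ask(videoId: string, atSecond: number, text: string): AnchoredQuestion {
  const q: AnchoredQuestion = { videoId, atSecond, text, answers: [] };
  questions.push(q);
  return q;
}

// Surface existing questions near the viewer's current playhead position,
// so earlier learners' confusion (and its resolution) appears in context.
function questionsNear(videoId: string, second: number, windowSec = 15): AnchoredQuestion[] {
  return questions.filter(
    (q) => q.videoId === videoId && Math.abs(q.atSecond - second) <= windowSec
  );
}
```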

I look forward to seeing how feedback from blended learning classroom pilots such as this one ultimately affects how these educational software systems continue to evolve.