Last month I read a post, "Your GPS is Making You Dumber," by Dan Meyer with great interest. In it, Dan explores the dichotomy of providing steps for students to use in solving math problems vs. providing the problem without the steps to let students grapple with how to solve it. If you haven't read it, the post and the 40+ comments are a great read. I highly recommend it.
I mention it here because this year I tried something different when I taught stoichiometry, the mathematical relationships inherent in chemistry. I blogged about my new approach here and here. To summarize, instead of showing my students exactly how to solve stoichiometry problems, I presented the problems and suggested they figure it out. I helped and prodded and eventually showed several different systems. This post is the end of the story.
At the end of the year, my students take an end-of-course exam. It's a fourteen-question test over the big ideas in chemistry, written and graded by me, to assess how much growth they have made over the course of a year; it amounts to 10% of their grade in the course. I guess that qualifies it as high stakes. Or at least high stakes-ish. There is, of course, a stoichiometry problem where students are given the mass of a reactant and asked to calculate the mass of a product. This is a chemistry standard, something I would want every student to be able to do correctly by the end of the class.
In the 2014-2015 school year, I showed my students discrete steps for solving stoichiometry problems. We returned to these problems every month of the year, so by April, when they took the end-of-course exam, they had seen the process many times. That year, 87% of my students solved the stoichiometry problem correctly. The other 13% didn't leave it blank or earn 0 points; they made a mistake or two but earned partial credit.
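For readers outside chemistry, the discrete steps for a mass-to-mass problem can be sketched in code. The reaction and numbers here are illustrative, not the ones from my exam:

```python
# The classic three-step procedure for a mass-to-mass stoichiometry
# problem, using the illustrative reaction 2 H2 + O2 -> 2 H2O.

MOLAR_MASS_H2 = 2.016    # g/mol
MOLAR_MASS_H2O = 18.015  # g/mol

def grams_h2o_from_grams_h2(grams_h2):
    # Step 1: convert the given mass of reactant to moles.
    moles_h2 = grams_h2 / MOLAR_MASS_H2
    # Step 2: apply the mole ratio from the balanced equation
    # (2 mol H2O per 2 mol H2) -- the step my students later omitted.
    moles_h2o = moles_h2 * (2 / 2)
    # Step 3: convert moles of product back to mass.
    return moles_h2o * MOLAR_MASS_H2O

print(round(grams_h2o_from_grams_h2(4.0), 1))  # -> 35.7 (grams of water)
```

Step 2 is the one that connects the arithmetic to the chemical reaction actually taking place, which is why its omission matters so much.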
In the 2015-2016 school year, I did not provide the discrete steps. I focused instead on helping students get there in their own way. Again, we revisited the concept many times throughout the year, and on the formative assessments my results were typical of other years. On the end-of-course exam, though, only 62% of my students correctly solved the problem. Again, the other 38% didn't leave it blank or earn 0 points, but often they left out the step that uses the balanced equation, the part that shows they have connected the problem to the reaction that is taking place.
A drop of 25 percentage points has spooked me about trying this again next year. On the other hand, perhaps comparing what I did for 24 years with what I tried in one year isn't a fair comparison. If I could compare my first year's results with this year's results, would they be this different? There is no way of knowing because I don't have those results.
I am a believer in inquiry or discovery or constructivism or whatever the word is for the idea that when students build meaning, it leads to better understanding. Looking at my data, though, I am wondering whether it was my question, my method, or something else that caused this big drop in results.
In his post, Meyer writes:
Similarly, our step-by-step instructions do an excellent job transporting students efficiently from a question to its answer, but a poor job helping them acquire the domain knowledge to understand the deep structure in a problem set and adapt old methods to new questions.
My stoichiometry question certainly measured whether or not they could solve a routine question in chemistry. Maybe if I had asked a different kind of question, one that measures their understanding of "the deep structure in a problem," I might have seen different results. What do you think? Please add your thoughts as comments. I would love to hear them!