More on metacognition and assessment

For a couple of years now, I’ve tried to build low-stakes, formative assessment into my intro courses. The intent was to help students take more responsibility for their own learning and to encourage mid-course adjustments to study strategies while there was still plenty of time to make them. (Contrast this approach with the midterm/final exam model.) The way I implemented this metacognitive effort was to encourage students to take the online quizzes provided by the textbook publisher at the end of each chapter. As long as students took the quizzes, I gave them extra credit regardless of how well they did. And, to judge by their scores, nearly everyone did well. When individuals did very poorly, which was rare, I spoke to them privately about whether they had taken the quiz seriously or just for the credit. The message was received, and those students almost always did substantially better the next time.

Unfortunately, the plan didn’t work as well as I had hoped. In past years I found that the assessment quizzes, based on textbook test banks, didn’t correspond well to my exam questions. The quizzes were not very challenging and thus not adequate preparation for my exams. Students could do well on the quizzes and then not so well on the exams. I felt that the mismatch between quiz and exam questions defeated the purpose of the exercise and encouraged students to think of the quizzes as busy work, unrelated to assessing their mastery of the material.

This past semester, to address this concern, I adopted the Aplia product, customized for the text I was using. I made this selection principally because of the higher quality of its quiz (problem set) questions compared with textbook test banks. The result was disconcerting. Student scores on the quizzes were low, and they got worse as the semester went on. After about the fourth week, the class mean never reached 70%. I got the feeling that students were not taking the quizzes seriously, that they were just going through the motions. Economics suggests that people behave rationally; on that view, students would make just enough effort to get the credit while not taking the assessment itself seriously. And yet this seemed to conflict with my experience in past semesters, when the students did consistently well on the quizzes.

To examine this paradox, I added a question to the final exam, which I told students I wouldn’t look at until after grades were in. The question asked:

The average score on the Aplia problem sets was pretty low. How did you approach them?
a. I tried to do my best, but the problems were difficult; when I did badly on the problem sets, I took that as a sign that I hadn’t mastered the material. (11%)
b. I tried to do my best, but the problems were unlike the ones we did in class. (32%)
c. I tried to get the right answer but it didn’t bother me when I got the questions wrong. (41%)
d. I didn’t take the problems too seriously; I was just trying to get the points for trying. (16%)

I intended ‘a’ to be the right answer, the one indicating that the exercise was achieving its goal. Answer ‘b’ was included to reflect the fact that midway through the semester, many quizzes began to include numerical questions, which were usually based on methods of solution that I didn’t emphasize in the course. The advantage of numerical questions is that they result in “exact” answers. The problem is that numerical questions are often driven by the need to keep the math simple, which forces the economics to be trivial or an unusual case.

The frequency distribution of the responses is given in parentheses above. Responses ‘c’ and ‘d’ together account for 57%, and students presumably under-report their own lack of effort, so allowing for that bias I suspect roughly 60% of the students didn’t take the quizzes seriously, which is a problem.

What I’ve decided to do is raise the bar a bit. For this coming semester, I will give credit for a quiz only if the score is 70% or higher. We’ll see whether that makes the quizzes a more useful assessment while still keeping the stakes low.

This entry was posted in The Experiment.

2 Responses to More on metacognition and assessment

  1. terry dolson says:

    This is really interesting. I agree with your analysis that focuses on the “but it didn’t bother me” response. So, it seems that you are considering making the grades count so that it will “bother” them (giving the testing some stakes). Have you considered going a metacognitive route? If they get less than an “A”, they could write out the following for the questions they got wrong: Here is where I went wrong when solving the problem…to get the right answer…
    In other words, get them to see their mistake or misunderstanding and correct it. That makes for powerful learning.
    Just a thought. I admire the way you reflect on your teaching strategies : )

  2. dispersemos says:

    I, too, appreciate the reflection on this kind of assessment. My parallel attempt at the same kind of assessment has been with on-line exercises and quizzes for beginning language students. The exercises, provided by the textbook publisher, are brief, easy to use, and provide immediate feedback. They do count for some of the students’ grade, but not too much. I’ve tried to keep students accountable and yet keep this kind of assessment in appropriate perspective given course goals.

    But I’ve found similar problems. Students often stop taking them seriously. The questions and format mostly don’t match up with what I assess on in-class exams. And I’m not convinced that the kind of feedback students receive on-line is the kind that promotes learning. Mostly they are told that an answer, or part of an answer, is incorrect, and then they have one or two chances to fix it. Students complain that this isn’t enough; they need more specific feedback about why the answer is incorrect before they can decide how to fix it. [Referring back to the textbook is apparently not a logical solution for my students.]

    My conclusion has been that since the on-line exercises don’t match up with summative assessments in my class, I should move toward different forms of formative assessment. It seems that the students who struggle to learn new aspects of language are also the ones most frustrated by the on-line practice exercises. Also, those who don’t learn well independently (working alone with on-line assessments) seem not to benefit and are easily frustrated.

    Thanks for the post.
