Leaving the second exam, several students independently made essentially the same point to me: that they "studied the wrong stuff." One student said, "I remember coming across the stuff on the test, but I gave it lower priority than other material we studied." Another noted that the first exam was drawn primarily from material in the text while this one was primarily from the notes. So he mis-studied! (His words, not mine.) I was struck by the commonality of these comments, so to explore this issue further, I emailed the class and asked if they agreed with that sentiment. I received responses from 23 students (16 from the early class; 7 from the late class), roughly 45% of my total students in the course.
The actual responses are here.
Summary of Responses:
• 10 students agreed with the sentiment.
• 5 students complained that the exam included questions on material that wasn't covered in lectures or texts. (N.B. Some of these critical comments regarded material from the Wall Street Journal, which apparently students didn't count as part of the text material.) This involved only 2 of the 30 questions.
• 7 students disagreed with the sentiment, responding generally favorably to the exam.
• 2 students had general complaints about the exam.
At one level, the students are right. For the material on the second exam, our coverage differed substantially from the book: first, because we did things in a different order (e.g. we covered the role of money in the economy immediately after Supply & Demand, rather than much later in the course as a preliminary to the discussion of monetary policy) or in a different degree of detail (e.g. our explicit discussion of the role of government in the economy, not merely in the context of fiscal policy); and second, because we replaced the AD/AS model, which was prominent in the book, with the Income-Expenditure model, which was relegated to an appendix in the book. Despite all my warnings to this effect, it was apparently still a surprise to many students.
This highlights to me the importance of the text to the students. You may recall that the first week of class I very clearly told the students that they would need to read the text since I wouldn't lecture on material that was explained there. (What a difference from my past practice of not really caring which book I assigned since I lectured on all the important material.) When the text diverged from what was covered in class, students didn't seem to be able to make the switch, even when told to do so.
This suggests tactical behavior on the part of students (looking at the last exam to prepare for the next), rather than trust in what I tell them. I wonder how many of these students completed the meta activities? Cross-referencing the names of the respondents with their meta submissions indicates that the 10 students who agreed had completed, on average, only 2.3 of the 8 meta activities assigned to this point in the class.
As I tried to make clear all semester long, the meta activities were designed to answer the question of what is important to learn. Why didn't more students do the meta assignments? Was it perhaps because the metas are hard work? Did they possibly not believe that the metas would solve this problem? Did they rationally decide that the effort wasn't worth the expected benefit? How often do students ignore instructors' guidance? Are they right to do so? Do they have a different objective (e.g. satisficing behavior rather than maximizing learning) than we think they do or should?