Postmortem on my experiments with teaching this year.

At the end of the school year, about a month ago, I was pretty unhappy with my teaching experiments this year.

In my intro course, the metacognition experiment didn't seem to go very well this year. Part of it was me: the FSEM was taking so much of my time and energy that I wasn't able to devote as much to the intro course. But the students also didn't seem to take the meta assignments seriously. It made me wonder whether freshmen and sophomores perceived the metacognition instruction as out of context. I explained carefully what I was trying to teach them: that experts learn differently than novices, and that here was a framework they could use to learn economics the way economists see it. But most students seemed to find that explanation abstract and unrelated to the content of the course; it was beyond their ken. Clearly, teaching novices to learn like experts is a challenge.

Another part of my discomfort came from my insistence on not curving the test grades this year, which was meant to provide a clear incentive for using the metacognitive framework. The students ended up doing poorly in terms of grades. Unfortunately, doing badly on the exams didn't seem to make them any more inclined to do the meta activities. A contributing problem is that the first exam tends to be fairly easy, since the material includes a great deal of common knowledge. That sent a false signal going into the second exam, which was much more difficult, as was the final. With only two midterms and a final, students apparently weren't getting enough timely feedback on their learning.

What I did learn is that providing regular formative feedback, in the form of automated quizzes, is a good thing. The challenge is finding quiz questions that adequately represent the ones I use on exams. Perhaps the answer is to write my own questions, if I can then find a testing platform to host them.

The whole experience made me think carefully about the best way to teach introductory economics. I ended up wondering whether the writing-intensive method I used for years was more effective than any of my recent experimental approaches. The reason for that conclusion is that the writing assignments I gave required students to do economics, which learning strictly from the text and lectures does not.

As I discussed previously, my experimental approach to intermediate macro was largely a failure in that students didn’t learn as much as they did during my previous approach. At the same time, I learned a great deal that I will be able to apply to courses in the future.

I also learned a lot about social software and learning in the international finance seminar. I'm going to apply some of that to the FSEM next fall, e.g., group research presentations instead of a research paper.

At one level, my understanding of the best ways to teach my courses seems to be right back where I started several years ago. But then it occurred to me that I may have experienced what Jerome Bruner calls “the spiral of learning.” I may be back to the approaches I originally used, but my understanding of what makes them effective is much greater than it was when I started. Additionally, some of the details are quite different.

I guess that’s progress.


1 Response to Postmortem on my experiments with teaching this year.

  1. Angela says:

    I think it’s absolutely great that you’re willing to throw caution to the wind and experiment. Much you might throw out, but don’t throw the baby out with the bathwater!!
