Learning Analytics at the 2012 ELI Meetings

One of the themes of this year’s ELI conference was Learning Analytics, a movement that comes at least in part from the practices of for-profit universities. A good introduction to the topic is ELI’s 7 Things You Should Know about First Generation Learning Analytics.

Learning Analytics starts with the premise that student engagement in a course is highly correlated with student learning. It applies powerful software to large datasets, combining the background characteristics students bring to higher education with data on their participation in course work (as captured by the institution’s learning management system) and measures of their academic outcomes. The idea is to predict when students are on a path likely to lead to failure and to communicate that information to the individual students, their faculty advisers, and so on. A good example of Learning Analytics is the Signals Project at Purdue University, led by John Campbell. Signals periodically sends students a “traffic light” indicating that they are doing well academically (a green light), that they might need to reconsider their study practices (a yellow light), or that they need to make a change to avoid failure (a red light). This is data mining to the nth degree, and to some it raises the specter of “big brother.” Learning Analytics may be gross (in more ways than one), but we shouldn’t forget that at this point it is only in its infancy.
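
To make the mechanics a little more concrete, here is a minimal sketch of how an early-warning system of this kind might blend background data, LMS activity, and grades into a traffic-light signal. The column names, weights, and thresholds below are invented for illustration; this is not Purdue’s actual Signals model, just a toy version of the general idea.

```python
# Toy, hypothetical illustration of a Signals-style early-warning score.
# All weights and cutoffs are made up for this sketch.

def risk_signal(gpa_so_far, lms_logins_per_week, assignments_submitted,
                assignments_due, prior_prep_score):
    """Return 'green', 'yellow', or 'red' for one student in one course."""
    completion_rate = (assignments_submitted / assignments_due
                       if assignments_due else 1.0)

    # Weighted blend of outcomes so far, LMS engagement, and background preparation.
    score = (0.5 * (gpa_so_far / 4.0)
             + 0.3 * min(lms_logins_per_week / 5.0, 1.0)
             + 0.1 * completion_rate
             + 0.1 * prior_prep_score)   # prior_prep_score assumed to be in [0, 1]

    if score >= 0.7:
        return "green"    # doing well academically
    elif score >= 0.4:
        return "yellow"   # might need to reconsider study practices
    else:
        return "red"      # change needed to avoid failure


# Example: a student with a weak GPA who rarely logs in to the LMS.
print(risk_signal(gpa_so_far=2.1, lms_logins_per_week=1,
                  assignments_submitted=3, assignments_due=8,
                  prior_prep_score=0.4))   # prints "yellow" with these illustrative weights
```

Even in this toy form, the sketch shows where the worries discussed below come from: the score only sees what flows through the LMS and the gradebook, so anything that happens outside those systems is invisible to it.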

For me, the most interesting part of the Learning Analytics panel was a point made by Randy Bass of Georgetown, who noted: “The current enthusiasm for Learning Analytics is a presenting symptom of higher education’s failure to respond to research on teaching and learning over the last 40 years.”

Bass pointed out that the Scholarship of Teaching & Learning might be viewed as the “Slow Analytics Movement,” in contrast to the current Learning Analytics. He then posed some important questions: In recent years, we have discovered the educational value of the so-called “High Impact (Learning) Practices.” Why, then, are the results of High Impact Practices (as well as the NSSE) invisible to Learning Analytics? How can we measure experiential learning, or the effects of seminars and other practices that do not have economies of scale? Important questions!

This conversation made me wonder whether a system like Purdue’s could actually make students less reflective and metacognitive about their learning, analogous to how I’ve found that software like Pearson’s MyEconLab or Cengage’s Aplia makes students “stupid” when they don’t bother to do the reading or homework until the software tells them to. I hope not, but I wonder. Additionally, the system only works if faculty fully use the LMS. In other words, if readings and assignments are not on the LMS, student activity in those areas won’t be captured and the system will give false readings. I also wonder when students will start to try to “game the system” to get credit for clicks when they’re not actually doing the work. Lots to think about.

Photo credits:

  • http://farm1.staticflickr.com/69/229822320_403849c7b4_d.jpg
  • http://farm5.staticflickr.com/4019/4272283260_7a6d3958e6_d.jpg