In a large university full of lecture classes, it can be hard to pinpoint the students who are falling through the cracks. Over the past several years, Purdue University has been experimenting with a data-driven way to find students who are at risk of dropping out, or whose struggles, in critical mass, might indicate which classes or majors have inadequate instructors. Administrators call it a “student success algorithm,” but its official name is Course Signals, and if it works, it could change the way modern universities are run.
Incorporating data-mining and analysis tools, Course Signals not only predicts how well students are likely to do in a particular class, but can also detect early warning signals for those who are struggling, enabling an intervention before problems reach a critical point.
Results so far are impressive. According to data released by Purdue last month, six-year graduation rates are up 21.48% since the project’s start, while grades for students who use Signals in two or more classes have improved significantly compared to those who don’t. So far, close to 24,000 students have been directly impacted by the project, with more than 145 Purdue instructors now using the algorithm in at least one of their courses. This semester a total of 100 courses are covered, with even more planned for next spring.
The question is: How do students feel about having their academic careers predicted for them? How would you feel if your next student advisor was an algorithm?
Unlike traditional means of quantifying student success, Purdue’s algorithm looks at more aspects of the learning experience than just the basics: it takes into account 20 different reference points, from standardized test scores and current grades to past academic history and engagement with software like the e-learning tool Blackboard Vista.
At its core, Course Signals is based on an observation that belongs more to the social sciences than to computer science. The so-called “Hawthorne Effect” states that people will improve or modify aspects of their behavior when they know that they are being studied. Translated to academia, the idea is that the more feedback students receive about their current standing, the higher the grade they will ultimately achieve.
Of course, a neat insight is nothing without the proper implementation. According to Matthew Pistilli, a research scientist in Academic Technologies at Purdue, getting the technology accepted by students came down to two things: understandability and access. To achieve the former, Pistilli decided to adopt the familiar metaphor of traffic-light signals to help contextualize a student’s success as they continue along a particular route.
“The algorithm divides students into three different risk groups: red, amber, and green,” he explains. A green signal indicates that students have a high likelihood of success as long as they continue working as they are at that moment. Amber means there are potential problems with success in a particular course, and red (meaning “stop”) indicates a high likelihood of failing.
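Purdue has not published the actual weights or cutoffs behind Course Signals, but the underlying idea can be sketched as a weighted score over a handful of indicators, mapped onto the traffic-light metaphor. Every feature, weight, and threshold below is an invented illustration, not the real model.

```python
# Illustrative sketch only: Course Signals' real weights and thresholds
# are not public. Every number here is a made-up assumption.

def risk_signal(grade_pct, lms_logins_per_week, prior_gpa, test_percentile):
    """Combine a few hypothetical indicators into a 0-1 risk score,
    then map it to the red/amber/green traffic-light metaphor."""
    # Normalize each indicator to 0-1, where 1 means higher risk.
    score = (
        0.40 * (1 - grade_pct / 100)                      # current course grade
        + 0.25 * (1 - min(lms_logins_per_week, 10) / 10)  # LMS engagement
        + 0.20 * (1 - prior_gpa / 4.0)                    # past academic history
        + 0.15 * (1 - test_percentile / 100)              # standardized test score
    )
    if score < 0.30:
        return "green"   # likely to succeed at current effort level
    elif score < 0.55:
        return "amber"   # potential problems in this course
    else:
        return "red"     # high likelihood of failing without intervention

print(risk_signal(grade_pct=92, lms_logins_per_week=8, prior_gpa=3.6, test_percentile=85))
```

The point of the sketch is the shape of the computation, not the numbers: many weak signals are blended into one score, and the score is deliberately coarsened into three bands that a student can understand at a glance.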
Once Course Signals has assigned a student a classification, a personalized message is generated using their name, their lecturer, and specific topical references (the latter to stop the message from looking too automated), then sent out by email. These messages don’t simply offer students predictions about their likelihood of eventual success (or failure), but also give tactical direction so that students can work to either maintain or improve their overall grades.
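The actual Course Signals templates are not public, but the personalization step amounts to filling in a per-signal message template with student-specific details. The template wording, names, and course details below are all invented for illustration.

```python
# Hypothetical sketch of the personalized-message step; the real
# Course Signals templates and wording are not public.

TEMPLATES = {
    "green": ("Hi {name}, you're on track in {course}. Keep up your current "
              "approach to {topic} and you're well positioned for the term."),
    "amber": ("Hi {name}, {instructor} noticed you may be slipping in {course}. "
              "Reviewing {topic} before the next quiz could get you back on track."),
    "red":   ("Hi {name}, your current standing in {course} puts you at risk of "
              "failing. Please see {instructor} in office hours to make a plan "
              "for {topic}."),
}

def build_message(signal, name, course, instructor, topic):
    """Fill a per-signal template with student-specific details so the
    email doesn't read as automated."""
    return TEMPLATES[signal].format(name=name, course=course,
                                    instructor=instructor, topic=topic)

print(build_message("amber", "Jordan", "CHM 115", "Dr. Lee", "stoichiometry"))
```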
The embryonic stages of Course Signals were created in 2007, based upon the research thesis of John Campbell, now associate provost for Information Technology and chief information officer at West Virginia University. Entitled “Utilizing Student Data Within the Course Management System to Determine Undergraduate Student Academic Success,” Campbell’s work laid out the weighted algorithm that now forms the basis for Course Signals.
Upon its initial integration into university life, the algorithm was executed manually using an Excel spreadsheet containing student data, although it is now fully automated owing to both the scale and complexity of the project. “The algorithm tends toward over-inclusion,” Pistilli notes. “I’d rather erroneously target a student who’s not at risk with a message that says, ‘Hey, you look like you might be struggling,’ than fail to tell a student who thinks they’re doing well that they’re actually doing poorly.”
It’s not just students who can benefit from Signals, either. The data-driven feedback loop can additionally help instructors to fine-tune their own practices. “We wanted faculty members to have a better understanding of how they are teaching,” continues Pistilli. “If a faculty member runs an intervention in Course Signals and it comes back as 80% red lights–and they thought the students had a better understanding of the material than that–this gives them the opportunity to go back and see where the problems are, and then have a conversation with students about the topics they don’t seem to have understood. It also means the professor can incorporate these insights from the start the next time they teach a particular course. It’s a data-driven approach to teaching.”
Oddly enough, students love the system. “Overwhelmingly, students tell us that they want Course Signals in every single class,” Pistilli says. “They crave the feedback. They like being told how they’re doing, and they like being told how they’re doing in a wider context. Here’s the thing: If you ask a student who’s not using Signals how they’re doing in a class, they can normally tell you whether they got an A or a B on the last test, but they show less of an understanding of the aggregate piece. What Course Signals does is quantify this context so students have a better overall understanding of where they currently stand. It can help set them up for life.”