Earlier this month, I previewed a talk at the Games+Learning+Society Conference 8.0 at the University of Wisconsin. Jody Clarke-Midura and Jennifer Groff have since given their much anticipated talk titled “Formal Game-Based Assessments: The Challenge and Opportunity of Building Next Generation Assessments” and it is now available to watch online here. I think they make some important and realistic points about the future of game design in education, especially when it comes to using games in educational testing.

Clarke-Midura and Groff laid out the pitfalls facing innovations in game-based assessments as well as their promise. They also provided two examples of current game-based assessments that blend the methodology necessary for a reliable assessment with the engaging, creative elements of game design that are just as important.

One prominent example presented was a game-based assessment that tested middle-school-aged children on critical thinking, research, and evidence collection. In the game world, students used an avatar to navigate a virtual environment and solve a problem: through research and evidence collection, they needed to figure out why a frog had mutated to have six legs. They could then be assessed based on their actions and conclusions in the virtual world.

Clarke-Midura and Groff don’t want actual games to be assessments, though. Instead, they want to take aspects of game design and incorporate them into building more effective assessments. That crossroads promises better assessments, but it also poses challenges for both game developers and psychometricians.

It may seem counterintuitive to draw from games—which are dynamic environments—to design assessments, which must be tightly controlled environments to ensure consistent standards. However, there are parts of game design that should be incorporated into tests, Clarke-Midura said, such as:

  • Clear goals
  • Freedom to experiment
  • Freedom of identity
  • Narrative
  • Agency
  • Interaction

There is some tension in actually incorporating game-based assessments in schools. Clarke-Midura explained that Race to the Top, a program put forth by the Obama administration to foster learning in K-12 public schools, has provided funding to schools, which in turn has generated more interest in game-based assessments. However, there has also been some hesitation: while teachers and administrators see the value of game-based assessment, they are reluctant to let it become mainstream and widely used, even though the assessments would work on existing technology.

But why make the switch from pencil-and-paper tests to game-based ones? Clarke-Midura said multiple-choice tests—which are widely used in Wisconsin to test fourth- and eighth-grade competency in many subject areas—show proficiency in facts, but not in reasoning, research, or critical thinking. A game-based assessment can capture actions in a virtual world to measure those cognitive cornerstones.

An Expert’s Opinion

Clark Aldrich

Clark Aldrich, author of the Complete Guide to Games and Simulations, is a thought leader on how scenario-based test questions could add to the validity of high-stakes tests like the SAT. I was very curious about his reaction to the GLS session on game-based assessments.

Click the play button below to listen to the interview.

Clark Aldrich Interview – Thoughts on Formal Game-Based Assessments

The traditional multiple-choice assessment has some flaws, and people are turning to computer game models not only to fill some of those gaps, Aldrich said, but also to test individuals in different professional and academic spheres on many complex topics that traditional tests cannot address.

“As far as game-based assessments go, the future is here, it’s just not evenly distributed. We’re already seeing assessment models that are pushing this boundary. But it brings up a lot of questions. In this example that we saw [in the presentation], one of their big takeaways is a computer game is necessarily a teaching mechanism. Inherent in almost any computer game design is learning. With this kind of academic assessment, you don’t have the option of teaching them anything. There’s no feedback. All the cues that we’re so used to aren’t there. It’s a whole new way of designing an interactive experience and has the ability to capture a lot of information,” he said.

“[We should move] toward this kind of assessment because we can do it now. It’s the simple reality of, if we can measure more kinds of things, measure them faster, come to conclusions faster and feedback the information on what we’ve learned faster, and at less cost, then we ought to do it. It simply makes sense to do. Simply putting it online has benefits, but online testing also has the potential to tap into more advanced assessments and applications, like the example in the presentation,” Aldrich said.

Aldrich said the presentation was important simply because people need to start thinking about game-based assessments and their possibilities, especially in these nascent stages of development. The efforts at game-based assessment indicate that there is a long road ahead before they can be all they can be; the first attempts, however, are nonetheless impressive.

The other opportunity is moving away from placing people on a bell curve when results are measured. Instead, the questions become: “What are you good at, what are you bad at, and what are you good at in ways other people are not?” Future analysis wouldn’t look at a percentile, but at what people are uniquely good at compared to others, and at how a customized curriculum could be designed, rather than at how students stack up against their peers.

Don’t miss Clark at the Serious Play Conference in Redmond, WA from August 21 to August 23, where dozens of speakers will talk about the future of gaming, education and industry.

Speaker Bios

Jody Clarke-Midura is a learning scientist at the Harvard Graduate School of Education, currently heading the Virtual Assessment Research Group. The group’s research focuses on designing and studying virtual assessments as a way to gauge critical thinking and inquiry in scientific disciplines.

Jennifer Groff is currently the Director of Learning and Program Development for the Learning Games Network. She has also worked and conducted research at the MIT Education Arcade and the Harvard Graduate School of Education. She holds multiple graduate degrees—one in educational technology, and another in neuroscience in education. Her research focuses on the intersection of education, technology, and design, and on how that intersection is evolving.

Managing eLearning is written by the blog team at Web Courseworks, which includes Jon Aleckson, Karissa Schuchardt, and Adelaide Blanchard. Ideas and concepts are originated, and final copy reviewed, by Jon Aleckson.