This is our most popular Managing eLearning post every year, and by now we know you want to see the hype curve first. Here it is:
Our 2018 eLearning predictions set in terms of Gartner’s hype cycle.
As always, we try to use this blog post to speak from our experience as a learning technology partner to organizations that provide continuing education and professional development. We use our knowledge of practical use cases to put the media and marketing hype in perspective. You’ll notice that right away below the fold.
Innovation Trigger

Distributed CE ledgers: AKA credit hours as cryptocurrency. Distributed ledgers, also known as the blockchain, power Bitcoin and other cryptocurrencies. A distributed ledger is essentially a record of transactions that is constantly copied across every computer in a peer-to-peer network. In combination with encryption, this ingeniously prevents counterfeiting and fraud. Cryptocurrencies are clearly at the peak of the general-purpose (non-eLearning) hype curve – a couple of thousand percent increase in value will do that. But the usefulness of distributed ledgers extends beyond financial transactions and currency speculation; the concept is also actively triggering innovation in healthcare IT.
In eLearning, we are at the very start of the hype curve with distributed ledgers: we can imagine the technology solving one of our recurring problems. To wit, we in online learning are often in the business of managing transactions. Not financial transactions – those get offloaded to a payment gateway. We manage continuing education (CE) credit transactions: a learner completes a CE activity and receives CE credit, or “credit hours,” in return. When a learner accumulates a sufficient balance of CE credits over a certain span of time, she achieves maintenance of certification.
As we know, these transactions are often managed in the most labor-intensive way possible. There are big problems with tracking and verifying a learner’s CE balance. Reporting CE issuance to accreditors is also a mess (are you familiar with PARS?). This state of affairs makes distributed CE ledgers look very appealing. First, as with Bitcoin, the distributed ledger method is an ideal way of tying credits to individual learners’ accounts, reducing the potential for counterfeiting credits. Just as importantly, ledgers transparently track the movement of credits from party to party – in our case, from CE provider to certificant to certifying board. Distributed ledgers mean that every account-holder in the network – i.e., every CE provider, certificant, and certification board – has a complete record of every transaction made using the currency. That shared record holds real promise for applications that could automate claiming credits, reporting individual learner credits to certifying boards, reporting aggregate credits from providers to accreditors, and periodically tallying learners’ credits.
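To make the idea concrete, here is a toy sketch in Python (not a production blockchain, and not any real CE system’s API – all names are hypothetical) of a hash-chained ledger of credit transactions. Each entry links to the hash of the previous entry, so tampering with any past record is detectable, and a certificant’s balance can be tallied directly from the chain:

```python
import hashlib
import json


def hash_block(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class CreditLedger:
    """A toy append-only ledger of CE credit transactions.

    Each entry records who issued credit to whom and stores the hash of
    the previous entry, so altering any past entry breaks the chain of
    hashes that follows it.
    """

    def __init__(self):
        self.chain = []

    def record(self, provider, learner, credits):
        """Append a credit-issuance transaction to the ledger."""
        block = {
            "provider": provider,
            "learner": learner,
            "credits": credits,
            "prev_hash": hash_block(self.chain[-1]) if self.chain else "0" * 64,
        }
        self.chain.append(block)
        return block

    def verify(self):
        """Return True if every entry still links to its predecessor."""
        return all(
            curr["prev_hash"] == hash_block(prev)
            for prev, curr in zip(self.chain, self.chain[1:])
        )

    def balance(self, learner):
        """Tally the credits issued to one learner."""
        return sum(b["credits"] for b in self.chain if b["learner"] == learner)
```

A real distributed ledger adds peer-to-peer replication and a consensus protocol on top of this chaining; the sketch only illustrates why tampering is detectable and why tallying a learner’s credits becomes a simple query over the shared record.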
Cutting back on the administrative headache of tracking credits for certificants is an imperative for the eLearning industry. There are other projects underway to solve it, including the MedBiquitous Activity Report standard and (in a sense) the Mozilla Open Badges project. As Mark Iafrate of Accredible, an evangelist for badges, has suggested, these standards could continue to exist in a layered system that rests atop distributed ledgers. The distributed ledger could serve the basic purpose of adding and subtracting credits from accounts in a secure way, while the other standards offer the metadata that is required by accreditors in order to verify educational quality – which brings us to our next topic.
Value-based accreditation: Continuing with the CE credit economy, we note that many CE providers are concerned with competition in the market, which we can describe in economic terms as deflation of the price of CE credit. Less abstractly, the CE business of non-profit associations is threatened by for-profit CE providers that undercut the associations’ prices. From a five-mile-high perspective, however, lower prices for education shouldn’t bother us (if you’re currently looking at shrinking CE revenues in your annual report, just try to bear with me for a second). What should bother us is the possibility of lower educational value being associated with the flood of cheap CE “products” into the market. A better system would (1) quantify the educational value provided by CE providers and (2) create a race to the top in which CE providers are awarded an allowance of credits to market in return for the value they create. By “throttling” the amount of credit available in the market, accreditors could ensure that the number of credits on the market doesn’t expand to infinity – driving deflation in the price – and could encourage increased educational quality. This is in essence what the ACCME seems to be driving at with its mandates for improved outcomes reporting in order to achieve coveted Accreditation with Commendation: the ACCME offers commendation “to encourage and reward accredited CME providers for implementing best practices in pedagogy, engagement, evaluation, and change management, and for focusing on generating meaningful outcomes.” It’s not hard to imagine the ACCME eventually extending these requirements to all CME providers, or putting restrictions on the quantity of credit that low-quality providers could award, thereby constituting a system of value-based accreditation.
EHR-Integrated Performance Improvement: Uncertainty around the U.S. Federal government’s policy on value-based care is surely putting a damper on the hype around quality improvement initiatives. Even so, we adjudge ourselves incrementally closer to the dream of electronic health record (EHR) data being incorporated into performance improvement projects, thanks to two phenomena we have observed this year: (1) the continued formalization of healthcare IT as an industry, with concomitant media and professional certification helping to tame the chaos and build capacity, and (2) the keen interest among CE providers in using performance data to drive targeted recommendations for learning activities. The data is there, and the demand is there; it’s just a matter of doing the work. We think it’s likely that we’ll see a successful pilot integration of EHR data with an external LMS in the next three years.
IOT Physical Simulators: The idea here is to build physical training devices that connect to online learning technology. An example would be getting your high-fidelity medical simulator to talk to your LMS and LRS. This idea is still incubating in most places in the industry, but we have seen some evidence of movement outside of our own pilots. The Army Research Laboratory in Orlando, Florida (the Silicon Valley of simulation), is, a source tells us, leaning toward specifying xAPI (see below) as the standard for LMS-simulator communication. If implemented, that will make a big difference. Separately, MedBiquitous’s Learning Experience Working Group has completed a draft of its first profile, Virtual Patients, and has started work on its second profile, titled Human Patient Simulators, Mannequins and Task Trainers. This profile will promote standardization in how simulators report over the Internet to databases. These standards are a necessary step toward implementation, and make us, as implementers, more confident that we’ll see greater application in the next three years or so.
Peak of Inflated Expectations
Artificial Intelligence and Predictive Modeling: As we’ve previously pointed out, it’s hard to define what AI is, and if you accept a wide definition of AI then it has been in use in eLearning for a while. But using widely accepted definitions, we’d say that where we see most of the application these days is on the marketing side of eLearning – targeting applications and the like. Higher ed and for-profit CE providers are doing it; most associations only aspire to it. We would hope that these marketing algorithms could inspire really valuable educational recommendations in the future, but before that happens there is a lot of work that needs to be done to improve interoperability between learning platforms, accreditation platforms, and identity management platforms, among other things. When we open that can of worms, we’ll know we’ve reached the trough of disillusionment.
Augmented and Virtual Reality: Everyone is appropriately excited about the idea of overlaying educational graphics on the real world (augmented reality) or creating truly immersive virtual worlds (virtual reality). The challenges of building these experiences have decreased as device manufacturers have provided application developers with new toolkits. Enough providers are now doing it, and talking about it, to put AR/VR at the peak of inflated expectations. It’s still in the realm of custom content development – not something you can assign to your staff instructional designers – but it’s getting cheaper to do. It’s a great attractor for learning experiences at conferences and multi-day live courses. As with physical simulation, we expect to see challenges around integrating the AR/VR experience into your learning infrastructure. And we wouldn’t expect virtual reality to be a popular on-demand product until the required devices become more widely owned. But nonetheless, the future looks rosy.
Trough of Disillusionment
Curation and Subscription Learning: Each year when we get to the trough, it’s always important to note that all good ideas go through the trough at some point. There’s just a human tendency to get excited about a good idea and view it as the solution to all your problems: “if all you have is a hammer, everything looks like a nail.” So we want to say that when it comes to curation and subscription learning, we’re all for it. But what we’ve seen in the market is that people view subscription pricing models as a solution to the problem of deflation in CE pricing (see the Value-based Accreditation entry, above, for more discussion of that issue). The theory is that people who do not see value in paying for an individual chunk of CE will see value in paying for access to a large library of online CE – like the Spotify model of online education. We like that idea, and we agree that you need to provide many different pricing models. We especially like the idea of curation: providing value by letting recognized experts and social networks recommend online learning created by other authors, much like the community playlists and celebrity playlists on Spotify.
But these pricing and bundling options are unlikely to singlehandedly make up for the gap in your revenue. Nor is a subscription likely to be very popular if you don’t actually have a large library of online CE. Your content still needs to be valuable enough to justify the cost. Finally, you might discover that your learning technology isn’t designed to support the subscription model – it kind of forces you into the on-demand model. For all these reasons, organizations are facing some disillusionment.
Unless you happen to have access to next-generation technology that makes it very easy to achieve the subscription or curation model, we’d recommend that only organizations that have already proven the value of their content through on-demand sales make the investment required to implement subscription learning. If you are making a profit off on-demand sales, offer the subscription package as an additional purchasing option to attract more learners. If you are just building your first set of online courses and implementing your first (traditional) LMS, it’s probably not the time to develop a 12-tiered system of subscription packages.
As for that other definition of subscription learning, spaced learning, we are, again, all for it. One case study a week paired with a multiple-choice question makes a ton of sense, perhaps even as a replacement for high-stakes certification. But if learners don’t see the value in spending time on your content, it’s not going to matter whether it’s packaged into fifteen minutes once a week or a three-day workshop. Think of those Google Alerts you excitedly set up in 2008 and haven’t read since 2009. If the learner is not actually opening the content, then you’re not providing a valued service; you’re just contributing to alert fatigue.
Slope of Enlightenment
xAPI: It is eerie how closely xAPI has followed the hype curve. In the early days, it had a different name (Tin Can) and people just knew it was a thing. At the peak of inflated expectations, xAPI was hyped as “next generation SCORM.” This was a pithy marketing pitch, but it was also (impressively!) simultaneously an overreach and an oversimplification: those who have tried to use xAPI to do simple SCORM stuff have been disappointed. For one thing, you need a whole new piece of software (an LRS) that is separate from your LMS and is going to cost you more money. And getting your LMS to talk to that LRS, a connection you probably assumed came standard, will also cost you money. Nor is it obvious how you identify the person who is doing the learning. People who learned all this for the first time in 2017 were wading in the mucky waters of the Trough of Disillusionment, trying not to get pulled under.
Down in the trough, though, you learn things: how there is real value in xAPI’s specification of a standard for logging learning activities in a database (the LRS, or learning record store), how the LRSes offer built-in data visualization tools and the capability for external tools to extract data for visualization and analysis, how the profile concept built into xAPI has provided a useful framework for organizations like MedBiquitous to construct vocabularies for educational uses. The real use cases for xAPI that we can see in the near term have less to do with SCORM-style strict monitoring of identity and completion and more to do with big data and analytics.
Moreover, the Department of Defense has recently given xAPI a shot in the arm with guidance that DoD learning should “Implement the xAPI specification to enable interoperable experience or performance-tracking capabilities, learning analytics, or data integration with multiple applications or systems.” That list of uses is apt: xAPI is the tool organizations should be using to study the efficacy, engagement, and value of online education. To reference earlier points in the hype curve, xAPI is what the ACCME should be using to analyze the aggregate educational value of CME activities. It’s notable, too, that the DoD guidance tells implementers to continue to use SCORM for asynchronous course tracking. In other words, we are in a period in which SCORM and xAPI will coexist. In 2018, we expect to see organizations using both SCORM and xAPI within the same learning activity for different purposes: SCORM for tracking completion on an individual level, and xAPI for deeper analysis and visualization of learning activity in the aggregate.
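For readers who haven’t seen one, an xAPI statement is just a small JSON document – an actor, a verb, and an object – POSTed to an LRS’s statements endpoint. Here is a minimal Python sketch; the LRS URL and credentials shown in the sending function are placeholders for whatever your LRS vendor provides, not a real service:

```python
import base64
import json
import urllib.request


def build_statement(actor_email, verb_id, verb_name, activity_id, activity_name):
    """Assemble a minimal xAPI statement: actor, verb, object."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }


def send_statement(lrs_url, username, password, statement):
    """POST a statement to the LRS's statements resource over Basic auth."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{lrs_url}/statements",
        data=json.dumps(statement).encode(),
        headers={
            "Content-Type": "application/json",
            # Version header required by the xAPI specification
            "X-Experience-API-Version": "1.0.3",
            "Authorization": f"Basic {credentials}",
        },
    )
    return urllib.request.urlopen(req)  # the LRS returns the statement ID(s)
```

A CE provider might call `build_statement("learner@example.org", "http://adlnet.gov/expapi/verbs/completed", "completed", "https://example.org/activities/cme-101", "CME Module 101")` and hand the result to `send_statement` pointed at its LRS. Note how the verb and activity are identified by IRIs; this is where profiles like the MedBiquitous work mentioned above come in, by standardizing the vocabulary so that statements from different systems can be analyzed together.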
Gamification: When we say game-based learning, we mean learning activities that are games. When we say gamification, we mean the application of gaming exostructure, like points, levels, and badges, to learning. To the cognoscenti, they’re two different things, but for the purposes of this blog post, and to save space on the accompanying graphic, we’re lumping them together. Both are on the slope of enlightenment. In game-based learning, mini-game authoring tools like H5P have made simple learning games like memory games and flashcards even more accessible: you can author them directly in your LMS or CMS. Gamelike branching simulations can be authored in a slick graphical user interface using BranchTrack, or you can delve into advanced functionality and extensibility with the venerable open-source OpenLabyrinth, which now offers a new version. Corporate training attempts at gamification are still the worst thing ever, but board-prep apps that help learners study for high-stakes certification exams using adaptive quiz algorithms are surprisingly gamelike and very popular with highly paid professionals.
Plateau of Productivity
MOOCs: We will go against the grain a bit here and argue that MOOCs AREN’T DEAD. Our evidence? All the MOOCs out there. And all the quasi-MOOCs in higher education and CE: the cheap online master’s degrees that seemingly every big-name university is now offering. These online courses might be less massive and open than a purist’s MOOC, but there’s no question that the universities experimented with the Coursera model and learned from it, creating a new mode of education that meets keen demand among learners for credentials they can earn while working. Associations and other CE providers have been slower to learn the lessons of this mostly academic model, but certain design principles popularized (if not invented) by MOOC platforms – putting the faculty (instead of the faculty’s PowerPoint) on screen, interactive video, peer assessment, synchronous deadlines with asynchronous time commitments – are now informing the best continuing education providers.
A couple of years ago, I started noticing people in higher education referring to any old online course as a “MOOC.” As an online learning geek, I found that bothersome, and the same semantic drift has thrown off many informed commentators: the meaning of “MOOC” has shifted to become synonymous with online education in the minds of many journalists, academics, and students. When the MOOC hype started around 2012, we eLearning analysts knew there already was an online education industry, and some aspects of MOOC technology and instructional design were non-groundbreaking from our informed perspective. We thought what was revolutionary about the MOOC was the Massiveness and the Openness. It turns out, though, that what was groundbreaking from the perspective of others was the Onlineness and the Courseness: the fact that MOOCs were higher-ed style courses from household-name universities executed rather nicely in one’s web browser. The new class of online master’s degrees is not open, but it is more open: freed from the traditional barriers of paying full tuition to show up physically at a certain location (the campus) at a certain time. These are either MOOCs or quasi-MOOCs, and if you can accept that shift in meaning, then MOOCs are thriving on the plateau of productivity.
That concludes the set of technologies, models, and ideas we’ve chosen to highlight this year. We could say more, but so much has already been said. Want to hold us to our past predictions? Here are the last three years:
Ask us more about this year’s eLearning predictions at the Alliance 2018 Annual Conference, January 20th-23rd. Visit us at booth 706 or contact us below to meet up for hot chocolate or coffee!