An Internet Resources Review
There is a lot of exciting activity around the Experience API this year, and our prediction for 2013 is starting to ring true: this new open standard for eLearning interoperability is certainly gaining steam. Colloquially known as Tin Can, the standard makes big promises for the mobility and flexibility of data gathering and analysis. With those big promises, however, come big challenges.
What lies underneath the fluff and sparkle of Tin Can’s promises? What is noteworthy in terms of achievements so far, and what is still floating around with no answers yet? We dove head-first into the “junk,” the vast array of resources and reviews, looking for treasures. The search yielded some great up-to-date information along with lingering concerns about Tin Can.
What is Experience API (Tin Can) hoping to do?
Mainly, it hopes to appeal to the eLearning masses by offering a simpler, cleaner, and more thorough record of learning activities, both formal and informal. SCORM, the standard Tin Can grew out of, still reigns as the most widely used standard for publishing and sharing online educational content. Tin Can, however, offers the opportunity to do more, and to do it better:
- better portability for content and data
- better analytics of a user’s learning experiences
- more mobile and offline access for learning
- more tracking of real-world activities
- recording of both formal and informal learning activity
These promises have earned Tin Can attention from eLearning providers and application companies. Rustici Software coordinated the programming effort behind the standard and supports compliance with it. The company’s President, Mike Rustici, has high hopes for Tin Can’s potential to support “K-12, teachers, mobile developers, web developers, universities, government, education technology, MOOCs, games, and an array of real-world use cases we can’t even imagine yet.”
As a relatively new standard, however, it is still a long way from having the sheer number and variety of adopters that SCORM has, and Tin Can’s promises may be challenging to deliver this early in the process. Currently, some questions remain about Tin Can’s implementation and the implications it has for eLearning.
Why revisit this now?
Almost a year ago, Web Courseworks CEO Jon Aleckson interviewed Michael Rochelle of Brandon Hall and Aaron Silvers of ADL about their excitement over the Experience API/Tin Can. The news of an emerging standard that could improve upon and surpass SCORM’s functionality was intriguing to us as an eLearning company. However, at the start of 2013, our team was split on how effective Tin Can would be in delivering on its promises while balancing the costs of implementation.
Version 1.0 was officially released this past April, an exciting landmark for the early adopters who were involved in implementing early versions of the code and sharing their experiences. The Advanced Distributed Learning Initiative (ADL), Rustici Software, and a host of contributors from the eLearning community all played an integral role in producing, tweaking, and testing the new standard. This year has brought more examples and information to light thanks to the efforts of early adopters, but many questions remain that will be important for programmers, eLearning managers, and others to consider.
Here are four questions that help us get a picture of where things stand for the Experience API/Tin Can standard. Looking through current online literature and videos on Tin Can, the answers seem to be bubbling right below the surface. As adoption of the new standard continues, hopefully more information about these questions will be shared.
#1
How will Tin Can change learning design?
Since Tin Can’s goals support multiple formats of learning, how will best practices for eLearning instructional design be impacted? Epic Learning Group, an early adopter of Tin Can, believes that instructional designers will be free to “think creatively outside of what was previously possible with SCORM.” In theory, that does sound pretty great! But some underlying questions still need to be considered in terms of learning design.
“Real-world activities” can be tracked with Tin Can, which may lessen the amount of control an eLearning team will have over the design of the activity environment. Will internal learning design be drastically changed if most activities are external? For example, a content writer/designer may need to focus more on how to lead learners to different external resources and then back again to the module, rather than focusing on how to incorporate content into the module. This could include directing the user out to YouTube to watch a video, and tracking the user’s interactions on YouTube itself to view similar videos before the user returns to the content package. Additionally, the eLearning team has no control over how content is displayed on YouTube or any other external source. How will that impact the design of eLearning modules?
YouTube videos are a popular example of how Tin Can could track informal learning activities. However, efforts to “Tin Cannify” external content platforms such as YouTube are still underway in terms of coding and implementation. This reveals another challenge that could affect learning design, as well as programming efforts. At the very least, it may require the relationship between programmers and the instructional design team to change. Supporting this, eLearning enthusiasts David Kelly and Kevin Thorn note that most of the discussions on Tin Can so far are still very technical. They ask, “If the Experience API is the future of learning and performance, and it requires the ability to actually write code, how does it impact the vast majority of instructional designers who do not have coding skills?”
In a broader sense, some have asked whether the emphasis on tracking is gaining precedence over the emphasis on learning. Learning design should focus on the needs of the audience, rather than the needs of data collection. Will Tin Can strike that magical balance between the two? Are we “obsessing over the ability to track everything we learn,” as eLearning blogger Mark Aberdour asks, or will this truly lead to a “future of personalised, adaptive, just-in-time learning” as promised?
#2
How will learning activity statements be triggered and reported?
Tin Can reports activities in the form of statements, using the sequence “Actor verb activity/object.” For example, “Tricia watched a YouTube video.” Or, “Dan attempted Module 1.” How will these statements be chosen? It appears that the eLearning company will need to make that decision and do the appropriate programming as requested by each client. Rustici has created a good resource on getting started with Tin Can statements, and ADL has a base set of recommended verbs to use in these statements. However, depending on the needs of the company or client, customizations could build up quickly.
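For a concrete sense of what this looks like under the hood, here is a minimal sketch, in TypeScript, of the “Dan attempted Module 1” example expressed as a statement and posted to an LRS over the xAPI REST interface. The endpoint URL, credentials, and activity ID are placeholders; the verb IRI comes from ADL’s recommended list mentioned above.

```typescript
// A minimal sketch of "Dan attempted Module 1" as an xAPI statement,
// posted to the LRS's statements endpoint. The endpoint URL, credentials,
// and activity ID are placeholders, not real services.
const endpoint = "https://lrs.example.com/xapi";

const statement = {
  actor: {
    objectType: "Agent",
    name: "Dan",
    // Actors are identified by an inverse functional identifier such as mbox
    mbox: "mailto:dan@example.com",
  },
  verb: {
    // Verb IDs are IRIs; this one is from ADL's recommended verb list
    id: "http://adlnet.gov/expapi/verbs/attempted",
    display: { "en-US": "attempted" },
  },
  object: {
    objectType: "Activity",
    // Activity IDs are IRIs chosen by the content owner
    id: "https://example.com/courses/module-1",
    definition: { name: { "en-US": "Module 1" } },
  },
};

async function sendStatement(): Promise<void> {
  const response = await fetch(`${endpoint}/statements`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.0", // required header in the 1.0 spec
      Authorization: "Basic " + btoa("key:secret"), // placeholder credentials
    },
    body: JSON.stringify(statement),
  });
  if (!response.ok) {
    throw new Error(`LRS rejected statement: ${response.status}`);
  }
}

sendStatement();
```

Every custom statement a company designs is ultimately a variation on this same actor-verb-object JSON, which is why the choice and wording of statements matters so much.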
To compile all these statements and produce reports, Tin Can uses a Learning Records Store (LRS). Tin Can operates on the belief that the LRS will create a “much more accurate picture of learners.” The LRS is flexible in that it can be connected to an LMS or another reporting tool, but the amount of programming needed to do this will vary greatly.

[Image: an example of how an LRS might interact with external activities.]
Additionally, translating statements into computable and comparable data about learners is far from a light undertaking for an eLearning company choosing to transition to an LRS. Tin Can’s developers assert that statements can be as complex as needed, but complex statements also require the LRS’s reporting features to be set up correctly. Here again, the choice and wording of statements will be key. Developers will undoubtedly need to work closely with content writers and with the client to determine how learning activity reports are triggered, what format the statements will take, and what recording purposes the client needs served.
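As a rough illustration of what even basic reporting involves, the sketch below pulls recent statements back out of the same hypothetical LRS (the xAPI Statement API supports GET with filters such as since and limit) and tallies them by verb. Real reporting tools do far more, but this query interface is the raw material they work with.

```typescript
// A sketch of rudimentary reporting: fetch recent statements from the LRS
// and tally them by verb IRI. Endpoint and credentials are placeholders.
const endpoint = "https://lrs.example.com/xapi";

interface Statement {
  verb: { id: string; display?: Record<string, string> };
}

async function tallyVerbsSince(since: string): Promise<Map<string, number>> {
  const response = await fetch(
    `${endpoint}/statements?since=${encodeURIComponent(since)}&limit=100`,
    {
      headers: {
        "X-Experience-API-Version": "1.0.0",
        Authorization: "Basic " + btoa("key:secret"), // placeholder credentials
      },
    }
  );
  // The Statement API returns a StatementResult: { statements: [...], more: "..." }
  const result: { statements: Statement[] } = await response.json();

  const counts = new Map<string, number>();
  for (const s of result.statements) {
    counts.set(s.verb.id, (counts.get(s.verb.id) ?? 0) + 1);
  }
  return counts;
}

tallyVerbsSince("2013-01-01T00:00:00Z").then((counts) => console.log(counts));
```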
It’s also important to consider how the LRS provides information when learning is interrupted or experiences other errors. When users encounter errors with the software or websites they are attempting to use from the Tin Can content, it would be helpful if the activity statements reflected a disruption of some kind. Can statements be submitted to show this, or will there simply appear to be a gap in the user’s activity statements? How will the LRS process this information as a statement? It is unclear whether the LRS will be able to handle such requests, or whether errors will remain a difficulty for eLearning companies to track.
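The specification does offer at least some vocabulary here: ADL’s recommended verbs include “suspended” and “terminated,” so a launch wrapper that detects a disruption could, in principle, send something like the statement sketched below. Whether a given LRS surfaces such statements usefully in its reports is a separate, open question, and the error extension key here is purely illustrative.

```typescript
// A hedged sketch of a statement a launch wrapper might send when it
// detects that a session was cut short. The activity ID and the extension
// key are placeholders, not registered identifiers.
const interruptionStatement = {
  actor: { objectType: "Agent", name: "Dan", mbox: "mailto:dan@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/suspended", // an ADL recommended verb
    display: { "en-US": "suspended" },
  },
  object: {
    objectType: "Activity",
    id: "https://example.com/courses/module-1",
  },
  result: {
    completion: false,
    // Result extensions carry arbitrary IRI-keyed details; this key is
    // illustrative only
    extensions: {
      "https://example.com/xapi/extensions/error": "network-timeout",
    },
  },
};
```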
#3
What implications does Tin Can’s self-reported learning aspect have for eLearning?
As part of the “informal learning” that Tin Can can track, the learner could have the capability to submit self-generated statements to the LRS about activities they did. The base code for allowing this functionality is seen in the Bookmarklet tool, described here. Essentially, a Tin Can-friendly button is added to a web browser toolbar and can be clicked to report that a user accessed a website. There are clear limitations to this feature currently, to be balanced against its potential advantages.
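The core of such a tool is small. As a rough sketch (the actual Bookmarklet’s details may differ), a self-report button might send a statement built from the current page’s URL and title, as below; the learner identity, verb choice, and LRS endpoint are all assumptions.

```typescript
// A rough sketch of what a self-report bookmarklet might send when clicked
// on an arbitrary web page. Runs in the browser; all identifiers are
// placeholders.
async function reportCurrentPage(): Promise<void> {
  const statement = {
    actor: { objectType: "Agent", mbox: "mailto:learner@example.com" },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/experienced", // an ADL recommended verb
      display: { "en-US": "experienced" },
    },
    object: {
      objectType: "Activity",
      id: window.location.href, // the page the learner is reporting
      definition: { name: { "en-US": document.title } },
    },
  };

  await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.0",
      Authorization: "Basic " + btoa("key:secret"), // placeholder credentials
    },
    body: JSON.stringify(statement),
  });
}
```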
One example of self-reported learning could be when a user scans the barcode of a library book they are checking out to expand their knowledge of content discussed in a corresponding module.
How can this best be set up so that these self-reported learning activities are guaranteed to be tied to the content of the Tin Can package? What options will developers need to pursue for verifying these submissions? Will different verbiage for statements be allowed, and if so, will learners be able to choose whether to record their activity as “learned,” “explored,” “discussed,” and so on?
An interesting blog post by behavioral scientist Eric Fox further examines the intriguing implications of Tin Can’s self-reported learning options. He points out that there is little-to-no accountability for this type of activity reporting. Did the student read whatever was on the webpage? Was it actually the student who accessed the page? Why were they on that webpage? There are a lot of unknowns with self-reporting. Fox worries that as Tin Can usage increases and people become more aware of how the system works, some learners may try to “game the system,” especially if it benefits them in a lasting way. He offers some interesting, if somewhat cynical, examples to illustrate his point. Essentially, though, his concern is that self-report data provided to an LRS should be interpreted cautiously and clearly labeled; otherwise it could overwhelm and potentially pollute the rest of the data.
Mike Rustici offered one view of this situation, stating that “even with junk in the records, we can get an interesting picture of the learner.” A broader picture, as well. While this is true, if the door is opened to self-reported learning activities, programmers will need to determine a viable way to measure the accuracy of that data (if that is possible at all), and to allow for some filtering in reports. It is also unclear how easy it will be to remove these statements if they prove faulty after they have been transmitted to the LRS.
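One possible mitigation, sketched below under stated assumptions: if self-reported statements were tagged at submission time with a context extension (the key here is illustrative, not a registered one), reports could at least partition them away from instrumented data. As for removal, the 1.0 specification treats statements as immutable but does allow a faulty statement to be “voided” by a later statement that references it.

```typescript
// A hedged sketch of one filtering approach: tag self-reported statements
// with a context extension when they are submitted, then partition on that
// tag when reporting. The extension IRI is illustrative only.
const SELF_REPORT_KEY = "https://example.com/xapi/extensions/self-reported";

interface Statement {
  verb: { id: string };
  context?: { extensions?: Record<string, unknown> };
}

function partitionBySource(statements: Statement[]): {
  selfReported: Statement[];
  instrumented: Statement[];
} {
  return {
    selfReported: statements.filter(
      (s) => s.context?.extensions?.[SELF_REPORT_KEY] === true
    ),
    instrumented: statements.filter(
      (s) => s.context?.extensions?.[SELF_REPORT_KEY] !== true
    ),
  };
}
```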
#4
Where can examples of Tin Can be found?
As noted earlier, much of the discussion surrounding Tin Can has involved its technical specifications and what it is supposed to do. While adoption of the standard is beginning to spread, and base code is readily available, it is not as easy to view examples of the standard as implemented. This could present a hurdle for interested eLearning companies trying to determine whether the overall experience is truly what they would like to invest in at this time.
Generally speaking, finished examples that demonstrate how Tin Can content is created, integrated, and used to report learning activities do exist. However, most examples are piecemeal and require a great deal of research by the interested party to get a full picture of how the standard can be implemented.
Overviews of Tin Can-supported efforts can be found, such as in this video presentation by Mike Rustici (see below also). Early adopters may offer more detailed examples through their websites, or may offer samples in webinars such as LectoraElearning’s video here. These samples do take digging to find, and some may not be available without contacting the companies directly. What is available does begin to show how other eLearning companies could customize Tin Can to their needs. However, specific examples obviously include unique features of the authoring software and content services (such as an LMS) used to create and host the Tin Can package. Additionally, not all examples may show both internal and external content that is tracked with Tin Can. The overviews of these examples will still be very intriguing to consider as a starting point!
On a more general level, Rustici and other companies such as vTraining Room offer tips on how to use Tin Can, how to access free or extended trials, and where to find “sandbox” environments that encourage programmers to play with Tin Can’s code. For example, Rustici created a public LRS to demonstrate how statements can be generated: anyone can use the Tin Can Bookmarklet tool to generate statements, which can then be viewed in the public LRS. All prototypes and examples can be accessed here for those who are curious.
Conclusion
There are a great many other questions regarding how the Experience API/Tin Can standard could be used to improve eLearning. Rustici and other developers and suppliers are attempting to answer them, but in this initial phase of adoption it should not be surprising that some questions are left unanswered; this leaves open the possibility that improvements are in the mix for the next release. As more companies seek out this potential game-changer for eLearning opportunities, the standard’s applicability and functions can only improve!
Managing eLearning is written by the Blog team at Web Courseworks which includes Jon Aleckson and Meri Tunison. Ideas and concepts are originated and final copy reviewed by Jon Aleckson.