The Do’s and Don’ts for Winning a Learning Impact Award: Insights and Advice from a Long-time Judge
The Learning Impact program was introduced by 1EdTech in 2007. Of course, at that time, we were IMS Global Learning Consortium. The program, which includes the annual Conference and Awards competition, aims to recognize innovative and influential uses of technology to support learning and teaching worldwide. Through it, we seek to identify the repeatable use of teaching and learning technology to help private and public institutions and educational authorities to:
- Create personalized learning
- Improve student engagement and experiences
- Promote actionable assessment
- Advance edtech ecosystem evolution
The program's emphasis is on...drumroll, please...improving learning impact. The focus is not on just using technology in learning. It is not even about using interoperability standards to get better information exchange. It is about improving learning and teaching experiences through the use of technology.
The Learning Impact Awards process consists of three stages. The first is the initial submission through the online form.
Step 1: Application
The award nomination consists of a descriptive title, a brief (<1,000 characters) overview of the project, and, most importantly, detailed descriptions (<2,500 characters each) of the impact on Personalized Learning, Institutional Performance, and the Digital Learning Ecosystem. Applicants must also categorize the maturity of the activity (Research, New, or Established) and its scope (International, Countrywide, System/School District, Institution/School, Departmental, or Workforce/Education-to-workforce). It doesn't matter who submits the nomination, but the project must involve both a hosting user institution and the solution supplier.
Step 2: Supporting Materials
The finalists—typically 30-40 organizations—are selected from the online nominations, plus the winners (top submissions) from regional competitions such as the one hosted by the 1EdTech/IMS Japan Society. The 1EdTech team selects finalists against criteria designed to identify exemplary implementations of technology that demonstrate the greatest impact, or potential impact, on the challenges facing the global learning sector and the greatest potential for generating a positive return on investment. Most applications rejected at this stage fail because they are just a sales pitch for a particular product. While most submissions come from North American organizations, a growing number are received from the rest of the world. The second stage consists of:
- A one-page document in English outlining the challenge, solution, learning impact outcomes, and return on investment
- A four-minute or less video pitch about the project
The project materials are made available to the judges for review (and later posted on 1EdTech's website for everyone to access). The session with the judging panel is the third and final step.
Step 3: Presentation
- A ten-minute live presentation to the expert judges split into five minutes of demonstration/visuals and five minutes of question and answer. Each of the five-minute periods is rigorously enforced.
This year, the final presentations to the judges will be on Monday, June 5, at the start of the Learning Impact Conference in Anaheim, California.
High-quality information is important in all three phases. A good submission provides complementary information across them rather than repeating the same material.
The Learning Impact Award judges are volunteers from the 1EdTech Contributing Members institutions and staff, mostly from North America. 1EdTech Members are from 28 different countries, so more than 20% of our membership is outside North America. Around the world, there are different perspectives on Learning Impact, and so we need more volunteers¹ from our non-North American institutional members.
There is no limit on the number of years someone can be a judge. I have been one since 2007, but most people do it for 1-2 years. Being a judge takes approximately 2-3 days of work over a couple of months (including the intensive day of the presentations). Depending on their preferences, judges are allocated to either the Established or the New/Research categories. We would like a balance of judges from higher education/K-12/corporate training, but we have to use those who volunteer. Typically, each judge gets 15-20 entries to evaluate (we try to keep the number of successful submissions in each category balanced).
Each judge has their own style of preparation, but everyone has done the background work before the day of the presentations. My approach is first to read all the online submissions to get a feel for the entire set I'm judging. I then go through each submission, reading the one-page document and watching the accompanying video. Only then do I go back and allocate my initial scores. Each judge must score each submission on the evidence for the Impact on Personalized Learning, Impact on Institutional Performance, and Impact on the Digital Learning Ecosystem. We use a scale of 1-5, with 5 being the top mark. The final step is to listen to the presentations and raise any questions. After each presentation, I amend my scores; it is not unusual for at least one score to change. Each judge scores independently, and there is no discussion between the judges about their scores.
Scores and Awards
The final scores are produced by averaging the judges' scores. One final tweak: we also give everyone the opportunity to rate the submissions publicly (although public voters do not attend or see recordings of the presentations and Q&A). The public votes are averaged and treated as the score of one additional judge. We then take the ordered list and name two winners at each award level: platinum, gold, silver, and bronze.
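As a back-of-the-envelope illustration, the averaging scheme just described might look like the following. This is a hypothetical sketch, not 1EdTech's actual tooling; the function name and the choice to average a judge's three criterion marks into one per-judge score are my assumptions.

```python
from statistics import mean

def final_score(judge_scores, public_votes):
    """Hypothetical sketch of the averaging described above.

    judge_scores: one (personalized, institutional, ecosystem) triple
                  of 1-5 marks per judge.
    public_votes: flat list of 1-5 public ratings; their average is
                  treated as the score of one additional judge.
    """
    # Assumption: the three criteria are weighted equally per judge.
    per_judge = [mean(triple) for triple in judge_scores]
    if public_votes:
        per_judge.append(mean(public_votes))
    return mean(per_judge)

# Two judges plus a round of public voting:
score = final_score([(5, 4, 4), (4, 4, 3)], [5, 4, 4, 5])
```

Submissions would then simply be sorted by this score to produce the ordered list from which the award levels are assigned.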
There is no balancing of winners with respect to the types or maturity levels of the project categories. Occasionally, but rarely, when the scores are very close, we can have three winners in one award grouping. Winners are announced during one of the general sessions at the Learning Impact conference and in a public press release.
What Makes a Winner
The five features that are most common in platinum, gold, silver, or bronze awards winners are:
- Provision of clear, strong evidence that supports the claims made in the application for the improvement in learning impact. Objective evidence is better than subjective opinion. Facts and figures are more important than verbal supporting statements.
- There are clear and significant benefits for each of the Personalized Learning, Institutional Performance, and Digital Learning Ecosystem perspectives. That does not mean the benefits must be equal across the three perspectives.
- The information presented in the three formats focuses on the improvement in learning impact. It is clear how the new technology is being used to overcome a weakness or provide a teaching and learning capability that was otherwise missing.
- Solutions make use of the relevant technical standards and specifications. This includes, where appropriate, the 1EdTech specifications (it is a "horses for courses" approach, but the judges are skeptical of any solution that uses no technology standards or specifications). Technology solutions that aim to provide a total edtech solution invariably do badly. The aim should be to enable combinations of innovative solutions and strategies and to avoid 'lock-in' to any product and/or vendor.
- There is a clear and strong working relationship between the submitting supplier and the institution. Furthermore, the more submissions an organization makes to the same Learning Impact Award competition, the less successful the outcomes. Organizations with similar submissions over multiple years are also less successful. Quality, focus, and clarity are key.
What to Avoid
The five most common mistakes, in order of significance, made in submissions that are not platinum, gold, silver, or bronze award winners are:
- The improvement in learning impact is unclear. Remember, this is an award for learning impact, so be very clear on how learning and teaching are improved. This is not just about the use of educational technology; it is about using that technology to improve teaching and learning experiences.
- There is a lack of information about, or participation from, the institution that benefits from the learning impact (particularly during the presentation and Q&A session). This is supposed to be a joint submission from the supplier and the institution using the technology.
- Insufficient context is supplied. This is very important for non-North American submissions. What may be mundane in North America (and remember, most judges are North American) may be state-of-the-art elsewhere. It is the improvement in learning impact that matters. Do not assume the judges will appreciate the learning and teaching context in your country; they probably will not.
- The same information is repeated in the same way in each of the three submission formats (one-page document, video, and presentation). The video is a great way to demonstrate key activities; avoid too many talking-head clips. The one-pager and presentation are great for summarizing facts and figures.
- The timing of the presentation goes awry, and key points are not covered before the end of the strict five minutes. Present the key points early so that nothing important is lost if you run out of time. Of course, rehearsing the five minutes is important.
Like most successful endeavors, early planning is important. When preparing your submission, make sure you provide:
- Evidence, as much as possible, for the claimed improvement in learning impact
- Information for all three evaluation criteria (Personalized Learning, Institutional Performance, and Digital Learning Ecosystem)
- Evidence of strong collaboration between the solution supplier and the hosting institution
- Complementary styles of information using the different submission formats
The nomination period for the 2023 Learning Impact Awards begins on January 31 and ends on February 28. Click here to submit. Good luck!
¹If you are from a 1EdTech Contributing Member district, state, or higher ed institution and would like to volunteer to be a judge at an upcoming competition, contact Cara Jenkins at email@example.com.