
Collaborating on Generative AI to Help Improve Education

In this blog post, I share some thoughts on how our 1EdTech community can leverage collaboration to understand and harness the potential of generative AI. The community and others who wish to join this conversation will meet at 1EdTech’s annual Learning Impact Conference in Anaheim, June 5-8. If you are interested in this topic, I encourage you to join us there.

At a recent 1EdTech board meeting, there was a discussion about what 1EdTech’s “position” on the important topic of generative Artificial Intelligence (AI) might be. While AI is not a new topic in education, the relatively sudden general availability of ChatGPT, with its ability to generate credible human-like chat, essays, homework, and exam responses, has increased the urgency to understand its potential impact on education.

Many educational institutions are understandably taking steps to develop policies limiting the use of ChatGPT and similar technologies, and Turnitin is providing tools to detect AI-generated writing. However, leaders in the sector are also eager to understand how generative AI may be leveraged in educational settings. Every week there are numerous announcements from edtech suppliers touting AI-enhanced remediation, assessment, and content selection.

In a recent CNBC interview, legendary investor Warren Buffett (admittedly relying more on the expertise of tech titans than his own) summed up his opinion on generative AI/ChatGPT as “It’s extraordinary, but I don’t know if it’s beneficial.” Many others have noted the power of these tools to save time by searching and distilling the vast quantities of written material the technology can ingest. But many leaders are also calling for a pause to better understand how the algorithms are trained and where this technology may lead.

As someone with a deep background in the production use of AI, machine learning, and neural networks (from my days working on advanced computing technology in Silicon Valley), I believe there are good reasons for both the excitement and the calls for better understanding. Generative AI at the scale demonstrated by ChatGPT has the potential to enable profound improvements in productivity; perhaps the first of these will be the race toward a new generation of Internet search. But the internal workings of a neural network are not transparent to humans, and the models can only be as good as the data and heuristics used to train them. Indeed, generative AI’s “black box” architecture is harder to trust than the average edtech software. We have already seen significant headwinds for adaptive learning courses, particularly in higher education, where a lack of trust by faculty has curtailed widespread use. Yet the potential of generative AI also means that in scenarios with enough high-quality data focused on well-defined learning challenges, it could eventually become far better at recommending personalized learning paths or providing academic support than a single faculty member could be.

How might the 1EdTech community work together to build trust and accelerate the practical benefits of generative AI?

When it comes to a potentially transformational but risky area such as generative AI, disclosure seems like an obvious place to begin setting expectations (a hypothetical sketch of what a machine-readable disclosure might look like follows the list). For example:

  1. Disclosing that a product makes use of generative AI techniques: If generative AI is used by a product, explain what it is used for. Is generative AI used to draw conclusions or provide recommendations to human users? Does the product allow a student to generate work that may be passed off as the student’s own?
  2. Disclosing the data attributes used to train the generative AI: What are the attributes of data used to create the model? Where did the data sets come from? How much data has been used? Was the product organization granted rights to use this data? What has been done to understand the potential biases in the data?
  3. Disclosing characteristics of what the AI system was trained to do: What were/are the AI algorithms trying to optimize? What data attributes are most important in determining the resulting outputs? How should users compare performance with the AI to performance without it?
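To make these expectations more concrete, here is one way such a disclosure might be captured in machine-readable form. This is a minimal sketch only: the schema, field names, and example values below are my own illustrative assumptions, not part of any existing 1EdTech specification.

```typescript
// Hypothetical generative AI disclosure manifest.
// All field names are illustrative assumptions, not an existing 1EdTech schema.

interface GenerativeAIDisclosure {
  // 1. Whether and how the product uses generative AI
  usesGenerativeAI: boolean;
  purposes: string[];                      // e.g., ["feedback generation"]
  producesStudentSubmittableWork: boolean; // could output be passed off as a student's own?

  // 2. Attributes of the data used to train the model
  trainingData: {
    sources: string[];        // where the data sets came from
    approximateScale: string; // how much data was used
    rightsGranted: boolean;   // was the supplier granted rights to this data?
    biasReview: string;       // what was done to assess potential biases
  };

  // 3. Characteristics of what the system was trained to do
  optimizationTarget: string;   // what the algorithms try to optimize
  keyInputAttributes: string[]; // attributes most influential on outputs
  evaluationNotes: string;      // how performance with vs. without AI was compared
}

// Example: a disclosure a supplier might publish alongside a product listing.
const exampleDisclosure: GenerativeAIDisclosure = {
  usesGenerativeAI: true,
  purposes: ["recommending personalized learning paths"],
  producesStudentSubmittableWork: false,
  trainingData: {
    sources: ["opt-in institutional data-sharing collaborative"],
    approximateScale: "2M anonymized course interactions",
    rightsGranted: true,
    biasReview: "Compared recommendation rates across student subgroups",
  },
  optimizationTarget: "predicted mastery on the next assessment",
  keyInputAttributes: ["prior assessment scores", "time on task"],
  evaluationNotes: "A/B comparison against the non-AI default pathway",
};
```

Even a lightweight, self-published document along these lines would give institutions a consistent starting point for vetting conversations with suppliers.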

You might ask: if the AI tool is only suggesting or recommending (versus making final decisions about the path of learning), would this sort of disclosure still be required? My answer is that until there is a clear, indisputable, high level of confidence in the recommendations provided, yes.

In addition to setting expectations around disclosure, there is a significant opportunity for deeper collaboration across the education sector to accelerate the efficacy of AI. Here are some critical areas where collaboration would help:

  1. Data sharing/scaling: For many types of AI models, data sharing is likely required to achieve the scale needed to train them effectively. Having institutions explicitly opt into data-sharing collaboratives, both to help accelerate the effective use of the models and to stake ownership of their data, seems like both an opportunity and a hurdle to be addressed.
  2. Results, research, and roadmap sharing: For edtech in general, educational leaders at institutions want to engage with supplier partners who can explain how their products are evolving and why. In K-12, this is especially true right now, as districts face an expected four-to-five-year effort to address the “unfinished learning” of the pandemic era. As with medical technology, edtech product suppliers need to engage leaders, and leaders can collaborate to make that engagement fruitful for suppliers.
  3. Best practice sharing: This is the usual sort of exchange, but focused specifically on products that claim to use AI or that help reduce harmful impacts (e.g., cheating).

In 1EdTech, we have already discussed some of the above ideas with institutional members. We look forward to the many innovative AI-related products and other ideas we expect to see in the next 12 months.

The 1EdTech community understands that openness, trust, and innovation are three factors that reinforce each other as we build a better, more productive future for edtech. An open ecosystem lowers the barriers to innovation, but unless a high level of trust can be achieved, it is unlikely that an innovative product will be successful in the educational context. As such, the TrustEd Apps Program has become foundational within 1EdTech to enable institutions of all sizes and types to design and manage their edtech ecosystems.


Let’s seize the moment on this important opportunity to collaborate to help shape the use of AI in the open, trusted, and innovative edtech ecosystem together!

Published on 2023-04-27

Rob Abel, Ed.D.
CEO
1EdTech