Reflections on the closure of the OER Foundation and the implications for OER policies worldwide

The recent announcement of the impending closure of the OER Foundation (based in Otago, New Zealand) has drawn attention again to the sustainability (or not) of major OER initiatives and the longevity of OER policies. Acting on this, we decided to do one of our rapid AI-assisted reviews of the current state of the OER domain.

(The issue has concerned us since we were asked, two years ago, to review and update the OER policies for a particular country (one in neither the EU nor the OECD, as it happens), part of which required an LLM-heavy global analysis across countries of perceived relevance to the client.)

For speed, the review began with a series of seven questions put to ChatGPT (version 5.2), which we have used extensively over the last three years for policy analysis and related tasks. The precise formulation of each question depended on the answers to the previous ones.

The seven questions

The questions ended up as:

  1. What has gone wrong with the OER Foundation in recent years?
  2. I suspect that politics in NZ and the reorganisation of the polytechnic sector has contributed [to the closure]. Views?
  3. Before I ask about comparators to the OER Foundation closure, I would like to ask you to explore whether and if so why “EU microcredential policy unintentionally marginalises classic OER models”. Please first describe whether you think it is microcredential policy generally (NZ, AU, UK, CA and US are not in EU yet all have some kind of policy for microcredentials).
  4. That’s enough for now on microcredentials. Returning to the main thrust, I would like to ask you to comment on the reasons behind other closures or shrinking of OER entities, with particular reference to the closure of the JISC/HEA OER programme in the UK, the dwindling of OER repositories in many parts of the EU, and the failure to ignite centrally-directed/funded OER initiatives in many other countries of the world (AU, Scotland, Wales etc), in contrast with the success of OER entities in British Columbia (despite earlier failures of innovative institutions), recent adoption in Ontario, recent OER developments in Germany and the continued growth of OER initiatives at federal, statal and institutional levels in the US despite the relentless focus on “business models” in that society.
  5. I don’t need any more depth at present [on projects]. Yet I was intrigued that you did not mention the MOOC movement (2011-20) as a distractor and competitor for “oxygen” in EU and UK circles in particular, yet not, it seems, so much in the US – much more oxygen in the US?
  6. It would be [interesting to look more at FutureLearn versus BCcampus versus OpenStax], but for a different article. Finally, I would like you to ponder the issue of the extent to which AI will bring the curtain down on OER even in these jurisdictions (US, parts of CA, DE) where OER has held firm or even grown. As a prompt, I think I read recently that GPT use has now overtaken Wikipedia accesses worldwide.
  7. As a postscript, many of your conclusions will seem novel and challenging even to experts in OER and policies. Yet even around ten years ago there were signs in EU reports that all was not well with OER. 1) The EU carefully started using the phrase “opening up education” not “open education”. 2) The JRC 2018 report on “Policy Approaches to Open Education” documented the embarrassing lack of member state policies and activities in OER. 3) The Erasmus+ D-TRANSFORM report proposed a much more serious “business case” approach to OER. 4) The much-criticised POERUP project pointed out the lack of national OER policies. 5) The Neil Butcher report on “Researching the effectiveness of Open Educational Resource (OER) policies” found very few national policies with any practical outcomes.

Next steps to prepare the final report

The conversation was then edited by hand into an extensively footnoted document. That document was then fed into a second LLM, Manus (powerful in search and automation terms), which was tasked with finding additional evidence documents, quotations and commentary to support or refute the assertions in the initial document. These were added as endnotes to the main report.
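This second, evidence-gathering pass is the most mechanical part of the workflow and lends itself to scripting. Below is a minimal sketch of the idea, not the tooling actually used: it assumes the assertions have already been extracted from the draft, and a plain chat-completion call via the OpenAI Python SDK stands in for a search-capable agent such as Manus; the model name and example claims are placeholders.

```python
# Hypothetical sketch of the evidence-gathering pass: a chat-completion
# model stands in here for the search-capable agent (Manus) actually used.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In practice these would be extracted from the footnoted draft.
assertions = [
    "The JISC/HEA OER programme in the UK was closed when funding ended.",
    "OER repositories in many parts of the EU have dwindled.",
]

endnotes = []
for i, claim in enumerate(assertions, start=1):
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Find published evidence, quotations or commentary "
                        "that supports or refutes the claim; cite sources."},
            {"role": "user", "content": claim},
        ],
    )
    endnotes.append(f"[{i}] {resp.choices[0].message.content}")

# The numbered endnotes are then appended to the report for human review.
```

Each candidate endnote still passes through human review before being attached to the report, mirroring the human-in-the-loop design described below.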

The process was designed to produce a rapid, evidence-based synthesis, leveraging AI capabilities for research and analysis while being guided by human expertise.

Value of this approach

This process demonstrates a powerful model for producing timely, in-depth analysis on complex policy issues, combining the speed and scale of AI-driven research with the critical thinking and framing of a human expert. Our team has used such approaches for at least three years now. The tools vary, but ChatGPT, Manus and Perplexity currently figure strongly, with others (accessed via the Poe aggregator) often used for early scoping probes.

The resulting report is not an expression of the AI’s “opinion”, but a structured reflection of the evidence found in the public domain, presented to inform and stimulate expert debate. Having said that, the extent of the correlations (and anti-correlations) that the LLMs found was impressive – even disturbing – to an experienced expert and LLM user.

As the report’s preface notes, the process surfaced some deep and complex findings, thus raising “questions about the casual dismissal of complex modern multi-level LLMs as just stochastic parrots”.

The full report is at this link: Closure of OER-F and implications for OER.

A series of derivative and supplementary reports is also nearing completion.

Although it may seem a distraction from our work on reconstructing assignments in an AI-heavy world, the report runs to 8,000 words, about the length of a shortish undergraduate dissertation, and its complex structure in Word, with footnotes and endnotes, makes it ideal as a testbed for programmatic creation and editing via LLMs (a minimal sketch of such access follows), and thus part of our testing agenda too. More on that soon, ready for the next semester.
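Such a structure is directly scriptable because a .docx file is a ZIP archive whose body, footnotes and endnotes live in separate XML parts (word/document.xml, word/footnotes.xml and word/endnotes.xml). The sketch below shows the kind of programmatic access involved; the filename is hypothetical, and it assumes the document actually contains footnote and endnote parts, as this report does.

```python
# Minimal sketch: pull body, footnote and endnote text out of a .docx
# so it can be fed to an LLM for programmatic checking or editing.
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used by all .docx XML parts.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_paragraphs(xml_bytes: bytes) -> list[str]:
    """Collect the visible text of each paragraph in one XML part."""
    root = ET.fromstring(xml_bytes)
    paragraphs = []
    for p in root.iter(f"{W}p"):
        text = "".join(t.text or "" for t in p.iter(f"{W}t"))
        if text.strip():
            paragraphs.append(text)
    return paragraphs

# "oer_report.docx" is a hypothetical filename for illustration.
with zipfile.ZipFile("oer_report.docx") as docx:
    body = extract_paragraphs(docx.read("word/document.xml"))
    footnotes = extract_paragraphs(docx.read("word/footnotes.xml"))
    endnotes = extract_paragraphs(docx.read("word/endnotes.xml"))

print(f"{len(body)} body paragraphs, {len(footnotes)} footnote paragraphs, "
      f"{len(endnotes)} endnote paragraphs")
```

Writing edits back is the harder half of the problem (rewriting the XML and re-zipping the archive while preserving note references), which is precisely what makes the report a useful testbed.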
