Ed-Technical

By: Owen Henkel & Libby Hills
Summary

Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the Ed-Technical podcast series about AI in education. In each episode, Libby and Owen ask experts to help educators sift the useful insights from the AI hype. They'll be asking questions like: How does this actually help students and teachers? What do we actually know about this technology, and what's just speculation? And (importantly!) when we say AI, what are we actually talking about?

    © 2025 Ed-Technical
Episodes
  • Babies & AI: what can AI tell us about how babies learn language?
    Jan 27 2025

    In this episode, Libby and Owen interview Mike Frank, Professor at Stanford University and leading expert in child development. This episode has a different angle to the others, as it is more about AI as a scientific instrument rather than as a tool for learning. Libby and Owen have a fascinating discussion with Mike about language acquisition and what we can learn about language learning from large language models. Mike explains some of the differences between how large language models develop an understanding of human language versus how babies do this.

    There are some big questions touched on here, including how much of the full human experience it’s possible to capture in data. Libby and Owen also make excellent use of Mike’s valuable time by asking for his expert view on why infants find unboxing videos - videos of other children opening gifts - so addictive.

    Links

    • Mike Frank’s biography
    • New York Times piece about Mike’s work
    • An interview with Mike about his research


    Join us on social media:

    • BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
    • Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
    • Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
    • Stay up to date with all the latest research on child development and learning: https://bold.expert

    Credits: Sarah Myles for production support; Josie Hills for graphic design


    35 mins
  • Teachers & ChatGPT: 25.3 extra minutes a week
    Jan 13 2025

    In this short, Libby and Owen discuss a hot-off-the-press study that is one of the first to test how ChatGPT affects the time science teachers spend on lesson preparation. The TL;DR: teachers who used ChatGPT, with a guide, spent 31% less time preparing lessons - a saving of 25.3 minutes per week on average. This very promising result points to the potential for ChatGPT and similar generative AI tools to help teachers with their workload. However, we encourage you to dig into the summary and report to go beyond the headline result (after listening to this episode) - this is a rich and rigorous study with lots of other interesting findings!

    Links

    • EEF summary
    • Full study



    Join us on social media:

    • BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
    • Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
    • Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
    • Stay up to date with all the latest research on child development and learning: https://bold.expert

    Credits: Sarah Myles for production support; Josie Hills for graphic design


    11 mins
  • How & why did Google build an education specific LLM? (part 2/3)
    Dec 16 2024

    This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM.

    One of the problems with out-of-the-box large language models is that they’re designed to be helpful assistants, not teachers. Google was interested in developing a large language model better suited to educational tasks, one that others might use as a starting point for education products. In this episode, members of the Google team talk about how they approached this, and why some of the subtleties of good teaching make this an especially tricky undertaking!

    They describe the under-the-hood processes that turn a generic large language model into something more attuned to educational needs. Libby and Owen explore how Google’s teams approached fine-tuning to equip LearnLM with pedagogical behaviours that can’t be achieved by prompt engineering alone. This episode offers a rare look at the rigorous, iterative, and multidisciplinary effort it takes to reshape a general-purpose AI into a tool that has the potential to support learning.

    Stay tuned for our next episode in this mini-series, where Libby and Owen take a step back and look at how to define tutoring and assess the extent to which an AI tool is delivering.

    Team biographies

    Muktha Ananda is an Engineering Leader for Learning and Education at Google. Muktha has applied AI to a variety of domains, including gaming, search, social/professional networks, online advertising, and most recently education and learning. At Google, Muktha’s team builds horizontal AI technologies for learning that can be used across surfaces such as Search, Gemini, Classroom, and YouTube. Muktha also works on Gemini Learning.

    Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for Education, and AI ethics, with a particular interest in translating social science research into new evaluations and modeling approaches. Before embarking on AI research, Markus completed a PhD in black hole physics.

    Irina Jurenka is a Research Lead at Google DeepMind, where she works with a multidisciplinary team of research scientists and engineers to advance Generative AI capabilities towards the goal of making quality education more universally accessible. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an Experimental Psychology student at Westminster University. This was followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence.

    Link

    The LearnLM API
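
    For readers who want to experiment with the model linked above, here is a minimal, illustrative sketch of calling LearnLM through the Gemini API using the google-generativeai Python SDK. It is not taken from the episode; the model name and system instruction below are assumptions based on the experimental LearnLM release, so check the LearnLM API documentation for the current identifier.

    # Minimal sketch (assumptions: the experimental LearnLM model exposed via the
    # Gemini API, accessed with the google-generativeai Python SDK).
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # replace with your own API key

    # A system instruction nudges the model towards tutoring-style behaviour;
    # the episode explains why fine-tuning goes further than prompting alone.
    model = genai.GenerativeModel(
        model_name="learnlm-1.5-pro-experimental",  # assumed experimental model name
        system_instruction=(
            "You are a patient tutor. Guide the student with questions "
            "rather than giving the answer straight away."
        ),
    )

    response = model.generate_content("Help me understand why the sky is blue.")
    print(response.text)

    The point the episode emphasises still applies: a system instruction like the one above only shapes surface behaviour, whereas LearnLM’s pedagogical tendencies come from fine-tuning the underlying model.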


    Join us on social media:

    • BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
    • Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
    • Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
    • Stay up to date with all the latest research on child development and learning: https://bold.expert

    Credits: Sarah Myles for production support; Josie Hills for graphic design


    38 mins
