Episodes

  • Babies & AI: what can AI tell us about how babies learn language?
    Jan 27 2025

In this episode, Libby and Owen interview Mike Frank, Professor at Stanford University and a leading expert in child development. This episode takes a different angle from the others, focusing on AI as a scientific instrument rather than as a tool for learning. Libby and Owen have a fascinating discussion with Mike about language acquisition and what large language models can teach us about language learning. Mike explains some of the differences between how large language models develop an understanding of human language and how babies do.

    There are some big questions touched on here, including how much of the full human experience it’s possible to capture in data. Libby and Owen also make excellent use of Mike’s valuable time by asking for his expert view on why infants find unboxing videos - videos of other children opening gifts - so addictive.

    Links

    • Mike Frank’s biography
    • New York Times piece about Mike’s work
    • An interview with Mike about his research


    Join us on social media:

    • BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
    • Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
    • Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
    • Stay up to date with all the latest research on child development and learning: https://bold.expert

    Credits: Sarah Myles for production support; Josie Hills for graphic design

    35 mins
  • Teachers & ChatGPT: 25.3 extra minutes a week
    Jan 13 2025

In this short, Libby and Owen discuss a hot-off-the-press study that is one of the first to test how ChatGPT affects the time science teachers spend on lesson preparation. The TL;DR is that teachers who used ChatGPT, with a guide, spent 31% less time preparing lessons - 25.3 minutes per week on average. This very promising result points to the potential for ChatGPT and similar generative AI tools to help teachers with their workload. However, we encourage you to dig into the summary and full report to go beyond the headline result (after listening to this episode) - this is a rich and rigorous study with lots of other interesting findings!

    Links

    • EEF summary
    • Full study

    11 mins
  • How & why did Google build an education specific LLM? (part 2/3)
    Dec 16 2024

    This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM.

One of the problems with out-of-the-box large language models is that they’re designed to be helpful assistants, not teachers. Google was interested in developing a large language model better suited to educational tasks that others might use as a starting point for education products. In this episode, members of the Google team talk about how they approached this, and why some of the subtleties of good teaching make this an especially tricky undertaking!

    They describe the under-the-hood processes that turn a generic large language model into something more attuned to educational needs. Libby and Owen explore how Google’s teams approached fine-tuning to equip LearnLM with pedagogical behaviours that can’t be achieved by prompt engineering alone. This episode offers a rare look at the rigorous, iterative, and multidisciplinary effort it takes to reshape a general-purpose AI into a tool that has the potential to support learning.

    Stay tuned for our next episode in this mini-series, where Libby and Owen take a step back and look at how to define tutoring and assess the extent to which an AI tool is delivering.

    Team biographies

Muktha Ananda is an Engineering Leader for Learning and Education at Google. Muktha has applied AI to a variety of domains, including gaming, search, social and professional networks, online advertising, and most recently education and learning. At Google, Muktha’s team builds horizontal AI technologies for learning that can be used across surfaces such as Search, Gemini, Classroom, and YouTube. Muktha also works on Gemini Learning.

    Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for Education, and AI ethics, with a particular interest in translating social science research into new evaluations and modeling approaches. Before embarking on AI research, Markus completed a PhD in black hole physics.

    Irina Jurenka is a Research Lead at Google DeepMind, where she works with a multidisciplinary team of research scientists and engineers to advance Generative AI capabilities towards the goal of making quality education more universally accessible. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an Experimental Psychology student at Westminster University. This was followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence.

    Link

    The LearnLM API

    38 mins
  • AI tutoring part 2: How good can it get?
    Dec 2 2024

In this episode, Owen and Libby chat about AI tutoring with guests Ben Kornell, Managing Partner at Common Sense Growth Fund, and Alex Sarlin, a veteran of the edtech industry. Both co-founded Edtech Insiders, a leading newsletter and podcast covering the growing edtech industry.

Ben and Alex differentiate between AI-powered search and true AI tutoring, and discuss trends like AI-enhanced human tutors, hybrid models, and fully autonomous AI bots. The conversation highlights the need for AI to integrate with, and learn from, traditional education in developing key elements, such as targeting the right zone of proximal development. Human tutors can sense motivation and frustration, helping students through the more challenging parts of learning. Emerging technologies now use facial and physical cues to gauge engagement, providing valuable nudges for AI tutors or human instructors to boost motivation or adjust the level of content.

They also address ethical and political risks, such as biased responses and dependency issues. With exciting developments on the horizon, the episode explores the sometimes sci-fi-like future of AI tutoring!

    Guest bios:

• Ben Kornell - Ben is currently serving as the Managing Partner of the Common Sense Growth Fund at Common Sense Media. Prior to that, he worked as a School Board Member for the San Carlos School District and was the Co-Founder and Podcast Host of Edtech Insiders.


• Alex Sarlin - Alex is a 15-year veteran of the edtech industry, having worked as a Product Manager and Learning Engineer at both large edtech companies (2U, Scholastic, Chan Zuckerberg Initiative) and startups (Coursera, Skillshare, Credly, Knewton). He is currently a consultant and adviser to a number of edtech companies in higher education and the future of work. He holds a Master’s in Instructional Design from Columbia University, and is the founder of Edtech Insiders, a leading newsletter and podcast covering the growing edtech industry.

    21 mins
  • Inside the black box: How Google is thinking about AI & education (part 1 of 3)
    Nov 18 2024

This episode is the first of a three-part mini-series with Google. There is a lot of interest in how big tech companies are engaging with AI in education and what their future plans are - in this mini-series, hear the latest directly from Google.

The genesis of this mini-series was a short Ed-Technical episode from earlier this year. Libby and Owen discussed a paper Google released about the work they had done to fine-tune an LLM called LearnLM to make it more useful for education. This work was motivated by a realisation that some of the core behaviours of LLMs (helpfulness, sycophancy) aren’t aligned with what’s valuable from a learning perspective, and prompting can only go so far.

    This first episode focuses on how Google is integrating LearnLM’s capabilities into existing Google products like YouTube and new products like LearnAbout. The next episode in the mini-series will focus on LLMs and tutoring, and the final episode will be a more technical episode on the development of LearnLM.

    We had a chance to talk to a number of folks across a range of teams, including LearnX, Google Research and DeepMind. There was too much great content to squeeze into three episodes but all full interviews will be up on our YouTube channel.

    In this episode we include excerpts from interviews with four members of the team.

    Rob Wong is the Product Lead for LearnX, a team within Google that builds learning features on Search, YouTube, and Gemini chat, and also works on LearnLM in partnership with Google Research and Google DeepMind.

    Julia Wilkowski leads a pedagogy team at Google. Her team collaborates with Google product teams to apply learning science principles and teaching best practices.

    Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for education, and AI ethics, with a particular interest in translating social science research into new evaluations and modeling approaches.

Angie Mac McAllister, PhD, is a Group Product Manager at Google with a vision: to make a personal AI tutor available to everyone. Focused on developing learning features for Gemini, Mac combines 35 years of experience in education with cutting-edge AI to help students become better learners.

    Links:

    • Google’s technical report on LearnLM
    • Ed-Technical short episode on Google’s LearnLM paper
    • Article about Learn About, Google’s experimental new AI tool
    • Article by Angie Mac McAllister about new Gemini learning features

    36 mins
  • Big data and algorithmic bias in education: what is it and why does it matter?
    Oct 21 2024

In this episode, Owen and Libby speak to Ryan Baker, a leading expert in using big data to study learners and their interactions with educational software. Ryan is a Professor in the Graduate School of Education at the University of Pennsylvania, and is Director of the Penn Center for Learning Analytics.

    Ryan provides an overview of educational data mining (otherwise known as EDM) and explains how insights from EDM can help improve learner engagement and outcomes. Libby and Owen also explore the technical aspects of algorithmic bias with Ryan, discussing why it matters, how it is defined, and how it can be addressed from a technical perspective.

    Links:

    • Ryan Baker biography
    • One of Ryan Baker’s research papers about algorithmic bias
    • Big Data and Education - Ryan Baker’s free massive online open textbook

    25 mins
  • Think aloud or think before you speak?: OpenAI’s new model for advanced reasoning
    Oct 8 2024

    In this short episode, Libby and Owen discuss OpenAI’s new model for advanced reasoning, o1. They talk about its new capabilities and strengths, and what they think about its significance for education after an initial play around. They talk through the benefits of ‘think aloud’ versus ‘think before you speak’ approaches in education, and how this relates to o1.

    Links:

    • OpenAI’s announcement about o1

    11 mins
  • Misconceptions about misconceptions: How AI can help teachers understand & tackle student misconceptions
    Sep 23 2024

In this episode, Libby and Owen are joined by Craig Barton, Head of Education at Eedi and host of the Mr Barton Maths and Tips for Teachers podcasts, along with Simon Woodhead, Director of Research at Eedi. Together, they explore the world of educational misconceptions: what they are, why they matter, and how AI and data science can help tackle them.

    Links:

    • Craig Barton biography
    • Simon Woodhead biography
    • Eedi’s research

    34 mins