Brain Inspired

Paul Middlebrooks

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

Listen to the latest episode:

Support the show to get full episodes and join the Discord community.

James, Andrew, and Weinan discuss their recent theory about how the brain might use complementary learning systems to optimize our memories. The idea is that our hippocampus creates our episodic memories of individual events, full of particular details, and, through a complementary process, slowly consolidates those memories within our neocortex via mechanisms like hippocampal replay. The new idea in their work suggests a way for the consolidated cortical memory to become optimized for generalization, something humans are known to be capable of but deep learning has yet to achieve. We discuss what their theory predicts about how the "correct" process depends on how much noise and variability there is in the learning environment, how their model solves this, and how it relates to our brain and behavior.

Links:
  • James' Janelia page
  • Weinan's Janelia page
  • Andrew's website
  • Twitter: Andrew: @SaxeLab; Weinan: @sunw37
  • Paper we discuss: Organizing memories for generalization in complementary learning systems
  • Andrew's previous episode: BI 052 Andrew Saxe: Deep Learning Theory

Timestamps:
  • 0:00 - Intro
  • 3:57 - Guest Intros
  • 15:04 - Organizing memories for generalization
  • 26:48 - Teacher, student, and notebook models
  • 30:51 - Shallow linear networks
  • 33:17 - How to optimize generalization
  • 47:05 - Replay as a generalization regulator
  • 54:57 - Whole greater than sum of its parts
  • 1:05:37 - Unpredictability
  • 1:10:41 - Heuristics
  • 1:13:52 - Theoretical neuroscience for AI
  • 1:29:42 - Current personal thinking
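To make the teacher-student-notebook setup concrete, here is a minimal sketch in Python (not the authors' code; the model details and all names such as teacher_w and notebook variables are illustrative assumptions). A "teacher" rule generates noisy episodes, a "notebook" (hippocampus analogue) stores them, and a shallow linear "student" (neocortex analogue) consolidates by replaying them with gradient descent:

```python
# A toy teacher-student-notebook sketch: the student replays stored
# noisy episodes and we track its distance from the noise-free teacher.
import numpy as np

rng = np.random.default_rng(0)
d, n_examples, noise_std = 20, 30, 0.5

# Teacher: the underlying rule that generates experiences.
teacher_w = rng.normal(size=d) / np.sqrt(d)

# Notebook: individual episodes, each corrupted by noise.
X = rng.normal(size=(n_examples, d))
y = X @ teacher_w + noise_std * rng.normal(size=n_examples)

# Student: a shallow linear network trained by replaying notebook items.
student_w = np.zeros(d)
lr, epochs = 0.1, 1000
gen_errors = []
for _ in range(epochs):
    grad = X.T @ (X @ student_w - y) / n_examples  # mean-squared-error gradient
    student_w -= lr * grad
    # Generalization error: distance from the noise-free teacher rule.
    gen_errors.append(float(np.sum((student_w - teacher_w) ** 2)))

best = int(np.argmin(gen_errors))
print(f"best generalization error {gen_errors[best]:.3f} at epoch {best}; "
      f"error after full replay {gen_errors[-1]:.3f}")
```

In this toy setting, with noisy episodes the error relative to the teacher is typically U-shaped in replay time: the student generalizes best part-way through consolidation, before it starts fitting the noise. This echoes the episode's point that how much to consolidate should depend on the noise and variability in the learning environment.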

Previous episodes

  • 137 - BI 120 James Fitzgerald, Andrew Saxe, Weinan Sun: Optimizing Memories 
    Sun, 21 Nov 2021
  • 136 - BI 119 Henry Yin: The Crisis in Neuroscience 
    Thu, 11 Nov 2021
  • 135 - BI 118 Johannes Jäger: Beyond Networks 
    Mon, 01 Nov 2021
  • 134 - BI 117 Anil Seth: Being You 
    Tue, 19 Oct 2021
  • 133 - BI 116 Michael W. Cole: Empirical Neural Networks 
    Tue, 12 Oct 2021