The almost-Dr. Igor Krawczuk joins me for what is the equivalent of four of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?

If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet).

Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are particularly worthwhile here:

- The best article you'll ever read on Open Source AI
- The best article you'll ever read on emergence in ML
- Kate Crawford's Atlas of AI (Wikipedia)
- On the Measure of Intelligence
- Thomas Piketty's Capital in the Twenty-First Century (Wikipedia)
- Yurii Nesterov's Introductory Lectures on Convex Optimization

Chapters

- (02:32) - Introducing Igor
- (10:11) - Aside on EY, LW, EA, etc., a.k.a. lettersoup
- (18:30) - Igor on AI alignment
- (33:06) - "Open Source" in AI
- (41:20) - The story of infinite riches and suffering
- (59:11) - On AI threat models
- (01:09:25) - Representation in AI
- (01:15:00) - Hazard fishing
- (01:18:52) - Intelligence and eugenics
- (01:34:38) - Emergence
- (01:48:19) - Considering externalities
- (01:53:33) - The shape of an argument
- (02:01:39) - More eugenics
- (02:06:09) - I'm convinced, what now?
- (02:18:03) - AIxBio (round ??)
- (02:29:09) - On open release of models
- (02:40:28) - Data and copyright
- (02:44:09) - Scientific accessibility and bullshit
- (02:53:04) - Igor's point of view
- (02:57:20) - Outro

Links

Links to all articles/papers mentioned throughout the episode can be found below, in order of their appearance.
All references, including those only mentioned in the extended version of this episode, are included.

- Suspicious Machines Methodology, referred to as the "Rotterdam Lighthouse Report" in the episode
- LIONS Lab at EPFL
- The meme that Igor references
- On the Hardness of Learning Under Symmetries
- Course on the concept of equivariant deep learning

Aside on EY/EA/etc.

Sources on Eliezer Yudkowsky:

- Scholarly Community Encyclopedia
- TIME100 AI
- Yudkowsky's personal website
- EY Wikipedia
- A Very Literary Wiki
- TIME article: Pausing AI Developments Isn't Enough. We Need to Shut it All Down, documenting EY's ruminations on bombing datacenters; this comes up later in the episode but is included here because it is about EY.

More on the surrounding movements and figures:

- LessWrong
- LW Wikipedia
- MIRI
- Coverage on Nick Bostrom (being a racist)
- The Guardian article: 'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute
- The Guardian article: Oxford shuts down institute run by Elon Musk-backed philosopher
- Investigative piece on Émile Torres
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
- NY Times article: We Teach A.I. Systems Everything, Including Our Biases
- NY Times article: Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
- Timnit Gebru's Wikipedia
- The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence

Sources on the environmental impact of LLMs:

- The Environmental Impact of LLMs
- The Cost of Inference: Running the Models
- Energy and Policy Considerations for Deep Learning in NLP
- The Carbon Impact of AI vs Search Engines

Further reading:

- Filling Gaps in Trustworthy Development of AI (Igor is an author on this one)
- A Computational Turn in Policy Process Studies: Coevolving Network Dynamics of Policy Change
- The Smoothed Possibility of Social Choice, an intro to social choice theory and how it overlaps with ML

Relating to Dan Hendrycks:

- Natural Selection Favors AIs over Humans
- "One easy-to-digest source to highlight what he gets wrong [is] Social and Biopolitical Dimensions of Evolutionary Thinking" -Igor
- Introduction to AI Safety, Ethics, and Society, recently published textbook
- "Source to the section [of this paper] that makes Dan one of my favs from that crowd." -Igor
- Twitter post referenced in the episode