State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI
Lex Fridman Podcast
Nathan Lambert and Sebastian Raschka are machine learning researchers, engineers, and educators. Nathan is the post-training lead at the Allen Institute for AI (Ai2) and the author of The RLHF Book. Sebastian Raschka is the author of Build a Large Language Model (From Scratch) and Build a Reasoning Model (From Scratch). Thank you for listening ❤ Check out our sponsors: lexfridman.com/sponsors/ep490-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript: lexfridman.com/ai-sota-2026-transcript
CONTACT LEX: Feedback – give feedback to Lex: lexfridman.com/survey AMA – submit questions, videos or call-in: lexfridman.com/ama Hiring – join our team: lexfridman.com/hiring Other – other ways to get in touch: lexfridman.com/contact
SPONSORS: To support this podcast, check out our sponsors & get discounts: Box: Intelligent content management platform. Go to box.com/ai Quo: Phone system (calls, texts, contacts) for businesses. Go to quo.com/lex UPLIFT Desk: Standing desks and office ergonomics. Go to upliftdesk.com/lex Fin: AI agent for customer service. Go to fin.ai/lex Shopify: Sell stuff online. Go to shopify.com/lex CodeRabbit: AI-powered code reviews. Go to coderabbit.ai/lex LMNT: Zero-sugar electrolyte drink mix. Go to drinkLMNT.com/lex Perplexity: AI-powered answer engine. Go to perplexity.ai
OUTLINE:
- (00:00) – Introduction
- (01:39) – Sponsors, Comments, and Reflections
- (16:29) – China vs US: Who wins the AI race?
- (25:11) – ChatGPT vs Claude vs Gemini vs Grok: Who is winning?
- (36:11) – Best AI for coding
- (43:02) – Open Source vs Closed Source LLMs
- (54:41) – Transformers: Evolution of LLMs since 2019
- (1:02:38) – AI Scaling Laws: Are they dead or still holding?
- (1:18:45) – How AI is trained: Pre-training, Mid-training, and Post-training
- (1:51:51) – Post-training explained: Exciting new research directions in LLMs
- (2:12:43) – Advice for beginners on how to get into AI development & research
- (2:35:36) – Work culture in AI (72+ hour weeks)
- (2:39:22) – Silicon Valley bubble
- (2:43:19) – Text diffusion models and other new research directions
- (2:49:01) – Tool use
- (2:53:17) – Continual learning
- (2:58:39) – Long context
- (3:04:54) – Robotics
- (3:14:04) – Timeline to AGI
- (3:21:20) – Will AI replace programmers?
- (3:39:51) – Is the dream of AGI dying?
- (3:46:40) – How will AI make money?
- (3:51:02) – Big acquisitions in 2026
- (3:55:34) – Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta
- (4:08:08) – Manhattan Project for AI
- (4:14:42) – Future of NVIDIA, GPUs, and AI compute clusters
- (4:22:48) – Future of human civilization