John Schulman
John Schulman, OpenAI cofounder and researcher and inventor of PPO and TRPO, talks RL from human feedback, tuning GPT-3 to follow instructions (InstructGPT) and to answer long-form questions using the internet (WebGPT), AI alignment, AGI timelines, and more!
John Schulman is a cofounder of OpenAI, where he currently works as a researcher and engineer.
Featured References
WebGPT: Browser-assisted question-answering with human feedback
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, John Schulman
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe
Additional References
- Our approach to alignment research, OpenAI, 2022
- Training Verifiers to Solve Math Word Problems, Cobbe et al., 2021
- UC Berkeley Deep RL Bootcamp Lecture 6: Nuts and Bolts of Deep RL Experimentation, John Schulman, 2017
- Proximal Policy Optimization Algorithms, Schulman et al., 2017
- Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs, Schulman, 2016
Creators and Guests
Host
Robin Ranjit Singh Chauhan
Head of Eng @AgFunder · AI: Reinforcement Learning/ML/DL/NLP · Host @TalkRLPodcast · ex-@Microsoft ecomm PgmMgr · @UWaterloo CompEng