Calendar
Subject to change.
Planning Foundations
- Sep 2
  - Lecture: Intro
- Sep 4
- Sep 8
- Sep 9
- Sep 11
- Sep 16
- Sep 18
- Sep 23
- Sep 25
- Sep 30
- Oct 2
- Oct 6
- Oct 7
- Oct 9
- Oct 10
Learning to Make Planning Possible
- Oct 14
  - No Class (Fall Recess)
- Oct 16
  - No Class (Fall Recess)
- Oct 20
- Oct 21
  - Papers: Learning Latent Space Models for Motion Planning
    - “Robot motion planning in learned latent spaces” (Ichter & Pavone, 2019)
    - “Latent planning via expansive tree search” (Gieselmann & Pokorny, 2022)
    - “Motion planning by learning the solution manifold in trajectory optimization” (Osa, 2022)
- Oct 22
- Oct 23
  - Papers: Learning Latent Space Models for TrajOpt
    - “Embed to control: a locally linear latent dynamics model for control from raw images” (Watter et al., 2015)
    - “Dream to control: learning behaviors by latent imagination” (Hafner et al., 2020)
    - “Guaranteed discovery of controllable latent states with multi-step inverse models” (Lamb et al., 2022)
- Oct 27
- Oct 28
  - Papers: Learning Models for Task and Motion Planning
    - “Predicate invention for bilevel planning” (Silver et al., 2023)
    - “From real world to logic and back: learning generalizable relational concepts for long horizon robot planning” (Shah et al., 2025)
    - “VisualPredicator: learning abstract world models with neuro-symbolic predicates for robot planning” (Liang et al., 2025)
- Oct 30
  - No Class
  - Use the extra time to work on final projects!
- Oct 31
Learning to Make Planning Fast
- Nov 3
- Nov 4
  - Papers: Learning to Guide MCTS
    - “Mastering the game of Go with deep neural networks and tree search” (Silver et al., 2016)
    - “Mastering chess and shogi by self-play with a general reinforcement learning algorithm” (Silver et al., 2017)
    - “Mastering Atari, Go, chess and shogi by planning with a learned model” (Schrittwieser et al., 2019)
- Nov 5
- Nov 6
  - Papers: Learning Samplers for Motion Planning and TAMP
    - “Motion planning networks: bridging the gap between learning-based and classical motion planners” (Qureshi et al., 2020)
    - “Learning constrained distributions of robot configurations with generative adversarial networks” (Lembono et al., 2021)
    - “Compositional diffusion-based continuous constraint solvers” (Yang et al., 2023)
- Nov 10
- Nov 11
  - Papers: Classical Planning with LLMs
    - “LLMs can’t plan, but can help planning in LLM-Modulo frameworks” (Kambhampati et al., 2024)
    - “Generalized planning in PDDL domains with pretrained large language models” (Silver et al., 2023)
    - “Classical planning with LLM-generated heuristics: challenging the state of the art with Python code” (Corrêa et al., 2025)
- Nov 12
- Nov 13
  - Papers: Planning with VLAs
    - “CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models” (Zhao et al., 2025)
    - “Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models” (Shi et al., 2025)
    - “MolmoAct: Action Reasoning Models that can Reason in Space” (Lee et al., 2025)
- Nov 17
- Nov 18
  - Papers: Learning Factored State Abstractions
    - “State abstraction discovery from irrelevant state variables” (Jong & Stone, 2005)
    - “Planning with learned object importance in large problem instances using graph neural networks” (Silver et al., 2021)
    - “CAMPs: learning context-specific abstractions for efficient planning in factored MDPs” (Chitnis et al., 2020)
- Nov 19
- Nov 20
  - Papers: Learning Action Abstractions (Options)
    - “Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning” (Sutton et al., 1999)
    - “Diversity is all you need: learning skills without a reward function” (Eysenbach et al., 2018)
    - “Finding options that minimize planning time” (Jinnai et al., 2019)
- Nov 24
- Nov 25
  - No Class (Thanksgiving Recess)
- Nov 27
  - No Class (Thanksgiving Recess)
Planning to Learn
- Dec 1
- Dec 2
  - Papers: Exploration + Planning
    - “Exploration in model-based reinforcement learning by empirically estimating learning progress” (Lopes et al., 2012)
    - “Curiosity-driven exploration by self-supervised prediction” (Pathak et al., 2017)
    - “Trial and error: exploration-based trajectory optimization for LLM agents” (Song et al., 2024)
- Dec 3
- Dec 4
  - Papers: Planning to Learn with Human-in-the-Loop
    - “Asking for help using inverse semantics” (Knepper et al., 2014)
    - “Human-in-the-loop task and motion planning for imitation learning” (Mandlekar et al., 2023)
    - “To ask or not to ask: human-in-the-loop contextual bandits with applications in robot-assisted feeding” (Banerjee et al., 2024)
- Dec 15
  - Project: Final Project Due