Computational Human-Robot Interaction

Instructor: Stefanos Nikolaidis (nikolaid at usc dot edu)

Lectures: Mon / Wed 15:30 - 17:20 KAP 145

Office Hours: Mon / Wed 17:30-18:00 RTH 401 (by appointment)

Course Description: In this advanced graduate-level class, you will learn about the theory and algorithms that enable robots to account for people in their decision making in a principled way. The course will contrast decision-theoretic and learning-based paradigms that allow robots to reason in the presence of uncertainty with studies in human-robot interaction. It will then focus on what makes some of these algorithms particularly effective and scalable in real-world human-robot interaction scenarios. By the end of this class, you will be able to describe and compare algorithms for deployed robotic systems interacting with people, design user studies to evaluate these algorithms and communicate your ideas to a peer audience. Evaluation is mainly based on student presentations, a final project and short quizzes based on the assigned reading material.

Learning Objectives: In this course, you will gain knowledge about planning and learning algorithms in human-robot interaction and skills in interpreting and presenting research. By the end of this course, you should be able to:

  • Describe and compare planning and learning algorithms for robotic systems that interact with people.
  • Design user studies to evaluate these algorithms.
  • Communicate your ideas clearly to a peer audience.

Prerequisites: There are no formal prerequisites, but familiarity with probability theory and linear algebra is strongly recommended.

Grading:

Component Percentage
Paper Presentations 30%
Final Project 40%
Weekly Quizzes 20%
Participation 10%

Assessment of Assignments

Important Dates:

Feb 19th: Project Proposal Submission.

Project Proposal:

Schedule:

Day Date Topic Reading Notes
Mon Jan 13th What is Computational HRI?
  • Computational Human-Robot Interaction, Thomaz, Hoffman and Cakmak. (Optional)
  • Human modeling for human–robot collaboration, Hiatt et al. (Optional)
  • The Grand Challenges in Socially Assistive Robotics, Tapus et al. (Optional)
  • Social robots that interact with people, Breazeal et al. (Optional)
Slides
Wed Jan 15th Probability and Bayesian inference
  • Russell & Norvig (2009). Artificial Intelligence: a Modern Approach (3rd ed.). Prentice Hall. Chapters 13, 14 and 15.
  • Real-Time American Sign Language Recognition from Video Using Hidden Markov Models, Starner and Pentland.
code
notes_A notes_B
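As a companion to this lecture, here is a minimal sketch of Bayesian inference over discrete hypotheses: a robot updates its belief about which of two goals a person is pursuing after one noisy observation. The goal names, prior, and likelihoods are invented for illustration, not taken from the readings.

```python
# Bayes' rule over a discrete hypothesis space:
# posterior P(h | o) ∝ P(o | h) * P(h), then normalize.

def bayes_update(prior, likelihood):
    """Return the normalized posterior over hypotheses."""
    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"goal_A": 0.5, "goal_B": 0.5}       # uniform prior over goals
likelihood = {"goal_A": 0.8, "goal_B": 0.2}  # P(observed cue | goal)

posterior = bayes_update(prior, likelihood)
print(posterior)  # {'goal_A': 0.8, 'goal_B': 0.2}
```

The same update, applied recursively over an observation sequence with a transition model, is the forward pass of the hidden Markov models used in the Starner and Pentland paper.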
Mon Jan 20th no class (holiday)
Wed Jan 22nd Bayesian inference (cont'd) and decision making under uncertainty
  • Russell & Norvig (2009). Artificial Intelligence: a Modern Approach (3rd ed.). Prentice Hall. Chapter 16.
notes
Mon Jan 27th Markov decision processes and applications in HRI
  • Russell & Norvig (2009). Artificial Intelligence: a Modern Approach (3rd ed.). Prentice Hall. Sections 17.1–17.3.
code
notes
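To accompany this lecture, a minimal value-iteration sketch on a toy two-state MDP. The states, transitions, and rewards are invented for illustration; they are not from the textbook or course materials.

```python
# Value iteration: repeatedly apply the Bellman optimality backup
#   V(s) <- max_a [ R(s, a) + gamma * sum_s' T(s' | s, a) V(s') ]
# on a toy MDP where staying in s1 yields reward 1.

GAMMA = 0.9
states = ["s0", "s1"]
actions = ["stay", "go"]
# T[s][a] = list of (next_state, probability); R[s][a] = immediate reward
T = {
    "s0": {"stay": [("s0", 1.0)], "go": [("s1", 1.0)]},
    "s1": {"stay": [("s1", 1.0)], "go": [("s0", 1.0)]},
}
R = {"s0": {"stay": 0.0, "go": 0.0}, "s1": {"stay": 1.0, "go": 0.0}}

V = {s: 0.0 for s in states}
for _ in range(100):  # enough iterations for convergence at gamma = 0.9
    V = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in T[s][a])
                for a in actions)
         for s in states}

print(V)  # V(s1) approaches 1 / (1 - 0.9) = 10; V(s0) approaches 9
```

The closed-form check: staying in s1 forever earns 1 + 0.9 + 0.81 + … = 10, and s0 is one zero-reward step away, so V(s0) = 0.9 × 10 = 9.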
Wed Jan 29th Action selection for collaboration (student presentations)
  • Cost-Based Anticipatory Action Selection for Human-Robot Fluency, Hoffman and Breazeal. (Main:Heramb, Con:Hejia)
  • Joint action: bodies and minds moving together, Sebanz et al. (Main:Nathan)
Mon Feb 3rd Experimental Design Sample consent form notes
Wed Feb 5th Training of human teams and shared mental models (student presentations)
  • The Impact of Cross-Training on Team Effectiveness, Marks et al. (Main:Matt)
  • Planning, Shared Mental Models and Coordinated Performance: An Empirical Link is Established, Stout et al. (Main:Rey)
Mon Feb 10th Action coordination in human-robot teams (student presentations)
  • Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy, Nikolaidis and Shah
  • Adaptive Coordination Strategies for Human-Robot Handovers, Huang et al.
Wed Feb 12th Intent inference (student presentations)
  • Goal Inference as Inverse Planning, Baker et al.
  • Planning-based Prediction for Pedestrians, Ziebart et al.
Mon Feb 17th no class (holiday)
Wed Feb 19th Expressiveness in robot motion (student presentations)
  • Expressing thought: improving robot readability with animation principles, Takayama et al.
  • The Illusion of Robotic Life, Ribeiro and Paiva.
Project Proposal Due.
Mon Feb 24th Generation of expressive motion (student presentations)
  • Generating Legible Robot Motion, Dragan and Srinivasa.
  • Enhancing Interaction Through Exaggerated Motion Synthesis, Gielniak and Thomaz.
Wed Feb 26th Guest Lecture (TBD)
Mon Mar 2nd Planning with partial observability notes
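As a companion to this lecture, a minimal sketch of the discrete Bayes filter that underlies planning with partial observability: the robot tracks a hidden human state from noisy observations. The state names, transition model, and observation model are invented for illustration.

```python
# POMDP-style belief update:
#   b'(s') ∝ O(obs | s') * sum_s T(s' | s) * b(s), then normalize.

def belief_update(belief, T, O, obs):
    """One step of the discrete Bayes filter: predict, then correct."""
    predicted = {s2: sum(T[s][s2] * belief[s] for s in belief)
                 for s2 in belief}
    unnorm = {s2: O[s2][obs] * predicted[s2] for s2 in belief}
    z = sum(unnorm.values())
    return {s2: p / z for s2, p in unnorm.items()}

# Hidden human state: attentive vs. distracted (illustrative numbers).
T = {"attentive": {"attentive": 0.9, "distracted": 0.1},
     "distracted": {"attentive": 0.2, "distracted": 0.8}}
O = {"attentive": {"looks_at_task": 0.9, "looks_away": 0.1},
     "distracted": {"looks_at_task": 0.3, "looks_away": 0.7}}

b = {"attentive": 0.5, "distracted": 0.5}
b = belief_update(b, T, O, "looks_away")
print(b)  # belief mass shifts toward "distracted"
```

A POMDP planner chooses actions as a function of this belief rather than of the (unobservable) true state.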
Wed Mar 4th Planning with partially observable human states (student presentations)
  • Intention-aware motion planning, Bandyopadhyay et al.
  • Belief Space Planning for Sidekicks in Cooperative Games, Macindoe et al.
Mon Mar 9th Planning with human state dynamics (student presentations)
  • Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model, Nikolaidis et al.
  • Planning for Autonomous Cars that Leverage Effects on Human Actions, Sadigh et al.
Wed Mar 11th Planning in shared autonomy domains (student presentations)
  • Shared Autonomy via Hindsight Optimization, Javdani et al.
  • Autonomy Infused Teleoperation with Application to BCI Manipulation, Muelling et al.
Mon Mar 16th no class (spring recess)
Wed Mar 18th no class (spring recess)
Mon Mar 23rd Learning techniques for HRI
  • Russell & Norvig (2009). Artificial Intelligence: a Modern Approach (3rd ed.). Prentice Hall. Chapter 20.
  • The Expectation Maximization Algorithm: A Short Tutorial, Borman. (Optional)
  • A survey of robot learning from demonstration, Argall et al. (Optional)
notes
Wed Mar 25th Learning from demonstration in HRI (student presentations)
  • Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective, Akgun et al.
  • Confidence-based policy learning from demonstration using gaussian mixture models, Chernova and Veloso.
Mon Mar 30th Active learning in HRI (student presentations)
  • Designing robot learners that ask good questions, Cakmak and Thomaz.
  • Active Preference-Based Learning of Reward Functions, Sadigh et al.
Wed Apr 1st Reinforcement learning notes
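To accompany this lecture, a minimal tabular Q-learning sketch on a toy two-state problem (reward for staying in state 1). The environment, learning rate, and exploration schedule are invented for illustration.

```python
# Tabular Q-learning with epsilon-greedy exploration:
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
import random

random.seed(0)  # fixed seed so the run is reproducible
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
states, actions = [0, 1], ["stay", "go"]
Q = {(s, a): 0.0 for s in states for a in actions}

def step(s, a):
    """Deterministic toy dynamics: 'go' switches state, 'stay' keeps it.
    Reward 1 only for choosing 'stay' while in state 1."""
    s2 = s if a == "stay" else 1 - s
    r = 1.0 if (s == 1 and a == "stay") else 0.0
    return s2, r

s = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda a2: Q[(s, a2)])
    s2, r = step(s, a)
    target = r + GAMMA * max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
    s = s2

print(max(actions, key=lambda a2: Q[(1, a2)]))  # learned policy in state 1
```

After training, the greedy policy stays in state 1 to collect reward, and Q(1, stay) approaches the optimal value 1 / (1 − 0.9) = 10.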
Mon Apr 6th Guest Lecture (TBD)
Wed Apr 8th Reinforcement learning with human feedback (student presentations)
  • Combining Manual Feedback with Subsequent MDP Reward Signals for Reinforcement Learning, Knox and Stone.
  • Interactive Learning from Policy-Dependent Human Feedback, MacGlashan et al.
Mon Apr 13th Integrating learning and planning in HRI (student presentations)
  • Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks, Nikolaidis et al.
  • Planning with Trust for Human-Robot Collaboration, Chen et al.
Wed Apr 15th Optimal teaching (student presentations)
  • Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications, sections 1-6, Brown and Niekum
  • Algorithmic and human teaching of sequential decision tasks, Cakmak and Lopes.
Mon Apr 20th Pedagogical reasoning (student presentations)
  • Cooperative inverse reinforcement learning, Hadfield-Menell et al.
  • Showing versus doing: Teaching by demonstration, Ho et al.
Wed Apr 22nd Communication and signaling (student presentations)
  • ConTaCT: Deciding to Communicate during Time-Critical Collaborative Tasks in Unknown, Deterministic Domains, Unhelkar and Shah
  • Implicit Communication in a Joint Action, Knepper et al.
Mon Apr 27th Project Presentation
Wed Apr 29th Project Presentation
Wed May 6th Final report due

Expectations: You can expect me to come to class on time, clearly communicate expectations for presentation structure, format and clarity, give you feedback in a timely manner, adjust lecture material based on performance on presentations and quizzes, and be available to meet regularly to discuss the progress of your project. In turn, I expect you to come to class on time, be attentive and engaged in class, take notes and ask questions when something is not clear, spend an adequate amount of time on the readings each week (at least 3 hours), and spend 60-80 hours on your final project.

Additional Policies: Please see the syllabus for the statement on academic conduct and student support systems. Unless you are assigned to compile lecture notes, please refrain from using laptops or other electronic devices during class.

Related Courses: You are encouraged to expand your readings from related courses, for example: