ROB 537: ANNOUNCEMENTS
- Welcome to the announcement page. Follow this page for up-to-date information for ROB 537.
- 9/20/2017: Course Introduction notes are posted on the website.
- 9/20/2017: HW1 has been posted, along with the necessary data files.
- 9/25/2017: Please follow this link to join the Robotics-seminar mailing list.
- 9/25/2017: Reading material:
Neural network architectures
Neural networks for classification.
- 10/02/2017: Reading Material: Evolutionary Algorithms for control
- 10/03/2017: HW2 has been posted, along with the necessary data files.
- 10/04/2017: Project Topic (due 10/09): 1-2 page intro that identifies the gap, highlights the need for a solution, describes the current difficulties in achieving that solution, and explains how the proposed solution addresses those difficulties. Provide enough detail to engage the reader in reading further.
- 10/04/2017: Project Background (due 10/23): 3-4 page document, in addition to the introduction, that provides a detailed analysis of current work, highlighting the strengths and limitations of each, along with a list of references.
- 10/09/2017: For the final paper, we will be using the NIPS 2017 format. A LaTeX shell is available here.
- 10/09/2017: Reading material:
Evolving Neural Network Ensembles for Control Problems
Robust Neuro-Control for a Micro Quadrotor.
- 10/09/2017: For Wednesday 10/11, be prepared to give a 1 minute "pitch" of your project that addresses the domain, why it's important, what the key problem is, and what you're going to do.
- 10/16/2017: Reading material:
Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion
Reinforcement Learning: A Survey.
- 10/23/2017: HW1 Samples 1 and 2.
- 10/23/2017: HW2 Samples 1 and 2.
- 10/23/2017: Transfer Learning for Reinforcement Learning Domains: A Survey.
- 10/27/2017: HW3 Clarification - For Question 1, report your results based on a single episode of learning. Each timestep in an episode consists of picking an arm based on your value table and policy, observing the reward for that arm, and updating your value table based on that reward observation. Compute the average reward obtained per arm pull for the 10-timestep and 100-timestep episodes, and use this metric to compare the two.
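The pick-observe-update loop described above can be sketched as follows. This is a minimal illustration, not the HW3 solution: the arm reward distributions, the epsilon-greedy policy, and the sample-average value update are all assumptions for the sake of example, since the assignment itself specifies the actual setup.

```python
import random

def run_episode(true_means, num_steps, epsilon=0.1, seed=0):
    """Run one bandit episode; return the average reward per arm pull."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    values = [0.0] * n_arms   # value table: estimated mean reward per arm
    counts = [0] * n_arms     # number of pulls per arm
    total_reward = 0.0
    for _ in range(num_steps):
        # Policy (assumed epsilon-greedy): explore with prob. epsilon,
        # otherwise pick the arm with the highest value-table entry.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        # Observe a noisy reward for the chosen arm (assumed Gaussian).
        reward = rng.gauss(true_means[arm], 1.0)
        # Incremental sample-average update of the value table.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return total_reward / num_steps

means = [0.2, 0.5, 0.9]  # hypothetical true arm means
avg_10 = run_episode(means, 10)
avg_100 = run_episode(means, 100)
print(f"avg reward per pull, 10-step episode:  {avg_10:.3f}")
print(f"avg reward per pull, 100-step episode: {avg_100:.3f}")
```

The longer episode typically yields a higher average reward per pull, since the value table has more observations to converge on the best arm, which is the comparison Question 1 asks for.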
- 10/30/2017: Reading Material: Deep Learning in Neural Networks: An Overview
- 11/04/2017: Midterm Announcement - The exam will be closed-notes; no reference sheet is allowed.
- 11/07/2017: HW4 Papers to Review - Paper 1 and Paper 2.
- 11/10/2017: Reading Material for Dr. Hollinger's Guest Lecture on 11/15: Protocol on Blinding Laser Weapons and ICRC Blinding Laser Weapons.
- 11/22/2017: Final Presentation Schedule: Presentation Schedule
- 11/22/2017: Best Paper Award: Learning Flocking Control in an Artificial Swarm