---
layout: page
title: Tartan SLAM Series
subtitle: An interactive series of talks, tutorials, and learning on SLAM
description: An interactive series of talks, tutorials, and learning on SLAM
show_sidebar: false
hide_footer: false
permalink: /tartanslamseries/
hero_image: /img/slam_series/tartanSLAMbanner2.png
image: /img/slam_series/title.png
---

# General Information

Check out our latest Fall Edition!

The goal of this series is to deepen the understanding of both newcomers to and experienced practitioners of SLAM. Sessions will include research talks as well as introductions to various themes of SLAM and thought-provoking, open-ended discussions. This is the inaugural series in a lineup of events aiming to foster fun, provocative discussions on robotics.

You can add the schedule to your Google calendar here or iCal here.

Sign up here to receive email updates and reminders about the presentations.

Join our Discord server here, where we host occasional Q&As on SLAM and share learning resources. Through this Discord, we aim to foster a fun and inclusive learning community for SLAM. Whether you are an expert or a newcomer, we invite you to join and help build the community.

Event Format: 40 min Talk & 20 min Open-ended Discussion

# Schedule

| Presenter | Session Title | Date/Time | YouTube Link |
| --- | --- | --- | --- |
| Sebastian Scherer<br>Associate Research Professor<br>Carnegie Mellon University | Challenges in SLAM: What's ahead<br>Outline and Links | 27 May 2021<br>12:30 PM EST | |
| Michael Kaess<br>Associate Research Professor<br>Carnegie Mellon University | Factor Graphs and Robust Perception<br>Outline and Links | 3 June 2021<br>4:00 PM EST | |
| Michael Milford<br>Professor in Electrical Engineering<br>Queensland University of Technology | Biologically-inspired SLAM: Where are we coming from and where could we go?<br>Outline and Links | 10 June 2021<br>4:00 PM EST | |
| Andrew Davison<br>Professor of Robot Vision<br>Imperial College London | Graph-based representations for Spatial-AI<br>Outline and Links | 1 July 2021<br>12:30 PM EST | |
| Guoquan (Paul) Huang<br>Associate Professor<br>University of Delaware | Visual-Inertial Estimation and Perception<br>Outline and Links | 8 July 2021<br>12:30 PM EST | |
| Luca Carlone<br>Assistant Professor, Department of Aeronautics and Astronautics<br>Massachusetts Institute of Technology | The Future of Robot Perception: Recent Progress and Opportunities Beyond SLAM<br>Outline and Links | 15 July 2021<br>12:30 PM EST | |
| John Leonard<br>Professor of Mechanical and Ocean Engineering<br>Massachusetts Institute of Technology | The Past, Present and Future of SLAM<br>Outline and Links | 22 July 2021<br>12:00 PM EST | |
| Krishna Murthy Jatavallabhula<br>PhD Candidate<br>Mila | Differentiable Programming for Spatial AI: Representation, Reasoning, and Planning<br>Outline and Links | 29 July 2021<br>12:30 PM EST | |
| Wenshan Wang & Shibo Zhao<br>AirLab<br>Carnegie Mellon University | Wenshan - Pushing the limits of Visual SLAM<br>Shibo - Super Odometry: Robust Localization and Mapping in Challenging Environments | 12 August 2021<br>12:30 PM EST | |
| Paloma Sodhi & Sudharshan Suresh<br>Robot Perception Lab<br>Carnegie Mellon University | Paloma - Learning in factor graphs for tactile perception<br>Sudharshan - Tactile SLAM: inferring object shape and pose through touch | 19 August 2021<br>12:30 PM EST | |
| Kevin Doherty<br>PhD Candidate<br>Massachusetts Institute of Technology | Robust Semantic SLAM: Representation and Inference | 26 August 2021<br>12:30 PM EST | |


# Organizers

- **Shibo Zhao** - PhD Candidate, Carnegie Mellon University
- **Nikhil Varma Keetha** - Robotics Institute Summer Scholar, IIT (ISM) Dhanbad
- **Wenshan Wang** - Scientist, Carnegie Mellon University
- **Henry Zhang** - Master's Student, Carnegie Mellon University
- **Brady Moon** - PhD Candidate, Carnegie Mellon University


# Session Contents

Challenges in SLAM: What's ahead   Expand Contents

Outline:

  • Overview of SLAM
  • Learning-based methods for SLAM
  • How do we handle the hard cases in SLAM? What are the challenges ahead?
  • Bridging the gap between dataset validation and real-world system deployment

Slides for the talk, including resources to get started with SLAM

Factor Graphs and Robust Perception   Expand Contents

Factor graphs have become a popular tool for modeling robot perception problems. Not only can they model the bipartite relationship between sensor measurements and variables of interest for inference, but they have also been instrumental in devising novel inference algorithms that exploit the spatial and temporal structure inherent in these problems.

I will start with a brief history of these inference algorithms and their applications. I will then discuss open challenges, in particular those related to robustness from the inference perspective, and present some recent steps towards more robust perception algorithms.

Slides for the talk
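
The bipartite variable/factor structure described in the abstract can be made concrete with a tiny toy example. The sketch below is hypothetical (a 1D pose chain, not the API of GTSAM or any other factor graph library): a prior factor and two odometry factors each touch only a small subset of the pose variables, and stacking one row per factor yields the sparse linear system a SLAM back-end solves.

```python
import numpy as np

# Toy 1D pose chain: three poses x0, x1, x2.
# Factors (the "square" nodes of the bipartite graph), each touching
# only the variables it constrains:
prior = ([0], 0.0)                  # prior factor: x0 ~ 0
odometry = [([0, 1], 1.0),          # odometry factor: x1 - x0 ~ 1
            ([1, 2], 1.0)]          # odometry factor: x2 - x1 ~ 1

# Stack one row per factor into a linear system A x = b.
A = np.zeros((3, 3))
b = np.zeros(3)
A[0, prior[0][0]] = 1.0
b[0] = prior[1]
for row, ((i, j), z) in enumerate(odometry, start=1):
    A[row, i], A[row, j] = -1.0, 1.0   # encodes x_j - x_i = z
    b[row] = z

# Least-squares solve recovers the pose estimates.
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(x)  # -> approximately [0. 1. 2.]
```

Real systems solve the nonlinear version of this problem, often incrementally, and the inference algorithms discussed in the talk exploit exactly this sparsity.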

Biologically-inspired SLAM   Expand Contents

In this session, Prof. Michael Milford discusses five key questions and open research areas in the bio-inspired mapping, navigation, and SLAM area, linking into past and recent neuroscience and biological discoveries:

  • The Loop Closure Question
  • The 3D Question
  • The Probabilistic Question
  • The Multi-Scale Question
  • The Behavioural Question

Prof. Milford also presents an objective take on opportunities and mysteries in the area that recognizes the practical realities and requirements of modern-day SLAM applications.

Slides for the talk

Graph-based representations for Spatial-AI   Expand Contents

To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic 'Spatial AI' perception capability.

Andrew will give many examples from his group's work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors that form full systems; Andrew will also cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.


Visual-Inertial Estimation and Perception   Expand Contents

Enabling centimeter-accuracy positioning and human-like scene understanding for autonomous vehicles and mobile devices holds potentially huge implications for practical applications. Optimal fusion of visual and inertial sensors provides a popular means of navigating in 3D, in part because of their complementary sensing modalities and their reduced cost and size.

In this talk, I will present our recent research efforts on visual-inertial estimation and perception. I will first discuss the observability-based methodology for consistent state estimation in the context of simultaneous localization and mapping (SLAM) and visual-inertial navigation systems (VINS), and then highlight some of our recent results on visual-inertial estimation, including OpenVINS, inertial preintegration for graph-based VINS, robocentric visual-inertial odometry, Schmidt-EKF for visual-inertial SLAM with deep loop closures, visual-inertial moving object tracking, and many others.


The Future of Robot Perception   Expand Contents

Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. For instance, while humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics.

This talk discusses recent efforts targeted at bridging this gap. I present our recent work on high-level scene understanding and hierarchical representations, including Kimera and 3D Dynamic Scene Graphs, and discuss their potential impact on planning and decision-making, human-robot interaction, long-term autonomy, and scene prediction. The creation of a 3D Dynamic Scene Graph requires a variety of algorithms, ranging from model-based estimation to deep learning, and offers new opportunities to both researchers and practitioners. Similar to the role played by occupancy grid maps or landmark-based maps in the past, 3D Dynamic Scene Graphs offer a new, general, and powerful representation, and the grand challenge of designing Spatial Perception Engines that can estimate 3D Scene Graphs in real-time from sensor data has the potential to spark new research ideas and can push the community outside the “SLAM comfort zone”.
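
The hierarchical idea behind 3D Dynamic Scene Graphs can be sketched with a minimal data structure. The example below is a hypothetical illustration (the names are invented, not the Kimera API): nodes live in layers such as building, room, and object, edges link parents to children, and queries traverse the hierarchy.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """One node of a layered scene graph."""
    name: str
    layer: str                       # e.g. "building", "room", "object"
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

def objects_in(node):
    """Recursively collect the names of object-layer nodes below `node`."""
    found = []
    for c in node.children:
        if c.layer == "object":
            found.append(c.name)
        found.extend(objects_in(c))
    return found

# Build a tiny graph: building -> room -> objects.
building = SceneNode("office", "building")
kitchen = building.add(SceneNode("kitchen", "room"))
kitchen.add(SceneNode("mug", "object"))
kitchen.add(SceneNode("table", "object"))

print(objects_in(building))  # -> ['mug', 'table']
```

A real Spatial Perception Engine would additionally attach geometry, semantics, and time-varying state (e.g. agents) to each node, which is what makes the representation "dynamic".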


The Past, Present and Future of SLAM   Expand Contents

Slide Deck

Differentiable Programming for Spatial AI: Representation, Reasoning, and Planning   Expand Contents

Over the last four decades, research in SLAM and spatial AI has revolved around the question of map "representation". Whereas "classical" techniques in the SLAM community have focused on building general-purpose (but handcrafted) representations, modern gradient-based learning techniques have focused on building representations specialized to a set of downstream tasks of interest. Krishna postulates that a flexible blend of "classical" and learned methods is the most promising path to developing flexible, interpretable, and actionable models of the world: a necessity for intelligent embodied agents.

In this talk, Krishna will present two recent research efforts that tightly integrate spatial representations with gradient-based learning:

  1. GradSLAM - a fully differentiable dense SLAM system that can be plugged as a "layer" into neural networks
  2. Taskography - a differentiable sparsification mechanism to build relational abstractions for enabling efficient task planning over large 3D scene graphs
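
The idea of plugging SLAM into a learning pipeline as a differentiable "layer" can be illustrated with a deliberately tiny toy (hypothetical, not the GradSLAM API): the map estimate is a differentiable function of a pose parameter, so gradients of a downstream loss flow back through the mapping step and correct the pose.

```python
import numpy as np

# 1D toy: landmarks observed in a frame shifted by an unknown pose offset.
landmarks = np.array([1.0, 2.0, 3.0])   # "true" landmark positions
observed = landmarks - 0.5              # observations in the shifted frame

pose = 0.0                              # unknown frame offset to recover
lr = 0.1
for _ in range(200):
    mapped = observed + pose            # differentiable "mapping" step
    residual = mapped - landmarks       # downstream loss: sum of squares
    grad = 2.0 * residual.sum()         # d(loss)/d(pose), by hand here
    pose -= lr * grad                   # gradient step through the layer

print(round(pose, 3))  # -> 0.5
```

In a framework with automatic differentiation (e.g. PyTorch), the hand-written gradient above would come for free, which is what lets a dense SLAM system sit inside a larger neural network.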


<script>
function myFunction(buttonID, blockName) {
  var x = document.getElementById(blockName);
  if (x.style.display === "table") {
    x.style.display = "none";
  } else {
    x.style.display = "table";
  }
  var el = document.getElementById(buttonID);
  if (el.childNodes[0].nodeValue === "Expand Contents") {
    el.childNodes[0].nodeValue = "Collapse Contents";
  } else {
    el.childNodes[0].nodeValue = "Expand Contents";
  }
}
</script>