---
layout: page
title: Tartan SLAM Series
subtitle: An interactive series of talks, tutorials, and learning on SLAM
description: An interactive series of talks, tutorials, and learning on SLAM
show_sidebar: false
hide_footer: false
permalink: /tartanslamseries/
hero_image: /img/slam_series/tartanSLAMbanner2.png
image: /img/slam_series/title.png
---
Check out our latest Fall Edition!
The goal of this series is to deepen the understanding of SLAM for newcomers and experienced researchers alike. Sessions include research talks, introductions to various themes in SLAM, and thought-provoking, open-ended discussions. This is the inaugural series in a lineup of events aiming to foster fun, provocative discussions on robotics.
You can add the schedule to your Google Calendar here or iCal here.
Sign up here to receive email updates and reminders about the presentations.
Join our Discord server here, where we host occasional Q&A sessions on SLAM and share learning resources. Through this Discord, we aim to foster a fun and inclusive learning community for SLAM. Whether you are an expert or a newcomer, we invite you to join and help build the community.
Event Format: 40 min Talk & 20 min Open-ended Discussion
| Presenter | Session Title | Date/Time | YouTube Link |
|---|---|---|---|
| Associate Research Professor, Carnegie Mellon University | Challenges in SLAM: What's ahead | 27 May 2021, 12:30 PM EST | |
| Associate Research Professor, Carnegie Mellon University | Factor Graphs and Robust Perception | 3 June 2021, 4:00 PM EST | |
| Professor in Electrical Engineering, Queensland University of Technology | Biologically-inspired SLAM: Where are we coming from and where could we go? | 10 June 2021, 4:00 PM EST | |
| Professor of Robot Vision, Imperial College London | Graph-based representations for Spatial-AI | 1 July 2021, 12:30 PM EST | |
| Associate Professor, University of Delaware | Visual-Inertial Estimation and Perception | 8 July 2021, 12:30 PM EST | |
| Assistant Professor, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology | The Future of Robot Perception: Recent Progress and Opportunities Beyond SLAM | 15 July 2021, 12:30 PM EST | |
| Professor of Mechanical and Ocean Engineering, Massachusetts Institute of Technology | The Past, Present and Future of SLAM | 22 July 2021, 12:00 PM EST | |
| PhD Candidate, Mila | Differentiable Programming for Spatial AI: Representation, Reasoning, and Planning | 29 July 2021, 12:30 PM EST | |
| AirLab, Carnegie Mellon University | Wenshan - Pushing the limits of Visual SLAM; Shibo - Super Odometry: Robust Localization and Mapping in Challenging Environments | 12 August 2021, 12:30 PM EST | |
| Paloma Sodhi & Sudharshan Suresh, Robot Perception Lab, Carnegie Mellon University | Paloma - Learning in factor graphs for tactile perception; Sudharshan - Tactile SLAM: inferring object shape and pose through touch | 19 August 2021, 12:30 PM EST | |
| PhD Candidate, Massachusetts Institute of Technology | Robust Semantic SLAM: Representation and Inference | 26 August 2021, 12:30 PM EST | |
# Session Contents
## Factor Graphs and Robust Perception

Factor graphs have become a popular tool for modeling robot perception problems. Not only can they model the bipartite relationship between sensor measurements and variables of interest for inference, but they have also been instrumental in devising novel inference algorithms that exploit the spatial and temporal structure inherent in these problems. I will start with a brief history of these inference algorithms and relevant applications. I will then discuss open challenges, in particular around robustness from the inference perspective, and some recent steps towards more robust perception algorithms. Slides for the talk
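The bipartite measurement-variable structure described above can be made concrete with a toy example. The sketch below is plain NumPy, not any particular SLAM library: it builds a 1-D pose graph with one prior factor and three odometry/loop-closure factors, stacks each factor's weighted residual into a linear least-squares problem, and solves it. Real factor-graph solvers handle nonlinear residuals and exploit the sparsity this structure induces.

```python
import numpy as np

def solve_pose_graph(priors, odometry, n_poses):
    """Solve a toy 1-D pose graph as weighted linear least squares.
    priors:   list of (pose_index, measured_value, sigma)
    odometry: list of (i, j, measured_delta, sigma), meaning x_j - x_i ~ delta
    """
    rows, rhs = [], []
    for idx, z, sigma in priors:                 # each prior pins one pose
        row = np.zeros(n_poses)
        row[idx] = 1.0 / sigma
        rows.append(row)
        rhs.append(z / sigma)
    for i, j, dz, sigma in odometry:             # each factor relates two poses
        row = np.zeros(n_poses)
        row[i], row[j] = -1.0 / sigma, 1.0 / sigma
        rows.append(row)
        rhs.append(dz / sigma)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)    # minimize ||A x - b||^2
    return x

# Three poses, consistent odometry, plus a slightly conflicting loop closure
# (0 -> 2 measured as 2.1 instead of 2.0): the solver spreads the error.
poses = solve_pose_graph(
    priors=[(0, 0.0, 0.1)],
    odometry=[(0, 1, 1.0, 0.2), (1, 2, 1.0, 0.2), (0, 2, 2.1, 0.2)],
    n_poses=3,
)
print(poses)  # approximately [0.0, 1.033, 2.067]
```

Note how the loop-closure discrepancy is distributed across the trajectory rather than dumped onto the last pose, which is exactly the behavior a robust back-end builds on.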
## Biologically-inspired SLAM: Where are we coming from and where could we go?

In this session, Prof. Michael Milford discusses five key questions and open research areas in bio-inspired mapping, navigation, and SLAM, linking them to past and recent discoveries in neuroscience and biology. Prof. Milford also presents an objective take on opportunities and mysteries in the area, one that recognizes the practical realities and requirements of modern-day SLAM applications. Slides for the talk
## Graph-based representations for Spatial-AI

To enable the next generation of smart robots and devices that can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic 'Spatial AI' perception capability. Andrew will give many examples from his group's work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to achieve the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors that form full systems, and Andrew will cover research on vision algorithms for non-standard visual sensors and graph-based computing architectures.
## Visual-Inertial Estimation and Perception

Enabling centimeter-accurate positioning and human-like scene understanding for autonomous vehicles and mobile devices holds huge potential for practical applications. Optimal fusion of visual and inertial sensors provides a popular means of navigating in 3D, in part because of their complementary sensing modalities and their reduced cost and size. In this talk, I will present our recent research efforts on visual-inertial estimation and perception. I will first discuss the observability-based methodology for consistent state estimation in the context of simultaneous localization and mapping (SLAM) and visual-inertial navigation systems (VINS), and then highlight some of our recent results on visual-inertial estimation, including OpenVINS, inertial preintegration for graph-based VINS, robocentric visual-inertial odometry, the Schmidt-EKF for visual-inertial SLAM with deep loop closures, visual-inertial moving-object tracking, and many others.
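The idea behind the inertial preintegration mentioned above can be illustrated in one dimension. The toy sketch below (an illustration, not OpenVINS code) compresses many high-rate accelerometer samples between two camera keyframes into a single relative velocity/position change; a graph optimizer can then reuse this "pseudo-measurement" without re-integrating the raw IMU stream every time the linearization point moves.

```python
def preintegrate_1d(accels, dt):
    """Integrate 1-D accelerometer samples into relative velocity and
    position changes, expressed independently of the starting state."""
    dv, dp = 0.0, 0.0
    for a in accels:
        dp += dv * dt + 0.5 * a * dt * dt  # position change over this step
        dv += a * dt                       # velocity change over this step
    return dv, dp

# 100 samples at 200 Hz of constant 2 m/s^2 acceleration, i.e. 0.5 s of data.
dv, dp = preintegrate_1d([2.0] * 100, dt=0.005)
print(dv, dp)  # closed form: dv = 2 * 0.5 = 1.0 m/s, dp = 0.5 * 2 * 0.5**2 = 0.25 m
```

Real preintegration (e.g. for graph-based VINS) additionally works on SO(3) for rotation, tracks the measurement covariance, and stores Jacobians with respect to the IMU biases so the compressed term can be corrected when bias estimates change.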
## The Future of Robot Perception: Recent Progress and Opportunities Beyond SLAM

Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. For instance, while humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics.

This talk discusses recent efforts targeted at bridging this gap. I present our recent work on high-level scene understanding and hierarchical representations, including Kimera and 3D Dynamic Scene Graphs, and discuss their potential impact on planning and decision-making, human-robot interaction, long-term autonomy, and scene prediction. The creation of a 3D Dynamic Scene Graph requires a variety of algorithms, ranging from model-based estimation to deep learning, and offers new opportunities to both researchers and practitioners. Similar to the role played by occupancy grid maps or landmark-based maps in the past, 3D Dynamic Scene Graphs offer a new, general, and powerful representation, and the grand challenge of designing Spatial Perception Engines that can estimate 3D Scene Graphs in real-time from sensor data has the potential to spark new research ideas and can push the community outside the "SLAM comfort zone".
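To make the layered-representation idea concrete, here is a minimal sketch in the spirit of 3D Dynamic Scene Graphs. This is a hypothetical data structure for illustration, not the Kimera API: nodes live in named layers (building, room, object), and edges encode relations such as containment between layers.

```python
from collections import defaultdict

class SceneGraph:
    """A toy layered scene graph: layers hold attributed nodes,
    and directed edges relate nodes across (or within) layers."""

    def __init__(self):
        self.layers = defaultdict(dict)  # layer name -> {node id: attributes}
        self.edges = []                  # (parent id, child id, relation)

    def add_node(self, layer, node_id, **attrs):
        self.layers[layer][node_id] = attrs

    def add_edge(self, parent, child, relation="contains"):
        self.edges.append((parent, child, relation))

    def children(self, parent):
        return [c for p, c, _ in self.edges if p == parent]

# Building -> room -> object hierarchy, with metric attributes on nodes.
g = SceneGraph()
g.add_node("building", "b0")
g.add_node("room", "kitchen", centroid=(2.0, 1.0, 0.0))
g.add_node("object", "mug", pose=(2.1, 0.8, 0.9))
g.add_edge("b0", "kitchen")
g.add_edge("kitchen", "mug")
print(g.children("kitchen"))  # -> ['mug']
```

A planner can then query at the abstraction level it needs ("which room contains the mug?") instead of reasoning over raw occupancy cells, which is one of the payoffs the talk highlights.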
## The Past, Present and Future of SLAM

Slide Deck
## Differentiable Programming for Spatial AI: Representation, Reasoning, and Planning

Over the last four decades, research in SLAM and spatial AI has revolved around the question of map "representation". Where "classical" techniques in the SLAM community have focused on building general-purpose but handcrafted representations, modern gradient-based learning techniques have focused on building representations specialized to a set of downstream tasks of interest. Krishna postulates that a flexible blend of "classical" and learned methods is the most promising path to developing flexible, interpretable, and actionable models of the world: a necessity for intelligent embodied agents. In this talk, Krishna will present two recent research efforts that tightly integrate spatial representations with gradient-based learning.
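A minimal sketch of what "gradient-based" means for a spatial estimation problem, under assumptions of my own (this is a generic toy, not the speaker's system): we recover a 2-D landmark position by gradient descent on squared range residuals, writing out the gradient by hand that an autodiff framework would otherwise derive for us.

```python
import numpy as np

# Three known sensor positions and noiseless range measurements to a landmark.
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_landmark = np.array([1.0, 2.0])
ranges = np.linalg.norm(sensors - true_landmark, axis=1)

x = np.array([3.0, 3.0])                      # initial guess
for _ in range(2000):
    diffs = x - sensors                       # (3, 2) vectors sensor -> guess
    dists = np.linalg.norm(diffs, axis=1)     # predicted ranges
    resid = dists - ranges                    # range residuals
    grad = (resid / dists) @ diffs            # gradient of 0.5 * sum(resid^2)
    x -= 0.05 * grad                          # plain gradient step
print(x)  # converges near [1.0, 2.0]
```

The point of differentiable programming is that this loop composes: the same gradients can flow through a renderer, a map, or a planner, letting representation, reasoning, and planning share one optimization machinery.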
<script>
function myFunction(buttonID, blockName) {
  var x = document.getElementById(blockName);
  if (x.style.display === "table") {
    x.style.display = "none";
  } else {
    x.style.display = "table";
  }
  var el = document.getElementById(buttonID);
  if (el.childNodes[0].nodeValue === "Expand Contents") {
    el.childNodes[0].nodeValue = "Collapse Contents";
  } else {
    el.childNodes[0].nodeValue = "Expand Contents";
  }
}
</script>