Safer Multimodal Teleoperation of (Simulated) Robots #163

Open
Remi-Gau opened this issue Dec 14, 2022 · 0 comments

Remi-Gau commented Dec 14, 2022

Added as an issue for bookkeeping.

Source: https://github.com/BrainhackWestern/BrainhackWestern.github.io/wiki/Projects

Team Leaders:

Pranshu Malik (@pranshumalik14)

Being confused, freezing, or panicking while trying hard to stop, redirect, or stabilize a drone (or any such robot or toy) in sudden, counter-intuitive poses or environmental conditions is likely a relatable experience for all of us. The idea is to enhance the expression of our intent while controlling a robot remotely, whether in real life or in simulation on a computer screen. Rather than replacing the primary control modality, we would also integrate our brain states (thought) into the control loop, as measured, for example, through EEG. Within the scope of the hackathon, this could mean developing a brain-machine interface that automatically assists the operator in emergencies with "smart" control-command reflexes or "takeovers". Such an approach can be beneficial in high-risk settings such as remote handling of materials in nuclear facilities, and it can also aid the supervision of autonomous control, say in the context of self-driving cars, to ultimately increase safety.
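
As a rough illustration of what a "smart reflex" or partial takeover could look like, here is a minimal Python sketch (not part of the project plan; the function names, the surprise signal, and the threshold are all hypothetical) that blends the operator's command with a stabilizing controller according to an EEG-derived surprise probability:

```python
import numpy as np

def blend_commands(user_cmd, reflex_cmd, surprise_prob, threshold=0.5):
    """Blend the operator's command with a stabilizing 'reflex' command.

    surprise_prob is a hypothetical probability in [0, 1] that the operator is
    surprised or confused, e.g. decoded from EEG. Above the threshold, control
    authority shifts smoothly toward the reflex controller; below it, the
    operator keeps full control.
    """
    user_cmd = np.asarray(user_cmd, dtype=float)
    reflex_cmd = np.asarray(reflex_cmd, dtype=float)
    # Map surprise above the threshold to a blending weight alpha in [0, 1].
    alpha = np.clip((surprise_prob - threshold) / (1.0 - threshold), 0.0, 1.0)
    return (1.0 - alpha) * user_cmd + alpha * reflex_cmd

# Example: a drone velocity command while the decoder reports high surprise.
operator = [1.0, 0.0, -2.0]   # operator pushing the drone down fast
stabilize = [0.0, 0.0, 0.5]   # reflex controller requesting a gentle climb
print(blend_commands(operator, stabilize, surprise_prob=0.9))
```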

For now, we could pick a particular type of simulated robot (industrial arm, RC car, drone, etc.) and focus on designing and implementing a paradigm for characterizing intended motion and surprise during undesired motion, in both autonomous cases (no user control, only the robot's self- and environmental influences) and semi-autonomous cases (including the user's control commands). In other words, we can aim to measure intent and surprise given the user's control commands, the brain states, and the robot states during algorithmically curated episodes of robot motion. This will help us detect such situations and infer the desired reactions, so that control commands can be adjusted accordingly during emergencies and, more generally, so that real-time active control can be augmented to match the desired motion. We can strive to keep the approach general enough to transfer to robots of other types and/or morphologies and to more unusual environments.
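
To make the measurement side of the paradigm concrete, below is a minimal sketch of a baseline "surprise" decoder over combined brain, robot, and command features. The feature choices, dimensions, and the use of logistic regression are assumptions for illustration only, and synthetic placeholder data stands in for recorded trials labeled by the curated perturbation schedule:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-trial features: a few EEG band-power values, the robot's
# state error (e.g. deviation from the commanded trajectory), and the
# magnitude of the user's control command. Random data is a placeholder.
n_trials = 200
eeg_power = rng.normal(size=(n_trials, 4))
robot_error = rng.normal(size=(n_trials, 1))
cmd_magnitude = rng.normal(size=(n_trials, 1))
X = np.hstack([eeg_power, robot_error, cmd_magnitude])

# In the real paradigm, labels would come from the curated perturbation
# schedule: 1 for trials where the robot was deliberately perturbed
# (surprise expected), 0 during nominal motion. Here they are random.
y = rng.integers(0, 2, size=n_trials)

# A simple baseline decoder of "surprise" from brain, robot, and command data.
clf = LogisticRegression().fit(X, y)
surprise_prob = clf.predict_proba(X[:1])[0, 1]
print(f"Estimated surprise probability for the first trial: {surprise_prob:.2f}")
```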
