Currently, the web interface presents buttons on the website and allows users to "click" to teleoperate the robot. Besides using a typical mouse, operators can use eye-tracking software, sip-and-puff devices, voice recognition with grid overlays, and other accessibility tools to move the cursor and "click" these buttons.
We can offer other forms of joint-level teleop that are faster than clicking with the cursor. I'd like to propose adding:
Keyboard Teleop: A mapping from keys on the keyboard to buttons on the website. This mapping can be customizable (e.g., to support single-handed teleop by mapping to keys on one side of the keyboard). We can support the three action modes of Step-Actions, Press-And-Hold, and Click-Click by listening to the callbacks for key presses and releases. Because multiple keys can be pressed at once, we're enabling multi-joint teleop, and because keys are spaced close together, we enable faster teleop by cutting out the time required to move the cursor. Keyboard Teleop and regular Mouse Teleop can happen together; they are not mutually exclusive.
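A minimal sketch of what the Press-And-Hold mode might look like, assuming hypothetical `startJointMotion`/`stopJointMotion` helpers that wrap whatever the existing on-screen buttons call, and an illustrative key-to-joint mapping:

```typescript
// Hypothetical stand-ins for whatever the existing buttons invoke.
declare function startJointMotion(joint: string, direction: 1 | -1): void;
declare function stopJointMotion(joint: string): void;

type JointCommand = { joint: string; direction: 1 | -1 };

// Illustrative, user-customizable mapping (e.g., remappable to one side
// of the keyboard for single-handed teleop).
const keyMap: Record<string, JointCommand> = {
  w: { joint: "lift", direction: 1 },
  s: { joint: "lift", direction: -1 },
  a: { joint: "arm", direction: -1 },
  d: { joint: "arm", direction: 1 },
};

const held = new Set<string>();

// Press-And-Hold: start motion on keydown, stop on keyup. Tracking held
// keys in a Set gives multi-joint teleop for free and filters out the
// auto-repeat keydown events browsers fire while a key stays down.
document.addEventListener("keydown", (e) => {
  const cmd = keyMap[e.key];
  if (!cmd || held.has(e.key)) return;
  held.add(e.key);
  startJointMotion(cmd.joint, cmd.direction);
});

document.addEventListener("keyup", (e) => {
  const cmd = keyMap[e.key];
  if (!cmd || !held.has(e.key)) return;
  held.delete(e.key);
  stopJointMotion(cmd.joint);
});
```

Step-Actions and Click-Click would reuse the same listeners with different start/stop semantics.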
Gamepad Teleop: A mapping from the joysticks, triggers, and buttons on an Xbox controller to the joints on the robot. This mapping does not have to be customizable, and likely shouldn't be. The behavior of the controller's buttons would likely be independent of the action mode selected by the user. Once again, this type of teleop enables multi-joint teleop and can be faster than Mouse Teleop alone. Based on how I suspect we'd implement this, Gamepad Teleop and other forms of teleop would likely be mutually exclusive.
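A minimal sketch of the polling side using the browser Gamepad API, with an illustrative axis-to-joint mapping and a hypothetical `setJointVelocity` helper:

```typescript
// Hypothetical helper that streams a velocity command to the robot.
declare function setJointVelocity(joint: string, velocity: number): void;

const DEADZONE = 0.15; // tuning value; ignores stick drift near center

function applyDeadzone(v: number): number {
  return Math.abs(v) < DEADZONE ? 0 : v;
}

// The Gamepad API is poll-based: read the latest state each frame.
function pollGamepad(): void {
  const pad = navigator.getGamepads()[0];
  if (pad) {
    // Standard mapping: axes 0/1 are the left stick, 2/3 the right stick.
    // The joint assignments here are illustrative, not a proposal.
    setJointVelocity("base_rotation", applyDeadzone(pad.axes[0]));
    setJointVelocity("base_translation", -applyDeadzone(pad.axes[1]));
    setJointVelocity("lift", -applyDeadzone(pad.axes[3]));
    // Triggers are analog buttons 6 and 7 in the standard mapping.
    setJointVelocity("gripper", pad.buttons[7].value - pad.buttons[6].value);
  }
  requestAnimationFrame(pollGamepad);
}

window.addEventListener("gamepadconnected", () =>
  requestAnimationFrame(pollGamepad)
);
```

Because the Gamepad API is polled in its own loop rather than driven by DOM events, reconciling it with the click-based modes would take extra care, which matches the suspicion above that Gamepad Teleop would be mutually exclusive with the other forms.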
One benefit of supporting the Xbox layout (instead of the PlayStation layout) is that we would also be supporting the Xbox Adaptive Controller, a customizable input hub that emulates a standard Xbox controller but lets users bring whichever input modalities work for them.
We can also support greater integration with existing kinds of teleop:
Force-field Eye-tracking Teleop: Similar to the Apple Vision Pro's gaze-based targeting, in this kind of teleop the buttons and UI elements "attract" the user's cursor. This reduces the cognitive load on the user by diminishing the need for precise positioning of the cursor.
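One wrinkle: a web page cannot reposition the hardware cursor, so in the browser the "attraction" would likely have to act on the effective target rather than on the pointer itself. A minimal sketch under that assumption, with a hypothetical `teleop` button class, an `attracted` highlight class, and a made-up attraction radius:

```typescript
const ATTRACTION_RADIUS = 60; // px; a tuning value, not a measured one

// Return the teleop button whose center is nearest the pointer, if any
// lies within the attraction radius.
function attractedTarget(x: number, y: number): HTMLElement | null {
  let best: HTMLElement | null = null;
  let bestDist = ATTRACTION_RADIUS;
  for (const el of document.querySelectorAll<HTMLElement>("button.teleop")) {
    const r = el.getBoundingClientRect();
    const dist = Math.hypot(x - (r.left + r.width / 2), y - (r.top + r.height / 2));
    if (dist < bestDist) {
      bestDist = dist;
      best = el;
    }
  }
  return best;
}

// Pre-select the attracted button as the pointer moves; a click anywhere
// inside the field could then be routed to the highlighted button.
document.addEventListener("pointermove", (e) => {
  const target = attractedTarget(e.clientX, e.clientY);
  document.querySelectorAll(".attracted").forEach((el) => el.classList.remove("attracted"));
  target?.classList.add("attracted");
});
```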
Voice Teleop: There are different levels of voice teleop that we could support. At the lowest level, there's joint-level teleop, where we have a mapping from phrases to buttons on the website. At the highest level, there's task-level teleop, where the user describes at a high level what they'd like the robot to accomplish.
I suspect that even the low-level version of this would enable faster teleop than the Voice Control Grid method.
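A minimal sketch of the joint-level version using the Web Speech API (exposed as `webkitSpeechRecognition` in Chromium-based browsers); the phrase map and the `triggerButton` helper are illustrative assumptions:

```typescript
// Hypothetical helper that fires the same handler as clicking the button.
declare function triggerButton(buttonId: string): void;

// Illustrative phrase-to-button mapping.
const phraseMap: Record<string, string> = {
  "lift up": "lift-up-btn",
  "lift down": "lift-down-btn",
  "arm out": "arm-extend-btn",
  "arm in": "arm-retract-btn",
  "stop": "stop-btn",
};

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true; // keep listening between commands
recognition.onresult = (event: any) => {
  // Take the latest recognized phrase and look it up in the map.
  const result = event.results[event.results.length - 1];
  const phrase = result[0].transcript.trim().toLowerCase();
  const buttonId = phraseMap[phrase];
  if (buttonId) triggerButton(buttonId);
};
recognition.start();
```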
Touch Teleop: Touchscreens enable multi-touch and easy detection of touch presses vs. releases. This type of teleop would be similar to Keyboard Teleop, except that while physical keys are designed to be easy to press, UI elements on the screen can be difficult to touch accurately. One way to deal with this is to extend the hit box of a button beyond the button's rendered size on screen.
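A minimal sketch of the extended hit box idea, again assuming a hypothetical `teleop` button class and press/release helpers; each touch is tracked by its identifier so multi-touch presses and releases stay paired:

```typescript
// Hypothetical stand-ins for the button press/release handlers.
declare function onButtonPress(el: HTMLElement): void;
declare function onButtonRelease(el: HTMLElement): void;

const HIT_MARGIN = 24; // px of invisible slop around each button (a guess)

// True if the touch point falls within the button's rect grown by HIT_MARGIN.
function hitTest(el: HTMLElement, x: number, y: number): boolean {
  const r = el.getBoundingClientRect();
  return x >= r.left - HIT_MARGIN && x <= r.right + HIT_MARGIN &&
         y >= r.top - HIT_MARGIN && y <= r.bottom + HIT_MARGIN;
}

const active = new Map<number, HTMLElement>(); // touch identifier -> button

document.addEventListener("touchstart", (e) => {
  for (const t of Array.from(e.changedTouches)) {
    for (const el of document.querySelectorAll<HTMLElement>("button.teleop")) {
      if (hitTest(el, t.clientX, t.clientY)) {
        active.set(t.identifier, el);
        onButtonPress(el);
        break;
      }
    }
  }
});

function release(e: TouchEvent): void {
  for (const t of Array.from(e.changedTouches)) {
    const el = active.get(t.identifier);
    if (el) {
      active.delete(t.identifier);
      onButtonRelease(el);
    }
  }
}
document.addEventListener("touchend", release);
document.addEventListener("touchcancel", release);
```

One caveat with expanded hit boxes: neighboring buttons' margins can overlap, so ties may need to resolve to the nearest button center.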
I'd be curious to hear your thoughts on feasibility, implementation, and priority for this project.