As mentioned in #2, there are a few different ways of specifying the grasp pose. The maximum information we can get from a single 2D camera view is x, y, theta, and width.
Extracting x, y, and theta is pretty thoroughly tested, but getting the width as well is more complex:
Preset width buttons (similar to speed presets in stretch_web_interface)
Take the mouse release position into account for the SE2 press/release interaction to get a distance
User specifies the locations of both fingers
User specifies the center of the grasp and the width of the fingers
I am updating the TargetAnchor component to support these different methods so we can test them.
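As a rough sketch, the press/release (drag) option could derive the full spec from the two endpoints, treating them as the finger locations. The `GraspSpec` type and `graspFromDrag` helper below are hypothetical names for illustration, not part of the existing code:

```ts
// Hypothetical shape of the data the UI would hand off for grasp generation.
interface GraspSpec {
  x: number;      // image-space x of the grasp center (pixels)
  y: number;      // image-space y of the grasp center (pixels)
  theta: number;  // rotation of the gripper axis (radians)
  width: number;  // distance between the fingers (pixels)
}

// Derive the full spec from a press/release drag, treating the two
// endpoints as the two finger locations.
function graspFromDrag(
  press: { x: number; y: number },
  release: { x: number; y: number }
): GraspSpec {
  const dx = release.x - press.x;
  const dy = release.y - press.y;
  return {
    x: (press.x + release.x) / 2,  // center is the midpoint of the drag
    y: (press.y + release.y) / 2,
    theta: Math.atan2(dy, dx),     // gripper axis aligned with the drag
    width: Math.hypot(dx, dy),     // drag length becomes the grasp width
  };
}
```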
Additionally, there is the problem of giving the user feedback: the more automated the grasp, the higher the chance of failure. The library I am using for point cloud processing (Open3D) can save images of the 3D viewport, so we could send those back to the web interface and display them to the user. Picking a camera angle where both the object and the grasp are visible is difficult, and the grasp is rendered as gray boxes rather than a model of the gripper itself, which may be confusing. If there is a way to screenshot RViz and upload that, it might be easier?
After looking into this a bit more, it hosts its own server, which we would need to somehow embed or combine with ours. It is probably easier and more controllable to just send one image at a time over ROS when we are ready (i.e. when the grasps have finished generating), as opposed to WebRTC, which would stream the entire generation process.
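On the web-interface side, receiving and showing one image at a time could look something like the sketch below, assuming the preview arrives as a sensor_msgs/CompressedImage over rosbridge/roslibjs; the topic name /grasp_preview/compressed and the img element id are made up for illustration:

```ts
import ROSLIB from "roslib";

// Connect to the rosbridge websocket the interface talks to ROS through.
const ros = new ROSLIB.Ros({ url: "ws://localhost:9090" });

// Hypothetical topic the grasp generator would publish a single rendered
// preview image on once grasp generation finishes.
const graspPreview = new ROSLIB.Topic({
  ros,
  name: "/grasp_preview/compressed",
  messageType: "sensor_msgs/CompressedImage",
});

graspPreview.subscribe((message: any) => {
  // rosbridge encodes the uint8[] data field as a base64 string, so it can
  // be dropped straight into an <img> element as a data URL.
  const img = document.getElementById("grasp-preview") as HTMLImageElement;
  img.src = `data:image/${message.format};base64,${message.data}`;
});
```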
Additionally, when dragging I found it much more intuitive to specify the left and right sides of an object, whereas with the buttons I found it more intuitive to specify the center of an object.
In either case, the data sent to the grasp generator is the left side, a width, and a rotation; there is some trig in SE2.tsx that lets the UI display the center while storing the left side (roughly as sketched below).
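That conversion is just offsetting half a width along the gripper axis; a minimal sketch, with made-up function names that may not match the ones actually in SE2.tsx:

```ts
interface Point { x: number; y: number }

// Recover the left-side point that is stored and sent to the grasp
// generator from the center the user sees: the left finger sits half a
// width away from the center along the gripper axis.
function centerToLeft(center: Point, width: number, theta: number): Point {
  return {
    x: center.x - (width / 2) * Math.cos(theta),
    y: center.y - (width / 2) * Math.sin(theta),
  };
}

// Inverse: compute the center to display from the stored left side.
function leftToCenter(left: Point, width: number, theta: number): Point {
  return {
    x: left.x + (width / 2) * Math.cos(theta),
    y: left.y + (width / 2) * Math.sin(theta),
  };
}
```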