Added Chisel to the perception module #281

Open · wants to merge 33 commits into master

Commits (33)
32908ae  Hacked module for Chisel (Feb 1, 2016)
d9869ec  Merge branch 'master' of https://github.com/personalrobotics/prpy int… (Feb 9, 2016)
17b4c4f  For chisel module (Feb 26, 2016)
9aca27c  Merge branch 'master' into perception/chisel (jeking04, Feb 26, 2016)
199d542  Updating package to work with single call to DetectObject (jeking04, Feb 26, 2016)
bde5ebf  Working on transforming chisel mesh to herb frame (jeking04, Feb 26, 2016)
13227e5  Flipping to correct transform (jeking04, Feb 26, 2016)
2457645  Cleaning up chisel class. Adding kinbody_helper class. Currently cont… (jeking04, Mar 1, 2016)
3e3c93f  Updating vncc to also use the kinbody_helper. (jeking04, Mar 1, 2016)
f38ad0e  Updating simtrack to use kinbody_helper (jeking04, Mar 17, 2016)
abfd92e  Working on integrating simtrack object tracking with prpy (jeking04, Mar 18, 2016)
debef02  Adding ability to bypass tf lookup during tracking (jeking04, Mar 18, 2016)
86a9259  Some serialization fixes (Mar 22, 2016)
ee1c333  Merge branch 'perception/chisel' of https://github.com/personalroboti… (Mar 22, 2016)
3f54fe9  fixing bugs in chisel refactor. adding ability to transform kinbodies… (jeking04, Mar 22, 2016)
ba940f2  Merge branch 'master' into perception/chisel (jeking04, Mar 26, 2016)
1b4af44  Adding some comments to clarify logic (jeking04, Mar 26, 2016)
0c70585  Merge branch 'master' into perception/chisel (mkoval, Mar 30, 2016)
f47f066  Merge branch 'perception/chisel' of https://github.com/personalroboti… (Apr 13, 2016)
06a31b5  Merge branch 'master' into perception/chisel (Apr 13, 2016)
99e2cbe  Merge branch 'master' into perception/chisel (Apr 22, 2016)
076191f  Around 70% of Mike's original comments (Apr 22, 2016)
d5afd97  Basic tracking interface (Apr 27, 2016)
8fbdccb  Merge branch 'master' into perception/chisel (May 6, 2016)
c62e2d3  Addressed most of Mike's other comments (May 6, 2016)
b15e9f5  Merge branch 'master' into perception/chisel (jeking04, Jun 2, 2016)
ea0b584  Merge branch 'perception/simtrack' into perception/chisel (jeking04, Jun 2, 2016)
9193519  Small changes to get simtrack working correctly and ignoring failed p… (jeking04, Jun 2, 2016)
68449f2  Fixing merge error in kinbody_detectory. Changing default chisel dete… (jeking04, Jun 2, 2016)
5aec63b  Modifying transform logic to put objects in map frame for offscreen_r… (jeking04, Jun 2, 2016)
bbbf4f5  adding logic to add clones of the original body to the camera in orde… (jeking04, Jun 13, 2016)
cac5a96  Using 0.05 for pose offset (Jun 14, 2016)
97931d3  Merge branch 'master' into perception/chisel (Jun 14, 2016)
4 changes: 4 additions & 0 deletions package.xml
@@ -28,12 +28,16 @@
<run_depend>manipulation2</run_depend>
<run_depend>openrave</run_depend>
<run_depend>openrave_catkin</run_depend>
<run_depend>or_mesh_marker</run_depend>
<run_depend>chisel_msgs</run_depend>
<run_depend>offscreen_render</run_depend>
<run_depend>python-scipy</run_depend>
<run_depend>python-termcolor</run_depend>
<run_depend>python-trollius</run_depend>
<run_depend>python-rospkg</run_depend>
<run_depend>python-yaml</run_depend>
<run_depend>python-lxml</run_depend>
<run_depend>tf</run_depend>
<test_depend>python-nose</test_depend>
<!-- These optional dependencies can cause unit tests to fail if missing. -->
<test_depend>cbirrt2</test_depend>
146 changes: 146 additions & 0 deletions src/prpy/perception/chisel.py
@@ -0,0 +1,146 @@
import logging
import openravepy
from base import PerceptionModule, PerceptionMethod

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)


class ChiselModule(PerceptionModule):

def __init__(self, env, service_namespace='Chisel', mesh_name='Chisel/full_mesh',
             detection_frame='/herb_base', destination_frame='/map', reference_link=None):
"""
Review comment (Member): Add a description of what ChiselModule does here.

The perception module implementation for CHISEL. It converts all unmodelled clutter from the
point cloud into an OpenRAVE KinBody, after masking out specified known objects already in the
environment.

@param env The OpenRAVE environment
@param service_namespace The namespace to use for Chisel service calls (default: Chisel)
@param mesh_name The name of the topic on which the Chisel marker is published.
@param detection_frame The tf frame in which the Chisel mesh is detected (default: herb_base)
@param destination_frame The tf frame the detected chisel kinbody should be transformed to (default: map)
@param reference_link A link on an OpenRAVE kinbody that corresponds to the tf frame
specified by the destination_frame parameter. If not provided it is assumed that the
transform from the destination_frame tf frame to the OpenRAVE environment is the identity
"""
import rospy

self.mesh_client = openravepy.RaveCreateSensorSystem(env, 'mesh_marker')

if self.mesh_client is None:
raise RuntimeError('Could not create mesh client')

self.serv_ns = service_namespace
self.mesh_name = mesh_name

# Create a ROS camera used to generate a synthetic point cloud
# to mask known kinbodies from the chisel mesh
from offscreen_render import ros_camera_sim
Review comment (Member): Add this as a dependency in the package.xml file.

self.camera = ros_camera_sim.RosCameraSim(env)
self.camera.start('/head/kinect2/qhd')
Review comment (Member): What is this topic used for? Isn't this a simulated camera?

self.camera_bodies = []

self.reset_detection_srv = None
self.detection_frame = detection_frame
self.destination_frame = destination_frame
self.reference_link = reference_link

@PerceptionMethod
def DetectObject(self, robot, ignored_bodies=None, timeout=5, **kw_args):
"""
Review comment (Member): Add a docstring that explains what this does.

Obtain the KinBody corresponding to the Chisel Mesh after masking out
any existing kinbodies if desired (as specified in ignored_bodies).

@param robot Required by the PerceptionMethod interface
@param ignored_bodies A list of known kinbodies to be masked out
@param timeout The time (in seconds) to wait for the Chisel mesh service
"""
env = robot.GetEnv()

if ignored_bodies is None:
ignored_bodies = []

# Check whether a chisel kinbody is already in the env
chisel_kb = env.GetKinBody(self.mesh_name)
if chisel_kb is not None:
env.RemoveKinBody(chisel_kb)

with env:
try:
from kinbody_helper import transform_from_or
import openravepy
# Add any ignored bodies to the camera
# We need to transform all these bodies from their pose
# in OpenRAVE to the equivalent pose in TF so that
# chisel can perform the appropriate masking
# Then we will transform them all back after the server
# performs the detection
for b in ignored_bodies:
if b not in self.camera_bodies:
poses = self.SamplePoses(b.GetTransform())
orig_name = b.GetName()

for idx, pose in enumerate(poses):
bcloned = openravepy.RaveCreateKinBody(env, b.GetXMLId())
bcloned.Clone(b, 0)

bcloned.SetName('%s_%d' % (orig_name, idx))
env.Add(bcloned)
transform_from_or(kinbody=bcloned,
pose=pose,
detection_frame='/map',
destination_frame=self.destination_frame,
reference_link=self.reference_link)
self.camera.add_body(bcloned)
self.camera_bodies.append(bcloned)

#Reset Chisel
import rospy, chisel_msgs.srv
Review comment (Member): Add chisel_msgs as a dependency in the package.xml file.

srv_nm = self.serv_ns + '/Reset'
rospy.wait_for_service(srv_nm, timeout)
reset_detection_srv = rospy.ServiceProxy(srv_nm,
chisel_msgs.srv.ResetService)
reset_detection_srv()

#Get Kinbody and load into env
self.mesh_client.SendCommand('GetMesh '+self.mesh_name)
chisel_mesh = env.GetKinBody(self.mesh_name)

finally:

# Remove any previous bodies in the camera
for b in self.camera_bodies:
self.camera.remove_body_now(b)
env.Remove(b)

if chisel_mesh is not None and self.reference_link is not None:
# Apply a transform to the chisel kinbody to put it in the correct
# location in the OpenRAVE environment
from kinbody_helper import transform_to_or
transform_to_or(kinbody=chisel_mesh,
detection_frame=self.detection_frame,
destination_frame=self.destination_frame,
reference_link=self.reference_link)

return chisel_mesh

def SamplePoses(self, orig_transform, radius=0.05):
"""
Sample poses on a sphere of the given radius centered at the original pose.
Sampled poses maintain the original orientation but have different translations.
@param orig_transform The original pose
@param radius The radius of the sphere to sample from
"""
import copy, numpy
retvals = []
for i in [-1., 0., 1.]:
for j in [-1., 0., 1.]:
for k in [-1., 0., 1.]:
v = numpy.array([i, j, k])
if numpy.linalg.norm(v) > 0:
v = radius * v / numpy.linalg.norm(v)
new_transform = copy.deepcopy(orig_transform)
new_transform[:3,3] = orig_transform[:3,3] + v
retvals.append(new_transform)
return retvals
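The sampling logic above can be sketched standalone. The following is a minimal re-implementation (hypothetical name `sample_poses`; numpy only), reading the loop as appending only the nonzero grid directions, so it yields the 26 offsets of a 3x3x3 grid normalized onto the sphere:

```python
import numpy

def sample_poses(orig_transform, radius=0.05):
    """Sample poses on a sphere of the given radius around orig_transform.

    Orientation is preserved; only the translation changes. Offsets are the
    26 nonzero directions of a 3x3x3 grid, scaled to length `radius`.
    """
    retvals = []
    for i in [-1.0, 0.0, 1.0]:
        for j in [-1.0, 0.0, 1.0]:
            for k in [-1.0, 0.0, 1.0]:
                v = numpy.array([i, j, k])
                if numpy.linalg.norm(v) > 0:
                    # Normalize the grid direction onto the sphere surface
                    v = radius * v / numpy.linalg.norm(v)
                    new_transform = orig_transform.copy()
                    new_transform[:3, 3] = orig_transform[:3, 3] + v
                    retvals.append(new_transform)
    return retvals

poses = sample_poses(numpy.eye(4), radius=0.05)
print(len(poses))  # 26
```

Every sampled pose keeps the identity rotation here and sits exactly 0.05 m from the original translation, which matches the "Using 0.05 for pose offset" commit.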
101 changes: 101 additions & 0 deletions src/prpy/perception/kinbody_helper.py
@@ -0,0 +1,101 @@

def transform_to_or(kinbody, detection_frame=None, destination_frame=None,
detection_in_destination=None,
reference_link=None, pose=None):
"""
Transform the pose of a kinbody from a pose determined using TF to
the correct relative pose in OpenRAVE. This transformation is performed
by providing a link in OpenRAVE that corresponds directly to a frame in TF.

@param kinbody The kinbody to transform
@param detection_frame The tf frame the kinbody was originally detected in
@param destination_frame A tf frame that has a direct correspondence with
a link on an OpenRAVE Kinbody
@param reference_link The OpenRAVE link that corresponds to the tf frame
given by the destination_frame parameter (if None it is assumed
that the transform between the OpenRAVE world and the destination_frame
tf frame is the identity)
@param pose The pose to transform (if None, kinbody.GetTransform() is used)
"""
import numpy
if pose is None:
with kinbody.GetEnv():
pose = kinbody.GetTransform()

if detection_in_destination is None:
import tf, rospy
listener = tf.TransformListener()

listener.waitForTransform(
detection_frame, destination_frame,
rospy.Time(),
rospy.Duration(10))

frame_trans, frame_rot = listener.lookupTransform(
destination_frame, detection_frame,
rospy.Time(0))

from tf.transformations import quaternion_matrix
detection_in_destination = numpy.array(numpy.matrix(quaternion_matrix(frame_rot)))
detection_in_destination[:3,3] = frame_trans

with kinbody.GetEnv():
body_in_destination = numpy.dot(detection_in_destination, pose)

if reference_link is not None:
destination_in_or = reference_link.GetTransform()
else:
destination_in_or = numpy.eye(4)

body_in_or = numpy.dot(destination_in_or, body_in_destination)
kinbody.SetTransform(body_in_or)

def transform_from_or(kinbody, detection_frame, destination_frame,
reference_link=None, pose=None):
"""
Transform the pose of a kinbody from an OpenRAVE pose to the correct
relative pose in TF. This transformation is performed
by providing a link in OpenRAVE that corresponds directly to a frame in TF.

@param kinbody The kinbody to transform
@param detection_frame The tf frame the kinbody was originally detected in
@param destination_frame A tf frame that has a direct correspondence with
a link on an OpenRAVE Kinbody
@param reference_link The OpenRAVE link that corresponds to the tf frame
given by the destination_frame parameter (if None it is assumed
that the transform between the OpenRAVE world and the destination_frame
tf frame is the identity)
@param pose The pose to transform (if None, kinbody.GetTransform() is used)
"""

import numpy
import tf
import rospy
listener = tf.TransformListener()

if pose is None:
with kinbody.GetEnv():
pose = kinbody.GetTransform()

listener.waitForTransform(
detection_frame, destination_frame,
rospy.Time(),
rospy.Duration(10))

frame_trans, frame_rot = listener.lookupTransform(
detection_frame, destination_frame,
rospy.Time(0))
Review comment (Member): Same comments as above.

from tf.transformations import quaternion_matrix
destination_in_detection = numpy.array(numpy.matrix(quaternion_matrix(frame_rot)))
destination_in_detection[:3,3] = frame_trans

with kinbody.GetEnv():
body_in_or = pose
if reference_link is not None:
or_in_destination = numpy.linalg.inv(reference_link.GetTransform())
else:
or_in_destination = numpy.eye(4)
body_in_destination = numpy.dot(or_in_destination, body_in_or)

body_in_detection = numpy.dot(destination_in_detection, body_in_destination)
kinbody.SetTransform(body_in_detection)
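The two helpers invert the same frame chain: `transform_to_or` composes `destination_in_or · detection_in_destination · body_in_detection`, while `transform_from_or` applies the inverse chain (its tf lookup with swapped arguments returns the inverse matrix directly). A ROS-free numpy sketch, with made-up example frames standing in for the tf lookup and `reference_link`, shows the round trip:

```python
import numpy as np

def rot_z(theta, t):
    """Build a 4x4 rigid transform: rotation about z by theta, translation t."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[:3, 3] = t
    return T

# Assumed example frames (stand-ins for the tf lookup and reference_link).
detection_in_destination = rot_z(0.3, [1.0, 0.0, 0.2])
destination_in_or = rot_z(-0.1, [0.0, 2.0, 0.0])

def to_or(body_in_detection):
    # Same chain as transform_to_or:
    # body_in_or = destination_in_or . detection_in_destination . body_in_detection
    return destination_in_or @ detection_in_destination @ body_in_detection

def from_or(body_in_or):
    # Same chain as transform_from_or, written with explicit inverses in
    # place of the reversed tf lookup:
    body_in_destination = np.linalg.inv(destination_in_or) @ body_in_or
    return np.linalg.inv(detection_in_destination) @ body_in_destination

body = rot_z(0.7, [0.5, 0.5, 0.5])
print(np.allclose(to_or(from_or(body)), body))  # True
```

Because each function multiplies by the exact inverses the other uses, applying one after the other recovers the original pose, which is why DetectObject can push ignored bodies into tf space for masking and trust the chisel mesh to come back to the right place in OpenRAVE.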