Configuration files

Configuration files control the behaviour of some generic components of the director, but their main use is to customise the UI and other components for displaying and operating a specific robot.

Robot configuration files are passed to director with the --robot-config argument, one per robot to display.

robotModel

The robotModel top-level key should be used to specify the type of robot that this configuration applies to. Inside director, this key is checked in various places to determine the configuration of components to suit a specific robot. If your robot needs more specialised setup, this will require writing some code.
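As a minimal sketch, this is just a top-level key in the YAML file (the value here is illustrative; use whatever identifier the director code checks for your robot):

robotModel: "anymal"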

robotName

Self-explanatory. The robot name is used extensively throughout the system to distinguish between different robots. For example, some of the perception components store their camera objects in a dictionary keyed by robot name.
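In the configuration file this is again a top-level key (the value is illustrative):

robotName: "anymal"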

This name must be unique, and uniqueness is enforced when the configuration files are read by drcargs.py. If you have two identical robots and only want to use a single configuration file for both, you can pass the same configuration file twice, but you must provide an additional argument to each, which overrides the name given in the file:

--robot-config anymal.yaml anymal1 --robot-config anymal.yaml anymal2

urdfConfig

Deprecated, should not be used.

xacroConfig

This is used to configure the loading of a xacro file to represent the robot in director. The keys package and path are required.

The package key is the name of the ROS package where the xacro file to be loaded can be found.

The path key is the path to the xacro file within that package.

For example, for ANYmal Boxy the config looks like:

xacroConfig:
  package: "anymal_boxy_description"
  path: "urdf/anymal.urdf.xacro"
  args:
    simulation: "True"
    perfect_sensors: "True"
    fixed: "False"
    mesh: "True"
    material_color_rgba: "0.7 0.7 0.7 1.0"
    lidar: "True"
    realsense: "True"

The args key can be used to pass arguments to the xacro generation command. Each of the args here sets an arg which in the xacro file looks something like:

  <xacro:arg name="perfect_sensors"      default="true"/>
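These map onto xacro's standard name:=value argument syntax. As a sketch, a roughly equivalent manual invocation for the example above (paths abbreviated, and assuming the xacro executable is on your path) would be:

xacro urdf/anymal.urdf.xacro simulation:=True perfect_sensors:=True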

colorMode

This key sets the default colour mode used when displaying loaded robot models. Possible options are

  • Textures - Uses textures defined by components of the URDF
  • Solid Color - Uses a solid colour which can be modified in the properties
  • URDF Colors - Uses colours defined in the URDF itself (different from the textures)
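A minimal sketch (the value must match one of the option names above exactly):

colorMode: "Textures"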

perceptionSources

This key defines the configuration for inputs from the robot. Various perception sources are defined in perception.py, and these are the only valid sources. A source represents some kind of data from the robot which can be displayed by the director.

Source types

There are currently three perception source types: gridMap, depthImagePointCloud and pointCloud. The grid map is an elevation map generated by a custom component for the ANYmal robot. The depth image point cloud is generated by combining a depth image and an RGB image. The basic point cloud is used for point clouds that are already constructed.

Explanation by example

Below is part of an existing configuration that defines perception sources for an ANYmal robot.

perceptionSources:
  gridMap:
    - name: "elevation map" # Name shown in the object model tree
      topic: "/elevation_mapping/elevation_map"
      # The object created by this configuration will be found at robotSystem.<robotSystemKey>
      robotSystemKey: "gridMapSource"
    - name: "lidar elevation map"
      topic: "/elevation_mapping_lidar/elevation_map"
      robotSystemKey: "gridMapLidarSource"
      # Use this key to set properties on the objectmodel created along with the source
      properties:
        Visible: False
  # Instantiations of DepthImagePointCloudSource
  depthImagePointCloud:
    - name: "head camera cloud"
      # the camera to use to generate this source. This should correspond to a key in cameras/depth
      sensor: "head camera"
      robotSystemKey: "headCameraPointCloudSource"
    - name: "ground camera cloud"
      sensor: "ground camera"
      robotSystemKey: "groundCameraPointCloudSource"
  pointCloud:
    - name: "velodyne cloud"
      sensor: "velodyne"
      robotSystemKey: "pointCloudSource"

Each source must have a robotSystemKey. The value of this key is used to insert the created source object into the robotSystem dict that represents a robot.

Depending on the source type, either the sensor or the topic key is used to define the provider for the source. It is important to note that providers are not initialised by director's startup.py. It is necessary to initialise them elsewhere and set the provider for the source by retrieving the object from robotSystem.

The sensor key should point to another key in the configuration file, which is a child of the sensors key. See below for details.

A source can have various properties, which can be defined with the properties key, and will be initialised when the source is created.

sensors

This key is used to define the sensors that the robot has. The sensors act as a "provider" of data to the source they are attached to. When the source is constructed, the provider is uninitialised, so the director will not receive any data to display. Once a provider is initialised (see perceptionmeta.py), the source will periodically request data from the provider and display it.

sensors:
  mode: "simulation"
  camera:
    depth:
      - name: "head camera"
        # The namespace of the camera of interest. Subscriptions are created to various topics within this namespace,
        # e.g. namespace/image_raw and namespace/camera_info. The image_topic and camera_info_topic keys can
        # be used to override these defaults, useful when image_transport is not generating the images.
        namespace: "realsense_d435_front_forward/depth"
        # The image transport to use for this camera
        transport: "raw"
      - name: "ground camera"
        namespace: "depth_camera_front/depth"
        transport: "raw"
    color:
      - name: "head camera"
        namespace: "realsense_d435_front_forward/rgb"
        transport: "raw"
      - name: "ground camera"
        namespace: "depth_camera_front/rgb"
        transport: "raw"
      - name: "special camera"
        namespace: "special/rgb"
        transport: "raw"
        image_topic: "image" # forces subscription to special/rgb/image rather than image_raw
        camera_info_topic: "info" # forces subscription to special/rgb/info rather than camera_info

  velodyne:
    topic: "/point_cloud_filter/lidar/point_cloud_filtered"

There is no generic method for reading sensors - any sort of configuration can be added here and read however necessary for the specific setup that is required. However, the camera/color key is a special one: cameras defined here will be displayed in tabs on the left-hand side of the director, which you can click to see their output.

The default settings in this part assume that the images are being generated by the image_transport package. This package generates topics in a consistent way. If this is not the case, you can use the image_topic and camera_info_topic keys to override the defaults.

teleopJointGroups

This parameter is used by the robot pose GUI. At the moment it is not used by default for any of the robots, but it can be used to store specific poses of the robot and return to them.
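The schema for this key is defined by the robot pose GUI rather than documented here; as a purely hypothetical sketch, assuming each group has a name and a list of joints (all names below are invented for illustration):

teleopJointGroups:
  - name: "Left Front Leg"  # hypothetical group label
    joints:                 # hypothetical joint names
      - "LF_HAA"
      - "LF_HFE"
      - "LF_KFE"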

drakeJointNames

This parameter should contain a list of joint names used by the robot. These should correspond to the names of joints present in the RobotState message received by the JointControlInterface. These names are used by the JointController to reference parts of the robot model and adjust the position of the joints and base to reflect the state of the robot.

The first 6 values of this parameter should be

drakeJointNames:
  - "base_x"
  - "base_y"
  - "base_z"
  - "base_roll"
  - "base_pitch"
  - "base_yaw"

because otherwise https://github.com/ori-drs/director/blob/master/src/python/director/robotstate.py#L104 will not work as expected.
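The remaining entries list the robot's actuated joints. A hypothetical continuation for a quadruped (the joint names are illustrative and must match those in the RobotState message):

drakeJointNames:
  - "base_x"
  - "base_y"
  - "base_z"
  - "base_roll"
  - "base_pitch"
  - "base_yaw"
  # Illustrative actuated joint names; use the names from your robot's URDF
  - "LF_HAA"
  - "LF_HFE"
  - "LF_KFE"
  # ... and so on for the remaining joints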

robotStateJointNames

Possibly redundant. This is also used by the RobotState as part of the JointController, but the drake version is the one actually used to modify the pose.

fixedPointFile

This is a MATLAB or CSV file which contains a joint state configuration. If the file is provided, this joint configuration will be set as the initial configuration in director on startup.

If using a MATLAB file, the variable name must be xstar. The number of rows in the matrix must be equal to the number of joints in the robot model.

If any data is received from the state estimator, the pose is overridden.
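A minimal sketch (the file name is hypothetical, and how the path is resolved depends on your setup):

fixedPointFile: "default_pose.csv"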

pelvisLink

The name of the base frame of the robot. This is used by various parts of director when accessing the "base" link of the robot. Used when moving the view camera and determining the robot's position in order to send motion goals.

This parameter should be renamed to baseLink or something similar, to move it away from the quadruped/walking-robot-specific terminology.

headLink

This is the name of the frame of the robot which represents the main visual perspective of the robot. It is used by segmentationroutines (rarely used), and to reset the camera view to the robot's head.

This parameter should be renamed to make it more obvious that it refers to the camera position.

*FootLink, *HandLink

The names of the frames for the feet/hands (or front feet in the case of a quadruped). These are used to compute values needed when sending goals to the robot.
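Putting the link parameters together, a hypothetical fragment for a quadruped might look like the following (all frame names are illustrative, and the exact *FootLink key names must match what your director setup reads):

pelvisLink: "base"
headLink: "head"
leftFootLink: "LF_FOOT"
rightFootLink: "RF_FOOT"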