diff --git a/docs/data-ai/ai/deploy.md b/docs/data-ai/ai/deploy.md
index 5941195188..dc54c9ab75 100644
--- a/docs/data-ai/ai/deploy.md
+++ b/docs/data-ai/ai/deploy.md
@@ -18,6 +18,7 @@ aliases:
- /ml/upload-model/
- /services/ml/ml-models/
- /registry/ml-models/
+ - /services/ml/deploy/
---
The Machine Learning (ML) model service allows you to deploy [machine learning models](/data-ai/ai/deploy/#deploy-your-ml-model) to your machine.
diff --git a/docs/data-ai/ai/run-inference.md b/docs/data-ai/ai/run-inference.md
index b3631f7a24..cc2d22fa12 100644
--- a/docs/data-ai/ai/run-inference.md
+++ b/docs/data-ai/ai/run-inference.md
@@ -8,6 +8,7 @@ modulescript: true
aliases:
- /how-tos/detect-people/
- /how-tos/detect-color/
+ - /services/vision/
description: "Run inference on a model with a vision service or an SDK."
---
diff --git a/docs/dev/reference/apis/services/generic.md b/docs/dev/reference/apis/services/generic.md
index 0fe8417f8a..79aab9c3c0 100644
--- a/docs/dev/reference/apis/services/generic.md
+++ b/docs/dev/reference/apis/services/generic.md
@@ -10,6 +10,7 @@ images: ["/icons/components/generic.svg"]
date: "2022-01-01"
aliases:
- /appendix/apis/services/generic/
+ - /services/generic/
# updated: "" # When the content was last entirely checked
---
diff --git a/docs/dev/reference/apis/services/vision.md b/docs/dev/reference/apis/services/vision.md
index 0294c2a988..e0dd78d11a 100644
--- a/docs/dev/reference/apis/services/vision.md
+++ b/docs/dev/reference/apis/services/vision.md
@@ -6,13 +6,13 @@ type: "docs"
tags: ["vision", "computer vision", "CV", "services"]
description: "Give commands to get detections, classifications, or point cloud objects, depending on the ML model the vision service is using."
aliases:
- - /services/vision/
- - /ml/vision/detection/
- - /ml/vision/classification/
- - /ml/vision/segmentation/
- - /services/vision/segmentation/
- - /ml/vision/
- - /appendix/apis/services/vision/
+ - /ml/vision/detection/
+ - /ml/vision/classification/
+ - /ml/vision/segmentation/
+ - /services/vision/segmentation/
+ - /ml/vision/
+ - /appendix/apis/services/vision/
+ - /operate/reference/services/vision/
icon: true
images: ["/services/icons/vision.svg"]
tags: ["vision", "computer vision", "CV", "services"]
diff --git a/docs/dev/tools/cli.md b/docs/dev/tools/cli.md
index 01000eb6ed..a1e77b9a46 100644
--- a/docs/dev/tools/cli.md
+++ b/docs/dev/tools/cli.md
@@ -492,8 +492,8 @@ done
| Command option | Description | Positional arguments |
| -------------- | ----------- | -------------------- |
-| `export tabular` | Export tabular or sensor data to a specified location in the .ndjson output format. You can copy this from the UI with a filter. See [Copy `export` command](#copy-export-command). | - |
-| `export binary` | Export binary or image data to a specified location. Binary data will be downloaded in the original output it was specified as. You can copy this from the UI with a filter. See [Copy `export` command](#copy-export-command). | - |
+| `export tabular` | Export tabular or sensor data to a specified location in the .ndjson output format. The data downloads as two directories: `/data`, which contains the data file or files, and `/metadata`, which contains the metadata documents related to those files. You can copy this from the UI with a filter. See [Copy `export` command](#copy-export-command). | - |
+| `export binary` | Export binary or image data to a specified location. Binary data downloads in its original format. You can copy this from the UI with a filter. See [Copy `export` command](#copy-export-command). | - |
| `tag` | Add or remove tags from data matching the ids or filter. | `ids`, `filter` |
| `database configure` | Create a new database user for the Viam organization's MongoDB Atlas Data Federation instance, or change the password of an existing user. See [Configure data query](/data-ai/data/query/#configure-data-query). | - |
| `database hostname` | Get the MongoDB Atlas Data Federation instance hostname and connection URI. See [Configure data query](/data-ai/data/query/#configure-data-query). | - |
diff --git a/docs/operate/reference/services/vision/_index.md b/docs/operate/reference/services/vision/_index.md
deleted file mode 100644
index 5fab94eeec..0000000000
--- a/docs/operate/reference/services/vision/_index.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Vision Service"
-linkTitle: "Vision Service"
-weight: 20
-type: "docs"
-description: "The vision service enables your machine to use its on-board cameras to intelligently see and interpret the world around it."
-icon: true
-images: ["/services/icons/vision.svg"]
-tags: ["vision", "computer vision", "CV", "services"]
-no_list: true
-modulescript: true
-hide_children: true
-aliases:
- - "/services/vision/"
- - "/ml/vision/detection/"
- - "/ml/vision/classification/"
- - "/ml/vision/segmentation/"
- - "/services/vision/segmentation/"
- - /ml/vision/
-date: "2022-01-01"
-# updated: "" # When the content was last entirely checked
-# SMEs: Bijan, Khari
-toc_hide: true
----
diff --git a/docs/operate/reference/services/vision/color_detector.md b/docs/operate/reference/services/vision/color_detector.md
deleted file mode 100644
index 74d42e0ea3..0000000000
--- a/docs/operate/reference/services/vision/color_detector.md
+++ /dev/null
@@ -1,340 +0,0 @@
----
-title: "Configure a color_detector"
-linkTitle: "color_detector"
-weight: 10
-type: "docs"
-description: "A heuristic detector that draws boxes around objects according to their hue (does not detect black, gray, and white)."
-service_description: "A heuristic detector that draws boxes around objects according to their hue (does not detect black, gray, and white)."
-tags: ["vision", "computer vision", "CV", "services", "detection"]
-images: ["/services/vision/dog-detector.png"]
-aliases:
- - /services/vision/detection/
- - /services/vision/classification/
- - /ml/vision/color_detector/
- - /services/vision/color_detector/
- - /data-ai/services/vision/color_detector/
-# SMEs: Bijan, Khari
----
-
-The `color_detector` vision service model is a heuristic detector that draws boxes around objects according to their hue.
-Color detectors do not detect black, perfect grays (grays where the red, green, and blue color component values are equal), or white.
-It only detects hues found on the color wheel.
-
-{{% alert title="Tip" color="tip" %}}
-Object colors can vary dramatically based on the light source.
-We recommend you verify the desired color detection value under actual lighting conditions.
-To determine the color value from the actual camera component image, you can use a pixel color tool, like [Color Picker for Chrome](https://chrome.google.com/webstore/detail/color-picker-for-chrome/clldacgmdnnanihiibdgemajcfkmfhia).
-
-If the color is not reliably detected, increase the `hue_tolerance_pct`.
-{{< /alert >}}
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `color detector` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-In your vision service's panel, select the color your vision service will be detecting, as well as a hue tolerance and a segment size (in pixels):
-
-![Color detector panel with color and hue tolerance selection and a field for the segment size](/services/vision/color-detector-panel.png)
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the vision service object to the services array in your JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "model": "color_detector",
- "attributes": {
- "segment_size_px": ,
- "detect_color": "#ABCDEF",
- "hue_tolerance_pct": ,
- "saturation_cutoff_pct": ,
- "value_cutoff_pct":
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "blue_square",
- "type": "vision",
- "model": "color_detector",
- "attributes": {
- "segment_size_px": 100,
- "detect_color": "#1C4599",
- "hue_tolerance_pct": 0.07,
- "value_cutoff_pct": 0.15
- }
- },
- {
- "name": "green_triangle",
- "type": "vision",
- "model": "color_detector",
- "attributes": {
- "segment_size_px": 200,
- "detect_color": "#62963F",
- "hue_tolerance_pct": 0.05,
- "value_cutoff_pct": 0.20
- }
- }
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for a `color_detector`:
-
-
-| Parameter | Required? | Description |
-| --------- | --------- | ----------- |
-| `segment_size_px` | **Required** | An integer that sets a minimum size (in pixels) of a contiguous color region to be detected, and filters out all other found objects below that size. |
-| `detect_color` | **Required** | The color to detect in the image, as a string of the form `#RRGGBB`. The color is written as a hexadecimal string prefixed by ‘#’. |
-| `hue_tolerance_pct` | **Required** | A number bigger than 0.0 and smaller than or equal to 1.0 that defines how strictly the detector must match to the hue of the color requested. ~0.0 means the color must match exactly, while 1.0 matches to every color, regardless of the input color. 0.05 is a good starting value. |
-| `saturation_cutoff_pct` | Optional | A number > 0.0 and <= 1.0 which defines the minimum saturation before a color is ignored. Defaults to 0.2. |
-| `value_cutoff_pct` | Optional | A number > 0.0 and <= 1.0 which defines the minimum value before a color is ignored. Defaults to 0.3. |
-
-{{% alert title="Info" color="info" %}}
-
-**hue_tolerance_pct**, **saturation_cutoff_pct**, and **value_cutoff_pct** refer to hue, saturation, and value (brightness) in the HSV Color Model, but do not set color values in Viam.
-
-**hue_tolerance_pct** specifies the exactness of the color match to **detect_color**.
-
-The optional **saturation_cutoff_pct** and **value_cutoff_pct** attributes specify cutoff thresholds levels for saturation and brightness, rather than specifying color saturation and brightness as they do in the standard HSV Color Model.
-
-{{% /alert %}}
-
-Click the **Save** button in the top right corner of the page.
-Proceed to [test your detector](#test-your-detector).
-
-## Test your detector
-
-You can test your detector with [live camera footage](#live-camera-footage) or [existing images](#existing-images).
-
-### Live camera footage
-
-1. Configure a [camera component](/operate/reference/components/camera/).
- {{< alert title="Tip" color="tip" >}}
- This is the camera whose name you need to pass to vision service methods.
- {{< /alert >}}
-2. After adding the camera, click the **Save** button in the top right corner of the page.
-3. Click on the **Test** area on the vision service configuration panel, or navigate to the **CONTROL** tab, click on the vision service and select your camera and vision service and then click **Refresh**.
- The panel will show detections with confidence above the `default_minimum_confidence` with bounding boxes on the image.
-
- ![Blue boxes detected](/services/vision/detections.png)
-
-{{% expand "Click to see how to configure a camera live feed that shows detections or classifications" %}}
-
-Configure a [transform camera](/operate/reference/components/camera/transform/) with the following attributes:
-
-```json
-{
- "pipeline": [
- {
- "type": "detections",
- "attributes": {
- "confidence_threshold": 0.5,
- "detector_name": "",
- "valid_labels": [""]
- }
- }
- ],
- "source": ""
-}
-```
-
-Then save your configuration.
-Navigate to the **CONTROL** tab, click on your transform camera and toggle it on to see a live feed with detections.
-
-![Detections on a transform camera](/services/vision/transform-detections.png)
-
-{{% /expand%}}
-
-4. To access detections with code, use the Vision Service methods on the camera you configured in step 1.
- The following code gets the machine’s vision service and then runs a color detector vision model on output from the machine's camera `"cam1"`:
-
- {{% alert title="Tip" color="tip" %}}
-
- Pass the name of the camera you configured in step 1.
- Do not pass a transform camera that already has the "detections" or "classifications" transform applied to it.
-
- {{% /alert %}}
-
- {{< tabs >}}
- {{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-camera_name = "cam1"
-
-# Grab camera from the machine
-cam1 = Camera.from_robot(robot, camera_name)
-# Grab Viam's vision service for the detector
-my_detector = VisionClient.from_robot(robot, "my_detector")
-
-detections = await my_detector.get_detections_from_camera(camera_name)
-
-# If you need to store the image, get the image first
-# and then run detections on it. This process is slower:
-img = await cam1.get_image()
-detections_from_image = await my_detector.get_detections(img)
-
-await robot.close()
-```
-
-To learn more about how to use detection, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
- {{% /tab %}}
- {{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "go.viam.com/rdk/components/camera"
-)
-
-// Grab the camera from the machine
-cameraName := "cam1" // make sure to use the same component name that you have in your machine configuration
-myCam, err := camera.FromRobot(robot, cameraName)
-if err != nil {
- logger.Fatalf("cannot get camera: %v", err)
-}
-
-myDetector, err := vision.from_robot(robot, "my_detector")
-if err != nil {
- logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get detections from the camera output
-detections, err := myDetector.DetectionsFromCamera(context.Background(), myCam, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(directDetections) > 0 {
- logger.Info(detections[0])
-}
-
-// If you need to store the image, get the image first
-// and then run detections on it. This process is slower:
-
-// Get the stream from a camera
-camStream, err := myCam.Stream(context.Background())
-
-// Get an image from the camera stream
-img, release, err := camStream.Next(context.Background())
-defer release()
-
-// Apply the color classifier to the image from your camera (configured as "cam1")
-detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(detectionsFromImage) > 0 {
- logger.Info(detectionsFromImage[0])
-}
-
-```
-
-To learn more about how to use detection, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
- {{% /tab %}}
- {{< /tabs >}}
-
-### Existing images
-
-If you would like to test your detector with existing images, load the images and pass them to the detector:
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-from PIL import Image
-
-robot = await connect()
-# Grab Viam's vision service for the detector
-my_detector = VisionClient.from_robot(robot, "my_detector")
-
-# Load an image
-img = Image.open('test-image.png')
-
-# Apply the detector to the image
-detections_from_image = await my_detector.get_detections(img)
-
-await robot.close()
-```
-
-To learn more about how to use detection, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "image/jpeg"
- "os"
-)
-
-myDetector, err := vision.from_robot(robot, "my_detector")
-if err != nil {
- logger.Fatalf("Cannot get Vision Service: %v", err)
-}
-
-// Read image from existing file
-file, err := os.Open("test-image.jpeg")
-if err != nil {
- logger.Fatalf("Could not get image: %v", err)
-}
-defer file.Close()
-img, err := jpeg.Decode(file)
-if err != nil {
- logger.Fatalf("Could not decode image: %v", err)
-}
-defer img.Close()
-
-// Apply the detector to the image
-detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(detectionsFromImage) > 0 {
- logger.Info(detectionsFromImage[0])
-}
-
-```
-
-To learn more about how to use detection, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's Vision Service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{< /cards >}}
diff --git a/docs/operate/reference/services/vision/detector_3d_segmenter.md b/docs/operate/reference/services/vision/detector_3d_segmenter.md
deleted file mode 100644
index 2b4c55d600..0000000000
--- a/docs/operate/reference/services/vision/detector_3d_segmenter.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-title: "Configure a detector_3d_segmenter"
-linkTitle: "detector_3d_segmenter"
-weight: 20
-type: "docs"
-description: "This model takes 2D bounding boxes from an object detector and projects the pixels in the bounding box to points in 3D space."
-service_description: "A segmenter that takes 2D bounding boxes from an object detector and projects the pixels in the bounding box to points in 3D space."
-tags: ["vision", "computer vision", "CV", "services", "segmentation"]
-aliases:
- - /services/vision/segmentation/
- - /ml/vision/detector_3d_segmenter/
- - /services/vision/detector_3d_segmenter/
- - /data-ai/services/vision/detector_3d_segmenter/
-# SMEs: Bijan, Khari
----
-
-_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
-
-The `detector_3d_segmenter` vision service model takes 2D bounding boxes from an [object detector](/dev/reference/apis/services/vision/#detections), and, using the intrinsic parameters of the chosen camera, projects the pixels in the bounding box to points in 3D space.
-If the chosen camera is not equipped to do projections from 2D to 3D, then this vision model will fail.
-The label and the pixels associated with the 2D detections become the label and point cloud associated with the 3D segmenter.
-
-First, make sure your camera is connected to your machine's computer and both are powered on.
-Then, configure the service:
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `detector to 3D segmenter` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-In your vision service's panel, fill in the attributes field.
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "detector_name": "",
- "confidence_threshold_pct": ,
- "mean_k": ,
- "sigma":
-}
-```
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the vision service object to the services array in your raw JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "namespace": "rdk",
- "model": "detector_3d_segmenter",
- "attributes": {
- "detector_name": "my_detector",
- "confidence_threshold_pct": 0.5,
- "mean_k": 50,
- "sigma": 2.0
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "my_segmenter",
- "type": "vision",
- "namespace": "rdk",
- "model": "detector_3d_segmenter",
- "attributes": {
- "detector_name": "my_detector",
- "confidence_threshold_pct": 0.5,
- "mean_k": 50,
- "sigma": 2.0
- }
- }
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for a `detector_3d_segmenter`.
-
-
-| Parameter | Required? | Description |
-| --------- | --------- | ----------- |
-| `detector_name`| **Required** | The name of a registered detector vision service. The segmenter vision service uses the detections from `"detector_name"` to create the 3D segments. |
-| `confidence_threshold_pct` | Optional | A number between 0 and 1 which represents a filter on object confidence scores. Detections that score below the threshold will be filtered out in the segmenter. The default is 0.5. |
-| `mean_k` | **Required** | An integer parameter used in [a subroutine to eliminate the noise in the point clouds](https://pcl.readthedocs.io/projects/tutorials/en/latest/statistical_outlier.html). It should be set to be 5-10% of the minimum segment size. Start with 5% and go up if objects are still too noisy. If you don’t want to use the filtering, set the number to 0 or less. |
-| `sigma` | **Required** | A floating point parameter used in [a subroutine to eliminate the noise in the point clouds](https://pcl.readthedocs.io/projects/tutorials/en/latest/statistical_outlier.html). It should usually be set between 1.0 and 2.0. 1.25 is usually a good default. If you want the object result to be less noisy (at the risk of losing some data around its edges) set sigma to be lower. |
-
-Click the **Save** button in the top right corner of the page and proceed to [test your segmenter](#test-your-segmenter).
-
-## Test your segmenter
-
-The following code uses the [`GetObjectPointClouds`](/dev/reference/apis/services/vision/#getobjectpointclouds) method to run a segmenter vision model on an image from the machine's camera `"cam1"`:
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-
-# Grab Viam's vision service for the segmenter
-my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
-
-objects = await my_segmenter.get_object_point_clouds("cam1")
-
-await robot.close()
-```
-
-To learn more about how to use segmentation, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
-"go.viam.com/rdk/config"
-"go.viam.com/rdk/services/vision"
-"go.viam.com/rdk/components/camera"
-)
-
-cameraName := "cam1" // Use the same component name that you have in your machine configuration
-
-// Get the vision service you configured with name "my_segmenter" from the machine
-mySegmenter, err := vision.from_robot(robot, "my_segmenter")
-if err != nil {
- logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get segments
-segments, err := mySegmenter.ObjectPointClouds(context.Background(), cameraName, nil)
-if err != nil {
- logger.Fatalf("Could not get segments: %v", err)
-}
-if len(segments) > 0 {
- logger.Info(segments[0])
-}
-```
-
-To learn more about how to use segmentation, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's vision service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next Steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{% card link="/tutorials/services/navigate-with-rover-base/" noimage="true" %}}
-{{< /cards >}}
diff --git a/docs/operate/reference/services/vision/mlmodel.md b/docs/operate/reference/services/vision/mlmodel.md
deleted file mode 100644
index fa307e857c..0000000000
--- a/docs/operate/reference/services/vision/mlmodel.md
+++ /dev/null
@@ -1,634 +0,0 @@
----
-title: "Configure an mlmodel Detector or Classifier"
-linkTitle: "mlmodel"
-weight: 10
-type: "docs"
-description: "A detector or classifier that uses an ML model available on the machine to draw bounding boxes around objects or return a class label."
-service_description: "A detector or classifier that uses a model available on the machine’s hard drive to draw bounding boxes around objects or returns a class label and confidence score."
-tags: ["vision", "computer vision", "CV", "services", "detection"]
-images: ["/services/vision/dog-detector.png"]
-aliases:
- - /services/vision/detection/
- - /services/vision/classification/
- - /ml/vision/mlmodel/
- - /services/vision/mlmodel/
- - /data-ai/services/vision/mlmodel/
-# SMEs: Bijan, Khari
----
-
-_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
-
-The `mlmodel` {{< glossary_tooltip term_id="model" text="model" >}} of the Viam vision service supports machine learning detectors and classifiers that draw bounding boxes or return class labels based on a deployed TensorFlow Lite, TensorFlow, PyTorch, or ONNX ML model.
-
-## Prerequisites
-
-Before configuring your `mlmodel` detector or classifier, you need to:
-
-{{< cards >}}
-{{% manualcard %}}
-
-1. Train or upload an ML model
-
-You can add an [existing model](/data-ai/ai/deploy/#deploy-your-ml-model) or [train a TFlite](/data-ai/ai/train-tflite/) or [another model](/data-ai/ai/train/) for object detection and classification using your data in the [Viam Cloud](/data-ai/capture-data/capture-sync/).
-
-{{% /manualcard %}}
-{{% manualcard %}}
-
-2. Deploy your ML model
-
-To use ML models with your machine, use a suitable [ML model service](/data-ai/ai/deploy/) to deploy and run the model.
-
-{{% /manualcard %}}
-{{< /cards >}}
-
-## Configure your detector or classifier
-
-Once you have deployed your ML model, configure your `mlmodel` detector or classifier:
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `ML model` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-Select the ML model service your model is deployed on from the **ML Model** dropdown.
-
-Edit other attributes as applicable according to the table below.
-You can edit optional attributes in raw JSON by clicking **{}** (Switch to advanced) on the right side of your service panel.
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the vision service object to the services array in your JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "model": "mlmodel",
- "attributes": {
- "mlmodel_name": ""
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-{{< tabs >}}
-{{% tab name="Detector" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "person_detector",
- "type": "vision",
- "model": "mlmodel",
- "attributes": {
- "mlmodel_name": "my_mlmodel_service"
- }
- }
-]
-```
-
-{{% /tab %}}
-{{% tab name="Classifier" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "fruit_classifier",
- "type": "vision",
- "model": "mlmodel",
- "attributes": {
- "mlmodel_name": "fruit_classifier"
- }
- }
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{< /tabs >}}
-
-Click the **Save** button in the top right corner of the page.
-
-The following attributes are available for an `mlmodel` detector or classifier:
-
-
-| Parameter | Type | Required? | Description |
-| --------- | ---- | --------- | ----------- |
-| `mlmodel_name` | string | **Required** | The name of the [ML model service](/data-ai/ai/deploy/) you want to use the model from. |
-| `remap_output_names` | object | Optional | The names of your output tensors, mapped to the service requirements. See [Tensor names](#tensor-names) for more information. |
-| `remap_input_names` | object | Optional | The name of your input tensor, mapped to the service requirements. See [Tensor names](#tensor-names) for more information. |
-| `input_image_bgr` | bool | Optional | Set this to `true` if the ML model service expects the input image to have BGR pixels, rather than RGB pixels. Default: `false` |
-| `input_image_mean_value` | array | Optional | The standard deviation of the RGB (or BGR) values. Only required if the ML model service expects the input image to be normalized. Default: `[0.5, 0.5, 0.5]` |
-| `input_image_std_dev` | array | Optional | The standard deviation of the RGB (or BGR) values. Only required if the ML model service expects the input image to be normalized. Default: `[0.5, 0.5, 0.5]` |
-| `default_minimum_confidence` | number | Optional | Set this to apply a minimum confidence score filter on all outputs. If left blank, no confidence filter is applied. Example: `0.81` |
-| `label_confidences` | object | Optional | A map that filters on label names, applying a specified minimum confidence to a specific label. `label_confidences` overwrites `default_minimum_confidence`. If you set `label_confidences`, then `default_minimum_confidence` does not apply (the service will only use `label_confidences`). If you leave this attribute blank, no filtering on labels is applied. Example: `{"DOG": 0.8, "CARROT": 0.3}` |
-| `label_path` | string | Optional | The path to a file containing labels for the configured ML model. Set this to overwrite the default label path for this model. |
-
-### Tensor names
-
-Both the `mlmodel` detector and classifier require that the input and output tensors defined by your ML model are named according to the following:
-
-- For an `mlmodel` detector:
- - The _input tensor_ must be named `image`
- - The _output tensors_ must be named `location`, `category`, and `score`,
-- For an `mlmodel` classifier:
- - The _input tensor_ must be named `image`
- - The _output tensor_ must be named `probability`
-
-If you [trained a TFlite ML model using the Viam app](/data-ai/ai/train-tflite/), your `mlmodel` tensors are already named in this fashion, and you can proceed to [test your detector or classifier](#test-your-detector-or-classifier).
-However, if you uploaded your own ML model, or are using one from the [Viam Registry](https://app.viam.com/registry), you may need to remap your tensor names to meet this requirement, and should follow the instructions to [remap tensor names](#remap-tensor-names).
-
-#### Remap tensor names
-
-If you need to remap the tensor names defined by your ML model to meet the tensor name requirements of the `mlmodel` detector or classifier, you can use the `remap_input_names` and `remap_output_names` attributes:
-
-{{< tabs >}}
-{{% tab name="Detector" %}}
-
-To remap your model's tensor names to work with an `mlmodel` detector, add the following to your `mlmodel` vision service configuration, replacing the `my_model` input and output tensor names with the names from your model:
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "type": "vision",
- "model": "mlmodel",
- "attributes": {
- "mlmodel_name": "my_model",
- "remap_output_names": {
- "my_model_output_tensor1": "category",
- "my_model_output_tensor2": "location",
- "my_model_output_tensor3": "score"
- },
- "remap_input_names": {
- "my_model_input_tensor1": "image"
- }
- },
- "name": "my-vision-service"
-}
-```
-
-{{% /tab %}}
-{{% tab name="Classifier" %}}
-
-To remap your model's tensor names to work with an `mlmodel` classifier, add the following to your `mlmodel` vision service configuration, replacing the `my_model` input and output tensor names with the names from your model:
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "type": "vision",
- "model": "mlmodel",
- "attributes": {
- "mlmodel_name": "my_model",
- "remap_output_names": {
- "my_model_output_tensor1": "probability"
- },
- "remap_input_names": {
- "my_model_input_tensor1": "image"
- }
- },
- "name": "my-vision-service"
-}
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-When done, click the **Save** button in the top right corner of the page, then proceed to [test your detector or classifier](#test-your-detector-or-classifier).
-
-## Test your detector or classifier
-
-You can test your detector or classifier with [existing images in the Viam app](#existing-images-in-the-cloud) or [live camera footage](#live-camera-footage).
-You can also test detectors and classifiers with [existing images on a computer](#existing-images-on-your-machine).
-
-### Existing images in the cloud
-
-{{< alert title="Note" color="note" >}}
-
-The feature is only available for classifiers that were uploaded after September 19, 2023.
-
-{{< /alert >}}
-
-{{}}
-
-If you have images stored in the [Viam Cloud](/data-ai/capture-data/capture-sync/), you can run your classifier against your images in the [Viam app](https://app.viam.com/).
-
-1. Navigate to the [Data tab](https://app.viam.com/data/view) and click on the **Images** subtab.
-2. Click on an image to open the side menu, and select the **Actions** tab under the **Data** tab.
-3. In the **Run model** section, select your model and specify a confidence threshold.
-4. Click **Run model**
-
-If the classifier's results exceed the confidence threshold, the **Run model** section shows a label and the responding confidence threshold.
-
-### Live camera footage
-
-You can test your detector or classifier from the [**Control tab**](/manage/troubleshoot/teleoperate/default-interface/#viam-app) or with code using a camera that is part of your machine.
-
-#### Test your vision service
-
-1. Configure a [camera component](/operate/reference/components/camera/).
- {{< alert title="Tip" color="tip" >}}
- This is the camera whose name you need to pass to vision service methods.
- {{< /alert >}}
-
-2. After adding the camera, click the **Save** button in the top right corner of the page.
-3. Click on the **Test** area on the vision service configuration panel, or navigate to the **CONTROL** tab, click on the vision service and select your camera and vision service and then click **Refresh**.
- The panel will show detections with confidence above the `default_minimum_confidence` with bounding boxes on the image.
-
-![Blue boxes detected](/services/vision/detections.png)
-
-{{% expand "Click to see how to configure a camera live feed that shows detections or classifications" %}}
-
-Configure a [transform camera](/operate/reference/components/camera/transform/) with the following attributes:
-
-{{< tabs >}}
-{{% tab name="Detections" %}}
-
-```json
-{
- "pipeline": [
- {
- "type": "detections",
- "attributes": {
- "confidence_threshold": 0.5,
- "detector_name": "",
- "valid_labels": [""]
- }
- }
- ],
- "source": ""
-}
-```
-
-{{% /tab %}}
-{{% tab name="Classifications" %}}
-
-```json
-{
- "pipeline": [
- {
- "type": "classifications",
- "attributes": {
- "confidence_threshold": 0.5,
- "classifier_name": "",
- "max_classifications": ,
- "valid_labels": [ "" ]
- }
- }
- ],
- "source": ""
-}
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-Then save your configuration.
-Navigate to the **CONTROL** tab, click on your transform camera and toggle it on to see a live feed with detections or classifications.
-
-![Detections on a transform camera](/services/vision/transform-detections.png)
-
-{{% /expand%}}
-
-#### Test with code
-
-The following code gets the machine’s vision service and then runs a detector or classifier vision model on an image from the machine's camera `"cam1"`.
-
-{{% alert title="Tip" color="tip" %}}
-
-Pass the name of the camera you configured in step 1.
-Do not pass a transform camera that already has the "detections" or "classifications" transform applied to it.
-
-{{% /alert %}}
-
-{{< tabs >}}
-{{% tab name="Detections" %}}
-
-{{< tabs >}}
-{{< tab name="Python" >}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-camera_name = "cam1"
-
-cam1 = Camera.from_robot(robot, camera_name)
-my_detector = VisionClient.from_robot(robot, "my_detector")
-
-detections = await my_detector.get_detections_from_camera(camera_name)
-
-# If you need to store the image, get the image first
-# and then run detections on it. This process is slower:
-img = await cam1.get_image()
-detections_from_image = await my_detector.get_detections(img)
-
-await robot.close()
-```
-
-{{< /tab >}}
-{{< tab name="Go" >}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "go.viam.com/rdk/components/camera"
-)
-
-cameraName := "cam1"
-myCam, err := camera.FromRobot(robot, cameraName)
-if err != nil {
- logger.Fatalf("cannot get camera: %v", err)
-}
-
-myDetector, err := vision.from_robot(robot, "my_detector")
-if err != nil {
- logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get detections from the camera output
-detections, err := myDetector.DetectionsFromCamera(context.Background(), myCam, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(directDetections) > 0 {
- logger.Info(detections[0])
-}
-
-// If you need to store the image, get the image first
-// and then run detections on it. This process is slower:
-
-// Get the stream from a camera
-camStream, err := myCam.Stream(context.Background())
-
-// Get an image from the camera stream
-img, release, err := camStream.Next(context.Background())
-defer release()
-
-// Apply the color classifier to the image from your camera (configured as "cam1")
-detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(detectionsFromImage) > 0 {
- logger.Info(detectionsFromImage[0])
-}
-
-```
-
-{{< /tab >}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{% tab name="Classifications" %}}
-
-{{< tabs >}}
-{{< tab name="Python" >}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-camera_name = "cam1"
-cam1 = Camera.from_robot(robot, camera_name)
-my_classifier = VisionClient.from_robot(robot, "my_classifier")
-
-# Get the top 2 classifications with the highest confidence scores from the
-# camera output
-classifications = await my_classifier.get_classifications_from_camera(
- camera_name, 2)
-
-# If you need to store the image, get the image first
-# and then run classifications on it. This process is slower:
-img = await cam1.get_image()
-classifications_from_image = await my_classifier.get_classifications(img, 2)
-
-await robot.close()
-```
-
-{{< /tab >}}
-{{< tab name="Go" >}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "go.viam.com/rdk/components/camera"
-)
-
-cameraName := "cam1"
-myCam, err := camera.FromRobot(robot, cameraName)
-if err != nil {
- logger.Fatalf("cannot get camera: %v", err)
-}
-
-myClassifier, err := vision.from_robot(robot, "my_classifier")
-if err != nil {
- logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get the top 2 classifications with the highest confidence scores from the camera output
-classifications, err := visService.ClassificationsFromCamera(context.Background(), myCam, 2, nil)
-if err != nil {
- logger.Fatalf("Could not get classifications: %v", err)
-}
-if len(directClassifications) > 0 {
- logger.Info(classifications[0])
-}
-
-// If you need to store the image, get the image first
-// and then run classifications on it. This process is slower:
-
-// Get the stream from a camera
-camStream, err := myCam.Stream(context.Background())
-
-// Get an image from the camera stream
-img, release, err := camStream.Next(context.Background())
-defer release()
-
-// Apply the color classifier to the image from your camera (configured as "cam1")
-// Get the top 2 classifications with the highest confidence scores
-classificationsFromImage, err := visService.GetClassifications(context.Background(), img, 2, nil)
-if err != nil {
- logger.Fatalf("Could not get classifications: %v", err)
-}
-if len(classificationsFromImage) > 0 {
- logger.Info(classificationsFromImage[0])
-}
-```
-
-{{< /tab >}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{< /tabs >}}
-
-### Existing images on your machine
-
-If you would like to test your detector or classifier with existing images, load the images and pass them to the detector or classifier:
-
-{{< tabs >}}
-{{% tab name="Detector" %}}
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-from viam.media.utils.pil import pil_to_viam_image
-from PIL import Image
-
-robot = await connect()
-# Grab Viam's vision service for the detector
-my_detector = VisionClient.from_robot(robot, "my_detector")
-
-# Load an image
-pil_img = Image.open('test-image.png')
-img = pil_to_viam_image(pil_img)
-
-# Apply the detector to the image
-detections_from_image = await my_detector.get_detections(img)
-
-await robot.close()
-```
-
-To learn more about how to use detection, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "image/jpeg"
- "os"
-)
-
-myDetector, err := vision.from_robot(robot, "my_detector")
-if err != nil {
- logger.Fatalf("Cannot get Vision Service: %v", err)
-}
-
-// Read image from existing file
-file, err := os.Open("test-image.jpeg")
-if err != nil {
- logger.Fatalf("Could not get image: %v", err)
-}
-defer file.Close()
-img, err := jpeg.Decode(file)
-if err != nil {
- logger.Fatalf("Could not decode image: %v", err)
-}
-defer img.Close()
-
-// Apply the detector to the image
-detectionsFromImage, err := myDetector.Detections(context.Background(), img, nil)
-if err != nil {
- logger.Fatalf("Could not get detections: %v", err)
-}
-if len(detectionsFromImage) > 0 {
- logger.Info(detectionsFromImage[0])
-}
-
-```
-
-To learn more about how to use detection, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{% tab name="Classifier" %}}
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-from PIL import Image
-
-robot = await connect()
-# Grab Viam's vision service for the classifier
-my_classifier = VisionClient.from_robot(robot, "my_classifier")
-
-# Load an image
-img = Image.open('test-image.png')
-
-# Apply the classifier to the image
-classifications_from_image = await my_classifier.get_classifications(img)
-
-await robot.close()
-```
-
-To learn more about how to use classification, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
- "go.viam.com/rdk/config"
- "go.viam.com/rdk/services/vision"
- "image"
- "image/png"
- "os"
-)
-
-myClassifier, err := vision.from_robot(robot, "my_classifier")
-if err != nil {
- logger.Fatalf("Cannot get Vision Service: %v", err)
-}
-
-// Read image from existing file
-file, err := os.Open("test-image.jpeg")
-if err != nil {
- logger.Fatalf("Could not get image: %v", err)
-}
-defer file.Close()
-img, err := jpeg.Decode(file)
-if err != nil {
- logger.Fatalf("Could not decode image: %v", err)
-}
-defer img.Close()
-
-// Apply the classifier to the image
-classificationsFromImage, err := myClassifier.Classifications(context.Background(), img, nil)
-if err != nil {
- logger.Fatalf("Could not get classifications: %v", err)
-}
-if len(classificationsFromImage) > 0 {
- logger.Info(classificationsFromImage[0])
-}
-
-```
-
-To learn more about how to use classification, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's vision service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next Steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{< /cards >}}
diff --git a/docs/operate/reference/services/vision/obstacles_depth.md b/docs/operate/reference/services/vision/obstacles_depth.md
deleted file mode 100644
index 98c5e77918..0000000000
--- a/docs/operate/reference/services/vision/obstacles_depth.md
+++ /dev/null
@@ -1,203 +0,0 @@
----
-title: "Configure an obstacles_depth Segmenter"
-linkTitle: "obstacles_depth"
-weight: 20
-type: "docs"
-description: "A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector."
-service_description: "A segmenter for depth cameras that returns the perceived obstacles as a set of 3-dimensional bounding boxes, each with a Pose as a vector."
-tags: ["vision", "computer vision", "CV", "services", "segmentation"]
-aliases:
- - /ml/vision/obstacles_depth/
- - /services/vision/obstacles_depth/
- - /data-ai/services/vision/obstacles_depth/
-# SMEs: Bijan, Khari
----
-
-_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
-
-The `obstacles_depth` vision service model is for depth cameras, and is best for motion planning with transient obstacles.
-Use this segmenter to identify well separated objects above a flat plane.
-
-First, make sure your camera is connected to your machine's computer and both are powered on.
-Then, configure an `obstacles_depth` segmenter:
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `obstacles depth` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-In your vision service's panel, fill in the attributes field.
-
-{{< tabs >}}
-{{% tab name="Attribute Template" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "min_points_in_plane": ,
- "min_points_in_segment": ,
- "max_dist_from_plane_mm": ,
- "ground_angle_tolerance_degs": ,
- "clustering_radius": ,
- "clustering_strictness":
-}
-```
-
-{{% /tab %}}
-{{% tab name="Attribute Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "min_points_in_plane": 1500,
- "min_points_in_segment": 250,
- "max_dist_from_plane_mm": 10.0,
- "ground_angle_tolerance_degs": 20,
- "clustering_radius": 5,
- "clustering_strictness": 3
-}
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the following vision service object to the services array in your raw JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_depth"
- "attributes": {
- "min_points_in_plane": ,
- "min_points_in_segment": ,
- "max_dist_from_plane_mm": ,
- "ground_angle_tolerance_degs": ,
- "clustering_radius": ,
- "clustering_strictness":
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
-{
- "name": "rc_segmenter",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_depth",
- "attributes": {
- "min_points_in_plane": 1500,
- "min_points_in_segment": 250,
- "max_dist_from_plane_mm": 10.0,
- "ground_angle_tolerance_degs": 20,
- "clustering_radius": 5,
- "clustering_strictness": 3
- }
-}
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for an `"obstacles_depth"` segmenter:
-
-
-| Parameter | Required? | Description |
-| --------- | --------- | ----------- |
-| `min_points_in_plane` | Optional | An integer that specifies how many points to put on the flat surface or ground plane when clustering. This is to distinguish between large planes, like the floors and walls, and small planes, like the tops of bottle caps. Default: `500` |
-| `min_points_in_segment` | Optional | An integer that sets a minimum size to the returned objects, and filters out all other found objects below that size. Default: `10` |
-| `max_dist_from_plane_mm` | Optional | A float that determines how much area above and below an ideal ground plane should count as the plane for which points are removed. For fields with tall grass, this should be a high number. The default value is 100 mm. Default: `100.0` |
-| `ground_angle_tolerance_degs` | Optional | An integer that determines how strictly the found ground plane should match the `ground_plane_normal_vec`. For example, even if the ideal ground plane is purely flat, a rover may encounter slopes and hills. The algorithm should find a ground plane even if the found plane is at a slant, up to a certain point. Default: `30` |
-| `clustering_radius` | Optional | An integer that specifies which neighboring points count as being "close enough" to be potentially put in the same cluster. This parameter determines how big the candidate clusters should be, or, how many points should be put on a flat surface. A small clustering radius is likely to split different parts of a large cluster into distinct objects. A large clustering radius is likely to aggregate closely spaced clusters into one object. Default: `1` |
-| `clustering_strictness` | Optional | An integer that determines the probability threshold for sorting neighboring points into the same cluster, or how "easy" `viam-server` should determine it is to sort the points the machine's camera sees into this pointcloud. When the `clustering_radius` determines the size of the candidate clusters, then the clustering_strictness determines whether the candidates will count as a cluster. If `clustering_strictness` is set to a large value, many small clusters are likely to be made, rather than a few big clusters. The lower the number, the bigger your clusters will be. Default: `5` |
-
-If you want to identify multiple boxes over the flat plane with your segmenter:
-
-- First, [configure your frame system](/operate/mobility/define-geometry/#configure-a-reference-frame) to configure the relative spatial orientation of the components of your machine, including your [camera](/operate/reference/components/camera/), within Viam's [frame system service](/operate/mobility/define-geometry/).
- - After configuring your frame system, your camera will populate its own `Properties` with these spatial intrinsic parameters from the frame system.
- - You can get those parameters from your camera through the [camera API](/dev/reference/apis/components/camera/#getproperties).
-- The segmenter now returns multiple boxes within the `GeometryInFrame` object it captures.
-
-Click the **Save** button in the top right corner of the page and proceed to [test your segmenter](#test-your-segmenter).
-
-## Test your segmenter
-
-The following code uses the [`GetObjectPointClouds`](/dev/reference/apis/services/vision/#getobjectpointclouds) method to run a segmenter vision model on an image from the machine's camera `"cam1"`:
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-
-# Grab Viam's vision service for the segmenter
-my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
-
-objects = await my_segmenter.get_object_point_clouds("cam1")
-
-await robot.close()
-```
-
-To learn more about how to use segmentation, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
-"go.viam.com/rdk/config"
-"go.viam.com/rdk/services/vision"
-"go.viam.com/rdk/components/camera"
-)
-
-cameraName := "cam1" // Use the same component name that you have in your machine configuration
-
-// Get the vision service you configured with name "my_segmenter" from the machine
-mySegmenter, err := vision.from_robot(robot, "my_segmenter")
-if err != nil {
- logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get segments
-segments, err := mySegmenter.GetObjectPointClouds(context.Background(), cameraName, nil)
-if err != nil {
- logger.Fatalf("Could not get segments: %v", err)
-}
-if len(segments) > 0 {
- logger.Info(segments[0])
-}
-```
-
-To learn more about how to use segmentation, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's vision service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next Steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{% card link="/tutorials/services/navigate-with-rover-base/" noimage="true" %}}
-{{< /cards >}}
diff --git a/docs/operate/reference/services/vision/obstacles_distance.md b/docs/operate/reference/services/vision/obstacles_distance.md
deleted file mode 100644
index 3aea94a133..0000000000
--- a/docs/operate/reference/services/vision/obstacles_distance.md
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: "Configure an obstacles_distance Segmenter"
-linkTitle: "obstacles_distance"
-weight: 20
-type: "docs"
-description: "A segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle."
-service_description: "A segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle."
-tags: ["vision", "computer vision", "CV", "services", "segmentation"]
-aliases:
- - /ml/vision/obstacles_distance/
- - /data-ai/services/vision/obstacles_distance/
- - /services/vision/obstacles_distance/
-# SMEs: Bijan, Khari
----
-
-_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
-
-`obstacles_distance` is a segmenter that takes point clouds from a camera input and returns the average single closest point to the camera as a perceived obstacle.
-It is best for transient obstacle avoidance.
-
-For example, if you have an ultrasonic distance sensor as [`viam:ultrasonic:camera`](https://github.com/viam-modules/viam-ultrasonic), this model will query the sensor `"num_queries"` times, and then take the average point from those measurements and return that as an obstacle.
-
-First, make sure your camera is connected to your machine's computer and both are powered on.
-Then, configure the service:
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `obstacles distance` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-In your vision service's configuration panel, fill in the attributes field with the following:
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "num_queries": 10
-}
-```
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the vision service object to the services array in your raw JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_distance",
- "attributes": {
- "num_queries": 10
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "my_segmenter",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_distance",
- "attributes": {
- "num_queries": 10
- }
- }
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for an `obstacles_distance` segmenter:
-
-| Parameter | Required? | Description |
-| --------- | --------- | ----------- |
-| `num_queries`| Optional | How many times the model should call [`GetPointCloud()`](/dev/reference/apis/components/camera/#getpointcloud) before taking the average of the measurements and returning the single closest point. Accepts an integer between `1` and `20`. Default: `10` |
-
-## Test your segmenter
-
-The following code uses the [`GetObjectPointClouds`](/dev/reference/apis/services/vision/#getobjectpointclouds) method to run the segmenter on point clouds from the machine's camera `"cam1"`:
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-
-# Grab Viam's vision service for the segmenter
-my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
-
-objects = await my_segmenter.get_object_point_clouds("cam1")
-
-await robot.close()
-```
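-
-To confirm that the segmenter is returning results, you can inspect the returned objects. This is a minimal sketch that assumes the objects follow the `PointCloudObject` protobuf structure (`point_cloud` bytes plus `geometries` metadata); adjust the field access to match your SDK version.
-
-```python {class="line-numbers linkable-line-numbers"}
-# Inspect the objects returned by get_object_point_clouds() above.
-print(f"Found {len(objects)} obstacle(s)")
-for i, obj in enumerate(objects):
-    print(f"Obstacle {i}: {len(obj.point_cloud)} bytes of point cloud data")
-    for geom in obj.geometries.geometries:
-        print(f"  center: x={geom.center.x}, y={geom.center.y}, z={geom.center.z}")
-```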
-
-To learn more about how to use segmentation, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
-  "context"
-
-  "go.viam.com/rdk/services/vision"
-)
-
-cameraName := "cam1" // Use the same component name that you have in your machine configuration
-
-// Get the vision service you configured with name "my_segmenter" from the machine
-mySegmenter, err := vision.FromRobot(robot, "my_segmenter")
-if err != nil {
-  logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get segments
-segments, err := mySegmenter.GetObjectPointClouds(context.Background(), cameraName, nil)
-if err != nil {
-  logger.Fatalf("Could not get segments: %v", err)
-}
-if len(segments) > 0 {
-  logger.Info(segments[0])
-}
-```
-
-To learn more about how to use segmentation, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's vision service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next Steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{% card link="/tutorials/services/navigate-with-rover-base/" noimage="true" %}}
-{{< /cards >}}
diff --git a/docs/operate/reference/services/vision/obstacles_pointcloud.md b/docs/operate/reference/services/vision/obstacles_pointcloud.md
deleted file mode 100644
index c8314402b8..0000000000
--- a/docs/operate/reference/services/vision/obstacles_pointcloud.md
+++ /dev/null
@@ -1,188 +0,0 @@
----
-title: "Configure an obstacles_pointcloud Segmenter"
-linkTitle: "obstacles_pointcloud"
-weight: 20
-type: "docs"
-description: "A segmenter that identifies well-separated objects above a flat plane."
-service_description: "A segmenter that identifies well-separated objects above a flat plane."
-tags: ["vision", "computer vision", "CV", "services", "segmentation"]
-aliases:
- - /ml/vision/obstacles_pointcloud/
- - /data-ai/services/vision/obstacles_pointcloud/
- - /services/vision/obstacles_pointcloud/
-# SMEs: Bijan, Khari
----
-
-_Changed in [RDK v0.2.36 and API v0.1.118](/dev/reference/changelog/#vision-service)_
-
-`obstacles_pointcloud` is a segmenter that identifies well-separated objects above a flat plane.
-It first identifies the biggest plane in the scene, eliminates that plane, and clusters the remaining points into objects.
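-
-As a rough illustration of that idea only (not the RDK implementation), the following sketch removes points near an assumed flat ground plane at a fixed height and then groups the remaining points by distance; the real algorithm fits the plane itself and uses the probabilistic clustering controlled by the attributes described below.
-
-```python {class="line-numbers linkable-line-numbers"}
-# Conceptual sketch of the obstacles_pointcloud pipeline: drop near-plane points,
-# then group what remains into clusters by distance. Not the actual RDK algorithm.
-import math
-
-
-def segment(points, plane_z=0.0, max_dist_from_plane=100.0,
-            clustering_radius=50.0, min_points_in_segment=3):
-    # 1. Remove points within max_dist_from_plane of the assumed ground plane z = plane_z.
-    remaining = [p for p in points if abs(p[2] - plane_z) > max_dist_from_plane]
-
-    # 2. Greedily grow clusters: a point joins a cluster if it is within
-    #    clustering_radius of any point already in that cluster.
-    clusters, unassigned = [], list(remaining)
-    while unassigned:
-        cluster, grew = [unassigned.pop()], True
-        while grew:
-            grew = False
-            for p in list(unassigned):
-                if any(math.dist(p, q) <= clustering_radius for q in cluster):
-                    cluster.append(p)
-                    unassigned.remove(p)
-                    grew = True
-        if len(cluster) >= min_points_in_segment:
-            clusters.append(cluster)
-    return clusters
-
-
-# Three points well above the plane form one obstacle; the point at z=5 is treated as ground.
-pts = [(0, 0, 5), (1000, 0, 400), (1001, 0, 401), (1002, 1, 402)]
-print(len(segment(pts, min_points_in_segment=2)))  # 1
-```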
-
-First, make sure your camera is connected to your machine's computer and both are powered on.
-Then, configure the service:
-
-{{< tabs >}}
-{{% tab name="Builder" %}}
-
-Navigate to the **CONFIGURE** tab of your machine's page in the [Viam app](https://app.viam.com).
-Click the **+** icon next to your machine part in the left-hand menu and select **Service**.
-Select the `vision` type, then select the `obstacles pointcloud` model.
-Enter a name or use the suggested name for your service and click **Create**.
-
-In your vision service's configuration panel, fill in the attributes field with your desired values, using the following template:
-
-```json {class="line-numbers linkable-line-numbers"}
-{
- "min_points_in_plane": ,
- "min_points_in_segment": ,
- "max_dist_from_plane_mm": ,
- "ground_plane_normal_vec": {
- "x": ,
- "y": ,
- "z":
- },
- "ground_angle_tolerance_degs": ,
- "clustering_radius": ,
- "clustering_strictness":
-}
-```
-
-{{% /tab %}}
-{{% tab name="JSON Template" %}}
-
-Add the vision service object to the services array in your raw JSON configuration:
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
- {
- "name": "",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_pointcloud"
- "attributes": {
- "min_points_in_plane": ,
- "min_points_in_segment": ,
- "max_dist_from_plane_mm": ,
- "ground_plane_normal_vec": {
- "x": ,
- "y": ,
- "z":
- },
- "ground_angle_tolerance_degs": ,
- "clustering_radius": ,
- "clustering_strictness":
- }
- },
- ... // Other services
-]
-```
-
-{{% /tab %}}
-{{% tab name="JSON Example" %}}
-
-```json {class="line-numbers linkable-line-numbers"}
-"services": [
-{
- "name": "rc_segmenter",
- "type": "vision",
- "namespace": "rdk",
- "model": "obstacles_pointcloud",
- "attributes": {
- "min_points_in_plane": 1500,
- "min_points_in_segment": 250,
- "max_dist_from_plane_mm": 10.0,
- "ground_plane_normal_vec": {"x": 0, "y":0, "z": 1},
- "ground_angle_tolerance_degs": 20.0,
- "clustering_radius": 5,
- "clustering_strictness": 3
- }
-}
-]
-```
-
-{{% /tab %}}
-{{< /tabs >}}
-
-The following parameters are available for an `obstacles_pointcloud` segmenter:
-
-| Parameter | Required? | Description |
-| --------- | --------- | ----------- |
-| `min_points_in_plane` | Optional | An integer that specifies the minimum number of points that must lie in the flat surface or ground plane for it to count as a plane when clustering. This distinguishes large planes, like floors and walls, from small planes, like the tops of bottle caps. Default: `500` |
-| `min_points_in_segment` | Optional | An integer that sets the minimum size of the returned objects; any found objects below that size are filtered out. Default: `10` |
-| `clustering_radius` | Optional | An integer that specifies which neighboring points count as being "close enough" to be potentially put in the same cluster. This parameter determines how big the candidate clusters should be. A small clustering radius is likely to split different parts of a large cluster into distinct objects. A large clustering radius is likely to aggregate closely spaced clusters into one object. Default: `1` |
-| `clustering_strictness` | Optional | An integer that determines the probability threshold for sorting neighboring points into the same cluster, that is, how hard or easy it should be for `viam-server` to sort the points the machine's camera sees into a cluster. While `clustering_radius` determines the size of the candidate clusters, `clustering_strictness` determines whether a candidate counts as a cluster. If `clustering_strictness` is set to a large value, many small clusters are likely to be made, rather than a few big clusters. The lower the number, the bigger your clusters will be. Default: `5` |
-| `max_dist_from_plane_mm` | Optional | A float that determines how far above and below an ideal ground plane a point can be and still count as part of the plane to be removed. For fields with tall grass, this should be a high number. Default: `100` (mm) |
-| `ground_plane_normal_vec` | Optional | An `(x, y, z)` vector that represents the normal vector of the ground plane. Different cameras have different coordinate systems. For example, a lidar's ground plane normal points in the `+z` direction `(0, 0, 1)`. On the other hand, the Intel RealSense `+z` direction points out of the camera lens, so its ground plane normal is in the negative `y` direction `(0, -1, 0)`. Default: `(0, 0, 1)` |
-| `ground_angle_tolerance_degs` | Optional | An integer that determines how strictly the found ground plane must match the `ground_plane_normal_vec`. For example, even if the ideal ground plane is perfectly flat, a rover may encounter slopes and hills; the algorithm should still find a ground plane even if the found plane is at a slant, up to this tolerance. Default: `30` |
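-
-For example, for a camera whose ground plane normal points in the negative `y` direction, such as an Intel RealSense, the attributes might look like the following (illustrative values, not tuned recommendations):
-
-```json {class="line-numbers linkable-line-numbers"}
-{
-  "min_points_in_plane": 1500,
-  "min_points_in_segment": 250,
-  "max_dist_from_plane_mm": 100.0,
-  "ground_plane_normal_vec": { "x": 0, "y": -1, "z": 0 },
-  "ground_angle_tolerance_degs": 30.0,
-  "clustering_radius": 5,
-  "clustering_strictness": 3
-}
-```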
-
-Click the **Save** button in the top right corner of the page and proceed to [test your segmenter](#test-your-segmenter).
-
-## Test your segmenter
-
-The following code uses the [`GetObjectPointClouds`](/dev/reference/apis/services/vision/#getobjectpointclouds) method to run the segmenter on point clouds from the machine's camera `"cam1"`:
-
-{{< tabs >}}
-{{% tab name="Python" %}}
-
-```python {class="line-numbers linkable-line-numbers"}
-from viam.services.vision import VisionClient
-
-robot = await connect()
-
-# Grab Viam's vision service for the segmenter
-my_segmenter = VisionClient.from_robot(robot, "my_segmenter")
-
-objects = await my_segmenter.get_object_point_clouds("cam1")
-
-await robot.close()
-```
-
-To learn more about how to use segmentation, see the [Python SDK docs](https://python.viam.dev/autoapi/viam/services/vision/index.html).
-
-{{% /tab %}}
-{{% tab name="Go" %}}
-
-```go {class="line-numbers linkable-line-numbers"}
-import (
-  "context"
-
-  "go.viam.com/rdk/services/vision"
-)
-
-cameraName := "cam1" // Use the same component name that you have in your machine configuration
-
-// Get the vision service you configured with name "my_segmenter" from the machine
-mySegmenter, err := vision.FromRobot(robot, "my_segmenter")
-if err != nil {
-  logger.Fatalf("Cannot get vision service: %v", err)
-}
-
-// Get segments
-segments, err := mySegmenter.GetObjectPointClouds(context.Background(), cameraName, nil)
-if err != nil {
-  logger.Fatalf("Could not get segments: %v", err)
-}
-if len(segments) > 0 {
-  logger.Info(segments[0])
-}
-```
-
-To learn more about how to use segmentation, see the [Go SDK docs](https://pkg.go.dev/go.viam.com/rdk/vision).
-
-{{% /tab %}}
-{{< /tabs >}}
-
-{{% alert title="Tip" color="tip" %}}
-To see more code examples of how to use Viam's vision service, see [our example repo](https://github.com/viamrobotics/vision-service-examples).
-{{% /alert %}}
-
-## Next Steps
-
-For general configuration and development info, see:
-
-{{< cards >}}
-{{% card link="/operate/get-started/supported-hardware/" noimage="true" %}}
-{{% card link="/operate/control/web-app/" noimage="true" %}}
-{{% card link="/tutorials/services/navigate-with-rover-base/" noimage="true" %}}
-{{< /cards >}}