
Commit

Move vision service and update typesense logic
npentrel committed Jan 7, 2025
1 parent b29a737 commit 49a15b1
Showing 43 changed files with 134 additions and 130 deletions.
22 changes: 11 additions & 11 deletions .github/workflows/docs.yml
@@ -51,13 +51,13 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: components-models-file
path: ./public/components/typesense.json
path: ./public/operate/reference/components/typesense.json

- name: Upload services models file
uses: actions/upload-artifact@v4
with:
name: services-models-file
path: ./public/services/typesense.json
# - name: Upload services models file
# uses: actions/upload-artifact@v4
# with:
# name: services-models-file
# path: ./public/operate/reference/services/typesense.json

deploy:
environment:
@@ -134,11 +134,11 @@ jobs:
with:
name: components-models-file
path: "components"
- name: Download the services models file
uses: actions/download-artifact@v4
with:
name: services-models-file
path: "services"
# - name: Download the services models file
# uses: actions/download-artifact@v4
# with:
# name: services-models-file
# path: "services"
- run: pip install -r .github/workflows/requirements.txt
- name: "Update modular resources"
run: python3 .github/workflows/get_modular_resources.py
26 changes: 13 additions & 13 deletions .github/workflows/get_modular_resources.py
@@ -101,19 +101,19 @@ async def main():
print("INSERTED")
print(insert_resp)

# Get built-in resources from services/typesense.json
with open('services/typesense.json') as f:
resources = json.load(f)
for r in resources:
print("RESOURCE")
print(r)
r["last_updated"] = time_now
r["total_organization_usage"] = int(r["total_organization_usage"])
r["total_robot_usage"] = int(r["total_robot_usage"])
print(r)
insert_resp = typesense_client.collections['resources'].documents.upsert(r)
print("INSERTED")
print(insert_resp)
# # Get built-in resources from services/typesense.json
# with open('services/typesense.json') as f:
# resources = json.load(f)
# for r in resources:
# print("RESOURCE")
# print(r)
# r["last_updated"] = time_now
# r["total_organization_usage"] = int(r["total_organization_usage"])
# r["total_robot_usage"] = int(r["total_robot_usage"])
# print(r)
# insert_resp = typesense_client.collections['resources'].documents.upsert(r)
# print("INSERTED")
# print(insert_resp)

# Create a request to list registry items and get the response from the app
request = ListRegistryItemsRequest(organization_id=cloud._organization_id)
4 changes: 2 additions & 2 deletions docs/data-ai/ai/deploy.md
@@ -59,12 +59,12 @@ You can search the machine learning models that are available to deploy on this

On its own the ML model service only runs the model.
After deploying your model, you need to configure an additional service to use the deployed model.
For example, you can configure an [`mlmodel` vision service](/data-ai/reference/vision/) to visualize the inferences your model makes.
For example, you can configure an [`mlmodel` vision service](/operate/reference/services/vision/) to visualize the inferences your model makes.
Follow our docs to [run inference](/data-ai/ai/run-inference/) to add an `mlmodel` vision service and see inferences.

For other use cases, consider [creating custom functionality with a module](/operate/get-started/other-hardware/).

{{< alert title="Add support for other models" color="tip" >}}
ML models must be designed in particular shapes to work with the `mlmodel` [classification](/data-ai/reference/vision/mlmodel/) or [detection](/data-ai/reference/vision/mlmodel/) model of Viam's [vision service](/data-ai/reference/vision/).
ML models must be designed in particular shapes to work with the `mlmodel` [classification](/operate/reference/services/vision/mlmodel/) or [detection](/operate/reference/services/vision/mlmodel/) model of Viam's [vision service](/operate/reference/services/vision/).
See [ML Model Design](/registry/advanced/mlmodel-design/) to design a modular ML model service with models that work with vision.
{{< /alert >}}
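
For a rough sense of what using the deployed model then looks like in code, the following is a minimal Python SDK sketch that requests classifications from an `mlmodel` vision service; the machine address, API key values, and the `my-vision` and `my-camera` names are placeholder assumptions.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.services.vision import VisionClient


async def main():
    # Connect to the machine (placeholder address and API key credentials).
    opts = RobotClient.Options.with_api_key(
        api_key="<API-KEY>",
        api_key_id="<API-KEY-ID>",
    )
    machine = await RobotClient.at_address("<MACHINE-ADDRESS>", opts)

    # "my-vision" is assumed to be an `mlmodel` vision service configured to
    # use the deployed ML model; "my-camera" is an assumed camera name.
    vision = VisionClient.from_robot(machine, "my-vision")
    classifications = await vision.get_classifications_from_camera("my-camera", 3)
    for classification in classifications:
        print(classification.class_name, classification.confidence)

    await machine.close()


if __name__ == "__main__":
    asyncio.run(main())
```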
2 changes: 1 addition & 1 deletion docs/data-ai/ai/run-inference.md
@@ -57,7 +57,7 @@ Now if you reopen the panel, you will only see classifications or detections wit

{{< /expand>}}

For more detailed information, including optional attribute configuration, see the [`mlmodel` docs](/data-ai/reference/vision/mlmodel/).
For more detailed information, including optional attribute configuration, see the [`mlmodel` docs](/operate/reference/services/vision/mlmodel/).
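
As a hedged illustration of that attribute configuration, a minimal `mlmodel` vision service entry in a machine's JSON config might look like the sketch below; the service and ML model service names are placeholders, and `default_minimum_confidence` is shown only as an example of an optional attribute, so confirm the attribute names against the linked docs.

```json {class="line-numbers linkable-line-numbers"}
{
  "name": "my-vision",
  "namespace": "rdk",
  "type": "vision",
  "model": "mlmodel",
  "attributes": {
    "mlmodel_name": "my-ml-model-service",
    "default_minimum_confidence": 0.5
  }
}
```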

## Use an SDK

4 changes: 2 additions & 2 deletions docs/data-ai/capture-data/filter-before-sync.md
@@ -22,15 +22,15 @@ The following steps use the [`filtered_camera`](https://github.com/erh/filtered_
Add an ML model service on your machine that is compatible with the ML model you want to use, for example [TFLite CPU](https://github.com/viam-modules/mlmodel-tflite).

{{% /tablestep %}}
{{% tablestep link="/data-ai/reference/vision/"%}}
{{% tablestep link="/operate/reference/services/vision/"%}}
{{<imgproc src="/services/icons/ml.svg" class="fill alignleft" style="width: 150px" declaredimensions=true alt="Train models">}}
**2. Select a suitable ML model**

Click **Select model** on the ML model service configuration panel, then select an [existing model](https://app.viam.com/registry?type=ML+Model) you want to use, or click **Upload a new model** to upload your own.
If you're not sure which model to use, you can use [`EfficientDet-COCO`](https://app.viam.com/ml-model/viam-labs/EfficientDet-COCO) from the **Registry**, which can detect people and animals, among other things.

{{% /tablestep %}}
{{% tablestep link="/data-ai/reference/vision/"%}}
{{% tablestep link="/operate/reference/services/vision/"%}}
{{<imgproc src="/services/icons/vision.svg" class="fill alignleft" style="width: 150px" declaredimensions=true alt="Train models">}}
**3. Add a vision service to use with the ML model**

4 changes: 2 additions & 2 deletions docs/dev/reference/apis/services/vision.md
@@ -90,7 +90,7 @@ Any camera that can return 3D pointclouds can use 3D object segmentation.

- [GetObjectPointClouds()](/dev/reference/apis/services/vision/#getobjectpointclouds)

The [vision service](/data-ai/reference/vision/) supports the following methods:
The [vision service](/operate/reference/services/vision/) supports the following methods:

{{< readfile "/static/include/services/apis/generated/vision-table.md" >}}

@@ -102,7 +102,7 @@ To get started using Viam's SDKs to connect to and control your machine, go to y

When executed, this sample code creates a connection to your machine as a client.

The following examples assume that you have a machine configured with a [camera](/operate/reference/components/camera/) and a vision service [detector](/data-ai/reference/vision/#detections), [classifier](/data-ai/reference/vision/#classifications) or [segmenter](/data-ai/reference/vision/#segmentations).
The following examples assume that you have a machine configured with a [camera](/operate/reference/components/camera/) and a vision service [detector](/operate/reference/services/vision/#detections), [classifier](/operate/reference/services/vision/#classifications) or [segmenter](/operate/reference/services/vision/#segmentations).

{{< tabs >}}
{{% tab name="Python" %}}
12 changes: 6 additions & 6 deletions docs/dev/reference/changelog.md
@@ -620,7 +620,7 @@ You can manage the data synced to Viam's cloud with the new capability for bulk

{{% alert title="Important: Breaking Change" color="note" %}}

The [vision service](/data-ai/reference/vision/) became more modular in RDK [v0.2.36](https://github.com/viamrobotics/rdk/releases/tag/v0.2.36), API [v0.1.118](https://github.com/viamrobotics/api/releases/tag/v0.1.118), and Python SDK [v0.2.18](https://github.com/viamrobotics/viam-python-sdk/releases/tag/v0.2.18).
The [vision service](/operate/reference/services/vision/) became more modular in RDK [v0.2.36](https://github.com/viamrobotics/rdk/releases/tag/v0.2.36), API [v0.1.118](https://github.com/viamrobotics/api/releases/tag/v0.1.118), and Python SDK [v0.2.18](https://github.com/viamrobotics/viam-python-sdk/releases/tag/v0.2.18).

Find more information on each of the changes below.

@@ -664,7 +664,7 @@ detections = await vision.get_detections(img, "find_objects")

#### Color detector configurations

You can replace existing color detectors by [configuring new ones in the UI](/data-ai/reference/vision/color_detector/) or you can update the JSON configuration of your machines:
You can replace existing color detectors by [configuring new ones in the UI](/operate/reference/services/vision/color_detector/) or you can update the JSON configuration of your machines:

{{< tabs >}}
{{% tab name="New Way" %}}
@@ -739,7 +739,7 @@ You can replace existing color detectors by [configuring new ones in the UI](/da

#### TFLite detector configurations

You can replace existing TFLite detectors by [configuring new ones in the UI](/data-ai/reference/vision/mlmodel/) or you can update the JSON configuration of your machines:
You can replace existing TFLite detectors by [configuring new ones in the UI](/operate/reference/services/vision/mlmodel/) or you can update the JSON configuration of your machines:

{{< tabs >}}
{{% tab name="New Way" %}}
@@ -799,7 +799,7 @@ You can replace existing TFLite detectors by [configuring new ones in the UI](/d

#### TFLite Classifier configurations

You can replace existing TFLite classifiers by [configuring new ones in the UI](/data-ai/reference/vision/mlmodel/) or you can update the JSON configuration of your machines:
You can replace existing TFLite classifiers by [configuring new ones in the UI](/operate/reference/services/vision/mlmodel/) or you can update the JSON configuration of your machines:

{{< tabs >}}
{{% tab name="New Way" %}}
@@ -859,7 +859,7 @@ You can replace existing TFLite classifiers by [configuring new ones in the UI](

#### Radius Clustering 3D segmenter configurations

You can replace existing Radius Clustering 3D segmenters by [configuring new ones in the UI](/data-ai/reference/vision/obstacles_pointcloud/) or you can update the JSON configuration of your machines:
You can replace existing Radius Clustering 3D segmenters by [configuring new ones in the UI](/operate/reference/services/vision/obstacles_pointcloud/) or you can update the JSON configuration of your machines:

{{< tabs >}}
{{% tab name="New Way" %}}
@@ -913,7 +913,7 @@ You can replace existing Radius Clustering 3D segmenters by [configuring new one

#### Detector to 3D segmenter configurations

You can replace existing Radius Clustering 3D segmenters by [configuring new ones in the UI](/data-ai/reference/vision/detector_3d_segmenter/) or you can update the JSON configuration of your machines:
You can replace existing Radius Clustering 3D segmenters by [configuring new ones in the UI](/operate/reference/services/vision/detector_3d_segmenter/) or you can update the JSON configuration of your machines:

{{< tabs >}}
{{% tab name="New Way" %}}
2 changes: 1 addition & 1 deletion docs/dev/reference/glossary/api-namespace-triplet.md
@@ -18,4 +18,4 @@ This requirement ensures that when a resource of that model is deployed, you can
For example:

- The API of the built-in component [camera](/operate/reference/components/camera/) is `rdk:component:camera`, which exposes methods such as `GetImage()`.
- The API of the built-in service [vision](/data-ai/reference/vision/) is `rdk:service:vision`, which exposes methods such as `GetDetectionsFromCamera()`.
- The API of the built-in service [vision](/operate/reference/services/vision/) is `rdk:service:vision`, which exposes methods such as `GetDetectionsFromCamera()`.
2 changes: 1 addition & 1 deletion docs/dev/reference/glossary/subtype.md
@@ -11,6 +11,6 @@ Resource models belonging to a subtype share the same API.

For example, an arm is a subtype of the {{< glossary_tooltip term_id="component" text="component" >}} resource type, while the `ur5e` is a {{< glossary_tooltip term_id="model" text="model" >}} of the arm subtype's API.

The [Vision Service](/data-ai/reference/vision/) is a subtype of the {{< glossary_tooltip term_id="service" text="service" >}} resource type.
The [Vision Service](/operate/reference/services/vision/) is a subtype of the {{< glossary_tooltip term_id="service" text="service" >}} resource type.

A subtype is designated by its {{< glossary_tooltip term_id="api-namespace-triplet" text="api-namespace-triplet" >}}.
2 changes: 1 addition & 1 deletion docs/operate/reference/components/camera/_index.md
@@ -97,5 +97,5 @@ For general configuration, development, and usage info, see:
You can also use the camera component with the following services:

- [Data management service](/data-ai/capture-data/capture-sync/): To capture and sync the camera's data
- [Vision service](/data-ai/reference/vision/): To use computer vision to interpret the camera stream
- [Vision service](/operate/reference/services/vision/): To use computer vision to interpret the camera stream
- [SLAM service](/operate/reference/services/slam/): For mapping
8 changes: 4 additions & 4 deletions docs/operate/reference/components/camera/transform.md
@@ -123,7 +123,7 @@ The following are the transformation objects available for the `pipeline`:

### Classifications

Classifications overlay text from the `GetClassifications` method of the [vision service](/data-ai/reference/vision/) onto the image.
Classifications overlay text from the `GetClassifications` method of the [vision service](/operate/reference/services/vision/) onto the image.

```json {class="line-numbers linkable-line-numbers"}
{
@@ -144,7 +144,7 @@ Classifications overlay text from the `GetClassifications` method of the [vision

**Attributes:**

- `classifier_name`: The name of the classifier in the [vision service](/data-ai/reference/vision/).
- `classifier_name`: The name of the classifier in the [vision service](/operate/reference/services/vision/).
- `confidence_threshold`: The threshold above which to display classifications.
- `max_classifications`: _Optional_. The maximum number of classifications to display on the camera stream at any given time. Default: `1`.
- `valid_labels`: _Optional_. An array of labels that you want to see classifications for on the camera stream. If not specified, all labels from the classifier are used.
@@ -233,7 +233,7 @@ If you have a 100 x 200 image, and you want to crop to a box between the points

### Detections

The Detections transform takes the input image and overlays the detections from a given detector configured within the [vision service](/data-ai/reference/vision/).
The Detections transform takes the input image and overlays the detections from a given detector configured within the [vision service](/operate/reference/services/vision/).

```json {class="line-numbers linkable-line-numbers"}
{
@@ -253,7 +253,7 @@ The Detections transform takes the input image and overlays the detections from

**Attributes:**

- `detector_name`: The name of the detector configured in the [vision service](/data-ai/reference/vision/).
- `detector_name`: The name of the detector configured in the [vision service](/operate/reference/services/vision/).
- `confidence_threshold`: The threshold above which to display detections (a decimal between 0 and 1).
- `valid_labels`: _Optional_. An array of labels that you want to see detections for on the camera stream. If not specified, all labels from the detector are used.

4 changes: 2 additions & 2 deletions docs/operate/reference/services/frame-system/_index.md
@@ -17,7 +17,7 @@ updated: "2024-10-18"
# SMEs: Peter L, Gautham, Bijan
---

The frame system is the basis for some of Viam's other services, like [motion](/operate/reference/services/motion/) and [vision](/data-ai/reference/vision/).
The frame system is the basis for some of Viam's other services, like [motion](/operate/reference/services/motion/) and [vision](/operate/reference/services/vision/).
It stores the required contextual information to use the position and orientation readings returned by some components.

It is a mostly static system for storing the "reference frame" of each component of a machine within a coordinate system configured by the user.
@@ -284,7 +284,7 @@ For example:

However, an arm with an attached [camera](/operate/reference/components/camera/) might generate additional information about the poses of other objects with respect to reference frames on the machine.

With the [vision service](/data-ai/reference/vision/), the camera might detect objects that do not have a relationship to a `world` reference frame.
With the [vision service](/operate/reference/services/vision/), the camera might detect objects that do not have a relationship to a `world` reference frame.

If a [camera](/operate/reference/components/camera/) is looking for an apple or an orange, the arm can be commanded to move to the detected fruit's location by providing an additional transform that contains the detected pose with respect to the camera that performed the detection.

2 changes: 1 addition & 1 deletion docs/operate/reference/services/navigation/_index.md
@@ -167,7 +167,7 @@ The following attributes are available for `Navigation` services:

The [frame system service](/operate/mobility/define-geometry/) is an internally managed and mostly static system for storing the reference frame of each component of a machine within a coordinate system configured by the user.

It stores the required contextual information for Viam's services like [Motion](/operate/reference/services/motion/) and [Vision](/data-ai/reference/vision/) to use the position and orientation readings returned by components like [movement sensors](/operate/reference/components/movement-sensor/).
It stores the required contextual information for Viam's services like [Motion](/operate/reference/services/motion/) and [Vision](/operate/reference/services/vision/) to use the position and orientation readings returned by components like [movement sensors](/operate/reference/components/movement-sensor/).

{{% /alert %}}

File renamed without changes.
File renamed without changes.
6 changes: 3 additions & 3 deletions docs/tutorials/configure/pet-photographer.md
@@ -828,7 +828,7 @@ Whether you've downloaded the `colorfilter` module, or written your own color fi
Next, add the following services to your smart machine to support the color filter module:

- The [data management service](/data-ai/capture-data/capture-sync/) enables your smart machine to capture data and sync it to the cloud.
- The [vision service](/data-ai/reference/vision/#detections) enables your smart machine to perform color detection on objects in a camera stream.
- The [vision service](/operate/reference/services/vision/#detections) enables your smart machine to perform color detection on objects in a camera stream.

### Add the data management service

@@ -873,7 +873,7 @@ To enable data capture on your machine, add and configure the [data management s

### Add the vision service

To enable your smart machine to detect a specific color in its camera stream, add a [`color_detector` vision service](/data-ai/reference/vision/color_detector/).
To enable your smart machine to detect a specific color in its camera stream, add a [`color_detector` vision service](/operate/reference/services/vision/color_detector/).
For this tutorial, we will configure the vision service to recognize a blue dog collar using `#43A1D0` or `rgb(67, 161, 208)`.
If you want to use a different item, or match a color that more closely resembles your pet, you can use a different color.
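
For illustration, a `color_detector` configured with these values might look like the following sketch (the service name is a placeholder; see the linked color detector docs for the authoritative attribute list):

```json {class="line-numbers linkable-line-numbers"}
{
  "name": "my_color_detector",
  "namespace": "rdk",
  "type": "vision",
  "model": "color_detector",
  "attributes": {
    "detect_color": "#43A1D0",
    "hue_tolerance_pct": 0.06,
    "segment_size_px": 100
  }
}
```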

@@ -896,7 +896,7 @@ Your configuration should look like the following:

![The vision service configuration panel showing the color set to blue, the hue tolerance set to 0.06, and the segment size set to 100.](/tutorials/pet-photographer/vision-service.png)

For more detailed information, refer to [Configure a color detector](/data-ai/reference/vision/color_detector/).
For more detailed information, refer to [Configure a color detector](/operate/reference/services/vision/color_detector/).

{{% /tab %}}
{{% tab name="JSON Template" %}}