
Merge pull request #2180 from MicrosoftDocs/main
1/7/2025 AM Publish
Taojunshen authored Jan 7, 2025
2 parents 40791b4 + 4996669 commit 8b21d47
Showing 8 changed files with 26 additions and 14 deletions.
8 changes: 4 additions & 4 deletions articles/ai-studio/how-to/model-catalog-overview.md
@@ -165,7 +165,7 @@ Pay-per-token billing is available only to users whose Azure subscription belong

### Network isolation for models deployed via serverless APIs

- Managed computes for models deployed as serverless APIs follow the public network access flag setting of the Azure AI Foundry hub that has the project in which the deployment exists. To help secure your managed compute, disable the public network access flag on your Azure AI Foundry hub. You can help secure inbound communication from a client to your managed compute by using a private endpoint for the hub.
+ Endpoints for models deployed as serverless APIs follow the public network access flag setting of the Azure AI Foundry hub that has the project in which the deployment exists. To help secure your serverless API endpoint, disable the public network access flag on your Azure AI Foundry hub. You can help secure inbound communication from a client to your endpoint by using a private endpoint for the hub.

To set the public network access flag for the Azure AI Foundry hub:

@@ -177,11 +177,11 @@ To set the public network access flag for the Azure AI Foundry hub:

#### Limitations

- * If you have an Azure AI Foundry hub with a managed compute created before July 11, 2024, managed computes added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new managed compute for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
+ * If you have an Azure AI Foundry hub with a private endpoint created before July 11, 2024, serverless API endpoints added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.

- * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a managed compute on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+ * If you have an Azure AI Foundry hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing serverless API deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.

- * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for MaaS deployments in private hubs, because private hubs have the public network access flag disabled.
+ * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for serverless API deployments in private hubs, because private hubs have the public network access flag disabled.

* Any network configuration change (for example, enabling or disabling the public network access flag) might take up to five minutes to propagate.
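
If you prefer to script the flag change instead of using the portal steps above, the following is a minimal sketch with the Azure Machine Learning Python SDK (`azure-ai-ml`). It assumes the Azure AI Foundry hub can be fetched and updated as a hub workspace; the subscription, resource group, and hub names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Placeholders: substitute your subscription, resource group, and hub name.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

# Assumption for this sketch: the Azure AI Foundry hub is managed as a hub workspace.
# Fetch it, flip the public network access flag, and push the update.
hub = ml_client.workspaces.get(name="<hub-name>")
hub.public_network_access = "Disabled"
ml_client.workspaces.begin_update(hub).result()
```

As noted above, allow a few minutes for the change to propagate before testing the endpoint.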

@@ -24,6 +24,17 @@ In this article, learn about Azure Machine Learning CLI (v2) releases.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes-v2%22&locale=en-us`

## 2024-12-17

### Azure Machine Learning CLI (v2) v2.33.0

- `az ml workspace create --provision-network-now`
  - Added the `--provision-network-now` parameter, which triggers provisioning of the managed network during workspace creation when the managed network is enabled; otherwise the parameter has no effect.
- `az ml capability-host`
  - Added create operation.
  - Added get operation.
  - Added delete operation.

## 2024-09-18

### Azure Machine Learning CLI (v2) v2.30.0
@@ -22,7 +22,7 @@ Models in the model catalog are continually refreshed with newer and more capabl
> This article describes deprecation and retirement only for models that can be deployed to __serverless APIs__, not managed compute. To learn more about the differences between deployment to serverless APIs and managed computes, see [Model Catalog and Collections](concept-model-catalog.md).
> [!NOTE]
- > Azure OpenAI models in the model catalog are provided through Azure OpenAI Service. For information about Azure Open AI model deprecation and retirement, see the [Azure OpenAI service product documentation](/azure/ai-services/openai/concepts/model-retirements).
+ > Azure OpenAI models in the model catalog are provided through Azure OpenAI Service. For information about Azure OpenAI model deprecation and retirement, see the [Azure OpenAI service product documentation](/azure/ai-services/openai/concepts/model-retirements).
## Model lifecycle stages

@@ -72,4 +72,4 @@ Models labeled _Retired_ are no longer available for use. You can't create new d
## Related content

- [Model Catalog and Collections](concept-model-catalog.md)
- [Data, privacy, and security for use of models through the Model Catalog](concept-data-privacy.md)
2 changes: 1 addition & 1 deletion articles/search/includes/quickstarts/java.md
@@ -5,7 +5,7 @@ ms.service: azure-ai-search
ms.custom:
- ignite-2023
ms.topic: include
- ms.date: 11/01/2024
+ ms.date: 01/07/2025
---

Build a Java console application using the [Azure.Search.Documents](/java/api/overview/azure/search) library to create, load, and query a search index.
1 change: 0 additions & 1 deletion articles/search/includes/quickstarts/python-semantic.md
@@ -106,7 +106,6 @@ semantic_config = SemanticConfiguration(
# Create the semantic settings with the configuration
semantic_search = SemanticSearch(configurations=[semantic_config])

- semantic_settings = SemanticSearch(configurations=[semantic_config])
scoring_profiles = []
suggester = [{'name': 'sg', 'source_fields': ['Tags', 'Address/City', 'Address/Country']}]
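
For context, here's a sketch of how the pieces above fit together in the quickstart. It assumes `azure-search-documents` 11.4 or later and field names from the hotels sample; treat both as assumptions to verify against your index.

```python
from azure.search.documents.indexes.models import (
    SemanticConfiguration,
    SemanticField,
    SemanticPrioritizedFields,
    SemanticSearch,
)

# A semantic configuration tells the semantic ranker which fields carry the
# title, keywords, and main content of each document.
semantic_config = SemanticConfiguration(
    name="my-semantic-config",
    prioritized_fields=SemanticPrioritizedFields(
        title_field=SemanticField(field_name="HotelName"),
        keywords_fields=[SemanticField(field_name="Tags")],
        content_fields=[SemanticField(field_name="Description")],
    ),
)

# A single SemanticSearch object wraps the configuration; the duplicate
# assignment removed in the change above was redundant.
semantic_search = SemanticSearch(configurations=[semantic_config])
```

The `semantic_search` object is then passed to the index definition together with `scoring_profiles` and the suggester shown above.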

6 changes: 3 additions & 3 deletions articles/search/search-get-started-portal-import-vectors.md
@@ -193,12 +193,12 @@ The wizard supports Azure AI Vision image retrieval through multimodal embedding

1. Make sure your Azure AI Search service is in the same region.

- 1. After the service is deployed, go to the resource and select **Access control** to assign the **Cognitive Services OpenAI User** role to your search service's managed identity. Optionally, you can use key-based authentication for the connection.
+ 1. After the service is deployed, go to the resource and select **Access control** to assign the **Cognitive Services User** role to your search service's managed identity. Optionally, you can use key-based authentication for the connection.

After you finish these steps, you should be able to select the Azure AI Vision vectorizer in the **Import and vectorize data** wizard.

> [!NOTE]
- > If you can't select an Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure that your search service's managed identity has **Cognitive Services OpenAI User** permissions.
+ > If you can't select an Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure that your search service's managed identity has **Cognitive Services User** permissions.
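
For readers who automate the role assignment instead of using the **Access control** page, here's a rough Python sketch. The role definition GUID for **Cognitive Services User** and the `azure-mgmt-authorization` 4.x model shape are assumptions to verify; all resource names are placeholders.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # placeholder

# Scope is the Azure AI services resource that hosts Azure AI Vision (placeholder path).
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<account-name>"
)

# Built-in "Cognitive Services User" role definition ID (assumed; confirm with
# `az role definition list --name "Cognitive Services User"` or in the portal).
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/a97b65f3-24c7-4388-baec-2e87135dc908"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<search-service-managed-identity-object-id>",
        principal_type="ServicePrincipal",
    ),
)
```
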
### [Azure AI Foundry model catalog](#tab/model-catalog)

@@ -331,7 +331,7 @@ Chunking is built in and nonconfigurable. The effective settings are:

1. Specify whether you want your search service to authenticate using an API key or managed identity.

- + The identity should have a **Cognitive Services OpenAI User** role on the Azure AI multi-services account.
+ + The identity should have a **Cognitive Services User** role on the Azure AI multi-services account.

1. Select the checkbox that acknowledges the billing effects of using these resources.

4 changes: 3 additions & 1 deletion articles/search/search-get-started-text.md
@@ -14,13 +14,15 @@ ms.custom:
- devx-track-python
- ignite-2023
ms.topic: quickstart
- ms.date: 11/01/2024
+ ms.date: 01/07/2025
---

# Quickstart: Full text search using the Azure SDKs

Learn how to use the *Azure.Search.Documents* client library in an Azure SDK to create, load, and query a search index using sample data for [full text search](search-lucene-query-architecture.md). Full text search uses Apache Lucene for indexing and queries, and a BM25 ranking algorithm for scoring results.

This quickstart creates and queries a small hotels-quickstart index containing data about 4 hotels.
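
To give a feel for the query side before picking a specific SDK, here's a minimal Python sketch against such an index; the endpoint, API key, and field names are placeholders patterned on the hotels sample.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholders: your search endpoint and a query API key.
search_client = SearchClient(
    endpoint="https://<service-name>.search.windows.net",
    index_name="hotels-quickstart",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Full text search: query terms are analyzed by Lucene and results are scored with BM25.
results = search_client.search(search_text="wifi", include_total_count=True)
print("Documents matched:", results.get_count())
for doc in results:
    print(doc["HotelName"], doc["@search.score"])
```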

This quickstart has steps for the following SDKs:

+ [Azure SDK for .NET](?tabs=dotnet#create-load-and-query-an-index)
4 changes: 2 additions & 2 deletions articles/search/search-limits-quotas-capacity.md
@@ -8,7 +8,7 @@ author: HeidiSteen
ms.author: heidist
ms.service: azure-ai-search
ms.topic: conceptual
- ms.date: 12/09/2024
+ ms.date: 01/07/2025
ms.custom:
- references_regions
- build-2024
@@ -69,7 +69,7 @@ Maximum number of documents per index are:
+ 288 billion on L1
+ 576 billion on L2

- Each instance of a complex collection counts as a separate document in terms of these limits.
+ You can check the number of documents in the Azure portal and through REST calls that include `search=*` and `count=true`.
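
From the Python SDK, the same check looks like this (a sketch; service, index, and API key are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service-name>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Match every document (search=*) and ask the service to return the total count.
results = client.search(search_text="*", include_total_count=True)
print("Documents in index:", results.get_count())
```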

Maximum size of each document is approximately 16 megabytes. Document size is actually a limit on the size of the indexing API request payload, which is 16 megabytes. That payload can be a single document, or a batch of documents. For a batch with a single document, the maximum document size is 16 MB of JSON.

