diff --git a/articles/ai-services/document-intelligence/language-support/prebuilt.md b/articles/ai-services/document-intelligence/language-support/prebuilt.md
index 9102bddafe2..47f92bd83b4 100644
--- a/articles/ai-services/document-intelligence/language-support/prebuilt.md
+++ b/articles/ai-services/document-intelligence/language-support/prebuilt.md
@@ -207,7 +207,7 @@ Azure AI Document Intelligence models provide multilingual document processing s
| • Spanish (`es`) |Spain (`es`)|
| • Swedish (`sv`) | Sweden (`se`)|
| • Thai (`th`) | Thailand (`th`)|
-| • Turkish (`tr`) | Turkey (`tr`)|
+| • Turkish (`tr`) | Türkiye (`tr`)|
| • Ukrainian (`uk`) | Ukraine (`uk`)|
| • Vietnamese (`vi`) | Vietnam (`vi`)|
diff --git a/articles/ai-services/language-service/language-detection/language-support.md b/articles/ai-services/language-service/language-detection/language-support.md
index 0cfcd136d5f..48ffee981b9 100644
--- a/articles/ai-services/language-service/language-detection/language-support.md
+++ b/articles/ai-services/language-service/language-detection/language-support.md
@@ -187,7 +187,7 @@ If you have content expressed in a less frequently used language, you can try La
| Kannada | `kn` | `Latn`, `Knda` |
| Malayalam | `ml` | `Latn`, `Mlym` |
| Marathi | `mr` | `Latn`, `Deva` |
-| Oriya | `or` | `Latn`, `Orya` |
+| Odia | `or` | `Latn`, `Orya` |
| Punjabi | `pa` | `Latn`, `Guru` |
| Tamil | `ta` | `Latn`, `Taml` |
| Telugu | `te` | `Latn`, `Telu` |
diff --git a/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md b/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
index 6f17da1abdf..be151b9f97e 100644
--- a/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
+++ b/articles/ai-services/metrics-advisor/how-tos/onboard-your-data.md
@@ -115,7 +115,7 @@ Consider the following scenarios:
* *"My data has already rolled up and the dimension value is represented by: NULL or Empty (Default), NULL only, Others."*
- This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the below example will be seen as an aggregation of all countries and language *EN-US*; the fourth data row which has an empty value for *Country* however will be seen as an ordinary row which might indicate incomplete data.
+ This option means Metrics Advisor doesn't need to roll up the data because the rows are already summed. For example, if you select *NULL only*, then the second data row in the example below will be seen as an aggregation of all countries/regions and language *EN-US*; the fourth data row, which has an empty value for *Country*, will however be seen as an ordinary row that might indicate incomplete data.
| Country/Region | Language | Income |
|---------|----------|--------|
diff --git a/articles/ai-services/openai/assistants-reference.md b/articles/ai-services/openai/assistants-reference.md
index 077596d72a3..245d6e67eef 100644
--- a/articles/ai-services/openai/assistants-reference.md
+++ b/articles/ai-services/openai/assistants-reference.md
@@ -42,6 +42,62 @@ Create an assistant with a model and instructions.
| response_format | string or object | Optional | Specifies the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting this parameter to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON. Importantly, when using JSON mode, you must also instruct the model to produce JSON yourself using a system or user message. Without this instruction, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Additionally, the message content may be partially cut off if you use `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length. |
| tool_resources | object | Optional | A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs. |
+### response_format types
+
+**string**
+
+`auto` is the default value.
+
+**object**
+
+Possible `type` values: `text`, `json_object`, `json_schema`.
+
+***json_schema***
+
+| Name | Type | Description | Default | Required/Optional |
+|--- |--- |--- |--- |--- |
+| `description` | string | A description of what the response format is for, used by the model to determine how to respond in the format. | | Optional |
+| `name` | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | | Required |
+| `schema` | object | The schema for the response format, described as a JSON Schema object. | | Optional |
+| `strict` | boolean or null | Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`. | false | Optional |
+
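+
+The `json_schema` variant above can be sketched as a request fragment. The following is a minimal, hypothetical sketch in Python: the field names follow the table above, while the `contact_card` schema and its properties are invented for illustration.
+
+```python
+# Hypothetical sketch of a `response_format` payload requesting structured JSON
+# output. The "contact_card" schema is illustrative, not part of the API.
+import json
+
+response_format = {
+    "type": "json_schema",
+    "json_schema": {
+        # Required; a-z, A-Z, 0-9, underscores and dashes, max length 64.
+        "name": "contact_card",
+        "description": "A contact record extracted from free text.",
+        # A standard JSON Schema object describing the expected output.
+        "schema": {
+            "type": "object",
+            "properties": {
+                "name": {"type": "string"},
+                "email": {"type": "string"},
+            },
+            "required": ["name", "email"],
+            "additionalProperties": False,
+        },
+        # Enforce exact schema adherence (only a subset of JSON Schema is
+        # supported when strict is true).
+        "strict": True,
+    },
+}
+
+# The payload serializes to plain JSON for the request body.
+payload_fragment = json.dumps({"response_format": response_format})
+```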
+### tool_resources properties
+
+**code_interpreter**
+
+| Name | Type | Description | Default |
+|--- |--- |--- |--- |
+| `file_ids` | array | A list of file IDs made available to the code_interpreter tool. There can be a maximum of 20 files associated with the tool. | `[]` |
+
+**file_search**
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `vector_store_ids` | array | The vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant. | Optional |
+| `vector_stores` | array | A helper to create a vector store with file_ids and attach it to this assistant. There can be a maximum of 1 vector store attached to the assistant. | Optional |
+
+***vector_stores***
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `file_ids` | array | A list of file IDs to add to the vector store. There can be a maximum of 10000 files in a vector store. | Optional |
+| `chunking_strategy` | object | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. | Optional |
+| `metadata` | map | Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. | Optional |
+
+***chunking_strategy***
+
+| Name | Type | Description | Required/optional |
+|--- |--- |--- |---|
+| `Auto Chunking Strategy` | object | The default strategy. This strategy currently uses a `max_chunk_size_tokens` of `800` and `chunk_overlap_tokens` of `400`. The `type` is always `auto`. | Required |
+| `Static Chunking Strategy` | object | Chunks the file by using the sizes you specify. The `type` is always `static`. | Required |
+
+***Static Chunking Strategy***
+
+| Name | Type | Description | Required/Optional |
+|--- |--- |--- |--- |
+| `max_chunk_size_tokens` | integer | The maximum number of tokens in each chunk. The default value is `800`. The minimum value is `100` and the maximum value is `4096`. | Required |
+| `chunk_overlap_tokens` | integer | The number of tokens that overlap between chunks. The default value is `400`. Note that the overlap must not exceed half of `max_chunk_size_tokens`. | Required |
+
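+
+The `tool_resources` and chunking tables above can be combined into one request fragment. The following is a minimal, hypothetical sketch in Python; the file IDs are placeholders, and the field names follow the tables above.
+
+```python
+# Hypothetical sketch of a `tool_resources` payload: files for code_interpreter,
+# plus a vector store for file_search with a static chunking strategy.
+tool_resources = {
+    "code_interpreter": {
+        "file_ids": ["assistant-file-1"],          # max 20 files for the tool
+    },
+    "file_search": {
+        "vector_stores": [                         # max 1 vector store
+            {
+                "file_ids": ["assistant-file-2"],  # max 10000 files per store
+                "chunking_strategy": {
+                    "type": "static",
+                    "static": {
+                        "max_chunk_size_tokens": 800,  # 100 to 4096
+                        "chunk_overlap_tokens": 400,   # at most half of max_chunk_size_tokens
+                    },
+                },
+            }
+        ],
+    },
+}
+
+# Check the documented invariant: overlap must not exceed half the chunk size.
+static = tool_resources["file_search"]["vector_stores"][0]["chunking_strategy"]["static"]
+assert static["chunk_overlap_tokens"] <= static["max_chunk_size_tokens"] // 2
+```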
### Returns
An [assistant](#assistant-object) object.
diff --git a/articles/ai-services/openai/concepts/models.md b/articles/ai-services/openai/concepts/models.md
index 76f06fd00c7..2510736e0cf 100644
--- a/articles/ai-services/openai/concepts/models.md
+++ b/articles/ai-services/openai/concepts/models.md
@@ -34,7 +34,7 @@ The Azure OpenAI `o1` and `o1-mini` models are specifically designed to tackle r
| Model ID | Description | Max Request (tokens) | Training Data (up to) |
| --- | :--- |:--- |:---: |
-| `o1` (2024-12-17) | The most capable model in the o1 series, offering enhanced reasoning abilities. <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> | Input: 200,000 <br> Output: 100,000 | |
+| `o1` (2024-12-17) | The most capable model in the o1 series, offering [enhanced reasoning abilities](../how-to/reasoning.md). <br> - Structured outputs<br> - Text, image processing <br> - Functions/Tools <br> **Request access: [limited access model application](https://aka.ms/OAI/o1access)** | Input: 200,000 <br> Output: 100,000 | Oct 2023 |
 |`o1-preview` (2024-09-12) | Older preview version | Input: 128,000 <br> Output: 32,768 | Oct 2023 |
 | `o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |
diff --git a/articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md b/articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md
index be3aa243c5b..0ca91cc694b 100644
--- a/articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md
+++ b/articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md
@@ -166,7 +166,7 @@ Added support and general availability for new voices in the following locales:
| Locale (BCP-47) | Language | Text to speech voices |
| ----- | ----- | ----- |
 | `as-IN` | Assamese (India) | `as-IN-YashicaNeural` (Female)<br>`as-IN-PriyomNeural` (Male) |
-| `or-IN` | Oriya (India) | `or-IN-SubhasiniNeural` (Female)<br>`or-IN-SukantNeural` (Male) |
+| `or-IN` | Odia (India) | `or-IN-SubhasiniNeural` (Female)<br>`or-IN-SukantNeural` (Male) |
 | `pa-IN` | Punjabi (India) | `pa-IN-OjasNeural` (Male)<br>`pa-IN-VaaniNeural` (Female) |
The one voice in this table is generally available and supports only the 'en-IN' locale.
@@ -293,7 +293,7 @@ Text to speech avatar is now generally available. For more information, see [tex
| `pt-PT`| Portuguese (Portugal)|
| `sv-SE`| Swedish (Sweden)|
| `th-TH`| Thai (Thailand)|
- | `tr-TR`| Turkish (Turkey)|
+ | `tr-TR`| Turkish (Türkiye)|
| `zh-CN`| Chinese (Mandarin, Simplified)|
| `zh-HK`| Chinese (Cantonese, Traditional)|
| `zh-TW`| Chinese (Taiwanese Mandarin, Traditional)|
@@ -306,8 +306,8 @@ Text to speech avatar is now generally available. For more information, see [tex
| Locale | Language | Text to speech voices |
|--------|-----------------|-------------------------|
- | `or-IN` | Oriya (India) | `or-IN-SubhasiniNeural` (Female) |
- | `or-IN` | Oriya (India) | `or-IN-SukantNeural` (Male) |
+ | `or-IN` | Odia (India) | `or-IN-SubhasiniNeural` (Female) |
+ | `or-IN` | Odia (India) | `or-IN-SukantNeural` (Male) |
| `pa-IN` | Punjabi (India) | `pa-IN-VaaniNeural` (Female) |
| `pa-IN` | Punjabi (India) | `pa-IN-OjasNeural` (Male) |
| `as-IN` | Assamese (India)| `as-IN-YashicaNeural` (Female) |
diff --git a/articles/ai-studio/how-to/deploy-models-cohere-command.md b/articles/ai-studio/how-to/deploy-models-cohere-command.md
index 6c09fae2245..ceea939ba96 100644
--- a/articles/ai-studio/how-to/deploy-models-cohere-command.md
+++ b/articles/ai-studio/how-to/deploy-models-cohere-command.md
@@ -2129,7 +2129,7 @@ For more examples of how to use Cohere models, see the following examples and tu
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| Web requests | Bash | [Command-R](https://aka.ms/samples/cohere-command-r/webrequests) - [Command-R+](https://aka.ms/samples/cohere-command-r-plus/webrequests) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/samples/cohere-command/openaisdk) |
| LangChain | Python | [Link](https://aka.ms/samples/cohere/langchain) |
diff --git a/articles/ai-studio/how-to/deploy-models-cohere-embed.md b/articles/ai-studio/how-to/deploy-models-cohere-embed.md
index 30f68747857..e73a1174a75 100644
--- a/articles/ai-studio/how-to/deploy-models-cohere-embed.md
+++ b/articles/ai-studio/how-to/deploy-models-cohere-embed.md
@@ -631,7 +631,7 @@ Cohere Embed V3 models can optimize the embeddings based on its use case.
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| Web requests | Bash | [cohere-embed.ipynb](https://aka.ms/samples/embed-v3/webrequests) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/samples/cohere-embed/openaisdk) |
| LangChain | Python | [Link](https://aka.ms/samples/cohere-embed/langchain) |
diff --git a/articles/ai-studio/how-to/deploy-models-jais.md b/articles/ai-studio/how-to/deploy-models-jais.md
index 78ba9347b91..5351c77761b 100644
--- a/articles/ai-studio/how-to/deploy-models-jais.md
+++ b/articles/ai-studio/how-to/deploy-models-jais.md
@@ -1169,7 +1169,7 @@ For more examples of how to use Jais models, see the following examples and tuto
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
## Cost and quota considerations for Jais models deployed as serverless API endpoints
diff --git a/articles/ai-studio/how-to/deploy-models-mistral-nemo.md b/articles/ai-studio/how-to/deploy-models-mistral-nemo.md
index 2c1b903cfb1..509862a493a 100644
--- a/articles/ai-studio/how-to/deploy-models-mistral-nemo.md
+++ b/articles/ai-studio/how-to/deploy-models-mistral-nemo.md
@@ -2016,7 +2016,7 @@ For more examples of how to use Mistral models, see the following examples and t
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/mistral-large/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/mistral-large/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/mistral-large/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-mistral-open.md b/articles/ai-studio/how-to/deploy-models-mistral-open.md
index 8537e2addc4..42a89eafabf 100644
--- a/articles/ai-studio/how-to/deploy-models-mistral-open.md
+++ b/articles/ai-studio/how-to/deploy-models-mistral-open.md
@@ -1285,7 +1285,7 @@ For more examples of how to use Mistral models, see the following examples and t
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/mistral-large/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/mistral-large/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/mistral-large/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-mistral.md b/articles/ai-studio/how-to/deploy-models-mistral.md
index cc1a4720044..d23acbbcc60 100644
--- a/articles/ai-studio/how-to/deploy-models-mistral.md
+++ b/articles/ai-studio/how-to/deploy-models-mistral.md
@@ -2216,7 +2216,7 @@ For more examples of how to use Mistral models, see the following examples and t
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/mistral-large/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/mistral-large/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/mistral-large/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md b/articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md
index 8fcc4dc9d6c..1506b03ffe7 100644
--- a/articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md
+++ b/articles/ai-studio/how-to/deploy-models-phi-3-5-vision.md
@@ -1614,7 +1614,7 @@ For more examples of how to use Phi-3 family models, see the following examples
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/phi-3/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/phi-3/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/phi-3/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-phi-3-vision.md b/articles/ai-studio/how-to/deploy-models-phi-3-vision.md
index aff866a1133..1e171c69b2f 100644
--- a/articles/ai-studio/how-to/deploy-models-phi-3-vision.md
+++ b/articles/ai-studio/how-to/deploy-models-phi-3-vision.md
@@ -1405,7 +1405,7 @@ For more examples of how to use Phi-3 family models, see the following examples
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/phi-3/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/phi-3/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/phi-3/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-phi-3.md b/articles/ai-studio/how-to/deploy-models-phi-3.md
index 1afc015c5d3..15a0fbaabe6 100644
--- a/articles/ai-studio/how-to/deploy-models-phi-3.md
+++ b/articles/ai-studio/how-to/deploy-models-phi-3.md
@@ -1458,7 +1458,7 @@ For more examples of how to use Phi-3 family models, see the following examples
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/phi-3/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/phi-3/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/phi-3/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-phi-4.md b/articles/ai-studio/how-to/deploy-models-phi-4.md
index 93214056843..a7e4bc88a64 100644
--- a/articles/ai-studio/how-to/deploy-models-phi-4.md
+++ b/articles/ai-studio/how-to/deploy-models-phi-4.md
@@ -1117,7 +1117,7 @@ For more examples of how to use Phi-4 family models, see the following examples
| Description | Language | Sample |
|-------------------------------------------|-------------------|-----------------------------------------------------------------|
| CURL request | Bash | [Link](https://aka.ms/phi-3/webrequests-sample) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/phi-3/webrequests-sample) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/phi-3/openaisdk) |
diff --git a/articles/ai-studio/how-to/deploy-models-serverless-availability.md b/articles/ai-studio/how-to/deploy-models-serverless-availability.md
index 84491160501..0a61ed49344 100644
--- a/articles/ai-studio/how-to/deploy-models-serverless-availability.md
+++ b/articles/ai-studio/how-to/deploy-models-serverless-availability.md
@@ -26,7 +26,7 @@ Certain models in the model catalog can be deployed as a serverless API with pay
## Region availability
-Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Hub/Project Region" columns in the following tables).
+Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country/region where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Hub/Project Region" columns in the following tables).
[!INCLUDE [region-availability-maas](../includes/region-availability-maas.md)]
diff --git a/articles/ai-studio/how-to/model-catalog-overview.md b/articles/ai-studio/how-to/model-catalog-overview.md
index 473c7ba67b9..a254bc7e27a 100644
--- a/articles/ai-studio/how-to/model-catalog-overview.md
+++ b/articles/ai-studio/how-to/model-catalog-overview.md
@@ -157,7 +157,7 @@ In Azure AI Foundry portal, you can use vector indexes and retrieval-augmented g
### Regional availability of offers and models
-Pay-per-token billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a project resource in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints | Azure AI Foundry](deploy-models-serverless-availability.md) for detailed information.
+Pay-per-token billing is available only to users whose Azure subscription belongs to a billing account in a country/region where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a project resource in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints | Azure AI Foundry](deploy-models-serverless-availability.md) for detailed information.
### Content safety for models deployed via serverless APIs
diff --git a/articles/ai-studio/includes/region-availability-maas.md b/articles/ai-studio/includes/region-availability-maas.md
index d0218298cb2..ea71c550df5 100644
--- a/articles/ai-studio/includes/region-availability-maas.md
+++ b/articles/ai-studio/includes/region-availability-maas.md
@@ -15,31 +15,31 @@ ms.custom: include, references_regions
|Model |Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
|---------|---------|---------|---------|
-Cohere Command R+ 08-2024 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Command R 08-2024 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Command R+ | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Command R | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Rerank v3 - English | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Rerank v3 - Multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Embed v3 - English | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Cohere Embed v3 - Multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Command R+ 08-2024 | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Command R 08-2024 | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Command R+ | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Command R | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Rerank v3 - English | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Rerank v3 - Multilingual | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Embed v3 - English | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Cohere Embed v3 - Multilingual | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Japan<br>Qatar |East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
### JAIS models
|Model |Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
|---------|---------|---------|---------|
-JAIS 30B Chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Egypt | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+JAIS 30B Chat | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)<br>Egypt | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
### Meta Llama models
|Model |Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
|---------|---------|---------|---------|
-Llama 2 7B<br>Llama 2 13B<br>Llama 2 70B | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
-Llama 2 7B Chat<br>Llama 2 70B Chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
-Llama 3 8B Instruct<br>Llama 3 70B Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
-Llama 3.1 8B Instruct<br>Llama 3.1 70B Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
-Llama 3.1 405B Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | Not available |
+Llama 2 7B<br>Llama 2 13B<br>Llama 2 70B | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
+Llama 2 7B Chat<br>Llama 2 70B Chat | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
+Llama 3 8B Instruct<br>Llama 3 70B Instruct | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>Sweden Central<br>West US<br>West US 3 | Not available |
+Llama 3.1 8B Instruct<br>Llama 3.1 70B Instruct | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | West US 3 |
+Llama 3.1 405B Instruct | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US<br>East US 2<br>North Central US<br>South Central US<br>West US<br>West US 3 | Not available |
### Microsoft Phi-3 family models
@@ -57,10 +57,10 @@ Phi-3-Medium-4K-Instruct
Phi-3-Medium-128K-Instruct | Not applicable | E
|Model |Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
|---------|---------|---------|---------|
-Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
-Ministral-3B | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
-Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
-Mistral Large <br> Mistral-Large (2407) <br> Mistral-Large (2411) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Mistral Nemo | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong SAR <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Ministral-3B | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong SAR <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Mistral Small | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong SAR <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Mistral Large <br> Mistral-Large (2407) <br> Mistral-Large (2411) | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong SAR <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
@@ -68,4 +68,4 @@ Mistral Large
Mistral-Large (2407)
Mistral-Large (2411) | [Microso
|Model |Offer Availability Region | Hub/Project Region for Deployment | Hub/Project Region for Fine tuning |
|---------|---------|---------|---------|
-TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+TimeGEN-1 | [Microsoft Managed countries/regions](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
diff --git a/articles/machine-learning/concept-endpoint-serverless-availability.md b/articles/machine-learning/concept-endpoint-serverless-availability.md
index 3d3e328478f..1eb32c2e4b8 100644
--- a/articles/machine-learning/concept-endpoint-serverless-availability.md
+++ b/articles/machine-learning/concept-endpoint-serverless-availability.md
@@ -25,7 +25,7 @@ Certain models in the model catalog can be deployed as a serverless API with pay
## Region availability
-Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Hub/Project Region" columns in the following tables).
+Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country/region where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Hub/Project Region" columns in the following tables).
[!INCLUDE [region-availability-maas](../ai-studio/includes/region-availability-maas.md)]
diff --git a/articles/machine-learning/concept-model-catalog.md b/articles/machine-learning/concept-model-catalog.md
index 7c0d8434827..8c544c1249b 100644
--- a/articles/machine-learning/concept-model-catalog.md
+++ b/articles/machine-learning/concept-model-catalog.md
@@ -134,7 +134,7 @@ Azure AI Foundry enables users to make use of Vector Indexes and Retrieval Augme
### Regional availability of offers and models
-PPay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) for detailed information.
+Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country/region where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) for detailed information.
diff --git a/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md b/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
index d5f5ad2ea7f..d3598f7457c 100644
--- a/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
+++ b/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
@@ -25,7 +25,7 @@ For no-code-deployment, Azure Machine Learning:
* Dynamically installs Python packages provided in the `conda.yaml` file. Hence, dependencies get installed during container runtime.
* Provides an MLflow base image/curated environment that contains the following items:
* [`azureml-inference-server-http`](how-to-inference-server-http.md)
- * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
+ * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.md)
* A scoring script for inferencing.
[!INCLUDE [mlflow-model-package-for-workspace-without-egress](includes/mlflow-model-package-for-workspace-without-egress.md)]
diff --git a/articles/machine-learning/how-to-deploy-mlflow-models.md b/articles/machine-learning/how-to-deploy-mlflow-models.md
index 09c5a14ee9c..1cd5d70489c 100644
--- a/articles/machine-learning/how-to-deploy-mlflow-models.md
+++ b/articles/machine-learning/how-to-deploy-mlflow-models.md
@@ -29,7 +29,7 @@ For no-code deployment, Azure Machine Learning:
- Ensures that all the package dependencies indicated in the MLflow model are satisfied.
- Provides an MLflow base image or curated environment that contains the following items:
- - Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst).
+ - Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.md).
- A scoring script to perform inference.
[!INCLUDE [mlflow-model-package-for-workspace-without-egress](includes/mlflow-model-package-for-workspace-without-egress.md)]
diff --git a/articles/machine-learning/how-to-use-mlflow-configure-tracking.md b/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
index ec04ab67fe9..9690b9a267b 100644
--- a/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
+++ b/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
@@ -35,7 +35,7 @@ However, if you work outside Azure Machine Learning, you need to configure MLflo
```
> [!TIP]
- > Instead of `mlflow`, consider using [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst). This package is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It's recommended for users who primarily need MLflow tracking and logging capabilities but don't want to import the full suite of features, including deployments.
+ > Instead of `mlflow`, consider using [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.md). This package is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. It's recommended for users who primarily need MLflow tracking and logging capabilities but don't want to import the full suite of features, including deployments.
- An Azure Machine Learning workspace. To create a workspace, see [Create resources you need to get started](quickstart-create-resources.md).
diff --git a/articles/machine-learning/includes/machine-learning-mlflow-prereqs.md b/articles/machine-learning/includes/machine-learning-mlflow-prereqs.md
index a8ecd54c60e..db78b4703bf 100644
--- a/articles/machine-learning/includes/machine-learning-mlflow-prereqs.md
+++ b/articles/machine-learning/includes/machine-learning-mlflow-prereqs.md
@@ -13,7 +13,7 @@ ms.author: fasantia
```
> [!TIP]
- > You can use the [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) package, which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This package is recommended for users who primarily need the MLflow tracking and logging capabilities without importing the full suite of features, including deployments.
+ > You can use the [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.md) package, which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This package is recommended for users who primarily need the MLflow tracking and logging capabilities without importing the full suite of features, including deployments.
- Create an Azure Machine Learning workspace. To create a workspace, see [Create resources you need to get started](../quickstart-create-resources.md). Review the [access permissions](../how-to-assign-roles.md#mlflow-operations) you need to perform your MLflow operations in your workspace.
diff --git a/articles/machine-learning/v1/azure-machine-learning-release-notes.md b/articles/machine-learning/v1/azure-machine-learning-release-notes.md
index 6ed1dbda006..bbeb1455819 100644
--- a/articles/machine-learning/v1/azure-machine-learning-release-notes.md
+++ b/articles/machine-learning/v1/azure-machine-learning-release-notes.md
@@ -9,16 +9,21 @@ ms.custom: UpdateFrequency5, devx-track-python
ms.topic: reference
ms.author: larryfr
author: BlackMist
-ms.date: 01/29/2024
+ms.date: 11/12/2024
---
# Azure Machine Learning Python SDK release notes
-In this article, learn about Azure Machine Learning Python SDK releases. For the full SDK reference content, visit the Azure Machine Learning's [**main SDK for Python**](/python/api/overview/azure/ml/intro) reference page.
+In this article, learn about Azure Machine Learning Python SDK releases. For the full SDK reference content, visit the Azure Machine Learning [**main SDK for Python**](/python/api/overview/azure/ml/intro) reference page.
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader:
`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2024-11-12
+### Azure Machine Learning SDK for Python v1.59.0
+Support for ONNX 1.17.0 to address a security vulnerability.
+Fixed the mlflow model conda definition for the ai-ml-automl-gpu environment, resolving model deployment issues.
+
## 2024-10-18
### Azure Machine Learning SDK for Python v1.58.0
Support for Python 3.11
@@ -51,14 +56,14 @@ AutoML supports scikit-learn version 1.5.1
## 2023-11-13
+ **azureml-automl-core, azureml-automl-runtime, azureml-contrib-automl-dnn-forecasting, azureml-train-automl-client, azureml-train-automl-runtime, azureml-training-tabular**
- + statsmodels, pandas and scipy were upgraded to versions 1.13, 1.3.5 and 1.10.1 - fbprophet 0.7.1 was replaced by prophet 1.1.4 When loading a model in a local environment, the versions of these packages should match what the model was trained on.
+ + statsmodels, pandas, and scipy were upgraded to versions 1.13, 1.3.5, and 1.10.1, and fbprophet 0.7.1 was replaced by prophet 1.1.4. When loading a model in a local environment, the versions of these packages should match what the model was trained on.
+ **azureml-core, azureml-pipeline-core, azureml-pipeline-steps**
+ AzureML-Pipeline - Add a warning for the `init_scripts` parameter in the Databricks step, alerting you to its upcoming deprecation.
+ **azureml-interpret**
+ updated azureml-interpret package to interpret-community 0.30.*
+ **azureml-mlflow**
+ feat: Add `AZUREML_BLOB_MAX_SINGLE_PUT_SIZE` to control the size in bytes of upload chunks. Lowering this from the default (`64*1024*1024`, that is, 64 MB) can remedy issues where write operations fail due to timeouts.
- + Support for uploading and downloading models from AzureML registries is currently experimental
+ + Support for uploading and downloading models from Azure Machine Learning registries is currently experimental
+ Adding support for users who want to download or upload models from Azure Machine Learning registries
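The `AZUREML_BLOB_MAX_SINGLE_PUT_SIZE` knob described above is an environment variable. A minimal sketch of lowering it follows; the 16 MB value is an arbitrary illustration, not a recommendation:

```python
import os

# Lower the blob upload chunk size from the 64 MB default (64*1024*1024)
# to 16 MB; smaller chunks can remedy write operations that fail due to
# timeouts. Set this before the azureml-mlflow upload code runs.
chunk_bytes = 16 * 1024 * 1024
os.environ["AZUREML_BLOB_MAX_SINGLE_PUT_SIZE"] = str(chunk_bytes)
```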
## 2023-08-21
@@ -67,15 +72,15 @@ AutoML supports scikit-learn version 1.5.1
+ **azureml-automl-core**
+ Support of features/regressors known at the time of forecast in AutoML forecasting TCN models.
+ **azureml-automl-dnn-vision**
- + Enable flags for log_training_metrics and log_validation_loss for automl object detection and instance segmentation
+ + Enable flags for log_training_metrics and log_validation_loss for AutoML object detection and instance segmentation
+ **azureml-contrib-automl-dnn-forecasting**
+ Support of features/regressors known at the time of forecast in AutoML forecasting TCN models.
+ **azureml-core**
- + [Python 3.7 reached end-of-life on June 27th 2023](https://devguide.python.org/versions/). Consequently, 3.7 will be deprecated in azureml-core starting in October 2023 and azureml-core will end support for 3.7 in February 2024.
+ + [Python 3.7 reached end-of-life on June 27, 2023](https://devguide.python.org/versions/). So, 3.7 will be deprecated in azureml-core starting in October 2023 and azureml-core will end support for 3.7 in February 2024.
+ **azureml-mlflow**
- + Fix for loading models with MLflow load_model APIs when passing an AzureML URI
+ + Fix for loading models with MLflow load_model APIs when passing an Azure Machine Learning URI
+ **azureml-pipeline-core**
- + Skip child run and log error when load child run failed (e.g. 404) using `PipelineRun.get_pipeline_runs`.
+ + Skip the child run and log an error when loading a child run fails (for example, 404) using `PipelineRun.get_pipeline_runs`.
+ `PipelineEndpoint.list` introduces a new int parameter `max_results`, which indicates the maximum size of the returned list. The default value of `max_results` is 100.
+ **azureml-training-tabular**
+ Support of features/regressors known at the time of forecast in AutoML forecasting TCN models.
@@ -84,7 +89,7 @@ AutoML supports scikit-learn version 1.5.1
### Azure Machine Learning SDK for Python v1.52.0
+ **azureml-automl-dnn-vision**
- + The mlflow signature for the runtime (legacy) automl models has changed to accept binary inputs. This enables batch inferencing. The predict function is backwards compatible so users can still send base64 strings as input. The output from the predict function has changed to remove the temporary file name and the empty visualizations and attributions key when model explainability is n...
+ + The mlflow signature for the runtime (legacy) AutoML models has changed to accept binary inputs. This enables batch inferencing. The predict function is backwards compatible so users can still send base64 strings as input. The output from the predict function has changed to remove the temporary file name and the empty visualizations and attributions key when model explainability is n...
+ **azureml-contrib-automl-dnn-forecasting**
+ Fixed a bug that caused failures during distributed TCN training when the data consists of a single time series.
+ **azureml-interpret**
@@ -139,7 +144,7 @@ AutoML supports scikit-learn version 1.5.1
+ As of the deprecation date of December 4, 2023, the Azure Machine Learning SDK v1 packages will no longer receive security patches and other updates for the Python 3.7 runtime.
+ The current Python 3.7 versions for Azure Machine Learning SDK v1 still function. However, to continue receiving security updates and remain qualified for technical assistance, Azure Machine Learning strongly advises that you move your scripts and dependencies to a supported version of the Python runtime.
+ As a runtime for Azure Machine Learning SDK v1 files, we advise using Python version 3.8 or later.
- + Additionally, Python 3.7 based Azure Machine Learning SDK v1 packages no longer qualifies for technical assistance.
+ + Additionally, Python 3.7 based Azure Machine Learning SDK v1 packages no longer qualify for technical assistance.
+ Use Azure Machine Learning support to get in touch with us if you have any concerns.
## 2023-13-02
@@ -154,9 +159,9 @@ AutoML supports scikit-learn version 1.5.1
+ **azureml-automl-dnn-nlp**
+ Logs to show the final values applied to the model and hyperparameter settings based on both the default values and the user-specified ones.
+ **azureml-contrib-automl-dnn-forecasting**
- + Nonscalar metrics for TCNForecaster now reflects values from the last epoch.
+ + Nonscalar metrics for TCNForecaster now reflect values from the last epoch.
+ Forecast horizon visuals for train-set and test-set are now available while running the TCN training experiment.
- + Runs won't fail anymore because of "Failed to calculate TCN metrics" error. The warning message that says "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead we raise exception when we face inf/nan validation loss for more than two times consecutively with a message "Invalid Model, TCN training didn't converge.". The customers need be aware of the fact that loaded models may return nan/inf values as predictions while inferencing after this change.
+ + Runs won't fail anymore because of the "Failed to calculate TCN metrics" error. The warning message "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead, an exception with the message "Invalid Model, TCN training didn't converge." is raised when inf/nan validation loss occurs more than two times consecutively. Customers need to be aware that loaded models may return nan/inf values as predictions while inferencing after this change.
+ **azureml-core**
+ Azure Machine Learning workspace creation makes use of Log Analytics Based Application Insights in preparation for deprecation of Classic Application Insights. Users wishing to use Classic Application Insights resources can still specify their own to bring when creating an Azure Machine Learning workspace.
+ **azureml-interpret**
@@ -302,11 +307,11 @@ AutoML supports scikit-learn version 1.5.1
+ **azureml-automl-dnn-nlp**
+ Remove duplicate labels column from multi-label predictions
+ **azureml-contrib-automl-pipeline-steps**
- + Many Models now provide the capability to generate prediction output in csv format as well. - Many Models predictions now includes column names in the output file in case of **csv** file format.
+ + Many Models now provides the capability to generate prediction output in csv format as well. Many Models predictions now include column names in the output file in the case of **csv** file format.
+ **azureml-core**
+ ADAL authentication is now deprecated and all authentication classes now use MSAL authentication. Install azure-cli>=2.30.0 to utilize MSAL based authentication when using AzureCliAuthentication class.
+ Added a fix to force environment registration when calling `Environment.build(workspace)`. The fix resolves confusion where the latest built environment was returned instead of the requested one when an environment is cloned or inherited from another instance.
- + SDK warning message to restart Compute Instance before May 31, 2022, if it was created before September 19, 2021
+ + SDK warning message to restart compute instances before May 31, 2022, if they were created before September 19, 2021
+ **azureml-interpret**
+ Updated azureml-interpret package to interpret-community 0.26.*
+ In the azureml-interpret package, add ability to get raw and engineered feature names from scoring explainer. Also, add example to the scoring notebook to get feature names from the scoring explainer and add documentation about raw and engineered feature names.
@@ -380,13 +385,13 @@ This breaking change comes from the June release of `azureml-inference-server-ht
### Azure Machine Learning SDK for Python v1.39.0
+ **azureml-automl-core**
+ Fix incorrect form displayed in PBI for integration with AutoML regression models
- + Adding min-label-classes check for both classification tasks (multi-class and multi-label). It throws an error for the customer's run if the unique number of classes in the input training dataset is fewer than 2. It is meaningless to run classification on fewer than two classes.
+ + Adding min-label-classes check for both classification tasks (multi-class and multi-label). It throws an error for the customer's run if the unique number of classes in the input training dataset is fewer than 2. It's meaningless to run classification on fewer than two classes.
+ **azureml-automl-runtime**
+ Converting decimal type y-test into float to allow for metrics computation to proceed without errors.
+ AutoML training now supports numpy version 1.8.
+ **azureml-contrib-automl-dnn-forecasting**
+ Fixed a bug in the TCNForecaster model where not all training data would be used when cross-validation settings were provided.
- + TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method would not use the most recent context data in train-valid scenarios.
+ + TCNForecaster wrapper's forecast method that was corrupting inference-time predictions. Also fixed an issue where the forecast method wouldn't use the most recent context data in train-valid scenarios.
+ **azureml-interpret**
+ For azureml-interpret package, remove shap pin with packaging update. Remove numba and numpy pin after CE env update.
+ **azureml-responsibleai**
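The min-label-classes check mentioned in v1.39.0 above amounts to the following guard, shown here as a plain-Python illustration of the rule rather than the SDK's actual code (`check_min_label_classes` is a hypothetical name):

```python
def check_min_label_classes(labels):
    """Raise if the training labels contain fewer than 2 unique classes,
    mirroring the AutoML min-label-classes validation: classification on
    fewer than two classes is meaningless."""
    n_classes = len(set(labels))
    if n_classes < 2:
        raise ValueError(
            f"Classification requires at least 2 unique classes; got {n_classes}."
        )
    return n_classes
```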
@@ -440,7 +445,7 @@ This breaking change comes from the June release of `azureml-inference-server-ht
+ Starting in version 1.37.0, Azure Machine Learning SDK uses MSAL as the underlying authentication library. MSAL uses Azure Active Directory (Azure AD) v2.0 authentication flow to provide more functionality and increases security for token cache. For more information, see [Overview of the Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
+ Update AML SDK dependencies to the latest version of Azure Resource Management Client Library for Python (azure-mgmt-resource>=15.0.0,<20.0.0) & adopt track2 SDK.
+ Starting in version 1.37.0, azure-ml-cli extension should be compatible with the latest version of Azure CLI >=2.30.0.
- + When using Azure CLI in a pipeline, like as Azure DevOps, ensure all tasks/stages are using versions of Azure CLI above v2.30.0 for MSAL-based Azure CLI. Azure CLI 2.30.0 is not backward compatible with prior versions and throws an error when using incompatible versions. To use Azure CLI credentials with Azure Machine Learning SDK, Azure CLI should be installed as pip package.
+ + When using Azure CLI in a pipeline, such as Azure DevOps, ensure all tasks/stages are using versions of Azure CLI above v2.30.0 for MSAL-based Azure CLI. Azure CLI 2.30.0 isn't backward compatible with prior versions and throws an error when using incompatible versions. To use Azure CLI credentials with Azure Machine Learning SDK, Azure CLI should be installed as a pip package.
+ **Bug fixes and improvements**
+ **azureml-core**
@@ -514,7 +519,7 @@ This breaking change comes from the June release of `azureml-inference-server-ht
+ Enable binary metrics calculation
+ **azureml-train-automl-runtime**
+ Add TCNForecaster support to model test runs.
- + Update the model test predictions.csv output format. The output columns now include the original target values and the features, which were passed in to the test run. This can be turned off by setting `test_include_predictions_only=True` in `AutoMLConfig` or by setting `include_predictions_only=True` in `ModelProxy.test()`. If the user has requested to only include predictions, then the output format looks like (forecasting is the same as regression): Classification => [predicted values] [probabilities] Regression => [predicted values] else (default): Classification => [original test data labels] [predicted values] [probabilities] [features] Regression => [original test data labels] [predicted values] [features] The `[predicted values]` column name = `[label column name] + "_predicted"`. The `[probabilities]` column names = `[class name] + "_predicted_proba"`. If no target column was passed in as input to the test run, then `[original test data labels]` will not be in the output.
+ + Update the model test predictions.csv output format. The output columns now include the original target values and the features, which were passed in to the test run. This can be turned off by setting `test_include_predictions_only=True` in `AutoMLConfig` or by setting `include_predictions_only=True` in `ModelProxy.test()`. If the user has requested to only include predictions, then the output format looks like (forecasting is the same as regression): Classification => [predicted values] [probabilities] Regression => [predicted values] else (default): Classification => [original test data labels] [predicted values] [probabilities] [features] Regression => [original test data labels] [predicted values] [features] The `[predicted values]` column name = `[label column name] + "_predicted"`. The `[probabilities]` column names = `[class name] + "_predicted_proba"`. If no target column was passed in as input to the test run, then `[original test data labels]` won't be in the output.
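The column-naming rules in the predictions.csv note above can be made concrete with a small sketch (plain Python, no SDK calls; the sample label column and class names are made up):

```python
label_column = "species"               # hypothetical target column name
class_names = ["setosa", "virginica"]  # hypothetical class names

# Per the release note: the predicted-values column is the label column
# name plus "_predicted", and each probability column is the class name
# plus "_predicted_proba".
predicted_col = label_column + "_predicted"
proba_cols = [name + "_predicted_proba" for name in class_names]

print(predicted_col)  # species_predicted
print(proba_cols)     # ['setosa_predicted_proba', 'virginica_predicted_proba']
```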
## 2021-09-07
@@ -741,7 +746,7 @@ The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-train-automl-client**
+ The STL featurizer for the forecasting task now uses a more robust seasonality detection based on the frequency of the time series.
+ **azureml-train-core**
- + Fixed bug where docker settings in Environment object are not respected.
+ + Fixed a bug where docker settings in the Environment object weren't respected.
+ Addition of new KubernetesCompute compute type to azureml-core.
@@ -1416,7 +1421,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Environment.get_image_details() return object type changed. `DockerImageDetails` class replaced `dict`, image details are available from the new class properties. Changes are backward compatible.
+ Fix bug for Environment.from_pip_requirements() to preserve dependencies structure
+ Fixed a bug where log_list would fail if an int and double were included in the same list.
- + While enabling private link on an existing workspace, note that if there are compute targets associated with the workspace, those targets won't work if they are not behind the same virtual network as the workspace private endpoint.
+ + While enabling private link on an existing workspace, note that if there are compute targets associated with the workspace, those targets won't work if they aren't behind the same virtual network as the workspace private endpoint.
+ Made `as_named_input` optional when using datasets in experiments and added `as_mount` and `as_download` to `FileDataset`. The input name is automatically generated if `as_mount` or `as_download` is called.
+ **azureml-automl-core**
+ Unhandled exceptions in AutoML now point to a known issues HTTP page, where more information about the errors can be found.
@@ -1432,10 +1437,10 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **Bug fixes and improvements**
+ **azureml-automl-core**
- + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it is automatically created.
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it doesn't already exist, it's automatically created.
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ **azureml-automl-runtime**
- + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it is automatically created.
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it doesn't already exist, it's automatically created.
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ AutoML Forecasting now supports rolling evaluation, which applies to the use case that the length of a test or validation set is longer than the input horizon, and known y_pred value is used as forecasting context.
+ **azureml-core**
@@ -1456,7 +1461,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Fixed issues related to removal of `RequirementParseError` from setuptools.
+ Use docker instead of conda for local runs submitted using "compute_target='local'"
+ The iteration duration printed to the console has been corrected. Previously, the iteration duration was sometimes printed as run end time minus run creation time. It has been corrected to equal run end time minus run start time.
- + When using AutoML, if a path is passed into the AutoMLConfig object and it does not already exist, it is automatically created.
+ + When using AutoML, if a path is passed into the AutoMLConfig object and it doesn't already exist, it's automatically created.
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter.
+ **azureml-train-automl-runtime**
+ Improved console output when best model explanations fail.
@@ -1488,7 +1493,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-contrib-mir**
+ Added support for enable-app-insights flag in ManagedInferencing
+ **azureml-core**
- + A validate parameter to these APIs by allowing validation to be skipped when the data source is not accessible from the current compute.
+ + Added a validate parameter to these APIs, allowing validation to be skipped when the data source isn't accessible from the current compute.
+ TabularDataset.time_before(end_time, include_boundary=True, validate=True)
+ TabularDataset.time_after(start_time, include_boundary=True, validate=True)
+ TabularDataset.time_recent(time_delta, include_boundary=True, validate=True)
@@ -1502,7 +1507,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-opendatasets**
+ Recently published Covid-19 tracking datasets are now available with the SDK
+ **azureml-pipeline-core**
- + Log out warning when "azureml-defaults" is not included as part of pip-dependency
+ + Log a warning when "azureml-defaults" isn't included as a pip dependency.
+ Improve Note rendering.
+ Added support for quoted line breaks when parsing delimited files to PipelineOutputFileDataset.
+ The PipelineDataset class is deprecated. For more information, see https://aka.ms/dataset-deprecation. Learn how to use dataset with pipeline, see https://aka.ms/pipeline-with-dataset.
@@ -1566,7 +1571,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-opendatasets**
+ Added support for Python 3.8
+ **azureml-pipeline-core**
- + Updated `PipelineDataset`'s documentation to make it clear it is an internal class.
+ + Updated `PipelineDataset`'s documentation to make it clear it's an internal class.
+ ParallelRunStep updates to accept multiple values for one argument, for example: "--group_column_names", "Col1", "Col2", "Col3"
+ Removed the passthru_automl_config requirement for intermediate data usage with AutoMLStep in Pipelines.
+ **azureml-pipeline-steps**
@@ -1668,7 +1673,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ run_invocation_timeout
+ run_max_try
+ Default value for process_count_per_node is changed to 1. User should tune this value for better performance. Best practice is to set as the number of GPU or CPU node has.
- + ParallelRunStep does not inject any packages, user needs to include **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in environment definition. If custom docker image is used with user_managed_dependencies, then user need to install conda on the image.
+ + ParallelRunStep doesn't inject any packages; users need to include the **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in the environment definition. If a custom Docker image is used with user_managed_dependencies, then users need to install conda on the image.
+ **Breaking changes**
+ **azureml-pipeline-steps**
@@ -1679,7 +1684,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **Bug fixes and improvements**
+ **azureml-automl-core**
+ Fixed the bug where a warning may be printed during `get_output` that asked user to downgrade client.
- + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
+ + Updated Mac to rely on cudatoolkit=9.0 as it isn't available at version 10 yet.
+ Removing restrictions on prophet and xgboost models when trained on remote compute.
+ Improved logging in AutoML
+ The error handling for custom featurization in forecasting tasks was improved.
@@ -1717,7 +1722,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Improved error text in Dataset execution failures.
+ Workspace private link CLI support added.
+ Added an optional parameter `invalid_lines` to `Dataset.Tabular.from_json_lines_files` that allows for specifying how to handle lines that contain invalid JSON.
- + We will be deprecating the run-based creation of compute in the next release. We recommend creating an actual Amlcompute cluster as a persistent compute target, and using the cluster name as the compute target in your run configuration. See example notebook here: aka.ms/amlcomputenb
+ + We'll be deprecating the run-based creation of compute in the next release. We recommend creating an actual Amlcompute cluster as a persistent compute target, and using the cluster name as the compute target in your run configuration. See example notebook here: aka.ms/amlcomputenb
+ Improved error messages in Dataset execution failures.
+ **azureml-dataprep**
+ Made warning to upgrade pyarrow version more explicit.
@@ -1726,7 +1731,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Documentation updates to azureml-interpret package.
+ Fixed interpretability packages and notebooks to be compatible with latest sklearn update
+ **azureml-opendatasets**
- + return None when there is no data returned.
+ + Return `None` when there's no data returned.
+ Improve the performance of to_pandas_dataframe.
+ **azureml-pipeline-core**
+ Quick fix for ParallelRunStep where loading from YAML was broken
@@ -1745,7 +1750,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Updates to error message to correctly display user error.
+ Support for cv_split_column_names to be used with training_data
+ Deprecated azureml.dprep.Dataflow as a valid type for input data.
- + Updated Mac to rely on cudatoolkit=9.0 as it is not available at version 10 yet.
+ + Updated Mac to rely on cudatoolkit=9.0 as it isn't available at version 10 yet.
+ Removing restrictions on prophet and xgboost models when trained on remote compute.
+ `azureml-train-automl-runtime` and `azureml-automl-runtime` have updated dependencies for `pytorch`, `scipy`, and `cudatoolkit`. we now support `pytorch==1.4.0`, `scipy>=1.0.0,<=1.3.1`, and `cudatoolkit==10.1.243`.
+ Added functionality to allow users to include lagged features to generate forecasts.
@@ -1759,7 +1764,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-train-core**
+ Added a new set of HyperDrive specific exceptions. azureml.train.hyperdrive now throws detailed exceptions.
+ **azureml-widgets**
- + Azure Machine Learning Widgets is not displaying in JupyterLab
+ + Fixed an issue where Azure Machine Learning Widgets weren't displaying in JupyterLab
## 2020-05-11
@@ -1776,7 +1781,7 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Fixes an accidentally left behind warning log in my previous PR. The log was used for debugging and accidentally was left behind.
+ Bug fix: inform clients about partial failure during profiling
+ **azureml-automl-core**
- + Speed up Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series. In order to benefit from this new feature, you are recommended to set "max_cores_per_iteration = -1" (that is, using all the available cpu cores) in AutoMLConfig.
+ + Speed up Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series. To benefit from this new feature, we recommend setting "max_cores_per_iteration = -1" (that is, using all the available CPU cores) in AutoMLConfig.
+ Fix KeyError on printing guardrails in console interface
+ Fixed error message for experimentation_timeout_hours
+ Deprecated TensorFlow models for AutoML.
@@ -1784,12 +1789,12 @@ Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Fixed error message for experimentation_timeout_hours
+ Fixed unclassified exception when trying to deserialize from cache store
+ Speed up Prophet/AutoArima model in AutoML forecasting by enabling parallel fitting for the time series when data sets have multiple time series.
- + Fixed the forecasting with enabled rolling window on the data sets where test/prediction set does not contain one of grains from the training set.
+ + Fixed forecasting with rolling window enabled on data sets where the test/prediction set doesn't contain one of the grains from the training set.
+ Improved handling of missing data
- + Fixed issue with prediction intervals during forecasting on data sets, containing time series, which are not aligned in time.
+ + Fixed an issue with prediction intervals during forecasting on data sets containing time series that aren't aligned in time.
+ Added better validation of data shape for the forecasting tasks.
+ Improved the frequency detection.
- + Created better error message if the cross validation folds for forecasting tasks cannot be generated.
+ + Created a better error message for when the cross-validation folds for forecasting tasks can't be generated.
+ Fix console interface to print missing value guardrail correctly.
+ Enforcing datatype checks on cv_split_indices input in AutoMLConfig.
+ **azureml-cli-common**
@@ -1876,7 +1881,7 @@ Access the following web-based authoring tools from the studio:
+ **Breaking changes**
- + AmlCompute clusters supported a Preview feature around run-based creation, that we are planning on deprecating in two weeks. You can continue to create persistent compute targets as always by using the Amlcompute class, but the specific approach of specifying the identifier "amlcompute" as the compute target in run config will not be supported soon.
+ + AmlCompute clusters supported a Preview feature around run-based creation, which we're planning to deprecate in two weeks. You can continue to create persistent compute targets as always by using the Amlcompute class, but the specific approach of specifying the identifier "amlcompute" as the compute target in run config won't be supported soon.
+ **Bug fixes and improvements**
+ **azureml-automl-runtime**
@@ -1885,7 +1890,7 @@ Access the following web-based authoring tools from the studio:
+ Improved stability when reading from Azure Blob Storage using a TabularDataset.
+ Improved documentation for the `grant_workspace_msi` parameter for `Datastore.register_azure_blob_store`.
+ Fixed bug with `datastore.upload` to support the `src_dir` argument ending with a `/` or `\`.
- + Added actionable error message when trying to upload to an Azure Blob Storage datastore that does not have an access key or SAS token.
+ + Added actionable error message when trying to upload to an Azure Blob Storage datastore that doesn't have an access key or SAS token.
+ **azureml-interpret**
+ Added upper bound to file size for the visualization data on uploaded explanations.
+ **azureml-train-automl-client**
@@ -1920,7 +1925,7 @@ Access the following web-based authoring tools from the studio:
+ Added user_managed flag in RSection that indicates whether the environment is managed by user or by Azure Machine Learning.
+ Dataset: Fixed dataset download failure if data path containing unicode characters.
+ Dataset: Improved dataset mount caching mechanism to respect the minimum disk space requirement in Azure Machine Learning Compute, which avoids making the node unusable and causing the job to be canceled.
- + Dataset: We add an index for the time series column when you access a time series dataset as a pandas dataframe, which is used to speed up access to time series-based data access. Previously, the index was given the same name as the timestamp column, confusing users about which is the actual timestamp column and which is the index. We now don't give any specific name to the index since it should not be used as a column.
+ + Dataset: We add an index for the time series column when you access a time series dataset as a pandas dataframe, which speeds up time series-based data access. Previously, the index was given the same name as the timestamp column, confusing users about which is the actual timestamp column and which is the index. We now don't give any specific name to the index since it shouldn't be used as a column.
+ Dataset: Fixed dataset authentication issue in sovereign cloud.
+ Dataset: Fixed `Dataset.to_spark_dataframe` failure for datasets created from Azure PostgreSQL datastores.
+ **azureml-interpret**
@@ -1997,14 +2002,14 @@ Access the following web-based authoring tools from the studio:
+ Changed the input of the constructor to take a list of options to apply the imputation options for corresponding columns.
+ The error logging has been improved.
+ **azureml-automl-runtime**
- + Fixed the issue with the error thrown if the grain was not present in the training set appeared in the test set
+ + Fixed the issue with the error thrown if a grain that wasn't present in the training set appeared in the test set
+ Removed the y_query requirement during scoring on forecasting service
+ Fixed the issue with forecasting when the data set contains short grains with long time gaps.
- + Fixed the issue when the auto max horizon is turned on and the date column contains dates in form of strings. Proper conversion and error messages were added for when conversion to date is not possible
+ + Fixed the issue when the auto max horizon is turned on and the date column contains dates in the form of strings. Proper conversion and error messages were added for when conversion to date isn't possible
+ Using native NumPy and SciPy for serializing and deserializing intermediate data for FileCacheStore (used for local AutoML runs)
+ Fixed a bug where failed child runs could get stuck in Running state.
+ Increased speed of featurization.
- + Fixed the frequency check during scoring. Now the forecasting tasks do not require strict frequency equivalence between train and test set.
+ + Fixed the frequency check during scoring. Now the forecasting tasks don't require strict frequency equivalence between train and test set.
+ Changed the input of the constructor to take a list of options to apply the imputation options for corresponding columns.
+ Fixed errors related to lag type selection.
+ Fixed the unclassified error raised on the data sets, having grains with the single row
@@ -2028,15 +2033,15 @@ Access the following web-based authoring tools from the studio:
+ Added image_build_compute parameter in workspace update method to allow user updating the compute for image build.
+ Added deprecation messages to the old profiling workflow. Fixed profiling cpu and memory limits.
+ Added RSection as part of Environment to run R jobs.
- + Added validation to `Dataset.mount` to raise error when source of the dataset is not accessible or does not contain any data.
+ + Added validation to `Dataset.mount` to raise an error when the source of the dataset isn't accessible or doesn't contain any data.
+ Added `--grant-workspace-msi-access` as another parameter for the Datastore CLI for registering Azure Blob Container that allows you to register Blob Container that is behind a VNet.
+ Single instance profiling was fixed to produce a recommendation and was made available in core SDK.
+ Fixed the issue in aks.py _deploy.
+ Validates the integrity of models being uploaded to avoid silent storage failures.
+ User may now specify a value for the auth key when regenerating keys for webservices.
- + Fixed bug where uppercase letters cannot be used as dataset's input name.
+ + Fixed a bug where uppercase letters couldn't be used as a dataset's input name.
+ **azureml-defaults**
- + `azureml-dataprep` will now be installed as part of `azureml-defaults`. It is no longer required to install data prep[fuse] manually on compute targets to mount datasets.
+ + `azureml-dataprep` will now be installed as part of `azureml-defaults`. It's no longer required to install data prep[fuse] manually on compute targets to mount datasets.
+ **azureml-interpret**
+ Updated azureml-interpret to interpret-community 0.6.*
+ Updated azureml-interpret to depend on interpret-community 0.5.0
@@ -2073,7 +2078,7 @@ Access the following web-based authoring tools from the studio:
+ Improved the detection of frequency on the data sets, lacking data or containing irregular data points
+ Added the ability to remove data points not complying with the dominant frequency.
+ **azureml-automl-runtime**
- + Fixed the issue with the error thrown if the grain was not present in the training set appeared in the test set
+ + Fixed the issue with the error thrown if a grain that wasn't present in the training set appeared in the test set
+ Removed the y_query requirement during scoring on forecasting service
+ **azureml-contrib-mir**
+ Adds functionality in the MirWebservice class to retrieve the Access Token
@@ -2112,7 +2117,7 @@ Access the following web-based authoring tools from the studio:
+ The error logging has been improved.
+ **azureml-automl-runtime**
+ Fixed the issue with forecasting when the data set contains short grains with long time gaps.
- + Fixed the issue when the auto max horizon is turned on and the date column contains dates in form of strings. We added proper conversion and sensible error if conversion to date is not possible
+ + Fixed the issue when the auto max horizon is turned on and the date column contains dates in the form of strings. We added proper conversion and a sensible error if conversion to date isn't possible
+ Using native NumPy and SciPy for serializing and deserializing intermediate data for FileCacheStore (used for local AutoML runs)
+ Fixed a bug where failed child runs could get stuck in Running state.
+ **azureml-cli-common**
@@ -2151,7 +2156,7 @@ Access the following web-based authoring tools from the studio:
+ **Bug fixes and improvements**
+ **azureml-automl-runtime**
+ Increased speed of featurization.
- + Fixed the frequency check during scoring. Now in the forecasting tasks we do not require strict frequency equivalence between train and test set.
+ + Fixed the frequency check during scoring. Now in the forecasting tasks we don't require strict frequency equivalence between train and test set.
+ **azureml-core**
+ User may now specify a value for the auth key when regenerating keys for webservices.
+ **azureml-interpret**
@@ -2211,7 +2216,7 @@ Access the following web-based authoring tools from the studio:
+ **azureml-automl-runtime**
+ Fixed a regression that caused a TypeError to be raised when running AutoML on Python versions below 3.5.4.
+ **azureml-core**
- + Fixed bug in `datastore.upload_files` were relative path that didn't start with `./` was not able to be used.
+ + Fixed a bug in `datastore.upload_files` where a relative path that didn't start with `./` couldn't be used.
+ Added deprecation messages for all Image class code paths
+ Fixed Model Management URL construction for Microsoft Azure operated by 21Vianet.
+ Fixed issue where models using source_dir couldn't be packaged for Azure Functions.
@@ -2237,7 +2242,7 @@ Access the following web-based authoring tools from the studio:
+ **azureml-core**
+ Compute target can now be specified as a parameter to the corresponding deployment config objects. This is specifically the name of the compute target to deploy to, not the SDK object.
+ Added CreatedBy information to Model and Service objects. May be accessed through.created_by
- + Fixed ContainerImage.run(), which was not correctly setting up the Docker container's HTTP port.
+ + Fixed ContainerImage.run(), which wasn't correctly setting up the Docker container's HTTP port.
+ Make `azureml-dataprep` optional for `az ml dataset register` CLI command
+ Fixed a bug where `TabularDataset.to_pandas_dataframe` would incorrectly fall back to an alternate reader and print a warning.
+ **azureml-explain-model**
@@ -2262,12 +2267,12 @@ Access the following web-based authoring tools from the studio:
+ Added related code changes to handle categorical dtype correctly.
+ The forecast function interface was improved: the y_pred parameter was made optional. -The docstrings were improved.
+ **azureml-contrib-dataset**
- + Fixed a bug where labeled datasets could not be mounted.
+ + Fixed a bug where labeled datasets couldn't be mounted.
+ **azureml-core**
+ Bug fix for `Environment.from_existing_conda_environment(name, conda_environment_name)`. User can create an instance of Environment that is exact replica of the local environment
+ Changed time series-related Datasets methods to `include_boundary=True` by default.
+ **azureml-train-automl-client**
- + Fixed issue where validation results are not printed when show output is set to false.
+ + Fixed an issue where validation results weren't printed when show output was set to false.
## 2019-11-25
@@ -2276,7 +2281,7 @@ Access the following web-based authoring tools from the studio:
+ **Breaking changes**
+ Azureml-Train-AutoML upgrade issues
- + Upgrading to azureml-train-automl>=1.0.76 from azureml-train-automl<1.0.76 can cause partial installations, causing some AutoML imports to fail. To resolve this, you can run the setup script found at https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/automl_setup.cmd. Or if you are using pip directly you can:
+ + Upgrading to azureml-train-automl>=1.0.76 from azureml-train-automl<1.0.76 can cause partial installations, causing some AutoML imports to fail. To resolve this, you can run the setup script found at https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/automl_setup.cmd. Or if you're using pip directly you can:
+ "pip install --upgrade azureml-train-automl"
+ "pip install --ignore-installed azureml-train-automl-client"
+ or you can uninstall the old version before upgrading
@@ -2343,7 +2348,7 @@ Access the following web-based authoring tools from the studio:
+ Fixes breaking change in Azure Storage causing Azure Machine Learning CLI to fail.
+ **azureml-core**
+ Fixes a bug that caused models deployed on Azure Functions to produce 500 s.
- + Fixed an issue where the amlignore file was not applied on snapshots.
+ + Fixed an issue where the amlignore file wasn't applied on snapshots.
+ Added a new API amlcompute.get_active_runs that returns a generator for running and queued runs on a given amlcompute.
+ Added Load Balancer Type to MLC for AKS types.
+ Added append_prefix bool parameter to download_files in run.py and download_artifacts_from_prefix in artifacts_client. This flag is used to selectively flatten the origin filepath so only the file or folder name is added to the output_directory
@@ -2354,7 +2359,7 @@ Access the following web-based authoring tools from the studio:
+ **azureml-train-automl**
+ Fixed an issue where automl_step might not print validation issues.
+ Fixed register_model to succeed even if the model's environment is missing dependencies locally.
- + Fixed an issue where some remote runs were not docker enabled.
+ + Fixed an issue where some remote runs weren't docker enabled.
+ Add logging of the exception that is causing a local run to fail prematurely.
+ **azureml-train-core**
+ Consider resume_from runs in the calculation of automated hyperparameter tuning best child runs.
@@ -2415,8 +2420,8 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ We have also made enhancements to Azure Machine Learning Compute - you can now view metrics for your clusters (like total nodes, running nodes, total core quota) in Azure Monitor, besides viewing Diagnostic logs for debugging. In addition, you can also view currently running or queued runs on your cluster and details such as the IPs of the various nodes on your cluster. You can view these either in the portal or by using corresponding functions in the SDK or CLI.
+ **Preview features**
- + We are releasing preview support for disk encryption of your local SSD in Azure Machine Learning Compute. Raise a technical support ticket to get your subscription allow listed to use this feature.
- + Public Preview of Azure Machine Learning Batch Inference. Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
+ + We're releasing preview support for disk encryption of your local SSD in Azure Machine Learning Compute. Raise a technical support ticket to get your subscription allow listed to use this feature.
+ + Public Preview of Azure Machine Learning Batch Inference. Azure Machine Learning Batch Inference targets large inference jobs that aren't time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It's optimized for high-throughput, fire-and-forget inference over large collections of data.
+ **azureml-contrib-dataset**
+ Enabled functionalities for labeled dataset
```Python
@@ -2451,7 +2456,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ **azureml-automl-core**
+ Training an iteration would run in a child process only when runtime constraints are being set.
+ Added a guardrail for forecasting tasks, to check whether a specified max_horizon causes a memory issue on the given machine or not. If it will, a guardrail message is displayed.
- + Added support for complex frequencies like two years and one month. -Added comprehensible error message if frequency cannot be determined.
+ + Added support for complex frequencies like two years and one month. Added a comprehensible error message if the frequency can't be determined.
+ Add azureml-defaults to auto generated conda env to solve the model deployment failure
+ Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in `AutoMLStep`.
+ Implemented column purpose update for streaming.
@@ -2470,7 +2475,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ CLI now supports model packaging.
+ Add deprecation warning for deprecated Dataset APIs. See Dataset API change notice at https://aka.ms/tabular-dataset.
+ Change [`Dataset.get_by_id`](/python/api/azureml-core/azureml.core.dataset%28class%29#get-by-id-workspace--id-) to return registration name and version if the dataset is registered.
- + Fix a bug that ScriptRunConfig with dataset as argument cannot be used repeatedly to submit experiment run.
+ + Fixed a bug where a ScriptRunConfig with a dataset as an argument couldn't be used repeatedly to submit experiment runs.
+ Datasets retrieved during a run will be tracked and can be seen in the run details page or by calling [`run.get_details()`](/python/api/azureml-core/azureml.core.run%28class%29#get-details--) after the run is complete.
+ Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in [`AutoMLStep`](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automlstep).
+ Added support for deploying and packaging supported models (ONNX, scikit-learn, and TensorFlow) without an InferenceConfig instance.
@@ -2527,7 +2532,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Fixed the issue with heuristic lags and rolling window sizes on the forecasting data sets some series of which can cause linear algebra errors
+ Added print for the heuristically determined parameters in the forecasting runs.
+ **azureml-contrib-datadrift**
- + Added protection while creating output metrics if dataset level drift is not in the first section.
+ + Added protection while creating output metrics if dataset level drift isn't in the first section.
+ **azureml-contrib-interpret**
+ azureml-contrib-explain-model package has been renamed to azureml-contrib-interpret
+ **azureml-core**
@@ -2550,7 +2555,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ The show attribute of DataDriftDector class doesn't support optional argument 'with_details' anymore. The show attribute only presents data drift coefficient and data drift contribution of feature columns.
+ DataDriftDetector attribute 'get_output' behavior changes:
+ Input parameter start_time, end_time are optional instead of mandatory;
- + Input specific start_time and/or end_time with a specific run_id in the same invoking results in value error exception because they are mutually exclusive
+ + Inputting a specific start_time and/or end_time with a specific run_id in the same invocation results in a ValueError exception because they're mutually exclusive;
+ By input specific start_time and/or end_time, only results of scheduled runs are returned;
+ Parameter 'daily_latest_only' is deprecated.
+ Support retrieving Dataset-based Data Drift outputs.
@@ -2589,7 +2594,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ The show attribute of [DataDriftDetector](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector.datadriftdetector) class don't support optional argument 'with_details' anymore. The show attribute only presents data drift coefficient and data drift contribution of feature columns.
+ DataDriftDetector function [get_output]python/api/azureml-datadrift/azureml.datadrift.datadriftdetector.datadriftdetector#get-output-start-time-none--end-time-none--run-id-none-) behavior changes:
+ Input parameter start_time, end_time are optional instead of mandatory;
- + Input specific start_time and/or end_time with a specific run_id in the same invoking results in value error exception because they are mutually exclusive;
+ + Inputting a specific start_time and/or end_time with a specific run_id in the same invocation results in a ValueError exception because they're mutually exclusive;
+ By input specific start_time and/or end_time, only results of scheduled runs are returned;
+ Parameter 'daily_latest_only' is deprecated.
+ Support retrieving Dataset-based Data Drift outputs.
@@ -2648,9 +2653,9 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Added deprecation message for explain_model() and retrieve_model_explanations().
+ Added Prophet as a trainable pipeline (preview only).
+ Added support for automatic detection of target lags, rolling window size, and maximal horizon. If one of target_lags, target_rolling_window_size or max_horizon is set to 'auto', the heuristics is applied to estimate the value of corresponding parameter based on training data.
- + Fixed forecasting in the case when data set contains one grain column, this grain is of a numeric type and there is a gap between train and test set
+ + Fixed forecasting for the case when the data set contains one grain column, this grain is of a numeric type, and there's a gap between the train and test sets
+ Fixed the error message about the duplicated index in the remote run in forecasting tasks
- + Fixed forecasting in the case when data set contains one grain column, this grain is of a numeric type and there is a gap between train and test set.
+ + Fixed forecasting in the case when data set contains one grain column, this grain is of a numeric type and there's a gap between train and test set.
+ Fixed the error message about the duplicated index in the remote run in forecasting tasks.
+ Added a guardrail to check whether a dataset is imbalanced. If it is, a guardrail message is written to the console.
+ **azureml-core**
@@ -2659,7 +2664,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Add API `Dataset.Tabular.from_json_lines_files` to create a TabularDataset from JSON Lines files. To learn more about working with JSON Lines files on TabularDataset, visit [this article](how-to-create-register-datasets.md) for documentation.
+ Added other VM size fields (OS Disk, number of GPUs) to the supported_vmsizes() function
+ Added more fields to the list_nodes() function to show the run, the private and public IPs, the port, etc.
- + Ability to specify a new field during cluster provisioning --remotelogin_port_public_access, which can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you do not specify it, the service will smartly open or close the port depending on whether you are deploying the cluster inside a VNet.
+ + Ability to specify a new field during cluster provisioning --remotelogin_port_public_access, which can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you don't specify it, the service will smartly open or close the port depending on whether you're deploying the cluster inside a VNet.
+ **azureml-explain-model**
+ **[azureml-core](/python/api/azureml-core/azureml.core)**
+ Added ability to retrieve SAS URL to model in storage through the model object. Ex: model.[get_sas_url()](/python/api/azureml-core/azureml.core.model.model#get-sas-urls--)
@@ -2667,7 +2672,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Add API `Dataset.Tabular`.[from_json_lines_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) to create a TabularDataset from JSON Lines files. To learn more about working with JSON Lines files on TabularDataset, visit https://aka.ms/azureml-data for documentation.
+ Added other VM size fields (OS Disk, number of GPUs) to the [supported_vmsizes()](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#supported-vmsizes-workspace--location-none-) function
+ Added other fields to the [list_nodes()](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#list-nodes--) function to show the run, the private and public IPs, the port, etc.
- + Ability to specify a new field during cluster [provisioning](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#provisioning-configuration-vm-size-----vm-priority--dedicated---min-nodes-0--max-nodes-none--idle-seconds-before-scaledown-none--admin-username-none--admin-user-password-none--admin-user-ssh-key-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--tags-none--description-none--remote-login-port-public-access--notspecified--) that can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you do not specify it, the service smartly opens or closes the port depending on whether you are deploying the cluster inside a VNet.
+ + Ability to specify a new field during cluster [provisioning](/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute#provisioning-configuration-vm-size-----vm-priority--dedicated---min-nodes-0--max-nodes-none--idle-seconds-before-scaledown-none--admin-username-none--admin-user-password-none--admin-user-ssh-key-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--tags-none--description-none--remote-login-port-public-access--notspecified--) that can be set to enabled or disabled depending on whether you would like to leave the SSH port open or closed at the time of creating the cluster. If you don't specify it, the service smartly opens or closes the port depending on whether you're deploying the cluster inside a VNet.
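As background for the `from_json_lines_files` API mentioned above, JSON Lines stores one JSON object per line, and each line becomes one row of the resulting tabular dataset. A minimal stdlib sketch of that parsing (illustrative only; the field names `city` and `temp` are made up, and this isn't the azureml implementation):

```python
import io
import json

def read_json_lines(stream):
    # One JSON object per non-empty line -> one tabular row per line.
    return [json.loads(line) for line in stream if line.strip()]

sample = io.StringIO('{"city": "Seattle", "temp": 11}\n{"city": "Austin", "temp": 27}\n')
rows = read_json_lines(sample)  # list of dicts, one per input line
```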
+ **azureml-explain-model**
+ Improved documentation for Explanation outputs in the classification scenario.
+ Added the ability to upload the predicted y values on the explanation for the evaluation examples. Unlocks more useful visualizations.
@@ -2676,7 +2681,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Added notebook to describe Module, ModuleVersion, and ModuleStep
+ **azureml-pipeline-steps**
+ Added RScriptStep to support R script run via AML pipeline.
- + Fixed metadata parameters parsing in AzureBatchStep that was causing the error message "assignment for parameter SubscriptionId is not specified."
+ + Fixed metadata parameters parsing in AzureBatchStep that was causing the error message "assignment for parameter SubscriptionId isn't specified."
+ **azureml-train-automl**
+ Supported training_data, validation_data, label_column_name, weight_column_name as data input format
+ Added deprecation message for explain_model() and retrieve_model_explanations()
@@ -2684,7 +2689,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Added a [notebook](https://aka.ms/pl-modulestep) to describe [Module](/python/api/azureml-pipeline-core/azureml.pipeline.core.module%28class%29), ModuleVersion, and [ModuleStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.modulestep).
+ **[azureml-pipeline-steps](/python/api/azureml-pipeline-steps/azureml.pipeline.steps)**
+ Added [RScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.rscriptstep) to support R script run via AML pipeline.
- + Fixed metadata parameters parsing in [AzureBatchStep that was causing the error message "assignment for parameter SubscriptionId is not specified".
+ + Fixed metadata parameters parsing in AzureBatchStep that was causing the error message "assignment for parameter SubscriptionId isn't specified".
+ **[azureml-train-automl](/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime)**
+ Supported training_data, validation_data, label_column_name, weight_column_name as data input format.
+ Added deprecation message for explain_model() and retrieve_model_explanations().
@@ -2704,7 +2709,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ **Bug fixes and improvements**
+ **azureml-automl-core**
+ Deprecated the AutoML setting 'lag_length' and the LaggingTransformer.
- + Fixed correct validation of input data if they are specified in a Dataflow format
+ + Fixed correct validation of input data if they're specified in a Dataflow format
+ Modified fit_pipeline.py to generate the graph JSON and upload it to artifacts.
+ Rendered the graph under `userrun` using `Cytoscape`.
+ **azureml-core**
@@ -2713,14 +2718,14 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Fixed a bug where corrupt or empty models could be uploaded because of failed retries.
+ Fixed the bug where `DataReference` name changes when the `DataReference` mode changes (for example, when calling `as_upload`, `as_download`, or `as_mount`).
+ Make `mount_point` and `target_path` optional for `FileDataset.mount` and `FileDataset.download`.
- + Exception that timestamp column cannot be found is thrown out if the time serials-related API is called without fine timestamp column assigned or the assigned timestamp columns are dropped.
+ + An exception that the timestamp column can't be found is thrown if a time series-related API is called without a fine timestamp column assigned, or if the assigned timestamp columns were dropped.
+ Time series columns should be assigned a column whose type is Date; otherwise, an exception is expected
+ The time series column assignment API 'with_timestamp_columns' can take a None value for the fine/coarse timestamp column name, which clears previously assigned timestamp columns
+ An exception is thrown when either the coarse-grained or fine-grained timestamp column is dropped, indicating to the user that dropping can be done only after either excluding the timestamp column from the drop list or calling with_time_stamp with a None value to release the timestamp columns
- + Exception will be thrown out when either coarse grain or fine grained timestamp column is not included in keep columns list with indication for user that keeping can be done after either including timestamp column in keep column list or call with_time_stamp with None value to release timestamp columns.
+ + An exception is thrown when either the coarse-grained or fine-grained timestamp column isn't included in the keep-columns list, indicating to the user that keeping can be done only after either including the timestamp column in the keep-columns list or calling with_time_stamp with a None value to release the timestamp columns.
+ Added logging for the size of a registered model.
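The drop/keep guardrails described above boil down to a membership check on the assigned timestamp columns; a sketch of the keep-columns case (illustrative only, not the actual azureml-core code, with a hypothetical helper name):

```python
def validate_keep_columns(keep_columns, timestamp_columns):
    # Timestamp columns assigned via with_timestamp_columns must survive a
    # keep-columns operation, or be released (assigned None) beforehand.
    missing = [c for c in timestamp_columns if c not in keep_columns]
    if missing:
        raise ValueError(
            f"Timestamp column(s) {missing} not in the keep-columns list; "
            "include them, or release them by assigning None first.")
    return keep_columns
```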
+ **azureml-explain-model**
- + Fixed warning printed to console when "packaging" Python package is not installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ + Fixed warning printed to console when "packaging" Python package isn't installed: "Using older than supported version of lightgbm, please upgrade to version greater than 2.2.1"
+ Fixed download model explanation with sharding for global explanations with many features
+ Fixed mimic explainer missing initialization examples on output explanation
+ Fixed immutable error on set properties when uploading with explanation client using two different types of models
@@ -2729,7 +2734,7 @@ The Experiment tab in the [new workspace portal](https://ml.azure.com) has been
+ Introduced public APIs from AutoML for supporting explanations from the `automl` explain SDK: a newer way of supporting AutoML explanations by decoupling AutoML featurization and the explain SDK, with integrated raw explanation support from the azureml explain SDK for AutoML models.
+ Removing azureml-defaults from remote training environments.
+ Changed default cache store location from FileCacheStore based one to AzureFileCacheStore one for AutoML on Azure Databricks code path.
- + Fixed correct validation of input data if they are specified in a Dataflow format
+ + Fixed correct validation of input data if they're specified in a Dataflow format
+ **azureml-train-core**
+ Reverted source_directory_data_store deprecation.
+ Added ability to override azureml installed package versions.
@@ -2763,7 +2768,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
1. Refresh your browser if you see "Something went wrong! Error loading chunk files" when deployment is in progress.
-1. Can't delete or rename file in Notebooks and Files. During Public Preview, you can use Jupyter UI or Terminal in Notebook VM to perform update file operations. Because it is a mounted network file system all changes, you make on Notebook VM are immediately reflected in the Notebook Workspace.
+1. Can't delete or rename files in Notebooks and Files. During Public Preview, you can use the Jupyter UI or Terminal in the Notebook VM to perform file update operations. Because it's a mounted network file system, all changes you make on the Notebook VM are immediately reflected in the Notebook Workspace.
1. To SSH into the Notebook VM:
1. Find the SSH keys that were created during VM setup. Or, find the keys in the Azure Machine Learning workspace > open Compute tab > locate Notebook VM in the list > open its properties: copy the keys from the dialog.
@@ -2804,7 +2809,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ Introduce `partition_format` as argument to `Dataset.Tabular.from_delimited_files` and `Dataset.Tabular.from_parquet.files`. The partition information of each data path is extracted into columns based on the specified format. '{column_name}' creates a string column, and '{column_name:yyyy/MM/dd/HH/mm/ss}' creates a datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm', and 'ss' are used to extract year, month, day, hour, minute, and second for the datetime type. The partition_format should start from the position of the first partition key and continue to the end of the file path. For example, given the path '../USA/2019/01/01/data.csv' where the partition is by country/region and time, partition_format='/{Country}/{PartitionDate:yyyy/MM/dd}/data.csv' creates string column 'Country' with value 'USA' and datetime column 'PartitionDate' with value '2019-01-01'.
+ `to_csv_files` and `to_parquet_files` methods have been added to `TabularDataset`. These methods enable conversion between a `TabularDataset` and a `FileDataset` by converting the data to files of the specified format.
+ Automatically log into the base image registry when saving a Dockerfile generated by Model.package().
- + 'gpu_support' is no longer necessary; AML now automatically detects and uses the nvidia docker extension when it is available. It will be removed in a future release.
+ + 'gpu_support' is no longer necessary; AML now automatically detects and uses the nvidia docker extension when it's available. It will be removed in a future release.
+ Added support to create, update, and use PipelineDrafts.
+ This release improves the execution performance of automated machine learning local runs.
+ Users can query metrics from run history by name.
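The `partition_format` extraction described earlier can be illustrated with a small standalone parser (a sketch of the documented behavior only, not the SDK's internal implementation):

```python
import re
from datetime import datetime

# Map the documented format tokens to strptime directives and match patterns.
TOKENS = {"yyyy": ("%Y", r"\d{4}"), "MM": ("%m", r"\d{2}"), "dd": ("%d", r"\d{2}"),
          "HH": ("%H", r"\d{2}"), "mm": ("%M", r"\d{2}"), "ss": ("%S", r"\d{2}")}

def extract_partition_columns(path, partition_format):
    """Extract partition columns from a data path, mimicking partition_format."""
    regex, datetime_cols, last = "", {}, 0
    for m in re.finditer(r"\{(\w+)(?::([^}]+))?\}", partition_format):
        regex += re.escape(partition_format[last:m.start()])
        name, fmt = m.group(1), m.group(2)
        if fmt is None:
            regex += rf"(?P<{name}>[^/]+)"   # '{name}' -> string column
        else:                                # '{name:yyyy/MM/dd}' -> datetime column
            strp, pat = fmt, re.escape(fmt)
            for tok, (directive, tok_pat) in TOKENS.items():
                strp = strp.replace(tok, directive)
                pat = pat.replace(re.escape(tok), tok_pat)
            regex += rf"(?P<{name}>{pat})"
            datetime_cols[name] = strp
        last = m.end()
    regex += re.escape(partition_format[last:]) + "$"
    cols = dict(re.search(regex, path).groupdict())
    for name, strp in datetime_cols.items():
        cols[name] = datetime.strptime(cols[name], strp)
    return cols

cols = extract_partition_columns("../USA/2019/01/01/data.csv",
                                 "/{Country}/{PartitionDate:yyyy/MM/dd}/data.csv")
# cols["Country"] is "USA"; cols["PartitionDate"] is datetime(2019, 1, 1)
```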
@@ -2829,7 +2834,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ Enabled writing to ADLS/ADLSGen2 using raw path and credentials.
+ Fixed a bug that caused `include_path=True` to not work for `read_parquet`.
+ Fixed `to_pandas_dataframe()` failure caused by exception "Invalid property value: hostSecret".
- + Fixed a bug where files could not be read on DBFS in Spark mode.
+ + Fixed a bug where files couldn't be read on DBFS in Spark mode.
## 2019-08-19
@@ -2841,14 +2846,14 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ **azure-cli-ml**
+ You can now update the TLS/SSL certificate for the scoring endpoint deployed on an AKS cluster, for both Microsoft-generated and customer certificates.
+ **azureml-automl-core**
- + Fixed an issue in AutoML where rows with missing labels were not removed properly.
+ + Fixed an issue in AutoML where rows with missing labels weren't removed properly.
+ Improved error logging in AutoML; full error messages will now always be written to the log file.
+ AutoML has updated its package pinning to include `azureml-defaults`, `azureml-explain-model`, and `azureml-dataprep`. AutoML no longer warns on package mismatches (except for `azureml-train-automl` package).
+ Fixed an issue in `timeseries` where CV splits of unequal size caused bin calculation to fail.
+ When running an ensemble iteration for the Cross-Validation training type, if we had trouble downloading the models trained on the entire dataset, there was an inconsistency between the model weights and the models being fed into the voting ensemble.
+ Fixed the error raised when training and/or validation labels (y and y_valid) are provided as a pandas dataframe but not as a numpy array.
+ Fixed the issue with the forecasting tasks when None was encountered in the Boolean columns of input tables.
- + Allow AutoML users to drop training series that's not long enough when forecasting. - Allow AutoML users to drop grains from the test set that does not exist in the training set when forecasting.
+ + Allow AutoML users to drop training series that aren't long enough when forecasting. Also allow AutoML users to drop grains from the test set that don't exist in the training set when forecasting.
+ **azureml-core**
+ Fixed issue with blob_cache_timeout parameter ordering.
+ Added external fit and transform exception types to system errors.
@@ -2883,10 +2888,10 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ **azureml-telemetry**
+ Fixed an incorrect telemetry configuration.
+ **azureml-train-automl**
- + Fixed the bug where on setup failure, error was not getting logged in "errors" field for the setup run and hence was not stored in parent run "errors".
- + Fixed an issue in AutoML where rows with missing labels were not removed properly.
- + Allow AutoML users to drop training series that are not long enough when forecasting.
- + Allow AutoML users to drop grains from the test set that does not exist in the training set when forecasting.
+ + Fixed the bug where, on setup failure, the error wasn't logged in the "errors" field for the setup run and hence wasn't stored in the parent run's "errors".
+ + Fixed an issue in AutoML where rows with missing labels weren't removed properly.
+ + Allow AutoML users to drop training series that aren't long enough when forecasting.
+ + Allow AutoML users to drop grains from the test set that don't exist in the training set when forecasting.
+ Now AutoMLStep passes through `automl` config to backend to avoid any issues on changes or additions of new config parameters.
+ AutoML Data Guardrail is now in public preview. Users will see a Data Guardrail report (for classification/regression tasks) after training and can also access it through the SDK API.
+ **azureml-train-core**
@@ -3061,7 +3066,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ Added support for reading a file directly from an http or https url.
+ **Bug fixes and improvements**
- + Improved error message when attempting to read a Parquet Dataset from a remote source (which is not currently supported).
+ + Improved error message when attempting to read a Parquet Dataset from a remote source (which isn't currently supported).
+ Fixed a bug when writing to Parquet file format in ADLS Gen 2, and updating the ADLS Gen 2 container name in the path.
## 2019-07-09
@@ -3168,7 +3173,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ **Bring your own base image for inference**
Another common complaint was the difficulty in moving from experimentation to inference with respect to sharing dependencies. With our new base image sharing capability, you can now reuse your experimentation base images, dependencies and all, for inference. This should speed up deployments and reduce the gap from the inner to the outer loop.
-+ **Improved Swagger schema generation experience**
-Our previous swagger generation method was error prone and impossible to automate. We have a new in-line way of generating swagger schemas from any Python function via decorators. We have open-sourced this code and our schema generation protocol is not coupled to the Azure Machine Learning platform.
++ **Improved Swagger schema generation experience**
+Our previous swagger generation method was error prone and impossible to automate. We have a new in-line way of generating swagger schemas from any Python function via decorators. We have open-sourced this code and our schema generation protocol isn't coupled to the Azure Machine Learning platform.
+ **Azure Machine Learning CLI is generally available (GA)**
Models can now be deployed with a single CLI command. We got common customer feedback that no one deploys an ML model from a Jupyter notebook. The [**CLI reference documentation**](reference-azure-machine-learning-cli.md) has been updated.
diff --git a/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md b/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
index cd8751a2005..143028965dd 100644
--- a/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
+++ b/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
@@ -156,7 +156,7 @@ df.dtypes
This dataset contains data for numerous countries/regions. Let's verify which countries/regions we have data for.
-We will start by looking at the latest data for each country:
+We will start by looking at the latest data for each country/region:
```python
df.groupby('countryname').first().filter(['confirmedcases ', 'confirmeddeaths','h5_investment_in_vaccines',
diff --git a/articles/search/search-get-started-rag.md b/articles/search/search-get-started-rag.md
index 5fb79ce9a0e..29840547d20 100644
--- a/articles/search/search-get-started-rag.md
+++ b/articles/search/search-get-started-rag.md
@@ -8,12 +8,12 @@ ms.service: azure-ai-search
ms.custom:
- ignite-2024
ms.topic: quickstart
-ms.date: 10/14/2024
+ms.date: 01/07/2025
---
# Quickstart: Generative search (RAG) with grounding data from Azure AI Search
-This quickstart shows you how to send basic and complex queries to a Large Language Model (LLM) for a conversational search experience over your indexed content on Azure AI Search. You use the Azure portal to set up the resources, and then run Python code to call the APIs.
+This quickstart shows you how to send queries to a chat completion model for a conversational search experience over your indexed content on Azure AI Search. You use the Azure portal to set up the resources, and then run Python code to call the APIs.
## Prerequisites
@@ -21,7 +21,7 @@ This quickstart shows you how to send basic and complex queries to a Large Langu
- [Azure AI Search](search-create-service-portal.md), Basic tier or higher so that you can [enable semantic ranker](semantic-how-to-enable-disable.md). Region must be the same one used for Azure OpenAI.
-- [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of `gpt-4o`, `gpt-4o-mini`, or equivalent LLM, in the same region as Azure AI Search.
+- [Azure OpenAI](https://aka.ms/oai/access) resource with a deployment of `gpt-4o`, `gpt-4o-mini`, or equivalent chat completion model, in the same region as Azure AI Search.
- [Visual Studio Code](https://code.visualstudio.com/download) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) and the [Jupyter package](https://pypi.org/project/jupyter/). For more information, see [Python in Visual Studio Code](https://code.visualstudio.com/docs/languages/python).
@@ -90,7 +90,17 @@ We recommend the hotels-sample-index, which can be created in minutes and runs o
1. Select **Edit JSON**.
-1. Search for "semantic" to find the section in the index for a semantic configuration. Replace the empty `"semantic": {}` line with the following semantic configuration. This example specifies a `"defaultConfiguration"`, which is important to the running of this quickstart.
+1. Scroll to the end of the index, where you can find placeholders for constructs that can be added to an index.
+
+ ```json
+ "analyzers": [],
+ "tokenizers": [],
+ "tokenFilters": [],
+ "charFilters": [],
+ "normalizers": [],
+ ```
+
+1. On a new line after "normalizers", paste in the following semantic configuration. This example specifies a `"defaultConfiguration"`, which is important for running this quickstart.
```json
"semantic":{
diff --git a/articles/search/search-limits-quotas-capacity.md b/articles/search/search-limits-quotas-capacity.md
index 6745a6e8749..e33250c1a94 100644
--- a/articles/search/search-limits-quotas-capacity.md
+++ b/articles/search/search-limits-quotas-capacity.md
@@ -69,8 +69,6 @@ Maximum number of documents per index are:
+ 288 billion on L1
+ 576 billion on L2
-You can check the number of documents in the Azure portal and through REST calls that include `search=*` and `count=true`.
-
Maximum size of each document is approximately 16 megabytes. Document size is actually a limit on the size of the indexing API request payload, which is 16 megabytes. That payload can be a single document, or a batch of documents. For a batch with a single document, the maximum document size is 16 MB of JSON.
Document size applies to *push mode* indexing that uploads documents to a search service. If you're using an indexer for *pull mode* indexing, your source files can be any file size, subject to [indexer limits](#indexer-limits). For the blob indexer, file size limits are larger for higher tiers. For example, the S1 limit is 128 megabytes, S2 limit is 256 megabytes, and so forth.
diff --git a/articles/search/tutorial-rag-build-solution-query.md b/articles/search/tutorial-rag-build-solution-query.md
index b047548ad9e..a23414376f8 100644
--- a/articles/search/tutorial-rag-build-solution-query.md
+++ b/articles/search/tutorial-rag-build-solution-query.md
@@ -167,16 +167,10 @@ search_results = search_client.search(
vector_queries= [vector_query],
filter="search.ismatch('ice*', 'locations', 'full', 'any')",
select=["title", "chunk", "locations"],
- top=5,
+ top=5
)
sources_formatted = "=================\n".join([f'TITLE: {document["title"]}, CONTENT: {document["chunk"]}, LOCATIONS: {document["locations"]}' for document in search_results])
-
-search_results = search_client.search(
- search_text=query,
- top=10,
- filter="search.ismatch('ice*', 'locations', 'full', 'any')",
- select="title, chunk, locations"
```
Results from the filtered query should now look similar to the following response. Notice the emphasis on ice cover.