With the development of generative AI, we want a unified platform to manage different LLMs and SLMs, integrate enterprise data, run fine-tuning/RAG operations, and evaluate enterprise business scenarios after LLM and SLM integration, so that intelligent generative AI applications can be implemented more effectively. Azure AI Studio is an enterprise-grade platform for building generative AI applications.
With Azure AI Studio, you can evaluate large language model (LLM) responses and orchestrate prompt application components with prompt flow for better performance. The platform facilitates scalability for transforming proof of concepts into full-fledged production with ease. Continuous monitoring and refinement support long-term success.
We can quickly deploy the Phi-3 model on Azure AI Studio in a few simple steps, and then use Azure AI Studio for Phi-3-related work such as Playground/Chat, fine-tuning, and evaluation.
This is a Bicep template that deploys everything you need to get started with Azure AI Studio. It includes an AI Hub with dependent resources, an AI project, AI Services, and an online endpoint.
If you already have the Azure Developer CLI installed on your machine, using this template is as simple as running this command in a new directory.
```
azd init -t azd-aistudio-starter
```
Or, if you are using the azd VS Code extension, you can paste this URL into the VS Code command terminal.
azd-aistudio-starter
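The azd workflow above can be sketched as a short terminal session (assuming the Azure Developer CLI is installed; the directory name is hypothetical, and `azd up` provisions the resources defined by the template):

```shell
# Scaffold a new project from the starter template (directory name is an example)
mkdir aistudio-demo && cd aistudio-demo
azd init -t azd-aistudio-starter

# Authenticate, then provision and deploy everything in the template
azd auth login
azd up
```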
Create Azure AI Studio on Azure Portal
After naming the studio and choosing a region, you can create it.
Once creation succeeds, you can access the studio you created at ai.azure.com.
One AI Studio instance can contain multiple projects. Create a project in AI Studio to get started.
Click the project's Explore option to enter the Model Catalog, and select Phi-3.
Select Phi-3-mini-4k-instruct
Click 'Deploy' to deploy the Phi-3-mini-4k-instruct model
Note
You can select computing power when deploying
Go to the deployment page, select Playground, and chat with Phi-3 of Azure AI Studio
To deploy a model from the Azure Model Catalog, you can follow these steps:
- Sign in to Azure AI Studio.
- Choose the model you want to deploy from the Azure AI Studio model catalog.
- On the model's Details page, select Deploy and then select Serverless API with Azure AI Content Safety.
- Select the project in which you want to deploy your models. To use the Serverless API offering, your workspace must belong to the East US 2 or Sweden Central region. You can customize the Deployment name.
- On the deployment wizard, select the Pricing and terms to learn about the pricing and terms of use.
- Select Deploy. Wait until the deployment is ready and you're redirected to the Deployments page.
- Select Open in playground to start interacting with the model.
- You can return to the Deployments page, select the deployment, and note the endpoint's Target URL and the Secret Key, which you can use to call the deployment and generate completions.
- You can always find the endpoint's details, URL, and access keys by navigating to the Build tab and selecting Deployments from the Components section.
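Once you have the Target URL and Secret Key from the Deployments page, you can call the deployment programmatically. The sketch below uses only the Python standard library; the endpoint URL, key, and exact payload shape are assumptions (the serverless deployments expose an OpenAI-style chat completions route, and some deployments expect an `api-key` header instead of a bearer token), so adjust them to match your deployment's details.

```python
import json
import urllib.request

# Hypothetical values -- replace with the Target URL and Secret Key
# shown on your deployment's Details page.
ENDPOINT = "https://your-deployment.eastus2.models.ai.azure.com/v1/chat/completions"
API_KEY = "<your-secret-key>"


def build_chat_request(prompt, max_tokens=256, temperature=0.7):
    """Build the JSON body for an OpenAI-style chat completion call."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def chat(prompt):
    """POST the prompt to the serverless endpoint and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Key-based auth; some deployments use an "api-key" header instead.
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
    return result["choices"][0]["message"]["content"]


# Usage (requires a live deployment):
# print(chat("Explain what a small language model is in one sentence."))
```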
Note
Please note that your account must have the Azure AI Developer role permissions on the Resource Group to perform these steps.
You can send a GET request from Postman to https://{Your project name}.region.inference.ml.azure.com/swagger.json, together with your key, to discover the interfaces the deployment provides, such as the score API.
From the Swagger document you can conveniently read both the request parameters and the response parameters. This is the Postman result:
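The same Swagger inspection can be scripted instead of done in Postman. This is a minimal sketch using the Python standard library; the host, key, and auth header are placeholder assumptions, and the helper names are hypothetical:

```python
import json
import urllib.request

# Hypothetical values -- replace with your project's inference host and key.
BASE_URL = "https://your-project.eastus2.inference.ml.azure.com"
API_KEY = "<your-secret-key>"


def build_headers(api_key):
    """Key-based auth headers for the endpoint (assumed bearer-token scheme)."""
    return {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}


def list_endpoints(swagger):
    """Return the API paths advertised by a Swagger/OpenAPI document."""
    return sorted(swagger.get("paths", {}).keys())


def fetch_swagger(base_url, api_key):
    """GET swagger.json from the deployment, as done with Postman above."""
    req = urllib.request.Request(
        f"{base_url}/swagger.json", headers=build_headers(api_key)
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (requires a live deployment):
# swagger = fetch_swagger(BASE_URL, API_KEY)
# print(list_endpoints(swagger))
```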