Add files via upload
cetrarom2 authored Oct 21, 2024
1 parent 8d7c913 commit 639e286
Showing 1 changed file with 24 additions and 136 deletions.
160 changes: 24 additions & 136 deletions notebooks/GenAI/GCP_RAG_for_Structure_Data.ipynb
@@ -33,7 +33,7 @@
"source": [
"Generative AI (GenAI) represents a transformative technology capable of producing human-like text, images, code, and various other types of content. While much of the focus has been on unstructured data—such as PDFs, text documents, image files, and websites—many GenAI implementations rely on a parameter called \"top K.\" This algorithm retrieves only the highest-scoring pieces of content relevant to a user's query, which can be limiting. Users seeking insights from structured data formats like CSV and JSON often require access to all relevant occurrences, rather than just a subset.\n",
"\n",
"In this tutorial, we present a technique that leverages SQL databases. By formulating a query based on the user's request, the model submits this query to the database, providing comprehensive results. This approach not only ensures users receive all pertinent information but also reduces the likelihood of exceeding token limits."
"In this tutorial, we use a RAG agent together with a VertexAI chat model gemini-1.5-pro to query the BigQuery table. A Retrieval Augmented Generation (RAG) agent is a key part of a RAG application that enhances the capabilities of the large language models (LLMs) by integrating external data retrieval. AI agents empower LLMs to interact with the world through actions and tools."
]
},
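To make this concrete before introducing the agent, here is a minimal, hand-rolled sketch of the same loop: a chat model turns the user's natural-language question into a SQL query, and the BigQuery client runs that query and returns every matching row. The model name, table reference, column names, and prompt wording below are illustrative assumptions, and depending on your LangChain version the import may instead be `from langchain.chat_models import ChatVertexAI`; the SQL agent used later in this notebook automates these steps (including schema inspection).

```python
# Minimal sketch of "the model writes the SQL, BigQuery returns all matching rows".
# Assumptions: gemini-1.5-pro is available in your project/region and
# `my-project.my_dataset.screening` is the table you want to query.
from google.cloud import bigquery
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-1.5-pro", temperature=0)
client = bigquery.Client()

question = "How many screenings had a positive result?"
prompt = (
    "Write a single BigQuery Standard SQL query (no explanation, no code fences) "
    f"that answers: {question}\n"
    "The table is `my-project.my_dataset.screening` with columns id, result, age."
)

sql = llm.invoke(prompt).content.strip()   # the model drafts the SQL
rows = client.query(sql).result()          # BigQuery executes it and returns every row
for row in rows:
    print(dict(row))
```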
{
@@ -53,7 +53,9 @@
"\n",
"In this tutorial, we will be using Google Gemini Pro 1.5, which does not require deployment. However, if you prefer to use a different model, you can select one from the Model Garden via the console. This will allow you to add the model to your registry, create an endpoint (or utilize an existing one), and deploy the model—all in a single step. Here is a link for more information: [model deployment](https://cloud.google.com/vertex-ai/docs/general/deployment).\n",
"\n",
"Before we begin, you'll need to create a Vertex AI RAG Data Service Agent service account. To do this, go to the IAM section of the console. Ensure you check the box for \"Include Google-provided role grant.\" If the role is not listed, click \"Grant Access\" and add \"Vertex AI RAG Data Service Agent\" as a role."
"Before we begin, you'll need to create a Vertex AI RAG Data Service Agent service account. To do this, go to the IAM section of the console. Ensure you check the box for \"Include Google-provided role grant.\" If the role is not listed, click \"Grant Access\" and add \"Vertex AI RAG Data Service Agent\" as a role.\n",
"\n",
"We are using a e2-standard-4 (Efficient Instance: 4 vCPUs, 16 GB RAM) for this tutorial. "
]
},
{
@@ -74,7 +76,7 @@
"In this tutorial you will learn about:\n",
"- How to set up a BigQuery dataset and a table.\n",
"- How to load data to a BigQuery table.\n",
"- How to use the langchain ChatVertexAI agent to extract information froim the table. \n"
"- How to use the langchain ChatVertexAI agent to extract information from the table. \n"
]
},
{
@@ -92,11 +94,7 @@
"id": "0dca31b2-880d-4cbc-bf76-ede96998e371",
"metadata": {},
"source": [
"<ul>\n",
" <li>BigQuery Standard 1TB Storage.</li>\n",
" <li>Model for Chat: 5 request per day. Average Input and Output characters 5,000.</li>\n",
" <li>Total $23.89 per month.</li>\n",
"</ul>"
"If you are following this tutorial in one sitting it will cost $23.89 per month. Completing the process in multiple sessions or using a method different from the tutorial may result in increased costs."
]
},
{
@@ -176,7 +174,7 @@
"id": "5781a07c-84ed-416c-a782-b1dc881e5c18",
"metadata": {},
"source": [
"Provide the names of the bucket name, source file name path, and destination blob name to upload the CSV source file to the bucket."
"Once the bucket is created, we need to access the CSV source file. In this tutorial, I transferred the data file to our Jupyter notebook by simply dragging and dropping it from my local folder. Next, we need to specify the bucket name and the path of the data source in order to upload the CSV file to the bucket. It is important to keep in mind that the name of the bucket has to be unique."
]
},
{
@@ -186,25 +184,7 @@
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import storage\n",
"\n",
"# The ID of your GCS bucket\n",
"bucket_name = bucket\n",
" # The path to your file to upload\n",
"source_file_name = \"<SOURCE FILE NAME>\"\n",
" # The ID of your GCS object\n",
"destination_blob_name = \"<DESTINATION BLOB NAME>\"\n",
"\n",
"storage_client = storage.Client()\n",
"bucket = storage_client.bucket(bucket_name)\n",
"blob = bucket.blob(destination_blob_name)\n",
"blob.upload_from_filename(source_file_name)\n",
"\n",
"print(\n",
" \"File {} uploaded to {}.\".format(\n",
" source_file_name, destination_blob_name\n",
" )\n",
")\n"
"!gsutil cp '<Path to your csv>' gs://{bucket}"
]
},
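A quick way to confirm the upload landed in the right place (still assuming the bucket name is stored in the `bucket` variable) is to list the bucket contents from a notebook cell:

```python
# List the objects in the bucket to confirm the CSV file arrived.
!gsutil ls gs://{bucket}
```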
{
@@ -226,8 +206,9 @@
"\n",
"client = bigquery.Client()\n",
"\n",
"dataset_name = '<Name of the Dataset>'\n",
"# Set dataset_id to the ID of the dataset to create.\n",
"dataset_id = project_id.'<Name of the Dataset>'\n",
"dataset_id = f'{project_id}.{dataset_name}'\n",
"\n",
"# Construct a full Dataset object to send to the API.\n",
"dataset = bigquery.Dataset(dataset_id)\n",
@@ -261,7 +242,8 @@
"\n",
"client = bigquery.Client()\n",
"\n",
"table_id = project_id.dataset_id.\"<Table Name>\"\n",
"table_name = \"<Table Name>\"\n",
"table_id = f'{project_id}.{dataset_id}.{table_name}'\n",
"\n",
"schema = [\n",
" bigquery.SchemaField(\"id\", \"INTEGER\", mode=\"NULLABLE\"),\n",
@@ -319,105 +301,34 @@
},
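The collapsed lines above contain the rest of this step; purely as a reference, a minimal sketch of creating the dataset and then the table from the schema defined above might look like the following. The variable names are the ones used earlier in this notebook, and `exists_ok=True` is an added assumption so that re-running the cell is harmless.

```python
# Minimal sketch: create the dataset, then the table using the schema defined above.
# dataset_id, table_id, and schema are assumed to be the variables from the previous cells.
from google.cloud import bigquery

client = bigquery.Client()

dataset = bigquery.Dataset(dataset_id)
dataset.location = "US"                         # choose the location that fits your project
client.create_dataset(dataset, exists_ok=True)  # no-op if the dataset already exists

table = bigquery.Table(table_id, schema=schema)
client.create_table(table, exists_ok=True)      # no-op if the table already exists

print(f"Created {dataset_id} and {table_id}")
```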
{
"cell_type": "markdown",
"id": "dad4f96d-ae52-43f2-8290-c04b25960ea8",
"id": "ad42d610-daa6-4d92-ac14-2d90349dafe1",
"metadata": {},
"source": [
"We can perform a similar check to confirm that the table we created is located within the appropriate dataset."
"We need to use the \"to_gbq\" function from the pandas-gbq library, which allows us to write a Pandas DataFrame to a Google BigQuery table. This enables us to populate our BigQuery table with the data from the DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ea826c-9d70-4268-b4db-a657d5cdcaf0",
"id": "7cfd1d66-a0fb-403d-a9b1-27250cc81f91",
"metadata": {},
"outputs": [],
"source": [
"# Set dataset_id to the ID of the dataset to fetch.\n",
"\n",
"dataset = client.get_dataset(dataset_id) # Make an API request.\n",
"\n",
"full_dataset_id = \"{}.{}\".format(dataset.project, dataset.dataset_id)\n",
"friendly_name = dataset.friendly_name\n",
"print(\n",
" \"Got dataset '{}' with friendly_name '{}'.\".format(\n",
" full_dataset_id, friendly_name\n",
" )\n",
")\n",
"\n",
"# View dataset properties.\n",
"print(\"Description: {}\".format(dataset.description))\n",
"print(\"Labels:\")\n",
"labels = dataset.labels\n",
"if labels:\n",
" for label, value in labels.items():\n",
" print(\"\\t{}: {}\".format(label, value))\n",
"else:\n",
" print(\"\\tDataset has no labels defined.\")\n",
"df.to_gbq(table_id, project_id=project_id)\n",
"\n",
"# View tables in dataset.\n",
"print(\"Tables:\")\n",
"tables = list(client.list_tables(dataset)) # Make an API request(s).\n",
"if tables:\n",
" for table in tables:\n",
" print(\"\\t{}\".format(table.table_id))\n",
"else:\n",
" print(\"\\tThis dataset does not contain any tables.\")"
]
},
{
"cell_type": "markdown",
"id": "736080b9-0cd9-44fd-9580-dadb10a22a59",
"metadata": {
"tags": []
},
"source": [
"To upload the data from our file stored in the bucket, we first need to create a DataFrame. In this example, we will be using a file that contains health screening data."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b614cc1a-b318-4364-8d47-1dea9700c792",
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import storage\n",
"import pandas as pd\n",
"from io import StringIO\n",
"\n",
"# Initialize a client\n",
"client = storage.Client()\n",
"my_bucket = bucket\n",
"storage_client = storage.Client()\n",
"bucket = storage_client.get_bucket(my_bucket)\n",
"blob = bucket.blob(destination_blob_name)\n",
"path = \"<gsutil URI>\"\n",
"df = pd.read_csv(path)\n",
"df.head()"
"client.load_table_from_dataframe(df, table_id).result()"
]
},
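Note that `to_gbq` and `load_table_from_dataframe` are two routes to the same result, so only one of them is needed; running both will load the data twice. A minimal sketch using just the BigQuery client is shown below, with an explicit `WRITE_TRUNCATE` write disposition (an assumption you may prefer to change to `WRITE_APPEND`) so that re-running the cell replaces rather than duplicates the rows.

```python
# Load the DataFrame into the BigQuery table with an explicit write disposition.
# df and table_id are assumed to be the objects defined in the cells above.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")

job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
job.result()  # wait for the load job to finish

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```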
{
"cell_type": "markdown",
"id": "ad42d610-daa6-4d92-ac14-2d90349dafe1",
"id": "4b16a292-afdc-415c-a8a7-071bc90281c3",
"metadata": {},
"source": [
"We need to use the \"to_gbq\" function from the pandas-gbq library, which allows us to write a Pandas DataFrame to a Google BigQuery table. This enables us to populate our BigQuery table with the data from the DataFrame."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7cfd1d66-a0fb-403d-a9b1-27250cc81f91",
"metadata": {},
"outputs": [],
"source": [
"df.to_gbq(table_id, project_id=project_id)\n",
"\n",
"from google.cloud import storage\n",
"import pandas as pd\n",
"from io import StringIO\n",
"\n",
"client.load_table_from_dataframe(df, table_id).result()"
"![image.png](attachment:974666d1-25e5-47d9-9800-e82c434686b0.png)"
]
},
{
@@ -473,7 +384,7 @@
"id": "b6fb53b8-9f93-4f37-8730-33c5871720fc",
"metadata": {},
"source": [
"To interact with the BigQuery table using a Pythonic domain language, we utilize SQLAlchemy. SQLAlchemy is a Python SQL toolkit that enables developers to access and manage SQL databases, allowing users to write queries as strings or chain Python objects for similar queries. However, to do this, we need specific credentials, which can be accessed through a service account file. For more information on how to create a service account file, please visit the following link: [Service Account Creation](https://cloud.google.com/iam/docs/service-accounts-create#python)"
"To interact with the BigQuery table using a Pythonic domain language, we utilize SQLAlchemy. SQLAlchemy is a Python SQL toolkit that enables developers to access and manage SQL databases, allowing users to write queries as strings or chain Python objects for similar queries. "
]
},
{
@@ -492,11 +403,8 @@
"from langchain.agents import create_sql_agent\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"from langchain.sql_database import SQLDatabase\n",
"service_account_file = \"<Service_Account_File>\" # Change to where your service account key file is located\n",
"project_id = project_id\n",
"dataset = \"<Name of the Dataset>\"\n",
"table = \"<Table Name>\"\n",
"sqlalchemy_url = f'bigquery://{project_id}/{dataset}?credentials_path={service_account_file}'\n",
"\n",
"sqlalchemy_url = f'bigquery://{project_id}/{dataset}'\n",
"print(sqlalchemy_url)"
]
},
@@ -505,7 +413,7 @@
"id": "9194cc1d-7ef1-4c4e-8e65-e7d9ada5be24",
"metadata": {},
"source": [
"Next, we import the __ChatVertexAI__ agent from Langchain and configure it with the appropriate hyperparameters. In this instance, we are using the __gemini-1.5-pro__ LLM model, after which we create the SQL agent to enable querying the BigQuery table using a natural language string as a prompt."
"Next, we import the __ChatVertexAI__ agent from Langchain and configure it with the appropriate hyperparameters. In this instance, we are using the __gemini-1.5-pro__ LLM model, after which we create the SQL agent to enable querying the BigQuery table using a natural language string as a prompt. Temperature regulates randomness, with higher temperatures resulting in more varied and unpredictable outputs. Top-k sampling selects from the k most probable next tokens at each step, where a lower k emphasizes higher-probability tokens. The max tokens hyperparameter specifies the maximum number of tokens in the response from the large language model. Max retries indicates how many responses we will receive from the model."
]
},
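A minimal sketch of wiring these pieces together is shown below. The exact import path for `ChatVertexAI` depends on your LangChain version (it may instead be `from langchain.chat_models import ChatVertexAI`), and the hyperparameter values are illustrative assumptions rather than the notebook's exact settings.

```python
# Minimal sketch: build a SQL agent over the BigQuery-backed SQLAlchemy URL.
# sqlalchemy_url is assumed to be the string constructed in the previous cell.
from langchain_google_vertexai import ChatVertexAI
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

llm = ChatVertexAI(
    model_name="gemini-1.5-pro",
    temperature=0,            # low randomness -> more deterministic SQL
    top_k=40,                 # sample from the 40 most probable next tokens
    max_output_tokens=1024,   # cap on the length of each response
    max_retries=2,            # retry a failed API call up to twice
)

db = SQLDatabase.from_uri(sqlalchemy_url)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
result = agent_executor.invoke({"input": "How many rows are in the table screening?"})
print(result["output"])
```

The agent inspects the table schema, drafts a SQL query, executes it through the toolkit, and returns the answer in natural language.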
{
@@ -561,26 +469,6 @@
"'The table screening has 139920 rows.'"
]
},
{
"cell_type": "markdown",
"id": "65f645ed-8efa-436f-b993-72c45a7aadbe",
"metadata": {
"tags": []
},
"source": [
"We know the output of the model are correct. The image below is a reminder of what the actual results are calculated from."
]
},
{
"cell_type": "markdown",
"id": "8160ee6e-f95f-44c3-ab27-4415b44ba461",
"metadata": {
"tags": []
},
"source": [
"![image.png](../../images/gcp_rag_structure_data_01.png)"
]
},
{
"cell_type": "markdown",
"id": "f2b62012-b2d2-48c1-ae69-6f8816065551",
@@ -668,7 +556,7 @@
"id": "1f0edc0b-f2cc-478c-855f-9acff3792f4d",
"metadata": {},
"source": [
"Please remember to delete or stop your Jupyter notebook and delete your data store to prevent incurring charges. And if you have created any other services like buckets, please remember to delete them as well."
"Please remember to delete or stop your Jupyter notebook and delete your BigQuery dataset and table to prevent incurring charges. And if you have created any other services like buckets, please remember to delete them as well."
]
},
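If you prefer to do the clean-up from a notebook cell, a sketch along these lines should cover the resources created in this tutorial; `project_id`, `dataset_name`, and `bucket` are assumed to be the variables defined earlier, and the `-f`/`-r` flags skip confirmation and delete recursively.

```python
# Delete the BigQuery dataset (and every table in it), then the bucket and its contents.
!bq rm -r -f --dataset {project_id}:{dataset_name}
!gsutil rm -r gs://{bucket}
```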
{
