diff --git a/docs/api/overview.md b/docs/api/overview.md index 679c5ec..120ea9f 100644 --- a/docs/api/overview.md +++ b/docs/api/overview.md @@ -43,7 +43,7 @@ response = client.generate_text( print(response) ``` -For more detailed usage instructions and examples, please refer to the [Usage Guide](../usage.md) (🚧 Under Construction, come back soon 🚧). +For more detailed usage instructions and examples, please refer to the [Usage Guide](../usage/overview.md) (🚧 Under Construction, come back soon 🚧). ## Extending ClientAI diff --git a/docs/examples/ai_dungeon_master.md b/docs/examples/ai_dungeon_master.md new file mode 100644 index 0000000..c08f490 --- /dev/null +++ b/docs/examples/ai_dungeon_master.md @@ -0,0 +1,845 @@ +# ClientAI Tutorial: Building an AI Dungeon Master + +In this tutorial, we'll walk through the process of creating an AI-powered Dungeon Master using the ClientAI package. We'll explain each concept in detail and build our game step-by-step, providing context for every decision we make, both technical and gameplay-related. + +## Table of Contents + +1. [Introduction](#1-introduction) +2. [Setting Up the Project](#2-setting-up-the-project)
+2.5 [Creating the Project Structure](#25-creating-the-project-structure)
+3. [Creating the Game Structure](#3-creating-the-game-structure)
+4. [Integrating Multiple AI Providers](#4-integrating-multiple-ai-providers)
+5. [Developing the Enhanced AI Dungeon Master](#5-developing-the-enhanced-ai-dungeon-master)
+6. [Main Script that Runs the Game](#6-main-script-that-runs-the-game)
+7. [Running the Game](#7-running-the-game)
+8. [Conclusion and Further Improvements](#8-conclusion-and-further-improvements)
+
+## 1. Introduction
+
+ClientAI is a Python package that provides a unified interface for interacting with multiple AI providers. In this tutorial, we'll use ClientAI to create an AI Dungeon Master that can generate story elements, NPC dialogues, and dynamic environments using different AI models.
+
+Our AI Dungeon Master will be a text-based role-playing game (RPG) where the game's content is dynamically generated by AI. This approach allows for infinite replayability and unique experiences for each player.
+
+We'll focus on explaining both technical decisions (such as class structures and AI interactions) and gameplay decisions (like character creation and game mechanics).
+
+The final result is available in [this GitHub repo](https://github.com/igorbenav/clientai-dungeon-master).
+
+## 2. Setting Up the Project
+
+First, let's set up our project and install the necessary dependencies.
+
+1. Create a new directory for your project:
+
+```bash
+mkdir ai_dungeon_master
+cd ai_dungeon_master
+```
+
+2. Install ClientAI and its dependencies:
+
+   _If you want to use Poetry, you may skip this part._
+
+```bash
+pip install clientai[all]
+```
+
+This command installs ClientAI with support for all providers. If you only need specific providers, you can install them individually (e.g., `pip install clientai[openai]` for just OpenAI support).
+
+3. Install additional dependencies:
+
+   _If you want to use Poetry, you may also skip this part._
+
+We'll need some additional packages for our project.
+
+```bash
+pip install requests
+```
+
+- `requests`: For making HTTP requests to check if the local AI servers are running.
+
+4. Install Ollama:
+
+Ollama is a local AI model server that we'll use to run the Llama 3 model. Follow these steps to install Ollama:
+
+- For macOS or Linux:
+  ```bash
+  curl -fsSL https://ollama.com/install.sh | sh
+  ```
+
+- For Windows:
+  Download the installer from the [Ollama GitHub releases page](https://github.com/ollama/ollama/releases) and follow the installation instructions.
+
+5. Pull the Llama 3 model from Ollama:
+
+After installing Ollama, you need to download the Llama 3 model. Run the following command:
+
+```bash
+ollama pull llama3
+```
+
+This command will download and set up the Llama 3 model for use with Ollama. The download might take some time depending on your internet connection.
+
+The following modules and packages will be used throughout our project:
+
+- `random`: For generating random numbers and making random choices.
+- `subprocess`: For starting and managing subprocesses like local AI servers.
+- `time`: For adding delays and managing timeouts.
+- `requests`: For making HTTP requests to check server availability.
+- `logging`: For logging information and errors.
+- `ClientAI`: The main class from the ClientAI package that we'll use to interact with AI providers.
+
+## 2.5 Creating the Project Structure
+
+Before we dive into the code, let's set up a proper project structure.
This will help us organize our code and make it easier to maintain and expand in the future.
+
+1. Create the following directory structure:
+
+```
+clientai_dungeon_master/
+├── pyproject.toml
+├── README.md
+├── .gitignore
+├── .env
+└── ai_dungeon_master/
+    ├── __init__.py
+    ├── main.py
+    ├── game/
+    │   ├── __init__.py
+    │   ├── character.py
+    │   ├── game_state.py
+    │   └── dungeon_master.py
+    ├── ai/
+    │   ├── __init__.py
+    │   ├── ai_providers.py
+    │   └── ollama_server.py
+    └── utils/
+        ├── __init__.py
+        └── text_utils.py
+```
+
+2. Create a `pyproject.toml` file in the root directory with the following content:
+
+   _If you're using pip directly, you may skip this part._
+
+```toml
+[tool.poetry]
+name = "clientai-dungeon-master"
+version = "0.1.0"
+description = "An AI-powered dungeon master for text-based RPG adventures"
+authors = ["Your Name <your.email@example.com>"]
+readme = "README.md"
+packages = [{include = "clientai_dungeon_master"}]
+
+[tool.poetry.dependencies]
+python = "^3.11"
+clientai = "^0.1.2"
+requests = "^2.32.3"
+python-decouple = "^3.8"
+
+[build-system]
+requires = ["poetry-core"]
+build-backend = "poetry.core.masonry.api"
+```
+
+and run:
+
+```bash
+poetry install
+```
+
+3. Create a `.gitignore` file in the root directory with the following content:
+
+```
+# Python
+__pycache__/
+*.py[cod]
+*.pyo
+*.pyd
+.Python
+env/
+venv/
+ENV/
+
+# Poetry
+.venv/
+dist/
+
+# Environment variables
+.env
+
+# IDEs
+.vscode/
+.idea/
+
+# Logs
+*.log
+
+# OS generated files
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db
+```
+
+4. Create a `.env` file in the root directory to store your API keys:
+
+```
+OPENAI_API_KEY=your_openai_api_key_here
+REPLICATE_API_KEY=your_replicate_api_key_here
+```
+
+Remember to replace `your_openai_api_key_here` and `your_replicate_api_key_here` with your actual API keys.
+
+5. Move the relevant code into the appropriate files based on the new structure.
+
+This structure separates concerns, making the code more modular and easier to maintain. It also sets up the project for potential future expansion, such as adding more game features or integrating additional AI providers.
+
+## 3. Creating the Game Structure
+
+Before integrating AI, we'll create the basic structure of our game. This includes classes to represent the character, game state, and AI providers.
+
+### Character Class
+
+The `Character` class represents the player's character in the game. It stores essential character information like name, race, class, background story, and stats.
+
+```python title="ai_dungeon_master/game/character.py"
+class Character:
+    def __init__(self, name: str, race: str, class_type: str, background: str, stats: dict):
+        self.name = name
+        self.race = race
+        self.class_type = class_type
+        self.background = background
+        self.stats = stats
+
+    def __str__(self):
+        return f"Name: {self.name}, Race: {self.race}, Class: {self.class_type}, Background: {self.background}, Stats: {self.stats}"
+```
+
+Here we define a character with attributes like a name, race, class, background, and stats (like Strength, Intelligence, and Wisdom). This is deliberately simple, but it's enough to personalize what happens in the story.
+
+We also define the `__str__` method so we can print the character's details easily.
+
+### GameState Class
+
+The `GameState` class keeps track of the game's current state, including the character's status, location, inventory, health, experience, and quests.
+
+```python title="ai_dungeon_master/game/game_state.py"
+from typing import Optional
+
+from .character import Character
+
+class GameState:
+    def __init__(self, character: Character):
+        self.character = character
+        self.location = "entrance"
+        self.inventory = []
+        self.health = 100
+        self.experience = 0
+        self.quests = []
+
+    def update(self, location: Optional[str] = None, item: Optional[str] = None, health_change: int = 0, exp_gain: int = 0, quest: Optional[str] = None):
+        if location:
+            self.location = location
+        if item:
+            self.inventory.append(item)
+        self.health = max(0, min(100, self.health + health_change))
+        self.experience += exp_gain
+        if quest:
+            self.quests.append(quest)
+
+    def __str__(self):
+        return f"{str(self.character)}\nLocation: {self.location}, Health: {self.health}, XP: {self.experience}, Inventory: {', '.join(self.inventory)}, Quests: {', '.join(self.quests)}"
+```
+
+We track the state explicitly to keep the experience consistent; we can't rely on the LLM to generate it reliably on its own. Instead, we pass the game state to the model as a guide when generating content.
+
+The `update` method makes it easy to modify the game state: health is clamped to the 0-100 range, while the inventory and quest lists add more depth to the game.
+
+## 4. Integrating Multiple AI Providers
+
+We'll use ClientAI to create a class that manages interactions with different AI providers. This abstraction allows us to switch between providers seamlessly.
+
+### AIProviders Class
+
+```python title="ai_dungeon_master/ai/ai_providers.py"
+from typing import List
+
+from clientai import ClientAI
+
+class AIProviders:
+    def __init__(self):
+        self.openai = ClientAI('openai', api_key=openai_token)
+        self.replicate = ClientAI('replicate', api_key=replicate_token)
+        self.ollama = ClientAI('ollama', host="http://localhost:11434")
+
+    def chat(
+        self,
+        messages: List[dict],
+        provider: str = 'openai',
+        openai_model="gpt-4o-mini",
+        replicate_model="meta/meta-llama-3-8b-instruct",
+        ollama_model="llama3",
+    ):
+        if provider == 'openai':
+            return self.openai.chat(messages, model=openai_model, stream=True)
+        elif provider == 'replicate':
+            return self.replicate.chat(messages, model=replicate_model, stream=True)
+        elif provider == 'ollama':
+            return self.ollama.chat(messages, model=ollama_model, stream=True)
+        else:
+            raise ValueError(f"Unknown provider: {provider}")
+```
+
+We create instances of ClientAI for each provider with the necessary API keys or host information, then abstract the chat method to allow for easy switching between AI providers.
+
+We use ClientAI to access multiple AI models from different providers because we want to find the best model for each task while balancing performance and cost.
+
+### Managing API Keys with python-decouple and a .env File
+
+To securely handle your API keys without exposing them in your codebase, you can use the python-decouple package and store your keys in a .env file. This approach keeps sensitive information out of your code and version control.
+
+1. Install python-decouple:
+   _You may skip this if you used Poetry._
+
+   ```bash
+   pip install python-decouple
+   ```
+
+2. Create a .env File:
+   In your project's root directory, make sure the `.env` file contains your API keys:
+
+   ```plaintext
+   OPENAI_API_KEY=your_openai_api_key_here
+   REPLICATE_API_KEY=your_replicate_api_key_here
+   ```
+
+   Replace `your_openai_api_key_here` and `your_replicate_api_key_here` with your actual API keys.
+
+3. Ensure .env is added to .gitignore:
+   To prevent the .env file from being tracked by version control, ensure it is in your .gitignore file:
+
+   ```plaintext
+   # .gitignore
+   .env
+   ```
+
+   This ensures your API keys remain private and aren't pushed to repositories like GitHub.
+
+4. Access the API Keys in Your Code:
+   Import `config` from decouple and retrieve the API keys:
+
+   ```python title="ai_dungeon_master/ai/ai_providers.py"
+   from decouple import config
+
+   openai_token = config('OPENAI_API_KEY')
+   replicate_token = config('REPLICATE_API_KEY')
+   ```
+
+   Now, you can use these variables when initializing your AI providers.
+
+5. Update the AIProviders Class:
+   ```python title="ai_dungeon_master/ai/ai_providers.py"
+   from typing import List
+
+   from clientai import ClientAI
+   from decouple import config
+
+   openai_token = config('OPENAI_API_KEY')
+   replicate_token = config('REPLICATE_API_KEY')
+
+   class AIProviders:
+       def __init__(self):
+           self.openai = ClientAI('openai', api_key=openai_token)
+           self.replicate = ClientAI('replicate', api_key=replicate_token)
+           self.ollama = ClientAI('ollama', host="http://localhost:11434")
+
+       ...
+   ```
+
+### Managing AI Servers
+
+We need to ensure that local AI servers (like Ollama) are running before the game starts, so let's define a function to start Ollama.
+
+```python title="ai_dungeon_master/ai/ollama_server.py"
+import subprocess
+import time
+import requests
+import logging
+
+logging.basicConfig(level=logging.INFO)
+
+def start_ollama_server(timeout: int = 30, check_interval: float = 1.0):
+    """
+    Start the Ollama server and wait for it to be ready.
+    """
+    logging.info("Starting Ollama server...")
+
+    try:
+        process = subprocess.Popen(
+            ['ollama', 'serve'],
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE,
+            text=True
+        )
+    except subprocess.SubprocessError as e:
+        logging.error(f"Failed to start Ollama process: {e}")
+        raise
+
+    start_time = time.time()
+    while time.time() - start_time < timeout:
+        try:
+            response = requests.get('http://localhost:11434', timeout=5)
+            if response.status_code == 200:
+                logging.info("Ollama server is ready.")
+                return process
+        except requests.ConnectionError:
+            pass
+        except requests.RequestException as e:
+            logging.error(f"Unexpected error when checking Ollama server: {e}")
+            process.terminate()
+            raise
+
+        if process.poll() is not None:
+            stdout, stderr = process.communicate()
+            logging.error(f"Ollama process terminated unexpectedly. stdout: {stdout}, stderr: {stderr}")
+            raise subprocess.SubprocessError("Ollama process terminated unexpectedly")
+
+        time.sleep(check_interval)
+
+    process.terminate()
+    raise TimeoutError(f"Ollama server did not start within {timeout} seconds")
+```
+
+By managing the server startup within the code, we reduce the setup burden on the player.
+
+## 5. Developing the Enhanced AI Dungeon Master
+
+Now we'll develop the main class that controls the game logic and interactions with AI models.
+
+### EnhancedAIDungeonMaster Class
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+from typing import List, Tuple, Union
+import random
+import time
+
+from ai.ai_providers import AIProviders
+from utils.text_utils import print_separator
+from game.character import Character
+from game.game_state import GameState
+
+class EnhancedAIDungeonMaster:
+    def __init__(self):
+        self.ai = AIProviders()
+        self.conversation_history = []
+        self.game_state = None
+
+    # Methods will be added here...
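+    # (In the following sections we'll fill this in with: create_character,
+    # add_to_history, generate_environment, handle_player_action,
+    # generate_npc_dialogue, handle_conversation, update_game_state,
+    # process_story, print_stream, and play_game.)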
+```
+
+### Creating the Character
+
+We need a method to create the player's character. We'll use AI to do this automatically for us:
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def create_character(self):
+        print("Let's create your character!")
+        name = input("What is your character's name? ")
+
+        # We start by defining a prompt
+        character_prompt = f"""
+        Create a character for a fantasy RPG with the following details:
+        Name: {name}
+
+        Please provide:
+        1. A suitable race (e.g., Human, Elf, Dwarf, etc.)
+        2. A class (e.g., Warrior, Mage, Rogue, etc.)
+        3. A brief background story (2-3 sentences)
+        4. Basic stats (Strength, Dexterity, Constitution, Intelligence, Wisdom, Charisma) on a scale of 1-20
+
+        Format the response as follows:
+        Race: [race]
+        Class: [class]
+        Background: [background story]
+        Stats:
+        - Strength: [value]
+        - Dexterity: [value]
+        - Constitution: [value]
+        - Intelligence: [value]
+        - Wisdom: [value]
+        - Charisma: [value]
+        """
+
+        # And we add this prompt to our chat history
+        self.add_to_history("user", character_prompt)
+        character_info = self.print_stream(self.ai.chat(self.conversation_history, provider='openai'))
+
+        # Parse the character info
+        lines = character_info.strip().split('\n')
+        race = class_type = background = ""
+        stats = {}
+
+        for line in lines:
+            if line.startswith("Race:"):
+                race = line.split(": ", 1)[1].strip()
+            elif line.startswith("Class:"):
+                class_type = line.split(": ", 1)[1].strip()
+            elif line.startswith("Background:"):
+                background = line.split(": ", 1)[1].strip()
+            elif ":" in line and not line.startswith("Stats:"):
+                key, value = line.split(":", 1)
+                key = key.strip("- ")
+                try:
+                    stats[key] = int(value.strip())
+                except ValueError:
+                    stats[key] = random.randint(1, 20)
+
+        # Just in case, let's ensure the player has stats
+        # If any stat is missing, assign a random value
+        for stat in ["Strength", "Dexterity", "Constitution", "Intelligence", "Wisdom", "Charisma"]:
+            if stat not in stats:
+                stats[stat] = random.randint(1, 20)
+
+        # And let's also ensure other required attributes are assigned
+        # If race, class, or background is empty, assign default values
+        race = race or "Human"
+        class_type = class_type or "Adventurer"
+        background = background or "A mysterious traveler with an unknown past."
+
+        return Character(name, race, class_type, background, stats)
+```
+
+We'll use GPT-4o mini to generate the initial details we need (race, class, background, and so on), then parse the generated content defensively so that malformed output doesn't break character creation.
+
+Note that since we are leaving this information to the LLM, the name will influence the attributes. If you need more consistently random generation, do it in the Python code and just pass the result to the prompt.
+
+### Maintaining Conversation History
+
+To provide context to the AI, we maintain a conversation history.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def add_to_history(self, role: str, content: str):
+        if not self.conversation_history or self.conversation_history[-1]['content'] != content:
+            self.conversation_history.append({"role": role, "content": content})
+        if len(self.conversation_history) > 10:
+            self.conversation_history = self.conversation_history[-10:]
+```
+
+Here we make sure we don't add the same message twice, and we limit the conversation history to the 10 most recent messages to avoid exceeding token limits.
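+
+If you later need tighter control than a fixed message count, you could trim by an approximate token budget instead. Here's a minimal sketch of that idea (it's not part of the tutorial code, and the four-characters-per-token ratio is a rough heuristic, not an exact count):
+
+```python
+def trim_history_by_tokens(history: list, max_tokens: int = 2000) -> list:
+    """Keep the most recent messages whose rough token estimate fits the budget."""
+    trimmed = []
+    total = 0
+    for message in reversed(history):
+        # Rough heuristic: ~4 characters per token for English text.
+        estimate = len(message["content"]) // 4 + 1
+        if total + estimate > max_tokens:
+            break
+        trimmed.insert(0, message)
+        total += estimate
+    return trimmed
+```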
+
+### Generating the Environment
+
+Next, let's create detailed environments to enhance the immersion.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def generate_environment(self):
+        if not hasattr(self, 'current_environment'):
+            prompt = f"""
+            The character {self.game_state.character.name} is a {self.game_state.character.race} {self.game_state.character.class_type}
+            currently in the {self.game_state.location}.
+
+            Describe the current environment in detail, focusing on:
+            1. The physical setting and atmosphere
+            2. Any notable NPCs present
+            3. Interesting objects or features
+
+            Do not create a new character or change any existing character details.
+            Do not include any actions or dialogue for {self.game_state.character.name}.
+
+            End your description with one of these tags if appropriate:
+            [INTERACT_OPPORTUNITY] - if there's a chance for the player to interact with someone or something
+            [QUEST_OPPORTUNITY] - if there's a potential quest or mission available
+            """
+            self.add_to_history("user", prompt)
+            self.current_environment = self.ai.chat(self.conversation_history, provider='openai')
+        return self.current_environment
+```
+
+Here we instruct the AI to provide specific details, and we use tags to mark opportunities. We'll parse the `INTERACT_OPPORTUNITY` and `QUEST_OPPORTUNITY` tags later to perform other actions.
+
+We also store the environment description to avoid regenerating it unnecessarily.
+
+### Handling Player Actions
+
+Now let's process the player's actions and generate outcomes. We'll run this one locally with Ollama.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def handle_player_action(self, action):
+        prompt = f"""
+        The player ({self.game_state.character.name}, a {self.game_state.character.race} {self.game_state.character.class_type})
+        attempts to {action} in {self.game_state.location}.
+        Describe the immediate result of this action, focusing on the environment and NPCs' reactions.
+        Do not generate any further actions or dialogue for {self.game_state.character.name}.
+        If the player is trying to interact with an NPC, end your response with [NPC_INTERACTION: <npc_name>].
+        """
+        self.add_to_history("user", prompt)
+        return self.ai.chat(self.conversation_history, provider='ollama')
+```
+
+Here we pass what the player wants to do to the AI, which generates the outcome of the player's action. We also use a tag for interactions, so we can process those in a different way.
+
+### Generating NPC Dialogue
+
+Next, let's create a function to generate a dialogue with an NPC. We'll use Replicate with Llama 3 8B for this.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def generate_npc_dialogue(self, npc_name: str, player_input: str):
+        prompt = f"""
+        The player ({self.game_state.character.name}) said to {npc_name}: "{player_input}"
+        Generate a single, natural response from {npc_name}, addressing the player's input directly.
+        If the player is asking about items for sale, list 2-3 specific items with brief descriptions and prices.
+        Do not include any actions or responses from the player character.
+        Keep the response concise and relevant to the player's input.
+        Do not include any formatting tags, headers, or quotation marks in your response.
+        Respond as if you are {npc_name} speaking directly to the player.
+        """
+        self.add_to_history("user", prompt)
+        return self.ai.chat(self.conversation_history, provider='replicate')
+```
+
+Note that in the prompt we ensure the AI provides responses that are in character and appropriate, so we can pass them directly to the player.
+
+### Handling Conversations
+
+We manage conversations with NPCs in a separate method. We reset the conversation history to focus the AI on the dialogue, then run a conversation loop that lets the player have a back-and-forth exchange with the NPC.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def handle_conversation(self, npc_name):
+        print(f"\nYou are now in conversation with {npc_name}.")
+        self.conversation_history = [
+            {"role": "system", "content": f"You are {npc_name}, speaking directly to the player. Respond naturally and in character."}
+        ]
+        while True:
+            player_input = input(f"\nWhat do you say to {npc_name}? (or type 'end conversation' to stop): ")
+            if player_input.lower() == "end conversation":
+                print(f"\nYou end your conversation with {npc_name}.")
+                break
+
+            print(f"\n{npc_name}:")
+            self.print_stream(self.generate_npc_dialogue(npc_name, player_input))
+```
+
+We also give the player the option to end the conversation at any time.
+
+### Updating the Game State
+
+We update the game state based on the outcomes provided by the AI.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def update_game_state(self, outcome):
+        if "found" in outcome.lower():
+            item = outcome.split("found")[1].split(".")[0].strip()
+            self.game_state.update(item=item)
+        if "new area" in outcome.lower():
+            new_location = outcome.split("new area")[1].split(".")[0].strip()
+            self.game_state.update(location=new_location)
+        if "damage" in outcome.lower():
+            self.game_state.update(health_change=-10)
+        if "healed" in outcome.lower():
+            self.game_state.update(health_change=10)
+        if "quest" in outcome.lower():
+            quest = outcome.split("quest")[1].split(".")[0].strip()
+            self.game_state.update(quest=quest)
+        self.game_state.update(exp_gain=5)
+```
+
+This is a simple approach: we just look for keywords in the AI's response to determine what changes to make. It isn't the most reliable method, but it's easy to implement and lets the game respond to the player's actions, making the experience feel more dynamic.
+
+### Processing Story Elements
+
+Let's process the AI-generated story to extract the content and any special flags.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def process_story(self, story_generator) -> Tuple[str, Union[List[str], str]]:
+        story = self.print_stream(story_generator, print_output=True)
+        story_lines = story.split('\n')
+
+        # Peel trailing [TAG] lines off the end of the story
+        flags = []
+        while story_lines:
+            line = story_lines[-1].strip()
+            if line.startswith('[') and line.endswith(']'):
+                flags.append(line.strip('[]'))
+                story_lines.pop()
+            else:
+                break
+
+        story_content = '\n'.join(story_lines).strip()
+
+        if any(flag.startswith("NPC_INTERACTION:") for flag in flags):
+            npc_name = next(flag.split(':')[1].strip() for flag in flags if flag.startswith("NPC_INTERACTION:"))
+            return story_content, npc_name
+        else:
+            return story_content, flags
+```
+
+This is where we actually separate the special tags we defined earlier from the story content, ensuring the player sees a coherent story without tags.
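+
+To make the tag handling concrete, here's a small standalone illustration of the splitting logic (the story text is invented for the example):
+
+```python
+raw_story = (
+    "You enter the tavern. A hooded figure waves you over.\n"
+    "[NPC_INTERACTION: Hooded Figure]"
+)
+
+story_lines = raw_story.split('\n')
+flags = []
+# Peel trailing [TAG] lines off the end of the story
+while story_lines and story_lines[-1].strip().startswith('[') and story_lines[-1].strip().endswith(']'):
+    flags.append(story_lines.pop().strip().strip('[]'))
+
+print('\n'.join(story_lines))  # You enter the tavern. A hooded figure waves you over.
+print(flags)                   # ['NPC_INTERACTION: Hooded Figure']
+```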
+
+### Printing Streamed Content
+
+We also don't want to wait until the whole response is generated before printing, so let's define a function that displays the AI's output in real time, simulating typing.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def print_stream(self, stream, print_output=True) -> str:
+        full_text = ""
+        for chunk in stream:
+            if print_output:
+                print(chunk, end='', flush=True)
+            full_text += chunk
+            time.sleep(0.03)
+        if print_output:
+            print()
+        return full_text
+```
+
+### Main Game Loop
+
+Finally, we bring everything together in the `play_game` method.
+
+```python title="ai_dungeon_master/game/dungeon_master.py"
+class EnhancedAIDungeonMaster:
+    ...
+    def play_game(self):
+        print("Welcome to the Dungeon!")
+        character = self.create_character()
+        self.game_state = GameState(character)
+
+        print("\nYour adventure begins...")
+        while True:
+            print_separator()
+            environment_description, env_flags = self.process_story(self.generate_environment())
+
+            if "INTERACT_OPPORTUNITY" in env_flags:
+                print("\nThere seems to be an opportunity to interact.")
+            if "QUEST_OPPORTUNITY" in env_flags:
+                print("\nThere might be a quest available.")
+
+            action = input("\nWhat do you do? ")
+            if action.lower() == "quit":
+                break
+
+            print("\nOutcome:")
+            outcome, npc_interaction = self.process_story(self.handle_player_action(action))
+
+            self.update_game_state(outcome)
+
+            # process_story returns an NPC name (a string) when the
+            # [NPC_INTERACTION: ...] tag was present, and a list of flags otherwise
+            if npc_interaction and isinstance(npc_interaction, str):
+                self.handle_conversation(npc_interaction)
+
+            print_separator()
+            print(f"Current state: {str(self.game_state)}")
+
+            if self.game_state.health <= 0:
+                print("Game Over! Your health reached 0.")
+                break
+
+            if hasattr(self, 'current_environment'):
+                del self.current_environment
+```
+
+The game loop continuously processes player actions and updates the game state; new environments are generated each turn to keep the game dynamic, and the player can quit whenever they want.
+
+The game also ends if the player's health reaches zero.
+
+### Helper Functions
+
+Let's also create a couple of helper functions to improve the user experience: `print_separator` visually separates blocks of content, and `print_slowly` simulates streamed output for important messages.
+
+```python title="ai_dungeon_master/utils/text_utils.py"
+import time
+
+def print_separator():
+    print("\n" + "=" * 50 + "\n")
+
+def print_slowly(text, delay=0.03):
+    for char in text:
+        print(char, end='', flush=True)
+        time.sleep(delay)
+    print()
+```
+
+## 6. Main Script that Runs the Game
+
+In our main script, we initialize and start the game.
+
+```python title="ai_dungeon_master/main.py"
+from game.dungeon_master import EnhancedAIDungeonMaster
+from utils.text_utils import print_slowly
+from ai.ollama_server import start_ollama_server
+
+def main():
+    print_slowly("Welcome to the AI Dungeon Master!")
+    print_slowly("Prepare for an adventure guided by multiple AI models.")
+    print_slowly("Type 'quit' at any time to exit the game.")
+    print()
+
+    # Start the Ollama server before the game begins
+    ollama_process = start_ollama_server()
+
+    game = EnhancedAIDungeonMaster()
+    game.play_game()
+
+    print_slowly("Thank you for playing AI Dungeon Master!")
+
+    # Terminate the Ollama server when the game ends
+    if ollama_process:
+        ollama_process.terminate()
+
+if __name__ == "__main__":
+    main()
+```
+
+## 7. Running the Game
+
+1. Ensure you're in the root directory of the project.
+
+2. 
Run the game using Poetry: + +```bash +poetry run python ai_dungeon_master/main.py +``` + +Or directly if you used pip: + +```bash +python ai_dungeon_master/main.py +``` + +This command will execute the `main.py` file, which should contain the game initialization and main loop. + +## 8. Conclusion and Further Improvements + +Congratulations! You've now created an AI Dungeon Master using the ClientAI package. This project demonstrates how to integrate multiple AI providers and manage game logic to create a dynamic and engaging text-based RPG. + +### Potential Improvements: + +1. Error Handling: Implement try-except blocks to handle exceptions and improve robustness. +2. Saving and Loading: Add functionality to save and load game states. +3. Combat System: Develop a combat system that uses character stats and AI to determine outcomes. +4. Quest Management: Create a more complex quest system with objectives and rewards. +5. Multiplayer: Explore options for multiplayer interactions. +6. User Interface: Develop a GUI for a more user-friendly experience. +7. AI Fine-Tuning: Customize AI models for more consistent and relevant responses. + +By implementing these improvements, you can further enhance the gameplay experience and create an even more immersive and engaging AI-driven RPG. \ No newline at end of file diff --git a/docs/examples/overview.md b/docs/examples/overview.md new file mode 100644 index 0000000..b0bb338 --- /dev/null +++ b/docs/examples/overview.md @@ -0,0 +1,64 @@ +# Examples Overview + +Welcome to the Examples section of the ClientAI documentation. This section provides practical, real-world examples of how to use ClientAI in various applications. Whether you're a beginner looking to get started or an experienced developer seeking inspiration for more complex projects, these examples will demonstrate the versatility and power of ClientAI. + +## Featured Examples + +Our examples cover a range of applications, from simple text generation to more complex AI-driven systems. Here's an overview of what you'll find in this section: + +1. **AI Dungeon Master**: A text-based RPG that uses multiple AI providers to create an interactive storytelling experience. + + - [AI Dungeon Master Tutorial](ai_dungeon_master.md) + +2. **Chatbot Assistant**: A simple chatbot that can answer questions and engage in conversation using ClientAI. + + - Soon + +3. **Sentiment Analyzer**: An application that analyzes the sentiment of given text using different AI models. + + - Soon + +## Usage + +Each example is documented on its own page, where you'll find: + +- A detailed explanation of the example's purpose and functionality +- Step-by-step instructions for implementing the example +- Code snippets and full source code +- Explanations of key ClientAI features used in the example +- Tips for customizing and extending the example + +### Quick Start Example + +Here's a simple example to get you started with ClientAI: + +```python +from clientai import ClientAI + +# Initialize the client +client = ClientAI('openai', api_key="your-openai-api-key") + +# Generate a short story +prompt = "Write a short story about a robot learning to paint." +response = client.generate_text(prompt, model="gpt-3.5-turbo") + +print(response) +``` + +For more general usage instructions, please refer to our [Quickstart Guide](../quick-start.md). + +## Customizing Examples + +Feel free to use these examples as starting points for your own projects. You can modify and extend them to suit your specific needs. 
If you create an interesting project using ClientAI, we'd love to hear about it! + +## Contributing + +We welcome contributions to our examples collection! If you've created an example that you think would be valuable to others, please consider submitting it. Check out our [Contributing Guidelines](../community/CONTRIBUTING.md) for more information on how to contribute. + +## Feedback + +Your feedback helps us improve our examples and documentation. If you have suggestions for new examples, improvements to existing ones, or any other feedback, please let us know through GitHub issues or our community channels. + +--- + +Explore each example to see ClientAI in action and learn how to implement AI-driven features in your own projects. \ No newline at end of file diff --git a/docs/quick-start.md b/docs/quick-start.md index d714834..e9fb56b 100644 --- a/docs/quick-start.md +++ b/docs/quick-start.md @@ -110,7 +110,7 @@ print(response) Now that you've seen the basics of ClientAI, you can: 1. Explore more advanced features like streaming responses and handling full response objects. -2. Check out the [Usage Guide](usage.md) for detailed information on all available methods and options. +2. Check out the [Usage Guide](usage/overview.md) for detailed information on all available methods and options. 3. See the [API Reference](api/overview.md) for a complete list of ClientAI's classes and methods. Remember to handle API keys securely and never expose them in your code or version control systems. \ No newline at end of file diff --git a/docs/usage/chat_functionality.md b/docs/usage/chat_functionality.md new file mode 100644 index 0000000..9e3cbee --- /dev/null +++ b/docs/usage/chat_functionality.md @@ -0,0 +1,181 @@ +# Chat Functionality in ClientAI + +This guide covers how to leverage ClientAI's chat functionality. You'll learn about creating chat conversations, managing context, and handling chat-specific features across supported providers. + +## Table of Contents + +1. [Basic Chat Interaction](#basic-chat-interaction) +2. [Managing Conversation Context](#managing-conversation-context) +3. [Advanced Chat Features](#advanced-chat-features) +4. [Provider-Specific Chat Capabilities](#provider-specific-chat-capabilities) +5. [Best Practices](#best-practices) + +## Basic Chat Interaction + +To use the chat functionality in ClientAI, use the `chat` method: + +```python +from clientai import ClientAI + +client = ClientAI('openai', api_key="your-openai-api-key") + +messages = [ + {"role": "user", "content": "Hello, who are you?"} +] + +response = client.chat(messages, model="gpt-3.5-turbo") +print(response) + +# Continue the conversation +messages.append({"role": "assistant", "content": response}) +messages.append({"role": "user", "content": "What can you help me with?"}) + +response = client.chat(messages, model="gpt-3.5-turbo") +print(response) +``` + +This example demonstrates a simple back-and-forth conversation. 
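+
+If you find yourself repeating this append-then-chat pattern, you can wrap it in a small helper. Below is a minimal sketch (the `send_message` function is our own convenience wrapper, not part of the ClientAI API):
+
+```python
+def send_message(client, messages, user_input, model="gpt-3.5-turbo"):
+    """Append the user's message, get a reply, and record it in the history."""
+    messages.append({"role": "user", "content": user_input})
+    reply = client.chat(messages, model=model)
+    messages.append({"role": "assistant", "content": reply})
+    return reply
+
+# Usage
+history = []
+print(send_message(client, history, "Hello, who are you?"))
+print(send_message(client, history, "What can you help me with?"))
+```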
+ +## Managing Conversation Context + +Effective context management is crucial for coherent conversations: + +```python +conversation = [ + {"role": "system", "content": "You are a helpful assistant specializing in Python programming."}, + {"role": "user", "content": "How do I use list comprehensions in Python?"} +] + +response = client.chat(conversation, model="gpt-3.5-turbo") +print(response) + +conversation.append({"role": "assistant", "content": response}) +conversation.append({"role": "user", "content": "Can you give an example?"}) + +response = client.chat(conversation, model="gpt-3.5-turbo") +print(response) +``` + +This example shows how to maintain context across multiple exchanges, including a system message to set the assistant's role. + +## Advanced Chat Features + +### Streaming Chat Responses + +For real-time conversation, you can stream chat responses: + +```python +conversation = [ + {"role": "user", "content": "Tell me a long story about space exploration"} +] + +for chunk in client.chat(conversation, model="gpt-3.5-turbo", stream=True): + print(chunk, end="", flush=True) +``` + +### Temperature and Top-p Sampling + +Adjust the creativity and randomness of responses: + +```python +response = client.chat( + conversation, + model="gpt-3.5-turbo", + temperature=0.7, + top_p=0.9 +) +``` + +## Provider-Specific Chat Capabilities + +Different providers may offer unique chat features: + +### OpenAI + +```python +openai_client = ClientAI('openai', api_key="your-openai-api-key") + +response = openai_client.chat( + [{"role": "user", "content": "Translate 'Hello, world!' to Japanese"}], + model="gpt-4" +) +``` + +### Replicate + +```python +replicate_client = ClientAI('replicate', api_key="your-replicate-api-key") + +response = replicate_client.chat( + [{"role": "user", "content": "Explain quantum computing"}], + model="meta/llama-2-70b-chat:latest" +) +``` + +### Ollama + +```python +ollama_client = ClientAI('ollama', host="http://localhost:11434") + +response = ollama_client.chat( + [{"role": "user", "content": "What are the three laws of robotics?"}], + model="llama2" +) +``` + +## Best Practices + +1. **Context Management**: Keep track of the conversation history, but be mindful of token limits. + + ```python + max_context_length = 10 + if len(conversation) > max_context_length: + conversation = conversation[-max_context_length:] + ``` + +2. **Error Handling**: Implement robust error handling for chat interactions: + + ```python + try: + response = client.chat(conversation, model="gpt-3.5-turbo") + except Exception as e: + print(f"An error occurred during chat: {e}") + response = "I'm sorry, I encountered an error. Could you please try again?" + ``` + +3. **User Input Validation**: Validate and sanitize user inputs to prevent potential issues: + + ```python + def sanitize_input(user_input): + # Implement appropriate sanitization logic + return user_input.strip() + + user_message = sanitize_input(input("Your message: ")) + conversation.append({"role": "user", "content": user_message}) + ``` + +4. **Graceful Fallbacks**: Implement fallback mechanisms for when the AI doesn't understand or can't provide a suitable response: + + ```python + if not response or response.lower() == "i don't know": + response = "I'm not sure about that. Could you please rephrase or ask something else?" + ``` + +5. 
**Model Selection**: Choose appropriate models based on the complexity of your chat application: + + ```python + model = "gpt-4" if complex_conversation else "gpt-3.5-turbo" + response = client.chat(conversation, model=model) + ``` + +6. **Conversation Resetting**: Provide options to reset or start new conversations: + + ```python + def reset_conversation(): + return [{"role": "system", "content": "You are a helpful assistant."}] + + # Usage + conversation = reset_conversation() + ``` + +By following these guidelines and exploring the various features available, you can create sophisticated chat applications using ClientAI across different AI providers. \ No newline at end of file diff --git a/docs/usage/initialization.md b/docs/usage/initialization.md new file mode 100644 index 0000000..ea3eb49 --- /dev/null +++ b/docs/usage/initialization.md @@ -0,0 +1,117 @@ +# Initializing ClientAI + +This guide covers the process of initializing ClientAI with different AI providers. You'll learn how to set up ClientAI for use with OpenAI, Replicate, and Ollama. + +## Table of Contents + +1. [Prerequisites](#prerequisites) +2. [OpenAI Initialization](#openai-initialization) +3. [Replicate Initialization](#replicate-initialization) +4. [Ollama Initialization](#ollama-initialization) +5. [Multiple Provider Initialization](#multiple-provider-initialization) +6. [Best Practices](#best-practices) + +## Prerequisites + +Before initializing ClientAI, ensure you have: + +1. Installed ClientAI: `pip install clientai[all]` +2. Obtained necessary API keys for the providers you plan to use +3. Basic understanding of Python and asynchronous programming + +## OpenAI Initialization + +To initialize ClientAI with OpenAI: + +```python +from clientai import ClientAI + +openai_client = ClientAI('openai', api_key="your-openai-api-key") +``` + +Replace `"your-openai-api-key"` with your actual OpenAI API key. + +## Replicate Initialization + +To initialize ClientAI with Replicate: + +```python +from clientai import ClientAI + +replicate_client = ClientAI('replicate', api_key="your-replicate-api-key") +``` + +Replace `"your-replicate-api-key"` with your actual Replicate API key. + +## Ollama Initialization + +To initialize ClientAI with Ollama: + +```python +from clientai import ClientAI + +ollama_client = ClientAI('ollama', host="http://localhost:11434") +``` + +Ensure that you have Ollama running locally on the specified host. + +## Multiple Provider Initialization + +You can initialize multiple providers in the same script: + +```python +from clientai import ClientAI + +openai_client = ClientAI('openai', api_key="your-openai-api-key") +replicate_client = ClientAI('replicate', api_key="your-replicate-api-key") +ollama_client = ClientAI('ollama', host="http://localhost:11434") +``` + +## Best Practices + +1. **Environment Variables**: Store API keys in environment variables instead of hardcoding them in your script: + + ```python + import os + from clientai import ClientAI + + openai_client = ClientAI('openai', api_key=os.getenv('OPENAI_API_KEY')) + ``` + +2. **Error Handling**: Wrap initialization in a try-except block to handle potential errors: + + ```python + try: + client = ClientAI('openai', api_key="your-openai-api-key") + except ValueError as e: + print(f"Error initializing ClientAI: {e}") + ``` + +3. 
**Configuration Files**: For projects with multiple providers, consider using a configuration file: + + ```python + import json + from clientai import ClientAI + + with open('config.json') as f: + config = json.load(f) + + openai_client = ClientAI('openai', **config['openai']) + replicate_client = ClientAI('replicate', **config['replicate']) + ``` + +4. **Lazy Initialization**: If you're not sure which provider you'll use, initialize clients only when needed: + + ```python + def get_client(provider): + if provider == 'openai': + return ClientAI('openai', api_key="your-openai-api-key") + elif provider == 'replicate': + return ClientAI('replicate', api_key="your-replicate-api-key") + # ... other providers ... + + # Use the client when needed + client = get_client('openai') + ``` + +By following these initialization guidelines, you'll be well-prepared to start using ClientAI with various AI providers in your projects. \ No newline at end of file diff --git a/docs/usage/multiple_providers.md b/docs/usage/multiple_providers.md new file mode 100644 index 0000000..77e41a0 --- /dev/null +++ b/docs/usage/multiple_providers.md @@ -0,0 +1,187 @@ +# Working with Multiple Providers in ClientAI + +This guide explores techniques for effectively using multiple AI providers within a single project using ClientAI. You'll learn how to switch between providers and leverage their unique strengths. + +## Table of Contents + +1. [Setting Up Multiple Providers](#setting-up-multiple-providers) +2. [Switching Between Providers](#switching-between-providers) +3. [Leveraging Provider Strengths](#leveraging-provider-strengths) +4. [Load Balancing and Fallback Strategies](#load-balancing-and-fallback-strategies) +5. [Best Practices](#best-practices) + +## Setting Up Multiple Providers + +First, initialize ClientAI with multiple providers: + +```python +from clientai import ClientAI + +openai_client = ClientAI('openai', api_key="your-openai-api-key") +replicate_client = ClientAI('replicate', api_key="your-replicate-api-key") +ollama_client = ClientAI('ollama', host="http://localhost:11434") +``` + +## Switching Between Providers + +Create a function to switch between providers based on your requirements: + +```python +def get_provider(task): + if task == "translation": + return openai_client + elif task == "code_generation": + return replicate_client + elif task == "local_inference": + return ollama_client + else: + return openai_client # Default to OpenAI + +# Usage +task = "translation" +provider = get_provider(task) +response = provider.generate_text("Translate 'Hello' to French", model="gpt-3.5-turbo") +``` + +This approach allows you to dynamically select the most appropriate provider for each task. + +## Leveraging Provider Strengths + +Different providers excel in different areas. 
Here's how you can leverage their strengths: + +```python +def translate_text(text, target_language): + return openai_client.generate_text( + f"Translate '{text}' to {target_language}", + model="gpt-3.5-turbo" + ) + +def generate_code(prompt): + return replicate_client.generate_text( + prompt, + model="meta/llama-2-70b-chat:latest" + ) + +def local_inference(prompt): + return ollama_client.generate_text( + prompt, + model="llama2" + ) + +# Usage +french_text = translate_text("Hello, world!", "French") +python_code = generate_code("Write a Python function to calculate the Fibonacci sequence") +quick_response = local_inference("What's the capital of France?") +``` + +## Load Balancing and Fallback Strategies + +Implement load balancing and fallback strategies to ensure reliability: + +```python +import random + +providers = [openai_client, replicate_client, ollama_client] + +def load_balanced_generate(prompt, max_retries=3): + for _ in range(max_retries): + try: + provider = random.choice(providers) + return provider.generate_text(prompt, model=provider.default_model) + except Exception as e: + print(f"Error with provider {provider.__class__.__name__}: {e}") + raise Exception("All providers failed after max retries") + +# Usage +try: + response = load_balanced_generate("Explain the concept of machine learning") + print(response) +except Exception as e: + print(f"Failed to generate text: {e}") +``` + +This function randomly selects a provider and falls back to others if there's an error. + +## Best Practices + +1. **Provider Selection Logic**: Develop clear criteria for selecting providers based on task requirements, cost, and performance. + + ```python + def select_provider(task, complexity, budget): + if complexity == "high" and budget == "high": + return openai_client # Assuming OpenAI has more advanced models + elif task == "code" and budget == "medium": + return replicate_client + else: + return ollama_client # Assuming Ollama is the most cost-effective + ``` + +2. **Consistent Interface**: Create wrapper functions to provide a consistent interface across providers: + + ```python + def unified_generate(prompt, provider=None): + if provider is None: + provider = get_default_provider() + return provider.generate_text(prompt, model=provider.default_model) + + # Usage + response = unified_generate("Explain quantum computing") + ``` + +3. **Error Handling and Logging**: Implement comprehensive error handling and logging when working with multiple providers: + + ```python + import logging + + logging.basicConfig(level=logging.INFO) + logger = logging.getLogger(__name__) + + def safe_generate(prompt, provider): + try: + return provider.generate_text(prompt, model=provider.default_model) + except Exception as e: + logger.error(f"Error with {provider.__class__.__name__}: {e}") + return None + ``` + +4. **Performance Monitoring**: Track the performance of different providers to optimize selection: + + ```python + import time + + def timed_generate(prompt, provider): + start_time = time.time() + result = provider.generate_text(prompt, model=provider.default_model) + elapsed_time = time.time() - start_time + logger.info(f"{provider.__class__.__name__} took {elapsed_time:.2f} seconds") + return result + ``` + +5. 
**Configuration Management**: Use configuration files or environment variables to manage provider settings: + + ```python + import os + from dotenv import load_dotenv + + load_dotenv() + + openai_client = ClientAI('openai', api_key=os.getenv('OPENAI_API_KEY')) + replicate_client = ClientAI('replicate', api_key=os.getenv('REPLICATE_API_KEY')) + ollama_client = ClientAI('ollama', host=os.getenv('OLLAMA_HOST')) + ``` + +6. **Caching**: Implement caching to reduce redundant API calls and improve response times: + + ```python + from functools import lru_cache + + @lru_cache(maxsize=100) + def cached_generate(prompt, provider_name): + provider = get_provider(provider_name) + return provider.generate_text(prompt, model=provider.default_model) + + # Usage + response = cached_generate("What is the speed of light?", "openai") + ``` + +By following these practices and leveraging the strengths of multiple providers, you can create more robust, efficient, and versatile applications with ClientAI. \ No newline at end of file diff --git a/docs/usage/overview.md b/docs/usage/overview.md new file mode 100644 index 0000000..a3a76b4 --- /dev/null +++ b/docs/usage/overview.md @@ -0,0 +1,104 @@ +# Usage Overview + +This Usage section provides comprehensive guides on how to effectively use the key features of ClientAI. Each topic focuses on a specific aspect of usage, ensuring you have all the information needed to leverage the full potential of ClientAI in your projects. + +## Key Topics + +### 1. Initializing ClientAI + +This guide covers the process of initializing ClientAI with different AI providers. It provides a step-by-step approach to setting up ClientAI for use with OpenAI, Replicate, and Ollama. + +- [Initializing ClientAI Guide](initialization.md) + +### 2. Text Generation with ClientAI + +Learn how to use ClientAI for text generation tasks. This guide explores the various options and parameters available for generating text across different AI providers. + +- [Text Generation Guide](text_generation.md) + +### 3. Chat Functionality in ClientAI + +Discover how to leverage ClientAI's chat functionality. This guide covers creating chat conversations, managing context, and handling chat-specific features across supported providers. + +- [Chat Functionality Guide](chat_functionality.md) + +### 4. Working with Multiple Providers + +Explore techniques for effectively using multiple AI providers within a single project. This guide demonstrates how to switch between providers and leverage their unique strengths. + +- [Multiple Providers Guide](multiple_providers.md) + +### 5. Handling Responses and Errors + +Learn best practices for handling responses from AI providers and managing potential errors. This guide covers response parsing, error handling, and retry strategies. + +- Soon + +## Getting Started + +To make the most of these guides, we recommend familiarizing yourself with basic Python programming and asynchronous programming concepts, as ClientAI leverages these extensively. + +### Quick Start Example + +Here's a simple example to get you started with ClientAI: + +```python +from clientai import ClientAI + +# Initialize the client +client = ClientAI('openai', api_key="your-openai-api-key") + +# Generate text +response = client.generate_text( + "Explain the concept of machine learning in simple terms.", + model="gpt-3.5-turbo" +) + +print(response) +``` + +For more detailed examples and explanations, refer to the specific guides linked above. 
+ +## Advanced Usage + +### Streaming Responses + +ClientAI supports streaming responses for compatible providers. Here's a basic example: + +```python +for chunk in client.generate_text( + "Tell me a long story about space exploration", + model="gpt-3.5-turbo", + stream=True +): + print(chunk, end="", flush=True) +``` + +### Using Different Models + +ClientAI allows you to specify different models for each provider. For example: + +```python +# Using GPT-4 with OpenAI +openai_response = openai_client.generate_text( + "Explain quantum computing", + model="gpt-4" +) + +# Using Llama 2 with Replicate +replicate_response = replicate_client.generate_text( + "Describe the process of photosynthesis", + model="meta/llama-2-70b-chat:latest" +) +``` + +## Best Practices + +1. **API Key Management**: Always store your API keys securely, preferably as environment variables. +2. **Error Handling**: Implement proper error handling to manage potential API failures or rate limiting issues. +3. **Model Selection**: Choose appropriate models based on your task requirements and budget considerations. +4. **Context Management**: For chat applications, manage conversation context efficiently to get the best results. + +## Contribution + +If you have suggestions or contributions to these guides, please refer to our [Contributing Guidelines](../community/CONTRIBUTING.md). We appreciate your input in improving our documentation and making ClientAI more accessible to all users. \ No newline at end of file diff --git a/docs/usage/text_generation.md b/docs/usage/text_generation.md new file mode 100644 index 0000000..6f62e91 --- /dev/null +++ b/docs/usage/text_generation.md @@ -0,0 +1,127 @@ +# Text Generation with ClientAI + +This guide explores how to use ClientAI for text generation tasks across different AI providers. You'll learn about the various options and parameters available for generating text. + +## Table of Contents + +1. [Basic Text Generation](#basic-text-generation) +2. [Advanced Parameters](#advanced-parameters) +3. [Streaming Responses](#streaming-responses) +4. [Provider-Specific Features](#provider-specific-features) +5. [Best Practices](#best-practices) + +## Basic Text Generation + +To generate text using ClientAI, use the `generate_text` method: + +```python +from clientai import ClientAI + +client = ClientAI('openai', api_key="your-openai-api-key") + +response = client.generate_text( + "Write a short story about a robot learning to paint.", + model="gpt-3.5-turbo" +) + +print(response) +``` + +This will generate a short story based on the given prompt. + +## Advanced Parameters + +ClientAI supports various parameters to fine-tune text generation: + +```python +response = client.generate_text( + "Explain the theory of relativity", + model="gpt-4", + max_tokens=150, + temperature=0.7, + top_p=0.9, + presence_penalty=0.1, + frequency_penalty=0.1 +) +``` + +- `max_tokens`: Maximum number of tokens to generate +- `temperature`: Controls randomness (0.0 to 1.0) +- `top_p`: Nucleus sampling parameter +- `presence_penalty`: Penalizes new tokens based on their presence in the text so far +- `frequency_penalty`: Penalizes new tokens based on their frequency in the text so far + +Note: Available parameters may vary depending on the provider. 
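+
+Since parameter support differs across providers, one defensive pattern is to keep a per-provider dictionary of tuning options and unpack it at call time, so each call only passes options that provider understands. Here's a sketch (the parameter sets are illustrative; check each provider's documentation for what it actually accepts):
+
+```python
+# Illustrative per-provider generation parameters
+GENERATION_PARAMS = {
+    "openai": {"max_tokens": 150, "temperature": 0.7, "presence_penalty": 0.1},
+    "ollama": {"temperature": 0.7},
+}
+
+def generate(client, provider_name, prompt, model):
+    params = GENERATION_PARAMS.get(provider_name, {})
+    return client.generate_text(prompt, model=model, **params)
+```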
+ +## Streaming Responses + +For long-form content, you can use streaming to get partial responses as they're generated: + +```python +for chunk in client.generate_text( + "Write a comprehensive essay on climate change", + model="gpt-3.5-turbo", + stream=True +): + print(chunk, end="", flush=True) +``` + +This allows for real-time display of generated text, which can be useful for user interfaces or long-running generations. + +## Provider-Specific Features + +Different providers may offer unique features. Here are some examples: + +### OpenAI + +```python +response = openai_client.generate_text( + "Translate the following to French: 'Hello, how are you?'", + model="gpt-3.5-turbo" +) +``` + +### Replicate + +```python +response = replicate_client.generate_text( + "Generate a haiku about mountains", + model="meta/llama-2-70b-chat:latest" +) +``` + +### Ollama + +```python +response = ollama_client.generate_text( + "Explain the concept of neural networks", + model="llama2" +) +``` + +## Best Practices + +1. **Prompt Engineering**: Craft clear and specific prompts for better results. + + ```python + good_prompt = "Write a detailed description of a futuristic city, focusing on transportation and architecture." + ``` + +2. **Model Selection**: Choose appropriate models based on your task complexity and requirements. + +3. **Error Handling**: Always handle potential errors in text generation: + + ```python + try: + response = client.generate_text("Your prompt here", model="gpt-3.5-turbo") + except Exception as e: + print(f"An error occurred: {e}") + ``` + +4. **Rate Limiting**: Be mindful of rate limits imposed by providers. Implement appropriate delays or queuing mechanisms for high-volume applications. + +5. **Content Filtering**: Implement content filtering or moderation for user-facing applications to ensure appropriate outputs. + +6. **Consistency**: For applications requiring consistent outputs, consider using lower temperature values or implementing your own post-processing. + +By following these guidelines and exploring the various parameters and features available, you can effectively leverage ClientAI for a wide range of text generation tasks across different AI providers. \ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index d6856d6..2cf5671 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -45,10 +45,6 @@ nav: - ClientAI: index.md - Installing: installing.md - Quick-Start: quick-start.md - # - Usage: - # - Overview: usage/overview.md - # - Advanced: - # - Overview: advanced/overview.md - API Reference: - Overview: api/overview.md - ClientAI: api/clientai.md @@ -57,13 +53,23 @@ nav: - Ollama: api/ollama_provider.md - OpenAI: api/openai_provider.md - Replicate: api/replicate_provider.md + - Usage: + - usage/overview.md + - usage/initialization.md + - usage/text_generation.md + - usage/chat_functionality.md + - usage/multiple_providers.md + # - usage/error_handling.md + - Examples: + - Overview: examples/overview.md + - Examples: + - AI Dungeon Master: examples/ai_dungeon_master.md - Community: - Overview: community/overview.md - Contributing: community/CONTRIBUTING.md - Code of Conduct: community/CODE_OF_CONDUCT.md - License: community/LICENSE.md - Extending: extending.md - # - Changelog: changelog.md markdown_extensions: - admonition