Commit

docs for ollama manager added, improved previous docs
igorbenav committed Nov 1, 2024
1 parent d7d88da commit bea6329
Showing 10 changed files with 835 additions and 17 deletions.
9 changes: 9 additions & 0 deletions docs/api/ollama_manager/ollama_manager.md
@@ -0,0 +1,9 @@
# OllamaManager Class API Reference

The `OllamaManager` class is a utility class that manages the lifecycle of a local Ollama server instance. It handles server process startup, monitoring, and shutdown while respecting platform-specific requirements and custom configurations. The manager supports configurable GPU acceleration, CPU thread allocation, and memory limits through `OllamaServerConfig`. It provides both context manager and manual management interfaces for controlling the server process.

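A minimal usage sketch of both interfaces is shown below. The configuration argument and the `start()`/`stop()` method names are assumptions drawn from the description above; see the generated reference that follows for the exact API.

```python
from clientai.ollama import OllamaManager, OllamaServerConfig

# Context manager interface: the server is started on entry and shut down on exit
config = OllamaServerConfig(gpu_layers=35)  # field name is illustrative
with OllamaManager(config) as manager:
    ...  # talk to the locally running Ollama server here

# Manual management interface (method names assumed from the description above)
manager = OllamaManager()
manager.start()
try:
    ...  # use the server
finally:
    manager.stop()
```
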
## Class Definition

::: clientai.ollama.OllamaManager
    rendering:
        show_if_no_docstring: true
9 changes: 9 additions & 0 deletions docs/api/ollama_manager/ollama_server_config.md
@@ -0,0 +1,9 @@
# OllamaServerConfig Class API Reference

The `OllamaServerConfig` class is a configuration container that defines the runtime parameters for an Ollama server instance. It allows users to specify network settings (host/port), hardware utilization options (GPU layers, CPU threads, memory limits), and environment variables. The class provides sensible defaults while allowing fine-grained control over server behavior through optional configuration parameters.

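A hedged configuration sketch, assuming parameter names that mirror the description above (`host`, `port`, `gpu_layers`, `cpu_threads`, `memory_limit`, `env_vars`); consult the generated reference below for the authoritative signature and defaults.

```python
from clientai.ollama import OllamaServerConfig

# Anything not set explicitly falls back to the defaults
config = OllamaServerConfig(
    host="127.0.0.1",                 # network settings
    port=11434,
    gpu_layers=35,                    # hardware utilization options
    cpu_threads=8,
    memory_limit="8GiB",
    env_vars={"OLLAMA_DEBUG": "1"},   # extra environment variables for the server process
)
```
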
## Class Definition

::: clientai.ollama.OllamaServerConfig
    rendering:
        show_if_no_docstring: true
11 changes: 8 additions & 3 deletions docs/api/overview.md
@@ -16,9 +16,14 @@ ClientAI's API is comprised of several key components, each serving a specific p

3. **Provider-Specific Classes**: These classes implement the AIProvider interface for each supported AI service (OpenAI, Replicate, Ollama).

-- [OpenAI Provider Reference](openai_provider.md)
-- [Replicate Provider Reference](replicate_provider.md)
-- [Ollama Provider Reference](ollama_provider.md)
+- [OpenAI Provider Reference](specific_providers/openai_provider.md)
+- [Replicate Provider Reference](specific_providers/replicate_provider.md)
+- [Ollama Provider Reference](specific_providers/ollama_provider.md)

4. **Ollama Manager**: These classes handle the local Ollama server configuration and lifecycle management.

- [OllamaManager Class Reference](ollama_manager/ollama_manager.md)
- [OllamaServerConfig Class Reference](ollama_manager/ollama_server_config.md)

## Usage

File renamed without changes.
File renamed without changes.
File renamed without changes.
104 changes: 100 additions & 4 deletions docs/extending.md
Expand Up @@ -100,7 +100,90 @@ class Provider(AIProvider):

Make sure to handle both streaming and non-streaming responses, as well as the `return_full_response` option.

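For example, a `generate_text` implementation usually branches on both flags. The sketch below assumes a hypothetical NewAI SDK client whose responses are dictionaries with a `"text"` field:

```python
def generate_text(self, prompt, model, return_full_response=False, stream=False, **kwargs):
    response = self.client.generate(  # hypothetical NewAI SDK call
        prompt=prompt, model=model, stream=stream, **kwargs
    )
    if stream:
        # Streaming: yield either the raw chunk or just its text, piece by piece
        return (
            chunk if return_full_response else chunk["text"]
            for chunk in response
        )
    # Non-streaming: return the full response object or just the generated text
    return response if return_full_response else response["text"]
```
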
-## Step 4: Update the Main ClientAI Class
+## Step 4: Implement Unified Error Handling

Before implementing the provider class, set up error handling for your provider. This ensures consistent error reporting across all providers.

1. First, import the ClientAI exceptions, along with the exception types raised by your provider's SDK (shown here with placeholder names):

```python
# clientai/newai/provider.py

from typing import Optional

# Exceptions raised by the hypothetical NewAI SDK
# (the module path and class names are placeholders for your provider's real ones)
from newai.errors import (
    NewAIAuthError,
    NewAIInvalidRequestError,
    NewAIRateLimitError,
    NewAITimeoutError,
)

from ..exceptions import (
    APIError,
    AuthenticationError,
    ClientAIError,
    InvalidRequestError,
    ModelError,
    RateLimitError,
    TimeoutError,
)
```

2. Implement the error mapping method in your provider class:

```python
class Provider(AIProvider):
    ...

    def _map_exception_to_clientai_error(
        self,
        e: Exception,
        status_code: Optional[int] = None
    ) -> ClientAIError:
        """
        Maps NewAI-specific exceptions to ClientAI exceptions.

        Args:
            e: The caught exception
            status_code: Optional HTTP status code

        Returns:
            ClientAIError: The appropriate ClientAI exception
        """
        error_message = str(e)
        status_code = status_code or getattr(e, "status_code", None)

        # Map NewAI-specific exceptions to ClientAI exceptions
        if isinstance(e, NewAIAuthError):
            return AuthenticationError(
                error_message,
                status_code=401,
                original_error=e
            )
        elif isinstance(e, NewAIRateLimitError):
            return RateLimitError(
                error_message,
                status_code=429,
                original_error=e
            )
        elif "model not found" in error_message.lower():
            return ModelError(
                error_message,
                status_code=404,
                original_error=e
            )
        elif isinstance(e, NewAIInvalidRequestError):
            return InvalidRequestError(
                error_message,
                status_code=400,
                original_error=e
            )
        elif isinstance(e, NewAITimeoutError):
            return TimeoutError(
                error_message,
                status_code=408,
                original_error=e
            )

        # Default to APIError for unknown errors
        return APIError(
            error_message,
            status_code,
            original_error=e
        )
```
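
With the mapping in place, each provider method can wrap its SDK calls and re-raise the translated exception. A short sketch (the `self.client.chat` call is a placeholder for the real SDK):

```python
def chat(self, messages, model, **kwargs):
    try:
        return self.client.chat(messages=messages, model=model, **kwargs)
    except Exception as e:
        raise self._map_exception_to_clientai_error(e) from e
```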

## Step 5: Update the Main ClientAI Class

Update the `clientai/client_ai.py` file to include support for your new provider:

@@ -149,7 +232,7 @@ Update the `clientai/client_ai.py` file to include support for your new provider
...
```

-## Step 5: Update Package Constants and Dependencies
+## Step 6: Update Package Constants and Dependencies

1. In the `clientai/_constants.py` file, add a constant for your new provider:

@@ -216,7 +299,7 @@ If users are not using Poetry and are installing the package via pip, they can s
pip install clientai[newai]
```

-## Step 6: Add Tests
+## Step 7: Add Tests

Create a new test file for your provider in the `tests` directory:

@@ -229,7 +312,20 @@ tests/

Implement tests for your new provider, ensuring that you cover both the `generate_text` and `chat` methods, as well as streaming and non-streaming scenarios.

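A minimal pytest sketch for the non-streaming and streaming paths; the constructor signature and the `client` attribute are assumptions about the hypothetical NewAI provider:

```python
# tests/newai/test_provider.py
from unittest.mock import MagicMock

from clientai.newai.provider import Provider


def test_generate_text_returns_plain_text():
    provider = Provider(api_key="test-key")        # constructor signature is assumed
    provider.client = MagicMock()                  # assumes the SDK client lives on `client`
    provider.client.generate.return_value = {"text": "hello"}

    assert provider.generate_text("hi", model="newai-small") == "hello"


def test_generate_text_streams_chunks():
    provider = Provider(api_key="test-key")
    provider.client = MagicMock()
    provider.client.generate.return_value = iter([{"text": "he"}, {"text": "llo"}])

    chunks = list(provider.generate_text("hi", model="newai-small", stream=True))
    assert chunks == ["he", "llo"]
```
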
-## Step 7: Update Documentation
+## Step 8: Test Error Handling

Also create a new test file in the `tests` directory that covers your provider's exception handling:

```
tests/
    newai/
        __init__.py
        test_exceptions.py
```

Add tests to ensure provider errors are mapped to the unified ClientAI exceptions and that each mapped exception keeps a reference to the original error.

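For instance, a hedged sketch of one such test; the SDK exception class, the provider constructor, the `client` attribute, and the `original_error`/`status_code` attributes on ClientAI exceptions are assumptions here:

```python
# tests/newai/test_exceptions.py
from unittest.mock import MagicMock

import pytest
from newai.errors import NewAIRateLimitError  # hypothetical SDK exception

from clientai.exceptions import RateLimitError
from clientai.newai.provider import Provider


def test_rate_limit_error_is_unified():
    provider = Provider(api_key="test-key")
    provider.client = MagicMock()
    original = NewAIRateLimitError("too many requests")
    provider.client.generate.side_effect = original

    with pytest.raises(RateLimitError) as exc_info:
        provider.generate_text("hi", model="newai-small")

    # The mapped exception should keep a reference to the original error
    assert exc_info.value.original_error is original
    assert exc_info.value.status_code == 429
```
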
## Step 9: Update Documentation

Don't forget to update the documentation to include information about the new provider:
