Update docs and simplified external trigger API
ManiMozaffar committed May 11, 2024
1 parent 4d2df16 commit 069b740
Showing 6 changed files with 126 additions and 59 deletions.
12 changes: 9 additions & 3 deletions aioclock/task.py
@@ -8,14 +8,20 @@
@dataclass
class Task:
"""Task that will be run by AioClock.
Which always has a function and a trigger."""
Which always has a function and a trigger.
This is used internally when you decorate your function with `aioclock.task`.
"""

func: Callable[..., Any]
"""Decorated function that will be run by AioClock."""

trigger: BaseTrigger
"""Trigger that will be used to run the function."""

async def run(self):
"""Run the task, and handle the exceptions.
If the task fails, log the error, but keep running the tasks.
"""
Run the task, and handle the exceptions.
If the task fails, log the error with the exception, but keep running the other tasks.
"""
while self.trigger.should_trigger():
try:
24 changes: 6 additions & 18 deletions aioclock/triggers.py
@@ -114,9 +114,7 @@ async def trigger_next(self) -> None:
return None

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return 0
return None
return 0


class LoopController(BaseTrigger, ABC, Generic[TriggerTypeT]):
@@ -164,9 +162,7 @@ def should_trigger(self) -> bool:
return False

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return 0
return None
return 0


class Once(LoopController[Literal[Triggers.ONCE]]):
@@ -192,9 +188,7 @@ async def trigger_next(self) -> None:
return None

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return 0
return None
return 0


class OnStartUp(LoopController[Literal[Triggers.ON_START_UP]]):
@@ -220,9 +214,7 @@ async def trigger_next(self) -> None:
return None

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return 0
return None
return 0


class OnShutDown(LoopController[Literal[Triggers.ON_SHUT_DOWN]]):
@@ -249,9 +241,7 @@ async def trigger_next(self) -> None:
return None

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return 0
return None
return 0


class Every(LoopController[Literal[Triggers.EVERY]]):
@@ -420,9 +410,7 @@ def get_sleep_time(self):
return sleep_for

async def get_waiting_time_till_next_trigger(self):
if self.should_trigger():
return self.get_sleep_time()
return None
return self.get_sleep_time()

async def trigger_next(self) -> None:
self._increment_loop_counter()
65 changes: 33 additions & 32 deletions docs/alternative.md
@@ -18,37 +18,37 @@ Rocketry is a modern statement-based scheduling framework for Python. It is simp

When **AioClock** might be a better choice:

- You need a truly light weight solution
- You are using Pydantic v2
- Type safety is important to you. All triggers are type safe, but some statements are stringly typed in rocketry
- You need more reliable and preditcable time based scheduling that logs when the next event is going to be triggered
- You need a truly lightweight solution.
- You are using Pydantic v2.
- Type safety is important to you. All AioClock triggers are type-safe, whereas some statements in Rocketry are stringly typed.
- You need more reliable and predictable time-based scheduling that logs when the next event will be triggered.

When **Rocketry** might be a better choice:

- You need a task pipelining that is heavily cpu intensive
- You need task pipelining that is heavily CPU-intensive.
- Your code is not yet asynchronous, or blocks the main thread.
- You are still using Pydantic v1
- You are still using Pydantic v1.

!!! success "Note"

You can also asyncify your sync code, by running them in a threadpool executor. Libraries like asyncer or anyio might help you with that.
You can also asyncify your sync code by running it in a thread-pool executor; libraries like `asyncer` or `anyio` can help with that. Note that this won't truly make your code asynchronous, because it still occupies a thread, but it lets you get the job done with AioClock easily. It is a good fit when the library you use has no async alternative.
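As a minimal sketch of that approach using only the standard library (`asyncio.to_thread` plays the same role as `anyio.to_thread.run_sync` or asyncer's `asyncify`; `blocking_fetch` is a made-up placeholder for your sync call):

```python
import asyncio
import time

def blocking_fetch() -> str:
    # A stand-in for a sync, blocking call from a library with no async API.
    time.sleep(0.1)
    return "payload"

async def fetch_async() -> str:
    # Run the blocking function in the default thread-pool executor,
    # keeping the event loop free while it runs.
    return await asyncio.to_thread(blocking_fetch)

result = asyncio.run(fetch_async())
print(result)  # payload
```

The coroutine can then be registered as an AioClock task like any other async function.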

## AioClock vs Crontab

Crontab is a scheduler for Unix-like operating systems. It is lightweight and able to run tasks (or jobs) periodically, i.e. hourly, weekly, or on fixed dates.

When **AioClock** might be a better choice:

- You are building a system and not just running individual scripts
- You need task pipelining
- You need more complex and custom scheduling
- You are not familiar Unix-Linux or you work with Windows
- You are building a system and not just running individual scripts.
- You need task pipelining.
- You need more complex and custom scheduling.
- You are not familiar with Unix/Linux, or you work on Windows.

When **Crontab** might be a better choice:

- If you need a truly light weight solution
- You are not familiar with Python
- You only want to run scripts independently at given periods
- You need a truly lightweight solution.
- You are not familiar with Python.
- You only want to run scripts independently at given periods.

## AioClock vs APScheduler

@@ -57,9 +57,9 @@ It provides Cron-style scheduling and some interval based scheduling.

When **AioClock** might be a better choice:

- You are building an automation system
- You need more complex and customized scheduling
- You need to pipeline tasks
- You are building an automation system.
- You need more complex and customized scheduling.
- You need to pipeline tasks.

When **APScheduler** might be a better choice:

@@ -76,19 +76,20 @@ scheduling background tasks for web back-ends.

When **AioClock** might be a better choice:

- You are building an automation system
- You need more complex and customized scheduling
- You work with Windows
- You are building an automation system.
- You need more complex and customized scheduling.
- You work with Windows.
- You want full control over your broker's behavior, with high flexibility.

When **Celery** might be a better choice:

- You are running background tasks for web servers
- You need higher performance
- You need distributed execution
- You are running background tasks for web servers.
- You are not very familiar with message brokers, and you need an easy solution that abstracts away the details.

!!! success "Note"

Celery works via task queues but such mechanism could be implemented to AioClock as well by creating a `once event` that reads from queue. You may make this as decorator and even create new libraries using AioClock!
Celery works via task queues, but such a mechanism could be implemented in AioClock as well, by creating a `once trigger` that reads from a queue. You could wrap this in a decorator, and even create new libraries using AioClock.
For implementation details, see [how to integrate a broker into AioClock App](examples/brokers.md).

## AioClock vs Airflow

@@ -97,14 +98,14 @@ in data pipelines. It has a scheduler and a built-in monitor.

When **AioClock** might be a better choice:

- You work with Windows
- You need something that is easy to set up and quick to get produtive with
- You are building an application
- You want more customization
- You work with Windows.
- You need something that is easy to set up and quick to get productive with.
- You are building an application.
- You want more customization.

When **Airflow** might be a better choice:

- You are building standard data pipelines
- You would like to have more out-of-the-box
- You need distributed execution
- You work in data engineering
- You are building standard data pipelines.
- You would like to have more features out-of-the-box.
- You need distributed execution.
- You work in data engineering.
36 changes: 36 additions & 0 deletions docs/examples/brokers.md
@@ -0,0 +1,36 @@
You can run basically any task on AioClock; it could be your Redis broker or another kind of broker listening to a queue. The benefit of doing so is that you don't need to worry about dependency injection, or about startup and shutdown events.

AioClock offers you a uniquely easy way to spin up new services, without any overhead or performance issues!

```python
from aioclock import AioClock, Depends, Forever, OnShutDown
from functools import lru_cache
from your_module import BrokerType

app = AioClock()

# your singleton redis instance
@lru_cache
def get_redis() -> BrokerType:
...


@app.task(trigger=Forever())
async def read_message_queue(redis: BrokerType = Depends(get_redis)):
async for message in redis.listen("..."):
...


@app.task(trigger=OnShutDown())
async def shutdown_event(redis: BrokerType = Depends(get_redis)):
await redis.disconnect()
```

One other way to do this is to implement a trigger that automatically executes the function.
But to do so, I would basically need to wrap Redis in my own library, and that's not good for a few reasons:

1. The complexity of the framework increases.
2. It is not really flexible, because the native library and client are always far more flexible; I would end up writing something like `Celery`.
3. The architecture I choose to handle interactions with the broker may not satisfy your requirements.
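For illustration, such a queue-reading trigger might look like the following sketch. `QueueTrigger` is hypothetical and not part of aioclock; it only mirrors the three methods aioclock's triggers expose (`should_trigger`, `get_waiting_time_till_next_trigger`, `trigger_next`), using a plain `asyncio.Queue` in place of a real broker:

```python
import asyncio

class QueueTrigger:
    """Hypothetical trigger sketch: fires the task once per queued message."""

    def __init__(self, queue: asyncio.Queue):
        self.queue = queue
        self.last_message = None

    def should_trigger(self) -> bool:
        # Keep the task alive as long as the queue may still produce messages.
        return True

    async def get_waiting_time_till_next_trigger(self):
        return 0

    async def trigger_next(self) -> None:
        # Block until a message is available; the task body runs after this.
        self.last_message = await self.queue.get()

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    await queue.put("hello")
    trigger = QueueTrigger(queue)
    await trigger.trigger_next()
    return trigger.last_message

message = asyncio.run(main())
print(message)  # hello
```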

[This repository is an example of how you can write a message queue in aioclock.](https://github.com/ManiMozaffar/typed-redis)
33 changes: 33 additions & 0 deletions docs/examples/fastapi.md
@@ -0,0 +1,33 @@
To run AioClock with FastAPI, you can run it in the background with FastAPI's lifespan, next to your ASGI app.

```python
from aioclock import AioClock
from contextlib import asynccontextmanager
from fastapi import FastAPI
import asyncio

clock_app = AioClock()

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Run the AioClock app in the background for the whole app lifetime.
    task = asyncio.create_task(clock_app.serve())
    yield
    # On shutdown, cancel the background task and wait for it to finish.
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        ...

app = FastAPI(lifespan=lifespan)
```
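The shutdown half of that lifespan is ordinary asyncio task cancellation. Stripped of FastAPI and AioClock, the same pattern can be sketched with the standard library alone (`worker` is a stand-in for `clock_app.serve()`):

```python
import asyncio

async def worker() -> None:
    # Stand-in for clock_app.serve(): runs until cancelled.
    while True:
        await asyncio.sleep(0.01)

async def main() -> str:
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.05)  # the application serves requests here ("yield")
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return "cancelled" if task.cancelled() else "done"

state = asyncio.run(main())
print(state)  # cancelled
```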

!!! danger "This setup is not recommended at all"

Running AioClock with FastAPI is not good practice in general, because:
FastAPI is a framework for writing stateless APIs, but AioClock is still a stateful component in your architecture.
In simpler terms, if you have 5 instances of AioClock running, they produce 5 times the tasks you intended,
so you cannot easily scale horizontally just by adding more AioClock instances!

Even in this case, if you serve FastAPI with multiple processes, you end up with one AioClock per process!

What I suggest is to spin up one new service that is responsible for processing the periodic tasks.
Try to avoid periodic tasks in general, but sometimes that's not easy to do.
15 changes: 9 additions & 6 deletions mkdocs.yml
@@ -4,22 +4,22 @@ repo_url: https://github.com/ManiMozaffar/aioclock
site_url: https://ManiMozaffar.github.io/aioclock
site_author: Mani Mozaffar
repo_name: ManiMozaffar/aioclock
copyright: Maintained by <a href="https://ManiMozaffar.com">Florian</a>.
copyright: Maintained by <a href="https://ManiMozaffar.com">Mani Mozaffar</a>.

theme:
name: "material"
palette:
- media: "(prefers-color-scheme: light)"
scheme: default
primary: pink
accent: pink
primary: blue grey
accent: indigo
toggle:
icon: material/lightbulb-outline
name: "Switch to dark mode"
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: pink
accent: pink
primary: blue grey
accent: indigo
toggle:
icon: material/lightbulb
name: "Switch to light mode"
@@ -88,9 +88,12 @@ plugins:

nav:
- Introduction: index.md
- API Documentation:
- Documentation:
- Aioclock App: api/app.md
- Group: api/group.md
- Task: api/task.md
- Triggers: api/triggers.md
- Alternatives: alternative.md
- Examples:
- Using with FastAPI: examples/fastapi.md
- Using with Message Brokers: examples/brokers.md
