There are other alternatives for scheduling as well. This section contains comparisons between AioClock and other scheduling tools. Credit to the Rocketry library, as the comparison is inspired by it.
Features unique to AioClock:
+Rocketry is a modern statement-based scheduling framework for Python. It is simple, clean and extensive. It is suitable for small and big projects.
+When AioClock might be a better choice:
+When Rocketry might be a better choice:
+Coming next...
+In future versions, aioclock will feature a more advanced architecture, leveraging multiprocessing to handle heavy tasks efficiently.
Crontab is a scheduler for Unix-like operating systems. It is lightweight and able to run tasks (or jobs) periodically, e.g. hourly, weekly, or on fixed dates.
+When AioClock might be a better choice:
+When Crontab might be a better choice:
APScheduler is a relatively simple scheduler library for Python. It provides cron-style scheduling and some interval-based scheduling.
+When AioClock might be a better choice:
+When APScheduler might be a better choice:
+You can do this by yourself already...
The library already exposes an external API that you can use to store task metadata in a database yourself. It is easy to do, but aioclock itself may never ship this feature, to avoid coupling the library to a database dependency. Read about how to use the external API, and see the sketch below.
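A minimal sketch of that approach, assuming you just want to dump a snapshot of the task metadata to a JSON file (the file name and the example task are illustrative):

```python
import json

from aioclock import AioClock, Every
from aioclock.api import get_metadata_of_all_tasks

app = AioClock()

@app.task(trigger=Every(seconds=60))
async def sync_metrics():
    print("syncing metrics...")

async def persist_task_metadata():
    # Collect the metadata of every registered task from the app
    # and store it wherever you like; a JSON file stands in for a database here.
    metadata = await get_metadata_of_all_tasks(app)
    snapshot = [{"id": str(meta.id), "task_name": meta.task_name} for meta in metadata]
    with open("task_metadata.json", "w") as f:
        json.dump(snapshot, f, indent=2)
```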
+Celery is a task queue system meant for distributed execution and +scheduling background tasks for web back-ends.
+When AioClock might be a better choice:
+When Celery might be a better choice:
Integrating a broker with aioclock is easier than you might imagine!
Celery works via task queues, but a similar mechanism can be implemented with AioClock by creating a Once-triggered task that reads from a queue, as sketched below. You could wrap this in a decorator, and even build new libraries on top of AioClock.
For implementation details, see how to integrate a broker into an AioClock app.
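A minimal sketch of that pattern, using asyncio.Queue as a stand-in for a real broker (the queue and the message handling are illustrative):

```python
import asyncio

from aioclock import AioClock, Once

app = AioClock()

# A real broker (RabbitMQ, Redis, ...) would replace this in-memory queue.
queue: asyncio.Queue[str] = asyncio.Queue()

@app.task(trigger=Once())
async def consume():
    # The Once trigger starts this task a single time; the task itself then
    # keeps pulling messages from the queue for the lifetime of the app.
    while True:
        message = await queue.get()
        print(f"received: {message}")
        queue.task_done()
```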
Airflow is a workflow management system used heavily in data pipelines. It has a scheduler and a built-in monitor.
+When AioClock might be a better choice:
+When Airflow might be a better choice:
+FastStream is a powerful and easy-to-use Python framework for building asynchronous services interacting with event streams such as Apache Kafka, RabbitMQ, NATS and Redis.
+When AioClock might be a better choice:
+When FastStream might be a better choice:
+They can be used together...
Note that you can use both side by side, just like with FastAPI. All you'd have to do is serve both applications at the same time, as sketched below.
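A rough sketch of serving both with asyncio.gather; the FastStream side follows its RabbitMQ quickstart and is an assumption here, so adapt it to your broker:

```python
import asyncio

from aioclock import AioClock, Every
from faststream import FastStream
from faststream.rabbit import RabbitBroker

broker = RabbitBroker("amqp://guest:guest@localhost:5672/")
stream_app = FastStream(broker)
clock_app = AioClock()

@clock_app.task(trigger=Every(minutes=5))
async def periodic_job():
    print("running periodic job")

async def main():
    # Run the event-stream app and the scheduler in the same event loop.
    await asyncio.gather(stream_app.run(), clock_app.serve())

if __name__ == "__main__":
    asyncio.run(main())
```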
External API of the aioclock package, which can be used to interact with the AioClock instance. This module can be very useful if you intend to use aioclock in a web application or a CLI tool.
Other tools and extensions are built on top of this API.
A note on writing to the aioclock API and changing its state
Right now the state of the AioClock instance lives only in memory, so if you write an API call that changes, say, a task's trigger time, the change will not persist. In the future we might store the state of the AioClock instance in a database, so that it always remains the same. But this is a bit tricky and implicit, because then your code gets ignored and the database is preferred over the codebase. For now you may consider it a way to change something without redeploying the application, but writing state this way is not yet recommended.
+
+ Bases: BaseModel
Metadata of the task that is included in the AioClock instance.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `id` | `UUID` | Task ID that is unique for each task, and changes every time you run the aioclock app. In the future we might store task IDs in a database, so that they always remain the same. |
| `trigger` | `Union[TriggerT, Any]` | Trigger that is used to run the task; typed as `Any` as well, to ease implementing new triggers. |
| `task_name` | `str` | Name of the task function. |
async
+
+
+¶run_specific_task(task_id: UUID, app: AioClock)
+
Run a specific task immediately by its ID, from the AioClock instance.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `task_id` | `UUID` | Task ID that is unique for each task, and changes every time you run the aioclock app. In the future we might store task IDs in a database, so that they always remain the same. | required |
| `app` | `AioClock` | AioClock instance to run the task from. | required |
aioclock/api.py
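A short sketch of triggering a task on demand with run_specific_task, looking up its runtime ID first via get_metadata_of_all_tasks (the example task is illustrative):

```python
from aioclock import AioClock, Once
from aioclock.api import get_metadata_of_all_tasks, run_specific_task

app = AioClock()

@app.task(trigger=Once())
async def send_report():
    print("report sent")

async def rerun_report_task():
    # Find the task's ID for this run of the app, then run it immediately.
    metadata = await get_metadata_of_all_tasks(app)
    report_task = next(m for m in metadata if m.task_name == "send_report")
    await run_specific_task(report_task.id, app)
```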
async
+
+
+¶Runs an aioclock decorated function, with all the dependencies injected.
+Can be used to run a task function with all the dependencies injected.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `Callable[P, Awaitable[T]]` | Function to run with all the dependencies injected. Must be decorated with an aioclock task decorator. | required |
from aioclock import Once, AioClock, Depends
+from aioclock.api import run_with_injected_deps
+
+app = AioClock()
+
+def some_dependency():
+ return 1
+
+@app.task(trigger=Once())
+async def main(bar: int = Depends(some_dependency)):
+ print("Hello World")
+ return bar
+
+async def some_other_func():
+ foo = await run_with_injected_deps(main)
+ assert foo == 1
+
aioclock/api.py
async
+
+
+¶get_metadata_of_all_tasks(
+ app: AioClock,
+) -> list[TaskMetadata]
+
Get metadata of all tasks that are included in the AioClock instance.
This function can be used to mutate the TaskMetadata object, e.g. to change the trigger of a task.
For now this is not yet recommended, as you might experience some unexpected behavior. In future versions, I'd like to make mutating the data more stable and reliable.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `app` | `AioClock` | AioClock instance to get the metadata of all tasks from. | required |
aioclock/api.py
To initialize the AioClock instance, you need to import the AioClock class from the aioclock module. The AioClock class represents the aioclock application and handles the tasks and groups that it will run.
Another way to modularize your code is to use Group, which is similar in spirit to a router in web frameworks.
AioClock(
+ *,
+ lifespan: Optional[
+ Callable[
+ [AioClock],
+ AsyncContextManager[AioClock]
+ | ContextManager[AioClock],
+ ]
+ ] = None,
+ limiter: Optional[CapacityLimiter] = None
+)
+
AioClock is the main class that will be used to run the tasks. +It will be responsible for running the tasks in the right order.
+ + +To run the aioclock final app simply do:
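```python
import asyncio

from aioclock import AioClock

app = AioClock()

if __name__ == "__main__":
    asyncio.run(app.serve())
```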
You can define startup and shutdown logic using the lifespan parameter of the AioClock instance. It should be an async context manager that receives the AioClock application as an argument. You can find an example below.
+ + + import asyncio
+ from contextlib import asynccontextmanager
+
+ from aioclock import AioClock
+
+ ML_MODEL = [] # just some imaginary component that needs to be started and stopped
+
+
+ @asynccontextmanager
+ async def lifespan(app: AioClock):
+ ML_MODEL.append(2)
+ print("UP!")
+ yield app
+ ML_MODEL.clear()
+ print("DOWN!")
+
+
+ app = AioClock(lifespan=lifespan)
+
+
+ if __name__ == "__main__":
+ asyncio.run(app.serve())
+
Here we are simulating the expensive startup operation of loading the model by putting the (fake) model into the ML_MODEL list before the yield. This code will be executed before the application starts operating, during startup.
And then, right after the yield, we unload the model. This code will be executed after the application finishes running its tasks, right before shutdown. This could, for example, release resources like memory, a GPU, or a database connection.
+It would also happen when you're stopping your application gracefully, for example, when you're shutting down your container.
The lifespan can also be a synchronous context manager. Check the example below.
import asyncio
from contextlib import contextmanager

from aioclock import AioClock

ML_MODEL = []

@contextmanager
def lifespan_sync(sync_app: AioClock):
    ML_MODEL.append(2)
    print("UP!")
    yield sync_app
    ML_MODEL.clear()
    print("DOWN!")

sync_app = AioClock(lifespan=lifespan_sync)

if __name__ == "__main__":
    asyncio.run(sync_app.serve())
Attributes:
| Name | Type | Description |
|---|---|---|
| `lifespan` | | A context manager used to handle the startup and shutdown of the application. If not provided, the application will run without any startup and shutdown logic. To understand it better, check the examples and documentation above. |
| `limiter` | | AnyIO `CapacityLimiter` used to limit the total number of threads (and hence tasks) running at the same time. If not provided, it falls back to the default limiter set at the application level, and otherwise to the default limiter set by AnyIO. |
aioclock/app.py
property
+
+
+¶Dependencies provider that will be used to inject dependencies in tasks.
+Override a dependency with a new one.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `original` | `Callable[..., Any]` | Original dependency that will be overridden. | required |
| `override` | `Callable[..., Any]` | New dependency that will override the original one. | required |
aioclock/app.py
include_group(group: Group) -> None
+
Include a group of tasks that will be run by AioClock.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `group` | `Group` | Group of tasks that will be run together. | required |
aioclock/app.py
task(*, trigger: BaseTrigger)
+
Decorator to add a task to the AioClock instance.
If the decorated function is sync, aioclock will run it in a thread pool executor, using AnyIO. But if you call the decorated function directly yourself, it will run in the same thread and block the event loop. The intention is that you don't have to change all your sync functions into coroutine functions, and they can still be used outside of aioclock if needed; see the sketch below.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `trigger` | `BaseTrigger` | Trigger that determines when the task runs. | required |
aioclock/app.py
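For instance, a blocking sync function can be registered as-is and aioclock will off-load it to a worker thread (the trigger and the sleep are illustrative):

```python
import time

from aioclock import AioClock, Every

app = AioClock()

@app.task(trigger=Every(seconds=10))
def blocking_job():
    # Sync function: aioclock runs it in a thread pool via AnyIO,
    # so this sleep does not block the event loop.
    time.sleep(2)
    print("finished blocking work")
```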
async
+
+
Serve the AioClock application and run the tasks in the right order: first the startup tasks, then the regular tasks, and finally the shutdown tasks.
+ +aioclock/app.py
The best use case is modularity and separation of concerns. For example, you can have one group of tasks responsible for sending emails, and another group responsible for sending notifications, as sketched below.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `limiter` | `Optional[CapacityLimiter]` | AnyIO `CapacityLimiter` used to limit the total number of threads (and hence tasks) running at the same time. If not provided, it falls back to the default limiter set at the application level, and otherwise to the default limiter set by AnyIO. | `None` |
aioclock/group.py
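A small sketch of grouping related tasks and including them in the app (the email task is illustrative):

```python
from aioclock import AioClock, Group, Once

email_group = Group()

@email_group.task(trigger=Once())
async def send_welcome_email():
    print("welcome email sent")

app = AioClock()
app.include_group(email_group)
```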
task(*, trigger: BaseTrigger)
+
Decorator to add a task to the AioClock instance.
If the decorated function is sync, aioclock will run it in a thread pool executor, using AnyIO. But if you call the decorated function directly yourself, it will run in the same thread and block the event loop. The intention is that you don't have to change all your sync functions into coroutine functions, and they can still be used outside of aioclock if needed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `trigger` | `BaseTrigger` | Trigger that determines when the task runs. | required |
aioclock/group.py
Extensions for aioclock.
AioClock is very extensible, and you can add your own extensions to it. Extensions allow you to interact with your AioClock instance from different layers of your application. For instance, the FastAPI plugin allows you to run a specific task immediately from an HTTP API, or to see your tasks and when they are going to run next.
+ + + +FastAPI extension to manage the tasks of the AioClock instance in HTTP Layer.
+ + +To use FastAPI Extension, please make sure you do pip install aioclock[fastapi]
.
make_fastapi_router(
+ aioclock: AioClock,
+ router: Union[APIRouter, None] = None,
+)
+
Make a FastAPI router that exposes the tasks of the AioClock instance and its external Python API at the HTTP layer. You can pass your own router to this function to have dependencies injected into it, or to add any authorization logic you want.
+ + +Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `aioclock` | `AioClock` | AioClock instance to get the tasks from. | required |
| `router` | `Union[APIRouter, None]` | FastAPI router to add the routes to. If not provided, a new router will be created. | `None` |
import asyncio
+from contextlib import asynccontextmanager
+
+from fastapi import FastAPI
+
+from aioclock import AioClock
+from aioclock.ext.fast import make_fastapi_router
+from aioclock.triggers import Every, OnStartUp
+
+clock_app = AioClock()
+
+@clock_app.task(trigger=OnStartUp())
+async def startup():
+ print("Starting...")
+
+@clock_app.task(trigger=Every(seconds=3600))
+async def foo():
+ print("Foo is processing...")
+
+
+@asynccontextmanager
+async def lifespan(app: FastAPI):
+ task = asyncio.create_task(clock_app.serve())
+ yield
+
+ try:
+ task.cancel()
+ await task
+ except asyncio.CancelledError:
+ ...
+
+
+app = FastAPI(lifespan=lifespan)
+app.include_router(make_fastapi_router(clock_app))
+
+if __name__ == "__main__":
+ import uvicorn
+ # uvicorn.run(app)
+
aioclock/ext/fast.py
Aioclock wraps your functions in a task object and appends the task to the list of tasks in the AioClock instance. After collecting all the tasks from decorated functions, aioclock serves them in the required order (startup, normal, shutdown).
These tasks keep running until the trigger's should_trigger method returns False.
dataclass
+
+
+¶Task(
+ func: Callable[..., Awaitable[Any]],
+ trigger: BaseTrigger,
+ id: UUID = uuid4(),
+)
+
A task that will be run by AioClock, which always has a function and a trigger. This is used internally when you decorate your function with aioclock.task.
Attributes:
| Name | Type | Description |
|---|---|---|
| `func` | `Callable[..., Awaitable[Any]]` | Decorated function that will be run by AioClock. |
| `trigger` | `BaseTrigger` | Trigger that will be used to run the function. |
| `id` | `UUID` | Task ID that is unique for each task, and changes every time you run the aioclock app. In the future we might store task IDs in a database, so that they always remain the same. |
async
+
+
Run the task and handle exceptions. If the task fails, the error is logged with its exception, but the remaining tasks keep running.
+ +aioclock/task.py
Triggers are used to determine when the event should be triggered. It can be based on time, or some other condition.
+You can create custom triggers by inheriting from BaseTrigger
class.
Don't run CPU-intensive or thread-blocking IO tasks
AioClock's triggers all run asynchronously, on a single CPU. So if you run a CPU-intensive task, or a task that blocks the thread, it will block the entire event loop.
If you have a sync IO task, it's recommended to use run_in_executor to run it in a separate thread, as sketched below. Or use libraries like asyncer or trio to run the task in a separate thread.
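A sketch of the run_in_executor approach inside an async task (the blocking function and the trigger are illustrative):

```python
import asyncio
import time

from aioclock import AioClock, Every

app = AioClock()

def blocking_io() -> str:
    time.sleep(1)  # stand-in for a blocking call (file, DB driver, ...)
    return "done"

@app.task(trigger=Every(seconds=30))
async def fetch():
    loop = asyncio.get_running_loop()
    # Off-load the blocking call to the default thread pool executor,
    # so the event loop and the other triggers keep running.
    result = await loop.run_in_executor(None, blocking_io)
    print(result)
```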
+ Bases: BaseModel
, ABC
, Generic[TriggerTypeT]
Base class for all triggers. +A trigger is a way to determine when the event should be triggered. It can be based on time, or some other condition.
+ + +get_waiting_time_till_next_trigger
is called to get the time in seconds, after which the event should be triggered.trigger_next
is called immidiately after that, which triggers the event.You can create trigger by yourself, by inheriting from BaseTrigger
class.
from aioclock.triggers import BaseTrigger
+from typing import Literal
+
+class Forever(BaseTrigger[Literal["Forever"]]):
+ type_: Literal["Forever"] = "Forever"
+
+ def should_trigger(self) -> bool:
+ return True
+
+ async def trigger_next(self) -> None:
+ return None
+
+ async def get_waiting_time_till_next_trigger(self):
+ if self.should_trigger():
+ return 0
+ return None
+
Attributes:
| Name | Type | Description |
|---|---|---|
| `type_` | `TriggerTypeT` | Type of the trigger. It is a string, which is used to identify the trigger's name. You can change the type by using … |
| `expected_trigger_time` | `Union[datetime, None]` | Expected time when the event should be triggered. This gets updated by the task runner. It can be used at the API layer, to know when the event is expected to be triggered. |
abstractmethod
+ async
+
+
+¶trigger_next
keeps track of the event and triggers it. The function shall return when the event is triggered and should be executed.
should_trigger checks whether the event should be triggered or not. If not, the event will not be triggered anymore. You can save the state of the trigger and task inside the instance, and then check whether the event should be triggered. For instance, the LoopController-based triggers keep track of the number of times the event has been triggered, and then check whether it should be triggered again.
aioclock/triggers.py
abstractmethod
+ async
+
+
+¶Returns the time in seconds, after which the event should be triggered. +Returns None, if the event should not trigger anymore.
+ +aioclock/triggers.py
+ Bases: BaseTrigger[Literal[FOREVER]]
A trigger that is always triggered immediately.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `type_` | `Literal[FOREVER]` | Type of the trigger. It is a string, which is used to identify the trigger's name. You can change the type by using … |
+ Bases: BaseTrigger
, ABC
, Generic[TriggerTypeT]
Base class for all triggers that have loop control.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `type_` | `TriggerTypeT` | Type of the trigger. It is a string, which is used to identify the trigger's name. You can change the type by using … |
| `max_loop_count` | `Union[PositiveInt, None]` | The maximum number of times the event should be triggered. If set to 3, the event will not be triggered a 4th time. If set to None, it will keep running forever. This is available for all triggers that inherit from `LoopController`. |
| `_current_loop_count` | `int` | Current loop count, used to keep track of the number of times the event has been triggered. Private attribute, should not be accessed directly. This is available for all triggers that inherit from `LoopController`. |
+ Bases: LoopController[Literal[ONCE]]
A trigger that is triggered only once. It is used to trigger the event only once, and then stop.
+ + +
+ Bases: LoopController[Literal[ON_START_UP]]
Just like Once, but it triggers the event only once, when the application starts up.
+ + +
+ Bases: LoopController[Literal[ON_SHUT_DOWN]]
Just like Once, but it triggers the event only once, when the application shuts down.
+ + +
+ Bases: LoopController[Literal[EVERY]]
A trigger that is triggered every x time units.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `first_run_strategy` | `Literal['immediate', 'wait']` | Strategy to use for the first run. If `immediate`, the task runs right away and then waits for the interval before the next run; if `wait`, the task waits for the interval before its first run. |
| `seconds` | `Union[PositiveNumber, None]` | Seconds to wait before triggering the event. |
| `minutes` | `Union[PositiveNumber, None]` | Minutes to wait before triggering the event. |
| `hours` | `Union[PositiveNumber, None]` | Hours to wait before triggering the event. |
| `days` | `Union[PositiveNumber, None]` | Days to wait before triggering the event. |
| `weeks` | `Union[PositiveNumber, None]` | Weeks to wait before triggering the event. |
| `max_loop_count` | `Union[PositiveInt, None]` | The maximum number of times the event should be triggered. If set to 3, the event will not be triggered a 4th time. If set to None, it will keep running forever. This is available for all triggers that inherit from `LoopController`. |
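For example (the task body and the chosen interval are illustrative):

```python
from aioclock import AioClock, Every

app = AioClock()

@app.task(trigger=Every(hours=1, first_run_strategy="wait", max_loop_count=24))
async def hourly_cleanup():
    # Waits one hour before the first run, then runs hourly, stopping after 24 runs.
    print("cleaning up...")
```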
+ Bases: LoopController[Literal[AT]]
A trigger that is triggered at a specific time.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `second` | `Annotated[int, Interval(ge=0, le=59)]` | Second to trigger the event. |
| `minute` | `Annotated[int, Interval(ge=0, le=59)]` | Minute to trigger the event. |
| `hour` | `Annotated[int, Interval(ge=0, le=24)]` | Hour to trigger the event. |
| `at` | `Literal['every monday', 'every tuesday', 'every wednesday', 'every thursday', 'every friday', 'every saturday', 'every sunday', 'every day']` | Day of week to trigger the event. You get inline typing support when using the trigger. |
| `tz` | `str` | Timezone to use for the event. |
| `max_loop_count` | `Union[PositiveInt, None]` | The maximum number of times the event should be triggered. If set to 3, the event will not be triggered a 4th time. If set to None, it will keep running forever. This is available for all triggers that inherit from `LoopController`. |
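For example (the time, day and timezone are illustrative):

```python
from aioclock import AioClock, At

app = AioClock()

@app.task(trigger=At(hour=8, minute=30, second=0, at="every monday", tz="Europe/Istanbul"))
async def weekly_digest():
    # Fires every Monday at 08:30 in the given timezone.
    print("sending weekly digest")
```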
+ Bases: LoopController[Literal[CRON]]
A trigger that is triggered at a specific time, using the cron job format. If you are not familiar with the cron format, you can read about it in this Wikipedia article. Or if you need an online tool to generate cron expressions, you can use crontab.guru.
+ + +Attributes:
| Name | Type | Description |
|---|---|---|
| `cron` | `str` | Cron job format to trigger the event. |
| `tz` | `str` | Timezone to use for the event. |
| `max_loop_count` | `Union[PositiveInt, None]` | The maximum number of times the event should be triggered. If set to 3, the event will not be triggered a 4th time. If set to None, it will keep running forever. This is available for all triggers that inherit from `LoopController`. |
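For example (the cron expression and timezone are illustrative; depending on your install, cron support may require an optional extra):

```python
from aioclock import AioClock
from aioclock.triggers import Cron

app = AioClock()

@app.task(trigger=Cron(cron="0 12 * * *", tz="UTC"))
async def midday_job():
    # "0 12 * * *" fires every day at 12:00 in the configured timezone.
    print("running midday job")
```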
+ Bases: LoopController[Literal[OR]]
A trigger that triggers the event if any of the inner triggers are met.
Note that any trigger used with OrTrigger is fully respected, hence if you have two triggers with max_loop_count=1, then each trigger will be triggered only once and then stop, which results in the OrTrigger running only twice. Check the example to understand this intended behaviour.
from aioclock import AioClock, OrTrigger, Every, At
+
+app = AioClock()
+
+@app.task(trigger=OrTrigger(  # this gets triggered 20 times in total, because each inner trigger fires 10 times
+ triggers=[
+ Every(seconds=3, max_loop_count=10), # will trigger the event 10 times
+ At(hour=12, minute=30, tz="Asia/Kolkata", max_loop_count=10) # will trigger the event 10 times
+ ]
+))
+async def task():
+ print("Hello World!")
+
Attributes:
| Name | Type | Description |
|---|---|---|
| `triggers` | `list[TriggerT]` | List of triggers to use. |
| `max_loop_count` | `Union[PositiveInt, None]` | The maximum number of times the event should be triggered. If set to 3, the event will not be triggered a 4th time. If set to None, it will keep running forever. This is available for all triggers that inherit from `LoopController`. |