
Can this container be used for a worker node? #37

Open · mlamoure opened this issue Aug 1, 2023 · 10 comments
Labels: question (Further information is requested)

mlamoure commented Aug 1, 2023

New to Cronicle, and excited about the possibilities. I'm trying to figure out whether your container can be used for a second node. My main use case is using Cronicle to manage the crontab of another machine where the jobs must run locally due to file-system access.

bluet (Owner) commented Aug 26, 2023

Hi @mlamoure, since this is an enhanced Docker version of the original Cronicle package (with docker-in-docker support added), I think you can do anything the original Cronicle can do with this one.
Please let me know the result of your experiment, and don't hesitate to send a PR if anything's broken! :-)

bluet added the question label Aug 29, 2023
peterbuga (Contributor) commented
@mlamoure I use this image in a main-node + a few workers setup, with S3 as the storage backend (it could just as easily be used in a multiple-main-nodes setup per Cronicle's documentation). Basically, anything that can be done with the original Cronicle can also be done with this image, and yes, I use the docker-in-docker support to start various containers 👌
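
As a concrete illustration of the docker-in-docker angle (not taken from peterbuga's setup; the image name and command are placeholders, and it assumes the docker CLI is available inside the container, which the image's docker-in-docker support suggests), a Cronicle Shell Plugin event could talk to the host's Docker daemon through the bind-mounted socket:

#!/bin/sh
# Shell Plugin script for a Cronicle event (illustrative sketch only).
# /var/run/docker.sock is bind-mounted from the host (see the compose file later in this thread),
# so this `docker run` starts a sibling container on the host.
docker run --rm alpine:3 echo "hello from a job-spawned container"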

pradyumnvermaa commented
@bluet I ran into a screen-flickering issue while setting up a worker node. Can somebody please help here?

peterbuga (Contributor) commented
@pradyumnvermaa you are not supposed to be able to log in on a worker node, even though it displays a login form (an odd choice).

cheretbe commented Jan 4, 2024

@peterbuga I'm trying to log in on a server, not a worker, and I get this flickering issue anyway. See my comment here.
Could you please share your main-node + workers setup? Just so I can see what needs to be configured for this to work properly.

peterbuga (Contributor) commented

@cheretbe this is my configuration for Cronicle with 1 master server + multiple slave nodes, using S3 storage (the S3 storage part can be ignored in favor of the filesystem; I just use it as a basic backup mechanism in case I need to move the master to another server).

base docker-compose template that's extended by each server:

---
version: "3.9"

services:
  cronicle:
    image: "bluet/cronicle-docker:0.9.39"
    container_name: "cronicle"
    # hostname: "cronicle-X.domain.tld"
    restart: "unless-stopped"
    volumes:
      - /opt/cronicle/config.json:/opt/cronicle/conf/config.json:ro
      - /var/run/docker.sock:/var/run/docker.sock
      # ran into problems with Cronicle being stuck after a restart (stale .pid file); pid_file in config.json points here
      - type: "tmpfs"
        target: "/var/runtmp"
      - /opt/cronicle/ssl:/opt/cronicle/ssl:ro
      - /opt/cronicle/data:/opt/cronicle/data
      - /opt/cronicle/logs:/opt/cronicle/logs
      - /opt/cronicle/plugins:/opt/cronicle/plugins
    environment:
      CRONICLE_WebServer__http_port: 80
      CRONICLE_WebServer__https_port: 443

master server compose.yml

....
  cronicle:
    extends:
      file: "cronicle/docker-compose.yml"
      service: "cronicle"
    hostname: "cronicle.domain.tld"
    environment:
      CRONICLE_base_app_url: "https://cronicle.domain.tld:443"

slave server compose.yml

...
  cronicle:
    extends:
      file: "cronicle/docker-compose.yml"
      service: "cronicle"
    hostname: "cronicle-home.domain.tld"
    environment:
      CRONICLE_base_app_url: "https://cronicle-home.domain.tld"

config.json

{
    "base_app_url": "https://cronicle.domain.tld:443",
    "email_from": "[email protected]",
    "smtp_hostname": "smtp",
    "smtp_port": 25,
    "secret_key": "xxxxxx",
    "log_dir": "logs",
    "log_filename": "[component].log",
    "log_columns":
    [
        "hires_epoch",
        "date",
        "hostname",
        "pid",
        "component",
        "category",
        "code",
        "msg",
        "data"
    ],
    "log_archive_path": "logs/archives/[yyyy]/[mm]/[dd]/[filename]-[yyyy]-[mm]-[dd].log.gz",
    "log_crashes": true,
    "copy_job_logs_to": "",
    "queue_dir": "queue",
    "pid_file": "/var/runtmp/cronicled.pid",
    "debug_level": 8,
    "maintenance": "04:00",
    "list_row_max": 500,
    "job_data_expire_days": 30,
    "child_kill_timeout": 10,
    "dead_job_timeout": 120,
    "master_ping_freq": 20,
    "master_ping_timeout": 60,
    "udp_broadcast_port": 3014,
    "scheduler_startup_grace": 10,
    "universal_web_hook": "",
    "track_manual_jobs": false,
    "server_comm_use_hostnames": true,
    "web_direct_connect": true,
    "web_socket_use_hostnames": true,
    "job_memory_max": 1073741824,
    "job_memory_sustain": 0,
    "job_cpu_max": 0,
    "job_cpu_sustain": 0,
    "job_log_max_size": 0,
    "job_env":
    {},
    "web_hook_text_templates":
    {
        "job_start": "Job started on [hostname]: [event_title] [job_details_url]",
        "job_complete": "Job completed successfully on [hostname]: [event_title] [job_details_url]",
        "job_failure": "Job failed on [hostname]: [event_title]: Error [code]: [description] [job_details_url]",
        "job_launch_failure": "Failed to launch scheduled event: [event_title]: [description] [edit_event_url]"
    },
    "client":
    {
        "name": "Cronicle",
        "debug": 1,
        "default_password_type": "password",
        "privilege_list":
        [
            {
                "id": "admin",
                "title": "Administrator"
            },
            {
                "id": "create_events",
                "title": "Create Events"
            },
            {
                "id": "edit_events",
                "title": "Edit Events"
            },
            {
                "id": "delete_events",
                "title": "Delete Events"
            },
            {
                "id": "run_events",
                "title": "Run Events"
            },
            {
                "id": "abort_events",
                "title": "Abort Events"
            },
            {
                "id": "state_update",
                "title": "Toggle Scheduler"
            }
        ],
        "new_event_template":
        {
            "enabled": 1,
            "params":
            {},
            "timing":
            {
                "minutes":
                [
                    0
                ]
            },
            "max_children": 1,
            "timeout": 3600,
            "catch_up": 0,
            "queue_max": 1000
        }
    },
    "Storage":
    {
        "transactions": true,
        "trans_auto_recover": true,
        "engine": "S3",
        "AWS":
        {
            "endpoint": "https://xxxxx",
            "accessKeyId": "xxxxx",
            "secretAccessKey": "xxxxx",
            "hostPrefixEnabled": false,
            "endpointPrefix": false,
            "region": "auto",
            "correctClockSkew": true,
            "maxRetries": 5,
            "httpOptions":
            {
                "connectTimeout": 5000,
                "timeout": 5000
            }
        },
        "S3":
        {
            "fileExtensions": true,
            "params":
            {
                "Bucket": "cronicle"
            },
            "connectTimeout": 5000,
            "socketTimeout": 5000,
            "maxAttempts": 50,
            "keyPrefix": "",
            "cache": {
                "enabled": true,
                "maxItems": 1000,
                "maxBytes": 10485760
            }
        }
    },
    "WebServer":
    {
        "http_port": 3012,
        "http_htdocs_dir": "htdocs",
        "http_max_upload_size": 104857600,
        "http_static_ttl": 3600,
        "http_static_index": "index.html",
        "http_server_signature": "Cronicle 1.0",
        "http_gzip_text": true,
        "http_timeout": 30,
        "http_regex_json": "(text|javascript|js|json)",
        "http_response_headers":
        {
            "Access-Control-Allow-Origin": "*"
        },
        "https": true,
        "https_port": 3013,
        "https_cert_file": "/opt/cronicle/ssl/ssl.crt",
        "https_key_file": "/opt/cronicle/ssl/ssl.key",
        "https_force": false,
        "https_timeout": 30,
        "https_header_detect":
        {
            "Front-End-Https": "^on$",
            "X-Url-Scheme": "^https$",
            "X-Forwarded-Protocol": "^https$",
            "X-Forwarded-Proto": "^https$",
            "X-Forwarded-Ssl": "^on$"
        }
    },
    "User":
    {
        "session_expire_days": 90,
        "max_failed_logins_per_hour": 5,
        "max_forgot_passwords_per_hour": 3,
        "free_accounts": false,
        "sort_global_users": true,
        "use_bcrypt": true,
        "email_templates":
        {
            "welcome_new_user": "conf/emails/welcome_new_user.txt",
            "changed_password": "conf/emails/changed_password.txt",
            "recover_password": "conf/emails/recover_password.txt"
        },
        "default_privileges":
        {
            "admin": 0,
            "create_events": 1,
            "edit_events": 1,
            "delete_events": 1,
            "run_events": 0,
            "abort_events": 0,
            "state_update": 0
        }
    }
}

The master server uses a cronicle.domain.tld hostname; its hostname match regex in Cronicle is /^(cronicle\.domain\.tld)$/.
Slave servers use a cronicle-[location].domain.tld format, with the hostname regex /^(cronicle-.*\.domain\.tld)$/.

I used the above format to be able to access the workers' logs in real time; I was getting nervous trying to access the logs behind an internal reverse proxy due to Cronicle's current limitations.

The SSL references/certificates are locally generated. Basically, traefik does all the reverse-proxy magic on each node, and because everything is accessed on :443, Cronicle needs to generate the proper URLs/socket format, so I had to route everything via the HTTPS port.
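
For illustration only, the kind of traefik v2 labels involved might look like the sketch below (hostname, entrypoint name, and backend port are assumptions, not taken from the setup above):

# hypothetical labels on the cronicle service; values are placeholders
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.cronicle.rule=Host(`cronicle.domain.tld`)"
  - "traefik.http.routers.cronicle.entrypoints=websecure"
  - "traefik.http.routers.cronicle.tls=true"
  # Cronicle serves HTTPS itself on the container's https_port (443 in the compose template above)
  - "traefik.http.services.cronicle.loadbalancer.server.port=443"
  - "traefik.http.services.cronicle.loadbalancer.server.scheme=https"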

hope it helps somehow 🙆‍♂️

cheretbe commented Jan 5, 2024

hope it helps somehow 🙆‍♂️

It definitely helps, thanks!
My plan is to migrate from the clunky and resource-hungry Rundeck to Cronicle, which at first sight looks like a good alternative. But the initial configuration is not that obvious.

mlamoure (Author) commented Apr 6, 2024

I've been studying this and am hoping someone can help me understand how to configure a second instance of Cronicle as a slave. I'm deploying the second instance on a different set of infrastructure on the same LAN. Looking at @peterbuga's config, I'm not seeing which configuration settings make it a worker.

peterbuga (Contributor) commented Apr 6, 2024 via email

mlamoure (Author) commented Apr 6, 2024

Thanks, I did read the docs, but the points you are making are buried under the "ops notes" on the Setup wiki, and I thought they only applied to multi-cluster setups. Thanks for the hints.
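
For anyone landing here later, a rough summary based on Cronicle's Setup docs (hedged, not verified against this image): every server in a cluster needs the same secret_key and access to the same storage backend (hence S3 or a shared filesystem), servers on the same subnet can be auto-discovered via UDP broadcast (udp_broadcast_port) or added manually in the Servers tab of the admin UI, and a server group's hostname-match regex (as in the examples above) controls which machines join the group and whether they are eligible to become primary. In peterbuga's setup the same config.json is shared and the per-node base_app_url comes from the CRONICLE_base_app_url environment variable; a worker's own config would otherwise mostly differ in base_app_url, for example (placeholder values):

{
    "base_app_url": "https://cronicle-home.domain.tld",
    "secret_key": "same-value-as-on-the-master",
    "Storage": { "engine": "S3", "S3": { "params": { "Bucket": "cronicle" } } }
}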
