Bitnami Image + Official Helm Chart Doesn't Work With Replication #67

Closed
jefftap opened this issue Jul 5, 2019 · 5 comments

@jefftap

jefftap commented Jul 5, 2019

When substituting the latest TimescaleDB Bitnami image (PG 11, TimescaleDB 1.3.2) into the official PostgreSQL Helm chart, the replication slaves fail to start: the image tries to install the TimescaleDB extension but does not supply the password for the postgres user. The cause seems to be a difference in how the master and slaves start up; the extension installation script appears to rely on HBA authentication being set to trust mode.
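
For reference, a minimal sketch of the kind of values override involved, assuming the official chart exposes image.* and replication.* keys for this (the repository and tag below are placeholders for whichever TimescaleDB Bitnami image is being substituted):

image:
  repository: example/timescaledb-bitnami  # placeholder: substituted TimescaleDB Bitnami image
  tag: 1.3.2-pg11                          # placeholder tag
replication:
  enabled: true
  user: replicator
  password: redacted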

@agronholm
Contributor

Have you looked at #60?

@agronholm
Contributor

There is also this: https://github.com/timescale/timescaledb-kubernetes

@agronholm
Contributor

After two weeks of banging my head against the wall, I finally managed to get replication to work.
I'm documenting the procedure here while it's still fresh in my memory.
I was using the Bitnami image (1.5.1-pg11) and the Bitnami Helm chart (5.0.0) with an existing database. Switching to a replicated setup involved the following:

  • Using a custom image (PR Fixed replication slave failing to start #60) to prevent the slave from trying to install the timescaledb extension on the database, since the chart does not provide any way to override these scripts on the slave
  • Providing the PostgreSQL tuning values as a configuration override (postgresql.postgresqlExtendedConf)
  • Manually creating a replication account on the master and saving the password in the chart configuration (protected by helm-secrets)
  • Disabling the liveness probe, as it keeps interrupting the initial base backup from the master
  • Replacing the old server's persistent volume claim with one matching the new name, and reattaching the persistent volume to that (or I could've just specified postgresql.persistence.existingClaim to point to the existing PVC; see the values sketch after this list)
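
A minimal sketch of that existingClaim alternative, assuming persistence.existingClaim is the chart's key for reusing a pre-existing PVC instead of creating a new one (the claim name below is a placeholder):

postgresql:
  persistence:
    enabled: true
    existingClaim: data-old-postgresql-0  # placeholder: name of the existing PVC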

Here are my configuration overrides:

postgresql:
  image:
    repository: agronholm/timescaledb
    tag: 1.5.1-pg11-bitnami
  replication:
    enabled: true
    user: replicator
    password: redacted
  livenessProbe:
    enabled: false
  postgresqlExtendedConf:
    shared_buffers: 256MB
    effective_cache_size: 768MB
    maintenance_work_mem: 128MB
    work_mem: 1310kB
    wal_buffers: 7864kB
    min_wal_size: 4GB
    max_wal_size: 8GB
    default_statistics_target: "500"
    random_page_cost: "1.1"
    checkpoint_completion_target: "0.9"
    max_connections: "100"
    max_locks_per_transaction: "64"
    effective_io_concurrency: "2"
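
Note that the overrides are nested under a top-level postgresql: key, presumably because the chart is consumed here as a subchart (a dependency of an umbrella chart); when installing the Bitnami chart directly, the same keys would sit at the top level of the values file.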

@erimatnor
Contributor

Thank you @agronholm for the information you provided. Closing since you found a solution.

@jwgmeligmeyling

jwgmeligmeyling commented May 4, 2021

I would rather see #60 merged, since it actually fixes the issue 😉
