
Broken annotation "argocd-image-updater.argoproj.io/image-list" if there is more than one image in it #1008

Closed
onesb23 opened this issue Jan 10, 2025 · 8 comments
Labels
bug Something isn't working

Comments

onesb23 commented Jan 10, 2025

Describe the bug
If multiple images are listed, only the last one is applied.

To Reproduce
An application that contains the following annotations:

metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: myregistry/myimage1:prod,myregistry/myimage2:prod
    argocd-image-updater.argoproj.io/update-strategy: digest

According to the documentation, the corresponding kustomize section should be generated:

spec:
  source:
    kustomize:
      images:
      - myregistry/myimage1:prod@sha256:blobloblo...
      - myregistry/myimage2:prod@sha256:blablabla...

In reality, this is the kustomize that gets created:

spec:
  source:
    kustomize:
      images:
        myregistry/myimage2@sha256:blablabla...

Expected behavior
I expect both images from the image-list to be resolved and applied.

Additional context
This worked in some previous version, but rolling back to earlier versions does not fix the problem.

Version
v0.15.2

Logs
The log contains messages like these at each registry synchronization interval:

time="2025-01-10T16:03:34Z" level=info msg="Committing 1 parameter update(s) for application myapp" application=myapp
time="2025-01-10T16:03:35Z" level=info msg="Successfully updated the live application spec" application=myapp
time="2025-01-10T16:03:35Z" level=info msg="Processing results: applications=9 images_considered=8 images_skipped=3 images_updated=1 errors=0"
time="2025-01-10T16:04:06Z" level=info msg="Starting image update cycle, considering 8 annotated application(s) for update"
time="2025-01-10T16:04:07Z" level=info msg="Setting new image to myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"
time="2025-01-10T16:04:07Z" level=info msg="Successfully updated image 'myimage2:prod@dummy' to 'myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36', but pending spec update (dry run=false)" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"
time="2025-01-10T16:04:07Z" level=info msg="Committing 1 parameter update(s) for application myapp" application=myapp
time="2025-01-10T16:04:08Z" level=info msg="Successfully updated the live application spec" application=myapp
time="2025-01-10T16:04:08Z" level=info msg="Processing results: applications=9 images_considered=8 images_skipped=3 images_updated=1 errors=0"
time="2025-01-10T16:04:38Z" level=info msg="Starting image update cycle, considering 8 annotated application(s) for update"
time="2025-01-10T16:04:39Z" level=info msg="Setting new image to myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"
time="2025-01-10T16:04:39Z" level=info msg="Successfully updated image 'myimage2:prod@dummy' to 'myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36', but pending spec update (dry run=false)" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"
time="2025-01-10T16:04:40Z" level=info msg="Committing 1 parameter update(s) for application myapp" application=myapp
time="2025-01-10T16:04:41Z" level=info msg="Successfully updated the live application spec" application=myapp
time="2025-01-10T16:04:41Z" level=info msg="Processing results: applications=9 images_considered=8 images_skipped=3 images_updated=1 errors=0"
time="2025-01-10T16:05:11Z" level=info msg="Starting image update cycle, considering 8 annotated application(s) for update"
time="2025-01-10T16:05:12Z" level=info msg="Setting new image to myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"
time="2025-01-10T16:05:12Z" level=info msg="Successfully updated image 'myimage2:prod@dummy' to 'myimage2:prod@sha256:9d2f8f8fb12f01697cd212f1fb79cca11a32d8120243a384e9cab153ad576c36', but pending spec update (dry run=false)" alias= application=myapp image_name=myimage2 image_tag=dummy registry="myregistry"

In fact, the image in the pod is not updated.
If only one image is specified (either the first or the second), the pod is updated successfully.
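
(For reference, the image-list annotation also supports per-image aliases, which show up in the alias= field of the log lines above; a minimal sketch of that form, where the alias names img1 and img2 are purely illustrative:)

metadata:
  annotations:
    # img1/img2 are illustrative aliases, not required values
    argocd-image-updater.argoproj.io/image-list: img1=myregistry/myimage1:prod,img2=myregistry/myimage2:prod
    # with aliases, annotations such as the update strategy can be set per image
    argocd-image-updater.argoproj.io/img1.update-strategy: digest
    argocd-image-updater.argoproj.io/img2.update-strategy: digest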

onesb23 added the bug label Jan 10, 2025
chengfang (Collaborator) commented

I tested my sample app (https://github.com/chengfang/image-updater-examples/tree/main/image-list-kustomize) with image-updater v0.15.2, and it works as expected. The app configures two updateable images with the image-list annotation, and after running image-updater, the generated target file .argocd-source-xxx contains entries for both images. See https://github.com/chengfang/image-updater-examples/blob/main/image-list-kustomize/source/.argocd-source-image-list-kustomize.yaml
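
(For context, with the git write-back method the override file .argocd-source-<appName>.yaml holds the parameter overrides written by image-updater; roughly, it takes the following shape. The values below are placeholders, not copied from the sample repo:)

kustomize:
  images:
  - myregistry/myimage1:<resolved-tag-or-digest>   # placeholder value
  - myregistry/myimage2:<resolved-tag-or-digest>   # placeholder value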

onesb23 (Author) commented Jan 14, 2025

> I tested my sample app (https://github.com/chengfang/image-updater-examples/tree/main/image-list-kustomize) with image-updater v0.15.2, and it works as expected. […]

Your example is not representative: you are using a completely different update-strategy, and you are using the git write-back-method. I want to use default settings where possible.

If write-back-methods other than git are no longer supported, this should be documented.
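
(For reference, the write-back method is chosen per application via an annotation, and argocd is the default when the annotation is absent; a minimal sketch:)

metadata:
  annotations:
    # default when omitted: update the live Application spec via the Argo CD API
    argocd-image-updater.argoproj.io/write-back-method: argocd
    # alternative: commit the change to the Application's Git repository
    # argocd-image-updater.argoproj.io/write-back-method: git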

onesb23 (Author) commented Jan 14, 2025

To rule out problems specific to my installation that could have accumulated over time, I completely removed all Argo CD resources, including the namespace, and reinstalled everything. The behavior changed slightly:
the kustomize section now contains a list, but it still holds only the last image from the image-list, and even that one does not update the pod correctly.

spec:
  source:
    kustomize:
      images:
      - myregistry/myimage2@sha256:blablabla...

But it should be like this:

spec:
  source:
    kustomize:
      images:
      - myregistry/myimage1:prod@sha256:blobloblo...
      - myregistry/myimage2:prod@sha256:blablabla...

chengfang (Collaborator) commented

I updated my sample app to use the argocd write-back method and the digest update strategy. After the image-updater run, I was able to see the expected image updates:

kubectl describe -n argocd apps image-list-kustomize

  Sync:
    Compared To:
      Destination:
        Name:       in-cluster
        Namespace:  argocd
      Source:
        Kustomize:
          Images:
            nginx:latest@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
            bitnami/nginx:latest@sha256:c02e18884badbd9482fd731668f75a3033124c748bc709651fb06062d0ab38c1

onesb23 (Author) commented Jan 15, 2025

> I updated my sample app to use the argocd write-back method and the digest update strategy. After the image-updater run, I was able to see the expected image updates: […]

Thank you for taking the time to do this. Obviously, I need to analyze more deeply what is happening in my setup to find out what is different from the working default...

Please tell me which versions of Argo CD and Kubernetes were used in this test 🙏

chengfang (Collaborator) commented

I'm using the latest version (v0.15.2) of image-updater and a fairly recent version of Argo CD:

time="2025-01-15T13:12:24Z" level=info msg="ArgoCD API Server is starting" built="2024-08-27T11:57:48Z" commit=6b9cd828c6e9807398869ad5ac44efd2c28422d6 namespace=argocd port=8080 version=v2.12.3+6b9cd82

kubectl version:

Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.3+k3s1

onesb23 (Author) commented Jan 15, 2025

> I'm using the latest version (v0.15.2) of image-updater and a fairly recent version of Argo CD: […]


That's quite an old version of Argo CD. I have the latest Argo CD, 2.13.3:

Client Version: v1.31.4
Kustomize Version: v5.4.2
Server Version: v1.31.4

Thanks for the information. I'll have to set up a test bench to experiment with this.

onesb23 (Author) commented Jan 20, 2025

@chengfang I finally got to the bottom of it. The problem is that I was using a registry on a non-default port:

myregistry:8443/myimage1:prod
myregistry:8443/myimage2:prod

After I reconfigured my registry to use the default port 443, everything worked:

myregistry/myimage1:prod
myregistry/myimage2:prod

It looks like I missed any documentation stating that custom ports don't work. Oddly, if only one image is specified, it does work even with a custom port...
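
(If a registry on a non-default port has to be used, image-updater can be told about it through its registries configuration; a sketch, assuming the registries.conf format from the image-updater docs and using myregistry:8443 as the example endpoint:)

registries:
- name: my-registry              # display name, arbitrary
  prefix: myregistry:8443        # must match the prefix used in the image-list annotation
  api_url: https://myregistry:8443
  ping: yes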

I also apologize for initially framing the issue incorrectly and omitting such an important detail...

onesb23 closed this as not planned Jan 20, 2025