Logstash S3 Output plugin requires both access_key_id and secret_access_key (if both not provided throws errors) #261
The documentation does state which pairs of settings it uses and when.
In this case those settings are not mandatory, but some form of authentication (I think) is. I'm not sure how we represent that for other plugins, but maybe @karenzone can chime in.
Currently, in order to access S3 buckets the plugin requires one of the supported credential settings. Is your intention to access a public bucket without creds? If so, I don't see a way to do that for now.
Hello @mashhurs, thank you for your reply and for the clarification. Yes, my intention was to access my own bucket without creds, because in AWS I use a role that provides the proper permissions, and therefore I cannot provide an access_key_id and secret_access_key pair. I called AWS and they told me that it is not possible to set up a key_id and secret_access_key for a role. I believed that attaching a policy with S3 permissions to the EC2 instance would be good enough, but that doesn't seem to be the case. I'm a newbie with Logstash, so thank you for your patience. Any help/suggestions will be appreciated. Regards,
If you are running Logstash in EC2 with an IAM role attached and you have the role ARN, you can set it in the plugin's role_arn option.
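A minimal sketch of what that could look like, assuming the plugin's `role_arn` option; the region, bucket name, and role ARN below are hypothetical placeholders:

```
output {
  s3 {
    region   => "us-east-1"                                          # assumption: your bucket's region
    bucket   => "my-example-bucket"                                  # hypothetical bucket name
    role_arn => "arn:aws:iam::123456789012:role/my-logstash-role"    # hypothetical role ARN
    codec    => "json_lines"
  }
}
```

With a role ARN set, the plugin can assume the role rather than requiring a static access_key_id/secret_access_key pair.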
Hello @mashhurs, I will post my first-pipeline.conf below: output { I checked, and both my firewall and proxy are turned off. Please let me know if I did anything incorrectly. Thanks. Regards,
Hello @mashhurs, may I ask for your help with my last comment? Thank you in advance. Regards,
You are facing a strange error where the file doesn't contain a key, coming through this line, if you are using the final version of the plugin. Did you make any config changes besides the S3 creds?
Hello @mashhurs, thanks for your reply. Regards,
Hello @mashhurs, may I ask you to please edit your reply from 5 days ago and remove the bucket name and role? Thank you very much.
As I understand from your last error message, there are some leftover files in the temporary folder that the plugin is trying to restore. If you can clean the temporary dir (back up the data in case you need it in the future) and rerun, we could see if that was the cause. I don't see that you have a temporary_directory set in your config.
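A sketch of setting an explicit temporary directory on the output, assuming the plugin's `temporary_directory` option; the bucket name and path are hypothetical:

```
output {
  s3 {
    bucket              => "my-example-bucket"                       # hypothetical bucket name
    temporary_directory => "/usr/share/logstash/temporary_directory" # files are staged here before upload
  }
}
```

Pointing the plugin at a known, clean directory makes it easy to inspect (or clear out) any leftover staged files between runs.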
Hello @mashhurs, thank you for your reply. Today I added a temporary directory in my first-pipeline.conf (located in /usr/share/logstash/). In /usr/share/logstash/temporary_directory I removed my test file (it is now completely empty) and I ran the pipeline. I got this error:
NOTE: I realized that I had missed the commas; it looks like I needed them in my first-pipeline.conf. However, I still got errors. I also tried removing the commas in first-pipeline.conf and running the pipeline again, but I got this other error:
Logstash information:
Please include the following information:
Logstash version (e.g. bin/logstash --version): 8.12.0
JVM (e.g. java -version): 11.0.22
If the affected version of Logstash is 7.9 (or earlier), or if it is NOT using the bundled JDK or using the 'no-jdk' version in 7.10 (or higher), please provide the following information:
JVM version (java -version)
JAVA_HOME environment variable if set.
OS version (uname -a if on a Unix-like system): Linux ip-10-147-116-224.xxx 3.10.0-1160.108.1.el7.x86_64 #1 SMP Thu Jan 25 16:17:31 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
In the documentation (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-s3.html) it is clearly stated (see the example in "Usage" and in "S3 Output Configuration Options") that both access_key_id and secret_access_key are optional.
However, if you do NOT include this information, you get errors. Specifically, if you provide neither of the two settings, you get the error "key must not be blank". If you provide access_key_id but NOT secret_access_key, you get the error "unable to sign request without credentials set".
The documentation is misleading because it makes you believe that only the name of the bucket is required.
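For illustration, a minimal bucket-only output like the sketch below (bucket name hypothetical) is what the documentation suggests should be sufficient, yet it triggers the errors described above:

```
output {
  s3 {
    bucket => "my-example-bucket"     # hypothetical; docs imply this alone should work
    # access_key_id     => "..."      # omitting both credentials -> "key must not be blank"
    # secret_access_key => "..."      # setting only access_key_id -> "unable to sign request without credentials set"
  }
}
```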
Steps to reproduce:
cd /usr/share/logstash/bin/
/usr/share/logstash/bin/logstash -f /usr/share/logstash/first-pipeline.conf
Below is my "first-pipeline.conf" file:
Please include a minimal but complete recreation of the problem,
including (e.g.) pipeline definition(s), settings, locale, etc. The easier
you make it for us to reproduce, the more likely somebody will take the
time to look at it.
Provide logs (if relevant):
Thanks for looking into it :)
Regards,
Tizi
Added by @mashhurs
Expectation
The user expectation in this issue is to persist data to S3 without credentials, using the
--no-sign-request
behavior of the AWS API.
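For context, --no-sign-request is the AWS CLI flag for unauthenticated (anonymous) access to public buckets; a usage sketch with a hypothetical bucket name:

```
# List and copy against a public bucket without any configured credentials
aws s3 ls s3://my-public-example-bucket --no-sign-request
aws s3 cp local-file.log s3://my-public-example-bucket/ --no-sign-request
```

The Logstash S3 output plugin has no direct equivalent of this flag today, which is the gap this issue describes.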