feat: add smoke testing #296
Conversation
Add a Terraform script and an example JS function. The function is pushed to AWS with the following layers:
- APM JS agent
- locally built Lambda extension

Log level and AWS region can be configured using variables.
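For illustration, the configurable inputs might look roughly like this (a sketch only; the variable names and defaults below are assumptions, not taken from the diff):

variable "aws_region" {
  type        = string
  description = "AWS region the Lambda function and its layers are deployed to"
  default     = "us-west-2" # assumed default, adjust as needed
}

variable "log_level" {
  type        = string
  description = "Log level for the APM JS agent and the Lambda extension"
  default     = "info"
}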
First iteration, ready for review! Feedback or tips on how to integrate with CI are welcome 🙇
💔 Build Failed
This is a great start. Before closing #217 (the description says this PR closes it), we should make invocations to the function and check the results in Elasticsearch.
In standup I mentioned that I think we should use an ephemeral deployment for this. Perhaps for that we can reuse some of the Terraform modules defined in the apm-server repo, like this:
module "ec_deployment" {
  source = "github.com/elastic/apm-server//testing/infra/terraform/modules/ec_deployment?depth=1"

  deployment_name_prefix = "apm-aws-lambda"
  integrations_server    = true
  apm_server_expvar      = false
  apm_server_pprof       = false
}
(@marclop WDYT about other projects using the apm-server modules like that?)
We could then compose that and the module you have defined to orchestrate the creation of the ESS deployment and Lambda. After provisioning we would run a script that sends requests to the Lambda URL, and then poll Elasticsearch for results.
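As a rough sketch of what that composition could look like (the smoke-test module path and input names are assumptions, and the exact ec_deployment output names may differ):

# Hypothetical wiring: passes the APM endpoint and secret token from the
# ESS deployment into the Lambda module added in this PR.
module "lambda_smoke_test" {
  source = "./smoke-test" # path assumed

  apm_server_url   = module.ec_deployment.apm_url          # output name assumed
  apm_secret_token = module.ec_deployment.apm_secret_token # output name assumed
}

A small script could then hit the function URL and poll Elasticsearch until the expected transactions show up (or a timeout is reached).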
I've been intending to propose that we take this approach for the agents too, and stop using apm-integration-testing. CC @felixbarny @cachedout
> create an elastic cloud deployment and auto set the apm server url and secret token

Could you clarify this? Looking at the module in the apm-server repository, I don't think we will end up using much of the available inputs. Is there a specific feature we're planning to use?
My initial thought was also to use the …
Looking good @kruskall, just a few changes I'd like to see.
In addition to the comments I left, can we:
- add .gitignore entries for Terraform state?
- add outputs for the Lambda function URL and ESS deployment details (Elasticsearch + Kibana URLs, username and password)? A rough sketch is at the end of this comment.
Perhaps we should default the ESS region to gcp-us-west-2, since it's where we do the rest of our cloud-based testing. Although it's crossing regions, the amount of traffic will be minimal, and running the deployment in the dedicated CFT us-west-2 region means we won't be contending with customers for ESS resources.
How will we invoke the Lambda function from a test script after provisioning? Should we be configuring a function URL and emitting it as an output?
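Something along these lines for the outputs, as a sketch (the ec_deployment output names and the aws_lambda_function_url resource name are assumptions):

output "lambda_function_url" {
  # Assumes the PR defines an aws_lambda_function_url resource named "smoke_test".
  value = aws_lambda_function_url.smoke_test.function_url
}

output "elasticsearch_url" {
  value = module.ec_deployment.elasticsearch_url
}

output "elasticsearch_username" {
  value = module.ec_deployment.elasticsearch_username
}

output "elasticsearch_password" {
  value     = module.ec_deployment.elasticsearch_password
  sensitive = true
}

output "kibana_url" {
  value = module.ec_deployment.kibana_url
}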
Looking good, thanks for the updates @kruskall.
I think we should change the default ESS region, but let me know if you have any concerns about that. Ideally I think we would use the defaults for ess_region and deployment_template as defined in ec_deployment, to keep things defined in one place.
In order to close #217 we will need to actually run some invocations and verify trace events end up in Elasticsearch as expected. I suggest we get this PR finished up and follow up with a test script that does those bits. Can you please remove the "Closes" from the description?
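One way to keep those defaults in a single place is to leave the wrapper variables null by default, since Terraform treats a null module argument as omitted and falls back to the default declared inside ec_deployment. A sketch using the variable names from this diff:

# Leaving these null lets the ec_deployment module's own defaults for
# region and deployment_template apply, so they stay defined in one place.
variable "ess_region" {
  type    = string
  default = null
}

variable "ess_deployment_template" {
  type    = string
  default = null
}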
Ok, I added the script to verify the traces are in Elasticsearch. I think this is ready for another review.
Thanks, this is great. LGTM with the region explicitly passed to the AWS CLI.
Oops, my review didn't make it in pre-merge. I guess we can see if it's worth addressing as a follow-up.
  apm_server_expvar   = false
  apm_server_pprof    = false
  region              = var.ess_region
  deployment_template = var.ess_deployment_template
By default we'll end up using the latest SNAPSHOT versions, which may be broken and cause the smoke tests to fail. If we want to avoid that, we need to provide an expression for a released version, such as:
  deployment_template = var.ess_deployment_template
  stack_version       = "\\d.\\d.\\d?(\\d)$" // use the latest released version.
++ I've opened #307 to track this