Release 7.4.1 sf1 #25047

Draft · wants to merge 10 commits into base: release-7.4.1

Conversation

sfc-gh-mgalindo

No description provided.

Bazel Release System and others added 10 commits November 11, 2024 21:20
    This change adds logic to clean up the contents of the BEP before it
    leaves Bazel in an attempt to prevent propagating secrets to a remote
    service.

    In particular, this implements protections for the following:

    *   Secrets stored in the environment, which are unconditionally leaked
        via the hidden --client_env flag.

    *   Secrets passed via --bes_header, --test_env, --test_arg, and similar
        options.  This is not completely safe to do because a determined
        user could inject secrets into the BEP in other ways.  So... treat
        this as just a safety net for common innocent mistakes.

    *   Secrets passed to tools run via "bazel run".  Think, for example,
        doing "bazel run //some/docker:command -- --password 1234".

    For the purposes of this change, a "secret" is any string that we did
    not explicitly allow.  This code does not implement secret detection,
    and it does not intend to, because any such detection would itself have
    to rely on heuristics.

    Note that services like BuildBuddy implement their own redaction of
    secrets when they receive the BEP... but it's better if we can strip
    them out at the source to minimize exposure.  Defense in depth.

Author: Julio Merino <[email protected]>
Date:   Thu Oct 5 10:34:10 2023 -0700
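
The commit above describes an allowlist-based approach: anything not explicitly allowed is treated as a secret. The following is a rough, hedged illustration of that idea (plain Java, not Bazel's actual BEP code; the allowlist contents and the REDACTED placeholder are made up for this sketch):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/** Minimal sketch of allowlist-based redaction of --client_env values. */
final class BepEnvRedactor {
  // Hypothetical allowlist: any variable not listed here is treated as a secret.
  private static final Set<String> ALLOWED_VARS = Set.of("PATH", "HOME", "LANG");
  private static final String REDACTED = "__REDACTED__";

  /** Returns a copy of the client environment with non-allowlisted values masked. */
  static Map<String, String> redact(Map<String, String> clientEnv) {
    Map<String, String> result = new HashMap<>();
    for (Map.Entry<String, String> entry : clientEnv.entrySet()) {
      boolean allowed = ALLOWED_VARS.contains(entry.getKey());
      result.put(entry.getKey(), allowed ? entry.getValue() : REDACTED);
    }
    return result;
  }
}
```
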
    We have a custom repository rule that uses download(..., allow_fail=True) to
    first check whether an artifact is already in the repository cache.  This
    check is expected to fail when the artifact is not cached, but Bazel is
    incredibly loud about it, printing a WARNING message of the following form
    for each attempt:

    WARNING: Download from file:/mnt/jenkins/home/jenkins/.cache/bazel/_bazel_jenkins/5d6e21225a0f8eb93bf6c0ff841144e1/external/anaconda-python_3.10_x86_64/python-3.10.9-h7a1cb2a_2.tar.bz2 failed: class java.io.FileNotFoundException /mnt/jenkins/home/jenkins/.cache/bazel/_bazel_jenkins/5d6e21225a0f8eb93bf6c0ff841144e1/external/anaconda-python_3.10_x86_64/python-3.10.9-h7a1cb2a_2.tar.bz2 (No such file or directory)

    Given that allow_fail=True is meant to let the caller handle failures itself,
    these warnings should be suppressed.  This change does exactly that.

Upstream PR: bazelbuild#18484

Author: Julio Merino <[email protected]>
Date:   Tue May 23 15:02:20 2023 -0700
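
A minimal sketch of the behavior described above (generic Java, not Bazel's downloader implementation; the class and method names are hypothetical): a failed probe only produces a WARNING when the caller has not opted into handling failures itself via allow_fail.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Optional;
import java.util.logging.Logger;

/** Sketch: a download probe whose failure is silent when the caller allows failure. */
final class CacheProbe {
  private static final Logger logger = Logger.getLogger(CacheProbe.class.getName());

  /** Tries to open the URL; returns empty on failure instead of warning and throwing. */
  static Optional<InputStream> tryDownload(URL url, boolean allowFail) throws IOException {
    try {
      return Optional.of(url.openStream());
    } catch (IOException e) {
      if (!allowFail) {
        // Only callers that did not pass allow_fail=True see the WARNING.
        logger.warning("Download from " + url + " failed: " + e);
        throw e;
      }
      return Optional.empty();  // An expected outcome for allow_fail=True probes.
    }
  }
}
```
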
    This adds logic to treat remote actions that terminate due to a signal as
    retryable errors, assuming that such terminations are caused by a worker
    crash.

    Because this is a hack to paper over a current Remote Build deficiency,
    and because this heuristic may be wrong, this feature is hidden behind a
    new --snowflake_remote_exit_signals_are_transient_errors flag.

    Mitigates bazelbuild#18319.

Author: Julio Merino <[email protected]>
Date:   Wed May 24 07:12:43 2023 -0700
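
As a sketch of the heuristic (hypothetical types; the real change lives in Bazel's remote execution code), an action reported as killed by a signal is classified as a transient, retryable failure only when the new flag is enabled. The exit-code convention used here (128 + signal number) is an assumption for illustration.

```java
/** Sketch of the retry heuristic guarded by the new flag. */
final class SignalRetryPolicy {
  private final boolean exitSignalsAreTransient;  // value of the new flag

  SignalRetryPolicy(boolean exitSignalsAreTransient) {
    this.exitSignalsAreTransient = exitSignalsAreTransient;
  }

  /**
   * Returns whether a failed remote action should be retried.  Exit codes above
   * 128 are treated as "terminated by signal", which this heuristic assumes
   * means the remote worker crashed rather than the action itself failing.
   */
  boolean shouldRetry(int exitCode) {
    boolean terminatedBySignal = exitCode > 128;
    return exitSignalsAreTransient && terminatedBySignal;
  }
}
```
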
    This new flag is similar in spirit to --remote_accept_cached but allows
    being more selective about what's accepted and what's not.

    The specific problem I face is the following: we have a setup where we
    want to use dynamic execution for performance reasons.  However, we know
    some actions in our build (those run by rules_foreign_cc) are not
    deterministic.  To mitigate this, we force the actions that we know are
    not deterministic to run remotely, without dynamic execution, which keeps
    the non-determinism from being exposed for as long as their results stay
    cached and until we can fix the underlying problems.

    However, we still observe non-deterministic actions in the build and we
    need to diagnose what those are.  To do this, I need to run two builds
    and compare their execlogs.  And I need these builds to continue to
    reuse the non-deterministic artifacts we _already_ know about from the
    cache, but to rerun other local actions from scratch.

    Unfortunately, the fact that "remote-cache" is not a strategy (see
    bazelbuild#18245) makes this very
    difficult to do because, even if I configure certain actions to run
    locally unconditionally, the spawn strategy insists on checking the
    remote cache for them.

    With this new flag, I can run a build where the remote actions remain
    remote but where I disable the dynamic scheduler and force the remaining
    actions to re-run locally.  I'm marking the flag as experimental because
    this feels like a huge kludge to paper over the fact that the remote
    cache should really be a strategy, but isn't.  In other words: this
    flag should go away with a better rearchitecting of the remote caching
    interface.

Upstream PR: bazelbuild#18944
Author: Julio Merino <[email protected]>
Date:   Fri Jul 14 10:32:41 2023 -0700
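
The commit message does not spell out the new flag's exact name or syntax, so the following is only a hedged sketch of the underlying idea: a per-action predicate (keyed here by mnemonic, which is an assumption) decides whether a remote cache hit may be accepted, instead of the all-or-nothing behavior of --remote_accept_cached. In the diagnosis scenario above, only the known non-deterministic remote actions would be allowed to hit the cache while everything else reruns locally.

```java
import java.util.Set;

/** Sketch: selective acceptance of remote cache hits, instead of all-or-nothing. */
final class SelectiveCachePolicy {
  // Hypothetical configuration: only these action types may reuse cached results.
  private final Set<String> mnemonicsAllowedToHitCache;

  SelectiveCachePolicy(Set<String> mnemonicsAllowedToHitCache) {
    this.mnemonicsAllowedToHitCache = mnemonicsAllowedToHitCache;
  }

  /** Returns true if the remote cache should be consulted for this action at all. */
  boolean acceptCachedResult(String actionMnemonic) {
    return mnemonicsAllowedToHitCache.contains(actionMnemonic);
  }
}
```
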

    Instead of failing hard when JaCoCo encounters issues instrumenting a
    source file (e.g. because instrumentation pushes a large method over
    the JVM's per-method bytecode size limit), print a warning and continue.

    It's more useful to have partial coverage information than none at all,
    and users often cannot easily change their code to work around the issues
    that instrumentation triggers.

    Fixes bazelbuild#21520.

Author: Julio Merino <[email protected]>
Date:   Thu Feb 29 17:57:14 2024 -0800
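
A sketch of the warn-and-continue fallback described above (the Instrumenter interface here is a stand-in, not JaCoCo's actual API): when instrumenting a class fails, emit a warning and keep the original, uninstrumented bytes so the rest of the coverage run can proceed.

```java
import java.util.logging.Logger;

/** Sketch of degrading gracefully when instrumentation of a single class fails. */
final class LenientInstrumenter {
  private static final Logger logger = Logger.getLogger(LenientInstrumenter.class.getName());

  /** Stand-in for the real instrumentation entry point. */
  interface Instrumenter {
    byte[] instrument(byte[] classBytes, String name) throws Exception;
  }

  /** Returns instrumented bytes, or the original bytes plus a warning on failure. */
  static byte[] instrumentOrWarn(Instrumenter instrumenter, byte[] classBytes, String name) {
    try {
      return instrumenter.instrument(classBytes, name);
    } catch (Exception e) {
      // Partial coverage beats no coverage: keep this class uninstrumented and move on.
      logger.warning("Failed to instrument " + name + ": " + e + "; continuing without coverage");
      return classBytes;
    }
  }
}
```
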

The auth argument to Bazel's download function is a dictionary mapping url -> AuthHeaders.
If the Bazel downloader config rewrites a URL, the keys of that auth dictionary need to be
rewritten as well so that the AuthHeaders are still applied to the rewritten URL.

Upstream PR: bazelbuild#23207
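
A minimal sketch of the idea behind this fix (plain Java with a hypothetical rewrite function; the real code operates on Bazel's downloader types): when the downloader config rewrites a URL, the auth headers registered under the original URL must be re-keyed under the rewritten URL so the lookup still succeeds.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Sketch: keep auth headers attached to URLs after downloader rewriting. */
final class AuthRekeyer {
  /** Rebuilds the url -> auth-headers map using the rewritten URLs as keys. */
  static Map<URI, Map<String, String>> rekey(
      Map<URI, Map<String, String>> authByUrl, Function<URI, URI> rewrite) {
    Map<URI, Map<String, String>> rekeyed = new HashMap<>();
    for (Map.Entry<URI, Map<String, String>> entry : authByUrl.entrySet()) {
      rekeyed.put(rewrite.apply(entry.getKey()), entry.getValue());
    }
    return rekeyed;
  }
}
```
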

google-cla bot commented Jan 23, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.
