
[AGENTRUN-466] Strip cluster-agent binaries #37813


Merged
merged 5 commits into main from pgimalac/cluster-agent-strip-binary on Jun 11, 2025

Conversation

pgimalac
Member

@pgimalac pgimalac commented Jun 10, 2025

What does this PR do?

Strip cluster-agent binaries.

Motivation

Reduce the cluster-agent binary size by 45 MiB and the Docker image size by 30 MiB.

Describe how you validated your changes

Possible Drawbacks / Trade-offs

Additional Notes

I discussed this with container-platform and they agreed stripping had probably just been forgotten;
the binaries shipped in the other images are already stripped.

The unstripped binary is still needed for the FIPS compliance check, so an unstripped copy is kept in the job artifacts.
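
For context, the change boils down to a copy-then-strip step after the build. A minimal sketch of what the modified CI jobs below do (using the $CLUSTER_AGENT_BINARIES_DIR variable those jobs already define):

    # Build as before, keep an unstripped copy for the FIPS symbol check,
    # then strip the binary that ends up in the image.
    dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
    cp "$CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent" "$CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped"
    strip "$CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent"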

@pgimalac pgimalac added changelog/no-changelog qa/no-code-change No code change in Agent code requiring validation team/container-platform The Container Platform Team and removed team/agent-devx team/agent-build labels Jun 10, 2025
@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Jun 10, 2025

Gitlab CI Configuration Changes

Modified Jobs

.cluster_agent-build_common
  .cluster_agent-build_common:
    artifacts:
      exclude:
      - Dockerfiles/cluster-agent/security-agent-policies/.git/**/*
      - Dockerfiles/cluster-agent/security-agent-policies/.github/**/*
      paths:
      - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
+     - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
      - Dockerfiles/cluster-agent/datadog-cluster.yaml
      - Dockerfiles/cluster-agent/security-agent-policies
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
+   - cp $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
+   - strip $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
    stage: binary_build
    variables:
      AGENT_MAJOR_VERSION: 7
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 8Gi
      KUBERNETES_MEMORY_REQUEST: 8Gi
.docker_build_cluster_agent_fips
  .docker_build_cluster_agent_fips:
    before_script:
    - mkdir -p ${ARTIFACTS_BUILD_CONTEXT}
    - mv -vf $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf $CWS_INSTRUMENTATION_BINARIES_DIR $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf Dockerfiles/agent/nosys-seccomp $BUILD_CONTEXT/
-   - go tool nm ${ARTIFACTS_BUILD_CONTEXT}/datadog-cluster-agent | grep '_Cfunc__goboringcrypto_'
+   - go tool nm $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped | grep
+     '_Cfunc__goboringcrypto_'
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    variables:
      ARTIFACTS_BUILD_CONTEXT: /tmp/build_artifacts
      BUILD_CONTEXT: Dockerfiles/cluster-agent
      IMAGE: registry.ddbuild.io/ci/datadog-agent/cluster-agent
      TAG_SUFFIX: -fips
cluster_agent-build_amd64
  cluster_agent-build_amd64:
    artifacts:
      exclude:
      - Dockerfiles/cluster-agent/security-agent-policies/.git/**/*
      - Dockerfiles/cluster-agent/security-agent-policies/.github/**/*
      paths:
      - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
+     - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
      - Dockerfiles/cluster-agent/datadog-cluster.yaml
      - Dockerfiles/cluster-agent/security-agent-policies
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/linux-glibc-2-17-x64$CI_IMAGE_LINUX_GLIBC_2_17_X64_SUFFIX:$CI_IMAGE_LINUX_GLIBC_2_17_X64
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
+   - cp $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
+   - strip $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
    stage: binary_build
    tags:
    - arch:amd64
    variables:
      AGENT_MAJOR_VERSION: 7
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 8Gi
      KUBERNETES_MEMORY_REQUEST: 8Gi
cluster_agent-build_arm64
  cluster_agent-build_arm64:
    artifacts:
      exclude:
      - Dockerfiles/cluster-agent/security-agent-policies/.git/**/*
      - Dockerfiles/cluster-agent/security-agent-policies/.github/**/*
      paths:
      - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
+     - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
      - Dockerfiles/cluster-agent/datadog-cluster.yaml
      - Dockerfiles/cluster-agent/security-agent-policies
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/linux-glibc-2-23-arm64$CI_IMAGE_LINUX_GLIBC_2_23_ARM64_SUFFIX:$CI_IMAGE_LINUX_GLIBC_2_23_ARM64
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
+   - cp $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
+   - strip $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
    stage: binary_build
    tags:
    - arch:arm64
    variables:
      AGENT_MAJOR_VERSION: 7
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 8Gi
      KUBERNETES_MEMORY_REQUEST: 8Gi
cluster_agent_cloudfoundry-build_amd64
  cluster_agent_cloudfoundry-build_amd64:
    artifacts:
      expire_in: 2 weeks
      paths:
      - $OMNIBUS_PACKAGE_DIR
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/linux-glibc-2-17-x64$CI_IMAGE_LINUX_GLIBC_2_17_X64_SUFFIX:$CI_IMAGE_LINUX_GLIBC_2_17_X64
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent-cloudfoundry.build
    - cd $CI_PROJECT_DIR/$CLUSTER_AGENT_CLOUDFOUNDRY_BINARIES_DIR
+   - strip datadog-cluster-agent-cloudfoundry
    - mkdir -p $OMNIBUS_PACKAGE_DIR
    - PACKAGE_VERSION=$(dda inv agent.version --url-safe) || exit $?
    - tar cf $OMNIBUS_PACKAGE_DIR/datadog-cluster-agent-cloudfoundry-$PACKAGE_VERSION-$ARCH.tar.xz
      datadog-cluster-agent-cloudfoundry
    stage: binary_build
    tags:
    - arch:amd64
    variables:
      ARCH: amd64
      KUBERNETES_CPU_REQUEST: 4
cluster_agent_fips-build_amd64
  cluster_agent_fips-build_amd64:
    artifacts:
      exclude:
      - Dockerfiles/cluster-agent/security-agent-policies/.git/**/*
      - Dockerfiles/cluster-agent/security-agent-policies/.github/**/*
      paths:
      - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
+     - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
      - Dockerfiles/cluster-agent/datadog-cluster.yaml
      - Dockerfiles/cluster-agent/security-agent-policies
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/linux-glibc-2-17-x64$CI_IMAGE_LINUX_GLIBC_2_17_X64_SUFFIX:$CI_IMAGE_LINUX_GLIBC_2_17_X64
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
+   - cp $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
+   - strip $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
    stage: binary_build
    tags:
    - arch:amd64
    variables:
      AGENT_MAJOR_VERSION: 7
      GOEXPERIMENT: boringcrypto
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 8Gi
      KUBERNETES_MEMORY_REQUEST: 8Gi
cluster_agent_fips-build_arm64
  cluster_agent_fips-build_arm64:
    artifacts:
      exclude:
      - Dockerfiles/cluster-agent/security-agent-policies/.git/**/*
      - Dockerfiles/cluster-agent/security-agent-policies/.github/**/*
      paths:
      - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
+     - $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
      - Dockerfiles/cluster-agent/datadog-cluster.yaml
      - Dockerfiles/cluster-agent/security-agent-policies
    before_script:
    - mkdir -p $GOPATH/pkg/mod/cache && tar xJf modcache.tar.xz -C $GOPATH/pkg/mod/cache
      || exit 101
    - rm -f modcache.tar.xz
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/linux-glibc-2-23-arm64$CI_IMAGE_LINUX_GLIBC_2_23_ARM64_SUFFIX:$CI_IMAGE_LINUX_GLIBC_2_23_ARM64
    needs:
    - go_mod_tidy_check
    - go_deps
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - dda inv -- check-go-version
    - dda inv -- -e cluster-agent.build --major-version "$AGENT_MAJOR_VERSION"
+   - cp $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped
+   - strip $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent
    stage: binary_build
    tags:
    - arch:arm64
    variables:
      AGENT_MAJOR_VERSION: 7
      GOEXPERIMENT: boringcrypto
      KUBERNETES_CPU_REQUEST: 6
      KUBERNETES_MEMORY_LIMIT: 8Gi
      KUBERNETES_MEMORY_REQUEST: 8Gi
docker_build_cluster_agent_fips_amd64
  docker_build_cluster_agent_fips_amd64:
    before_script:
    - mkdir -p ${ARTIFACTS_BUILD_CONTEXT}
    - mv -vf $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf $CWS_INSTRUMENTATION_BINARIES_DIR $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf Dockerfiles/agent/nosys-seccomp $BUILD_CONTEXT/
-   - go tool nm ${ARTIFACTS_BUILD_CONTEXT}/datadog-cluster-agent | grep '_Cfunc__goboringcrypto_'
+   - go tool nm $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped | grep
+     '_Cfunc__goboringcrypto_'
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/docker_x64$CI_IMAGE_DOCKER_X64_SUFFIX:$CI_IMAGE_DOCKER_X64
    needs:
    - artifacts: true
      job: cluster_agent_fips-build_amd64
    - artifacts: true
      job: cws_instrumentation-build_amd64
    - artifacts: true
      job: cws_instrumentation-build_arm64
    retry: 2
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - TAG_SUFFIX=${TAG_SUFFIX:-}
    - BUILD_ARG=${BUILD_ARG:-}
    - EXTRA_BUILD_CONTEXT=${ARTIFACTS_BUILD_CONTEXT:+"--build-context artifacts=$ARTIFACTS_BUILD_CONTEXT"}
    - "if [[ \"$BUCKET_BRANCH\" == \"nightly\" && ( \"$IMAGE\" =~ \"ci/datadog-agent/agent\"\
      \ || \"$IMAGE\" =~ \"ci/datadog-agent/cluster-agent\" || \"$IMAGE\" =~ \"ci/datadog-agent/cws-instrumentation\"\
      \  || \"$IMAGE\" =~ \"ci/datadog-agent/otel-agent\" ) ]]; then\n  export ECR_RELEASE_SUFFIX=\"\
      -nightly\"\nelse\n  export ECR_RELEASE_SUFFIX=${CI_COMMIT_TAG+-release}\nfi\n"
    - DOCKER_CACHE_TARGET="${IMAGE}${TAG_SUFFIX}-${ARCH}:cache"
    - CACHE_SOURCE="--cache-from type=registry,ref=${DOCKER_CACHE_TARGET}"
    - CACHE_TO=""
    - "DOCKER_NO_CACHE=\"\"\nfor target in ${NO_CACHE_TARGETS}; do\n  DOCKER_NO_CACHE=\"\
      ${DOCKER_NO_CACHE_FILTER} --no-cache-filter ${target}\"\ndone\n"
    - "if [[ \"$BUCKET_BRANCH\" == \"nightly\" ]]; then\n  DOCKER_NO_CACHE=\"--no-cache\"\
      \n  CACHE_SOURCE=\"\"\n  CACHE_TO=\"--cache-to type=registry,ref=${DOCKER_CACHE_TARGET},mode=max\"\
      \nfi\nif [[ \"$CI_COMMIT_BRANCH\" == \"$CI_DEFAULT_BRANCH\" ]]; then\n  CACHE_TO=\"\
      --cache-to type=registry,ref=${DOCKER_CACHE_TARGET},mode=max\"\nfi\n"
    - "if [[ \"$DEPLOY_AGENT\" == \"true\" ]]; then\n  DOCKER_NO_CACHE=\"--no-cache\"\
      \n  CACHE_SOURCE=\"\"\nfi\n"
    - AGENT_BASE_IMAGE_TAG=registry.ddbuild.io/ci/datadog-agent/agent-base-image${ECR_RELEASE_SUFFIX}:v${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}-$ARCH
    - TARGET_TAG=${IMAGE}${ECR_RELEASE_SUFFIX}:v${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}$TAG_SUFFIX-$ARCH
    - DOCKER_LOGIN=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $DOCKER_REGISTRY_RO user)
      || exit $?
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $DOCKER_REGISTRY_RO token | docker login
      --username "$DOCKER_LOGIN" --password-stdin "$DOCKER_REGISTRY_URL"
    - EXIT="${PIPESTATUS[0]}"; if [ $EXIT -ne 0 ]; then echo "Unable to locate credentials
      needs gitlab runner restart"; exit $EXIT; fi
    - "docker buildx build --push --pull --platform linux/$ARCH \\\n  ${CACHE_SOURCE}\
      \ \\\n  ${CACHE_TO} \\\n  ${DOCKER_NO_CACHE} \\\n  --build-arg AGENT_BASE_IMAGE_TAG=${AGENT_BASE_IMAGE_TAG}\
      \ \\\n  --build-arg CIBUILD=true \\\n  --build-arg GENERAL_ARTIFACTS_CACHE_BUCKET_URL=${GENERAL_ARTIFACTS_CACHE_BUCKET_URL}\
      \ \\\n  $BUILD_ARG \\\n  --build-arg DD_GIT_REPOSITORY_URL=https://github.com/DataDog/datadog-agent\
      \ \\\n  --build-arg DD_GIT_COMMIT_SHA=${CI_COMMIT_SHA} \\\n  ${EXTRA_BUILD_CONTEXT}\
      \ \\\n  --file $BUILD_CONTEXT/Dockerfile \\\n  --tag ${TARGET_TAG} \\\n  --label\
      \ \"org.opencontainers.image.created=$(date --rfc-3339=seconds)\" \\\n  --label\
      \ \"org.opencontainers.image.authors=Datadog <package@datadoghq.com>\" \\\n  --label\
      \ \"org.opencontainers.image.source=https://github.com/DataDog/datadog-agent\"\
      \ \\\n  --label \"org.opencontainers.image.version=$(dda inv agent.version)\"\
      \ \\\n  --label \"org.opencontainers.image.revision=${CI_COMMIT_SHA}\" \\\n  --label\
      \ \"org.opencontainers.image.vendor=Datadog, Inc.\" \\\n  --label \"target=none\"\
      \ \\\n  $BUILD_CONTEXT"
    - FLATTEN_IMAGE=${FLATTEN_IMAGE:-true}
    - "if [[ \"$FLATTEN_IMAGE\" == \"true\" ]]; then\n  crane flatten -t ${TARGET_TAG}\
      \ ${TARGET_TAG}\nfi\n"
    stage: container_build
    tags:
    - arch:amd64
    timeout: 30m
    variables:
      ARCH: amd64
      ARTIFACTS_BUILD_CONTEXT: /tmp/build_artifacts
      BUILD_CONTEXT: Dockerfiles/cluster-agent
      IMAGE: registry.ddbuild.io/ci/datadog-agent/cluster-agent
      TAG_SUFFIX: -fips
docker_build_cluster_agent_fips_arm64
  docker_build_cluster_agent_fips_arm64:
    before_script:
    - mkdir -p ${ARTIFACTS_BUILD_CONTEXT}
    - mv -vf $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf $CWS_INSTRUMENTATION_BINARIES_DIR $ARTIFACTS_BUILD_CONTEXT/
    - mv -vf Dockerfiles/agent/nosys-seccomp $BUILD_CONTEXT/
-   - go tool nm ${ARTIFACTS_BUILD_CONTEXT}/datadog-cluster-agent | grep '_Cfunc__goboringcrypto_'
+   - go tool nm $CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped | grep
+     '_Cfunc__goboringcrypto_'
    image: registry.ddbuild.io/ci/datadog-agent-buildimages/docker_arm64$CI_IMAGE_DOCKER_ARM64_SUFFIX:$CI_IMAGE_DOCKER_ARM64
    needs:
    - artifacts: true
      job: cluster_agent_fips-build_arm64
    - artifacts: true
      job: cws_instrumentation-build_amd64
    - artifacts: true
      job: cws_instrumentation-build_arm64
    retry: 2
    rules:
    - if: $CI_COMMIT_BRANCH =~ /^mq-working-branch-/
      when: never
    - when: on_success
    script:
    - TAG_SUFFIX=${TAG_SUFFIX:-}
    - BUILD_ARG=${BUILD_ARG:-}
    - EXTRA_BUILD_CONTEXT=${ARTIFACTS_BUILD_CONTEXT:+"--build-context artifacts=$ARTIFACTS_BUILD_CONTEXT"}
    - "if [[ \"$BUCKET_BRANCH\" == \"nightly\" && ( \"$IMAGE\" =~ \"ci/datadog-agent/agent\"\
      \ || \"$IMAGE\" =~ \"ci/datadog-agent/cluster-agent\" || \"$IMAGE\" =~ \"ci/datadog-agent/cws-instrumentation\"\
      \  || \"$IMAGE\" =~ \"ci/datadog-agent/otel-agent\" ) ]]; then\n  export ECR_RELEASE_SUFFIX=\"\
      -nightly\"\nelse\n  export ECR_RELEASE_SUFFIX=${CI_COMMIT_TAG+-release}\nfi\n"
    - DOCKER_CACHE_TARGET="${IMAGE}${TAG_SUFFIX}-${ARCH}:cache"
    - CACHE_SOURCE="--cache-from type=registry,ref=${DOCKER_CACHE_TARGET}"
    - CACHE_TO=""
    - "DOCKER_NO_CACHE=\"\"\nfor target in ${NO_CACHE_TARGETS}; do\n  DOCKER_NO_CACHE=\"\
      ${DOCKER_NO_CACHE_FILTER} --no-cache-filter ${target}\"\ndone\n"
    - "if [[ \"$BUCKET_BRANCH\" == \"nightly\" ]]; then\n  DOCKER_NO_CACHE=\"--no-cache\"\
      \n  CACHE_SOURCE=\"\"\n  CACHE_TO=\"--cache-to type=registry,ref=${DOCKER_CACHE_TARGET},mode=max\"\
      \nfi\nif [[ \"$CI_COMMIT_BRANCH\" == \"$CI_DEFAULT_BRANCH\" ]]; then\n  CACHE_TO=\"\
      --cache-to type=registry,ref=${DOCKER_CACHE_TARGET},mode=max\"\nfi\n"
    - "if [[ \"$DEPLOY_AGENT\" == \"true\" ]]; then\n  DOCKER_NO_CACHE=\"--no-cache\"\
      \n  CACHE_SOURCE=\"\"\nfi\n"
    - AGENT_BASE_IMAGE_TAG=registry.ddbuild.io/ci/datadog-agent/agent-base-image${ECR_RELEASE_SUFFIX}:v${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}-$ARCH
    - TARGET_TAG=${IMAGE}${ECR_RELEASE_SUFFIX}:v${CI_PIPELINE_ID}-${CI_COMMIT_SHORT_SHA}$TAG_SUFFIX-$ARCH
    - DOCKER_LOGIN=$($CI_PROJECT_DIR/tools/ci/fetch_secret.sh $DOCKER_REGISTRY_RO user)
      || exit $?
    - $CI_PROJECT_DIR/tools/ci/fetch_secret.sh $DOCKER_REGISTRY_RO token | docker login
      --username "$DOCKER_LOGIN" --password-stdin "$DOCKER_REGISTRY_URL"
    - EXIT="${PIPESTATUS[0]}"; if [ $EXIT -ne 0 ]; then echo "Unable to locate credentials
      needs gitlab runner restart"; exit $EXIT; fi
    - "docker buildx build --push --pull --platform linux/$ARCH \\\n  ${CACHE_SOURCE}\
      \ \\\n  ${CACHE_TO} \\\n  ${DOCKER_NO_CACHE} \\\n  --build-arg AGENT_BASE_IMAGE_TAG=${AGENT_BASE_IMAGE_TAG}\
      \ \\\n  --build-arg CIBUILD=true \\\n  --build-arg GENERAL_ARTIFACTS_CACHE_BUCKET_URL=${GENERAL_ARTIFACTS_CACHE_BUCKET_URL}\
      \ \\\n  $BUILD_ARG \\\n  --build-arg DD_GIT_REPOSITORY_URL=https://github.com/DataDog/datadog-agent\
      \ \\\n  --build-arg DD_GIT_COMMIT_SHA=${CI_COMMIT_SHA} \\\n  ${EXTRA_BUILD_CONTEXT}\
      \ \\\n  --file $BUILD_CONTEXT/Dockerfile \\\n  --tag ${TARGET_TAG} \\\n  --label\
      \ \"org.opencontainers.image.created=$(date --rfc-3339=seconds)\" \\\n  --label\
      \ \"org.opencontainers.image.authors=Datadog <package@datadoghq.com>\" \\\n  --label\
      \ \"org.opencontainers.image.source=https://github.com/DataDog/datadog-agent\"\
      \ \\\n  --label \"org.opencontainers.image.version=$(dda inv agent.version)\"\
      \ \\\n  --label \"org.opencontainers.image.revision=${CI_COMMIT_SHA}\" \\\n  --label\
      \ \"org.opencontainers.image.vendor=Datadog, Inc.\" \\\n  --label \"target=none\"\
      \ \\\n  $BUILD_CONTEXT"
    - FLATTEN_IMAGE=${FLATTEN_IMAGE:-true}
    - "if [[ \"$FLATTEN_IMAGE\" == \"true\" ]]; then\n  crane flatten -t ${TARGET_TAG}\
      \ ${TARGET_TAG}\nfi\n"
    stage: container_build
    tags:
    - arch:arm64
    timeout: 30m
    variables:
      ARCH: arm64
      ARTIFACTS_BUILD_CONTEXT: /tmp/build_artifacts
      BUILD_CONTEXT: Dockerfiles/cluster-agent
      IMAGE: registry.ddbuild.io/ci/datadog-agent/cluster-agent
      TAG_SUFFIX: -fips

Changes Summary

Removed: 0, Modified: 9, Added: 0, Renamed: 0

ℹ️ Diff available in the job log.

@pgimalac pgimalac changed the title Strip cluster-agent binaries [AGENTRUN-466] Strip cluster-agent binaries Jun 10, 2025

cit-pr-commenter bot commented Jun 10, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: cdf90d57-5f5f-4ede-8b99-31d4ae3aa04c

Baseline: 67df56f
Comparison: 760eb3e
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
quality_gate_logs % cpu utilization +2.35 [-0.44, +5.15] 1 Logs bounds checks dashboard
uds_dogstatsd_20mb_12k_contexts_20_senders memory utilization +0.44 [+0.39, +0.49] 1 Logs
otlp_ingest_metrics memory utilization +0.35 [+0.19, +0.51] 1 Logs
ddot_metrics memory utilization +0.25 [+0.13, +0.37] 1 Logs
tcp_syslog_to_blackhole ingress throughput +0.13 [+0.06, +0.20] 1 Logs
file_to_blackhole_1000ms_latency egress throughput +0.11 [-0.44, +0.66] 1 Logs
file_to_blackhole_300ms_latency egress throughput +0.05 [-0.56, +0.65] 1 Logs
file_to_blackhole_100ms_latency egress throughput +0.00 [-0.61, +0.62] 1 Logs
uds_dogstatsd_to_api ingress throughput +0.00 [-0.27, +0.27] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput -0.00 [-0.02, +0.01] 1 Logs
file_to_blackhole_500ms_latency egress throughput -0.02 [-0.64, +0.60] 1 Logs
file_to_blackhole_0ms_latency egress throughput -0.05 [-0.58, +0.48] 1 Logs
file_to_blackhole_0ms_latency_http2 egress throughput -0.06 [-0.62, +0.50] 1 Logs
file_to_blackhole_1000ms_latency_linear_load egress throughput -0.08 [-0.32, +0.16] 1 Logs
file_to_blackhole_0ms_latency_http1 egress throughput -0.08 [-0.65, +0.49] 1 Logs
ddot_logs memory utilization -0.16 [-0.31, -0.02] 1 Logs
otlp_ingest_logs memory utilization -0.19 [-0.32, -0.07] 1 Logs
file_tree memory utilization -0.31 [-0.45, -0.17] 1 Logs
docker_containers_cpu % cpu utilization -0.32 [-3.24, +2.60] 1 Logs
uds_dogstatsd_to_api_cpu % cpu utilization -0.43 [-1.30, +0.43] 1 Logs
quality_gate_idle_all_features memory utilization -0.45 [-0.54, -0.36] 1 Logs bounds checks dashboard
docker_containers_memory memory utilization -0.64 [-0.72, -0.55] 1 Logs
quality_gate_idle memory utilization -0.75 [-0.81, -0.68] 1 Logs bounds checks dashboard

Bounds Checks: ✅ Passed

perf experiment bounds_check_name replicates_passed links
docker_containers_cpu simple_check_run 10/10
docker_containers_memory memory_usage 10/10
docker_containers_memory simple_check_run 10/10
file_to_blackhole_0ms_latency lost_bytes 10/10
file_to_blackhole_0ms_latency memory_usage 10/10
file_to_blackhole_0ms_latency_http1 lost_bytes 10/10
file_to_blackhole_0ms_latency_http1 memory_usage 10/10
file_to_blackhole_0ms_latency_http2 lost_bytes 10/10
file_to_blackhole_0ms_latency_http2 memory_usage 10/10
file_to_blackhole_1000ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency_linear_load memory_usage 10/10
file_to_blackhole_100ms_latency lost_bytes 10/10
file_to_blackhole_100ms_latency memory_usage 10/10
file_to_blackhole_300ms_latency lost_bytes 10/10
file_to_blackhole_300ms_latency memory_usage 10/10
file_to_blackhole_500ms_latency lost_bytes 10/10
file_to_blackhole_500ms_latency memory_usage 10/10
quality_gate_idle intake_connections 10/10 bounds checks dashboard
quality_gate_idle memory_usage 10/10 bounds checks dashboard
quality_gate_idle_all_features intake_connections 10/10 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 bounds checks dashboard
quality_gate_logs intake_connections 10/10 bounds checks dashboard
quality_gate_logs lost_bytes 10/10 bounds checks dashboard
quality_gate_logs memory_usage 10/10 bounds checks dashboard

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Jun 10, 2025

Static quality checks

✅ Please find below the results from static quality gates
Comparison made with ancestor 67df56f

Successful checks

Info

Quality gate  Delta  On disk size (MiB)  Delta  On wire size (MiB)
agent_deb_amd64  -0  696.06 < 752.99  -0.03  176.03 < 187.44
agent_deb_amd64_fips  -0  694.29 < 751.36  +0.02  175.51 < 187.06
agent_heroku_amd64  0  358.56 < 369.68  +0  96.5 < 99.55
agent_msi  +0  959.88 < 987.01  -0.01  146.45 < 150.72
agent_rpm_amd64  -0  696.05 < 752.98  -0.01  177.55 < 190.03
agent_rpm_amd64_fips  -0  694.28 < 751.35  -0.03  177.4 < 189.81
agent_rpm_arm64  -0  686.0 < 739.42  -0.02  161.03 < 171.23
agent_rpm_arm64_fips  -0  684.35 < 737.91  -0.01  160.07 < 170.22
agent_suse_amd64  -0  696.05 < 752.98  -0.01  177.55 < 190.03
agent_suse_amd64_fips  -0  694.28 < 751.35  -0.03  177.4 < 189.81
agent_suse_arm64  -0  686.0 < 739.42  -0.02  161.03 < 171.23
agent_suse_arm64_fips  -0  684.35 < 737.91  -0.01  160.07 < 170.22
docker_agent_amd64  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_arm64  -0  793.25 < 858.97  -0.01  255.99 < 274.36
docker_agent_jmx_amd64  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_jmx_arm64  -0  793.25 < 858.97  -0.01  255.99 < 274.36
docker_agent_windows1809  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows1809_core  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows1809_core_jmx  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows1809_jmx  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows2022  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows2022_core  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows2022_core_jmx  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_agent_windows2022_jmx  -0  779.85 < 849.39  -0.01  268.59 < 288.34
docker_cluster_agent_amd64  -46.42  212.78 < 259.73  -30.52  72.36 < 103.68
docker_cluster_agent_arm64  -44.96  228.62 < 274.24  -28.97  68.64 < 98.45
docker_cws_instrumentation_amd64  0  7.08 < 7.12  +0  2.95 < 3.29
docker_cws_instrumentation_arm64  0  6.69 < 6.92  +0  2.7 < 3.07
docker_dogstatsd_amd64  +0  38.93 < 39.57  0  14.95 < 15.76
docker_dogstatsd_arm64  0  37.52 < 38.2  -0  13.96 < 14.83
dogstatsd_deb_amd64  0  30.61 < 31.52  -0  8.03 < 8.97
dogstatsd_deb_arm64  0  29.16 < 30.08  +0  6.98 < 7.92
dogstatsd_rpm_amd64  0  30.61 < 31.52  -0  8.04 < 8.98
dogstatsd_suse_amd64  0  30.61 < 31.52  -0  8.04 < 8.98
iot_agent_deb_amd64  0  50.49 < 60.17  -0  12.85 < 15.82
iot_agent_deb_arm64  0  47.94 < 56.94  +0  11.15 < 13.86
iot_agent_deb_armhf  0  47.52 < 56.41  +0  11.21 < 13.86
iot_agent_rpm_amd64  0  50.49 < 60.18  +0  12.87 < 15.84
iot_agent_rpm_arm64  0  47.94 < 56.94  +0  11.17 < 13.76
iot_agent_suse_amd64  0  50.49 < 60.18  +0  12.87 < 15.84

@github-actions github-actions bot added short review PR is simple enough to be reviewed quickly and removed medium review PR review might take time labels Jun 10, 2025
@pgimalac pgimalac marked this pull request as ready for review June 11, 2025 06:21
@pgimalac pgimalac requested review from a team as code owners June 11, 2025 06:21
@pgimalac pgimalac added the ask-review Ask required teams to review this PR label Jun 11, 2025
Member

@NouemanKHAL NouemanKHAL left a comment


Nice!

@pgimalac
Member Author

/merge

@dd-devflow

dd-devflow bot commented Jun 11, 2025

View all feedback in the Devflow UI.

2025-06-11 15:57:11 UTC ℹ️ Start processing command /merge


2025-06-11 15:57:17 UTC ℹ️ MergeQueue: pull request added to the queue

The expected merge time in main is approximately 52m (p90).


2025-06-11 16:33:56 UTC ℹ️ MergeQueue: This merge request was merged

Member

@L3n41c L3n41c left a comment


The figures are so significant that I’m wondering if we could have had the same result with -ldflags="-s -w"?

I just checked the output of a build job and it seems we only have -ldflags="… -w"

-ldflags="-s" is for:

 -s    disable symbol table

So, I’m wondering if it would have the same effect as strip?
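
(For anyone wanting to compare the two locally, a rough sketch; the output names and package path are placeholders, not something from this PR:)

    # Build once with linker-level stripping, once without, then strip the latter with binutils;
    # the two stripped variants should land in roughly the same ballpark size-wise.
    go build -ldflags="-s -w" -o dca-ldflags ./cmd/cluster-agent
    go build -o dca-unstripped ./cmd/cluster-agent
    cp dca-unstripped dca-stripped && strip dca-stripped
    ls -lh dca-ldflags dca-unstripped dca-stripped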

@dd-mergequeue dd-mergequeue bot merged commit e72b4f0 into main Jun 11, 2025
286 checks passed
@dd-mergequeue dd-mergequeue bot deleted the pgimalac/cluster-agent-strip-binary branch June 11, 2025 16:33
@github-actions github-actions bot added this to the 7.68.0 milestone Jun 11, 2025
@pgimalac
Member Author

👋 Hey @L3n41c, sorry I didn't see the comment before the PR got merged!

I believe strip is pretty much equivalent to using -ldflags="-s -w". I initially tried that (it's what we do for system-probe and cws-instrumentation), but we have a FIPS job that requires the symbol table to be present in the binary to check for compliance, so we can't strip at build time.
That's why I ended up building without stripping, making a copy of the unstripped binary, and then stripping the original binary.
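
Concretely, the constraint is the symbol check the FIPS docker jobs run (see the before_script change above); a sketch of why it needs the unstripped copy:

    # The FIPS image jobs grep the Go symbol table for boringcrypto symbols.
    # This works on the unstripped copy...
    go tool nm "$CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent-unstripped" | grep '_Cfunc__goboringcrypto_'
    # ...but not on the stripped binary, whose symbol table is gone, so the check would fail there.
    go tool nm "$CLUSTER_AGENT_BINARIES_DIR/datadog-cluster-agent" | grep '_Cfunc__goboringcrypto_'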
