Description
When a service refers to a secret in the `build` section that isn't defined anywhere yet, `docker compose convert` fails with `service "test" refers to undefined build secret my-ci-secret: invalid compose project`. I am not sure whether this is by design, but I hope not; it does not fail like that for runtime secrets.

The use case is simple: you want to use `docker compose convert` in CI to merge multiple compose files and convert non-YAML compose files to YAML, producing a single effective compose YAML. That file can then be read and processed, e.g. to inject the secrets it asks for. For instance, my `docker-compose.yaml` could be something like:
```yaml
services:
  test:
    build:
      context: .
      target: test
      secrets:
        - my-ci-secret
    secrets:
      - my-ci-secret
```
And my `docker-compose.local.yaml` could be:

```yaml
secrets:
  my-ci-secret:
    file: ...
```
My pipeline would then not use that `docker-compose.local.yaml`; it would use only `docker-compose.yaml` (and possibly others) with `docker compose convert`. The pipeline can read the resulting `docker-compose.auto.yaml`, understand that there are secrets expected to be injected, and produce a `docker-compose.override.yaml`:
```yaml
secrets:
  my-ci-secret:
    file: /path/to/build/secret
```
That override may obviously add more things, such as cache locations etc. So the pipeline needs the result of `docker compose convert`, because it needs to know what is in it before deciding what to produce.
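The pipeline step described above could be sketched roughly like this, assuming the converted project is exported as JSON (e.g. via `docker compose convert --format json`, if your Compose version supports that flag). The embedded document below is an illustrative stand-in for the converted output, and `expected_secrets` is a hypothetical helper name:

```python
import json

# Illustrative stand-in for the converted project; in a real pipeline this
# would be read from the stdout of `docker compose convert --format json`.
converted = json.loads("""
{
  "services": {
    "test": {
      "build": {
        "context": ".",
        "target": "test",
        "secrets": [{"source": "my-ci-secret"}]
      },
      "secrets": [{"source": "my-ci-secret"}]
    }
  }
}
""")

def expected_secrets(project):
    """Secret names referenced by services but not defined at the top level."""
    defined = set(project.get("secrets", {}))
    referenced = set()
    for svc in project.get("services", {}).values():
        for ref in svc.get("secrets", []) + svc.get("build", {}).get("secrets", []):
            # Short syntax is a plain string; long syntax is a mapping with "source".
            referenced.add(ref if isinstance(ref, str) else ref["source"])
    return sorted(referenced - defined)

missing = expected_secrets(converted)
print(missing)  # ['my-ci-secret']

# Emit a docker-compose.override.yaml fragment that injects the secrets.
# The file path is the same placeholder used in the description above.
override_lines = ["secrets:"]
for name in missing:
    override_lines += [f"  {name}:", "    file: /path/to/build/secret"]
print("\n".join(override_lines))
```

This is exactly the step the current behavior breaks: `convert` refuses to produce the effective YAML when a build secret is still undefined, so the pipeline never gets the document it needs to inspect.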
If I remove the `secrets` under `build`, it works exactly as I expect, but it fails with the `build` secrets, which seems wrong to me and breaks this perfectly valid CI scenario.
Steps To Reproduce
See description
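Concretely, with the files from the description (file names assumed as in the example above), the failing and passing invocations would be:

```shell
# docker-compose.yaml references the build secret "my-ci-secret"
# but contains no top-level definition for it:
docker compose -f docker-compose.yaml convert
# fails with: service "test" refers to undefined build secret my-ci-secret: invalid compose project

# Adding the local file that defines the secret makes convert succeed:
docker compose -f docker-compose.yaml -f docker-compose.local.yaml convert
```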
Compose Version
v2.11.2
Docker Environment
```
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.9.1)
  compose: Docker Compose (Docker Inc., v2.11.2)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.18
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc version: v1.1.4-0-g5fd4c4d1
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.15.63-flatcar
 Operating System: Alpine Linux v3.16 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.18GiB
 Name: ***
 ID: RLDU:KQTI:YSZ5:NJUM:RXIF:2QD2:NUTO:5RQC:A6I6:KXAE:G4XG:U7HG
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
```
Anything else?
No response