Kubernetes Quickstart

Learn how to deploy a self-hosted router in Kubernetes using Helm charts


This guide shows how to:

  • Get the router Helm chart from the Apollo container repository.

  • Deploy a router with a basic Helm chart.

note
The Apollo Router Core source code and all its distributions are made available under the Elastic License v2.0 (ELv2).

Prerequisites

note
This guide assumes you are familiar with Kubernetes and Helm. If you are not familiar with either, you can find a Kubernetes tutorial and a Helm tutorial to get started.
  • A GraphOS graph set up in your Apollo account. If you don't have a graph, you can create one in the GraphOS Studio.

  • Helm version 3.x or higher installed on your local machine (the OCI registry commands in this guide require Helm v3.8 or later).

  • A Kubernetes cluster with access to the internet.

GraphOS graph

Set up your self-hosted graph and get its graph ref and API key.

If you need a guide to set up your graph, you can follow the self-hosted router quickstart and complete step 1 (Set up Apollo tools), step 4 (Obtain your subgraph schemas), and step 5 (Publish your subgraph schemas).

Kubernetes cluster

If you don't have a Kubernetes cluster, you can set one up using kind or minikube locally, or by referring to your cloud provider's documentation.
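
For example, here is a minimal sketch of creating a local cluster with kind and a dedicated namespace for the router; the cluster and namespace names are arbitrary placeholders:

Bash
# Create a local cluster with kind (or run `minikube start` instead)
kind create cluster --name router-quickstart

# Create a namespace to deploy the router into
kubectl create namespace apollo-router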

Quickstart

To deploy the router, run the helm install command with an argument for the router chart's OCI URL. Optionally, you can add an argument for a values.yaml configuration file and/or additional arguments to override specific configuration values.

Bash
helm install <name_for_install> \
  --namespace apollo-router \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router

The necessary arguments for specific configuration values:

  • --set managedFederation.apiKey="<graph-api-key>". The API key for your GraphOS graph.

  • --set managedFederation.graphRef="<graph-ref>". The graph ref of the GraphOS graph variant to use.

Some optional but recommended arguments (combined in the example after this list):

  • --namespace <router-namespace>. The namespace scope for this deployment.

  • --version <router-version>. The version of the router to deploy. If not specified, helm install deploys the latest version.
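
Combining the required and optional arguments, an install command might look like the following sketch (the release name my-router, the namespace, and the chart version are placeholders; --create-namespace is only needed if the namespace doesn't exist yet):

Bash
helm install my-router \
  --namespace apollo-router \
  --create-namespace \
  --version 2.3.0 \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router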

Verify deployment

Verify that your router is one of the deployed releases with the helm list command. If you deployed with the --namespace <router-namespace> option, you can list only the releases within your namespace:

Bash
helm list --namespace <router-namespace>
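
You can also inspect the router pods and Service with kubectl and send a test query through a port-forward. The following is a sketch; the Service name is derived from your release name, so check the kubectl get output and substitute the actual name:

Bash
# List the pods and Service created by the router chart
kubectl get pods,svc --namespace <router-namespace>

# Forward a local port to the router Service (port 80 by default)
kubectl port-forward --namespace <router-namespace> svc/<service-name> 8080:80

# In another terminal, send a minimal GraphQL query to the router
curl -X POST http://localhost:8080/ \
  -H 'content-type: application/json' \
  --data '{"query":"{ __typename }"}'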

Deployed architecture

By default, the chart deploys a router Deployment with a single replica, exposed inside the cluster through a ClusterIP Service, along with a ServiceAccount for the router pods. An Ingress and horizontal pod autoscaling are available in the chart but disabled by default.
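
To see these objects after installing, you can run a command like the following (namespace placeholder assumed):

Bash
# Show the Kubernetes objects created by the router chart
kubectl get deployment,service,serviceaccount --namespace <router-namespace>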

Router Helm chart configuration

Apollo provides an application Helm chart with each release of Apollo Router Core in GitHub. Since router version v0.14.0, Apollo has released the router Helm chart as an Open Container Initiative (OCI) image in the GitHub container registry.

note
The path to the OCI router chart is oci://ghcr.io/apollographql/helm-charts/router and tagged with the applicable router release version. For example, router version v2.3.0's Helm chart would be oci://ghcr.io/apollographql/helm-charts/router:2.3.0.
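
To work with a specific chart version, pass the tag with Helm's --version flag. For example, assuming Helm v3.8 or later with OCI support:

Bash
# Inspect the chart metadata for a specific release version
helm show chart oci://ghcr.io/apollographql/helm-charts/router --version 2.3.0

# Or download the chart archive locally
helm pull oci://ghcr.io/apollographql/helm-charts/router --version 2.3.0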

You can customize a deployed router with the same command-line options and YAML configuration options available to any router, by setting the corresponding Helm CLI options and YAML keys in a values file.
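
For example, here is a minimal sketch of a values file that overrides two of the defaults shown below; the file name my_values.yaml and release name my-router are placeholders:

Bash
# Write a small override file; both keys exist in the chart's default values.yaml
cat > my_values.yaml <<'EOF'
replicaCount: 2
extraEnvVars:
  - name: APOLLO_ROUTER_LOG
    value: debug
EOF

# Install (or upgrade) the router with the overrides applied
helm upgrade --install my-router oci://ghcr.io/apollographql/helm-charts/router \
  --namespace apollo-router \
  --values my_values.yaml \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>"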

Each router chart has a default values.yaml file with router and deployment settings. The released, unedited file has a few explicit settings, including:

values.yaml for router v2.3.0
The values of the Helm chart for Apollo Router Core v2.3.0 in the GitHub container repository, as output by the helm show command:
Bash
helm show values oci://ghcr.io/apollographql/helm-charts/router
YAML
# Default values for router.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

# -- See https://www.apollographql.com/docs/graphos/reference/router/configuration#yaml-config-file for yaml structure
router:
  configuration:
    supergraph:
      listen: 0.0.0.0:4000
    health_check:
      listen: 0.0.0.0:8088

  args:
    - --hot-reload

managedFederation:
  # -- If using managed federation, the graph API key to identify router to Studio
  apiKey:
  # -- If using managed federation, use existing Secret which stores the graph API key instead of creating a new one.
  # If set along `managedFederation.apiKey`, a secret with the graph API key will be created using this parameter as name
  existingSecret:
  # -- If using managed federation, the name of the key within the existing Secret which stores the graph API key.
  # If set along `managedFederation.apiKey`, a secret with the graph API key will be created using this parameter as key, defaults to using a key of `managedFederationApiKey`
  existingSecretKeyRefKey:
  # -- If using managed federation, the variant of which graph to use
  graphRef: ""

# This should not be specified in values.yaml. It's much simpler to use --set-file from helm command line.
# e.g.: helm ... --set-file supergraphFile="location of your supergraph file"
supergraphFile:

# An array of extra environmental variables
# Example:
# extraEnvVars:
#   - name: APOLLO_ROUTER_SUPERGRAPH_PATH
#     value: /etc/apollo/supergraph.yaml
#   - name: APOLLO_ROUTER_LOG
#     value: debug
#
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""

# An array of extra VolumeMounts
# Example:
# extraVolumeMounts:
#   - name: rhai-volume
#     mountPath: /dist/rhai
#     readonly: true
extraVolumeMounts: []

# An array of extra Volumes
# Example:
# extraVolumes:
#   - name: rhai-volume
#     configMap:
#       name: rhai-config
#
extraVolumes: []

image:
  repository: ghcr.io/apollographql/router
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

containerPorts:
  # -- If you override the port in `router.configuration.server.listen` then make sure to match the listen port here
  http: 4000
  # -- For exposing the metrics port when running a serviceMonitor for example
  metrics: 9090
  # -- For exposing the health check endpoint
  health: 8088

# -- An array of extra containers to include in the router pod
# Example:
# extraContainers:
#   - name: coprocessor
#     image: acme/coprocessor:1.0
#     ports:
#       - containerPort: 4001
extraContainers: []

# -- An array of init containers to include in the router pod
# Example:
# initContainers:
#   - name: init-myservice
#     image: busybox:1.28
#     command: ["sh"]
initContainers: []

# -- A map of extra labels to apply to the resources created by this chart
# Example:
# extraLabels:
#   label_one_name: "label_one_value"
#   label_two_name: "label_two_value"
extraLabels: {}

lifecycle: {}
#  preStop:
#    exec:
#      command:
#        - /bin/bash
#        - -c
#        - sleep 10

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80
  annotations: {}
  targetport: http

serviceMonitor:
  enabled: false

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

# set to true to enable istio's virtualservice
virtualservice:
  enabled: false
  # namespace: ""
  # gatewayName: "" # Deprecated in favor of gatewayNames
  # gatewayNames: []
  #  - "gateway-1"
  #  - "gateway-2"
  # Hosts: "" # configurable but will default to '*'
  #  - somehost.domain.com
  # http:
  #   main:
  #     # set enabled to true to add
  #     # the default matcher of `exact: "/" or prefix: "/graphql"`
  #     # with the <$fullName>.<.Release.Namespace>.svc.cluster.local destination
  #     enabled: true
  #   # use additionals to provide your custom virtualservice rules
  #   additionals: []
  #   - name: "default-nginx-routes"
  #       match:
  #         - uri:
  #             prefix: "/foo"
  #       rewrite:
  #         uri: /
  #       route:
  #         - destination:
  #             host: my.custom.backend.svc.cluster.local
  #             port:
  #               number: 80

# set to true and provide configuration details if you want to make external https calls through istio's virtualservice
serviceentry:
  enabled: false
  # hosts:
  # a list of external hosts you want to be able to make https calls to
  #   - api.example.com

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
  #
  # Specify container-specific HPA scaling targets
  # Only available in 1.27+ (https://kubernetes.io/blog/2023/05/02/hpa-container-resource-metric/)
  # containerBased:
  #   - name: <container name>
  #     type: cpu
  #     targetUtilizationPercentage: 75

# -- Sets the [rolling update strategy parameters](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment). Can take absolute values or % values.
rollingUpdate:
  {}
# Defaults if not set are:
#  maxUnavailable: 25%
#  maxSurge: 25%

nodeSelector: {}

tolerations: []

affinity: {}

# -- Sets the [pod disruption budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) for Deployment pods
podDisruptionBudget: {}

# -- Set to existing PriorityClass name to control pod preemption by the scheduler
priorityClassName: ""

# -- Sets the [termination grace period](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution) for Deployment pods
terminationGracePeriodSeconds: 30

probes:
  # -- Configure readiness probe
  readiness:
    initialDelaySeconds: 0
  # -- Configure liveness probe
  liveness:
    initialDelaySeconds: 0

# -- Sets the [topology spread constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) for Deployment pods
topologySpreadConstraints: []

# -- Sets the restart policy of pods
restartPolicy: Always

Separate configurations per environment

To support different deployment configurations for different environments (development, staging, production, and so on), Apollo recommends splitting your configuration values into separate files:

  • A common file, which contains values that apply across all environments.

  • An environment-specific file for each environment, which overrides values from the common file and adds new environment-specific values.

The helm install command applies each --values <values-file> option in the order you set them within the command. Therefore, a common file must be set before an environment file so that the environment file's values are applied last and override the common file's values.

For example, this command deploys with a common_values.yaml file applied first and then a prod_values.yaml file:

Bash
helm install <name_for_install> \
  --namespace <router-namespace> \
  --set managedFederation.apiKey="<graph-api-key>" \
  --set managedFederation.graphRef="<graph-ref>" \
  oci://ghcr.io/apollographql/helm-charts/router \
  --version <router-version> \
  --values router/values.yaml \
  --values common_values.yaml \
  --values prod_values.yaml
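
As a hypothetical sketch, the common and environment files in the command above might contain values like these (all keys come from the chart's default values.yaml):

Bash
# common_values.yaml: values shared by every environment
cat > common_values.yaml <<'EOF'
router:
  configuration:
    health_check:
      listen: 0.0.0.0:8088
EOF

# prod_values.yaml: production-only overrides, applied last
cat > prod_values.yaml <<'EOF'
replicaCount: 3
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
EOF

Because prod_values.yaml is passed last, its replicaCount and autoscaling settings take precedence over anything set in common_values.yaml.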