# Kubernetes config and applications

Scripts, YAML files, and utility apps for running our Kubernetes cluster(s). Each cluster has its own subdirectory matching the name of its GCE project.

## Ingress

The ingress configs presume that the IP address and certificates have already been created and named. Both can be done from the command line.

Upload certs:

```
gcloud compute ssl-certificates create skia-org --certificate=skia.pem --private-key=skia.key
```

Take care when copying the certs around; for example, download them onto a ramdisk and unmount the ramdisk after they have been uploaded. See `create-sa.sh` in this directory.
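
A minimal sketch of that workflow, assuming a Linux host with tmpfs available; the mount point and sizes are placeholders:

```
# Hold the certs only in memory; /mnt/ramdisk is a placeholder path.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=16m tmpfs /mnt/ramdisk
cp skia.pem skia.key /mnt/ramdisk/   # however the certs are delivered to you
gcloud compute ssl-certificates create skia-org \
    --certificate=/mnt/ramdisk/skia.pem --private-key=/mnt/ramdisk/skia.key
sudo umount /mnt/ramdisk             # nothing is left on persistent disk
```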

Reserving a named global IP address:

```
gcloud compute addresses create skia-org --global
```
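
Once both exist, the ingress YAML can reference them by name. A sketch of how a GKE ingress might do this via the standard GKE annotations; the ingress name, backend service, and port below are placeholders, not taken from this repo:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skia-org   # placeholder
  annotations:
    # Use the reserved global address and the uploaded cert, both by name.
    kubernetes.io/ingress.global-static-ip-name: skia-org
    ingress.gcp.kubernetes.io/pre-shared-cert: skia-org
spec:
  defaultBackend:
    service:
      name: skia-org   # placeholder backend service
      port:
        number: 8000
```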

## `pushk` and `kube/clusters/config.json`

The definitive list of clusters, and how to talk to each one, is stored in `kube/clusters/config.json`.

This config file also defines the git repo where the YAML files are stored and where to check out that repo when pushing. The location of the checkout can be overridden via the `PUSHK_GITDIR` environment variable.
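
For example (the path is a placeholder):

```
export PUSHK_GITDIR=/tmp/k8s-config   # where pushk should check out the YAML repo
```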

The k8s YAML files are checked into https://skia.googlesource.com/k8s-config/, with one sub-directory for each cluster.

See http://go/corp-ssh-helper for details on setting up SSH.

When you run `pushk`, it updates the images for all the clusters and then runs `kubectl apply` for each file and each cluster.
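
Conceptually, the apply step amounts to something like the loop below; this is a sketch only, with the cluster names and context switching as assumptions, since `pushk` manages credentials and ordering itself:

```
# Roughly what the apply phase does; cluster names are placeholders.
for cluster in skia-public skia-corp; do
  kubectl config use-context "${cluster}"   # assumes contexts named after clusters
  for f in "${PUSHK_GITDIR}/${cluster}"/*.yaml; do
    kubectl apply -f "${f}"
  done
done
```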

## Standing up a new cluster in a different project

  1. Add a new `__skia_NNN` function to `clusters.sh` (see the sketch after this list).
  2. Create the `config-NNN.sh` file.
  3. Copy and modify the `create-cluster-corp.sh` script.
  4. Add a node pool, if necessary, using the web UI.
  5. Update `kube/clusters/config.json` with info on the new cluster.
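
As a sketch of step 1, such a function typically just points `kubectl` at the new cluster; the cluster name, project, and zone below are placeholders, so follow the existing functions in `clusters.sh` for the real pattern:

```
# Hypothetical example; NNN, the project, and the zone are placeholders.
function __skia_NNN() {
  gcloud container clusters get-credentials skia-NNN \
      --project=skia-NNN --zone=us-central1-a
}
```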