
Kubernetes config and applications

Scripts, YAML files, and utility apps to run our kubernetes cluster(s). Each cluster will have its own subdirectory that matches the name of the GCE project.


The ingress configs presume that the IP address and certs have already been created and named; both can be done from the command line.

Upload certs:

gcloud compute ssl-certificates create skia-org --certificate=skia.pem --private-key=skia.key

Take care when copying the certs around; for example, download them onto a ramdrive and unmount the ramdrive after they have been uploaded. See ‘’ in this directory.
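The ramdrive handling can be sketched as follows. This is only an illustration (Linux tmpfs; the mount point is an example), so the privileged commands are echoed via a `run` wrapper rather than executed:

```shell
# Sketch only: keep the cert/key on a tmpfs ramdisk so they never touch
# persistent disk. 'run' echoes each command instead of executing it.
run() { echo "+ $*"; }

run sudo mkdir -p /mnt/ramdisk
run sudo mount -t tmpfs -o size=16m tmpfs /mnt/ramdisk
run cp skia.pem skia.key /mnt/ramdisk/
run gcloud compute ssl-certificates create skia-org \
    --certificate=/mnt/ramdisk/skia.pem --private-key=/mnt/ramdisk/skia.key
# Unmount once the upload is done so no copy of the key remains.
run sudo umount /mnt/ramdisk
```

Remove the `run` wrapper to execute the commands for real.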

Reserving a named global IP address:

gcloud compute addresses create skia-org --global
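The named address and certificate are then referenced from the ingress config. A minimal sketch, assuming a GKE ingress (the annotation keys are the standard GKE ones; the backend service name and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skia-org
  annotations:
    # The reserved global address created above, referenced by name.
    kubernetes.io/ingress.global-static-ip-name: skia-org
    # The SSL certificate uploaded above, referenced by name.
    ingress.gcp.kubernetes.io/pre-shared-cert: skia-org
spec:
  defaultBackend:
    service:
      name: skia-org   # placeholder service name
      port:
        number: 8000   # placeholder port
```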

pushk and kube/clusters/config.json

The definitive list of clusters and how to talk to each one is stored in kube/clusters/config.json.

This config file also defines the git repo where YAML files are stored and where to checkout that repo when pushing. The location of the checkout can be set by setting the PUSHK_GITDIR environment variable.

The k8s YAML files are checked into that repo, with one sub-directory for each cluster.

When you run pushk, it updates the images for all the clusters and then runs kubectl apply for each file against each cluster.
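The apply step can be approximated by the loop below. The cluster names and checkout path are illustrative; the real values come from kube/clusters/config.json and PUSHK_GITDIR. The kubectl invocations are echoed rather than executed, since this is only a sketch:

```shell
# Rough approximation of pushk's apply step; not the real implementation.
apply_all() {
  gitdir=$1; shift
  for cluster in "$@"; do
    # One sub-directory of YAML files per cluster.
    for f in "$gitdir/$cluster"/*.yaml; do
      # Echoed, not executed, since this is only a sketch.
      echo kubectl --context "$cluster" apply -f "$f"
    done
  done
}

# Cluster names are examples only.
apply_all "${PUSHK_GITDIR:-/tmp/k8s-yaml}" skia-public skia-corp
```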

Continuous Deployment

Continuous deployment uses three pieces of infrastructure:

  1. The same build_foo config files that are used when building from the desktop.
  2. GCP Container Builder.
  3. The continuous-deploy application.

To do continuous deployment for any application that depends upon the Skia repo, such as fiddler, you will need to add two new steps and a new image to the docker/cloudbuild.yaml file in the Skia repo.

For example:

  - name: ''
    dir: '/home/skia/golib/src/'
    env:
      - 'ROOT=/workspace/__staging'
      - 'SKIP_BUILD=1'
    args: ['./build_fiddler_release']
    timeout: 600s

As a note, the image that this cloud builder step runs on is re-created on each commit to this repo (see docker/Dockerfile).

This sets the working directory to the one for the app we want to build and then runs the build_fiddler_release script. Note that we set the ROOT and SKIP_BUILD environment variables so that the script only builds the application and copies the files into the staging directory without calling docker on it. Also note that we put our work product under the /workspace directory, which GCP Container Builder preserves between steps.

Also note that the preferred method is to add a Makefile target that runs all the tests and then runs build_fiddler_release, and to have the cloud build step call make instead of build_fiddler_release directly.
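Such a target might look like the following sketch. The target name and test path are hypothetical; only the build_fiddler_release script comes from the repo:

```make
# Hypothetical target: run the tests, then build the release staging
# directory via the existing script.
release_fiddler:
	go test ./...
	./build_fiddler_release
```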

Then we add a second step that runs docker on that container to build the image:

  - name: ''
    args: ['build', '-t', '$PROJECT_ID/fiddler:$COMMIT_SHA', '/workspace/__staging']
    timeout: 600s

See Substituting Variable Values for more details on $PROJECT_ID and $COMMIT_SHA.

Finally, we add the new image to the list of images that get pushed:

  - '$PROJECT_ID/fiddler:$COMMIT_SHA'
  - '$PROJECT_ID/skia-release:prod'

The continuous-deploy application runs in skia-public and listens for the PubSub messages GCP Container Builder sends when it has successfully completed a build; each message includes the list of images that were uploaded. Update the continuous-deploy.yaml file to include the short name of the image you want continuously deployed as a command-line argument:

  - name: continuous-deploy
    args:
      - "--logtostderr"
      - "--prom_port=:20000"
      - "fiddler"

Since continuous-deploy runs pushk, all of these deployments will be recorded in the git repo for skia-public.

Standing up a new cluster in a different project

  1. Add a new __skia_NNN function to
  2. Create the file.
  3. Copy and modify the script.
  4. Add a node pool if necessary using the web UI.
  5. Update kube/clusters/config.json with info on the new cluster.
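Once the new cluster exists, fetching credentials so that kubectl (and pushk) can reach it looks roughly like the following. The project, cluster, and zone names are placeholders, and the command is echoed rather than executed:

```shell
# Placeholders throughout; the command is echoed, not executed.
cred_cmd() {
  # $1 = cluster name, $2 = zone, $3 = project.
  echo gcloud container clusters get-credentials "$1" --zone "$2" --project "$3"
}

cred_cmd skia-NNN us-central1-a skia-NNN
```

After running the real command, the new context appears in `kubectl config get-contexts`.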