  backup-to-gcs/
  go/
  grafana/
  images/
  prometheus/
  secrets/
  .gitignore
  build_backup_to_gcs_release
  Makefile


The grafana.ini file should almost never change, so if it does, just delete the pod and have Kubernetes restart it so the new config gets read.
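For example, assuming the Grafana pod carries an `app=grafana` label (an assumption; check the actual labels with `kubectl get pods --show-labels`), the restart looks like:

```shell
# Delete the Grafana pod; its controller (Deployment/StatefulSet)
# recreates it, and the fresh pod reads the updated grafana.ini.
# The app=grafana label selector is an assumption -- verify with:
#   kubectl get pods --show-labels
kubectl delete pod -l app=grafana
```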

Edit the config file by running the ./ script.



Before deploying yaml files with service accounts you need to give yourself cluster-admin rights:

  kubectl create clusterrolebinding \
    ${USER}-cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value account)

(The --user value above is the standard GKE form; substitute your own account if it differs.)


The best way to get an idea of all the parts of Thanos and how they work together is to look at the diagram in the Thanos Tutorial.

There are two protected URLs for Thanos:

Both sites above are restricted to Googlers only.

All alert rules are evaluated by thanos-rule, which then sends alerts to alert-to-pubsub.

If an alert is changed, only `make push_config_thanos` needs to be run.
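Alert rules for thanos-rule use the standard Prometheus rule-file format. A hypothetical sketch (the group name, alert name, and expression below are illustrative, not taken from the actual config):

```
groups:
  - name: example            # hypothetical group name
    rules:
      - alert: InstanceDown  # hypothetical alert name
        expr: up == 0        # illustrative expression
        for: 5m
        labels:
          severity: critical
        annotations:
          description: '{{ $labels.instance }} has been down for 5 minutes.'
```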

A Thanos sidecar runs alongside each Prometheus instance. For each Prometheus instance that runs outside of skia-public we also run a thanos-bouncer container that sets up a reverse ssh port-forward, which allows thanos-query to make queries against the Thanos sidecar.
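The reverse port-forward is conceptually like the following sketch (the host name and user are illustrative assumptions, not the actual values used by thanos-bouncer; 10901 is the Thanos sidecar's default gRPC port):

```shell
# Illustrative only: expose the local Thanos sidecar's gRPC endpoint
# on the central host, so thanos-query there can dial localhost:10901.
# -N: no remote command; -R: reverse forward from remote to local.
ssh -N -R 10901:localhost:10901 user@central-host
```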

Additionally, thanos-store runs in skia-public and allows querying against all the historical data written by the Thanos sidecars.

The long term storage bucket for metrics is gs://skia-thanos.
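A sketch of how thanos-store reads from that bucket (the flag names and the GCS objstore config shape follow the Thanos CLI; the local data-dir path is an assumed value):

```shell
# bucket.yml tells Thanos which object store to read: GCS type,
# pointing at the skia-thanos bucket named above.
cat > bucket.yml <<EOF
type: GCS
config:
  bucket: skia-thanos
EOF

# Serve the historical blocks in the bucket to thanos-query.
# --data-dir is a local cache directory (assumed path).
thanos store \
  --data-dir=/var/thanos/store \
  --objstore.config-file=bucket.yml
```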

We do not currently run an instance of the Thanos compactor.


Obviously we can't get alerts if thanos-rule stops sending alerts to alert-to-pubsub, so we need a second path for such alerts. We use Grafana's ability to send alert emails to cover that case. There is a dashboard for Thanos setup at: and the Liveness panel has an alert set if alert-to-pubsub goes too long without seeing an alert come from thanos-rule. When firing, the alert will send email to