
Scrap Exchange

A unified backend service for storing scraps of code across all of Skia Infrastructure's services.


Skia Infrastructure currently has services that store scraps of code. The term ‘scrap’ implies that the code cannot stand on its own, such as the JSON that underlies SkSL shaders, or JS demos.

Scrap Exchange is a unified backend service constructed to make it easier to stand up such web UIs, and to make linking top-level web UIs together easier, for example, going from a particle ‘scrap’ to the complete C++ code, or the complete JS code, that runs the shaders. The scrap service has the following features:

  • Store scraps by ‘type’, i.e. SVG vs SkSL vs C++.
  • Load scraps via URL.
  • Store metadata with scraps. For example, one SkSL scrap may refer to other child shaders, so the metadata will contain references to those other scraps. Other things that could be stored in the metadata include the values of controls used as inputs to a shader.
  • Ability for Infrastructure team members to review and delete scraps as needed, for example, for copyright violations.
  • The scrap server will also allow ‘wrapping’ scraps in boilerplate code. For example, an SkSL scrap can be wrapped in C++ and then used directly by fiddle. This includes understanding the metadata about the scrap, such as child shaders, and filling in the C++ code template appropriately.
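The wrapping idea above can be sketched in Go. Everything here is hypothetical: the `Scrap` struct, the `WrapInCpp` function, and the C++ template are stand-ins for whatever the real service (under go/) actually uses, shown only to illustrate wrapping an SkSL scrap in boilerplate.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Scrap is a hypothetical sketch of a stored scrap; the real types
// live under go/ in this directory and may differ.
type Scrap struct {
	Type string            // e.g. "sksl", "svg", "particle"
	Body string            // the scrap source itself
	Meta map[string]string // e.g. references to child shader scraps
}

// cppTemplate is an illustrative (not the actual) C++ wrapper template.
var cppTemplate = template.Must(template.New("cpp").Parse(
	`const char* sksl = R"({{.Body}})";`))

// WrapInCpp wraps an SkSL scrap body in C++ boilerplate, the kind of
// output that could then be fed to something like fiddle.
func WrapInCpp(s Scrap) (string, error) {
	if s.Type != "sksl" {
		return "", fmt.Errorf("cannot wrap scrap of type %q in C++", s.Type)
	}
	var b bytes.Buffer
	if err := cppTemplate.Execute(&b, s); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	s := Scrap{Type: "sksl", Body: "half4 main(float2 xy) { return half4(1); }"}
	out, _ := WrapInCpp(s)
	fmt.Println(out)
}
```

A real implementation would also walk the child-shader references in the metadata and splice those scraps into the template; this sketch omits that step.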

See http://go/scrap-exchange for more details.
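The ‘load scraps via URL’ feature amounts to addressing a scrap by type and id. The host and the path pattern below are hypothetical stand-ins, not the service's real API; see http://go/scrap-exchange for the actual endpoints.

```go
package main

import "fmt"

// scrapURL builds a URL for loading a raw scrap. Both the base host and
// the /_/scraps/{type}/{id} pattern are assumptions for illustration.
func scrapURL(base, scrapType, id string) string {
	return fmt.Sprintf("%s/_/scraps/%s/%s", base, scrapType, id)
}

func main() {
	fmt.Println(scrapURL("https://scrap.example.org", "sksl", "@smiley"))
}
```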


The service uses the service account, originally created in //infra/am, to talk to the Chrome Infra Auth API.


We use the following buckets for each instance:

skia-public: gs://skia-public-scrap-exchange


Scrap Exchange is automatically built by a Louhi flow whenever a change merges anywhere in this repository. If this results in a new Docker image, Louhi will automatically update references in k8s-config, and k8s-deployer will then automatically deploy that new image. Deployment is fully automatic.

A Docker image can be manually pushed to GCR by running make push_I_am_really_sure. In addition to pushing, this target will also update k8s-config references. This will prevent Louhi from updating future scrapexchange references in k8s-config. To re-enable automatic updates, a manual change will need to be landed in k8s-config that returns the image reference from tag style (e.g. {tagname}) to digest style (e.g. {digest}).