This cheatsheet provides quick tips on how to build and test code in our repository using Bazel.
Start here if you're completely new to Bazel.
The reference documentation for our Bazel build can be found at the Golinks referenced throughout this document, e.g. go/skia-infra-bazel-backend for Go code and go/skia-infra-bazel-frontend for front-end code.
This section includes steps every engineer should follow to get a consistent development experience.
Bazelisk is a wrapper for Bazel that downloads and runs the version of Bazel specified in `//.bazelversion`. It serves a similar purpose as nvm for NodeJS.

Bazelisk is recommended over plain Bazel because the `bazel` command on our gLinux workstations is automatically updated every time a new version of Bazel is released, and might therefore not match the version pinned in `//.bazelversion`.
The easiest way to install Bazelisk is via `npm`, e.g.:

```
npm install -g @bazel/bazelisk
```
An alternate method is to install Bazelisk in a temporary directory and then copy the correct binary to your PATH. For example:

```
mkdir /tmp/bazelisk && cd /tmp/bazelisk
npm install @bazel/bazelisk
cp node_modules/@bazel/bazelisk/bazelisk-linux_amd64 ~/bin/bazelisk
ln ~/bin/bazelisk ~/bin/bazel
```
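Either way, a quick sanity check is to confirm that the `bazel` command now resolves to Bazelisk (this assumes `~/bin` is on your PATH):

```
$ which bazel
$ bazel version
```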
Tips:

- Create an `alias bazel="bazelisk"` and add it to your `~/.bash_aliases` file.
- Configure `bazel` to be the full path to `bazelisk` in your IDE of choice. This is necessary for some extensions to work correctly, such as the Bazel plugin for Visual Studio Code.

We use Gazelle to automatically generate `BUILD.bazel` files for most of our Go and TypeScript code.
Note that we occasionally edit Gazelle-generated `BUILD.bazel` files by hand, e.g. to mark tests as flaky.
Run `make gazelle` from the repository's root directory.
Currently, Gazelle only generates front-end Bazel targets for the directories explicitly listed in `//bazel/gazelle/frontend/allowlist.go`. Edit this file to enable generation of front-end targets for your app.
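As a purely hypothetical sketch (consult the actual file for its real structure), the allowlist is a Go source file along these lines, and enabling your app means adding its directory:

```
// bazel/gazelle/frontend/allowlist.go (illustrative sketch only).
package allowlist

// Directories for which Gazelle generates front-end targets.
var allowlist = []string{
	"golden/modules",
	"perf/modules",
	"myapp/modules", // Hypothetical new entry enabling your app.
}
```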
TypeScript support is provided via a custom Gazelle extension, which can be found in `//bazel/gazelle/frontend`.
Tip: see here for details on how this extension decides which rule to generate for a given TypeScript file.
Buildifier is a linter and formatter for `BUILD.bazel` and other Bazel files (`WORKSPACE`, `*.bzl`, etc.). Run `bazel run //:buildifier`.
Our Bazel build is tested on RBE via the following tasks:

- Build: `bazel build //... --config=remote`
- Test: `bazel test //... --config=remote`

We regard the above tasks as the source of truth for build and test correctness.

As an insurance policy against RBE outages, we also have the following tasks:

- Build: `bazel build //...`
- Test: `bazel test //...`

The non-RBE tasks tend to be a bit more brittle than the RBE ones, which is why they are excluded from the CQ.
Use commands `bazel build` and `bazel test` to build and test Bazel targets, respectively. Examples:

```
# Single target.
$ bazel build //go/util:util
$ bazel test //go/util:util_test

# All targets under a directory and any subdirectories.
$ bazel build //go/...
$ bazel test //go/...

# All targets in the repository.
$ bazel build //...
$ bazel test //...
```
Any build artifacts produced by `bazel build` or `bazel test` will be found under `//_bazel_bin`.
Note that it's not necessary to `bazel build` a test target before `bazel test`-ing it: `bazel test` will automatically build the test target if it wasn't built already (i.e. if it wasn't found in the Bazel cache).
More on `bazel build` here. More on `bazel test` here.
By default, Bazel will build and test targets on the host system (aka a local build). To build on RBE, add flag `--config=remote`, e.g.:

```
$ bazel build //go/util:util --config=remote
$ bazel test //go/util:util_test --config=remote
```
Use command `bazel run` to run binary Bazel targets (such as `go_binary`, `sh_binary`, etc.), e.g.:

```
# Without command-line parameters.
$ bazel run //scripts/run_emulators:run_emulators

# With command-line parameters.
$ bazel run //scripts/run_emulators:run_emulators -- start
```
Alternatively, you can run the Bazel-built artifact directly, e.g.:

```
$ bazel build //scripts/run_emulators:run_emulators
$ _bazel_bin/scripts/run_emulators/run_emulators_/run_emulators start
```
The exact path of the binary under `//_bazel_bin` depends on the Bazel rule (`go_binary`, `py_binary`, etc.). As you can see, said path can be non-obvious, so it's generally recommended to use `bazel run`.
More on `bazel run` here.
Our Go codebase is built and tested using Bazel rules from the rules_go repository. The `go_test` rule documentation is a great read to get started.
As mentioned in the Gazelle section, all Bazel targets for Go code are generated with Gazelle.
Read go/skia-infra-bazel-backend for the full details.
Simply use `bazel build` (and optionally `bazel run`) as described earlier.
Tip: Start by reading the General testing tips section.
Our setup differs slightly from typical Go + Bazel projects in that we use a wrapper macro around `go_test` to handle manual tests. Gazelle is configured to use this macro via a `gazelle:map_kind` directive in `//BUILD.bazel`. The macro is defined in `//bazel/go/go_test.bzl`. Read the macro's docstring for the full details.
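For reference, a `map_kind` directive follows the shape below. This snippet is illustrative only; the authoritative directive lives in `//BUILD.bazel`:

```
# Tells Gazelle to emit our wrapper macro (loaded from //bazel/go/go_test.bzl)
# wherever it would otherwise emit rules_go's stock go_test rule.
# gazelle:map_kind go_test go_test //bazel/go/go_test.bzl
```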
To mark specific Go test cases as manual, extract them out into a separate file ending with `_manual_test.go` within the same directory, and call `unittest.ManualTest(t)` from each test case in said file.
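A minimal sketch of such a file is shown below; the `unittest` import path is an assumption based on our test utilities, so check an existing `_manual_test.go` file for the exact helper:

```
// some_pkg_manual_test.go (hypothetical file name).
package some_pkg

import (
	"testing"

	"go.skia.org/infra/go/testutils/unittest" // Assumed import path.
)

func TestSomethingSlow_Manual(t *testing.T) {
	// Marks this test case as manual so it only runs when explicitly requested.
	unittest.ManualTest(t)

	// ... expensive or environment-dependent test logic ...
}
```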
The `go_test` macro in `//bazel/go/go_test.bzl` places files ending with `_manual_test.go` in a separate `go_test` target, which is tagged as manual.
More on manual tests here.
The `go test` command supports flags such as `-v` to print verbose output, `-run` to run a specific test case, etc. Under Bazel, these flags can be passed to a `go_test` target via `--test_arg`, but they need to be prefixed with `-test.`, e.g.:

```
# Equivalent to "go test ./go/util -v".
$ bazel test //go/util:util_test --test_arg=-test.v

# Equivalent to "go test ./go/util -run=TestFoo".
$ bazel test //go/util:util_test --test_arg=-test.run=TestFoo
```
The following example shows what a typical `bazel test` invocation for Go tests might look like while debugging a `go_test` target locally:

```
# Equivalent to "$ MY_ENV_VAR=foo go test ./go/my_pkg -v -logtostderr"
$ bazel test //go/my_pkg:my_pkg_test \
    --test_output=streamed \
    --nocache_test_results \
    --test_arg=-test.v \
    --test_arg=-logtostderr \
    --test_env=MY_ENV_VAR=foo
```
Our front-end code is built and tested using a set of custom Bazel macros built on top of rules provided by the rules_nodejs repository. All such macros are either defined in or re-exported from `//infra-sk/index.bzl`. This section uses the terms macro and rule interchangeably when referring to the macros exported from said file.
As mentioned in the Gazelle section, most Bazel targets for front-end code are generated with Gazelle.
Read go/skia-infra-bazel-frontend for the full details.
Simply use `bazel build` (and optionally `bazel run`) as described earlier.
Demo pages are served via a Gazelle-generated `sk_demo_page_server` rule.

Use `bazel run` to serve a demo page via its `sk_demo_page_server` rule, e.g.:

```
$ bazel run //golden/modules/dots-sk:demo_page_server
```
To rebuild the demo page automatically upon changes in the custom element's directory, use the `demopage.sh` script found in the repository's root directory, e.g.:

```
$ ./demopage.sh golden/modules/dots-sk
```

This script uses entr to watch for file changes and re-execute the `bazel run` command as needed. The above `demopage.sh` invocation is equivalent to:

```
$ ls golden/modules/dots-sk/* | entr -r bazel run //golden/modules/dots-sk:demo_page_server
```
Install `entr` on a gLinux workstation with `sudo apt-get install entr`.

In the future, we might replace this script with ibazel, which requires changes to the `sk_demo_page_server` rule.
Tip: Start by reading the General testing tips section.
Front-end code testing is done via three different Bazel rules:

- `karma_test` for in-browser tests based on the Karma test runner.
- `sk_element_puppeteer_test` for Puppeteer tests that require a running `sk_demo_page_server`.
- `nodejs_test` for any other server-side TypeScript tests (i.e. NodeJS tests).

Gazelle decides which rule to generate for a given `*_test.ts` file based on the following patterns:

- `karma_test` is used for files matching `//<app>/modules/<element>/<element>_test.ts`.
- `sk_element_puppeteer_test` is used for files matching `//<app>/modules/<element>/<element>_puppeteer_test.ts`.
- `nodejs_test` is used for files matching `*_nodejs_test.ts`.
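If you are unsure which rule Gazelle generated for a given file, `bazel query` can list the test targets in a package (the output below is illustrative):

```
$ bazel query 'tests(//golden/modules/dots-sk:all)'
//golden/modules/dots-sk:dots-sk_puppeteer_test
//golden/modules/dots-sk:dots-sk_test
```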
Use `bazel test` to run an in-browser test (`karma_test` rule) in headless mode:

```
$ bazel test //golden/modules/dots-sk:dots-sk_test
```

To run a Karma test in the browser during development, use `bazel run` instead:

```
$ bazel run //golden/modules/dots-sk:dots-sk_test
...
Karma v4.4.1 server started at http://0.0.0.0:9876/
```
Use `bazel test` to run a Puppeteer test (`sk_element_puppeteer_test` rule), e.g.:

```
$ bazel test //golden/modules/dots-sk:dots-sk_puppeteer_test
```

To view the screenshots captured by a Puppeteer test, use the `//:extract_puppeteer_screenshots` target:

```
$ mkdir /tmp/screenshots
$ bazel run //:extract_puppeteer_screenshots -- --output_dir /tmp/screenshots
```
To step through a Puppeteer test with a debugger, run your test with `bazel run`, and append `_debug` at the end of the target name, e.g.:

```
# Normal test execution (for reference).
$ bazel test //golden/modules/dots-sk:dots-sk_puppeteer_test

# Test execution in debug mode.
$ bazel run //golden/modules/dots-sk:dots-sk_puppeteer_test_debug
```
This will print a URL to stdout that you can use to attach a Node.js debugger (such as the VS Code Node.js debugger, or Chrome DevTools). Your test will wait until a debugger is attached before continuing.
Example debug session with Chrome DevTools:
1. Add `debugger` statements in your test code to set breakpoints, e.g.:

   ```
   // golden/modules/dots-sk/dots-sk_puppeteer_test.ts
   describe('dots-sk', () => {
     it('should do something', () => {
       debugger;
       ...
     });
   });
   ```

2. Run `bazel run //golden/modules/dots-sk:dots-sk_puppeteer_test_debug`.
3. Open a new Chrome tab, type `chrome://inspect` in the URL bar, then press return.
4. Attach Chrome DevTools to the paused Node.js process; execution will pause at the first `debugger`
statement.

By default, Puppeteer starts a Chromium instance in headless mode. If you would like to run your test in headful mode, invoke your test with `bazel run`, and append `_debug_headful` at the end of the target name, e.g.:

```
$ bazel run //golden/modules/dots-sk:dots-sk_puppeteer_test_debug_headful
```
Run your test in headful mode to visually inspect how your test interacts with the demo page under test as you step through your test code with the attached debugger.
Use `bazel test` to run a NodeJS test (`nodejs_test` rule), e.g.:

```
$ bazel test //puppeteer-tests:util_nodejs_test
```
The below tips apply to all Bazel test targets (e.g. `go_test`, `karma_test`, etc.).

By default, Bazel omits the standard output of tests (e.g. `fmt.Println("Hello")`). Use flag `--test_output=all` to see the full output of your tests:

```
$ bazel test //perf/... --test_output=all
```
Note that Bazel runs tests in parallel, so it will only print out their output once all tests have finished running.
Flag `--test_output=errors` can be used to only print out the output of failing tests.

To see the tests' output in real time, use flag `--test_output=streamed`. Note, however, that this forces serial execution of tests, so it can be significantly slower.
Bazel caches successful test runs, and reports `(cached) PASSED` on subsequent `bazel test` invocations, e.g.:

```
$ bazel test //go/util:util_test
...
//go/util:util_test PASSED in 0.1s

$ bazel test //go/util:util_test
...
//go/util:util_test (cached) PASSED in 0.1s
```
To disable caching, use flag `--nocache_test_results`, e.g.:

```
$ bazel test //go/util:util_test
...
//go/util:util_test (cached) PASSED in 0.1s

$ bazel test //go/util:util_test --nocache_test_results
...
//go/util:util_test PASSED in 0.1s
```
Flaky tests can cause the CI to fail (see Bazel CI tasks).
Tests can be marked as flaky via the `flaky` argument, e.g.:

```
go_test(
    name = "some_flaky_test",
    srcs = ["some_flaky_test.go"],
    flaky = True,
    ...
)
```
Bazel will execute tests marked as flaky up to three times, and report test failure only if all three attempts fail.
Using `flaky` is generally discouraged, but it can be useful until the root cause of the flake is diagnosed (see Debugging flaky tests) and fixed. As a last resort, consider marking your flaky test as manual (see Manual tests).

More on the `flaky` attribute here.
While `--nocache_test_results` can be useful for debugging flaky tests, flag `--runs_per_test` was specifically added for this purpose. Example:

```
$ bazel test //path/to:some_flaky_test --runs_per_test=10
...
//path/to:some_flaky_test FAILED in 4 out of 10 in 0.1s
```
Manual tests are excluded from Bazel wildcards such as `bazel test //...`.

To mark a test target as manual, use the `manual` tag, e.g.:

```
nodejs_test(
    name = "some_manual_nodejs_test",
    src = "some_manual_nodejs_test.ts",
    tags = ["manual"],
    ...
)
```
Note that the instructions to mark `go_test` targets as manual are different. See Manual Go tests for more.
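A manual test still runs when its label is spelled out explicitly, e.g. for the hypothetical target above:

```
$ bazel test //path/to:some_manual_nodejs_test
```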
Note that manual tests are excluded from the Bazel CI tasks.
More on manual tests and Bazel tags here.
By default, Bazel will report `TIMEOUT` if a test does not finish within 5 minutes. This can be overridden via the `--test_timeout` flag, which takes a number of seconds, e.g.:

```
$ bazel test //go/util:slow_test --test_timeout=20
```
This can also be overridden via the `timeout` and `size` arguments of the test target (valid `timeout` values are "short", "moderate", "long", and "eternal"), e.g.:

```
go_test(
    name = "my_test",
    srcs = ["my_test.go"],
    timeout = "long",
    ...
)
```
More on how to handle timeouts and slow tests here.
Use flag `--test_arg` to pass flags to the binary produced by a test target.

For example, our `go_test` targets define custom command-line flags such as `flag.Bool("logtostderr", ...)`. This flag can be enabled with `--test_arg`, e.g.:

```
$ bazel test //go/util:util_test --test_arg=-logtostderr
```
As an alternative, command-line flags can be specified via the `args` argument of the Bazel test target, as follows:

```
go_test(
    name = "my_test",
    srcs = ["my_test.go"],
    args = ["-logtostderr"],
    ...
)
```
More on test arguments here.
By default, Bazel isolates test targets from the host system's environment variables, and populates the environment with a number of Bazel-specific variables that some `*_test` rules depend on (documented here).

Use flag `--test_env` to specify any additional environment variables, e.g.:

```
$ bazel test //path/to:my_cockroachdb_test --test_env=COCKROACHDB_EMULATOR_STORE_DIR=/tmp/crdb
```
To pipe through an environment variable from the host system:

```
$ export COCKROACHDB_EMULATOR_STORE_DIR=/tmp/crdb
$ bazel test //path/to:my_cockroachdb_test --test_env=COCKROACHDB_EMULATOR_STORE_DIR
```
More on the `--test_env` flag here.
By default, Bazel sandboxes every build step. Effectively, it runs the compile command with only the given source files for a particular rule and the specified dependencies visible, to force all dependencies to be properly listed.

For steps that have a lot of files, this can have a bit of I/O overhead. To speed this up, one can use tmpfs (e.g. a RAM disk) for the sandbox by adding `--sandbox_base=/dev/shm` to the build command. When compiling Skia, for example, this reduces compile time by 2-3x.
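For example:

```
$ bazel build //... --sandbox_base=/dev/shm
```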
Sandboxing can make diagnosing failing rules a bit harder. To see what command got run and to be able to view the sandbox after failure, add `--subcommands --sandbox_debug` to the command.
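For example, to debug a failing target:

```
$ bazel build //go/util:util --subcommands --sandbox_debug
```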
Bazel builds fast and correct by making use of cached outputs, reusing them whenever the inputs are identical. This can make it hard to debug a slow or non-deterministic build.

To get a detailed log of all the actions your build is taking, add the following flags (e.g. in your `.bazelrc` file):

```
# Ensure there are no disk cache hits.
build --disk_cache=/path/to/debugging/cache

# IMPORTANT: generate execution logs.
build --experimental_execution_log_file=yourLogFile.log
```

Then run `bazel clean --expunge`: we want all actions to get executed, so nothing is cached.
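Putting it together, a debugging session might look like the following (the log file name matches the flag above):

```
# Start from a fully clean state so every action executes.
$ bazel clean --expunge
$ bazel build //go/util:util

# Inspect the execution log for unexpected re-executions.
$ less yourLogFile.log
```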