databricks-cli/acceptance

Commit: Add acceptance tests (#2081) by Denis Bilenko (185bbd28e4), 2025-01-08
## Changes
- A new kind of test is added: acceptance tests. See acceptance/README.md
  for an explanation.
- A few tests are converted to acceptance tests by moving databricks.yml
  to acceptance/ and adding corresponding script files.

As these tests run against the compiled binary and can capture the full
output of a command, they are useful for supporting major changes such
as refactoring internal logging / diagnostics or complex variable
interpolation.

These are currently run as part of 'make test', but the intention is to
run them as part of the integration tests as well.

### Benefits

- The full binary is tested, exactly as users get it.
  - We're not testing a custom set of mutators, as many existing tests do.
- Nothing is mocked; the real SDK is used (although the HTTP endpoint is
  not a real Databricks environment).
- Easy to maintain: output can be updated automatically.
- The external environment, such as env vars, CLI args, and the
  .databrickscfg location, is easy to set up.

### Gaps

The tests currently share the test server, and there is a single global
place to define handlers. We should have a way for tests to override or
add new handlers.

## Tests
I manually checked that the output of the new acceptance tests matches
the previous assertions.
Directory contents (all added in #2081):

  build
  bundle
  help
  README.md
  acceptance_test.go
  script.cleanup
  script.prepare
  server_test.go

README.md

Acceptance tests are blackbox tests that are run against the compiled binary.

Currently these tests run against a "fake" HTTP server pretending to be the Databricks API. However, they will be extended to run against a real environment as regular integration tests.

To author a test:

  • Add a new directory under acceptance. Any level of nesting is supported.
  • Add databricks.yml there.
  • Add a script file with the commands to run, e.g. $CLI bundle validate. The test case is recognized by the presence of script (see the minimal example below).
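
For example, a minimal test could consist of a databricks.yml plus a one-line script, assuming $CLI is set by the test runner to point at the compiled binary, as the examples above suggest. The directory name below is hypothetical:

```
# acceptance/bundle/my_validate_test/script  (hypothetical path)
# databricks.yml sits next to this file in the same directory.
$CLI bundle validate
```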

The test runner runs script, captures its output, and compares it with the output.txt file in the same directory.

To write output.txt for the first time, or to overwrite it with the current output, set the TESTS_OUTPUT=OVERWRITE env var.
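
For instance, assuming the environment variable is simply exported when invoking the suite via 'make test' (which the PR description says runs these tests), regenerating the expected output could look like:

```
# Record or refresh output.txt for the acceptance tests.
TESTS_OUTPUT=OVERWRITE make test
```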

The scripts are run with bash -e, so any errors are propagated. Errors are recorded in output.txt by appending an Exit code: N line at the end.
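
As an illustration, the tail of output.txt for a failing command might look like this; the command output itself is elided here, and only the final line is the part appended by the runner:

```
... captured command output ...
Exit code: 1
```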

For more complex tests one can also use:

  • errcode helper: if the command fails with a non-zero code, it appends Exit code: N to the output but returns success to the caller (bash), allowing the script to continue.
  • trace helper: prints the arguments before executing the command.
  • custom output files: redirect output to a custom file (its name must start with out), e.g. $CLI bundle validate > out.txt 2> out.error.txt. A script combining these helpers is sketched after this list.
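
Putting these together, a more involved script might look like the sketch below. The specific bundle commands and output file names are illustrative, and whether a command actually fails depends on the test's fixtures and server handlers:

```
# Print the command before running it.
trace $CLI bundle validate

# This command is expected to fail; errcode appends "Exit code: N" to the
# output and returns success so the script keeps going.
errcode $CLI bundle deploy

# Redirect this step's output to dedicated files (names must start with "out").
$CLI bundle validate > out.validate.txt 2> out.validate.error.txt
```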