databricks-cli/acceptance
Denis Bilenko 3a32c63919
Add -inprocess mode for acceptance tests (#2184)
## Changes
- If you pass the -inprocess flag to acceptance tests, they run in the
same process as the test itself. This enables debugging.
- If you set the singleTest variable at the top of acceptance_test.go,
only that test runs, in inprocess mode. This is intended for debugging
in VSCode.
- (minor) Converted KeepTmp to the -keeptmp flag (from the KEEP_TMP env
var) for consistency with other flags.

## Tests
- Verified that acceptance tests pass in -inprocess mode: `go test
-inprocess < /dev/null | cat`
- Verified that debugging in VSCode works: set a test name in the
singleTest variable, set breakpoints inside the CLI, and click "debug test"
in VSCode.
2025-01-21 21:21:12 +00:00
| File | Last change | Date |
| --- | --- | --- |
| bin | Add -inprocess mode for acceptance tests (#2184) | 2025-01-21 21:21:12 +00:00 |
| build | Add acceptance tests (#2081) | 2025-01-08 12:41:08 +00:00 |
| bundle | Add acceptance for test for sync.paths equal to two dots (#2196) | 2025-01-21 11:50:28 +00:00 |
| help | Fix duplicate "apps" entry in help output (#2191) | 2025-01-20 16:02:29 +00:00 |
| README.md | Use -update instead of TESTS_OUTPUT=OVERWRITE (#2097) | 2025-01-09 09:00:05 +00:00 |
| acceptance_test.go | Add -inprocess mode for acceptance tests (#2184) | 2025-01-21 21:21:12 +00:00 |
| cmd_server_test.go | Add -inprocess mode for acceptance tests (#2184) | 2025-01-21 21:21:12 +00:00 |
| script.cleanup | Add acceptance tests (#2081) | 2025-01-08 12:41:08 +00:00 |
| script.prepare | Add test about using variable in bundle.git.branch (#2118) | 2025-01-15 10:34:51 +01:00 |
| server_test.go | Add acceptance tests for builtin templates (#2135) | 2025-01-14 18:23:34 +00:00 |

README.md

Acceptance tests are blackbox tests that are run against the compiled binary.

Currently these tests run against a "fake" HTTP server pretending to be the Databricks API. However, they will be extended to run against a real environment as regular integration tests.

To author a test:

  • Add a new directory under acceptance. Any level of nesting is supported.
  • Add a databricks.yml there.
  • Add a script file with the commands to run, e.g. $CLI bundle validate. A test case is recognized by the presence of script.
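Putting those pieces together, a test directory might look like this (the directory name and file contents here are illustrative, not taken from the repo):

```
acceptance/bundle/my_test/
├── databricks.yml   # minimal bundle config the test operates on
├── script           # commands to run, e.g.: $CLI bundle validate
└── output.txt       # expected output, written by the runner with -update
```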

The test runner runs script, captures its output, and compares it with the output.txt file in the same directory.

To write output.txt for the first time, or to overwrite it with the current output, pass the -update flag to go test.
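For example, run from the acceptance directory (flags per this README and the change description above):

```shell
go test -update      # (re)write output.txt files from the current output
go test -inprocess   # run tests in the test process itself, for debugging
go test -keeptmp     # keep temporary test directories for inspection
```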

The scripts are run with bash -e, so any error is propagated. Errors are captured in output.txt by appending an Exit code: N line at the end.
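A standalone sketch of this bash -e behavior, using plain bash (no CLI or framework involved): the first failing command aborts the script, and its exit code can then be recorded.

```shell
# Write a small script whose middle command fails.
cat > /tmp/demo_script <<'EOF'
echo "step 1"
false           # fails with exit code 1; -e stops the script here
echo "step 2"   # never reached
EOF

# Run it with -e and capture the exit code, like the runner does.
rc=0
bash -e /tmp/demo_script || rc=$?
echo "Exit code: $rc"
```

This prints "step 1" followed by "Exit code: 1"; "step 2" is never reached.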

For more complex tests, one can also use:

  • the errcode helper: if the command fails with a non-zero code, it appends Exit code: N to the output but returns success to the caller (bash), allowing the script to continue.
  • the trace helper: prints the arguments before executing the command.
  • custom output files: redirect output to a custom file (its name must start with out), e.g. $CLI bundle validate > out.txt 2> out.error.txt.
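The real errcode and trace helpers are defined by the test framework (see script.prepare); as a rough, hypothetical sketch of the behavior described above:

```shell
# Illustrative re-implementations only; the framework's actual
# definitions may differ.

errcode() {
  # Run the command; on failure, append "Exit code: N" to the output
  # but return success so that `bash -e` does not abort the script.
  local rc=0
  "$@" || rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "Exit code: $rc"
  fi
  return 0
}

trace() {
  # Print the command line before executing it.
  echo ">>> $*"
  "$@"
}

trace errcode false   # prints the command line, then "Exit code: 1"
```

Here `false` stands in for a failing CLI invocation: trace echoes the command line, errcode swallows the failure and records "Exit code: 1", and the script keeps running.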