Acceptance tests are blackbox tests that are run against the compiled binary.

Currently these tests are run against a "fake" HTTP server pretending to be the Databricks API. However, they will be extended to run against a real environment as regular integration tests.
To author a test:

- Add a new directory under `acceptance`. Any level of nesting is supported.
- Add `databricks.yml` there.
- Add `script` with commands to run, e.g. `$CLI bundle validate`. The test case is recognized by the presence of `script`; a minimal sketch is shown below.
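For illustration, a `script` can be as small as a single command. The directory name below is hypothetical and only shows where the file would live:

```bash
# acceptance/bundle/my_validate_test/script (hypothetical path)
# $CLI points at the compiled binary under test.
$CLI bundle validate
```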
The test runner will run `script`, capture the output, and compare it with the `output.txt` file in the same directory.
In order to write `output.txt` for the first time, or to overwrite it with the current output, pass the `-update` flag to `go test`.
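For example, assuming the tests live in an `acceptance` package at the repository root (an assumption about the layout), the expected outputs can be regenerated with:

```bash
# Rewrite output.txt files from the current run; the ./acceptance path is an assumption.
go test ./acceptance -update
```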
The scripts are run with `bash -e`, so any errors are propagated. Errors are captured in `output.txt` by appending an `Exit code: N` line at the end.
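As a sketch, a hypothetical script illustrating this behavior:

```bash
# Under bash -e the script stops at the first failing command, and the runner
# appends an "Exit code: N" line to output.txt.
$CLI bundle validate            # if this fails, the line below never runs
echo "only reached on success"
```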
For more complex tests one can also use:

- `errcode` helper: if the command fails with a non-zero code, it appends `Exit code: N` to the output but returns success to the caller (bash), allowing the script to continue.
- `trace` helper: prints the arguments before executing the command.
- custom output files: redirect output to a custom file (its name must start with `out`), e.g. `$CLI bundle validate > out.txt 2> out.error.txt`.
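A hypothetical script combining these helpers (the specific commands are only illustrative):

```bash
# Print the command line before running it.
trace $CLI bundle validate

# Append "Exit code: N" to the output on failure, but keep the script going.
errcode $CLI bundle validate

# Redirect into custom output files; their names must start with "out".
$CLI bundle validate > out.validate.txt 2> out.validate.err.txt
```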
See `selftest` for a toy test.