mirror of https://github.com/databricks/cli.git
## Changes

This PR fails the acceptance test when an unknown endpoint (i.e. one that is not stubbed) is used. We want to ensure that all API endpoints used in an acceptance test are stubbed and do not otherwise silently fail with a 404. On failure, the logs include a configuration that developers can copy-paste into `test.toml` to stub the missing API endpoint. It looks something like this:

```
[[Server]]
Pattern = "<method> <path>"
Response.Body = '''
<response body here>
'''
Response.StatusCode = <response status-code here>
```

## Tests

Manually. `output.txt` when an endpoint is not found:

```
>>> [CLI] jobs create --json {"name":"abc"}
Error: No stub found for pattern: POST /api/2.1/jobs/create
```

How this renders in the test logs:

```
--- FAIL: TestAccept/workspace/jobs/create (0.03s)
    server.go:46: ----------------------------------------
        No stub found for pattern: POST /api/2.1/jobs/create

        To stub a response for this request, you can add the following to test.toml:

        [[Server]]
        Pattern = "POST /api/2.1/jobs/create"
        Response.Body = '''
        <response body here>
        '''
        Response.StatusCode = <response status-code here>
        ----------------------------------------
```

Manually checked that the debug mode still works.
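To make the template above concrete, here is what a filled-in `test.toml` stub for the request shown in the logs might look like. The response body and status code below are hypothetical, chosen for illustration rather than taken from the real Jobs API:

```
[[Server]]
Pattern = "POST /api/2.1/jobs/create"
Response.Body = '''
{"job_id": 1111}
'''
Response.StatusCode = 200
```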
```
acceptance/
├── auth/bundle_and_profile/
├── bin/
├── bundle/
├── help/
├── selftest/
├── terraform/
├── workspace/jobs/
├── .gitignore
├── README.md
├── acceptance_test.go
├── cmd_server_test.go
├── config_test.go
├── install_terraform.py
├── script.cleanup
├── script.prepare
├── server_test.go
└── test.toml
```
## README.md
Acceptance tests are blackbox tests that are run against a compiled binary.

Currently these tests are run against a "fake" HTTP server pretending to be the Databricks API. However, they will be extended to run against a real environment as regular integration tests.
To author a test:

- Add a new directory under `acceptance`. Any level of nesting is supported.
- Add a `databricks.yml` there.
- Add a `script` with commands to run, e.g. `$CLI bundle validate`. The test case is recognized by the presence of `script` (see the sketch after this list).
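As a minimal sketch, a hypothetical test case at `acceptance/my_test` would contain a `databricks.yml` alongside a one-line `script` (the directory name and script contents here are made up for illustration):

```
# acceptance/my_test/script
$CLI bundle validate
```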
The test runner will run the script, capture its output, and compare it with the `output.txt` file in the same directory.
To write `output.txt` for the first time, or to overwrite it with the current output, pass the `-update` flag to `go test`.
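For example, assuming the tests live in the `acceptance` package of this repository (the exact path and test filter may differ in your checkout):

```
go test ./acceptance -run TestAccept -update
```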
The scripts are run with `bash -e`, so any errors are propagated. They are captured in `output.txt` by appending an `Exit code: N` line at the end.
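For instance, if a hypothetical script hit an endpoint that is not stubbed, the tail of the recorded `output.txt` could look roughly like:

```
>>> [CLI] jobs create --json {"name":"abc"}
Error: No stub found for pattern: POST /api/2.1/jobs/create

Exit code: 1
```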
For more complex tests one can also use the following (a combined example follows this list):

- `errcode` helper: if the command fails with a non-zero code, it appends `Exit code: N` to the output but returns success to the caller (bash), allowing the script to continue.
- `trace` helper: prints the arguments before executing the command.
- Custom output files: redirect output to a custom file (its name must start with `out`), e.g. `$CLI bundle validate > out.txt 2> out.error.txt`.
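Putting the helpers together, a hypothetical `script` might read:

```
# Print the command before running it.
trace $CLI bundle validate

# Record "Exit code: N" on failure but let the script continue.
errcode $CLI bundle validate

# Capture output in custom files; their names must start with "out".
$CLI bundle validate > out.txt 2> out.error.txt
```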
See `selftest` for a toy test.