## Changes
This workflow only worked when triggered from the tag being published. This
means it is not possible to release a version if the
workflow configuration at that tag is broken (as is the case for
v0.238.0 because of #2105).
This change adds a "tag" input that can be set when manually triggering
the workflow.
## Tests
* Successful run with this change:
https://github.com/databricks/cli/actions/runs/12689281843
* Pull request that the run created:
https://github.com/microsoft/winget-pkgs/pull/209220
## Changes
This action uses a token to access the release artifacts and, as such,
needs to execute on the runner that's on the allowlist.
Related PRs:
* #2098
* #2077
Since merge happens first, before variable resolution, the two jobs are
seen as different.
I also updated override/job_cluster/script to include more of the
output.
It's easier to remember and type, and it is also validated and shown in the help output:
```
% go test ./acceptance -updat 2>&1 | grep updat
flag provided but not defined: -updat
-update
```
## Changes
This used to work because the permission bits for built-in templates
were hardcoded to 0644 for files and 0755 for directories.
As of #1912 (and the PRs it depends on), built-in templates are no
longer pre-materialized to a temporary directory; they are read directly from
the embedded filesystem. This built-in filesystem returns 0444 as the
permission bits for the files it contains. These bits are carried over
to the destination filesystem.
This change updates template materialization to always set the owner's
write bit. It doesn't really make sense to write read-only files and
expect users to work with these files in a VCS (note: Git only stores
the executable bit).
The regression shipped as part of v0.235.0 and will be fixed as of
v0.238.0.
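A minimal sketch of the idea, assuming a file whose permission bits were read from the embedded filesystem (the function name below is illustrative, not the actual template code):
```
package template

import (
	"io/fs"
	"os"
)

// writeMaterialized copies a template file to the destination filesystem,
// forcing the owner's write bit so users can edit the result even though the
// embedded filesystem reports 0444 for the files it contains.
func writeMaterialized(destPath string, content []byte, perm fs.FileMode) error {
	perm |= 0o200 // always set the owner's write bit
	return os.WriteFile(destPath, content, perm)
}
```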
## Tests
Unit tests.
## Changes
- A new kind of test is added: acceptance tests. See acceptance/README.md
for an explanation.
- A few tests are converted to acceptance tests by moving databricks.yml
to acceptance/ and adding corresponding script files.
As these tests run against the compiled binary and can capture the full output
of the command, they can be useful to support major changes such as
refactoring internal logging / diagnostics or complex variable
interpolation.
These are currently run as part of 'make test' but the intention is to
run them as part of integration tests as well.
### Benefits
- Full binary is tested, exactly as users get it.
- We're not testing a custom set of mutators like many existing tests do.
- Not mocking anything; the real SDK is used (although the HTTP endpoint is
not a real Databricks env).
- Easy to maintain: output can be updated automatically.
- Can easily set up external env, such as env vars, CLI args,
.databrickscfg location etc.
### Gaps
The tests currently share the test server and there is a global place to
define handlers. We should have a way for tests to override or add new
handlers.
## Tests
I manually checked that the output of the new acceptance tests matches the
previous asserts.
## Changes
- Remove TestMain from integration tests and related checks.
- This fixes "go test" caching for integration tests.
The test_main.go files were added in
https://github.com/databricks/cli/pull/2009 to make sure integration
tests are not run as part of `go test ./...`. We recommend running `make
test` to run tests, which includes the packages to test (and excludes
integration).
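For context, a hypothetical sketch of the kind of gate such a test_main.go might have contained; the real entry point was internal.Main and may have looked different:
```
package bundle_test

import (
	"fmt"
	"os"
	"testing"
)

// TestMain gates the whole package on CLOUD_ENV, so that a plain
// `go test ./...` does not run integration tests.
func TestMain(m *testing.M) {
	if os.Getenv("CLOUD_ENV") == "" {
		fmt.Println("CLOUD_ENV not set, skipping integration tests")
		return
	}
	os.Exit(m.Run())
}
```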
## Tests
To test that caching works I ran a test twice:
```
+ CLOUD_ENV=aws
+ go test --timeout 3h -v -run TestDefaultPython/3.9 ./integration/bundle/
…
PASS
ok github.com/databricks/cli/integration/bundle (cached)
```
## Changes
This reverts commit 31552852ff.
These workflows were disabled in #2085.
They should work again now that we're using self-hosted runners (see
#2077).
## Tests
(inline)
## Changes
Migrate workflows to Databricks-hosted GitHub Actions runners.
The GitHub-hosted runners can no longer be used because of security
hardening.
## Changes
The comment block appears on all PRs, even if the integration tests are
automatically triggered. This is quite noisy. This change limits those
comments to PRs from forks.
## Tests
Have to try by merging...
## Changes
This PR:
1. Incrementally improves the error messages shown to the user when the
volume they are referring to in `workspace.artifact_path` does not
exist.
2. Performs this validation in both `bundle validate` and `bundle
deploy`, whereas before it was only performed on deployment.
3. Runs "fast" validations on `bundle deploy`, which were earlier
only run on `bundle validate`.
## Tests
Unit tests and manually. Also, existing integration tests provide
coverage (`TestUploadArtifactToVolumeNotYetDeployed`,
`TestUploadArtifactFileToVolumeThatDoesNotExist`).
Examples:
```
.venv➜ bundle-playground git:(master) ✗ cli bundle validate
Error: cannot access volume capital.whatever.my_volume: User does not have READ VOLUME on Volume 'capital.whatever.my_volume'.
at workspace.artifact_path
in databricks.yml:7:18
```
and
```
.venv➜ bundle-playground git:(master) ✗ cli bundle validate
Error: volume capital.whatever.foobar does not exist
at workspace.artifact_path
resources.volumes.foo
in databricks.yml:7:18
databricks.yml:12:7
You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.
```
## Changes
https://github.com/databricks/mlops-stacks/pull/187 broke mlops-stacks
deployments for non-UC projects. Snoozing the test until upstream is
fixed.
## Tests
The test is skipped on my local machine. The CI run will verify that it's also
skipped on GitHub Actions runners.
By default it stops after 3 issues of a given type, which gives a false
impression and is also unhelpful if you are fixing the issues with aider.
1000 is almost unlimited, but not actually unlimited in case there is a bug
in a linter.
## Changes
This PR adds back debugging functionality that was lost during the migration
to `internal.Main` as the entry point for integration tests.
The PR that caused the regression:
https://github.com/databricks/cli/pull/2009, specifically the addition
of internal.Main as the entrypoint for all integration tests.
## Tests
Manually, by trying to debug a test.
This is useful on CI and locally for debugging, and for being able to
copy-paste the command to tweak the options.
Removed redundant and imprecise messages like "✓ Running tests ...".
On the main branch, "make test" takes about 33s.
On this branch, "make test" takes about 2.7s.
(All measurements are with a hot cache.)
What's done (from highest impact to lowest):
- Remove the -coverprofile= option: it was disabling "go test"'s
built-in cache and also took extra time to calculate the coverage
(extra 21s).
- Exclude the ./integration/ folder: there are no unit tests there, but
having it included adds significant time. "go test"'s caching also does
not work there for me, due to the presence of TestMain() (extra 7.2s).
- Remove the dependency on "make lint": nice to have, but it's slow to re-check
the whole repo and this should already be done by the IDE (extra 2.5s).
- Remove the dependency on "make vendor": rarely needed; on CI it is
already executed separately (extra 1.1s).
The coverage option is still available under "make cover". Use "make
showcover" to show it.
I've also removed the separate "make testonly". If you only want tests, run
"make test". If you want lint+test, run "make lint test", etc.
I've also modified the test command, removing the unnecessary -short, -v,
and --raw-command options.
## Changes
- Enable new linter: testifylint.
- Apply fixes with --fix.
- Fix remaining issues (mostly with aider).
There were 2 cases where --fix did the wrong thing - this seems to be a
bug in the linter: https://github.com/Antonboom/testifylint/issues/210
Nonetheless, I kept that check enabled; it seems useful, the offending cases
just need to be fixed manually after the autofix.
## Tests
Existing tests
## Changes
I noticed a diff in the schema in #2052.
This check should be performed automatically.
## Tests
This PR includes a commit that changes the schema to check that the
workflow actually fails.
## Changes
- Add t.Helper() in testcli-related helpers; this ensures that output is
attributed correctly to the test case and not to the helper (see the sketch below).
- Modify testcli.Run() to run the process in the foreground. This is needed for
t.Helper to work.
- Extend a few assertions with messages to help attribute them to the proper
helper where needed.
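An illustrative sketch of the pattern (the helper name and command below are made up, not the actual testcli API): calling t.Helper() makes the testing package attribute log and failure lines to the calling test rather than to the helper itself.
```
package testcli_test

import (
	"os/exec"
	"testing"

	"github.com/stretchr/testify/require"
)

// requireCommandSucceeds runs a command and fails the calling test if it
// errors. Because of t.Helper(), the failure is reported at the caller's
// file and line, not inside this helper.
func requireCommandSucceeds(t *testing.T, name string, args ...string) string {
	t.Helper()
	out, err := exec.Command(name, args...).CombinedOutput()
	require.NoError(t, err, "command output:\n%s", out)
	return string(out)
}
```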
## Tests
Manually reviewed test output.
Before:
```
+ go test --timeout 3h -v -run TestDefaultPython/3.9 ./integration/bundle/
=== RUN TestDefaultPython
=== RUN TestDefaultPython/3.9
workspace.go:26: aws
golden.go:14: run args: [bundle, init, default-python, --config-file, config.json]
runner.go:206: [databricks stderr]:
runner.go:206: [databricks stderr]: Welcome to the default Python template for Databricks Asset Bundles!
...
testdiff.go:23:
Error Trace: /Users/denis.bilenko/work/cli/libs/testdiff/testdiff.go:23
/Users/denis.bilenko/work/cli/libs/testdiff/golden.go:43
/Users/denis.bilenko/work/cli/internal/testcli/golden.go:23
/Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:92
/Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:45
...
```
After:
```
+ go test --timeout 3h -v -run TestDefaultPython/3.9 ./integration/bundle/
=== RUN TestDefaultPython
=== RUN TestDefaultPython/3.9
init_default_python_test.go:51: CLOUD_ENV=aws
init_default_python_test.go:92: args: bundle, init, default-python, --config-file, config.json
init_default_python_test.go:92: stderr:
init_default_python_test.go:92: stderr: Welcome to the default Python template for Databricks Asset Bundles!
...
init_default_python_test.go:92:
Error Trace: /Users/denis.bilenko/work/cli/libs/testdiff/testdiff.go:24
/Users/denis.bilenko/work/cli/libs/testdiff/golden.go:46
/Users/denis.bilenko/work/cli/internal/testcli/golden.go:23
/Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:92
/Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:45
...
```
## Changes
Fix cases where accumulated diagnostics are lost instead of being
propagated further. In some cases propagation is not possible; a comment
is added in those places.
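An illustrative, self-contained sketch of the bug class (the types here are stand-ins, not the CLI's actual diagnostics package): later steps must extend, not overwrite, what earlier steps collected.
```
package diagsketch

// Diagnostic and Diagnostics are stand-ins for the CLI's diagnostic types.
type Diagnostic struct{ Summary string }

type Diagnostics []Diagnostic

// Extend appends another set of diagnostics to the accumulated ones.
func (ds Diagnostics) Extend(more Diagnostics) Diagnostics {
	return append(ds, more...)
}

// runSteps collects diagnostics from every step. Writing `diags = step()`
// instead of extending would silently drop everything accumulated so far,
// which is the class of bug fixed by this change.
func runSteps(steps []func() Diagnostics) Diagnostics {
	var diags Diagnostics
	for _, step := range steps {
		diags = diags.Extend(step())
	}
	return diags
}
```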
## Tests
Existing tests
## Changes
- Detach "golden files" assertions from testcli runner. Now any output
can be compared, no matter how it is obtained.
- Move those assertion to libs/testdiff package.
This allows using "golden files" in non-integration tests.
## Tests
Existing tests
The first stage runs goimports and the formatter; the second runs the full suite.
This ensures that imports and formatting are fixed even in the presence of
other issues. Otherwise golangci-lint refuses to fix anything:
https://github.com/golangci/golangci-lint/issues/5257
This is helpful when running aider with a config like this - aider will use
it to autofix what it can after every update:
```
% cat .aider.conf.yml
lint-cmd:
- "go: ./lint.sh"
```
## Changes
Relax the checks of `lib/template/builtin_test` so they don't fail for a
local development copy that has uncommitted draft templates. Right now
these tests fail because I have some git-ignored uncommitted templates
in my local dev copy.
## Changes
1. Removes default YAML fields during schema generation that were caused by [this
PR](https://github.com/databricks/cli/pull/2032) (the current yaml package
can't read `json` annotations in struct fields).
2. Addresses missing annotations for fields from the OpenAPI spec that are
named differently in the Go SDK.
3. Adds filtering for annotations.yaml to include only CLI package
fields.
4. Implements alphabetical sorting of YAML keys to avoid unnecessary diffs
in PRs.
## Tests
Manually tested
## Changes
Add a new type of test helper that runs the command and compares its full
output (the golden files approach); a sketch of the idea is shown below.
In the case of JSON, there is also an option to ignore certain paths.
Add tests for different versions of Python that go through bundle init
default-python / validate / deploy / summary.
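A minimal sketch of the golden-files idea, with assumed names (the UPDATE_GOLDEN switch and the helper below are illustrative; the actual helpers in the CLI may differ):
```
package goldensketch

import (
	"os"
	"testing"
)

// assertGolden compares actual output against a checked-in golden file, and
// rewrites the file instead when UPDATE_GOLDEN is set (assumed switch name).
func assertGolden(t *testing.T, goldenPath, actual string) {
	t.Helper()
	if os.Getenv("UPDATE_GOLDEN") != "" {
		if err := os.WriteFile(goldenPath, []byte(actual), 0o644); err != nil {
			t.Fatal(err)
		}
		return
	}
	expected, err := os.ReadFile(goldenPath)
	if err != nil {
		t.Fatal(err)
	}
	if string(expected) != actual {
		t.Errorf("output does not match %s:\ngot:\n%s", goldenPath, actual)
	}
}
```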
## Tests
New integration tests.
## Changes
Simplify the logic for selecting which Python executable to run when calculating
the default whl build command: "python" on Windows and "python3" everywhere else.
Python installers from python.org do not install python3.exe. In a
virtualenv there is no python3.exe.
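A minimal sketch of the selection logic described above (the function name is illustrative; the actual helper in the CLI may differ):
```
package pythonsketch

import "runtime"

// defaultPythonExec returns the interpreter used for the default whl build
// command: "python" on Windows (python.org installers and virtualenvs do not
// ship python3.exe) and "python3" everywhere else.
func defaultPythonExec() string {
	if runtime.GOOS == "windows" {
		return "python"
	}
	return "python3"
}
```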
## Tests
Added new unit tests that create a real venv with uv and simulate activation
by prepending venv/bin to PATH.