## Changes
Add a new type of test helper that runs the command and compares the full
output (golden-files approach).
For JSON output, there is also an option to ignore certain paths.
Add a test that goes through `bundle init default-python` / `validate` /
`deploy` / `summary` for different versions of Python.
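For illustration, a minimal sketch of what a golden-file comparison helper can look like (the name `AssertOutputMatchesGolden` and the `-update` flag are hypothetical, not the exact helpers added here):
```go
package testutil

import (
	"flag"
	"os"
	"testing"

	"github.com/stretchr/testify/require"
)

var update = flag.Bool("update", false, "write actual output to the golden files")

// AssertOutputMatchesGolden compares actual command output against a golden
// file, rewriting the golden file instead when -update is passed. For JSON
// output, the caller can normalize or strip volatile paths before comparing.
func AssertOutputMatchesGolden(t *testing.T, goldenPath, actual string) {
	if *update {
		require.NoError(t, os.WriteFile(goldenPath, []byte(actual), 0o644))
		return
	}
	expected, err := os.ReadFile(goldenPath)
	require.NoError(t, err)
	require.Equal(t, string(expected), actual)
}
```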
## Tests
New integration tests.
## Changes
There was only one helper for AWS and not the other clouds. Found this
when looking through double calls to `acc.WorkspaceTest()` (see
`TestPythonWheelTaskDeployAndRunOnInteractiveCluster`).
## Tests
n/a
## Changes
The `acc` package is exclusively used by integration tests, so it
belongs under `integration/internal`.
It's not the best name; we can rename it later.
## Tests
n/a
## Changes
Fixed downloading arm64 binaries
Go 1.23 changed the way built binaries are prefixed on arm64; more
details here: https://tip.golang.org/doc/go1.23#arm64
## Changes
The `Setenv` helper function configures an environment variable and
resets it to its original value when exiting the test scope. It is
incompatible with running tests in parallel because it modifies
process-wide state. The `libs/env` package defines functions to interact
with the environment, but records `Setenv` calls on a `context.Context`
instead of mutating process state. This enables us to override or
specialize the environment scoped to a context.
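A minimal sketch of the resulting pattern, assuming `env.Set` returns a derived context and `env.Get` reads overrides from it:
```go
package example_test

import (
	"context"
	"testing"

	"github.com/databricks/cli/libs/env"
)

func TestScopedEnvironment(t *testing.T) {
	t.Parallel() // safe: no process-wide state is modified

	// Instead of t.Setenv, derive a context that carries the override.
	ctx := env.Set(context.Background(), "DATABRICKS_HOST", "https://example.cloud.databricks.com")

	// Code under test reads the variable through the context-aware accessor.
	if got := env.Get(ctx, "DATABRICKS_HOST"); got == "" {
		t.Fatal("expected DATABRICKS_HOST to be set on the context")
	}
}
```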
Prerequisites for removing the `t.Setenv` calls:
* Make `cmdio.NewIO` accept a context and use it with `libs/env`
* Make all `internal/testcli` functions use a context
The rest of this change:
* Modifies integration tests to initialize a context to use if there
wasn't already one
* Updates `t.Setenv` calls to use `env.Set`
## Tests
n/a
## Changes
Objectives:
* A dedicated directory for integration tests
* It is not picked up by `go test ./...`
* No need for a `TestAcc` test name prefix
* More granular packages to improve test selection (future)
The tree structure generally mirrors the source code tree structure.
Requirements for new files in this directory:
* Every package **must** be named after its directory with `_test` appended
  * Requiring a different package name for integration tests avoids
    aliasing with the main package.
* Every integration test package **must** include a `main_test.go` file.
These requirements are enforced by a unit test in the `integration` package.
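A hedged sketch of what such a `main_test.go` might contain (the file path and the `CLOUD_ENV` gate are illustrative; the actual file in this repository may do more or less):
```go
// integration/bundle/main_test.go (illustrative path)
package bundle_test // directory name ("bundle") with _test appended

import (
	"os"
	"testing"
)

// TestMain gives every integration test package a single entry point for
// shared setup, e.g. skipping the package when no cloud environment is set.
func TestMain(m *testing.M) {
	if os.Getenv("CLOUD_ENV") == "" {
		// Not running against a real workspace; skip this package entirely.
		os.Exit(0)
	}
	os.Exit(m.Run())
}
```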
## Tests
Integration tests pass.
The total run time regresses by about 10%. A follow-up change that
increases the degree of test parallelism will address this.
## Changes
This change moves fixture helpers to `internal/acc/fixtures.go`. These
helpers create an ephemeral path or resource for the duration of a test.
Call sites are updated to use `acc.WorkspaceTest()` to construct a
workspace-focused test wrapper as needed.
This change also moves the `GetNodeTypeID()` function to `testutil`.
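For context, a sketch of the resulting usage pattern (the fixture helper name `TemporaryWorkspaceDir` and the exact signatures are illustrative):
```go
package example_test

import (
	"testing"

	"github.com/databricks/cli/internal/acc"
)

func TestUsesEphemeralWorkspaceDir(t *testing.T) {
	// Construct the workspace-focused test wrapper.
	ctx, wt := acc.WorkspaceTest(t)

	// A fixture helper creates an ephemeral path for the duration of the test
	// and registers cleanup on the test object.
	tmpDir := acc.TemporaryWorkspaceDir(wt)

	_ = ctx
	_ = tmpDir
	// ... exercise code under test against tmpDir ...
}
```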
## Tests
n/a
## Changes
The CLI test runner instantiates a new CLI "instance" through
`cmd.New()` and runs it with specified arguments. This is as close as we
get to running the real CLI **in-process**. This runner was located in
the `internal` package next to other helpers. This change moves it to
its own dedicated package.
Note: this runner transitively imports pretty much the entire
repository, which is why we intentionally keep it _separate_ from
`testutil`.
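A minimal sketch of the in-process pattern, assuming `cmd.New` accepts a context (the real runner also captures stderr, wires up cmdio, and more):
```go
package example_test

import (
	"bytes"
	"context"
	"testing"

	"github.com/databricks/cli/cmd"
)

// runInProcess builds the CLI command tree and executes it with the given
// arguments, capturing stdout. Sketch only; see the dedicated runner package
// for the real implementation.
func runInProcess(t *testing.T, ctx context.Context, args ...string) string {
	cli := cmd.New(ctx) // transitively imports most of the repository
	cli.SetArgs(args)

	var stdout bytes.Buffer
	cli.SetOut(&stdout)

	if err := cli.ExecuteContext(ctx); err != nil {
		t.Fatalf("command %v failed: %v", args, err)
	}
	return stdout.String()
}
```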
## Tests
n/a
## Changes
Using an interface instead of a concrete type means we can pass
`*testing.T` directly or any wrapper type that implements a superset of
this interface. It prepares for more broad use of `acc.WorkspaceT`,
which enhances the testing object with helper functions for using a
Databricks workspace.
This eliminates the need to dereference a `*testing.T` field on a
wrapper type.
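A minimal sketch of the idea (the method set and the helper shown are illustrative, not the exact interface in the code base):
```go
package testutil

// TestingT is the subset of *testing.T that the helpers need. Both *testing.T
// and wrapper types such as acc.WorkspaceT can satisfy it.
type TestingT interface {
	Helper()
	Cleanup(func())
	Log(args ...any)
	Errorf(format string, args ...any)
	Fatalf(format string, args ...any)
	Skipf(format string, args ...any)
}

// Helpers accept the interface instead of *testing.T, so call sites no longer
// need to dereference a *testing.T field on a wrapper type.
func LogStep(t TestingT, msg string) {
	t.Helper()
	t.Log(msg)
}
```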
## Tests
n/a
## Changes
This is one step (of many) toward moving the integration tests around.
This change consolidates the following functions:
* `ReadFile` / `WriteFile`
* `GetEnvOrSkipTest`
* `RandomName`
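A hedged sketch of what two of these consolidated helpers typically look like (simplified; the real implementations in `testutil` may differ):
```go
package testutil

import (
	"math/rand"
	"os"
	"testing"
)

// GetEnvOrSkipTest returns the value of an environment variable,
// skipping the test if it is not set.
func GetEnvOrSkipTest(t *testing.T, name string) string {
	value := os.Getenv(name)
	if value == "" {
		t.Skipf("environment variable %s is not set", name)
	}
	return value
}

// RandomName returns a random lowercase name, optionally prefixed,
// for naming ephemeral test resources.
func RandomName(prefix ...string) string {
	const letters = "abcdefghijklmnopqrstuvwxyz"
	b := make([]byte, 12)
	for i := range b {
		b[i] = letters[rand.Intn(len(letters))]
	}
	if len(prefix) > 0 {
		return prefix[0] + string(b)
	}
	return string(b)
}
```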
## Tests
n/a
## Changes
Enable gofumpt and goimports in golangci-lint and apply autofix.
This makes 'make fmt' redundant; it will be cleaned up in a follow-up diff.
## Tests
Existing tests.
## Changes
It was failing because, when schema.yml was removed, the `catalog: main`
option was left set in the base pipeline definition in databricks.yml,
which led to an incorrect config (empty target schema).
https://github.com/databricks/cli/pull/1413
Backend behavior changed and DLT pipelines stopped accepting empty
targets, leading to an error that was ignored before.
## Tests
```
--- PASS: TestAccBundleDeployUcSchema (41.75s)
PASS
coverage: 33.3% of statements in ./...
ok github.com/databricks/cli/internal/bundle 42.210s coverage: 33.3% of statements in ./...
```
## Changes
Remove two duplicate implementations of the same logic and switch
everywhere to `folders.FindDirWithLeaf`.
Add an `Abs()` call to `FindDirWithLeaf`; it cannot really work on relative
paths.
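A minimal sketch of what such a helper does (simplified; the actual `folders.FindDirWithLeaf` may differ in details such as error messages):
```go
package folders

import (
	"errors"
	"os"
	"path/filepath"
)

// FindDirWithLeaf returns the first parent of dir (including dir itself) that
// contains an entry named leaf, e.g. ".git" or "databricks.yml".
func FindDirWithLeaf(dir string, leaf string) (string, error) {
	// Work on an absolute path; walking parents of a relative path is ambiguous.
	dir, err := filepath.Abs(dir)
	if err != nil {
		return "", err
	}
	for {
		if _, err := os.Stat(filepath.Join(dir, leaf)); err == nil {
			return dir, nil
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return "", errors.New(leaf + " not found in any parent of the given path")
		}
		dir = parent
	}
}
```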
## Tests
Existing tests.
## Changes
I saw this test fail on rerun because the global wasn't set.
Fix by removing the global and using a different approach to acquire a
valid cluster ID.
## Tests
Integration tests.
## Changes
Fix all errcheck-found issues in tests and test helpers. Mostly this was
done by adding `require.NoError(t, err)`, and sometimes `panic()` where
the `t` object is not available.
The initial change was obtained with aider+claude, then manually reviewed
and cleaned up.
## Tests
Existing tests.
## Changes
This test changes the cwd using the `testutil.Chdir` function. This
causes flakiness with other integration tests, like
`TestAccWorkspaceFilesExtensionsNotebooksAreNotDeletedAsFiles`, which
rely on the cwd being configured correctly to read test fixtures.
The `t.Setenv` call in `testutil.Chdir` ensures that it is not run from
a test whose upstream is executing in parallel.
## Changes
I found a race where this error is nil and the subsequent assertion
panics because the error is nil. This change makes the test robust
against this by failing immediately if the error is different from the
one we expect.
## Tests
n/a
## Changes
The `any` alias for `interface{}` has been around since Go 1.18.
Now that we're using golangci-lint (#1953), we can lint on it.
Existing commits can be updated with:
```
gofmt -w -r 'interface{} -> any' .
```
## Tests
n/a
## Changes
Since there is no .git directory in the Workspace file system, we need to
make an API call to api/2.0/workspace/get-status?return_git_info=true to
fetch the root of the repo, the current branch, commit, and origin.
Added a new function FetchRepositoryInfo that either looks up and parses
.git or calls the remote API, depending on the environment.
Refactor Repository/View/FileSet to accept the repository root rather than
calculate it. This helps because:
- Repository is currently created in multiple places and finding the
repository root is becoming relatively expensive (an API call is needed).
- Repository/FileSet/View do not have access to the current Bundle, which
is where the WorkspaceClient is stored.
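For illustration only, a sketch of querying that endpoint directly with the standard library. The response field names and the workspace path below are assumptions for the sketch; the real FetchRepositoryInfo goes through the workspace client rather than raw HTTP:
```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// Illustrative response shape only; the actual field names may differ.
type gitInfo struct {
	Branch       string `json:"branch"`
	HeadCommitID string `json:"head_commit_id"`
	URL          string `json:"url"`
}

type getStatusResponse struct {
	GitInfo *gitInfo `json:"git_info"`
}

func main() {
	host := os.Getenv("DATABRICKS_HOST")
	token := os.Getenv("DATABRICKS_TOKEN")

	q := url.Values{}
	q.Set("path", "/Workspace/Users/someone@example.com/my-project") // illustrative path
	q.Set("return_git_info", "true")

	req, err := http.NewRequest("GET", host+"/api/2.0/workspace/get-status?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out getStatusResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out.GitInfo)
}
```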
## Tests
- Tested manually by running "bundle validate --json" inside web
terminal within Databricks env.
- Added integration tests for the new API.
---------
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
This PR adds support for UC volumes to DABs.
### Can I use a UC volume managed by DABs in `artifact_path`?
Yes, but we require the volume to exist before being referenced in
`artifact_path`. Otherwise you'll see an error that the volume does not
exist. For this case, this PR also adds a warning if we detect that the
UC volume is defined in the DAB itself, which informs the user to deploy
the UC volume in a separate deployment first before using it in
`artifact_path`.
We cannot create the UC volume and then upload the artifacts to it in
the same `bundle deploy` because `bundle deploy` always uploads the
artifacts to `artifact_path` before materializing any resources defined
in the bundle. Supporting this in a single deployment requires us to
migrate away from our dependency on the Databricks Terraform provider to
manage the CRUD lifecycle of DABs resources.
### Why do we not support `preset.name_prefix` for UC volumes?
UC volumes will not have a `dev_shreyas_goenka` prefix added in `mode:
development`. Configuring `presets.name_prefix` will be a no-op for UC
volumes. We have decided not to support prefixing for UC resources. This
is because:
1. UC provides its own namespace hierarchy that is independent of DABs.
2. Users can always manually use `${workspace.current_user.short_name}`
to configure the prefixes manually.
Customers often manually set up a UC hierarchy for dev and prod,
including a schema or catalog per developer. Thus, it's often
unnecessary for us to add prefixing in `mode: development` by default
for UC resources.
In retrospect, supporting prefixing for UC schemas and registered models
was a mistake and will be removed in a future release of DABs.
## Tests
Unit, integration test, and manually.
### Manual Testing cases:
1. UC volume does not exist:
```
➜ bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/my_volume that is configured in the artifact_path: Not Found
```
2. UC Volume does not exist, but is defined in the DAB
```
➜ bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/managed_by_dab that is configured in the artifact_path: Not Found
Warning: You might be using a UC volume in your artifact_path that is managed by this bundle but which has not been deployed yet. Please deploy the UC volume in a separate bundle deploy before using it in the artifact_path.
at resources.volumes.bar
in databricks.yml:24:7
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
The ML production team modified mlops-stack to use `mode: development`
for their development target here:
https://github.com/databricks/mlops-stacks/pull/174
This PR makes the integration test assertion agnostic of the prefix to
make it pass again.
## Tests
The test passes now
## Changes
Integration tests using these fixtures could have been flaky when run in
parallel using the same user's identity. They could also have piggybacked
on state from previous runs.
This PR adds a UUID to the root_path to force independent bundle
deployments for every test run.
I have checked that all bundles in `internal/bundle/bundles` have
`root_path` namespaced to a UUID.
## Tests
Self testing.
## Changes
Update filenames used by bundle generate to use '.resource-type.yml'
Similar to [Add sub-extension to resource files in built-in templates by
shreyas-goenka · Pull Request #1777 ·
databricks/cli](https://github.com/databricks/cli/pull/1777)
---------
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
## Changes
Added an integration test that deploys a bundle to the /Shared root path.
## Tests
```
--- PASS: TestAccDeployBasicToSharedWorkspace (24.58s)
PASS
coverage: 31.2% of statements in ./...
ok github.com/databricks/cli/internal/bundle 25.572s coverage: 31.2% of statements in ./...
```
---------
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
## Changes
When running the CLI on Databricks Runtime (DBR), use the
extension-aware filer to write an instantiated template if the instance
path is located in the workspace filesystem.
Notebooks cannot be written through the workspace filesystem's FUSE
mount. As a result, this is the only method for initializing templates
that contain notebooks when running the CLI on DBR and writing to the
workspace filesystem.
Depends on #1910 and #1911.
Supersedes #1744.
## Tests
* Manually confirmed I can initialize a template with notebooks when
running the CLI from the web terminal.
## Changes
While working on the v2 of #1744, I found that:
* Template initialization first copies built-in templates to a temporary
directory before initializing them
* Reading a template's contents goes through a `filer.Filer` but is
hardcoded to a local one
This change updates the interface for reading templates to be `fs.FS`.
This is compatible with the `embed.FS` type for the built-in templates,
so they no longer have to be copied to a temporary directory before
being used.
The alternative is to use a `filer.Filer` throughout, but this would
have required even more plumbing, and we don't need to _read_ templates,
including notebooks, from the workspace filesystem (yet?).
As part of making `template.Materialize` take an `fs.FS` argument, the
logic to match a given argument to a particular built-in template in the
`init` command has moved to sit next to its implementation.
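A minimal sketch of why `fs.FS` works well here, assuming a Materialize-like function (the actual `template.Materialize` signature takes more parameters, and the embed path shown is illustrative):
```go
package template

import (
	"embed"
	"io/fs"
)

// Built-in templates are embedded at build time. embed.FS implements fs.FS,
// so they can be read directly without copying to a temporary directory.
// The embed path is illustrative and assumes a local "templates" directory.
//
//go:embed all:templates
var builtinTemplates embed.FS

// materialize walks a template tree given as an fs.FS. The same code path
// works for embedded templates and for templates rooted at a local directory
// (os.DirFS).
func materialize(src fs.FS) error {
	return fs.WalkDir(src, ".", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return nil
		}
		// ... read and render each template file ...
		return nil
	})
}
```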
## Tests
Existing tests pass.
## Changes
The workspace extensions filer should not read or stat a notebook called
`foo` if the user calls `.Stat(ctx, "foo")`.
Instead, the filer should return a file-not-found error. This is because
the contract for the workspace extensions filer is to only work for
notebooks when the file path / name includes the extension (for example:
`foo.ipynb` or `foo.sql` instead of just `foo`).
## Tests
Integration tests.
## Changes
### Background
The workspace import APIs recently added support for importing Jupyter
notebooks written in R, Scala, or SQL; that is, non-Python notebooks.
This now works for the `/import-file` API, which we leverage in the CLI.
Note: We do not need any changes in `databricks sync`. It works out of
the box because any state mapping of local names to remote names that we
store is only scoped to the notebook extension (i.e., `.ipynb` in this
case) and is agnostic of the notebook's specific language.
### Problem this PR addresses
The extension-aware filer previously did not function because it checks
that a `.ipynb` notebook is written in Python. This PR relaxes that
constraint and adds integration tests for both the normal workspace
filer and the extension-aware filer writing and reading non-Python
`.ipynb` notebooks.
This implies that after this PR, DABs in the workspace / CLI from DBR
will work for non-Python notebooks as well. Non-Python notebooks in DABs
deployments from local machines already work after the platform-side
changes to the API landed; this PR just adds integration tests for that
bit of functionality.
Note: Any platform side changes we needed for the import API have
already been rolled out to production.
### Before
DABs deploys from local machines would work fine for non-Python
notebooks, but DABs deployments from DBR would not.
### After
DABs deploys from both local machines and DBR will work fine.
## Testing
I created the `.ipynb` notebook fixtures used in the integration tests
directly from the VSCode UI. This ensures high fidelity with how users
will create their non-Python notebooks locally.
For Python notebooks this is supported out of the box by VSCode, but for
R and Scala notebooks it requires installing the Jupyter kernel for R
and Scala on my local machine and using that from VSCode.
For SQL, I ended up directly modifying the `language_info` field in the
Jupyter metadata to create the test fixture.
### Discussion: Issues with configuring language at the cell level
The language metadata for a Jupyter notebook is standardized at the
notebook level (in the `language_info` field). Unfortunately, it's not
standardized at the cell level. Thus, for example, if a user changes the
language for their cell in VSCode (which is supported by the standard
Jupyter VSCode integration), it'll cause a runtime error when the user
actually attempts to run the notebook. This is because the cell-level
metadata is encoded in a format specific to VSCode:
```
cells: []{
"vscode": {
"languageId": "sql"
}
}
```
Supporting cell level languages is thus out of scope for this PR and can
be revisited along with the workspace files team if there's strong
customer interest.
Known issues:
- [ ] _(non-blocking with a command override)_ `apps.Update` requires 2
`name` params (one from path, one from request body)
- [ ] _(non-blocking)_ `lakeview.Create` does not require positional
argument `display_name` anymore because it's not marked as required in
request body
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.49.0 to 0.51.0.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
Added an E2E test to run Python wheels on an interactive cluster created
in the bundle.
We had a gap in testing wheels on all-purpose clusters, so this PR
addresses that gap.
## Changes
Convenience script to exec into a shell where a CLI build for a specific
branch is made available.
## Tests
Manually from `/tmp` with `bash <([path]) demo-dashboards`.
## Changes
Dashboards can be imported either via their own CRUD API, or via the
workspace import API. This test encodes the assumptions we can make
about the API behavior. More specifically, the identity of the dashboard
is retained on workspace import, as are the properties that aren't
surfaced in the workspace import API.
## Tests
The integration test passes.
## Changes
This test passes on normal `azure-prod` but started to fail on
`azure-prod-is`, which is the isolated version of azure-prod. This PR
patches the test to include the error returned from the cloud setup in
`azure-prod-is`.
## Tests
The test passes now on `azure-prod-is`.
## Changes
Follow-up from
https://github.com/databricks/cli/pull/1809#discussion_r1790045086. Users
will see the following error if the SDK version does not match during
CLI code generation.
```
--- FAIL: TestConsistentDatabricksSdkVersion (1.34s)
info_test.go:53:
Error Trace: /Users/shreyas.goenka/cli/internal/build/info_test.go:53
Error: Not equal:
expected: "0c86ea6dbd9a730c24ff0d4e509603e476955ac5"
actual : "6f6b1371e640f2dfeba72d365ac566368656f6b6"
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-0c86ea6dbd9a730c24ff0d4e509603e476955ac5
+6f6b1371e640f2dfeba72d365ac566368656f6b6
Test: TestConsistentDatabricksSdkVersion
Messages: please update the SDK version before generating the CLI
```
## Tests
Manually asserted that:
1. Generating the CLI without the SDK bump causes an error.
2. Generating the CLI after the SDK bump works as expected.
3. The test works even when the SDK is pinned to a specific commit like
`v0.47.1-0.20241002195128-6cecc224cbf7`
## Changes
The two functions `GetShortUserName` and `IsServicePrincipal` are
unrelated to auth or the purpose of the auth package. This change moves
them into their own package and updates `IsServicePrincipal` to take an
`*iam.User` argument instead of a string username.
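A plausible sketch of the new shape (the package name and the UUID heuristic are assumptions for illustration; the actual implementation may use different criteria):
```go
package iamutil // illustrative package name

import (
	"regexp"

	"github.com/databricks/databricks-sdk-go/service/iam"
)

// Service principal user names are UUIDs, while human users sign in with an
// email address. This heuristic is an assumption for the sketch.
var uuidRE = regexp.MustCompile(`^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$`)

// IsServicePrincipal reports whether the given user represents a service
// principal rather than a human user.
func IsServicePrincipal(user *iam.User) bool {
	return uuidRE.MatchString(user.UserName)
}
```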
## Tests
Tests pass.
## Changes
Due to platform changes, all library, notebook, and similar paths used in
Databricks must start with either the /Workspace or /Volumes prefix.
This PR makes sure that all bundle paths are correctly prefixed.
Note: this change is a breaking change if the user previously configured
and used a `/Workspace/Workspace` folder in their workspace file system,
or has the `/Workspace/${workspace.root_path}...` pattern configured
anywhere in their bundle config.
Fixes: #1751
Action items:
- [x] Scan DABs config and error out on
`/Workspace/${workspace.root_path}...` pattern usage
## Tests
Added unit tests
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
After introducing the `SyncRootPath` field on the bundle (#1694), the
previous `RootPath` became ambiguous: does it mean the bundle root path
or the sync root path? This PR renames the field to `BundleRootPath` to
remove the ambiguity.
## Tests
n/a
---------
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
## Summary
Use the friendly name of service principals when shortening their name.
This change is helpful for the prefix in development mode. Instead of
adding a prefix like `[dev 1706906c-c0a2-4c25-9f57-3a7aa3cb8123]`, we'll
add a prefix like `[dev my_principal]`.
## Changes
DLT pipeline recreations are destructive. They can lead to lost history
of previous updates, temporary outages of the tables, and are
potentially computationally expensive. Thus we make a breaking change
where a prompt is shown to the user if their configuration changes will
lead to a DLT recreation.
Users can skip the prompt by specifying the `--auto-approve` flag.
This PR also fixes an issue with our test runner where logs from the
cmdio.Logger would not get propagated to the reader returned by our
cobra test runner.
## Tests
Manually, and new unit and integration tests.
```
➜ bundle-playground-3 cli bundle deploy
Uploading bundle files to /Users/63ec021d-b0c6-49c0-93a0-5123953a1cb2/.bundle/test/development/files...
The following DLT pipelines will be recreated. Underlying tables will be unavailable for a transient period until the newly recreated pipelines are run once successfully. History of previous pipeline update runs will be lost because of recreation:
recreate pipeline foo
Would you like to proceed? [y/n]: n
Deployment cancelled!
```
## Changes
We were not using the readers and writers set in the test fixtures in
the progress logger. This PR fixes that. It also modifies
`TestAccAbortBind`, which was implicitly relying on the bug.
I encountered this bug while working on
https://github.com/databricks/cli/pull/1672.
## Tests
Manually.
From non-tty:
```
Error: failed to bind the resource, err: This bind operation requires user confirmation, but the current console does not support prompting. Please specify --auto-approve if you would like to skip prompts and proceed.
```
From tty, bind works as expected.
```
Confirm import changes? Changes will be remotely applied only after running 'bundle deploy'. [y/n]: y
Updating deployment state...
Successfully bound databricks_pipeline with an id '9d2dedbb-f522-4503-96ba-4bc4d5bfa77d'. Run 'bundle deploy' to deploy changes to your workspace
```
## Changes
With hc-install version `0.8.0` there was a regression where debug logs
would be leaked into stderr. Reported upstream in
https://github.com/hashicorp/hc-install/issues/239.
Meanwhile we need to revert and pin to version `0.7.0`. This PR also
includes a regression test.
## Tests
Regression test.
## Changes
`TestAccFilerWorkspaceFilesExtensionsErrorsOnDupName` recently started
failing in our nightlies because the upstream `import` API was changed
to [prohibit conflicting file
paths](https://docs.databricks.com/en/release-notes/product/2024/august.html#files-can-no-longer-have-identical-names-in-workspace-folders).
Because existing conflicting file paths can still be grandfathered in,
we need to retain coverage for the test. To do this, this PR:
1. Removes the failing
`TestAccFilerWorkspaceFilesExtensionsErrorsOnDupName`.
2. Adds an equivalent unit test with the `list` and `get-status` API
calls mocked.
## Changes
These 2 tests failed:
* `TestAccAlertsCreateErrWhenNoArguments` -> switched to the legacy
command for now; the new one does not have a required request body
(might be an OpenAPI spec issue,
https://github.com/databricks/databricks-sdk-go/blob/main/service/sql/model.go#L595),
will follow up later
* `TestAccClustersList` -> increased the channel size because the new
clusters API returns more clusters
## Tests
Tests are green now