## Changes
The sync struct initialization would recreate the deleted `file_path`.
This PR avoids initializing the sync object when deleting the snapshot,
thus fixing the lingering `file_path` after `bundle destroy`.
## Tests
Manually, and an integration test to prevent regression.
## Changes
Allow specifying CLI version constraints required to run the bundle.
Example of configuration:
#### Only allow a specific version
```
bundle:
  name: my-bundle
  databricks_cli_version: "0.210.0"
```
#### Allow all patch releases
```
bundle:
  name: my-bundle
  databricks_cli_version: "0.210.*"
```
#### Constrain the minimum version
```
bundle:
  name: my-bundle
  databricks_cli_version: ">= 0.210.0"
```
#### Constrain a range
```
bundle:
  name: my-bundle
  databricks_cli_version: ">= 0.210.0, <= 1.0.0"
```
For other examples see:
https://github.com/Masterminds/semver?tab=readme-ov-file#checking-version-constraints
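As a rough illustration (not the actual CLI code), a constraint check along these lines can be done with the Masterminds/semver package linked above; the constraint and version values below are just examples:
```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

func main() {
	// Constraint as it might appear in databricks_cli_version (example value).
	c, err := semver.NewConstraint(">= 0.210.0, <= 1.0.0")
	if err != nil {
		panic(err)
	}

	// Version of the running CLI (example value).
	v := semver.MustParse("0.216.0")

	if !c.Check(v) {
		fmt.Printf("Databricks CLI version constraint not satisfied. Required: %s, current: %s\n", c, v)
	}
}
```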
Example error
```
sh-3.2$ databricks bundle validate
Error: Databricks CLI version constraint not satisfied. Required: >= 1.0.0, current: 0.216.0
```
## Tests
Added unit tests covering all possible configuration permutations.
---------
Co-authored-by: Lennart Kats (databricks) <lennart.kats@databricks.com>
## Changes
The bundle path was previously stored on the `config.Root` type under
the assumption that the first configuration file being loaded would set
it. This is slightly counterintuitive and we know what the path is upon
construction of the bundle. The new location for this property reflects
this.
## Tests
Unit tests pass.
## Changes
This diagnostics type allows us to capture multiple warnings as well as
errors in the return value. This is a preparation for returning
additional warnings from mutators in case we detect non-fatal problems.
* All return statements that previously returned an error now return
`diag.FromErr`
* All return statements that previously returned `fmt.Errorf` now return
`diag.Errorf`
* All `err != nil` checks now use `diags.HasError()` or `diags.Error()`
## Tests
* Existing tests pass.
* I confirmed no call site under `./bundle` or `./cmd/bundle` uses
`errors.Is` on the return value from mutators. This is relevant because
we cannot wrap errors with `%w` when calling `diag.Errorf` (like
`fmt.Errorf`; context in https://github.com/golang/go/issues/47641).
## Changes
CheckRunningResource does `terraform.Show`, which (I believe) expects a
valid `bundle.tf.json`, which is only written as part of
`terraform.Write` later.
With this PR the order is changed.
Fixes #1286
## Tests
Added regression E2E test
## Changes
This PR introduces a new structure (and a file) that is used locally and
synced remotely to the Databricks workspace to track bundle
deployment-related metadata.
The state is pulled from the remote, updated, and pushed back as
part of the `bundle deploy` command.
This state can be used for deployment sequencing, as its `Version` field
is monotonically increasing on each deployment.
Currently, it only tracks files being synced as part of the deployment.
This helps fix the issue with files not being removed during deployments
on CI/CD, as the sync snapshot was never present there.
Fixes #943
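For illustration, a hedged sketch of the kind of shape such a state file could have; apart from `Version`, the field names here are assumptions rather than the actual struct:
```go
package example

// DeploymentState is an illustrative sketch, not the actual type.
type DeploymentState struct {
	// Version increases monotonically with each deployment and can be
	// used for deployment sequencing.
	Version int64 `json:"version"`

	// Files lists the files synced as part of the deployment, so that
	// deletions can be detected even when no local sync snapshot exists
	// (e.g. on CI/CD).
	Files []string `json:"files"`
}
```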
## Tests
Added E2E (regression) test for files removal on CI/CD
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
aws-prod-ucws has CLOUD_ENV set to "ucws" which was failing the
validation checks in the template itself. This PR fixes the test.
## Tests
The tests pass now
## Changes
This PR:
1. Adds an integration test for mlops-stacks that checks the
initialization and deployment of the project was successful.
2. Fixes a bug in the initialization of templates from a non-TTY. We need
to process the input parameters in order since their descriptions can
refer to input parameters that came before in the interactive UX.
## Tests
The integration test passes in CI.
## Changes
Currently, when the CLI runs a list API call (like list jobs), it uses
the `List*All` methods from the SDK, which list all resources in the
collection. This is very slow for large collections: if you need to list
all jobs from a workspace that has 10,000+ jobs, you'll be waiting for
at least 100 RPCs to complete before seeing any output.
As an alternative to the `List*All()` methods, the SDK recently added an iterator
data structure that allows traversing the collection without needing to
completely list it first. New pages are fetched lazily if the next
requested item belongs to the next page. Using the List() methods that
return these iterators, the CLI can proactively print out some of the
response before the complete collection has been fetched.
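For example, here is a hedged sketch of consuming such an iterator from the Go SDK; the exact request and item types for the Jobs API are assumptions here:
```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/jobs"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// List() returns an iterator; pages are fetched lazily as items are requested.
	it := w.Jobs.List(ctx, jobs.ListJobsRequest{})
	for it.HasNext(ctx) {
		job, err := it.Next(ctx)
		if err != nil {
			panic(err)
		}
		// Each item can be printed as soon as its page arrives,
		// without waiting for the full collection.
		fmt.Println(job.JobId)
	}
}
```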
This involves a pretty major rewrite of the rendering logic in `cmdio`.
The idea there is to define custom rendering logic based on the type of
the provided resource. There are three renderer interfaces:
1. textRenderer: supports printing something in a textual format (i.e.
not JSON, and not templated).
2. jsonRenderer: supports printing something in a pretty-printed JSON
format.
3. templateRenderer: supports printing something using a text template.
There are also three renderer implementations:
1. readerRenderer: supports printing a reader. This only implements the
textRenderer interface.
2. iteratorRenderer: supports printing a `listing.Iterator` from the Go
SDK. This implements jsonRenderer and templateRenderer, buffering 20
resources at a time before writing them to the output.
3. defaultRenderer: supports printing arbitrary resources (the previous
implementation).
Callers will use `cmdio.Render()` for rendering an individual
resource or an `io.Reader`, and `cmdio.RenderIterator()` for rendering an
iterator. The separate method is needed to safely be able to match on
the type of the iterator, since Go does not allow runtime type matches
on generic types with an existential type parameter.
One other change that needs to happen is to split the templates used for
text representation of list resources into a header template and a row
template. The template is now executed multiple times for List API
calls, but the header should only be printed once. To support this, I
have added `headerTemplate` to `cmdIO`, and I have also changed
`RenderWithTemplate` to include a `headerTemplate` parameter everywhere.
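For reference, an illustrative sketch of what the three renderer interfaces described above could look like; the exact method names and signatures in `cmdio` may differ:
```go
package cmdiosketch

import (
	"context"
	"io"
	"text/template"
)

// textRenderer prints a textual (non-JSON, non-templated) representation.
type textRenderer interface {
	renderText(ctx context.Context, w io.Writer) error
}

// jsonRenderer prints a pretty-printed JSON representation.
type jsonRenderer interface {
	renderJson(ctx context.Context, w io.Writer) error
}

// templateRenderer prints using a text template; for iterators the row
// template runs per item while the header template runs only once.
type templateRenderer interface {
	renderTemplate(ctx context.Context, tmpl *template.Template, w io.Writer) error
}
```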
## Tests
- [x] Unit tests for text rendering logic
- [x] Unit test for reflection-based iterator construction.
---------
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
```
shreyas.goenka@THW32HFW6T cli % databricks fs -h
Commands to do file system operations on DBFS and UC Volumes.
Usage:
databricks fs [command]
Available Commands:
cat Show file content.
cp Copy files and directories.
ls Lists files.
mkdir Make directories.
rm Remove files and directories.
```
This PR adds support for UC Volumes to the fs commands. The fs commands
for UC volumes work the same as they currently do for DBFS. This is
ensured by running the same test matrix across both the DBFS and UC
Volumes versions of the fs commands.
## Tests
Support for UC volumes is tested by running the same tests as we did
originally for DBFS commands. The tests require a `main` catalog to
exist in the workspace, which it does in our test workspace environments
that have the `TEST_METASTORE_ID` environment variable set.
For the Files API filer, we do the same by running mostly common tests
to ensure the filers for "local", "wsfs", "dbfs" and "files API" are
consistent.
The tests are also made to all run in parallel to reduce the time taken.
To ensure the separation of the tests, each test creates its own UC
schema (for UC volumes tests) or DBFS directories (for DBFS tests).
## Changes
This is a fundamental change to how we load and process bundle
configuration. We now depend on the configuration being represented as a
`dyn.Value`. This representation is functionally equivalent to Go's
`any` (it is a variant type) and allows us to capture metadata associated with
a value, such as where it was defined (e.g. file, line, and column). It
also allows us to represent Go's zero values properly (e.g. empty
string, integer equal to 0, or boolean false).
Using this representation allows us to let the configuration model
deviate from the typed structure we have been relying on so far
(`config.Root`). We need to deviate from these types when using
variables for fields that are not a string themselves. For example,
using `${var.num_workers}` for an integer `workers` field was impossible
until now (though not implemented in this change).
The loader for a `dyn.Value` includes functionality to capture any and
all type mismatches between the user-defined configuration and the
expected types. These mismatches can be surfaced as validation errors in
future PRs.
Given that many mutators expect the typed struct to be the source of
truth, this change converts between the dynamic representation and the
typed representation on mutator entry and exit. Existing mutators can
continue to modify the typed representation and these modifications are
reflected in the dynamic representation (see `MarkMutatorEntry` and
`MarkMutatorExit` in `bundle/config/root.go`).
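As a toy illustration of the idea (deliberately not the real `dyn` API), attaching location metadata to a value could look roughly like this:
```go
package example

// Location records where a value was defined in the configuration.
type Location struct {
	File   string
	Line   int
	Column int
}

// Value pairs an arbitrary Go value with the location it came from,
// so that diagnostics can point back at the source file.
type Value struct {
	v   any
	loc Location
}

func NewValue(v any, loc Location) Value { return Value{v: v, loc: loc} }

func (v Value) Location() Location { return v.loc }
```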
Required changes included in this change:
* The existing interpolation package is removed in favor of
`libs/dyn/dynvar`.
* Functionality to merge job clusters, job tasks, and pipeline clusters
are now all broken out into their own mutators.
To be implemented later:
* Allow variable references for non-string types.
* Surface diagnostics about the configuration provided by the user in
the validation output.
* Some mutators use a resource's configuration file path to resolve
related relative paths. These depend on `bundle/config/paths.Path` being
set and populated through `ConfigureConfigFilePath`. Instead, they
should interact with the dynamically typed configuration directly. Doing
this also unlocks being able to differentiate different base paths used
within a job (e.g. a task override with a relative path defined in a
directory other than the base job).
## Tests
* Existing unit tests pass (some have been modified to accommodate the change)
* Integration tests pass
## Changes
These fields (keys and values) need to be double-quoted in order for the
YAML loader to read, parse, and unmarshal them into the Go struct correctly,
because these fields are of type `map[string]string`.
## Tests
Added regression unit and E2E tests
## Changes
Added `bundle deployment bind` and `unbind` command.
This command allows binding bundle-defined resources to existing
resources in the Databricks workspace so that they become DABs-managed.
## Tests
Manually + added E2E test
## Changes
This helper:
* Constructs a context
* Constructs a `*databricks.WorkspaceClient`
* Ensures required environment variables are present to run an
integration test
* Enables debugging integration tests from VS Code
Debugging integration tests (from VS Code) is made possible by a prelude
in the helper that checks if the calling process is a debug binary, and
if so, sources environment variables from
`~/.databricks/debug-env.json` (if present).
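A hedged sketch of what a helper along these lines could look like; the function name, the `CLOUD_ENV` check, and the use of testify are assumptions made for illustration:
```go
package example

import (
	"context"
	"os"
	"testing"

	"github.com/databricks/databricks-sdk-go"
	"github.com/stretchr/testify/require"
)

// workspaceTest is an illustrative sketch, not the actual helper.
func workspaceTest(t *testing.T) (context.Context, *databricks.WorkspaceClient) {
	// Skip unless the environment looks like an integration test environment.
	if os.Getenv("CLOUD_ENV") == "" {
		t.Skip("CLOUD_ENV not set; skipping integration test")
	}

	// When running under a debugger, additional environment variables could be
	// sourced from a file such as ~/.databricks/debug-env.json (hypothetical step).

	w, err := databricks.NewWorkspaceClient()
	require.NoError(t, err)
	return context.Background(), w
}
```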
## Tests
Integration tests still pass.
---------
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
The indentation mistake on the `path` field under `notebook` meant the
pipeline had a single entry with a `nil` notebook field. This was
allowed but incorrect.
While working on the `dyn.Value` approach, this yielded a non-nil but
zeroed `notebook` field and a failure to translate an empty path.
## Tests
Correcting the indentation made the test fail because the file is not a
notebook. I changed it to a `file` reference and the test now passes.
## Changes
The approach to do this was:
1. Iterate over all libraries in all job tasks
2. Find references to local libraries
3. Store pointer to `compute.Library` in the matching artifact file to
signal it should be uploaded
This breaks down when introducing #1098 because we can no longer track
unexported state across mutators. The approach in this PR performs the
path matching twice; once in the matching mutator where we check if each
referenced file has an artifacts section, and once during artifact
upload to rewrite the library path from a local file reference to an
absolute Databricks path.
## Tests
Integration tests pass.
## Changes
Adds the `short_name` helper function. `short_name` is useful when templates
do not want to print the full userName (typically an email or service
principal application ID) of the current user.
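A hedged sketch of the kind of transformation such a helper might perform (the real implementation may normalize further):
```go
package example

import "strings"

// shortName returns the part of userName before the "@" sign, e.g.
// "jane.doe@example.com" -> "jane.doe". Illustrative only.
func shortName(userName string) string {
	if i := strings.Index(userName, "@"); i >= 0 {
		return userName[:i]
	}
	return userName
}
```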
## Tests
Integration test. Also adds integration tests for other helper functions
that interact with the Databricks API.
## Changes
Now it's possible to generate bundle configuration for an existing job.
For now it only supports jobs with notebook tasks.
It will download the notebooks referenced in the job tasks and generate
bundle YAML config for this job, which can be included in a larger bundle.
## Tests
Running the command manually.
Example of generated config:
```
resources:
  jobs:
    job_128737545467921:
      name: Notebook job
      format: MULTI_TASK
      tasks:
        - task_key: as_notebook
          existing_cluster_id: 0704-xxxxxx-yyyyyyy
          notebook_task:
            base_parameters:
              bundle_root: /Users/andrew.nester@databricks.com/.bundle/job_with_module_imports/development/files
            notebook_path: ./entry_notebook.py
            source: WORKSPACE
          run_if: ALL_SUCCESS
      max_concurrent_runs: 1
```
## Tests
Manual (on our last 100 jobs) + added end-to-end test
```
--- PASS: TestAccGenerateFromExistingJobAndDeploy (50.91s)
PASS
coverage: 61.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 51.209s coverage: 61.5% of
statements in ./...
```
## Changes
Copying a local file on Windows to a remote directory in DBFS would fail
if the path was specified as a Windows-style path (as opposed to a
UNIX-style path). This PR fixes that.
Note: UNIX-style paths will continue to work because `filepath.Base`
respects both `/` and `\` as file separators on Windows. See `IsPathSeparator` in
https://go.dev/src/os/path_windows.go.
Fixes issue: https://github.com/databricks/cli/issues/1109.
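For illustration, `filepath.Base` behaves as follows when the CLI runs on Windows (the paths are examples):
```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// On Windows both separators are recognized, so both calls print "foo.txt".
	// On UNIX only the forward-slash variant would.
	fmt.Println(filepath.Base(`C:\Users\me\Desktop\foo.txt`))
	fmt.Println(filepath.Base("C:/Users/me/Desktop/foo.txt"))
}
```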
## Tests
Integration test and manually
```
C:\Users\shreyas.goenka>Desktop\cli.exe fs cp .\Desktop\foo.txt dbfs:/Users/shreyas.goenka@databricks.com
.\Desktop\foo.txt -> dbfs:/Users/shreyas.goenka@databricks.com/foo.txt
C:\Users\shreyas.goenka>Desktop\cli.exe fs cat dbfs:/Users/shreyas.goenka@databricks.com/foo.txt
hello, world
```
## Changes
It makes the behaviour consistent, with or without `python_wheel_wrapper`,
when a job is run with the `--python-params` flag.
In `python_wheel_wrapper` mode it converts the dynamic `python_params` into a
specially named dynamic `notebook_param`, and the wrapper reads them with
`dbutils` and passes them to `sys.argv`.
Fixes #1000
## Tests
Added an integration test.
Integration tests pass.
## Changes
Removed the hash from the upload path since it's not useful anyway.
The main reason for that change was to make it work on all-purpose
clusters, but in order to make it work, the wheel version needs to be
increased anyway, so having only the hash in the path is useless.
Note: using the `--build-number` (build tag) flag does not help with
re-installing libraries on all-purpose clusters. The reason is that
`pip` ignores the build tag when upgrading a library and only looks at the
wheel version.
The build tag is only used for sorting versions, and the one with the higher
build tag takes priority when installing. It only works if no library is
installed yet.
See `a15dd75d98/src/pip/_internal/index/package_finder.py` (L522-L556) and
https://github.com/pypa/pip/issues/4781.
Thus, the only way to reinstall the library on an all-purpose cluster is to
increase the wheel version manually or use automatic version generation,
e.g.:
```
setup(
    version=datetime.datetime.utcnow().strftime("%Y%m%d.%H%M%S"),
    ...
)
```
## Tests
Integration tests passed.
## Changes
A bug in the code that pulls the remote state could cause the local
state to be empty instead of a copy of the remote state. This happened
only if the local state was present and stale when compared to the
remote version.
We correctly checked for the state serial to see if the local state had
to be replaced but didn't seek back on the remote state before writing
it out. Because the staleness check would read the remote state in full,
copying from the same reader would immediately yield an EOF.
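A hedged sketch of the fix described above; the names are illustrative, not the actual implementation:
```go
package example

import (
	"io"
	"os"
)

// replaceLocalState copies the remote state over the local state. Because the
// staleness check has already read the remote reader to the end, we must seek
// back to the start before copying, otherwise the copy hits EOF immediately
// and leaves the local state empty.
func replaceLocalState(local *os.File, remote io.ReadSeeker) error {
	if _, err := remote.Seek(0, io.SeekStart); err != nil {
		return err
	}
	_, err := io.Copy(local, remote)
	return err
}
```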
## Tests
* Unit tests for state pull and push mutators that rely on a mocked
filer.
* An integration test that deploys the same bundle from multiple paths,
triggering the staleness logic.
Both failed prior to the fix and now pass.
## Changes
This PR:
1. Renames `FilesPath` -> `FilePath` and `ArtifactsPath` ->
`ArtifactPath` in the bundle and metadata configuration to make them
consistent with the JSON tags.
2. Fixes development / production mode error messages to point to
`file_path` and `artifact_path`.
## Tests
Existing unit tests. This is a straightforward renaming of the fields.
## Changes
This functionality is not exercised (and will not be anytime soon).
Instead, we use a map of first-party aliases for supported templates; see
`1e46b9f88a/cmd/bundle/init.go` (L21).
## Tests
Existing tests and manually, bundle init still works.
## Changes
This PR introduces a metadata struct that stores a subset of bundle
configuration that we want to expose to other Databricks services that
wish to integrate with bundles.
This metadata is uploaded to the file
`${bundle.workspace.state_path}/metadata.json` in the WSFS destination
of the bundle deployment.
Documentation for emitted metadata fields:
* `version`: Version for the metadata file schema
* `config.bundle.git.branch`: Name of the git branch the bundle was
deployed from.
* `config.bundle.git.origin_url`: URL for git remote "origin"
* `config.bundle.git.bundle_root_path`: Relative path of the bundle root
from the root of the git repository. Is set to "." if they are the same.
* `config.bundle.git.commit`: SHA-1 commit hash of the exact commit this
bundle was deployed from. Note: the deployment might not exactly match
this commit if there are changes that have not been committed to
git at deploy time.
* `file_path`: Path in the workspace to which we sync bundle files.
* `resources.jobs.[job-ref].id`: ID of the job.
* `resources.jobs.[job-ref].relative_path`: Relative path of the YAML
config file from the bundle root where this job was defined.
Example metadata object when bundle root and git root are the same:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "master",
        "origin_url": "www.host.com",
        "commit": "7af8e5d3f5dceffff9295d42d21606ccf056dce0",
        "bundle_root_path": "."
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
Example metadata when the git root is one level above the bundle repo:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "dev-branch",
        "origin_url": "www.my-repo.com",
        "commit": "3db46ef750998952b00a2b3e7991e31787e4b98b",
        "bundle_root_path": "pipeline-progress"
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
This unblocks integration with the jobs break-glass UI for bundles.
## Tests
Unit tests and integration tests.
## Changes
Updates to bundle templates can require updated versions of the CLI.
This PR extends the JSON schema representation to allow template authors
to set a minimum CLI version they require for their templates.
This is required to make improvements/additions to the mlops-stacks repo.
## Tests
Tested using unit tests and manually.
For manual testing, I created a custom build of the CLI using goreleaser
and then tested it against a local instance of mlops-stacks.
When the mlops-stacks schema has:
```
"min_databricks_cli_version": "v5000.1.1",
```
the output is an error, as expected:
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Error: minimum CLI version "v5000.1.1" is greater than current CLI version "v0.207.2-dev+1b992c0". Please upgrade your current Databricks CLI
```
When the mlops-stacks schema has:
```
"min_databricks_cli_version": "v0.1.1",
```
the output shows that validation passes:
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Welcome to MLOps Stack. For detailed information on project generation, see the README at https://github.com/databricks/mlops-stack/blob/main/README.md.
Project Name [my-mlops-project]: ^C
```
## Changes
Previously we only supported uploading Python wheels smaller than 10 MB,
due to using the Workspace Import API and its `content` field:
https://docs.databricks.com/api/workspace/workspace/import
By switching to `WorkspaceFilesClient` we overcome the limit because
it uses the POST body for the API instead.
## Tests
`TestAccUploadArtifactFileToCorrectRemotePath` integration test passes
```
=== RUN TestAccUploadArtifactFileToCorrectRemotePath
artifacts_test.go:28: gcp
2023/10/17 15:24:04 INFO Using Google Credentials sdk=true
helpers.go:356: Creating /Users/.../integration-test-wsfs-ekggbkcfdkid
artifacts.Upload(test.whl): Uploading...
2023/10/17 15:24:06 INFO Using Google Credentials mutator=artifacts.Upload(test) sdk=true
artifacts.Upload(test.whl): Upload succeeded
helpers.go:362: Removing /Users/.../integration-test-wsfs-ekggbkcfdkid
--- PASS: TestAccUploadArtifactFileToCorrectRemotePath (5.66s)
PASS
coverage: 14.9% of statements in ./...
ok github.com/databricks/cli/internal 6.109s coverage: 14.9% of statements in ./...
```
## Changes
This PR:
1. Adds the `--file` flag to the workspace export command. This allows
you to specify an output file to write to.
2. Adds E2E integration tests for the workspace export command.
## Changes
This PR makes a few really important QOL improvements to the `workspace
import` command.
They are:
1. Adds the `--file` flag, which allows a user to specify a file to read
the content from.
2. Wraps the most common error that first-time users of this command will
run into with a helpful hint.
3. Minor changes to the command's Use string, changing `PATH` ->
`TARGET_PATH`.
## Tests
Integration tests. The newly added integration tests check that the
`--file` flag works as expected for both the `SOURCE` and `AUTO` formats.
Skipped the other formats because the API behaviour for them is
straightforward.
## Changes
This PR pays some tech debt by refactoring sync diff computation into
interfaces that are more robust.
Specifically:
1. Refactors the single diff computation function into a `SnapshotState`
class that computes the target state based only on the current local
files, making it more robust and not carrying over state from previous
iterations.
2. Adds new validations for the sync state which make sure that the
invariants that downstream code expects actually hold. This
prevents a class of issues where these invariants break and the
synchroniser behaves unexpectedly.
Note: this does not change the existing schema for the snapshot, only
the way the diff is computed, and thus it is backwards compatible (i.e. it
does not require a schema version bump).
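As a rough illustration of the kind of invariant validation meant in point 2 above (the type and field names are assumptions, not the real snapshot schema):
```go
package example

import "fmt"

// SnapshotState is an illustrative sketch; the real type and fields differ.
type SnapshotState struct {
	// LocalToRemote maps local file paths to remote names.
	LocalToRemote map[string]string
	// RemoteToLocal is the inverse mapping.
	RemoteToLocal map[string]string
}

// validate checks an invariant downstream code relies on: the two maps must
// be consistent inverses of each other.
func (s *SnapshotState) validate() error {
	for local, remote := range s.LocalToRemote {
		if s.RemoteToLocal[remote] != local {
			return fmt.Errorf("invalid sync state: %s is not mapped back to %s", remote, local)
		}
	}
	for remote, local := range s.RemoteToLocal {
		if s.LocalToRemote[local] != remote {
			return fmt.Errorf("invalid sync state: %s is not mapped back to %s", local, remote)
		}
	}
	return nil
}
```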
## Tests
## Changes
These tests allow us to get information about the execution context
(PYTHONPATH, CWD) for various Python tasks and different cluster setups.
Note: this test won't be executed automatically as part of nightly
builds since it requires the `RUN_PYTHON_TASKS_TEST` env var to be set.
## Tests
The integration test ran successfully.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>