## Changes
```
shreyas.goenka@THW32HFW6T cli % databricks fs -h
Commands to do file system operations on DBFS and UC Volumes.

Usage:
  databricks fs [command]

Available Commands:
  cat         Show file content.
  cp          Copy files and directories.
  ls          Lists files.
  mkdir       Make directories.
  rm          Remove files and directories.
```
This PR adds support for UC Volumes to the fs commands. The fs commands
for UC volumes work the same as they currently do for DBFS. This is
ensured by running the same test matrix across both the DBFS and UC
Volumes versions of the fs commands.
## Tests
Support for UC Volumes is tested by running the same tests we originally
ran for the DBFS commands. The tests require a `main` catalog to exist in
the workspace, which is the case in our test workspace environments that
have the `TEST_METASTORE_ID` environment variable set.
For the Files API filer, we do the same by running mostly shared tests
to ensure the "local", "wsfs", "dbfs", and Files API filers are
consistent.
The tests have also been made to run in parallel to reduce the total time taken.
To keep the tests isolated, each test creates its own UC schema (for the
UC Volumes tests) or DBFS directory (for the DBFS tests).
## Changes
This is a fundamental change to how we load and process bundle
configuration. We now depend on the configuration being represented as a
`dyn.Value`. This representation is functionally equivalent to Go's
`any` (it can hold a value of any type) and allows us to capture metadata associated with
a value, such as where it was defined (e.g. file, line, and column). It
also allows us to represent Go's zero values properly (e.g. empty
string, integer equal to 0, or boolean false).
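As a rough illustration of the idea (this is a sketch, not the actual `libs/dyn` API), a dynamic value pairs the underlying data with the location it was defined at, and an explicit zero value remains a real value:
```
package main

import "fmt"

// Location records where a value was defined in the configuration.
type Location struct {
	File   string
	Line   int
	Column int
}

// Value wraps arbitrary configuration data together with its location.
// The wrapped data can be a string, int, bool, map, slice, or nil.
type Value struct {
	v   any
	loc Location
}

func main() {
	// An explicit zero value (num_workers: 0) stays distinguishable from an
	// unset field, and we know exactly where it came from.
	workers := Value{v: 0, loc: Location{File: "databricks.yml", Line: 12, Column: 3}}
	fmt.Printf("num_workers = %v (defined at %s:%d:%d)\n",
		workers.v, workers.loc.File, workers.loc.Line, workers.loc.Column)
}
```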
Using this representation allows us to let the configuration model
deviate from the typed structure we have been relying on so far
(`config.Root`). We need to deviate from these types when using
variables for fields that are not themselves strings. For example,
using `${var.num_workers}` for an integer `workers` field was impossible
until now (though not implemented in this change).
The loader for a `dyn.Value` includes functionality to capture any and
all type mismatches between the user-defined configuration and the
expected types. These mismatches can be surfaced as validation errors in
future PRs.
Given that many mutators expect the typed struct to be the source of
truth, this change converts between the dynamic representation and the
typed representation on mutator entry and exit. Existing mutators can
continue to modify the typed representation and these modifications are
reflected in the dynamic representation (see `MarkMutatorEntry` and
`MarkMutatorExit` in `bundle/config/root.go`).
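The entry/exit bookkeeping can be pictured with a small, self-contained sketch (the real code lives in `bundle/config/root.go` and uses the `libs/dyn` conversion helpers; the types and functions below are stand-ins):
```
package main

import "fmt"

// dynValue stands in for the dynamic representation.
type dynValue map[string]any

// root stands in for the typed configuration struct.
type root struct {
	Name string
}

// toTyped copies the dynamic representation into the typed struct.
func toTyped(dst *root, src dynValue) {
	if v, ok := src["name"].(string); ok {
		dst.Name = v
	}
}

// fromTyped folds mutations made on the typed struct back into the dynamic value.
func fromTyped(src root, dst dynValue) {
	dst["name"] = src.Name
}

// applyMutator mirrors the entry/exit conversion described above.
func applyMutator(cfg dynValue, mutate func(*root)) {
	var typed root
	toTyped(&typed, cfg)  // mutator entry: dynamic -> typed
	mutate(&typed)        // existing mutators keep modifying the typed struct
	fromTyped(typed, cfg) // mutator exit: reflect changes in the dynamic value
}

func main() {
	cfg := dynValue{"name": "my_bundle"}
	applyMutator(cfg, func(r *root) { r.Name = r.Name + "-dev" })
	fmt.Println(cfg["name"]) // my_bundle-dev
}
```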
Required changes included in this PR:
* The existing interpolation package is removed in favor of
`libs/dyn/dynvar`.
* Functionality to merge job clusters, job tasks, and pipeline clusters
is now broken out into separate mutators.
To be implemented later:
* Allow variable references for non-string types.
* Surface diagnostics about the configuration provided by the user in
the validation output.
* Some mutators use a resource's configuration file path to resolve
related relative paths. These depend on `bundle/config/paths.Path` being
set and populated through `ConfigureConfigFilePath`. Instead, they
should interact with the dynamically typed configuration directly. Doing
this also unlocks being able to differentiate different base paths used
within a job (e.g. a task override with a relative path defined in a
directory other than the base job).
## Tests
* Existing unit tests pass (some have been modified to accommodate the new representation)
* Integration tests pass
## Changes
These fields (keys and values) need to be double-quoted in order for the
YAML loader to read, parse, and unmarshal them into the Go struct
correctly, because these fields are of type `map[string]string`.
## Tests
Added regression unit and E2E tests
## Changes
Added `bundle deployment bind` and `unbind` commands.
These commands allow binding bundle-defined resources to existing
resources in a Databricks workspace so that they become DABs-managed.
## Tests
Manually + added E2E test
## Changes
This helper:
* Constructs a context
* Constructs a `*databricks.WorkspaceClient`
* Ensures required environment variables are present to run an
integration test
* Enables debugging integration tests from VS Code
Debugging integration tests (from VS Code) is made possible by a prelude
in the helper that checks if the calling process is a debug binary, and
if so, sources environment variables from
`~/.databricks/debug-env.json` (if present).
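Roughly, the prelude looks like the following sketch (the debug-binary detection and the JSON shape are assumptions for illustration, not the exact helper):
```
package testutil // illustrative package name

import (
	"encoding/json"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

// loadDebugEnv applies environment variables from ~/.databricks/debug-env.json
// when the test runs as a debug binary (e.g. launched by delve from VS Code).
func loadDebugEnv(t *testing.T) {
	// A regular `go test` binary does not carry the debugger's binary name.
	if !strings.Contains(os.Args[0], "__debug_bin") {
		return
	}
	home, err := os.UserHomeDir()
	if err != nil {
		t.Fatal(err)
	}
	raw, err := os.ReadFile(filepath.Join(home, ".databricks", "debug-env.json"))
	if err != nil {
		return // no debug environment configured; nothing to do
	}
	var env map[string]string
	if err := json.Unmarshal(raw, &env); err != nil {
		t.Fatal(err)
	}
	for k, v := range env {
		t.Setenv(k, v)
	}
}
```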
## Tests
Integration tests still pass.
---------
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
The indentation mistake on the `path` field under `notebook` meant the
pipeline had a single entry with a `nil` notebook field. This was
allowed but incorrect.
While working on the `dyn.Value` approach, this yielded a non-nil but
zeroed `notebook` field and a failure to translate an empty path.
## Tests
Correcting the indentation made the test fail because the file is not a
notebook. I changed it to a `file` reference and the test now passes.
## Changes
The previous approach was to:
1. Iterate over all libraries in all job tasks
2. Find references to local libraries
3. Store pointer to `compute.Library` in the matching artifact file to
signal it should be uploaded
This breaks down when introducing #1098 because we can no longer track
unexported state across mutators. The approach in this PR performs the
path matching twice; once in the matching mutator where we check if each
referenced file has an artifacts section, and once during artifact
upload to rewrite the library path from a local file reference to an
absolute Databricks path.
## Tests
Integration tests pass.
## Changes
Adds the `short_name` helper function. `short_name` is useful when
templates do not want to print the full `userName` (typically an email
address or a service principal application ID) of the current user.
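For illustration, one plausible derivation (the actual helper may normalize differently) is to take the local part of an email-style `userName` and fall back to the full value otherwise:
```
package main

import (
	"fmt"
	"strings"
)

// shortName returns the part of userName before the "@", or userName itself
// if it does not look like an email address.
func shortName(userName string) string {
	local, _, found := strings.Cut(userName, "@")
	if !found {
		return userName
	}
	return local
}

func main() {
	fmt.Println(shortName("jane.doe@example.com")) // jane.doe
	fmt.Println(shortName("a-service-principal-application-id")) // unchanged
}
```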
## Tests
Integration test. Also adds integration tests for other helper functions
that interact with the Databricks API.
## Changes
Now it's possible to generate bundle configuration for an existing job.
For now, only jobs with notebook tasks are supported.
The command downloads the notebooks referenced in the job tasks and
generates bundle YAML config for the job, which can be included in a
larger bundle.
## Tests
Ran the command manually.
Example of the generated config:
```
resources:
  jobs:
    job_128737545467921:
      name: Notebook job
      format: MULTI_TASK
      tasks:
        - task_key: as_notebook
          existing_cluster_id: 0704-xxxxxx-yyyyyyy
          notebook_task:
            base_parameters:
              bundle_root: /Users/andrew.nester@databricks.com/.bundle/job_with_module_imports/development/files
            notebook_path: ./entry_notebook.py
            source: WORKSPACE
          run_if: ALL_SUCCESS
      max_concurrent_runs: 1
```
## Tests
Manual (on our last 100 jobs) + added end-to-end test
```
--- PASS: TestAccGenerateFromExistingJobAndDeploy (50.91s)
PASS
coverage: 61.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 51.209s coverage: 61.5% of statements in ./...
```
## Changes
Copying a local file on Windows to a remote directory in DBFS would fail
if the path was specified as a Windows-style path (as opposed to a UNIX-style
path). This PR fixes that.
Note: UNIX-style paths will continue to work because `filepath.Base`
respects both `/` and `\` as file separators on Windows. See `IsPathSeparator` in
https://go.dev/src/os/path_windows.go.
Fixes issue: https://github.com/databricks/cli/issues/1109.
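The separator handling can be illustrated with a small sketch (not the exact fix): derive the base name of the local source path while accepting both `/` and `\`, mirroring what `filepath.Base` does on Windows builds.
```
package main

import (
	"fmt"
	"strings"
)

// baseName returns the final path element, treating both `/` and `\` as separators.
func baseName(path string) string {
	if i := strings.LastIndexAny(path, `/\`); i >= 0 {
		return path[i+1:]
	}
	return path
}

func main() {
	fmt.Println(baseName(`.\Desktop\foo.txt`)) // foo.txt
	fmt.Println(baseName("./Desktop/foo.txt")) // foo.txt
}
```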
## Tests
Integration test and manually
```
C:\Users\shreyas.goenka>Desktop\cli.exe fs cp .\Desktop\foo.txt dbfs:/Users/shreyas.goenka@databricks.com
.\Desktop\foo.txt -> dbfs:/Users/shreyas.goenka@databricks.com/foo.txt
C:\Users\shreyas.goenka>Desktop\cli.exe fs cat dbfs:/Users/shreyas.goenka@databricks.com/foo.txt
hello, world
```
## Changes
This makes the behaviour consistent, with or without `python_wheel_wrapper`,
when a job is run with the `--python-params` flag.
In `python_wheel_wrapper` mode, the dynamic `python_params` are converted
into a specially named dynamic `notebook_param`; the wrapper reads them
with `dbutils` and passes them to `sys.argv`.
Fixes #1000.
## Tests
Added an integration test.
Integration tests pass.
## Changes
Removed the hash from the upload path since it's not useful anyway.
The main reason for that change was to make it work on all-purpose
clusters. But to make that work, the wheel version needs to be increased
anyway, so having only a hash in the path is useless.
Note: using the `--build-number` (build tag) flag does not help with
re-installing libraries on all-purpose clusters. The reason is that
`pip` ignores the build tag when upgrading a library and only looks at
the wheel version.
The build tag is only used for sorting versions; the one with the higher
build tag takes priority when installing, and that only applies when the
library is not already installed.
See `a15dd75d98/src/pip/_internal/index/package_finder.py` (L522-L556) and
https://github.com/pypa/pip/issues/4781.
Thus, the only way to reinstall the library on an all-purpose cluster is to
increase the wheel version manually or to use automatic version
generation, for example:
```
setup(
    version=datetime.datetime.utcnow().strftime("%Y%m%d.%H%M%S"),
    ...
)
```
## Tests
Integration tests passed.
## Changes
A bug in the code that pulls the remote state could cause the local
state to be empty instead of a copy of the remote state. This happened
only if the local state was present and stale when compared to the
remote version.
We correctly checked for the state serial to see if the local state had
to be replaced but didn't seek back on the remote state before writing
it out. Because the staleness check would read the remote state in full,
copying from the same reader would immediately yield an EOF.
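A minimal sketch of the failure mode and the fix, assuming the remote state arrives as an `io.ReadSeeker` (the function below is illustrative, not the actual mutator):
```
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// writeLocalState persists the remote state to the local state file. The seek
// is the essence of the fix: the staleness check has already consumed the
// reader, so without rewinding, the copy writes zero bytes.
func writeLocalState(remote io.ReadSeeker, path string) error {
	if _, err := remote.Seek(0, io.SeekStart); err != nil {
		return err
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, remote)
	return err
}

func main() {
	remote := bytes.NewReader([]byte(`{"serial": 5}`))
	io.ReadAll(remote) // simulate the staleness check reading the remote state in full

	path := filepath.Join(os.TempDir(), "terraform.tfstate")
	if err := writeLocalState(remote, path); err != nil {
		fmt.Println("error:", err)
	}
}
```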
## Tests
* Unit tests for state pull and push mutators that rely on a mocked
filer.
* An integration test that deploys the same bundle from multiple paths,
triggering the staleness logic.
Both failed prior to the fix and now pass.
## Changes
This PR:
1. Renames `FilesPath` -> `FilePath` and `ArtifactsPath` ->
`ArtifactPath` in the bundle and metadata configuration to make them
consistent with the JSON tags.
2. Fixes development / production mode error messages to point to
`file_path` and `artifact_path`.
## Tests
Existing unit tests. This is a straightforward renaming of the fields.
## Changes
This functionality is not exercised (and will not be anytime soon).
Instead, we use a map of first-party aliases for supported templates.
See `1e46b9f88a/cmd/bundle/init.go` (L21).
## Tests
Existing tests and manually, bundle init still works.
## Changes
This PR introduces a metadata struct that stores a subset of the bundle
configuration that we want to expose to other Databricks services that
wish to integrate with bundles.
This metadata is uploaded to a file at
`${bundle.workspace.state_path}/metadata.json` in the WSFS destination
of the bundle deployment.
Documentation for emitted metadata fields:
* `version`: Version for the metadata file schema
* `config.bundle.git.branch`: Name of the git branch the bundle was
deployed from.
* `config.bundle.git.origin_url`: URL of the git remote "origin".
* `config.bundle.git.bundle_root_path`: Relative path of the bundle root
from the root of the git repository. Set to "." if they are the same.
* `config.bundle.git.commit`: SHA-1 hash of the exact commit this bundle
was deployed from. Note: the deployment might not exactly match this
commit if there are changes that had not been committed to git at deploy
time.
* `file_path`: Path in the workspace where we sync bundle files to.
* `resources.jobs.[job-ref].id`: ID of the job.
* `resources.jobs.[job-ref].relative_path`: Relative path of the YAML
config file from the bundle root where this job was defined.
Example metadata object when bundle root and git root are the same:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "master",
        "origin_url": "www.host.com",
        "commit": "7af8e5d3f5dceffff9295d42d21606ccf056dce0",
        "bundle_root_path": "."
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
Example metadata when the git root is one level above the bundle repo:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "dev-branch",
        "origin_url": "www.my-repo.com",
        "commit": "3db46ef750998952b00a2b3e7991e31787e4b98b",
        "bundle_root_path": "pipeline-progress"
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
This unblocks integration with the jobs break-glass UI for bundles.
## Tests
Unit tests and integration tests.
## Changes
Updates to bundle templates can require updated versions of the CLI.
This PR extends the JSON schema representation to allow template authors
to set a minimum CLI version they require for their templates.
This is required to make improvements/additions to the mlops-stacks repo.
## Tests
Tested using unit tests and manually.
For manual testing, I created a custom build of the CLI using goreleaser
and then tested it against a local instance of mlops-stack.
When the mlops-stack schema has:
```
"min_databricks_cli_version": "v5000.1.1",
```
output (error as expected)
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Error: minimum CLI version "v5000.1.1" is greater than current CLI version "v0.207.2-dev+1b992c0". Please upgrade your current Databricks CLI
```
When the mlops-stack schema has:
```
"min_databricks_cli_version": "v0.1.1",
```
output (validation passes)
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Welcome to MLOps Stack. For detailed information on project generation, see the README at https://github.com/databricks/mlops-stack/blob/main/README.md.
Project Name [my-mlops-project]: ^C
```
## Changes
Previously we only supported uploading Python wheels smaller than 10 MB
because we used the Workspace.Import API and its `content` field
(https://docs.databricks.com/api/workspace/workspace/import).
By switching to `WorkspaceFilesClient` we overcome this limit because it
uses the POST body of the API instead.
## Tests
`TestAccUploadArtifactFileToCorrectRemotePath` integration test passes
```
=== RUN TestAccUploadArtifactFileToCorrectRemotePath
artifacts_test.go:28: gcp
2023/10/17 15:24:04 INFO Using Google Credentials sdk=true
helpers.go:356: Creating /Users/.../integration-test-wsfs-ekggbkcfdkid
artifacts.Upload(test.whl): Uploading...
2023/10/17 15:24:06 INFO Using Google Credentials mutator=artifacts.Upload(test) sdk=true
artifacts.Upload(test.whl): Upload succeeded
helpers.go:362: Removing /Users/.../integration-test-wsfs-ekggbkcfdkid
--- PASS: TestAccUploadArtifactFileToCorrectRemotePath (5.66s)
PASS
coverage: 14.9% of statements in ./...
ok github.com/databricks/cli/internal 6.109s coverage: 14.9% of statements in ./...
```
## Changes
This PR:
1. Adds the `--file` flag to the workspace export command. This allows
you to specify an output file to write to.
2. Adds e2e integration tests for the workspace export command.
## Changes
This PR makes a few really important QOL improvements to the `workspace
import` command.
They are:
1. Adds the `--file` flag, which allows a user to specify a file to read
the content from.
2. Wraps the most common error that first-time users of this command will
run into with a helpful hint.
3. Minor changes to the command's Use string, changing `PATH` ->
`TARGET_PATH`.
## Tests
Integration tests. The newly added integration tests check that the
`--file` flag works as expected for both the `SOURCE` and `AUTO` formats.
Skipped the other formats because the API behaviour for them is
straightforward.
## Changes
This PR pays down some tech debt by refactoring the sync diff computation
into interfaces that are more robust.
Specifically:
1. Refactors the single diff computation function into a `SnapshotState`
type that computes the target state based only on the current local
files, making it more robust and preventing state from carrying over
from previous iterations.
2. Adds new validations for the sync state that make sure the invariants
downstream code expects actually hold. This prevents a class of issues
where these invariants break and the synchroniser behaves unexpectedly.
(A sketch of both ideas follows below.)
Note, this does not change the existing schema for the snapshot, only
the way the diff is computed, and it is thus backwards compatible (i.e.
it does not require a schema version bump).
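A hedged sketch of both points (field and function names are illustrative, not the actual sync package API): the target state is derived purely from the local files seen in this iteration, and the name maps are validated as exact inverses of each other.
```
package main

import (
	"fmt"
	"time"
)

type SnapshotState struct {
	LastModifiedTimes  map[string]time.Time // local path -> mtime
	LocalToRemoteNames map[string]string    // local path -> remote name
	RemoteToLocalNames map[string]string    // remote name -> local path
}

// NewSnapshotState derives the target state from the current local files only,
// so no stale entries from previous iterations can leak in.
func NewSnapshotState(localFiles map[string]time.Time) *SnapshotState {
	s := &SnapshotState{
		LastModifiedTimes:  map[string]time.Time{},
		LocalToRemoteNames: map[string]string{},
		RemoteToLocalNames: map[string]string{},
	}
	for path, mtime := range localFiles {
		remote := path // a real implementation would translate notebooks, etc.
		s.LastModifiedTimes[path] = mtime
		s.LocalToRemoteNames[path] = remote
		s.RemoteToLocalNames[remote] = path
	}
	return s
}

// validate checks that the name maps are consistent inverses; downstream diff
// computation assumes this invariant.
func (s *SnapshotState) validate() error {
	for local, remote := range s.LocalToRemoteNames {
		if s.RemoteToLocalNames[remote] != local {
			return fmt.Errorf("inconsistent mapping for %q -> %q", local, remote)
		}
	}
	for remote, local := range s.RemoteToLocalNames {
		if s.LocalToRemoteNames[local] != remote {
			return fmt.Errorf("inconsistent mapping for %q -> %q", remote, local)
		}
	}
	return nil
}

func main() {
	s := NewSnapshotState(map[string]time.Time{"notebooks/a.py": time.Now()})
	fmt.Println(s.validate()) // <nil>
}
```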
## Tests
## Changes
These tests allow us to get information about the execution context
(PYTHONPATH, CWD) for various Python tasks and different cluster setups.
Note: this test won't be executed automatically as part of nightly
builds since it requires the RUN_PYTHON_TASKS_TEST environment variable
to be set.
## Tests
The integration test ran successfully.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Validation rules on tags are different per cloud (they are passed
through to the underlying clusters and as such must comply with
cloud-specific validation rules). This change adds tests to confirm the
current behavior to ensure the normalization we can apply is in line
with how the backend behaves.
## Tests
The new integration tests pass (tested locally).
## Changes
Instead of always using the notebook wrapper for Python wheel tasks,
this makes it an opt-in option.
By default, Python wheel tasks are now deployed as-is to the Databricks
platform.
If the notebook wrapper is required (DBR < 13.1 or other configuration
differences), users can provide the following experimental setting:
```
experimental:
  python_wheel_wrapper: true
```
Fixes #783 and
https://github.com/databricks/databricks-asset-bundles-dais2023/issues/8.
## Tests
Added unit tests.
Integration tests passed for both cases:
```
helpers.go:163: [databricks stdout]: Hello from my func
helpers.go:163: [databricks stdout]: Got arguments:
helpers.go:163: [databricks stdout]: ['my_test_code', 'one', 'two']
...
Bundle remote directory is ***/.bundle/ac05d5e8-ed4b-4e34-b3f2-afa73f62b021
Deleted snapshot file at /var/folders/nt/xjv68qzs45319w4k36dhpylc0000gp/T/TestAccPythonWheelTaskDeployAndRunWithWrapper3733431114/001/.databricks/bundle/default/sync-snapshots/cac1e02f3941a97b.json
Successfully deleted files!
--- PASS: TestAccPythonWheelTaskDeployAndRunWithWrapper (214.18s)
PASS
coverage: 93.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 214.495s coverage: 93.5% of statements in ./...
```
```
helpers.go:163: [databricks stdout]: Hello from my func
helpers.go:163: [databricks stdout]: Got arguments:
helpers.go:163: [databricks stdout]: ['my_test_code', 'one', 'two']
...
Bundle remote directory is ***/.bundle/0ef67aaf-5960-4049-bf1d-dc9e29157421
Deleted snapshot file at /var/folders/nt/xjv68qzs45319w4k36dhpylc0000gp/T/TestAccPythonWheelTaskDeployAndRunWithoutWrapper2340216760/001/.databricks/bundle/default/sync-snapshots/edf0b322cee93b13.json
Successfully deleted files!
--- PASS: TestAccPythonWheelTaskDeployAndRunWithoutWrapper (192.36s)
PASS
coverage: 93.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 195.130s coverage: 93.5% of statements in ./...
```
## Changes
This PR sets "resource" to nil in the terraform representation if no
resources are defined in the bundle configuration. This solves two
problems:
1. Makes bundle deploy work without any resources specified.
2. Previously, if a `resources` block was removed after a deployment,
subsequent deployments would fail with an error. Now the resources are
destroyed as expected.
Also removes `TerraformHasNoResources` which is no longer needed.
## Tests
New e2e tests.
## Changes
If the caller running the tests already has one or more of the
environment variables used by the tests set, they can interfere and make
the tests fail.
## Tests
Ran the tests in `./cmd/root` with Databricks-related environment
variables set.
## Changes
There are a couple places throughout the code base where interaction
with environment variables takes place. Moreover, more than one of these
would try to read a value from more than one environment variable as
fallback (for backwards compatibility). This change consolidates those
accesses.
The majority of diffs in this change are mechanical (i.e. add an
argument or replace a call).
This change:
* Moves common environment variable lookups for bundles to
`bundles/env`.
* Adds a `libs/env` package that wraps `os.LookupEnv` and `os.Getenv`
and allows for overrides to take place in a `context.Context`. By
scoping overrides to a `context.Context` we can avoid `t.Setenv` in
testing and unlock parallel test execution for integration tests. (A
sketch of this idea follows the list.)
* Updates call sites to pass through a `context.Context` where needed.
* For bundles, introduces `DATABRICKS_BUNDLE_ROOT` as new primary
variable instead of `BUNDLE_ROOT`. This was the last environment
variable that did not use the `DATABRICKS_` prefix.
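A sketch of the idea behind such a package (function names are illustrative, not necessarily the exact `libs/env` API): overrides live in a `context.Context` and fall back to the process environment.
```
package env

import (
	"context"
	"os"
)

type envKey string

// Set returns a child context in which key resolves to value.
func Set(ctx context.Context, key, value string) context.Context {
	return context.WithValue(ctx, envKey(key), value)
}

// Lookup checks the context for an override before consulting os.LookupEnv.
func Lookup(ctx context.Context, key string) (string, bool) {
	if v, ok := ctx.Value(envKey(key)).(string); ok {
		return v, true
	}
	return os.LookupEnv(key)
}

// Get is like Lookup but returns the empty string when the variable is unset.
func Get(ctx context.Context, key string) string {
	v, _ := Lookup(ctx, key)
	return v
}
```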
## Tests
Unit tests pass.
## Changes
This helps make sure jobs in the tests are deployed and executed
uniquely and in isolation.
```
Bundle remote directory is /Users/61b77d30-bc10-4214-9650-29cf5db0e941/.bundle/4b630810-5edc-4d8f-85d1-0eb5baf7bb28
Deleted snapshot file at /var/folders/nt/xjv68qzs45319w4k36dhpylc0000gp/T/TestAccPythonWheelTaskDeployAndRun3933198431/001/.databricks/bundle/default/sync-snapshots/dd9db100465e3d91.json
Successfully deleted files!
--- PASS: TestAccPythonWheelTaskDeployAndRun (346.28s)
PASS
coverage: 93.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 346.976s coverage: 93.5% of statements in ./...
```
## Changes
Added an end-to-end test for deploying and running a Python wheel task.
## Tests
The test passed successfully in all environments and takes about 9-10
minutes to run.
```
Deleted snapshot file at /var/folders/nt/xjv68qzs45319w4k36dhpylc0000gp/T/TestAccPythonWheelTaskDeployAndRun1845899209/002/.databricks/bundle/default/sync-snapshots/1f7cc766ffe038d6.json
Successfully deleted files!
2023/09/06 17:50:50 INFO Releasing deployment lock mutator=destroy mutator=seq mutator=seq mutator=deferred mutator=lock:release
--- PASS: TestAccPythonWheelTaskDeployAndRun (508.16s)
PASS
coverage: 77.9% of statements in ./...
ok github.com/databricks/cli/internal/bundle 508.810s coverage: 77.9% of statements in ./...
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
* Update Go SDK to v0.19.0
* Update commands per OpenAPI spec from Go SDK
* Incorporate `client.Do()` signature change to include a (nil) header
map
* Update `workspace.WorkspaceService` mock with permissions methods
* Skip `files` service in codegen; already implemented under the `fs`
command
## Tests
Unit and integration tests pass.