## Changes
Fs integration tests were not running on our nightlies before because
the nightlies only run tests with the `TestAcc` prefix. A couple of them
were also broken!
This PR fixes the tests and adds the prefix to all fs integration tests.
As a followup we can automate the check for this prefix.
## Tested
Fs tests are green and pass on both Azure and AWS.
## Changes
This change is another step towards a CLI without globals. Also see #595.
The flags for the root command are now encapsulated in struct types.
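A minimal sketch of the pattern, using illustrative flag and field names rather than the CLI's actual ones:
```go
package root

import "github.com/spf13/cobra"

// logFlags groups root-level flags in a struct instead of package-level
// globals. The flag names below are illustrative.
type logFlags struct {
	level  string
	file   string
	format string
}

func initLogFlags(cmd *cobra.Command) *logFlags {
	f := &logFlags{}
	cmd.PersistentFlags().StringVar(&f.level, "log-level", "disabled", "log level")
	cmd.PersistentFlags().StringVar(&f.file, "log-file", "stderr", "log output file")
	cmd.PersistentFlags().StringVar(&f.format, "log-format", "text", "log output format")
	return f
}
```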
## Tests
Unit tests pass.
## Changes
The assertions would fail because `DATABRICKS_TOKEN` overrides a token
set in the profile.
## Tests
Tests now pass if `DATABRICKS_TOKEN` is set.
## Changes
Generated commands relied on global variables for flags and request
payloads. This is difficult to test if a sequence of tests tries to run
the same command with various arguments because the global state causes
test interference. Moreover, it is impossible to run tests in parallel.
This change modifies the approach and turns every command group and
command itself into a function that returns a `*cobra.Command`. All
flags and request payloads are variables scoped to the command's
initialization function. This means it is possible to construct
independent copies of the CLI structure and fixes the test isolation
issue.
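A minimal sketch of the new shape, with illustrative names; the generated code wires in the actual API call where the placeholder comment is:
```go
package clusters

import "github.com/spf13/cobra"

// newListCommand returns a fresh *cobra.Command whose flags and request
// payload live in this function's scope, so independent copies of the
// CLI tree don't share state.
func newListCommand() *cobra.Command {
	var canUseClient string

	cmd := &cobra.Command{
		Use:   "list",
		Short: "List clusters.",
		RunE: func(cmd *cobra.Command, args []string) error {
			// Build the request from the locally scoped variables and
			// call the API here.
			_ = canUseClient
			return nil
		},
	}
	cmd.Flags().StringVar(&canUseClient, "can-use-client", "", "filter by client type")
	return cmd
}
```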
The scope of this change is only the generated commands. The other
commands will be changed accordingly in subsequent changes.
## Tests
Unit and integration tests pass.
## Changes
Add a unit test that raw strings are printed as-is. This method is useful
for printing text that would otherwise be interpreted as a Go text template.
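A small stdlib illustration of why a raw path is needed (this is not the CLI's actual output helper): rendering through `text/template` rewrites `{{ ... }}` actions, while writing the string directly preserves it verbatim.
```go
package main

import (
	"os"
	"text/template"
)

func main() {
	raw := "run with --json '{{.cluster_id}}'"

	// Rendered as a template, the action is substituted away.
	tmpl := template.Must(template.New("out").Parse(raw))
	_ = tmpl.Execute(os.Stdout, map[string]string{"cluster_id": "1234"})
	os.Stdout.WriteString("\n")

	// Printed raw, the string is preserved as-is.
	os.Stdout.WriteString(raw + "\n")
}
```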
## Changes
Added support for building artifacts for bundles.
It is now possible to specify an `artifacts` block in bundle.yml and define a
resource (at the moment a Python wheel) to be built and uploaded during
`bundle deploy`.
The built artifact is automatically attached to the corresponding job task
or pipeline where it is used as a library.
Follow-ups:
1. If an artifact is used in a job or pipeline but not found in the config,
try to infer and build it anyway
2. If no build command is provided for a Python wheel artifact, infer it
## Changes
Before this PR we would load all YAML files matching `*` and `*/*.yml`
as bundle configurations. This was problematic since it would also load
YAML files that were not meant to be part of the bundle.
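A sketch of the intended behavior under an explicit include list (the pattern below is illustrative): only listed globs are expanded into configuration files, instead of defaulting to `*` and `*/*.yml`.
```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Only explicitly listed patterns are expanded into configuration files.
	include := []string{"resources/*.yml"}

	for _, pattern := range include {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			fmt.Println("loading", path)
		}
	}
}
```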
## Tests
Manually; files are now no longer included unless explicitly specified.
## Changes
Due to a bug in the GitHub UI, https://github.com/databricks/cli/pull/589
was merged without passing the go/fmt formatting checks.
This PR fixes the formatting that breaks the PR checks.
## Changes
* Add support for using `databricks.yml` as the config file. If
`databricks.yml` is not found, fall back to `bundle.yml` for
backwards compatibility.
* Add support for the `.yaml` extension.
* Return an error when more than one config file is found (see the sketch below).
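A sketch of the resolution rule described above (not the CLI's actual implementation):
```go
package config

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lookup order: the new default name first, then the legacy name for
// backwards compatibility, each with both extensions.
var fileNames = []string{"databricks.yml", "databricks.yaml", "bundle.yml", "bundle.yaml"}

func findConfigFile(dir string) (string, error) {
	var found []string
	for _, name := range fileNames {
		path := filepath.Join(dir, name)
		if _, err := os.Stat(path); err == nil {
			found = append(found, path)
		}
	}
	switch len(found) {
	case 0:
		return "", fmt.Errorf("no bundle configuration file found in %s", dir)
	case 1:
		return found[0], nil
	default:
		return "", fmt.Errorf("multiple bundle configuration files found: %v", found)
	}
}
```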
## Tests
* Added a unit test
* Manually tested the different cases
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Earlier we removed recursive deletion from sync. This makes it safe
enough for us to not restrict sync to just the namespace of the user.
This PR removes that base path validation.
Note: if the sync destination is under `/Repos`, we still only create
the required missing directories if the path is under the current user's
namespace, i.e. matches `/Repos/@me/`.
## Tests
Manually
Before:
```
shreyas.goenka@THW32HFW6T hello-bundle % cli bundle deploy
Starting upload of bundle files
Error: path must be nested under /Users/shreyas.goenka@databricks.com or /Repos/shreyas.goenka@databricks.com
```
After:
```
shreyas.goenka@THW32HFW6T hello-bundle % cli bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Shared/common-test/hello-bundle/files!
Starting resource deployment
Resource deployment completed!
```
## Changes
Currently, `databricks auth login` is difficult to use. If a user types
this command in, the command fails with
```
Error: init: cannot fetch credentials
```
after prompting for a profile name.
To make this experience smoother, this change ensures that the user is
prompted for the host and, if necessary, the account ID, when they aren't
provided on the command line.
## Tests
Manual tests:
```
$ ./cli auth token
Databricks Host: https://<HOST>.staging.cloud.databricks.com
{
  "access_token": "...",
  "token_type": "Bearer",
  "expiry": "2023-07-11T12:56:59.929671+02:00"
}
$ ./cli auth login
Databricks Host: https://<HOST>.staging.cloud.databricks.com
Databricks Profile Name: <HOST>-test
Profile <HOST>-test was successfully saved
$ ./cli auth login
Databricks Host: https://accounts.cloud.databricks.com
Databricks Account ID: <ACCOUNTID>
Databricks Profile Name: ACCOUNT-<ACCOUNTID>-test
Profile ACCOUNT-<ACCOUNTID>-test was successfully saved
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Uploading a notebook strips its file extension. This PR returns an
error if a notebook is specified where a file is expected. For example:
a Spark Python task in a job, or a `libraries.file.path` DLT library
(where `libraries.notebook.path` should be used instead).
This PR also adds test coverage for the opposite case, when files that
are not notebooks are specified where notebooks are expected.
## Tests
Integration tests and manually
## Changes
Correctly use the `--profile` flag passed to all bundle commands.
Also adds validation: if the host configured in the bundle does not match
the host of the provided profile, an error is thrown (see the sketch below).
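A minimal sketch of that check, with illustrative names rather than the CLI's actual code:
```go
package validate

import (
	"fmt"
	"strings"
)

// checkProfileHost fails early when the bundle configuration pins a host
// and the selected profile resolves to a different one.
func checkProfileHost(bundleHost, profileHost string) error {
	if bundleHost != "" && profileHost != "" &&
		!strings.EqualFold(strings.TrimRight(bundleHost, "/"), strings.TrimRight(profileHost, "/")) {
		return fmt.Errorf("host defined in bundle config (%s) does not match host of the provided profile (%s)", bundleHost, profileHost)
	}
	return nil
}
```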
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Currently, `databricks --profile <TAB>` autocompletes with the shell
default behavior, listing files in the local directory. This is not a
great experience. Especially given that the suggested profile names for
accounts are so long, it can be cumbersome to type them out by hand.
This PR configures autocompletion for `--profile` to inspect the
profiles in `~/.databrickscfg`.
One potential improvement is to filter the response based on whether the
command is known to be account-level or workspace-level.
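A sketch of the completion hook, using `gopkg.in/ini.v1` to read the profile section names; the CLI's actual parsing may differ:
```go
package root

import (
	"os"
	"path/filepath"

	"github.com/spf13/cobra"
	"gopkg.in/ini.v1"
)

// registerProfileCompletion completes --profile from the section names
// of ~/.databrickscfg.
func registerProfileCompletion(cmd *cobra.Command) error {
	return cmd.RegisterFlagCompletionFunc("profile",
		func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			home, err := os.UserHomeDir()
			if err != nil {
				return nil, cobra.ShellCompDirectiveError
			}
			file, err := ini.Load(filepath.Join(home, ".databrickscfg"))
			if err != nil {
				return nil, cobra.ShellCompDirectiveError
			}
			return file.SectionStrings(), cobra.ShellCompDirectiveNoFileComp
		})
}
```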
## Tests
Manual test.
<img width="579" alt="Screenshot_11_07_2023__18_31"
src="https://github.com/databricks/cli/assets/1850319/d7a3acd0-2511-45ac-bd82-95567775c10a">
This implements the "development run" functionality that we desire for DABs in the workspace / IDE.
## bundle.yml changes
In bundle.yml, there should be a "dev" environment that is marked as
`mode: development`:
```
environments:
  dev:
    default: true
    mode: development # future accepted values might include pull_request, production
```
Setting `mode` to `development` indicates that this environment is used
just for running things for development. This results in several changes
to deployed assets:
* All assets will get '[dev]' in their name and will get a 'dev' tag
* All assets will be hidden from the list of assets (future work; e.g.
for jobs we would have a special job_type that hides it from the list)
* All deployed assets will be ephemeral (future work, we need some form
of garbage collection)
* Pipelines will be marked as 'development: true'
* Jobs can run on development compute through the `--compute` parameter
in the CLI
* Jobs get their schedule / triggers paused
* Jobs get concurrent runs (it's really annoying if your runs get
skipped because the last run was still in progress)
Other accepted values for `mode` are `default` (which does nothing) and
`pull-request` (which is reserved for future use).
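A sketch of the kind of transformation applied to jobs under `mode: development`; the types here are illustrative placeholders, not the real bundle configuration structs:
```go
package mutator

// jobSettings is a stand-in for the real job configuration type.
type jobSettings struct {
	Name              string
	Tags              map[string]string
	SchedulePaused    bool
	MaxConcurrentRuns int
}

// applyDevelopmentMode prefixes names, adds a dev tag, pauses triggers,
// and allows concurrent runs, as described above.
func applyDevelopmentMode(jobs map[string]*jobSettings) {
	for _, j := range jobs {
		j.Name = "[dev] " + j.Name
		if j.Tags == nil {
			j.Tags = map[string]string{}
		}
		j.Tags["dev"] = ""
		j.SchedulePaused = true
		if j.MaxConcurrentRuns < 4 {
			j.MaxConcurrentRuns = 4
		}
	}
}
```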
## CLI changes
To run a single job called "shark_sighting" on existing compute, use the
following commands:
```
$ databricks bundle deploy --compute 0617-201942-9yd9g8ix
$ databricks bundle run shark_sighting
```
which would deploy and run a job called "[dev] shark_sightings" on the
compute provided. Note that `--compute` is not accepted in production
environments, so we show an error if `mode: development` is not used.
The `run --deploy` command offers a convenient shorthand for the common
combination of deploying & running:
```
$ export DATABRICKS_COMPUTE=0617-201942-9yd9g8ix
$ bundle run --deploy shark_sightings
```
The `--deploy` addition isn't really essential and I welcome feedback 🤔
I played with the idea of a "debug" or "dev" command but that seemed to
only make the option space even broader for users. The above could work
well with an IDE or workspace that automatically sets the target
compute.
One more thing I added is that `run --no-wait` can now be used to run
something without waiting for it to complete (useful for IDE-like
environments that can display progress themselves).
```
$ bundle run --deploy shark_sightings --no-wait
```
## Tests
Tested manually. `"workspace"` is no longer a required field in the
generated JSON schema
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Propagate the `TF_CLI_CONFIG_FILE` environment variable.
From Terraform documentation:
> The location of the Terraform CLI configuration file can also be
specified using the TF_CLI_CONFIG_FILE [environment
variable](https://developer.hashicorp.com/terraform/cli/config/environment-variables)
This allows using custom builds of terraform-provider-databricks with
config files like:
```tf
provider_installation {
  dev_overrides {
    "databricks/databricks" = "/Users/gleb.kanterov/terraform-provider-databricks"
  }

  direct {}
}
```
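A sketch of the propagation step (the variable list is abbreviated for illustration):
```go
package terraform

import "os"

// inheritedEnvVars are forwarded from the CLI's environment to the
// terraform child process; TF_CLI_CONFIG_FILE is now among them.
var inheritedEnvVars = []string{"HOME", "PATH", "TF_CLI_CONFIG_FILE"}

func environ() []string {
	var env []string
	for _, name := range inheritedEnvVars {
		if value, ok := os.LookupEnv(name); ok {
			env = append(env, name+"="+value)
		}
	}
	return env
}
```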
## Tests
I added unit tests.
## Changes
Fixed error reporting when invalid files are included in the `include` section.
Case 1: when the file to include is invalid, throw an error.
Case 2: when the file is loaded but its schema is wrong, indicate which
file failed to load.
## Tests
With non-existent notexists.yml
```
databricks bundle deploy
Error: notexists.yml defined in 'include' section does not match any files
```
With malformed notexists.yml
```
databricks bundle deploy
Error: failed to load /Users/andrew.nester/dabs/wheel/notexists.yml: error unmarshaling JSON: json: cannot unmarshal string into Go value of type config.Root
```
## Changes
Adds the following steps to the destroy phase:
1. interpolate
2. write
Resolves #518
## Tests
Tested manually due to there not being any examples for tests to use.
## Changes
Two issues with this command:
* The command line arguments for the secret value were ignored
* If the secret value was piped through stdin, it would still prompt
The second issue prevented users from using multi-line strings because
the prompt reads until end-of-line.
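A sketch of the intended precedence, with a placeholder prompt rather than the CLI's actual prompt helper: use the command-line value if given, read stdin in full when it is piped (so multi-line values survive), and only prompt interactively otherwise.
```go
package secrets

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

func readSecretValue(cliValue string) (string, error) {
	// 1. A value passed on the command line wins.
	if cliValue != "" {
		return cliValue, nil
	}

	// 2. If stdin is a pipe or file, read it in full to keep newlines.
	stat, err := os.Stdin.Stat()
	if err != nil {
		return "", err
	}
	if stat.Mode()&os.ModeCharDevice == 0 {
		b, err := io.ReadAll(os.Stdin)
		return string(b), err
	}

	// 3. Interactive terminal: prompt for a single line.
	fmt.Print("Secret value: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	return strings.TrimRight(line, "\n"), err
}
```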
This change adds testing infrastructure for:
* Setting up a workspace focused test (common between many tests)
* Running a snippet of Python through the command execution API
Porting more integration tests to use this infrastructure will be done
in later commits.
## Tests
New integration test passes.
The interactive path cannot be integration tested just yet.
## Changes
When there are required positional parameters in a command which can't
be unmarshalled from JSON, we should require them even when the `--json`
flag is provided.
The reason is that for some commands, for example `databricks
groups patch ID`, these arguments are actually path arguments in the API
and can't be set as part of the provided `--json` body.
Original change which introduced this ignore logic is here:
https://github.com/databricks/cli/pull/405
Fixes https://github.com/databricks/cli/issues/533,
https://github.com/databricks/cli/issues/537
Note: Code generation is based on the change in this PR:
https://github.com/databricks/databricks-sdk-go/pull/536
## Tests
1. Running `cli groups patch 123 --json {...}` works correctly
Backward compatibility tests with previous changes from
https://github.com/databricks/cli/pull/405
1. `cli clusters events --json '{"cluster_id": "1029-xxxx"}'` - works,
returns list of events
2. `cli clusters events 1029-xxxx` - works, returns list of events
3. `cli clusters events` - works, first prompts for Cluster ID and then
returns the list of events
## Changes
Also see #525.
The direct download flag has been removed in newer versions because of
the content type issue.
Instead, we can make the command decode the base64 output when the
output mode is text.
```
$ databricks workspace export /some/path/script.sh
#!/bin/bash
echo "this is a script"
```
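A minimal sketch of the decode step, assuming the export API returned the content above base64-encoded (the payload here is illustrative):
```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Base64 of the script shown above.
	encoded := "IyEvYmluL2Jhc2gKZWNobyAidGhpcyBpcyBhIHNjcmlwdCIK"

	decoded, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(decoded))
}
```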
## Tests
New integration test.
## Changes
Adds the `dbfs:/` prefix to paths output by the `cp` command so they can
be used with the CLI.
## Tests
Manually
Currently there are no integration tests for command output; I'll add
them separately.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>