## Changes
Currently, `databricks auth login` is difficult to use. If a user runs the
command without arguments, it fails with
```
Error: init: cannot fetch credentials
```
after prompting for a profile name.
To make this experience smoother, this change prompts the user for the host
and, if necessary, the account ID when they aren't provided on the command
line.
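A minimal sketch of the intended flow, assuming a plain stdin reader (the actual CLI uses its own prompting helpers) and assuming account hosts can be recognized by the `https://accounts.` prefix:
```go
package auth

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// promptIfMissing asks for the host and, when the host points at the accounts
// console, for the account ID. Both prompts are skipped if a value was already
// passed on the command line.
func promptIfMissing(host, accountID string) (string, string, error) {
	in := bufio.NewReader(os.Stdin)
	ask := func(label string) (string, error) {
		fmt.Printf("%s: ", label)
		line, err := in.ReadString('\n')
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(line), nil
	}
	var err error
	if host == "" {
		if host, err = ask("Databricks Host"); err != nil {
			return "", "", err
		}
	}
	if accountID == "" && strings.HasPrefix(host, "https://accounts.") {
		if accountID, err = ask("Databricks Account ID"); err != nil {
			return "", "", err
		}
	}
	return host, accountID, nil
}
```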
## Tests
Manual tests:
```
$ ./cli auth token
Databricks Host: https://<HOST>.staging.cloud.databricks.com
{
"access_token": "...",
"token_type": "Bearer",
"expiry": "2023-07-11T12:56:59.929671+02:00"
}
$ ./cli auth login
Databricks Host: https://<HOST>.staging.cloud.databricks.com
Databricks Profile Name: <HOST>-test
Profile <HOST>-test was successfully saved
$ ./cli auth login
Databricks Host: https://accounts.cloud.databricks.com
Databricks Account ID: <ACCOUNTID>
Databricks Profile Name: ACCOUNT-<ACCOUNTID>-test
Profile ACCOUNT-<ACCOUNTID>-test was successfully saved
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Correctly use the `--profile` flag passed to all bundle commands.
Also adds validation: if the host configured in the bundle does not match the
host of the provided profile, an error is thrown.
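A rough sketch of the kind of check this adds, assuming hosts only need case and trailing-slash normalization (the actual validation lives in the bundle configuration code):
```go
package bundle

import (
	"fmt"
	"strings"
)

// validateProfileHost fails when the profile selected with --profile points at
// a different workspace than the host configured in the bundle.
func validateProfileHost(bundleHost, profileHost string) error {
	normalize := func(s string) string {
		return strings.TrimSuffix(strings.ToLower(s), "/")
	}
	if normalize(bundleHost) != normalize(profileHost) {
		return fmt.Errorf("host %s from the profile does not match host %s configured in the bundle", profileHost, bundleHost)
	}
	return nil
}
```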
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Currently, `databricks --profile <TAB>` autocompletes with the shell
default behavior, listing files in the local directory. This is not a
great experience. Especially given that the suggested profile names for
accounts are so long, it can be cumbersome to type them out by hand.
This PR configures autocompletion for `--profile` to suggest the profiles
defined in `~/.databrickscfg`.
One potential improvement is to filter the response based on whether the
command is known to be account-level or workspace-level.
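A hedged sketch of how this could be wired up with cobra, using `gopkg.in/ini.v1` to parse the config file; the flag name and parsing details here are assumptions, not the exact implementation:
```go
package root

import (
	"os"
	"path/filepath"
	"strings"

	"github.com/spf13/cobra"
	"gopkg.in/ini.v1"
)

// registerProfileCompletion suggests the profile names found in
// ~/.databrickscfg instead of falling back to local file names.
func registerProfileCompletion(cmd *cobra.Command) error {
	return cmd.RegisterFlagCompletionFunc("profile", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
		home, err := os.UserHomeDir()
		if err != nil {
			return nil, cobra.ShellCompDirectiveError
		}
		cfg, err := ini.Load(filepath.Join(home, ".databrickscfg"))
		if err != nil {
			return nil, cobra.ShellCompDirectiveError
		}
		var names []string
		for _, name := range cfg.SectionStrings() {
			if strings.HasPrefix(name, toComplete) {
				names = append(names, name)
			}
		}
		// NoFileComp stops the shell from also completing local file names.
		return names, cobra.ShellCompDirectiveNoFileComp
	})
}
```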
## Tests
Manual test.
<img width="579" alt="Screenshot_11_07_2023__18_31"
src="https://github.com/databricks/cli/assets/1850319/d7a3acd0-2511-45ac-bd82-95567775c10a">
This implements the "development run" functionality that we desire for DABs in the workspace / IDE.
## bundle.yml changes
In bundle.yml, there should be a "dev" environment that is marked as
`mode: debug`:
```
environments:
  dev:
    default: true
    mode: development # future accepted values might include pull_request, production
```
Setting `mode` to `development` indicates that this environment is used
just for running things for development. This results in several changes
to deployed assets:
* All assets will get '[dev]' in their name and will get a 'dev' tag
* All assets will be hidden from the list of assets (future work; e.g.
for jobs we would have a special job_type that hides it from the list)
* All deployed assets will be ephemeral (future work, we need some form
of garbage collection)
* Pipelines will be marked as 'development: true'
* Jobs can run on development compute through the `--compute` parameter
in the CLI
* Jobs get their schedule / triggers paused
* Jobs get concurrent runs (it's really annoying if your runs get
skipped because the last run was still in progress)
Other accepted values for `mode` are `default` (which does nothing) and
`pull-request` (which is reserved for future use).
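Roughly, the development-mode transformations listed above amount to something like the sketch below (field names follow the Jobs API in the Go SDK; the real bundle mutators are more involved):
```go
package mutator

import "github.com/databricks/databricks-sdk-go/service/jobs"

// applyDevelopmentMode is an illustrative sketch of the per-job changes:
// prefix the name, add a tag, pause any schedule, and allow concurrent runs.
func applyDevelopmentMode(job *jobs.JobSettings) {
	job.Name = "[dev] " + job.Name
	if job.Tags == nil {
		job.Tags = map[string]string{}
	}
	job.Tags["dev"] = "true"
	if job.Schedule != nil {
		// Dev deployments should never fire on their own schedule.
		job.Schedule.PauseStatus = jobs.PauseStatusPaused
	}
	// Don't skip runs just because the previous dev run is still in progress.
	job.MaxConcurrentRuns = 4
}
```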
## CLI changes
To run a single job called "shark_sightings" on existing compute, use the
following commands:
```
$ databricks bundle deploy --compute 0617-201942-9yd9g8ix
$ databricks bundle run shark_sightings
```
which would deploy and run a job called "[dev] shark_sightings" on the
compute provided. Note that `--compute` is not accepted in production
environments, so we show an error if `mode: development` is not used.
The `run --deploy` command offers a convenient shorthand for the common
combination of deploying & running:
```
$ export DATABRICKS_COMPUTE=0617-201942-9yd9g8ix
$ bundle run --deploy shark_sightings
```
The `--deploy` addition isn't really essential and I welcome feedback 🤔
I played with the idea of a "debug" or "dev" command but that seemed to
only make the option space even broader for users. The above could work
well with an IDE or workspace that automatically sets the target
compute.
One more thing I added: `run --no-wait` can now be used to run something
without waiting for it to complete (useful for IDE-like environments that can
display progress themselves).
```
$ bundle run --deploy shark_sightings --no-wait
```
## Changes
Two issues with this command:
* The command line arguments for the secret value were ignored
* If the secret value was piped through stdin, it would still prompt
The second issue prevented users from using multi-line strings because
the prompt reads until end-of-line.
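A sketch of the intended stdin handling, assuming `golang.org/x/term` is used to detect whether stdin is a terminal (the prompt helper shown here is a placeholder):
```go
package secrets

import (
	"io"
	"os"

	"golang.org/x/term"
)

// readSecretValue reads the whole of stdin when it is piped (preserving
// embedded newlines), and only falls back to an interactive prompt when
// stdin is attached to a terminal.
func readSecretValue(prompt func(label string) (string, error)) (string, error) {
	if !term.IsTerminal(int(os.Stdin.Fd())) {
		b, err := io.ReadAll(os.Stdin)
		if err != nil {
			return "", err
		}
		return string(b), nil
	}
	return prompt("Secret value")
}
```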
This change adds testing infrastructure for:
* Setting up a workspace focused test (common between many tests)
* Running a snippet of Python through the command execution API
Porting more integration tests to use this infrastructure will be done
in later commits.
## Tests
New integration test passes.
The interactive path cannot be integration tested just yet.
## Changes
When a command has required positional parameters that can't be unmarshalled
from JSON, we should still require them even when the `--json` flag is
provided.
The reason is that for some commands, for example `databricks groups patch
ID`, these arguments are path parameters in the API and can't be set as part
of the `--json` body.
Original change which introduced this ignore logic is here:
https://github.com/databricks/cli/pull/405
Fixes https://github.com/databricks/cli/issues/533,
https://github.com/databricks/cli/issues/537
Note: Code generation is based on the change in this PR:
https://github.com/databricks/databricks-sdk-go/pull/536
## Tests
1. Running `cli groups patch 123 --json {...}` works correctly
Backward compatibility tests with previous changes from
https://github.com/databricks/cli/pull/405
1. `cli clusters events --json '{"cluster_id": "1029-xxxx"}'` - works,
returns list of events
2. `cli clusters events 1029-xxxx` - works, returns list of events
3. `cli clusters events` - works, first prompts for Cluster ID and then
returns the list of events
## Changes
Also see #525.
The direct download flag has been removed in newer versions because of
the content type issue.
Instead, we can make the command decode the base64 output when the
output mode is text.
```
$ databricks workspace export /some/path/script.sh
#!/bin/bash
echo "this is a script"
```
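A minimal sketch of the text-mode behavior, assuming the export API returns the file content base64-encoded (as the Workspace export endpoint does):
```go
package workspace

import (
	"encoding/base64"
	"fmt"
	"io"
)

// writeDecodedContent decodes the base64 payload returned by the export call
// and writes the raw bytes to the given writer (stdout in text output mode).
func writeDecodedContent(w io.Writer, b64 string) error {
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		return fmt.Errorf("decode exported content: %w", err)
	}
	_, err = w.Write(raw)
	return err
}
```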
## Tests
New integration test.
## Changes
Adds the dbfs:/ prefix to paths output by the cp command so they can be
used with the CLI
## Tests
Manually
Currently there are no integration tests for command output; I'll add them
separately.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Added a `--configure-cluster` flag for `auth login`, which allows configuring
a cluster ID and saving it in the Databricks profile.
Note: the build will fail until this one is merged and released
https://github.com/databricks/databricks-sdk-go/pull/524
## Tests
```
andrew.nester@HFW9Y94129 cli % ./cli auth login https://xxxxxxx.databricks.com --configure-cluster
✔ Databricks Profile Name: my-profile█
Search: █
? Choose cluster:
10.1 ML beta (1029-yyyyy-xxxxxx)
10.5 ML standard cluster
12.2 LTS
↓ 13.1 free for all
andrew.nester@HFW9Y94129 cli % cat ~/.databrickscfg
[DEFAULT]
host = https://xxxxx.databricks.com
cluster_id = 1029-xxxxx-yyyyy
auth_type = databricks-cli
```
## Changes
Fixed the `jobs create` command to only accept a JSON payload.
Note: relies on this PR from Go SDK
https://github.com/databricks/databricks-sdk-go/pull/522
## Tests
```
andrew.nester@HFW9Y94129 cli % ./cli jobs create -h
Create a new job.
Create a new job.
Usage:
databricks jobs create [flags]
Flags:
-h, --help help for create
--json JSON either inline JSON string or @path/to/file.json with request body (default JSON (0 bytes))
Global Flags:
-e, --environment string bundle environment to use (if applicable)
--log-file file file to write logs to (default stderr)
--log-format type log output format (text or json) (default text)
--log-level format log level (default disabled)
-o, --output type output type: text or json (default text)
-p, --profile string ~/.databrickscfg profile
--progress-format format format for progress logs (append, inplace, json) (default default)
```
## Tests
New integration test for the read/write parts of the other filers. The
integration test cannot be shared just yet because the Files API doesn't
include support for creating/listing/removing directories yet.
## Changes
`--force` flag did not exist for `bundle destroy`. This PR adds that in.
## Tests
Manually tested. Adding the `--force` flag now hijacks the deploy lock on the
target directory.
## Changes
Some of the commands, such as `databricks alerts create`, require positional
arguments which are not primitive.
Since these arguments are required, we should correctly set ExactArgs for
such commands.
Fixes #367
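A hedged sketch of the guard, not the generated code itself: commands whose required arguments are complex structs cannot take them positionally, so accept no positional arguments and insist on `--json`:
```go
package generated

import (
	"errors"

	"github.com/spf13/cobra"
)

// requireJSONInput makes a command reject positional arguments and fail with
// a clear message when --json is not provided, instead of panicking on args[0].
func requireJSONInput(cmd *cobra.Command) {
	cmd.Args = cobra.ExactArgs(0)
	cmd.PreRunE = func(cmd *cobra.Command, args []string) error {
		if !cmd.Flags().Changed("json") {
			return errors.New("provide command input in JSON format by specifying --json option")
		}
		return nil
	}
}
```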
## Tests
Running `databricks alerts create`
Before
```
andrew.nester@HFW9Y94129 cli % ./cli alerts create
panic: runtime error: index out of range [0] with length 0
goroutine 1 [running]:
github.com/databricks/bricks/cmd/workspace/alerts.glob..func1(0x22a1280?, {0x2321638, 0x0, 0x0?})
github.com/databricks/bricks/cmd/workspace/alerts/alerts.go:57 +0x355
github.com/spf13/cobra.(*Command).execute(0x22a1280, {0x2321638, 0x0, 0x0})
github.com/spf13/cobra@v1.7.0/command.go:940 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0x22a0700)
github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd
github.com/spf13/cobra.(*Command).ExecuteContextC(...)
github.com/spf13/cobra@v1.7.0/command.go:1001
github.com/databricks/bricks/cmd/root.Execute()
github.com/databricks/bricks/cmd/root/root.go:80 +0x6a
main.main()
github.com/databricks/bricks/main.go:18 +0x17
```
After
```
andrew.nester@HFW9Y94129 cli % ./cli alerts create
Error: provide command input in JSON format by specifying --json option
```
Acceptance test
```
=== RUN TestAccAlertsCreateErrWhenNoArguments
alerts_test.go:10: gcp
helpers.go:147: Error running command: provide command input in JSON format by specifying --json option
--- PASS: TestAccAlertsCreateErrWhenNoArguments (1.99s)
PASS
```
## Changes
Disable shell completions for generated commands.
Default completion behavior completes local files which never makes
sense.
Automatic contextual completion of required arguments would be super
powerful but a lot of work to get right. Until then, we could do manual
completion functions in `overrides.go` as needed.
This fixes #374.
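A minimal sketch of the change, using cobra's built-in `NoFileCompletions` helper to stop the shell from suggesting local files for generated commands:
```go
package generated

import "github.com/spf13/cobra"

// disableFileCompletions turns off the default file-name completion for a
// command and all of its subcommands.
func disableFileCompletions(cmd *cobra.Command) {
	cmd.ValidArgsFunction = cobra.NoFileCompletions
	for _, sub := range cmd.Commands() {
		disableFileCompletions(sub)
	}
}
```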
## Tests
Confirmed manually that commands no longer complete local files.
## Changes
With this change, related commands show up next to each other in the help
output.
The ordered list of groups is hard-coded until it can be derived from
the specification.
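A hedged sketch of what the grouping could look like with cobra's command groups; the group IDs, titles, and command names below are illustrative, and the real ordered list is hard-coded until it can come from the specification:
```go
package root

import "github.com/spf13/cobra"

// addCommandGroups registers groups on the root command and assigns each
// subcommand to one, so help output clusters related commands together.
func addCommandGroups(root *cobra.Command) {
	root.AddGroup(
		&cobra.Group{ID: "workspace", Title: "Workspace"},
		&cobra.Group{ID: "compute", Title: "Compute"},
	)
	for _, cmd := range root.Commands() {
		switch cmd.Name() {
		case "clusters", "instance-pools":
			cmd.GroupID = "compute"
		default:
			cmd.GroupID = "workspace"
		}
	}
}
```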
## Tests
Manually confirmed that the help output of the root command and the account
command lists commands by group.
## Changes
This includes the following changes:
* Move profile loading code to libs/databrickscfg and add tests
* Update prompt label to reflect workspace/account profiles
* Start prompt in search mode by default
* Custom error if `~/.databrickscfg` doesn't exist
* Custom error if `~/.databrickscfg` doesn't contain profiles
* Use stderr for the prompt so that stdout redirection works (e.g. with `jq` or `jless`); see the sketch below
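A sketch of the prompt configuration, assuming the `promptui` library; starting in search mode and writing the prompt to stderr keeps stdout clean for piping:
```go
package databrickscfg

import (
	"os"
	"strings"

	"github.com/manifoldco/promptui"
)

// selectProfile shows a searchable list of profile names on stderr and
// returns the one the user picks.
func selectProfile(names []string) (string, error) {
	prompt := promptui.Select{
		Label:             "Workspace profiles defined in ~/.databrickscfg",
		Items:             names,
		StartInSearchMode: true,
		Searcher: func(input string, index int) bool {
			return strings.Contains(strings.ToLower(names[index]), strings.ToLower(input))
		},
		Stdout: os.Stderr,
	}
	_, selected, err := prompt.Run()
	return selected, err
}
```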
## Tests
* New unit tests pass
* Manual tests for both workspace and account commands
* Search-by-default is really nice if you have many profiles
## Changes
This PR:
1. Adds the export-dir command
2. Changes filer.Read to return an error if a user tries to read a
directory
3. Makes filer.Stat().Sys() return the underlying internal file structures
## Tests
Integration tests and manually
## Changes
Some of the commands, for example `workspace get-status`, do not support
prompts, but we were wrongly prompting customers with options anyway.
The quick fix is to not provide prompts for these known commands.
Note: it uses a method from this PR in Go SDK
https://github.com/databricks/databricks-sdk-go/pull/416
## Tests
Running `workspace get-status`
Before
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli workspace get-status
Error: Path () doesn't start with '/'
```
After
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli workspace get-status
Error: accepts 1 arg(s), received 0
```
## Changes
Better error message when prompts cannot be loaded.
## Tests
Set up 2 jobs with the same name and ran `cli jobs get`
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli jobs get
Error: failed to load names for Jobs drop-down. Please manually specify required arguments. Original error: duplicate .Settings.Name: duplicatejob
```
## Changes
Use cmdio in the version command such that it accepts the `--output` flag.
This removes the existing `--detail` flag which previously made the
command print JSON output.
## Tests
New integration test passes.
## Changes
Do not prompt for List methods
## Tests
Running
```
cli workspace list
```
Before
```
cli workspace list
Error: Path () doesn't start with '/'
```
After
```
cli workspace list
Error: accepts 1 arg(s), received 0
```