## Changes
This change replaces usage of the `repofiles` package with the `filer`
package to consolidate WSFS code paths.
The `repofiles` package implemented the following behavior: if a file at
`foo/bar.txt` was created and then removed, the directory `foo` was kept
around because we do not perform directory tracking. If a file at `foo`
was subsequently created, this resulted in an `fs.ErrExist` because it is
impossible to overwrite a directory. When this happened, the package
performed a recursive delete of the path and retried the file write.
To make this use case work without resorting to a recursive delete on
conflict, we need to implement directory tracking as part of sync. The
approach in this commit is as follows (see the sketch after the list):
1. Maintain the set of directories needed for the current set of files.
Compare it to the set for the previous set of files; the difference
yields a mkdir for each added directory and an rmdir for each removed one.
2. Creation of new directories should happen prior to writing files.
Otherwise, many file writes may race to create the same parent
directories, resulting in additional API calls. Removal of existing
directories should happen after removing files.
3. Creation of new directories can be deduped across common prefixes:
only the longest path needs to be created recursively, since that also
creates its parents.
4. Removing existing directories must happen sequentially, starting with
the longest prefix.
5. Removal of directories is best effort. It fails only if the directory
is not empty; if this happens, we know something placed a file or
directory there manually, outside of sync.
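A minimal sketch of this bookkeeping in Go, assuming plain slash-separated paths (helper names are illustrative, not the actual sync code):
```
package dirset

import (
	"path"
	"sort"
	"strings"
)

// dirsFor returns every parent directory needed for a set of files.
func dirsFor(files []string) map[string]bool {
	dirs := make(map[string]bool)
	for _, f := range files {
		for d := path.Dir(f); d != "." && d != "/"; d = path.Dir(d) {
			dirs[d] = true
		}
	}
	return dirs
}

// diff returns entries present in a but not in b; applied in both
// directions it yields the mkdir and rmdir sets (step 1).
func diff(a, b map[string]bool) []string {
	var out []string
	for d := range a {
		if !b[d] {
			out = append(out, d)
		}
	}
	return out
}

// mkdirs drops any directory that another entry extends, because
// recursively creating the longest path also creates its parents (step 3).
func mkdirs(dirs []string) []string {
	var out []string
	for _, d := range dirs {
		extended := false
		for _, other := range dirs {
			if other != d && strings.HasPrefix(other, d+"/") {
				extended = true
				break
			}
		}
		if !extended {
			out = append(out, d)
		}
	}
	return out
}

// rmdirs orders directories from deepest to shallowest so children are
// removed before their parents (step 4).
func rmdirs(dirs []string) []string {
	sort.Slice(dirs, func(i, j int) bool {
		return strings.Count(dirs[i], "/") > strings.Count(dirs[j], "/")
	})
	return dirs
}
```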
## Tests
* Existing integration tests pass (modified where they used to assert
that directories weren't cleaned up)
* New integration test to confirm that the inability to remove a
directory doesn't fail the sync run
## Changes
This includes the following changes:
* Move profile loading code to libs/databrickscfg and add tests (a sketch of the loading logic follows this list)
* Update prompt label to reflect workspace/account profiles
* Start prompt in search mode by default
* Custom error if `~/.databrickscfg` doesn't exist
* Custom error if `~/.databrickscfg` doesn't contain profiles
* Use stderr for prompt so that stdout redirection works (e.g. with `jq` or `jless`)
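A rough sketch of the loading logic with the custom errors, assuming the `gopkg.in/ini.v1` parser (the function name and error wording are illustrative, not the actual libs/databrickscfg API):
```
package databrickscfg

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/ini.v1"
)

// LoadProfileNames returns the profile names in ~/.databrickscfg, with
// dedicated errors for a missing file and for a file without profiles.
func LoadProfileNames() ([]string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	path := filepath.Join(home, ".databrickscfg")
	if _, err := os.Stat(path); os.IsNotExist(err) {
		return nil, fmt.Errorf("%s does not exist", path)
	}
	file, err := ini.Load(path)
	if err != nil {
		return nil, err
	}
	var names []string
	for _, section := range file.Sections() {
		// Skip the implicit DEFAULT section the parser always adds.
		if section.Name() == ini.DefaultSection {
			continue
		}
		names = append(names, section.Name())
	}
	if len(names) == 0 {
		return nil, fmt.Errorf("%s does not contain any profiles", path)
	}
	return names, nil
}
```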
## Tests
* New unit tests pass
* Manual tests for both workspace and account commands
* Search-by-default is really nice if you have many profiles
## Changes
This PR:
1. Adds the export-dir command
2. Changes filer.Read to return an error if a user tries to read a
directory
3. Adds returning the backend's internal file structures from `filer.Stat().Sys()` (illustrated below)
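For example, a caller could get at the workspace-specific metadata like this (a sketch; it assumes the WSFS filer stores a `workspace.ObjectInfo` in `Sys()`, and the import path is illustrative):
```
package example

import (
	"context"
	"fmt"

	"github.com/databricks/cli/libs/filer"
	"github.com/databricks/databricks-sdk-go/service/workspace"
)

// printObjectInfo is a hypothetical helper showing the Sys() round trip.
func printObjectInfo(ctx context.Context, f filer.Filer, name string) error {
	info, err := f.Stat(ctx, name)
	if err != nil {
		return err
	}
	// Sys() carries the backend's own struct; type-assert to use it.
	if oi, ok := info.Sys().(workspace.ObjectInfo); ok {
		fmt.Println(oi.ObjectType, oi.Path)
	}
	return nil
}
```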
## Tests
Integration tests and manually
## Changes
This PR adds a trailing newline to JSON rendering using cmdio. This is
useful when we call `cmdio.Render` multiple times.
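For example (a sketch; `first` and `second` are placeholder values), two back-to-back renders now emit newline-separated JSON documents:
```
// Each Render call now terminates its JSON output with a newline, so
// consecutive calls produce a stream that jq can consume.
if err := cmdio.Render(ctx, first); err != nil {
	return err
}
if err := cmdio.Render(ctx, second); err != nil {
	return err
}
```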
## Tests
Manually
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Added skipping of path translation for the notebook path in notebook
tasks and the Python file path in Spark Python tasks if the git source is set.
Resolves: #402
## Tests
There is a unit test and also tested with a sample bundle:
```
resources:
  jobs:
    demo:
      git_source:
        git_branch: master
        git_provider: github
        git_url: https://github.com/test/dummy
      ....
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
This captures the recursive deletion of a directory tree in the filer interface.
Prompted by #433.
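The call shape is roughly as follows (a hedged sketch; it assumes the filer's variadic mode pattern and the mode name this change introduces):
```
// Recursively delete the directory tree rooted at "foo".
err := f.Delete(ctx, "foo", filer.DeleteRecursively)
```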
## Tests
Integration tests pass (ran the filer ones on AWS and Azure).
## Changes
Some commands, for example `workspace get-status`, do not support
prompts, but we were wrongly suggesting an option to customers.
A quick fix for this is to not provide prompts for these known commands.
Note: it uses a method from this PR in the Go SDK:
https://github.com/databricks/databricks-sdk-go/pull/416
## Tests
Running `workspace get-status`
Before
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli workspace get-status
Error: Path () doesn't start with '/'
```
After
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli workspace get-status
Error: accepts 1 arg(s), received 0
```
## Changes
Better error message if prompts cannot be loaded
## Tests
Set up 2 jobs with the same name and ran `cli jobs get`
```
andrew.nester@HFW9Y94129 multiples-tasks % ../../cli/cli jobs get
Error: failed to load names for Jobs drop-down. Please manually specify required arguments. Original error: duplicate .Settings.Name: duplicatejob
```
## Changes
This change implements:
* Channels for line-by-line output from stdout/stderr
* A function to wait for a sync step to complete (using the channels above; see the sketch after this list)
* A consistent `TestAccSync` prefix for all tests
* Temporary paths in WSFS instead of a cloned repo
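A minimal sketch of the channel plumbing, assuming the sync process's output is available as an `io.Reader` (names are illustrative):
```
package example

import (
	"bufio"
	"io"
	"strings"
)

// lineChannel turns a stream into a channel of lines.
func lineChannel(r io.Reader) <-chan string {
	ch := make(chan string)
	go func() {
		defer close(ch)
		scanner := bufio.NewScanner(r)
		for scanner.Scan() {
			ch <- scanner.Text()
		}
	}()
	return ch
}

// waitForMarker blocks until a line containing marker arrives, or the
// stream ends; tests use this to wait for a sync step to complete.
func waitForMarker(lines <-chan string, marker string) bool {
	for line := range lines {
		if strings.Contains(line, marker) {
			return true
		}
	}
	return false
}
```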
## Tests
The same integration tests now pass in ~90 seconds (was ~250s).
## Changes
This enables the use of `io/fs` functions `fs.Glob` and `fs.WalkDir`
with filers.
We can't use `fs.FS` as the standard interface instead of `filer.Filer` because:
1. It was made for reading from filesystems only, not writing
2. It doesn't take a context for the core functions
Therefore a wrapper will do.
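A condensed sketch of the wrapper, assuming the filer's `Read`/`Stat` signatures (names are approximate and directory listing is omitted; the real adapter is more complete):
```
package filer

import (
	"context"
	"io"
	"io/fs"
)

// filerFS adapts a Filer to fs.FS by capturing a context at construction
// time, since fs.FS methods don't take one.
type filerFS struct {
	ctx context.Context
	f   Filer
}

func NewFS(ctx context.Context, f Filer) fs.FS {
	return filerFS{ctx: ctx, f: f}
}

func (fsys filerFS) Open(name string) (fs.File, error) {
	info, err := fsys.f.Stat(fsys.ctx, name)
	if err != nil {
		return nil, err
	}
	reader, err := fsys.f.Read(fsys.ctx, name)
	if err != nil {
		return nil, err
	}
	return &filerFile{ReadCloser: reader, info: info}, nil
}

// filerFile satisfies fs.File: Read/Close come from the embedded
// ReadCloser, Stat from the info captured in Open.
type filerFile struct {
	io.ReadCloser
	info fs.FileInfo
}

func (f *filerFile) Stat() (fs.FileInfo, error) { return f.info, nil }
```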
## Tests
* Added unit tests to cover the adapter through a fake filer.
* Manually ran `fs.WalkDir` against both WSFS and DBFS filers.
## Changes
- Added saving the profile to `~/.databrickscfg` whenever we do `databricks
auth login`.
- We either match an existing profile by account ID / canonical host or
derive a new one from the deployment name.
- Fail on multiple profiles with matching accounts or workspace hosts.
- Overwriting `~/.databrickscfg` keeps the (valid) comments but
reformats the file.
## Tests
- `make test`
- `go run main.go auth login --account-id XXX --host
https://accounts.cloud.databricks.com/`
- `go run main.go auth token --account-id XXX --host
https://accounts.cloud.databricks.com/`
- `go run main.go auth login --host https://XXX.cloud.databricks.com/`
## Changes
Use cmdio in the version command such that it accepts the `--output` flag.
This removes the existing `--detail` flag which previously made the
command print JSON output.
## Tests
New integration test passes.
## Changes
The pattern `errors.Is(err, fs.ErrNotExist)` is a common way to check
for an error type.
Errors can implement `Is(error) bool` with a custom equivalence checker.
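As an illustration, a hypothetical error type that reports equivalence to `fs.ErrNotExist` (the filer errors follow this pattern):
```
package example

import (
	"fmt"
	"io/fs"
)

// NoSuchFileError is a made-up example type.
type NoSuchFileError struct {
	Path string
}

func (e NoSuchFileError) Error() string {
	return fmt.Sprintf("no such file: %s", e.Path)
}

// Is makes errors.Is(err, fs.ErrNotExist) return true for this type.
func (e NoSuchFileError) Is(other error) bool {
	return other == fs.ErrNotExist
}
```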
## Tests
New asserts all pass in the integration test.
## Changes
Adds a DBFS implementation of the `filer.Filer` interface.
## Tests
The integration tests are reused between the workspace filesystem and
DBFS implementations to ensure identical behavior.
## Changes
Do not prompt for List methods
## Tests
Running
```
cli workspace list
```
Before
```
cli workspace list
Error: Path () doesn't start with '/'
```
After
```
cli workspace list
Error: accepts 1 arg(s), received 0
```
## Changes
On Unix systems, the default of `/tmp` always works. No need to
synthesize a path for it.
The custom TMPDIR was causing issues when used from GitHub Actions
runners.
## Tests
Confirmed manually this fixes the issue on GitHub Actions runners.
## Changes
Added support for `bundle.Seq` and simplified the `Mutator.Apply`
interface by removing the list of mutators from its return values.
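The shape of `bundle.Seq` is roughly as follows (a sketch of the pattern, not a quote of the actual code):
```
package bundle

import "context"

// seqMutator applies a list of mutators in order, stopping at the first
// error. With the simplified interface, Apply returns only an error
// instead of a follow-up list of mutators.
type seqMutator struct {
	mutators []Mutator
}

func Seq(ms ...Mutator) Mutator {
	return &seqMutator{mutators: ms}
}

func (s *seqMutator) Name() string {
	return "seq"
}

func (s *seqMutator) Apply(ctx context.Context, b *Bundle) error {
	for _, m := range s.mutators {
		if err := m.Apply(ctx, b); err != nil {
			return err
		}
	}
	return nil
}
```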
## Tests
1. Ran `cli bundle deploy` and interrupted it with Cmd + C mid-execution
so the lock is not released
2. Ran `cli bundle deploy` to make sure that the CLI does not try to
release the lock when it fails to acquire it
```
andrew.nester@HFW9Y94129 multiples-tasks % cli bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!
^C
andrew.nester@HFW9Y94129 multiples-tasks % cli bundle deploy
Error: deploy lock acquired by andrew.nester@databricks.com at 2023-05-24 12:10:23.050343 +0200 CEST. Use --force to override
```
## Changes
Regenerated internal schema structs based on the Terraform provider
schemas. This allows using the `serverless` flag in bundle config.
## Tests
Ran `cli bundle deploy` with a bundle that contains a pipeline with the
`serverless` key set to true
## Changes
Passes through tmp-dir-related env vars to the terraform process. In case
any of them are not set, we assign a temp dir inside the bundle cache dir
as the location terraform should use.
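A sketch of the passthrough, using the variables `os.TempDir` consults across platforms (the helper name and fallback path are illustrative):
```
package example

import (
	"os"
	"path/filepath"
)

// tmpDirEnv forwards tmp-dir env vars to the terraform child process;
// if none are set, it points TMPDIR at a directory inside the bundle
// cache dir.
func tmpDirEnv(cacheDir string) []string {
	var env []string
	found := false
	for _, k := range []string{"TMPDIR", "TMP", "TEMP"} {
		if v, ok := os.LookupEnv(k); ok {
			env = append(env, k+"="+v)
			found = true
		}
	}
	if !found {
		env = append(env, "TMPDIR="+filepath.Join(cacheDir, "tmp"))
	}
	return env
}
```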
## Tests
Manually checked that these env vars do override the location where
`os.CreateTemp` files are created
## Changes
With this PR, all of the commands below print the version and exit:
```
$ databricks -v
Databricks CLI v0.100.1-dev+4d3fa76
$ databricks --version
Databricks CLI v0.100.1-dev+4d3fa76
$ databricks version
Databricks CLI v0.100.1-dev+4d3fa76
```
## Tests
Added integration test for each flag or command.
## Changes
Rename all instances of "bricks" to "databricks".
## Tests
* Confirmed the goreleaser build works, uses the correct new binary
name, and produces the right archives.
* Help output is confirmed to be correct.
* Output of `git grep -w bricks` is minimal with a couple changes
remaining for after the repository rename.
## Changes
Added `DeferredMutator` and the `bundle.Defer` function, which make it
possible to always execute some mutators, either at the end of the
execution chain or after an error occurs in the middle of it.
Usage is as follows:
```
deferredMutator := bundle.Defer([]bundle.Mutator{
	lock.Acquire(),
	transform.DoSomething(),
	//...
}, []bundle.Mutator{
	lock.Release(),
})
```
In this case, `lock.Release()` will always be executed: either when all
the operations above succeed or when any of them fails.
## Tests
Before the change
```
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!
Error: terraform not initialized
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Error: deploy lock acquired by andrew.nester@databricks.com at 2023-05-10 16:41:22.902659 +0200 CEST. Use --force to override
```
After the change
```
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!
Error: terraform not initialized
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!
Error: terraform not initialized
```
## Changes
When a local state file exists, it won't be overwritten by the remote state file.
## Tests
Running `bricks bundle deploy` after a failed state push does not
overwrite the local state file.
Use cases verified:
1. Local state file is newer than remote
2. Local state file is older than remote
3. Local state file does not exist
4. Local state file is corrupted
## Changes
Allows overriding the default value for a variable definition from the
environment block in a bundle config. See bundle.yml for example usage.
## Tests
Unit tests
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>