## Changes
Go text templates allow specifying only one input argument for
invocations of associated templates (i.e. `{{template ...}}`). This PR
introduces the `map` and `pair` functions, which let template authors
work around this limitation by passing multiple arguments as key-value
pairs in a map.
This PR is based on feedback from the MLOps Stacks migration, which
would otherwise require a lot of duplicate code for computed values and
fixtures.
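A minimal sketch of how such helpers can be wired into `text/template` (the function names match the PR description, but the exact signatures in the CLI may differ):
```go
package main

import (
	"os"
	"text/template"
)

// pair wraps a key and a value; "map" assembles pairs into the single
// map argument that {{template}} accepts.
func pair(k string, v any) []any { return []any{k, v} }

func asMap(pairs ...[]any) map[string]any {
	m := make(map[string]any, len(pairs))
	for _, p := range pairs {
		m[p[0].(string)] = p[1]
	}
	return m
}

func main() {
	t := template.Must(template.New("root").Funcs(template.FuncMap{
		"map":  asMap,
		"pair": pair,
	}).Parse(`{{define "greet"}}Hello {{.name}} ({{.role}})!{{end}}` +
		`{{template "greet" (map (pair "name" "Alice") (pair "role" "admin"))}}`))
	_ = t.Execute(os.Stdout, nil) // Hello Alice (admin)!
}
```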
## Tests
Unit test
## Changes
The prompt UI glitches often. We are switching to a custom
implementation of a simple prompter, which is much more stable.
This also allows newlines in prompts, which has been a request from the
MLflow team.
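As a rough sketch (assuming a plain bufio-based reader; the CLI's actual prompter may differ), the core of such a prompter is just:
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// prompt prints a message (which may span multiple lines) to stderr
// and reads a single line of input from stdin.
func prompt(msg string) (string, error) {
	fmt.Fprint(os.Stderr, msg)
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.TrimRight(line, "\r\n"), nil
}

func main() {
	name, err := prompt("What is your project name?\n(default: my_project): ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Using:", name)
}
```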
## Tests
Tested manually
## Changes
Adds a function to validate the JSON schema types added by the author.
The default JSON unmarshaller does not validate that the parsed type
matches the enum defined in `jsonschema.Type`.
Includes some other improvements to provide better error messages.
This PR was prompted by usability difficulties reported by @mingyu89
during mlops stack migration.
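A hedged sketch of the kind of validation described above (the constant and function names are illustrative; the CLI's `jsonschema` package may differ):
```go
package jsonschema

import "fmt"

type Type string

const (
	BooleanType Type = "boolean"
	StringType  Type = "string"
	NumberType  Type = "number"
	IntegerType Type = "integer"
	ObjectType  Type = "object"
	ArrayType   Type = "array"
)

// validateType ensures a type parsed from JSON is one of the values in
// the Type enum; encoding/json alone accepts any string here.
func validateType(t Type) error {
	switch t {
	case BooleanType, StringType, NumberType, IntegerType, ObjectType, ArrayType:
		return nil
	}
	return fmt.Errorf("type %q is not a recognized json schema type", t)
}
```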
## Tests
Unit tests
## Changes
#629 introduced a change to autopopulate the host from .databrickscfg if
the user is logging back into a host they were previously using. This
did not respect the DATABRICKS_CONFIG_FILE environment variable,
causing the
flow to stop working for users with no .databrickscfg file in their home
directory.
This PR refactors all config file loading to go through one interface,
`databrickscfg.GetDatabricksCfg()`, and an auxiliary
`databrickscfg.GetDatabricksCfgPath()` to get the configured file path.
Closes #655.
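A minimal sketch of the consolidated loading path, assuming the helper names from this description (details of the actual implementation may differ):
```go
package databrickscfg

import (
	"os"
	"path/filepath"

	"gopkg.in/ini.v1"
)

// GetDatabricksCfgPath returns the config file location, honoring
// DATABRICKS_CONFIG_FILE before falling back to the home directory.
func GetDatabricksCfgPath() (string, error) {
	if path, ok := os.LookupEnv("DATABRICKS_CONFIG_FILE"); ok {
		return path, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(home, ".databrickscfg"), nil
}

// GetDatabricksCfg loads the config file from the resolved path.
func GetDatabricksCfg() (*ini.File, error) {
	path, err := GetDatabricksCfgPath()
	if err != nil {
		return nil, err
	}
	return ini.Load(path)
}
```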
## Tests
```
$ databricks auth login --profile abc
Error: open /Users/miles/.databrickscfg: no such file or directory
$ ./cli auth login --profile abc
Error: cannot load Databricks config file: open /Users/miles/.databrickscfg: no such file or directory
$ DATABRICKS_CONFIG_FILE=~/.databrickscfg.bak ./cli auth login --profile abc
Databricks Host: https://asdf
```
## Changes
The `.tmpl` extension is only meant as a qualifier for whether the file
content is executed as a template. All file paths in the `template`
directory should be treated as valid Go text templates.
Previously, only paths with the `.tmpl` extension were resolved as
templates; after this change, all file paths are interpreted as
templates.
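For illustration, a sketch of rendering a file path through `text/template` (assuming a `config` value map; the CLI's actual code differs in detail):
```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderPath executes a raw file path as a template and strips the
// .tmpl qualifier, which marks content for execution but is not part
// of the output file name.
func renderPath(raw string, config map[string]any) (string, error) {
	tmpl, err := template.New("path").Parse(raw)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := tmpl.Execute(&sb, config); err != nil {
		return "", err
	}
	return strings.TrimSuffix(sb.String(), ".tmpl"), nil
}

func main() {
	out, _ := renderPath("{{.project_name}}/main.py.tmpl", map[string]any{"project_name": "my_app"})
	fmt.Println(out) // my_app/main.py
}
```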
## Tests
Unit tests. The newly added tests also assert that the file path is
correct, even when the `.tmpl` extension is missing.
## Changes
The functions in `libs/git/git.go` assumed global state (e.g. working
directory) and were no longer used.
This change consolidates the functionality to turn an origin URL into an
HTTPS URL.
Closes #187.
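A hedged sketch of the consolidated conversion (the CLI's version may handle more URL shapes):
```go
package git

import (
	"fmt"
	"net/url"
	"strings"
)

// ToHTTPSURL normalizes an origin URL to HTTPS, e.g.
// "git@github.com:org/repo.git" -> "https://github.com/org/repo".
func ToHTTPSURL(origin string) (string, error) {
	origin = strings.TrimSuffix(origin, ".git")
	if rest, ok := strings.CutPrefix(origin, "git@"); ok {
		host, path, found := strings.Cut(rest, ":")
		if !found {
			return "", fmt.Errorf("malformed SSH origin: %s", origin)
		}
		return fmt.Sprintf("https://%s/%s", host, path), nil
	}
	u, err := url.Parse(origin)
	if err != nil {
		return "", err
	}
	u.Scheme = "https"
	return u.String(), nil
}
```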
## Tests
Expanded existing unit test.
## Changes
This PR:
1. Fixes the computation logic for `ActualBranch`. An error in the
earlier logic caused the validation mutator to be a no-op.
2. Makes the `.git` string a global var. This is useful for configuring
it in tests.
3. Adds e2e test for the validation mutator.
## Tests
Unit test
## Changes
This PR adds two features:
1. The bundle init command
2. Support for prompting for input values
In order to do this, this PR also introduces a new `config` struct,
which handles reading config files, prompting users, and all validation
steps before we materialize the template.
With this PR, users can start authoring custom templates, based on Go
text templates, for their projects and organizations.
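A rough sketch of the shape of such a struct (names are illustrative, not the CLI's actual API):
```go
package template

import (
	"encoding/json"
	"fmt"
	"os"
)

type config struct {
	values map[string]any            // collected input values
	schema map[string]propertySchema // allowed inputs and their types
}

type propertySchema struct {
	Type        string `json:"type"`
	Description string `json:"description"`
}

// assignValuesFromFile merges input values from a JSON config file.
func (c *config) assignValuesFromFile(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, &c.values)
}

// validate checks that every schema property has a value before the
// template is materialized; missing values would be prompted for.
func (c *config) validate() error {
	for name := range c.schema {
		if _, ok := c.values[name]; !ok {
			return fmt.Errorf("no value provided for %q", name)
		}
	}
	return nil
}
```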
## Tests
Unit tests, both existing and new
## Changes
This PR:
1. Adds code for reading template configs and validating them against a
JSON schema.
2. Moves the JSON schema struct in `bundle/schema` to a separate library
package. This struct is now reused for validating template configs.
## Tests
Unit tests
## Changes
In a world before this PR, all files were treated as Go text templates,
making the content in these files quake in fear since it would be
executed (as a template).
This PR makes it so that only files with the `.tmpl` extension are
understood to be templates. This avoids ambiguity in cases where, for
example, a binary file could otherwise be interpreted as a Go text
template.
To do so, we introduce the `copyFile` struct, which copies the source
file from the template without loading it into memory.
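A hedged sketch of such a streaming copy (the actual `copyFile` implementation may differ):
```go
package template

import (
	"io"
	"os"
	"path/filepath"
)

type copyFile struct {
	srcPath string
	dstPath string
}

// persist streams the source file to the destination without loading
// it into memory, so binary content is never executed as a template.
func (c *copyFile) persist() error {
	src, err := os.Open(c.srcPath)
	if err != nil {
		return err
	}
	defer src.Close()
	if err := os.MkdirAll(filepath.Dir(c.dstPath), 0o755); err != nil {
		return err
	}
	dst, err := os.Create(c.dstPath)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}
```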
## Tests
Unit tests
## Changes
This adds a `mode: production` option. This mode doesn't do any
transformations but verifies that an environment is configured correctly
for production:
```
environments:
  prod:
    mode: production
    # paths should not be scoped to a user (unless a service principal is used)
    root_path: /Shared/non_user_path/...
    # run_as and permissions should be set at the resource level (or at the top level when that is implemented)
    run_as:
      user_name: Alice
    permissions:
      - level: CAN_MANAGE
        user_name: Alice
```
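As an illustration, one of the checks production mode implies could look roughly like this (the function name and exact rules are assumptions, not the CLI's actual mutator):
```go
package mutator

import (
	"fmt"
	"strings"
)

// validateProductionPath rejects root paths scoped to an individual
// user unless the deployment runs as a service principal.
func validateProductionPath(rootPath, currentUser string, isServicePrincipal bool) error {
	if isServicePrincipal {
		return nil
	}
	if strings.HasPrefix(rootPath, "/Users/"+currentUser) {
		return fmt.Errorf("root_path %s must not be scoped to a user in production mode", rootPath)
	}
	return nil
}
```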
Additionally, this extends the existing `mode: development` option by:
* prefixing deployed assets with `[dev your.user]` instead of just
`[dev]`
* validating that development deployments _are_ scoped to a user
## Related
https://github.com/databricks/cli/pull/578/files (in draft)
## Tests
Manual testing to validate the experience, error messages, and
functionality with all resource types. Automated unit tests.
---------
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
## Changes
This PR changes the integration test to just check that an error is
returned, rather than asserting that specific text is present in the
error. This is required because the error returned can differ based on
whether git SSH keys have been set up.
## Changes
Adds a unit test that raw strings are printed as-is. This method is
useful for printing text that would otherwise be interpreted as a Go
text template.
## Changes
Due to a bug in the GitHub UI, https://github.com/databricks/cli/pull/589
got merged without passing the `gofmt` formatting checks.
This PR fixes the formatting, which was breaking the PR checks.
## Changes
Earlier, we removed recursive deletion from sync. This makes it safe
enough for us to not restrict sync to just the user's namespace. This
PR removes that base path validation.
Note: if the sync destination is under `/Repos`, we still only create
the required missing directories if the path is under the current
user's namespace, i.e. matches `/Repos/@me/`.
## Tests
Manually
Before:
```
shreyas.goenka@THW32HFW6T hello-bundle % cli bundle deploy
Starting upload of bundle files
Error: path must be nested under /Users/shreyas.goenka@databricks.com or /Repos/shreyas.goenka@databricks.com
```
After:
```
shreyas.goenka@THW32HFW6T hello-bundle % cli bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Shared/common-test/hello-bundle/files!
Starting resource deployment
Resource deployment completed!
```
## Changes
Correctly use the --profile flag passed for all bundle commands.
Also adds validation: if the host configured in the bundle does not
match the host in the provided profile, an error is thrown.
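A minimal sketch of the mismatch check (illustrative names; the actual validation lives in the bundle config loading path):
```go
package config

import (
	"fmt"
	"strings"
)

// validateProfileHost errors out when the bundle's configured host and
// the host from the --profile flag point at different workspaces.
func validateProfileHost(bundleHost, profileHost string) error {
	normalize := func(h string) string {
		return strings.TrimSuffix(strings.ToLower(h), "/")
	}
	if normalize(bundleHost) != normalize(profileHost) {
		return fmt.Errorf("host in bundle config (%s) does not match host in profile (%s)", bundleHost, profileHost)
	}
	return nil
}
```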
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Currently, `databricks --profile <TAB>` autocompletes with the shell
default behavior, listing files in the local directory. This is not a
great experience. Especially given that the suggested profile names for
accounts are so long, it can be cumbersome to type them out by hand.
This PR configures autocompletion for `--profile` to inspect the
profiles of ~/.databrickscfg.
One potential improvement is to filter the response based on whether the
command is known to be account-level or workspace-level.
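A hedged sketch of the wiring with cobra (the `loadProfileNames` helper is hypothetical):
```go
package main

import "github.com/spf13/cobra"

// loadProfileNames is a hypothetical helper that parses
// ~/.databrickscfg and returns its section (profile) names.
func loadProfileNames() ([]string, error) {
	// ... parse the ini file and collect section names ...
	return []string{"DEFAULT", "my-workspace", "my-account"}, nil
}

func registerProfileCompletion(cmd *cobra.Command) error {
	return cmd.RegisterFlagCompletionFunc("profile",
		func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
			profiles, err := loadProfileNames()
			if err != nil {
				return nil, cobra.ShellCompDirectiveError
			}
			// Suppress the shell's default file completion.
			return profiles, cobra.ShellCompDirectiveNoFileComp
		})
}
```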
## Tests
Manual test.
<img width="579" alt="Screenshot_11_07_2023__18_31"
src="https://github.com/databricks/cli/assets/1850319/d7a3acd0-2511-45ac-bd82-95567775c10a">
## Changes
Two issues with this command:
* The command line arguments for the secret value were ignored
* If the secret value was piped through stdin, it would still prompt
The second issue prevented users from using multi-line strings because
the prompt reads until end-of-line.
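A sketch of the value-resolution order this implies (a minimal version; the actual command has more options):
```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// readSecret resolves the secret value: an explicit argument wins,
// then a piped stdin (newlines preserved), and only an interactive
// terminal falls back to a prompt.
func readSecret(args []string) (string, error) {
	if len(args) > 0 {
		return args[0], nil
	}
	fi, err := os.Stdin.Stat()
	if err != nil {
		return "", err
	}
	if fi.Mode()&os.ModeCharDevice == 0 {
		// stdin is a pipe: read everything, multi-line values included.
		b, err := io.ReadAll(os.Stdin)
		if err != nil {
			return "", err
		}
		return strings.TrimRight(string(b), "\n"), nil
	}
	// Interactive terminal: the prompt reads until end-of-line.
	fmt.Fprint(os.Stderr, "Secret value: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.TrimRight(line, "\r\n"), nil
}
```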
This change adds testing infrastructure for:
* Setting up a workspace-focused test (common between many tests)
* Running a snippet of Python through the command execution API
Porting more integration tests to use this infrastructure will be done
in later commits.
## Tests
New integration test passes.
The interactive path cannot be integration tested just yet.
## Changes
Also see #525.
The direct download flag has been removed in newer versions because of
the content type issue.
Instead, we can make the command decode the base64 output when the
output mode is text.
```
$ databricks workspace export /some/path/script.sh
#!/bin/bash
echo "this is a script"
```
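A sketch of the decoding step (the Workspace export API returns content base64-encoded; names here are illustrative):
```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

// printExport decodes the base64 content returned by the export API
// and writes the raw bytes to stdout (used when output mode is text).
func printExport(content string) error {
	data, err := base64.StdEncoding.DecodeString(content)
	if err != nil {
		return err
	}
	_, err = os.Stdout.Write(data)
	return err
}

func main() {
	encoded := base64.StdEncoding.EncodeToString([]byte("#!/bin/bash\necho \"this is a script\"\n"))
	if err := printExport(encoded); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```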
## Tests
New integration test.
## Changes
Use the download method from the SDK in the read method for the WSFS
implementation of the filer interface.
Closes #452.
## Tests
Tested by existing integration tests
## Changes
This PR removes the stat call and instead relies on the errors returned
by the Go SDK to return the appropriate errors.
## Tests
Tested using existing filer integration tests
## Tests
New integration test for the read/write parts of the other filers. The
integration test cannot be shared just yet because the Files API
doesn't yet support creating/listing/removing directories.
## Changes
The ini library omits the default section header and in doing so breaks
compatibility with Python's config parser. It raises:
```
Error: MissingSectionHeaderError: File contains no section headers.
```
This commit makes sure the DEFAULT section header is included.
If the config file doesn't include a DEFAULT section itself, we include
a comment describing its purpose.
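A hedged sketch of the post-processing idea (the actual fix may hook into the ini library differently):
```go
package databrickscfg

import "strings"

// ensureDefaultHeader prepends a [DEFAULT] section header when the
// rendered file starts with bare keys, so Python's configparser does
// not raise MissingSectionHeaderError. A comment describing the
// section's purpose is included since the file didn't define it.
func ensureDefaultHeader(contents string) string {
	trimmed := strings.TrimLeft(contents, "\n")
	if strings.HasPrefix(trimmed, "[") {
		// The file already starts with a section header.
		return contents
	}
	return "; The profile defined in the DEFAULT section is to be used as a fallback when no profile is explicitly specified.\n[DEFAULT]\n" + contents
}
```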
## Tests
New tests pass. Manually confirmed the DEFAULT section header is
included.
---------
Co-authored-by: PaulCornellDB <paul.cornell@databricks.com>
## Changes
Local file reads on Windows require the file handle to be closed after
using it. This commit includes an interface change to return an
`io.ReadCloser` from `Read` to accommodate this.
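The changed signature, roughly (other interface methods elided):
```go
package filer

import (
	"context"
	"io"
)

type Filer interface {
	// Read returns a reader for the file at name. It now returns an
	// io.ReadCloser so the caller can close the underlying handle,
	// which Windows requires for local files.
	Read(ctx context.Context, name string) (io.ReadCloser, error)

	// ... other methods elided ...
}
```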
## Tests
The existing integration tests for the filer interface all pass.
## Changes
This change replaces usage of the `repofiles` package with the `filer`
package to consolidate WSFS code paths.
The `repofiles` package implemented the following behavior: if a file
at `foo/bar.txt` was created and removed, the directory `foo` was kept
around because we do not perform directory tracking. If a file at `foo`
was subsequently created, it resulted in an `fs.ErrExist` because it is
impossible to overwrite a directory. If this happened, it would perform
a recursive delete of the path and retry the file write.
To make this use case work without resorting to a recursive delete on
conflict, we need to implement directory tracking as part of sync. The
approach in this commit is as follows:
1. Maintain the set of directories needed for the current set of files.
Compare it to the previous set of files. This results in a mkdir for
added directories and a rmdir for removed directories (a sketch of
steps 1 and 4 follows this list).
2. Creation of new directories should happen prior to writing files.
Otherwise, many file writes may race to create the same parent
directories, resulting in additional API calls. Removal of existing
directories should happen after removing files.
3. Making new directories can be deduped across common prefixes where
only the longest prefix is created recursively.
4. Removing existing directories must happen sequentially, starting with
the longest prefix.
5. Removal of directories is a best effort. It fails only if the
directory is not empty, and if this happens we know something placed a
file or directory manually, outside of sync.
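A sketch of steps 1 and 4 (directory set derivation and longest-prefix-first removal); names are illustrative:
```go
package filesync

import (
	"path"
	"sort"
)

// dirSet returns every parent directory needed for the given files.
func dirSet(files []string) map[string]bool {
	set := make(map[string]bool)
	for _, f := range files {
		for dir := path.Dir(f); dir != "." && dir != "/"; dir = path.Dir(dir) {
			set[dir] = true
		}
	}
	return set
}

// diffDirs computes directories to create (present only in the new
// set) and to remove (present only in the old set). Removals are
// sorted longest-first so children go before their parents.
func diffDirs(oldFiles, newFiles []string) (mkdir, rmdir []string) {
	oldSet, newSet := dirSet(oldFiles), dirSet(newFiles)
	for d := range newSet {
		if !oldSet[d] {
			mkdir = append(mkdir, d)
		}
	}
	for d := range oldSet {
		if !newSet[d] {
			rmdir = append(rmdir, d)
		}
	}
	sort.Slice(rmdir, func(i, j int) bool { return len(rmdir[i]) > len(rmdir[j]) })
	return mkdir, rmdir
}
```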
## Tests
* Existing integration tests pass (modified where it used to assert
directories weren't cleaned up)
* New integration test to confirm the inability to remove a directory
doesn't fail the sync run
## Changes
This includes the following changes:
* Move profile loading code to libs/databrickscfg and add tests
* Update prompt label to reflect workspace/account profiles
* Start prompt in search mode by default
* Custom error if `~/.databrickscfg` doesn't exist
* Custom error if `~/.databrickscfg` doesn't contain profiles
* Use stderr for prompt so that stdout redirection works (e.g. with `jq` or `jless`)
## Tests
* New unit tests pass
* Manual tests for both workspace and account commands
* Search-by-default is really nice if you have many profiles
## Changes
This PR:
1. Adds the export-dir command
2. Changes filer.Read to return an error if a user tries to read a
directory
3. Returns internal file structures from `filer.Stat().Sys()`.
## Tests
Integration tests and manually
## Changes
This PR adds a trailing newline to JSON rendered via cmdio. This is
useful when we call `cmdio.Render` multiple times.
## Tests
Manually
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
This captures the recursive deletion of a directory tree in the filer interface.
Prompted by #433.
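A sketch of how this can surface in the interface (the mode flag name is an assumption):
```go
package filer

import "context"

type DeleteMode int

const (
	// DeleteRecursively requests removal of the directory tree rooted
	// at the given path, not just an empty directory.
	DeleteRecursively DeleteMode = iota
)

type Filer interface {
	// Delete removes the file or directory at path. Passing
	// DeleteRecursively deletes a directory tree.
	Delete(ctx context.Context, path string, mode ...DeleteMode) error

	// ... other methods elided ...
}
```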
## Tests
Integration tests pass (ran the filer ones on AWS and Azure).