## Changes
Currently, when the CLI runs a list API call (like listing jobs), it uses
the `List*All` methods from the SDK, which list all resources in the
collection. This is very slow for large collections: if you need to list
all jobs from a workspace that has 10,000+ jobs, you'll be waiting for
at least 100 RPCs to complete before seeing any output.
Instead of the `List*All()` methods, the SDK recently added an iterator
data structure that allows traversing the collection without listing it
completely first. New pages are fetched lazily when the next requested
item belongs to the next page. Using the `List()` methods that return
these iterators, the CLI can start printing parts of the response before
the complete collection has been fetched.
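As a sketch of the consumption pattern (assuming a workspace client `w` and a context `ctx`; the Jobs service is just an example):
```
// Pages are fetched lazily: the next RPC happens only once the iterator
// runs out of items from the current page.
it := w.Jobs.List(ctx, jobs.ListJobsRequest{})
for it.HasNext(ctx) {
	job, err := it.Next(ctx)
	if err != nil {
		return err
	}
	// The CLI can render this job right away instead of waiting for the
	// full collection.
	fmt.Println(job.JobId)
}
```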
This involves a pretty major rewrite of the rendering logic in `cmdio`.
The idea there is to define custom rendering logic based on the type of
the provided resource. There are three renderer interfaces:
1. textRenderer: supports printing something in a textual format (i.e.
not JSON, and not templated).
2. jsonRenderer: supports printing something in a pretty-printed JSON
format.
3. templateRenderer: supports printing something using a text template.
There are also three renderer implementations:
1. readerRenderer: supports printing a reader. This only implements the
textRenderer interface.
2. iteratorRenderer: supports printing a `listing.Iterator` from the Go
SDK. This implements jsonRenderer and templateRenderer, buffering 20
resources at a time before writing them to the output.
3. defaultRenderer: supports printing arbitrary resources (the previous
implementation).
Callers use `cmdio.Render()` to render an individual resource or an
`io.Reader`, and `cmdio.RenderIterator()` to render an iterator. The
separate method is needed to safely match on the type of the iterator,
since Go does not allow runtime type matches on generic types with an
existential type parameter.
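To illustrate why a separate entry point is needed (the signature below is a sketch, not necessarily the exact one in `cmdio`):
```
// Not possible: a type switch cannot match a generic interface whose
// type parameter is unknown at this point.
//
//	switch it := v.(type) {
//	case listing.Iterator[T]: // compile error: T is not defined here
//	}
//
// A dedicated generic function sidesteps this, because T is inferred
// from the argument at the call site.
func RenderIterator[T any](ctx context.Context, it listing.Iterator[T]) error {
	// Wrap the iterator in an iteratorRenderer and render it in chunks.
	// ...
	return nil
}
```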
One other change that needs to happen is to split the templates used for
text representation of list resources into a header template and a row
template. The template is now executed multiple times for List API
calls, but the header should only be printed once. To support this, I
have added `headerTemplate` to `cmdIO`, and I have also changed
`RenderWithTemplate` to include a `headerTemplate` parameter everywhere.
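A rough sketch of how the split plays out for list output (the template contents, writer `out`, iterator `it`, and context `ctx` are illustrative):
```
// The header template runs once; the row template runs once per item,
// so rows can be flushed as pages arrive.
header := template.Must(template.New("header").Parse("ID\tName\n"))
row := template.Must(template.New("row").Parse("{{.JobId}}\t{{.Settings.Name}}\n"))

if err := header.Execute(out, nil); err != nil {
	return err
}
for it.HasNext(ctx) {
	job, err := it.Next(ctx)
	if err != nil {
		return err
	}
	if err := row.Execute(out, job); err != nil {
		return err
	}
}
```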
## Tests
- [x] Unit tests for text rendering logic
- [x] Unit test for reflection-based iterator construction.
---------
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
```
shreyas.goenka@THW32HFW6T cli % databricks fs -h
Commands to do file system operations on DBFS and UC Volumes.
Usage:
databricks fs [command]
Available Commands:
cat Show file content.
cp Copy files and directories.
ls Lists files.
mkdir Make directories.
rm Remove files and directories.
```
This PR adds support for UC Volumes to the fs commands. The fs commands
for UC volumes work the same as they currently do for DBFS. This is
ensured by running the same test matrix across both the DBFS and UC
Volumes versions of the fs commands.
## Tests
Support for UC volumes is tested by running the same tests as we did
originally for DBFS commands. The tests require a `main` catalog to
exist in the workspace, which it does in our test workspace environments
that have the `TEST_METASTORE_ID` environment variable set.
For the Files API filer, we do the same by running a mostly shared set
of tests to ensure the filers for "local", "wsfs", "dbfs" and "files
API" are consistent.
The tests are also made to all run in parallel to reduce the time taken.
To ensure the separation of the tests, each test creates its own UC
schema (for UC volumes tests) or DBFS directories (for DBFS tests).
## Changes
This adds a `default-sql` template!
In this latest revision, I've hidden the new template from the list so
we can merge it, iterate over it, and properly release the template at
the right time.
- [x] WorkspaceFS support for .sql files is in prod
- [x] SQL extension is preconfigured based on extension settings (if
possible)
- [ ] Streaming tables support is either ungated or the template
provides instructions about signup
- _Mitigation for now: this template is hidden from the list of
templates._
- [x] Support non-UC workspaces
## Tests
- [x] Unit tests
- [x] Manual testing
- [x] More manual testing
- [x] Reviewer testing
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: PaulCornellDB <paul.cornell@databricks.com>
## Changes
This change enables the use of bundle variables for boolean, integer,
and floating point fields.
## Tests
* Unit tests.
* I ran a manual test to confirm parameterizing the number of workers in
a cluster definition works.
## Changes
This adds a new dbt-sql template. This work requires the new WorkspaceFS
support for dbt tasks.
In this latest revision, I've hidden the new template from the list so
we can merge it, iterate over it, and properly release the template at
the right time.
Blockers:
- [x] WorkspaceFS support for dbt projects is in prod
- [x] Move dbt files into a subdirectory
- [ ] Wait until the next (>1.7.4) release of the dbt plugin which will
have major improvements!
- _Rather than wait, this template is hidden from the list of
templates._
- [x] SQL extension is preconfigured based on extension settings (if
possible)
- MV / streaming tables:
- [x] Add to template
- [x] Fix https://github.com/databricks/dbt-databricks/issues/535 (to be
released in 1.7.4)
- [x] Merge https://github.com/databricks/dbt-databricks/pull/338 (to be
released in 1.7.4)
- [ ] Fix "too many 503 errors" issue
(https://github.com/databricks/dbt-databricks/issues/570, internal
tracker: ES-1009215, ES-1014138)
- [x] Support ANSI mode in the template
- [ ] Streaming tables support is either ungated or the template
provides instructions about signup
- _Mitigation for now: this template is hidden from the list of
templates._
- [x] Support non-workspace-admin deployment
- [x] Make sure `data_security_mode: SINGLE_USER` works on non-UC
workspaces (it's required to be explicitly specified on UC workspaces
with single-node clusters)
- [x] Support non-UC workspaces
## Tests
- [x] Unit tests
- [x] Manual testing
- [x] More manual testing
- [ ] Reviewer manual testing
- _I'd like to do a small bug bash post-merging._
## Changes
This builds on #1098 and uses the `dyn.Value` representation of the
bundle configuration to generate the Terraform JSON definition of
resources in the bundle.
The existing code (in `BundleToTerraform`) was not great and in an
effort to slightly improve this, I added a package `tfdyn` that includes
dedicated files for each resource type. Every resource type has its own
conversion type that takes the `dyn.Value` of the bundle-side resource
and converts it into Terraform resources (e.g. a job and optionally its
permissions).
Because we now use a `dyn.Value` as input, we can represent and emit
zero-values that have so far been omitted. For example, setting
`num_workers: 0` in your bundle configuration now propagates all the way
to the Terraform JSON definition.
## Tests
* Unit tests for every converter. I reused the test inputs from
`convert_test.go`.
* Equivalence tests in every existing test case check that the
resulting JSON is identical.
* I manually compared the TF JSON file generated by the CLI from the
main branch and from this PR on all of our bundles and bundle examples
(internal and external) and found the output doesn't change (with the
exception of the odd zero-value being included by the version in this
PR).
## Changes
This is a fundamental change to how we load and process bundle
configuration. We now depend on the configuration being represented as a
`dyn.Value`. This representation is functionally equivalent to Go's
`any` (it can hold a value of any type) and allows us to capture metadata associated with
a value, such as where it was defined (e.g. file, line, and column). It
also allows us to represent Go's zero values properly (e.g. empty
string, integer equal to 0, or boolean false).
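Conceptually, a dynamic value pairs a payload with its kind and source location; the following is only an illustration, not the actual definition of `dyn.Value`:
```
// Illustrative shape of a dynamic value.
type Location struct {
	File   string
	Line   int
	Column int
}

type Value struct {
	v   any      // nil, bool, int64, float64, string, []Value, or map[string]Value
	loc Location // where the value was defined in the configuration
}
```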
Using this representation allows us to let the configuration model
deviate from the typed structure we have been relying on so far
(`config.Root`). We need to deviate from these types when using
variables for fields that are not a string themselves. For example,
using `${var.num_workers}` for an integer `workers` field was impossible
until now (though not implemented in this change).
The loader for a `dyn.Value` includes functionality to capture any and
all type mismatches between the user-defined configuration and the
expected types. These mismatches can be surfaced as validation errors in
future PRs.
Given that many mutators expect the typed struct to be the source of
truth, this change converts between the dynamic representation and the
typed representation on mutator entry and exit. Existing mutators can
continue to modify the typed representation and these modifications are
reflected in the dynamic representation (see `MarkMutatorEntry` and
`MarkMutatorExit` in `bundle/config/root.go`).
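A sketch of what this looks like at the mutator boundary, using the conversion helpers in `libs/dyn/convert` (exact signatures may differ; `dynValue` and `mutateTyped` are stand-ins for the dynamic value and an arbitrary mutator):
```
// On entry: materialize the typed struct from the dynamic representation.
var root config.Root
if err := convert.ToTyped(&root, dynValue); err != nil {
	return err
}

// The mutator keeps modifying the typed representation as before.
mutateTyped(&root)

// On exit: fold the typed changes back into the dynamic representation,
// reusing locations from the original value where possible.
newValue, err := convert.FromTyped(&root, dynValue)
if err != nil {
	return err
}
```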
Required changes included in this change:
* The existing interpolation package is removed in favor of
`libs/dyn/dynvar`.
* Functionality to merge job clusters, job tasks, and pipeline clusters
are now all broken out into their own mutators.
To be implemented later:
* Allow variable references for non-string types.
* Surface diagnostics about the configuration provided by the user in
the validation output.
* Some mutators use a resource's configuration file path to resolve
related relative paths. These depend on `bundle/config/paths.Path` being
set and populated through `ConfigureConfigFilePath`. Instead, they
should interact with the dynamically typed configuration directly. Doing
this also unlocks being able to differentiate different base paths used
within a job (e.g. a task override with a relative path defined in a
directory other than the base job).
## Tests
* Existing unit tests pass (some have been modified to accommodate this change)
* Integration tests pass
## Changes
When resolving a value returned by the lookup function, the code would
call into `resolveRef` with the key that `resolveKey` was called with.
In doing so, it would cache the _new_ ref under that key.
We fix this by caching ref resolution only at the top level and relying
on lookup caching to avoid duplicate work.
This came up while testing #1098.
## Tests
Unit test.
## Changes
This is a follow-up to #1211 prompted by the addition of a recursive
type in the Go SDK v0.31.0 (`jobs.ForEachTask`).
When populating missing fields with their zero values we must not
inadvertently recurse into a recursive type.
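A minimal sketch of the guard (not the actual implementation): keep track of the struct types currently being visited and stop before descending into one that is already on the stack.
```
// fillZeroValues walks a type to fill in zero values for missing fields
// (elided here); only the recursion guard is shown.
func fillZeroValues(typ reflect.Type, visiting map[reflect.Type]bool) {
	for typ.Kind() == reflect.Pointer || typ.Kind() == reflect.Slice {
		typ = typ.Elem()
	}
	if typ.Kind() != reflect.Struct {
		return
	}
	if visiting[typ] {
		// Recursive type (e.g. jobs.ForEachTask); do not descend again.
		return
	}
	visiting[typ] = true
	defer delete(visiting, typ)
	for i := 0; i < typ.NumField(); i++ {
		fillZeroValues(typ.Field(i).Type, visiting)
	}
}
```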
## Tests
New unit test fails with a stack overflow if the check is disabled.
## Changes
This feature supports variable lookups in a `dyn.Value` that are present
in the type but haven't been initialized with a value.
For example: `${bundle.git.origin_url}` is present in the `dyn.Value`
only if it was assigned a value. If it wasn't assigned a value it should
resolve to the empty string. This normalization option, when set,
ensures that all fields that are represented in the specified type are
present in the return value.
This change is in support of #1098.
## Tests
Added unit test.
## Changes
These fields (keys and values) need to be double-quoted for the YAML
loader to read, parse, and unmarshal them into the Go struct correctly,
because they are of type `map[string]string`.
## Tests
Added regression unit and E2E tests
## Changes
Before this change, any error in a subtree would cause the entire
subtree to be dropped from the output.
This is not ideal when debugging, so instead we drop only the values
that cannot be normalized. Note that this doesn't change behavior if the
caller is properly checking the returned diagnostics for errors.
Note: this includes a change to use `dyn.InvalidValue` as opposed to
`dyn.NilValue` when returning errors.
## Tests
Added unit tests for the case where nested struct, map, or slice
elements contain an error.
## Changes
`executor.Exec` now uses `cmd.CombinedOutput`. The previous
implementation hung on my Windows VM during `bundle deploy` at the
`ReadAll(MultiReader(stdout, stderr))` line.
The problem is that `MultiReader` reads sequentially, and `stdout` is
first in line. Even a simple `io.ReadAll(stdout)` hangs, as the command
we spawn (the Python wheel build) appears to wait for its error stream
to be drained before closing stdout on its side. Reading `stderr` (or
stdout) in a separate goroutine fixes the deadlock, but
`cmd.CombinedOutput` is a simpler solution.
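The difference in a nutshell (an illustrative sketch, not the CLI's actual code; the command is a stand-in for the wheel build):
```
func buildWheel() ([]byte, error) {
	// Deadlock-prone: MultiReader drains stdout completely before it
	// touches stderr, so a child that blocks writing to a full stderr
	// pipe never closes stdout and ReadAll never returns.
	//
	//	cmd := exec.Command("python", "setup.py", "bdist_wheel")
	//	stdout, _ := cmd.StdoutPipe()
	//	stderr, _ := cmd.StderrPipe()
	//	_ = cmd.Start()
	//	out, _ := io.ReadAll(io.MultiReader(stdout, stderr)) // may hang
	//	return out, cmd.Wait()

	// Deadlock-free: both streams are drained into one buffer while the
	// process runs, so neither side can block the other.
	return exec.Command("python", "setup.py", "bdist_wheel").CombinedOutput()
}
```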
Also noticed that `Exec` was not removing `scriptFile` after itself;
fixed that too.
## Tests
Unit tests and manually
## Changes
Not doing this means that the output struct is not a true representation
of the `dyn.Value` and unrepresentable state (e.g. unexported fields)
can be carried over across `convert.ToTyped` calls.
## Tests
Unit tests.
## Changes
This was an issue in cases where the typed structure contains a non-nil
pointer to an empty struct. After conversion to a `dyn.Value` and back
to the typed structure, the pointer became nil.
## Tests
Unit tests.
## Changes
References to keys that are themselves variable references were
short-circuited in the previous approach. This meant that certain fields
were resolved even if the lookup function would have instructed to skip
resolution.
To fix this we separate the memoization of resolved variable references
from the memoization of lookups. Now, every variable reference is passed
through the lookup function.
## Tests
Before this change, the new test failed with:
```
=== RUN TestResolveWithSkipEverything
[...]/libs/dyn/dynvar/resolve_test.go:208:
Error Trace: [...]/libs/dyn/dynvar/resolve_test.go:208
Error: Not equal:
expected: "${d} ${c} ${c} ${d}"
actual : "${b} ${a} ${a} ${b}"
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-${d} ${c} ${c} ${d}
+${b} ${a} ${a} ${b}
Test: TestResolveWithSkipEverything
```
## Changes
Group bundle run flags by job and pipeline types
## Tests
```
Run a resource (e.g. a job or a pipeline)
Usage:
databricks bundle run [flags] KEY
Job Flags:
--dbt-commands strings A list of commands to execute for jobs with DBT tasks.
--jar-params strings A list of parameters for jobs with Spark JAR tasks.
--notebook-params stringToString A map from keys to values for jobs with notebook tasks. (default [])
--params stringToString comma separated k=v pairs for job parameters (default [])
--pipeline-params stringToString A map from keys to values for jobs with pipeline tasks. (default [])
--python-named-params stringToString A map from keys to values for jobs with Python wheel tasks. (default [])
--python-params strings A list of parameters for jobs with Python tasks.
--spark-submit-params strings A list of parameters for jobs with Spark submit tasks.
--sql-params stringToString A map from keys to values for jobs with SQL tasks. (default [])
Pipeline Flags:
--full-refresh strings List of tables to reset and recompute.
--full-refresh-all Perform a full graph reset and recompute.
--refresh strings List of tables to update.
--refresh-all Perform a full graph update.
Flags:
-h, --help help for run
--no-wait Don't wait for the run to complete.
Global Flags:
--debug enable debug logging
-o, --output type output type: text or json (default text)
-p, --profile string ~/.databrickscfg profile
-t, --target string bundle target to use (if applicable)
--var strings set values for variables defined in bundle config. Example: --var="foo=bar"
```
## Changes
This function could panic when either side of the comparison is a nil or
empty slice. This logic is triggered when comparing the input value to
the output value when calling `dyn.Map`.
## Tests
Unit tests.
## Changes
Adds the `short_name` helper function. `short_name` is useful when templates
do not want to print the full userName (typically email or service
principal application-id) of the current user.
## Tests
Integration test. Also adds integration tests for other helper functions
that interact with the Databricks API.
## Changes
Allow specifying an executable in the artifact section:
```
artifacts:
test:
type: whl
executable: bash
...
```
We also skip a bash executable found on Windows if it comes from WSL,
because it would not be executed correctly; see the linked issue.
Fixes #1159
## Changes
In the dynamic configuration, the nil value (`dyn.NilValue`) denotes a
value that should not be serialized, i.e. a value being nil is the same
as it not existing in the first place.
This is not true for zero values in maps and slices. This PR fixes the
conversion from typed values to `dyn.Value` to treat zero values in maps
and slices as zero and not nil.
## Tests
Unit tests
## Changes
This PR introduces `anyOf` to `skip_prompt_if`. This allows you to make
OR conditionals for skipping prompts during template initialization.
## Tests
Added unit test and confirmed existing ones still work. Also tested
manually.
---------
Co-authored-by: Shreyas Goenka <shreyas.goenka@databricks.com>
## Changes
This is the `dyn` counterpart to the `bundle/config/interpolation`
package.
It relies on the paths in `${foo.bar}` being valid `dyn.Path` instances.
It leverages `dyn.Walk` to get a complete picture of all variable
references and uses `dyn.Get` to retrieve values pointed to by variable
references.
Depends on #1142.
## Tests
Unit test coverage. I tried to mirror the tests from
`bundle/config/interpolation` and added new ones where applicable (for
example to test type retention of referenced values).
## Changes
This change adds the following functions:
* `dyn.Get(value, "foo.bar") -> (dyn.Value, error)`
* `dyn.Set(value, "foo.bar", newValue) -> (dyn.Value, error)`
* `dyn.Map(value, "foo.bar", func) -> (dyn.Value, error)`
And equivalent functions that take a previously constructed `dyn.Path`:
* `dyn.GetByPath(value, dyn.Path) -> (dyn.Value, error)`
* `dyn.SetByPath(value, dyn.Path, newValue) -> (dyn.Value, error)`
* `dyn.MapByPath(value, dyn.Path, func) -> (dyn.Value, error)`
Changes made by the "set" and "map" functions are never reflected in the
input argument; they return new `dyn.Value` instances for all nodes in
the path leading up to the changed value.
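A hedged usage sketch of the path-based getters and setters (the `dyn.V` constructor calls are illustrative; details may differ):
```
// Build a small dynamic value: {"foo": {"bar": "hello"}}.
v := dyn.V(map[string]dyn.Value{
	"foo": dyn.V(map[string]dyn.Value{
		"bar": dyn.V("hello"),
	}),
})

// Read a nested value by its string path.
bar, err := dyn.Get(v, "foo.bar") // bar holds "hello"

// Set returns a new value; v itself is left untouched.
v2, err := dyn.Set(v, "foo.bar", dyn.V("world"))
```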
## Tests
New unit tests cover all critical paths.
## Changes
Now it's possible to generate bundle configuration for an existing job.
For now, only jobs with notebook tasks are supported. The command
downloads the notebooks referenced in the job tasks and generates bundle
YAML config for the job, which can be included in a larger bundle.
## Tests
Ran the command manually.
Example of generated config:
```
resources:
jobs:
job_128737545467921:
name: Notebook job
format: MULTI_TASK
tasks:
- task_key: as_notebook
existing_cluster_id: 0704-xxxxxx-yyyyyyy
notebook_task:
base_parameters:
bundle_root: /Users/andrew.nester@databricks.com/.bundle/job_with_module_imports/development/files
notebook_path: ./entry_notebook.py
source: WORKSPACE
run_if: ALL_SUCCESS
max_concurrent_runs: 1
```
## Tests
Manual (on our last 100 jobs) + added end-to-end test
```
--- PASS: TestAccGenerateFromExistingJobAndDeploy (50.91s)
PASS
coverage: 61.5% of statements in ./...
ok github.com/databricks/cli/internal/bundle 51.209s coverage: 61.5% of
statements in ./...
```
## Changes
The nil value is a real valid value that we need to represent. To
accommodate this we introduced `dyn.KindInvalid` as the zero-value for
`dyn.Kind` (see #904), but did not yet update the comments on
`dyn.NilValue` or add tests for `kind.go`.
This also moves `KindNil` to be last in the definition order (it is the
kind callers are least likely to care about).
## Tests
Tests pass.
## Changes
The file `value.go` had a couple of `AsZZZ` and `MustZZZ` functions.
This change backfills missing versions and moves all of them to a
separate file.
## Tests
Tests pass; full coverage.
## Changes
This PR changes the default and `mode: production` recommendation to
target `/Users` for deployment. Previously, we used `/Shared`, but
because of a lack of POSIX-like permissions in WorkspaceFS this meant
that files inside would be readable and writable by other users in the
workspace.
Detailed change:
* `default-python` no longer uses a path that starts with `/Shared`
* `mode: production` no longer requires a path that starts with
`/Shared`
## Related PRs
Docs: https://github.com/databricks/docs/pull/14585
Examples: https://github.com/databricks/bundle-examples/pull/17
## Tests
* Manual tests
* Template unit tests (with an extra check to avoid /Shared)
## Changes
This PR adds retry logic to user input prompts, prompting users again if
the value does not match the requirements specified in the bundle
template schema.
## Tests
Manually. Here's an example UX. The first prompt expects an integer and
the second one a string made up only of the letters "defg":
```
shreyas.goenka@THW32HFW6T cli % cli bundle init ~/mlops-stack
Please enter an integer [123]: abc
Validation failed: "abc" is not a integer
Please enter an integer [123]: 123
Please enter a string [dddd]: apple
Validation failed: invalid value for input_root_dir: "apple". Only characters the 'd', 'e', 'f', 'g' are allowed
```
## Changes
The name "dynamic value", or "dyn" for short, is more descriptive than
the opaque "config". Also, it conveniently does not alias with other
packages in the repository, or (popular ones) elsewhere.
(discussed with @andrewnester)
## Tests
n/a
## Changes
This change adds:
* A `config.Walk` function to walk a configuration tree
* A `config.Path` type to represent a value's path inside a tree
* Functions to create a `config.Path` from a string, or convert one to a
string
## Tests
Additional unit tests with full coverage.
## Changes
Instead of handling command chaining ourselves, we execute the passed
commands as-is by storing them in a temp file and passing it to the
correct interpreter (bash or cmd) based on the OS.
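A simplified sketch of the approach (function name, the `.cmd` extension, and the `/C` flag handling are illustrative):
```
// Write the passed command string to a temporary script and hand it to
// the platform's interpreter instead of chaining commands ourselves.
func execViaScript(command string) ([]byte, error) {
	ext, interpreter, args := ".sh", "bash", []string{}
	if runtime.GOOS == "windows" {
		ext, interpreter, args = ".cmd", "cmd.exe", []string{"/C"}
	}

	f, err := os.CreateTemp("", "exec-*"+ext)
	if err != nil {
		return nil, err
	}
	defer os.Remove(f.Name()) // clean up the script afterwards

	if _, err := f.WriteString(command); err != nil {
		return nil, err
	}
	if err := f.Close(); err != nil {
		return nil, err
	}

	return exec.Command(interpreter, append(args, f.Name())...).CombinedOutput()
}
```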
Fixes #1065
## Tests
Added unit tests
## Changes
Fixes the nightly test `TestAccBundleInitErrorOnUnknownFields`.
The test runs with an interactive shell by default, so it fails while
waiting for a prompt. This was introduced in #1069.
## Tests
Nightly test succeeds.
## Changes
If a user configures a workspace host in a bundle and wants to use the
"azure-cli" authentication type, we would still run profile resolution.
If the `.databrickscfg` file has a matching profile, we would still load
it, even though it should only be a fallback.
## Tests
* Unit test.
* Manually confirmed that setting `DATABRICKS_AUTH_TYPE=azure-cli` now
works as expected.
## Changes
- Tweak strings, documentation in template
- Extend requirements-dev.txt with setuptools/wheel for building whl
files
- Clarify what the "_job.yml" file is for, for users who are only
interested in DLT pipelines (answering a question that came up recently)
## Tests
Existing tests exercise this template
## Changes
It wasn't working because it deferred to the regular `slog.TextHandler`
for the `WithAttrs` and `WithGroup` functions. Neither of these
functions mutates the handler; they return a new one. When the top-level
logger called one of these, log records in that context used the
standard handler instead of ours.
To implement tracking of attributes and groups, I followed the guide at
https://github.com/golang/example/blob/master/slog-handler-guide/README.md
for writing custom handlers.
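A condensed sketch of the pitfall and the fix (the handler and its fields are made up for illustration; see the guide for the complete pattern):
```
type prettyHandler struct {
	out   io.Writer
	attrs []slog.Attr
	group string
}

func (h *prettyHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return true
}

func (h *prettyHandler) Handle(ctx context.Context, r slog.Record) error {
	// Custom formatting that uses h.attrs and h.group goes here.
	_, err := fmt.Fprintf(h.out, "%s %s\n", r.Level, r.Message)
	return err
}

// Wrong (what the old code effectively did): deferring to a wrapped
// slog.TextHandler here hands back *that* handler, so subsequent
// records bypass our Handle entirely.
//
// Right: return a copy of our own handler with the new state recorded.
func (h *prettyHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	clone := *h
	clone.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...)
	return &clone
}

func (h *prettyHandler) WithGroup(name string) slog.Handler {
	clone := *h
	clone.group = name // a full implementation tracks nested groups
	return &clone
}
```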
## Tests
The new tests demonstrate formatting through `t.Log` and look good.
## Changes
This PR introduces the `skip_prompt_if` extension to the jsonschema
library. If the inputs provided by the user match the JSON schema then
the prompt for that property is skipped.
Right now only constant checks are supported, but if more complicated
conditionals are required in the future, this can be extended to support
`allOf`, `oneOf`, `anyOf`, etc., allowing template authors to specify
conditionals of arbitrary complexity.
## Tests
Unit tests and manually.
## Changes
This PR adds versioning for bundle templates. Right now there is only
logic for the maximum template version supported. If we make a breaking
template change in the future, we can also include a minimum template
version supported by the CLI.
## Tests
Unit tests.
## Changes
Only clusters with their source attribute equal to `UI` or `API` should
be presented in the dropdown.
## Tests
Unit test and manual confirmation.