## Changes
- Tweak strings and documentation in the template
- Extend requirements-dev.txt with setuptools/wheel for building `.whl`
files
- Clarify the purpose of the `_job.yml` file for users who are only
interested in DLT pipelines (answering a question that came up recently)
## Tests
Existing tests exercise this template
## Changes
It wasn't working because it deferred to the regular `slog.TextHandler`
for the `WithAttrs` and `WithGroup` functions. Neither of these
functions mutates the handler; they return a new one. When the top-level
logger called one of them, log records in that context used the standard
handler instead of ours.
To implement tracking of attributes and groups, I followed the guide at
https://github.com/golang/example/blob/master/slog-handler-guide/README.md
for writing custom handlers.
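For illustration, here is a minimal sketch of the pattern the guide describes; the handler below is illustrative, not the one in this repo. The key point is that `WithAttrs` and `WithGroup` return a copy of our own handler type instead of deferring to an embedded handler:
```go
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
)

type friendlyHandler struct {
	attrs  []slog.Attr
	groups []string
}

func (h *friendlyHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return level >= slog.LevelInfo
}

func (h *friendlyHandler) Handle(ctx context.Context, r slog.Record) error {
	// Render the accumulated attributes and groups ourselves
	// (record-level attrs omitted for brevity).
	fmt.Fprintf(os.Stderr, "%s %s groups=%v attrs=%v\n", r.Level, r.Message, h.groups, h.attrs)
	return nil
}

// WithAttrs returns a copy with the attributes appended; it must not mutate h.
func (h *friendlyHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	h2 := *h
	h2.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...)
	return &h2
}

// WithGroup returns a copy with the group name recorded.
func (h *friendlyHandler) WithGroup(name string) slog.Handler {
	h2 := *h
	h2.groups = append(append([]string{}, h.groups...), name)
	return &h2
}

func main() {
	logger := slog.New(&friendlyHandler{})
	// With() calls WithAttrs; records in this context keep our formatting.
	logger.With("request_id", 42).Info("hello")
}
```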
## Tests
The new tests demonstrate formatting through `t.Log` and look good.
## Changes
This PR introduces the `skip_prompt_if` extension to the jsonschema
library. If the inputs provided by the user match the JSON schema, the
prompt for that property is skipped.
Right now only constant checks are supported, but if more complicated
conditionals are required in the future, this can be extended to support
`allOf`, `oneOf`, `anyOf`, etc., allowing template authors to specify
conditionals of arbitrary complexity.
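For illustration, a hedged sketch of the matching logic under the constant-check restriction; the `skipSchema`, `constCheck`, and `shouldSkipPrompt` names are illustrative, not the library's actual API:
```go
package main

import "fmt"

// constCheck holds a single const requirement, e.g. {"const": "no"} in JSON.
type constCheck struct {
	Const any `json:"const"`
}

// skipSchema holds per-property const requirements.
type skipSchema struct {
	Properties map[string]constCheck `json:"properties"`
}

// shouldSkipPrompt reports whether the answers so far satisfy the
// skip_prompt_if schema, in which case the prompt is skipped.
func shouldSkipPrompt(skip *skipSchema, answers map[string]any) bool {
	if skip == nil {
		return false
	}
	for name, prop := range skip.Properties {
		v, ok := answers[name]
		if !ok || v != prop.Const {
			return false
		}
	}
	return true
}

func main() {
	skip := &skipSchema{Properties: map[string]constCheck{
		"include_python": {Const: "no"},
	}}
	answers := map[string]any{"include_python": "no"}
	fmt.Println(shouldSkipPrompt(skip, answers)) // true: the prompt is skipped
}
```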
## Tests
Unit tests and manually.
## Changes
This PR adds versioning for bundle templates. Right now there's only
logic for the maximum template version the CLI supports. If we make a
breaking template change at some point in the future, we can also
include a minimum template version supported by the CLI.
## Tests
Unit tests.
## Changes
Only clusters with their source attribute equal to `UI` or `API` should
be presented in the dropdown.
## Tests
Unit test and manual confirmation.
## Changes
If a struct has a field of type `config.Value`, then we set it to the
source value while converting a `config.Value` instance to a struct as
part of a call to `convert.ToTyped`.
This is convenient when dealing with deeply nested structs where
functions on inner structs need access to the metadata provided by their
corresponding `config.Value` (e.g. where they were defined).
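A toy sketch of this behavior; the `Value` type and `toTyped` converter below are simplified stand-ins for `config.Value` and `convert.ToTyped`:
```go
package main

import "fmt"

// Value is a stand-in for config.Value: data plus source location metadata.
type Value struct {
	v    any
	file string
	line int
}

// Pipeline has a Value field; the conversion sets it to the source value so
// inner code can recover where the struct was defined.
type Pipeline struct {
	Name string

	value Value // set to the source config.Value during conversion
}

// toTyped is a toy converter: copy known fields, then store the source value.
func toTyped(dst *Pipeline, src Value) {
	m := src.v.(map[string]any)
	dst.Name = m["name"].(string)
	dst.value = src
}

func main() {
	src := Value{v: map[string]any{"name": "my_pipeline"}, file: "databricks.yml", line: 12}
	var p Pipeline
	toTyped(&p, src)
	fmt.Printf("%s was defined at %s:%d\n", p.Name, p.value.file, p.value.line)
}
```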
## Tests
Added unit tests pass.
## Changes
A bug in the code that pulls the remote state could cause the local
state to be empty instead of a copy of the remote state. This happened
only if the local state was present and stale when compared to the
remote version.
We correctly checked the state serial to see if the local state had to
be replaced, but didn't seek back to the start of the remote state
before writing it out. Because the staleness check reads the remote
state in full, copying from the same reader would immediately yield an
EOF.
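A minimal reproduction of this bug class and its fix, assuming the remote state is exposed as an `io.ReadSeeker` (names are illustrative):
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

func pullState(remote io.ReadSeeker) ([]byte, error) {
	// Staleness check: this consumes the reader.
	raw, err := io.ReadAll(remote)
	if err != nil {
		return nil, err
	}
	_ = raw // ...compare serial numbers against the local state here...

	// The fix: rewind before copying the remote state out. Without this,
	// io.Copy below reads zero bytes and the local state ends up empty.
	if _, err := remote.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}

	var local bytes.Buffer
	if _, err := io.Copy(&local, remote); err != nil {
		return nil, err
	}
	return local.Bytes(), nil
}

func main() {
	out, _ := pullState(strings.NewReader(`{"serial": 5}`))
	fmt.Println(string(out)) // {"serial": 5} instead of empty output
}
```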
## Tests
* Unit tests for state pull and push mutators that rely on a mocked
filer.
* An integration test that deploys the same bundle from multiple paths,
triggering the staleness logic.
Both failed prior to the fix and now pass.
## Changes
Adds a better error message when the input path is not a bundle
template.
before:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: open /Users/shreyas.goenka/bricks/databricks_template_schema.json: no such file or directory
```
after:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: expected to find a template schema file at /Users/shreyas.goenka/bricks/databricks_template_schema.json
```
## Changes
DLT currently doesn't always set `$PYTHONPATH` correctly (ES-947370).
This restores the original workaround to make new pipelines work while
that issue is being addressed. The workaround was removed in #832.
## Tests
Manually tested.
## Changes
This PR is the counterpart to #904. With this change, we are able to
convert a `config.Value` into a Go struct, make modifications to the Go
struct, and reflect those changes in a new `config.Value`.
This functionality allows us to incrementally introduce this
configuration representation to existing bundle mutators. Bundle
mutators expect a `*bundle.Bundle` argument and mutate its configuration
directly. These mutations are not reflected in the corresponding
`config.Value` (once introduced), which means we cannot use the
`config.Value` as the source of truth until we update _all_ mutators. To
address this, we can run `convert.ToTyped` and `convert.FromTyped` at
the mutator boundary (from `bundle.Apply`) and capture changes made to
the Go struct. Then we can incrementally make mutators aware of the
`config.Value` configuration and have them mutate that structure
directly.
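A self-contained sketch of the boundary round trip; `toTyped` and `fromTyped` below are toy stand-ins for `convert.ToTyped` and `convert.FromTyped`:
```go
package main

import "fmt"

type value map[string]any // stand-in for config.Value

type rootConfig struct {
	Name string
}

func toTyped(dst *rootConfig, src value) {
	dst.Name, _ = src["name"].(string)
}

func fromTyped(src *rootConfig, ref value) value {
	out := value{}
	for k, v := range ref {
		out[k] = v
	}
	out["name"] = src.Name
	return out
}

// apply runs a mutator against the typed struct and captures its changes back
// into the dynamic value, so the value tree stays the source of truth.
func apply(root value, mutate func(*rootConfig)) value {
	var typed rootConfig
	toTyped(&typed, root)
	mutate(&typed)
	return fromTyped(&typed, root)
}

func main() {
	root := value{"name": "dev", "extra": 1}
	root = apply(root, func(c *rootConfig) { c.Name = "prod" })
	fmt.Println(root) // map[extra:1 name:prod]: mutation captured, extras kept
}
```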
## Tests
New unit tests pass.
Manual spot checks against the bundle configuration type.
## Changes
If `args[0] == "."` was provided to the `bundle init` command, it would
try to resolve it as a built-in template and error out.
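A sketch of the shape of the fix, with a hypothetical `resolveTemplate` helper: existing local paths win over built-in template names:
```go
package main

import (
	"fmt"
	"os"
)

func resolveTemplate(arg string, builtin map[string]string) (string, error) {
	// Existing local directories (".", relative or absolute paths) win.
	if fi, err := os.Stat(arg); err == nil && fi.IsDir() {
		return arg, nil
	}
	// Otherwise fall back to the first-party template aliases.
	if url, ok := builtin[arg]; ok {
		return url, nil
	}
	return "", fmt.Errorf("no such template: %s", arg)
}

func main() {
	path, _ := resolveTemplate(".", map[string]string{
		"mlops-stacks": "https://github.com/databricks/mlops-stacks",
	})
	fmt.Println(path) // "." resolves to the current directory
}
```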
## Tests
Manually
before:
```
shreyas.goenka@THW32HFW6T mlops-stack % cli bundle init .
Error: open /var/folders/lg/njll3hjx7pjcgxs6n7b290bw0000gp/T/templates3934264356/templates/databricks_template_schema.json: no such file or directory
```
after:
```
shreyas.goenka@THW32HFW6T mlops-stack % cli bundle init .
Welcome to MLOps Stacks. For detailed information on project generation, see the README at https://github.com/databricks/mlops-stacks/blob/main/README.md.
Project Name [my-mlops-project]: ^C
```
## Changes
We rely on the descriptions to render prompts to the user, so we should
not allow empty descriptions here. Note that both MLOps Stacks and the
default-python template have descriptions for all their properties, so
this should not be an issue.
## Tests
Unit test
## Changes
`os.Getenv(..)` does not play well with `libs/env`. This PR makes the
relevant changes to the places where we need to read the user's home
directory.
## Tests
Mainly done in https://github.com/databricks/cli/pull/914
## Changes
This PR removes validation of the default value against the regex
pattern specified in a JSON schema at schema load time. This is required
because https://github.com/databricks/cli/pull/795 introduces
parameterising the default value as a Go text template, implying that
the default value no longer necessarily matches the pattern at schema
load time.
This will also unblock:
https://github.com/databricks/mlops-stacks/pull/108
Note, this does not remove runtime validation for input parameters right
before template initialization, which happens here:
fb32e78c9b/libs/template/materialize.go (L76)
## Tests
Changes to existing test.
## Changes
This PR makes a few methods private, exposing cleaner interfaces to get
the string representations for enums and default values of a JSON
Schema.
## Tests
Manually, template initialization for the `default-python` template
still works as expected.
## Changes
Semantics for merging two instances of `config.Value`:
* Merging x with nil or nil with x always yields x
* Merging maps a and b means entries from map b take precedence
* Merging sequences a and b means concatenating them
These are the same semantics that we use today when calling into mergo
in `bundle/config`.
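A sketch of these semantics over plain Go values; the real `config.Value` also carries metadata, so this only shows the shape of the logic:
```go
package main

import "fmt"

func merge(a, b any) any {
	// Merging x with nil (or nil with x) always yields x.
	if a == nil {
		return b
	}
	if b == nil {
		return a
	}
	switch bv := b.(type) {
	case map[string]any:
		av, ok := a.(map[string]any)
		if !ok {
			return b
		}
		out := map[string]any{}
		for k, v := range av {
			out[k] = v
		}
		// Entries from map b take precedence; overlapping keys merge deeply.
		for k, v := range bv {
			out[k] = merge(out[k], v)
		}
		return out
	case []any:
		av, ok := a.([]any)
		if !ok {
			return b
		}
		// Sequences concatenate.
		return append(append([]any{}, av...), bv...)
	default:
		return b
	}
}

func main() {
	a := map[string]any{"tags": []any{"x"}, "name": "a"}
	b := map[string]any{"tags": []any{"y"}}
	fmt.Println(merge(a, b)) // map[name:a tags:[x y]]
}
```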
## Tests
Unit tests pass.
## Changes
This functionality is not exercised (and will not be anytime soon).
Instead, we use a map to define first-party aliases for supported
templates.
1e46b9f88a/cmd/bundle/init.go (L21)
## Tests
Existing tests and manually, bundle init still works.
## Changes
Take @andrefurlan-db's original
[commit](https://github.com/databricks/cli/compare/databricks:6e21ced...andrefurlan-db:12ed10c)
to add `apps` support to the CLI, and add YAML file support as an
override (the apps routes are already a part of the Go SDK and are
available for use in the CLI).
**NOTE: this feature is still in private preview. CLI usage will be
internal only.**
## Tests
## Changes
This PR introduces a metadata struct that stores a subset of bundle
configuration that we wish to expose to other Databricks services that
wish to integrate with bundles.
This metadata file is uploaded to a file
`${bundle.workspace.state_path}/metadata.json` in the WSFS destination
of the bundle deployment.
Documentation for emitted metadata fields:
* `version`: Version of the metadata file schema.
* `config.bundle.git.branch`: Name of the git branch the bundle was
deployed from.
* `config.bundle.git.origin_url`: URL for the git remote "origin".
* `config.bundle.git.bundle_root_path`: Relative path of the bundle root
from the root of the git repository. Set to "." if they are the same.
* `config.bundle.git.commit`: SHA-1 hash of the exact commit this bundle
was deployed from. Note that the deployment might not exactly match this
commit if there were changes that had not been committed to git at
deploy time.
* `file_path`: Path in the workspace where we sync bundle files to.
* `resources.jobs.[job-ref].id`: ID of the job.
* `resources.jobs.[job-ref].relative_path`: Relative path, from the
bundle root, of the YAML config file where this job was defined.
Example metadata object when bundle root and git root are the same:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "master",
        "origin_url": "www.host.com",
        "commit": "7af8e5d3f5dceffff9295d42d21606ccf056dce0",
        "bundle_root_path": "."
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
Example metadata when the git root is one level above the bundle repo:
```json
{
  "version": 1,
  "config": {
    "bundle": {
      "lock": {},
      "git": {
        "branch": "dev-branch",
        "origin_url": "www.my-repo.com",
        "commit": "3db46ef750998952b00a2b3e7991e31787e4b98b",
        "bundle_root_path": "pipeline-progress"
      }
    },
    "workspace": {
      "file_path": "/Users/shreyas.goenka@databricks.com/.bundle/pipeline-progress/default/files"
    },
    "resources": {
      "jobs": {
        "bar": {
          "id": "245921165354846",
          "relative_path": "databricks.yml"
        }
      }
    },
    "sync": {}
  }
}
```
This unblocks integration with the jobs break-glass UI for bundles.
## Tests
Unit tests and integration tests.
## Changes
Adds a `welcome_message` field to templates and to the default Python
template.
## Tests
Manually.
Here's the output logs during template init now:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init
Template to use [default-python]:
Welcome to the sample Databricks Asset Bundle template! Please enter the following information to initialize your sample DAB.
Unique name for this project [my_project]: abcde
Include a stub (sample) notebook in 'abcde/src': no
Include a stub (sample) Delta Live Tables pipeline in 'abcde/src': yes
Include a stub (sample) Python package in 'abcde/src': no
✨ Your new project has been created in the 'abcde' directory!
Please refer to the README.md of your project for further instructions on getting started.
Or read the documentation on Databricks Asset Bundles at https://docs.databricks.com/dev-tools/bundles/index.html.
```
## Changes
This is similar to #904 but instead of converting the dynamic
configuration to Go structs, this normalizes a `config.Value` according
to the type of a Go struct and returns the new, normalized
`config.Value`.
This will be used to ensure that two `config.Value` trees are
type-compatible before we can merge them (i.e. instances from different
files).
Warnings and errors during normalization are accumulated and returned as
a `diag.Diagnostics` structure. We can use this to surface warnings
about unknown fields, or errors about invalid types, in aggregate
instead of one-by-one. This approach is inspired by the pattern to
accumulate diagnostics in Terraform provider code.
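A simplified sketch of the accumulation pattern; the `diagnostic` types below are stand-ins for the `diag.Diagnostics` structure:
```go
package main

import "fmt"

type diagnostic struct {
	Severity string
	Summary  string
}

type diagnostics []diagnostic

func (d diagnostics) warn(msg string) diagnostics {
	return append(d, diagnostic{"warning", msg})
}

// normalizeStruct drops unknown fields, accumulating one warning per field
// rather than aborting at the first problem.
func normalizeStruct(v map[string]any, known map[string]bool) (map[string]any, diagnostics) {
	var diags diagnostics
	out := map[string]any{}
	for k, val := range v {
		if !known[k] {
			diags = diags.warn(fmt.Sprintf("unknown field: %s", k))
			continue
		}
		out[k] = val
	}
	return out, diags
}

func main() {
	v := map[string]any{"name": "x", "nmae": "typo"}
	out, diags := normalizeStruct(v, map[string]bool{"name": true})
	fmt.Println(out, diags) // map[name:x] [{warning unknown field: nmae}]
}
```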
## Tests
New unit tests.
## Changes
Now that we have a new YAML loader (see #828), we need code to turn its
output into our Go structs.
## Tests
New unit tests pass.
Confirmed that we can replace our existing loader/converter with this
one and that existing unit tests for bundle loading still pass.
## Changes
If a bundle configuration specifies a workspace host, and the user
specifies a profile to use, we perform a check to confirm that the
workspace host in the bundle configuration and the workspace host from
the profile are identical. If they are not, we return an error. The
check was introduced in #571.
Previously, the code included an assumption that the client
configuration was already loaded from the environment prior to
performing the check. This was not the case, and as such if the user
intended to use a non-default path to `.databrickscfg`, this path was
not used when performing the check.
The fix does the following:
* Resolve the configuration prior to performing the check.
* Don't treat the configuration file not existing as an error.
* Add unit tests.
Fixes #884.
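A sketch of the check's intent with hypothetical helpers (`normalizeHost` and `validateProfileHost` are illustrative, not the code in this PR): resolve the profile's configuration first, then compare hosts:
```go
package main

import (
	"fmt"
	"strings"
)

func normalizeHost(host string) string {
	return strings.TrimSuffix(strings.ToLower(host), "/")
}

// validateProfileHost assumes profileHost was loaded after resolving the
// configuration (honoring a custom .databrickscfg path such as the one set
// via DATABRICKS_CONFIG_FILE), not from defaults.
func validateProfileHost(bundleHost, profileHost string) error {
	if normalizeHost(bundleHost) != normalizeHost(profileHost) {
		return fmt.Errorf("host mismatch: bundle has %s, profile has %s", bundleHost, profileHost)
	}
	return nil
}

func main() {
	err := validateProfileHost("https://adb-123.azuredatabricks.net/", "https://adb-123.azuredatabricks.net")
	fmt.Println(err) // <nil>: a trailing slash does not cause a false mismatch
}
```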
## Tests
Unit tests and manual confirmation.
## Changes
In order to support variable interpolation on fields that aren't a
string in the resource types, we need a separate representation of the
bundle configuration tree with the type equivalent of Go's `any`. But
instead of using `any` directly, we can do better and use a custom type
equivalent to `any` that captures additional metadata. In this PR, the
additional metadata is limited to the origin of the configuration value
(file, line number, and column).
The YAML loader in this commit uses the upstream YAML parser's
`yaml.Node` type to get access to location information. It reimplements
the loading step that takes the `yaml.Node` structure and turns it into
the configuration tree we need.
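A minimal sketch of reading locations off `yaml.Node`; the location-aware value type here is simplified:
```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type located struct {
	value  string
	line   int
	column int
}

// loadScalars walks a mapping node and records each scalar value's position.
func loadScalars(n *yaml.Node, out map[string]located) {
	if n.Kind == yaml.DocumentNode {
		loadScalars(n.Content[0], out)
		return
	}
	if n.Kind != yaml.MappingNode {
		return
	}
	// Mapping content alternates key, value, key, value, ...
	for i := 0; i+1 < len(n.Content); i += 2 {
		k, v := n.Content[i], n.Content[i+1]
		if v.Kind == yaml.ScalarNode {
			out[k.Value] = located{v.Value, v.Line, v.Column}
		}
	}
}

func main() {
	var doc yaml.Node
	_ = yaml.Unmarshal([]byte("name: my_bundle\ntarget: dev\n"), &doc)
	out := map[string]located{}
	loadScalars(&doc, out)
	fmt.Println(out["target"]) // {dev 2 9}: value plus line and column
}
```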
Next steps after this PR:
* Implement configuration tree type checking (against a Go type)
* Implement configuration tree merging (to replace the current merge
functionality)
* Implement conversion to and from the bundle configuration struct
* Perform variable interpolation against this configuration tree (to
support variable interpolation for ints)
* (later) Implement a `jsonloader` that produces the same tree and
includes location information
## Tests
The tests in `yamlloader` perform an equality check on the untyped
output of loading a YAML file between the upstream YAML loader and this
loader. The YAML examples were generated by prompting ChatGPT for
examples that showcase anchors, primitive values, edge cases, etc.
## Changes
Updates to bundle templates can require newer versions of the CLI. This
PR extends the JSON schema representation to allow template authors to
set a minimum CLI version they require for their templates.
This is required to make improvements/additions to the mlops-stacks
repo.
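A sketch of the version gate using `golang.org/x/mod/semver`; the `min_databricks_cli_version` field name is from the template schema, the rest is illustrative:
```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkCLIVersion fails when the template's min_databricks_cli_version is
// newer than the running CLI version.
func checkCLIVersion(minVersion, currentVersion string) error {
	if semver.Compare(minVersion, currentVersion) > 0 {
		return fmt.Errorf("minimum CLI version %q is greater than current CLI version %q. Please upgrade your current Databricks CLI", minVersion, currentVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkCLIVersion("v5000.1.1", "v0.207.2-dev+1b992c0")) // error
	fmt.Println(checkCLIVersion("v0.1.1", "v0.207.2-dev+1b992c0"))   // <nil>
}
```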
## Tests
Tested using unit tests and manually.
For manual testing, I created a custom build of the CLI using goreleaser
and then tested it against a local instance of mlops-stack.
When the mlops-stack schema has:
```
"min_databricks_cli_version": "v5000.1.1",
```
output (error as expected)
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Error: minimum CLI version "v5000.1.1" is greater than current CLI version "v0.207.2-dev+1b992c0". Please upgrade your current Databricks CLI
```
When the mlops-stack schema has:
```
"min_databricks_cli_version": "v0.1.1",
```
output (validation passes)
```
shreyas.goenka@THW32HFW6T bricks % ./dist/cli_darwin_arm64/databricks bundle init ~/mlops-stack
Welcome to MLOps Stack. For detailed information on project generation, see the README at https://github.com/databricks/mlops-stack/blob/main/README.md.
Project Name [my-mlops-project]: ^C
```
## Changes
Improve the output of help, prompts, and so on for `databricks bundle
init` and the default template.
Among other things, this PR adds support for a new `welcome_message`
property that lets a template print a custom message on success:
```
$ databricks bundle init
Template to use [default-python]:
Unique name for this project [my_project]: lennart_project
Include a stub (sample) notebook in 'lennart_project/src': yes
Include a stub (sample) Delta Live Tables pipeline in 'lennart_project/src': yes
Include a stub (sample) Python package in 'lennart_project/src': yes
✨ Your new project has been created in the 'lennart_project' directory!
Please refer to the README.md of your project for further instructions on getting started.
Or read the documentation on Databricks Asset Bundles at https://docs.databricks.com/dev-tools/bundles/index.html.
```
---------
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
## Changes
This PR pays down some tech debt by refactoring the sync diff
computation into interfaces that are more robust.
Specifically:
1. Refactors the single diff computation function into a `SnapshotState`
struct that computes the target state based only on the current local
files, making it more robust by not carrying over state from previous
iterations.
2. Adds new validations for the sync state which make sure that the
invariants downstream code expects actually hold. This prevents a class
of issues where these invariants break and the synchroniser behaves
unexpectedly.
Note that this does not change the existing schema for the snapshot,
only the way the diff is computed; it is thus backwards compatible (i.e.
it does not require a schema version bump).
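A sketch of the invariant-validation idea; field names are illustrative:
```go
package main

import "fmt"

type snapshotState struct {
	localToRemote map[string]string
	remoteToLocal map[string]string
}

// validate checks that the two mappings are exact inverses, an invariant the
// downstream sync code relies on.
func (s *snapshotState) validate() error {
	for l, r := range s.localToRemote {
		if s.remoteToLocal[r] != l {
			return fmt.Errorf("inconsistent state: local %s maps to remote %s, but not back", l, r)
		}
	}
	for r, l := range s.remoteToLocal {
		if s.localToRemote[l] != r {
			return fmt.Errorf("inconsistent state: remote %s maps to local %s, but not back", r, l)
		}
	}
	return nil
}

func main() {
	s := &snapshotState{
		localToRemote: map[string]string{"a.py": "a"},
		remoteToLocal: map[string]string{"a": "a.py"},
	}
	fmt.Println(s.validate()) // <nil>: the mappings are consistent
}
```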
## Tests
## Changes
This PR adds a few utilities related to Python interpreter detection:
- `python.DetectInterpreters` to detect all Python versions available in
`$PATH` by executing every matched binary name with the `--version`
flag.
- `python.DetectVirtualEnvPath` to detect if there's a child virtual
environment in the `src` directory.
- `python.DetectExecutable` to detect if python3 is installed, either
via the `which python3` command or by calling
`python.DetectInterpreters().AtLeast("v3.8")` (see the usage sketch
below).
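A usage sketch of the `which python3` fallback using only the standard library; the exact API of the utilities above may differ:
```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// detectExecutable mirrors the `which python3` fallback: find the binary on
// $PATH and ask it for its version.
func detectExecutable() (path, version string, err error) {
	path, err = exec.LookPath("python3")
	if err != nil {
		return "", "", err
	}
	out, err := exec.Command(path, "--version").CombinedOutput()
	if err != nil {
		return "", "", err
	}
	// Output looks like "Python 3.11.4".
	version = strings.TrimSpace(strings.TrimPrefix(string(out), "Python "))
	return path, version, nil
}

func main() {
	path, version, err := detectExecutable()
	if err != nil {
		fmt.Println("no python3 found:", err)
		return
	}
	fmt.Printf("%s (%s)\n", path, version)
}
```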
To be merged after https://github.com/databricks/cli/pull/804, as one of
the steps to get https://github.com/databricks/cli/pull/637 in, as
previously discussed.
## Changes
The jobs backend propagates job tags to the underlying cloud provider's
resources. As such, they need to match the constraints a cloud provider
places on tag values. The display name can contain anything. With this
change, we modify the tag value to equal the short name as used in the
name prefix.
Additionally, we leverage tag normalization as introduced in #819 to
make sure characters that aren't accepted are removed before using the
value as a tag value.
This is a new stab at #810 and should completely eliminate this class of
problems.
## Tests
Tests pass.
## Changes
Prompted by the proposed fix for a tagging-related problem in #810, I
investigated how tag validation works. This turned out to be quite a bit
more complex than anticipated. Tags at the job level (or cluster level)
are passed through to the underlying compute infrastructure and as such
are tested against cloud-specific validation rules. GCP appears to be
the most restrictive. It would be disappointing to always restrict to
`\w+`, so this package implements validation and normalization rules for
each cloud. It picks the right cloud to use based on a Go SDK
configuration.
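A sketch of per-cloud normalization; the patterns below are illustrative placeholders, not the exact rules implemented in the package:
```go
package main

import (
	"fmt"
	"regexp"
)

type cloudTagRules struct {
	disallowed *regexp.Regexp // characters a tag value may not contain
}

var rules = map[string]cloudTagRules{
	// GCP is the most restrictive; others allow a wider character set.
	"gcp": {disallowed: regexp.MustCompile(`[^a-z0-9_-]`)},
	"aws": {disallowed: regexp.MustCompile(`[^\w .:/=+@-]`)},
}

// normalizeTagValue removes characters the target cloud does not accept.
func normalizeTagValue(cloud, value string) string {
	r, ok := rules[cloud]
	if !ok {
		return value
	}
	return r.disallowed.ReplaceAllString(value, "")
}

func main() {
	fmt.Println(normalizeTagValue("gcp", "[dev] lennart")) // devlennart
}
```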
## Tests
Exhaustive unit tests. The regular expressions were pulled in #814.
## Changes
This PR adds higher-level wrappers for calling subprocesses. One of the
steps to get https://github.com/databricks/cli/pull/637 in, as
previously discussed.
The reason to add `process.Forwarded()` is to proxy Python's `input()`
calls from a child process seamlessly. Another use-case is plugging in
`less` as a pager for the list results.
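A sketch of the forwarding idea using plain `os/exec`; `process.Forwarded()` adds conveniences on top of wiring like this:
```go
package main

import (
	"os"
	"os/exec"
)

// forwarded runs a child process with its stdin, stdout, and stderr wired to
// the parent, so interactive prompts like Python's input() work transparently.
func forwarded(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	// Connect the child's streams directly to ours.
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Prompts issued by this child appear in, and read from, our terminal.
	_ = forwarded("python3", "-c", `print(input("name: "))`)
}
```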
## Tests
`make test`
## Changes
This PR introduces support for regex pattern validation in our custom
jsonschema validator. This allows us to fail early if a user enters an
invalid value for a field.
For example, now this is what initializing the default template looks
like with an invalid project name:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init
Template to use [default-python]:
Unique name for this project [my_project]: (_*_)
Error: invalid value for project_name: (_*_). Must consist of letter and underscores only.
```
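A sketch of the validation step; names and the pattern are illustrative:
```go
package main

import (
	"fmt"
	"regexp"
)

// validatePattern matches user input against the schema's regex pattern
// before accepting it, so we fail early on invalid values.
func validatePattern(name, value, pattern string) error {
	matched, err := regexp.MatchString(pattern, value)
	if err != nil {
		return err
	}
	if !matched {
		return fmt.Errorf("invalid value for %s: %s", name, value)
	}
	return nil
}

func main() {
	pattern := `^[A-Za-z_][A-Za-z0-9_]*$`
	fmt.Println(validatePattern("project_name", "(_*_)", pattern))      // error
	fmt.Println(validatePattern("project_name", "my_project", pattern)) // <nil>
}
```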
## Tests
New unit tests and manually.
## Changes
Git repos hosted over HTTP do not support shallow cloning. This PR adds
retry logic: if we detect that shallow cloning is not supported, we
retry the clone without it.
Note: I saw the match string `dumb http transport does not support
shallow capabilities` reported for different hosts on the internet, so
this should work across a large class of git servers. However, the
`--depth` flag is not strictly necessary, so we can remove it entirely
if this issue is reported again.
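A sketch of the retry logic, using the match string quoted above (helper names are illustrative):
```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func clone(url, dir string) error {
	out, err := runGit("clone", "--depth=1", url, dir)
	if err != nil && strings.Contains(out, "dumb http transport does not support shallow capabilities") {
		// The server cannot serve shallow fetches; retry a full clone.
		out, err = runGit("clone", url, dir)
	}
	if err != nil {
		return fmt.Errorf("git clone failed: %s", out)
	}
	return nil
}

// runGit runs a git command, capturing combined output for error matching.
func runGit(args ...string) (string, error) {
	var buf bytes.Buffer
	cmd := exec.Command("git", args...)
	cmd.Stdout = &buf
	cmd.Stderr = &buf
	err := cmd.Run()
	return buf.String(), err
}

func main() {
	fmt.Println(clone("https://example.com/repo.git", "repo"))
}
```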
## Tests
Tested manually. `bundle init` successfully downloads the private HTTP
repo reported by an internal user.