## Changes
This PR sets the following fields for all jobs that are deployed from a
DAB (see the sketch after this list):
1. `deployment`: This provides the platform with the path to a file to
read the metadata from.
2. `edit_mode`: This tells the platform to display the break-glass UI
for jobs deployed from a DAB. Setting this is required to re-lock the UI
after a user clicks "disconnect from source".
3. `format = MULTI_TASK`: This makes the Terraform provider always use
jobs API 2.1 for creating/updating the job. Required because
`deployment` and `edit_mode` are only available in API 2.1.
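For illustration, a minimal Go sketch of the three fields. The struct and
field names here are illustrative stand-ins, not the provider's actual
types, and the `deployment` values are assumed:

```go
package main

import "fmt"

// jobSettings is an illustrative stand-in for the jobs API 2.1 settings;
// the real types live in the Terraform provider and the jobs API bindings.
type jobSettings struct {
	Deployment struct {
		Kind             string // e.g. "BUNDLE" for jobs deployed from a DAB (assumed value)
		MetadataFilePath string // the file the platform reads metadata from
	}
	EditMode string // e.g. "UI_LOCKED" to show the break-glass UI (assumed value)
	Format   string // "MULTI_TASK" forces jobs API 2.1
}

func main() {
	var s jobSettings
	s.Deployment.Kind = "BUNDLE"
	s.Deployment.MetadataFilePath = "/Workspace/.bundle/metadata.json" // hypothetical path
	s.EditMode = "UI_LOCKED"
	s.Format = "MULTI_TASK"
	fmt.Printf("%+v\n", s)
}
```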
## Tests
Unit tests and manual verification: deployments trigger the break-glass
UI, and there is no Terraform drift when all three fields are set.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Bumps [github.com/google/uuid](https://github.com/google/uuid) from
1.4.0 to 1.5.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/google/uuid/releases">github.com/google/uuid's
releases</a>.</em></p>
<blockquote>
<h2>v1.5.0</h2>
<h2><a
href="https://github.com/google/uuid/compare/v1.4.0...v1.5.0">1.5.0</a>
(2023-12-12)</h2>
<h3>Features</h3>
<ul>
<li>Validate UUID without creating new UUID (<a
href="https://redirect.github.com/google/uuid/issues/141">#141</a>) (<a
href="9ee7366e66">9ee7366</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4d47f8eb06"><code>4d47f8e</code></a>
chore(master): release 1.5.0 (<a
href="https://redirect.github.com/google/uuid/issues/145">#145</a>)</li>
<li><a
href="9ee7366e66"><code>9ee7366</code></a>
feat: Validate UUID without creating new UUID (<a
href="https://redirect.github.com/google/uuid/issues/141">#141</a>)</li>
<li><a
href="b35aa6a595"><code>b35aa6a</code></a>
add uuid version 6 and 7 (<a
href="https://redirect.github.com/google/uuid/issues/139">#139</a>)</li>
<li>See full diff in <a
href="https://github.com/google/uuid/compare/v1.4.0...v1.5.0">compare
view</a></li>
</ul>
</details>
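The headline feature is the new `uuid.Validate`, which checks a string
without allocating a `UUID`. A quick sketch comparing it to the
pre-1.5.0 `Parse` round-trip:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	s := "f47ac10b-58cc-4372-a567-0e02b2c3d479"

	// Before v1.5.0: validation required constructing a UUID via Parse.
	if _, err := uuid.Parse(s); err != nil {
		fmt.Println("invalid:", err)
		return
	}

	// v1.5.0: Validate reports an error without creating a new UUID.
	if err := uuid.Validate(s); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("valid")
}
```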
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Changes
If a user configures a workspace host in a bundle and wants to use the
"azure-cli" authentication type, we would still run profile resolution:
if the `.databrickscfg` file had a matching profile, we would still
load it, even though profile resolution is only meant as a fallback.
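The gist of the fix, as a hedged sketch (the helper name is
hypothetical; `config.Config` is the Go SDK's configuration type):

```go
package auth

import "github.com/databricks/databricks-sdk-go/config"

// shouldResolveProfile is a hypothetical helper illustrating the fix:
// profile resolution is a fallback, so it must be skipped when the user
// already picked an auth type (e.g. DATABRICKS_AUTH_TYPE=azure-cli) or
// named a profile explicitly.
func shouldResolveProfile(cfg *config.Config) bool {
	return cfg.AuthType == "" && cfg.Profile == ""
}
```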
## Tests
* Unit test.
* Manually confirmed that setting `DATABRICKS_AUTH_TYPE=azure-cli` now
works as expected.
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.26.1 to 0.26.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.26.2</h2>
<p>This is a bugfix release, including a fix correcting issues with
OAuth flows, due to a bug with the propagation of the response status in
<code>httpclient</code>'s <code>RoundTrip()</code> implementation. This
fixes the <code>failed during request visitor: token: oauth2: cannot
fetch token: Response: {...}</code> error.</p>
<p>All fixes:</p>
<ul>
<li>Migrate Azure MSI & Metadata Service token sources to
<code>httpclient</code> and add 100% test coverage (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/709">#709</a>).</li>
<li>Added <code>config.NewAzureCliTokenSource</code> and
<code>config.NewAzureMsiTokenSource</code> constructors (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/727">#727</a>).</li>
<li>Use per-config refresh context for OAuth tokens (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/728">#728</a>).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ebb3ea48bc"><code>ebb3ea4</code></a>
Release v0.26.2 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/732">#732</a>)</li>
<li><a
href="2dbf06aac2"><code>2dbf06a</code></a>
Use per-config refresh context for OAuth tokens (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/728">#728</a>)</li>
<li><a
href="964cdc10f7"><code>964cdc1</code></a>
Added <code>config.NewAzureCliTokenSource</code> and
<code>config.NewAzureMsiTokenSource</code> con...</li>
<li><a
href="82d089a391"><code>82d089a</code></a>
Migrate Azure MSI & Metadata Service token sources to
<code>httpclient</code> and add 10...</li>
<li><a
href="8b28282a41"><code>8b28282</code></a>
Add account level MSI credentials test (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/726">#726</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.26.1...v0.26.2">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
## Changes
Notifications weren't passed along because of a plural-vs-singular
naming mismatch.
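Illustrative sketch of the failure mode (field and key names here are
hypothetical):

```go
package jobs

// notificationSettings illustrates the class of bug: if the serialized
// key is singular while the API expects the plural form, the value is
// silently dropped instead of being passed along.
type notificationSettings struct {
	// Buggy: `json:"email_notification"` (singular) never matched.
	// Fixed: the plural key matches what the backend expects.
	EmailNotifications []string `json:"email_notifications"`
}
```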
## Tests
* Added unit test coverage.
* Manually confirmed it now works in an example bundle.
## Changes
- Tweak strings, documentation in template
- Extend requirements-dev.txt with setuptools/wheel for building whl
files
- Clarify the purpose of the "_job.yml" file for users who are only
interested in DLT pipelines (answering a question that came up recently)
## Tests
Existing tests exercise this template
## Changes
This PR:
1. Moves the code that loads bundle JSON schema descriptions from the
OpenAPI spec into an internal Go module.
2. Removes command line flags from the `bundle schema` command. These
flags were meant for internal processes and were at no point meant for
customer use.
3. Regenerates `bundle_descriptions.json`.
4. Adds support for `bundle: "deprecated"`. The `environments` field is
tagged as deprecated in this PR and consequently will no longer be part
of the bundle schema. See the sketch after this list.
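A hedged sketch of the new tag (struct shape and field types are
illustrative; only the `bundle:"deprecated"` tag itself comes from this
PR):

```go
package config

// Root illustrates tagging a field as deprecated so the schema
// generator drops it from the published bundle schema.
type Root struct {
	Bundle map[string]any `json:"bundle"`

	// Deprecated; excluded from the generated bundle schema.
	Environments map[string]any `json:"environments,omitempty" bundle:"deprecated"`
}
```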
## Tests
Tested by regenerating the CLI against its current OpenAPI spec (as
defined in `__openapi_sha`). The `bundle_descriptions.json` in this PR
was generated from the code generator.
Manually checked that the autocompletion / descriptions from the new
bundle schema are correct.
## Changes
It wasn't working because it deferred to the regular `slog.TextHandler`
for the `WithAttrs` and `WithGroup` functions. Neither of these
functions mutates the handler; both return a new one. When the
top-level logger called one of them, log records in that context used
the standard handler instead of ours.
To implement tracking of attributes and groups, I followed the guide at
https://github.com/golang/example/blob/master/slog-handler-guide/README.md
for writing custom handlers.
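A minimal sketch of the pattern from that guide, assuming a simplified
text format (the real handler does more):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log/slog"
	"os"
	"slices"
	"strings"
)

// sketchHandler tracks attributes and groups itself instead of deferring
// to slog.TextHandler.
type sketchHandler struct {
	w      io.Writer
	attrs  []slog.Attr // already qualified with their group prefix
	groups []string
}

func (h *sketchHandler) Enabled(_ context.Context, level slog.Level) bool {
	return level >= slog.LevelInfo
}

func (h *sketchHandler) qualify(key string) string {
	if len(h.groups) == 0 {
		return key
	}
	return strings.Join(h.groups, ".") + "." + key
}

// WithAttrs returns a copy; mutating the receiver would make sibling
// loggers derived via With(...) share state.
func (h *sketchHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	h2 := *h
	h2.attrs = slices.Clip(h.attrs)
	for _, a := range attrs {
		a.Key = h.qualify(a.Key)
		h2.attrs = append(h2.attrs, a)
	}
	return &h2
}

func (h *sketchHandler) WithGroup(name string) slog.Handler {
	h2 := *h
	h2.groups = append(slices.Clip(h.groups), name)
	return &h2
}

func (h *sketchHandler) Handle(_ context.Context, r slog.Record) error {
	fmt.Fprintf(h.w, "%s %s", r.Level, r.Message)
	for _, a := range h.attrs {
		fmt.Fprintf(h.w, " %s=%v", a.Key, a.Value)
	}
	r.Attrs(func(a slog.Attr) bool {
		fmt.Fprintf(h.w, " %s=%v", h.qualify(a.Key), a.Value)
		return true
	})
	fmt.Fprintln(h.w)
	return nil
}

func main() {
	log := slog.New(&sketchHandler{w: os.Stderr})
	log.With("component", "deploy").WithGroup("req").Info("start", "id", 42)
}
```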
## Tests
The new tests demonstrate formatting through `t.Log` and look good.
## Changes
It makes the behaviour consistent, whether `python_wheel_wrapper` is
enabled or not, when a job is run with the `--python-params` flag.
In `python_wheel_wrapper` mode it converts the dynamic `python_params`
into a specially named dynamic `notebook_param`; the wrapper reads them
with `dbutils` and passes them to `sys.argv`.
Fixes #1000
## Tests
Added an integration test.
Integration tests pass.
## Changes
This PR adds documentation for positional arguments in commands that are
generated from the openapi spec.
Note: the changes to `.gitattributes` will be reverted / properly fixed
in https://github.com/databricks/cli/pull/1012.
## Changes
This PR introduces the `skip_prompt_if` extension to the jsonschema
library. If the inputs provided by the user match the JSON schema then
the prompt for that property is skipped.
Right now only constant checks are supported, but if more complicated
conditionals are required in the future, this can be extended to support
`allOf`, `oneOf`, `anyOf`, etc., allowing template authors to specify
conditionals of arbitrary complexity.
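A hedged sketch of the const-only check (type and method names are
hypothetical stand-ins for the jsonschema extension):

```go
package jsonschema

// skipPromptIf models the extension in its current const-only form: the
// prompt for a property is skipped when every const in the condition
// matches a value the user already supplied.
type skipPromptIf struct {
	Properties map[string]any // property name -> required const value
}

func (s skipPromptIf) matches(supplied map[string]any) bool {
	for name, want := range s.Properties {
		if got, ok := supplied[name]; !ok || got != want {
			return false
		}
	}
	return true
}
```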
## Tests
Unit tests and manually.
## Changes
This PR adds versioning for bundle templates. Right now there is only
logic for the maximum template version the CLI supports. If we make a
breaking template change at some point in the future, we can also add a
minimum template version supported by the CLI, as sketched below.
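A hedged sketch of what the gate could look like (constant and function
names hypothetical):

```go
package template

import "fmt"

// maxSupportedVersion is the highest template schema version this build
// of the CLI understands (hypothetical name).
const maxSupportedVersion = 1

func validateVersion(v int) error {
	if v > maxSupportedVersion {
		return fmt.Errorf("template version %d is not supported by this CLI; please upgrade the CLI", v)
	}
	return nil
}
```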
## Tests
Unit tests.
## Changes
Only clusters with their source attribute equal to `UI` or `API` should
be presented in the dropdown.
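A hedged sketch of the filter (function name hypothetical; job and
pipeline clusters report other source values and are filtered out):

```go
package picker

// includeInDropdown keeps only clusters a user created directly via the
// UI or the API, per the rule described above.
func includeInDropdown(clusterSource string) bool {
	return clusterSource == "UI" || clusterSource == "API"
}
```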
## Tests
Unit test and manual confirmation.
## Changes
The code included the to-be-created profile in the configuration and
that triggered the SDK to try and load it. Instead, we must use the
specified host and token directly.
## Tests
Manually. More integration test coverage TBD.
## Changes
The manual unshallow step is superfluous and can be done as part of the
`actions/checkout` step.
Companion to #1022.
## Tests
Manual trigger of the snapshot build workflow.
## Changes
The 32-bit Windows binary fails to build due to an integer overflow in
the SDK.
We don't test 32-bit anywhere, so I propose we stop publishing these
builds until we receive evidence that they are still useful.
## Tests
n/a
## Changes
We didn't return the error upon creating a workspace or account client.
If there is an error, it must always propagate up the stack. The result
of this bug was that we were setting a `nil` account or workspace
client, which in turn caused SIGSEGVs.
Fixes #913.
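A minimal sketch of the fixed pattern, using the Go SDK constructor
(surrounding names illustrative):

```go
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go"
)

// newWorkspaceClient illustrates the fix: propagate the constructor
// error instead of dropping it and caching a nil client.
func newWorkspaceClient() (*databricks.WorkspaceClient, error) {
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		// Previously this error was swallowed; later dereferences of the
		// nil *WorkspaceClient then crashed with a SIGSEGV.
		return nil, fmt.Errorf("cannot create workspace client: %w", err)
	}
	return w, nil
}

func main() {
	if _, err := newWorkspaceClient(); err != nil {
		fmt.Println("Error:", err)
	}
}
```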
## Tests
Manually confirmed this fixes the linked issue. The CLI now correctly
returns an error when the client cannot be constructed.
The issue was reproducible using a `.databrickscfg` with a single,
incorrectly configured profile.
## Changes
If the glob call for a defined pipeline library yields no matches, leave
the entry as is.
The next mutators in the chain will detect that the file is missing, and
the resulting error will be more user friendly.
Before the change:
```
Starting resource deployment
Error: terraform apply: exit status 1
Error: cannot create pipeline: libraries must contain at least one element
```
After the change:
```
Error: notebook ./non-existent not found
```
## Tests
Added regression unit tests
## Changes
Removed the hash from the upload path since it is not useful anyway.
The main reason for that change was to make re-installation work on
all-purpose clusters. But for that to work, the wheel version needs to
be increased anyway, so having only a hash in the path is useless.
Note: using the --build-number (build tag) flag does not help with
re-installing libraries on all-purpose clusters. The reason is that
`pip` ignores the build tag when upgrading a library and only looks at
the wheel version.
The build tag is only used for sorting versions: the one with the
higher build tag takes priority at install time. This only works if no
version of the library is installed yet.
See
a15dd75d98/src/pip/_internal/index/package_finder.py (L522-L556) and
https://github.com/pypa/pip/issues/4781
Thus, the only way to reinstall the library on an all-purpose cluster
is to increase the wheel version manually or use automatic version
generation, e.g.:
```
import datetime

from setuptools import setup

setup(
    # A timestamp-based version makes pip treat each rebuilt wheel as an upgrade.
    version=datetime.datetime.utcnow().strftime("%Y%m%d.%H%M%S"),
    ...
)
```
## Tests
Integration tests passed.
## Changes
This makes mlops-stacks more discoverable and improves the UX of
initializing the mlops-stacks template.
## Tests
Manually
Dropdown UI:
```
shreyas.goenka@THW32HFW6T projects % cli bundle init
Template to use:
▸ default-python
mlops-stacks
```
Help message:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init -h
Initialize using a bundle template.
TEMPLATE_PATH optionally specifies which template to use. It can be one of the following:
- default-python: The default Python template
- mlops-stacks: The Databricks MLOps Stacks template. More information can be found at: https://github.com/databricks/mlops-stacks
```
## Changes
This PR makes changes required to automatically update the bundle docs
during the CLI release process. We rely on `post_generate` scripts that
are executed after code generation with CWD as the CLI repo root.
The new `output-file` flag is introduced because stdout redirection does
not work here and would otherwise require changes to our release
automation CLI (deco CLI).
## Tests
Manually. Regenerated the CLI and the descriptions were indeed generated
for the CLI from the provided openapi spec.
## Changes
If a struct has a field of type `config.Value`, then we set it to the
source value while converting a `config.Value` instance to a struct as
part of a call to `convert.ToTyped`.
This is convenient when dealing with deeply nested structs where
functions on inner structs need access to the metadata provided by their
corresponding `config.Value` (e.g. where they were defined).
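A hedged sketch (struct name and import path assumed; `config.Value`
and `convert.ToTyped` are as described above):

```go
package resources

import "github.com/databricks/cli/libs/config"

// Pipeline illustrates a struct with a config.Value field: during
// convert.ToTyped, the field is set to the source value, so methods on
// the struct can inspect metadata such as where each value was defined.
type Pipeline struct {
	Name string `json:"name"`

	// Populated with the source value by convert.ToTyped.
	Value config.Value `json:"-"`
}
```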
## Tests
The added unit tests pass.
## Changes
A bug in the code that pulls the remote state could cause the local
state to be empty instead of a copy of the remote state. This happened
only if the local state was present and stale when compared to the
remote version.
We correctly checked for the state serial to see if the local state had
to be replaced but didn't seek back on the remote state before writing
it out. Because the staleness check would read the remote state in full,
copying from the same reader would immediately yield an EOF.
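A minimal sketch of the fix, assuming the remote state is held in an
`io.ReadSeeker` (names illustrative):

```go
package deploy

import "io"

// writeLocalState sketches the fix: the staleness check has already
// read `remote` to EOF, so rewind before copying; otherwise the copy
// yields zero bytes and the local state file ends up empty.
func writeLocalState(remote io.ReadSeeker, local io.Writer) error {
	if _, err := remote.Seek(0, io.SeekStart); err != nil {
		return err
	}
	_, err := io.Copy(local, remote)
	return err
}
```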
## Tests
* Unit tests for state pull and push mutators that rely on a mocked
filer.
* An integration test that deploys the same bundle from multiple paths,
triggering the staleness logic.
Both failed prior to the fix and now pass.
## Changes
This breaks out the flags into a separate struct to make it easier to
pass around.
If specified, the flag calls into the `cfgpicker` to prompt for a
cluster to associate with the profile.
## Tests
Existing tests pass; added one for host validation.
---------
Co-authored-by: Miles Yucht <miles@databricks.com>
## Changes
`databricks configure` creates a new .databrickscfg if one doesn't
already exist, but `databricks auth login` fails in this case. Because
`databricks auth login` writes out the config file anyway, we
gracefully handle this error and continue.
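A hedged sketch of the graceful handling (function name hypothetical):

```go
package auth

import (
	"errors"
	"io/fs"
	"os"
)

// loadConfigFile returns empty content when the file does not exist
// yet, since `auth login` writes the file on save anyway.
func loadConfigFile(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return nil, nil
	}
	return data, err
}
```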
## Tests
Unit test.
```
$ ls ~/.databrickscfg*
/Users/miles/.databrickscfg.bak
$ ./cli auth login
Databricks Profile Name: test
Databricks Host: https://<HOST>
Profile test was successfully saved
$ ls ~/.databrickscfg*
/Users/miles/.databrickscfg /Users/miles/.databrickscfg.bak
$ cat ~/.databrickscfg
; The profile defined in the DEFAULT section is to be used as a fallback when no profile is explicitly specified.
[DEFAULT]
[test]
host = https://<HOST>
auth_type = databricks-cli
```
## Changes
Adds a better error message when the input path is not a bundle
template.
Before:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: open /Users/shreyas.goenka/bricks/databricks_template_schema.json: no such file or directory
```
After:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: expected to find a template schema file at /Users/shreyas.goenka/bricks/databricks_template_schema.json
```