## Changes
The host URL for Databricks workspaces includes the workspace ID as a
query parameter by default. E.g.:
https://e2-dogfood.staging.cloud.databricks.com/?o=1234
As a result, a user can't simply copy-paste such a URL into the `auth
login` command today. They'll see a runtime error:
```
➜ cli git:(main) ✗ databricks auth login --host https://e2-dogfood.staging.cloud.databricks.com/\?o\=xxx --profile new-dg
Error: oidc: fetch .well-known: failed to unmarshal response body: invalid character '<' looking for beginning of value. This is likely a bug in the Databricks SDK for Go or the underlying REST API. Please report this issue with the following debugging information to the SDK issue tracker at https://github.com/databricks/databricks-sdk-go/issues. Request log:
GET /login.html
...
```
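The gist of the fix is to normalize the pasted URL down to its scheme and host before using it. A minimal sketch of that idea using Go's standard library (illustrative only; the CLI's actual normalization code may differ):
```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Keep only the scheme and host; drop the `?o=<workspace-id>` query
	// string (and any path) so the value is a usable workspace host.
	raw := "https://e2-dogfood.staging.cloud.databricks.com/?o=1234"
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s://%s\n", u.Scheme, u.Host)
	// Output: https://e2-dogfood.staging.cloud.databricks.com
}
```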
## Tests
Unit tests, plus manual testing. `auth login` now works even when the
workspace ID is included in the URL.
## Changes
The file presence check for dashboard files was missing a
`filepath.ToSlash` call.
As a result, it didn't work on Windows unless the dashboard was located
at a path without separators (i.e. at the bundle root).
Closes #1875.
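For illustration, a small sketch of how the separator mismatch arises and how `filepath.ToSlash` resolves it (a standalone example, not the CLI's actual check):
```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// On Windows, filepath.Join yields backslash-separated paths such as
	// `dashboards\main.lvdash.json`. A lookup keyed by slash-separated
	// paths then only matches files at the bundle root (no separator at
	// all). filepath.ToSlash normalizes the path on every platform.
	rel := filepath.Join("dashboards", "main.lvdash.json")
	fmt.Println(filepath.ToSlash(rel))
	// On Windows this prints dashboards/main.lvdash.json instead of
	// dashboards\main.lvdash.json.
}
```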
## Tests
* Added a unit test to cover this case (failed before the fix).
* Manually ran a dashboard deployment on Windows.
## Changes
The commit where resource lookup was factored out into a separate
package (#1858) didn't take into account the use of `args` further down
in the code.
This change fixes that oversight by returning the tail arguments when
determining which resource to run. The later call no longer has to index
the `args` slice.
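Conceptually, the lookup now works like this; `resolve` below is a hypothetical stand-in for the real function, shown only to illustrate the tail-arguments contract:
```go
package main

import "fmt"

// resolve is an illustrative stand-in for the lookup: it consumes the
// resource key from args and returns the remaining (tail) arguments,
// so the caller never has to re-index the slice itself.
func resolve(args []string) (key string, tail []string) {
	if len(args) == 0 {
		return "", nil
	}
	return args[0], args[1:]
}

func main() {
	key, tail := resolve([]string{"my_job", "--", "param1"})
	fmt.Println(key, tail) // my_job [-- param1]
}
```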
## Tests
Manually confirmed that the command works when being prompted for the
resource to run.
## Changes
Added an E2E test that runs Python wheels on an interactive cluster
created in the bundle.
We had a gap in testing wheels on all-purpose clusters, so this PR
addresses that gap.
## Changes
This PR adds the `cmd-exec-id` field to the user agent. This allows us
to correlate multiple HTTP requests made from the CLI.
### Why Not Use HTTP traceparent?
We considered using the traceparent header in HTTP as an alternative,
but it's not a good fit for our use case. Here's why:
1. Purpose of traceparent: It's designed to trace a single HTTP request
across a distributed system as it moves through subsystems and proxies.
2. Our requirement: We need to trace multiple HTTP requests made during
a single command execution in the CLI.
For more details about how traceparent itself works and how it's used in
the Go SDK, see
https://github.com/databricks/databricks-sdk-go/pull/914.
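A rough sketch of the mechanism: generate one ID per invocation and attach it to the user agent as a key/value pair. The version string and formatting below are illustrative; the CLI wires this through the Go SDK's user agent handling:
```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	// One ID per command execution; every HTTP request made during this
	// invocation carries it, so server-side logs can be correlated.
	cmdExecID := uuid.New().String()
	userAgent := fmt.Sprintf("databricks-cli/0.0.0 cmd-exec-id/%s", cmdExecID)
	fmt.Println(userAgent)
}
```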
## Tests
Unit test
## Changes
The old script could not be run from master due to security
restrictions, and there is no reliable way to detect whether a user has
secrets.
## Tests
Opened a PR in the Java SDK from a fork:
https://github.com/databricks/databricks-sdk-java/pull/375
## Changes
Go 1.23 was released 2+ months ago, so it has baked enough.
Blog post: https://go.dev/blog/go1.23.
## Tests
None other than unit and integration tests.
This patch release fixes the following error, observed when deploying
to a `/Shared` root folder:
`Error: Path (/Shared/.bundle/.../resources) doesn't exist`
Bundles:
* Fixed adding /Workspace prefix for resource paths
([#1866](https://github.com/databricks/cli/pull/1866)).
## Changes
This package can be used to marshal a `dyn.Value` as JSON while
retaining the ordering of keys in a mapping. Unlike the default
behavior of `json.Marshal`, the output does not escape HTML characters.
Otherwise, this is no different from calling `json.Marshal` on the
result of `v.AsAny()`.
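The HTML escaping difference can be demonstrated with the standard library alone; disabling escaping is what this package does by default (order-preserving key output additionally requires walking the `dyn.Value` mapping in order, which is not shown here):
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	v := map[string]string{"html": "<b>&</b>"}

	// json.Marshal escapes <, >, and & as \u003c, \u003e, \u0026.
	b, _ := json.Marshal(v)
	fmt.Println(string(b)) // {"html":"\u003cb\u003e\u0026\u003c/b\u003e"}

	// An encoder with HTML escaping disabled keeps them literal.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode(v)
	fmt.Print(buf.String()) // {"html":"<b>&</b>"}
}
```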
## Tests
Unit tests.
## Changes
This change adds the `databricks bundle generate dashboard` command.
The command requires one of three flags:
* `--existing-id` to generate configuration for an existing dashboard by
its ID.
* `--existing-path` to generate configuration for an existing dashboard
by its path in the workspace file system.
* `--resource` to generate the `.lvdash.json` dashboard file for a
dashboard that's already defined in the bundle. This option does not
impact the YAML configuration.
A typical workflow could look like this:
1. Use the command with `--existing-id` or `--existing-path` for a
starting point
2. Run `bundle deploy` to deploy a copy of the dashboard
3. Run `bundle open` to open this copy in your browser
4. Switch to draft mode and make modifications
5. Run `bundle generate dashboard` with `--resource` to update the local
`.lvdash.json` file with the remote modifications
## Tests
* Unit tests.
* Manual walkthrough as documented in the [Dashboard for NYC Taxi Trip
Analysis
example](https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi).
Bumps [github.com/fatih/color](https://github.com/fatih/color) from
1.17.0 to 1.18.0.
Release notes, sourced from [github.com/fatih/color's releases](https://github.com/fatih/color/releases):

**v1.18.0**
* Add RGB API support by [@fatih](https://github.com/fatih) in [fatih/color#225](https://redirect.github.com/fatih/color/pull/225)
* Bump GitHub workflow actions by [@deining](https://github.com/deining) in [fatih/color#235](https://redirect.github.com/fatih/color/pull/235)
* Bump golang.org/x/sys from 0.18.0 to 0.24.0 by [@dependabot](https://github.com/dependabot) in [fatih/color#236](https://redirect.github.com/fatih/color/pull/236)
* Bump golang.org/x/sys from 0.24.0 to 0.25.0 by [@dependabot](https://github.com/dependabot) in [fatih/color#237](https://redirect.github.com/fatih/color/pull/237)

New contributors: [@deining](https://github.com/deining) made their first contribution in [fatih/color#235](https://redirect.github.com/fatih/color/pull/235).

**Full Changelog**: https://github.com/fatih/color/compare/v1.17.0...v1.18.0
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Changes
Adding this notice allows us to collect telemetry for the CLI from the
next CLI version onwards.
---------
Co-authored-by: Julia Crawford (Databricks) <julia.crawford@databricks.com>
## Changes
Automatically trigger integration tests when a PR is opened or updated
## Tests
Workflow below.
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
As of #1846 we have a generalized package for doing resource lookups and
completion.
This change updates the run command to use this instead of more specific
code under `bundle/run`.
## Tests
* Unit tests pass
* Manually confirmed that completion and prompting works
## Changes
This validator compares the permissions defined in the top-level bundle
configuration against the permissions set in the workspace for the
folders the bundle is deployed to. It raises a warning if permissions
defined in the workspace are not defined in the bundle.
This validator is executed only during the `bundle validate` command.
## Tests
```
Warning: untracked permissions apply to target workspace path
The following permissions apply to the workspace folder at "/Workspace/Users/andrew.nester@databricks.com/.bundle/clusters/default" but are not configured in the bundle:
- level: CAN_MANAGE, user_name: andrew.nester@databricks.com
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
This builds on the functionality added in #1731 that produces a URL for
every resource.
Adds `bundle/resources` package to deal with resource lookups and
command completion. The new functionality is similar to the lookup and
command completion functionality located in `bundle/run`. It differs in
that it doesn't gracefully deal with ambiguous references to resources,
now that we explicitly validate this doesn't occur in the bundle
configuration. It still allows resources to be looked up with their
fully qualified key, `<plural type>.<key>`.
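A toy example of splitting such a fully qualified key (the real lookup lives in `bundle/resources`):
```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// "<plural type>.<key>" disambiguates resources that share a short
	// key across types, e.g. jobs.foo vs. pipelines.foo.
	typ, key, ok := strings.Cut("pipelines.my_project_pipeline", ".")
	fmt.Println(typ, key, ok) // pipelines my_project_pipeline true
}
```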
## Tests
* Added unit tests for resource lookup and completion
* Manually confirmed that `bundle open` prompts, accepts a key argument,
and opens a browser
## Changes
Test failures indicate that both stdout and stderr are consumed, yet the
content of stdout doesn't end up in the intended output. This can happen
if the goroutines responsible for writing to the combined output buffer
attempt to write to the same underlying buffer concurrently.
Example failure:
```
=== RUN TestBackgroundCombinedOutput
background_test.go:65:
Error Trace: D:/a/cli/cli/libs/process/background_test.go:65
Error: elements differ
extra elements in list A:
([]interface {}) (len=1) {
(string) (len=1) "2"
}
listA:
([]string) (len=2) {
(string) (len=1) "1",
(string) (len=1) "2"
}
listB:
([]string) (len=1) {
(string) (len=1) "1"
}
Test: TestBackgroundCombinedOutput
```
With the test body:
ca45e53f42/libs/process/background_test.go (L48-L66)
With the implementation of `WithCombinedOutput`:
ca45e53f42/libs/process/opts.go (L72-L78)
Notice that `c.Stdout` does receive the "2"; otherwise the test failure
would have included the relevant assertion error. This leads me to
believe that there is a race on writing to `buf` from the two
goroutines writing to `c.Stdout` and `c.Stderr`.
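The standard fix for this class of bug is to serialize writes to the shared buffer. A minimal sketch of that technique (not necessarily the exact change made in this PR):
```go
package main

import (
	"bytes"
	"io"
	"sync"
)

// syncWriter guards a shared writer with a mutex so that the stdout
// and stderr goroutines can't write to the combined buffer
// concurrently (bytes.Buffer is not safe for concurrent use).
type syncWriter struct {
	mu sync.Mutex
	w  io.Writer
}

func (s *syncWriter) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(p)
}

func main() {
	var buf bytes.Buffer
	w := &syncWriter{w: &buf}

	var wg sync.WaitGroup
	for _, line := range []string{"1\n", "2\n"} {
		wg.Add(1)
		go func(line string) {
			defer wg.Done()
			_, _ = w.Write([]byte(line))
		}(line)
	}
	wg.Wait()
}
```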
## Tests
The test passes. Whether this PR has the intended effect remains to be
seen...
## Changes
We want to use `bundle sync` in the VS Code extension before running a
file as an ad-hoc job (or through the context API). Right now we use
`bundle deploy` in these cases, but deploying bundle resources is not
always expected when you just want to quickly run a file. Sync makes
more sense in these cases, but we still want verbose output to see
what's happening.
The `deploy` command has a hidden `verbose` flag. For sync, I've added
an `output` flag instead, handling both JSON and text cases, similar to
how it's done in the non-bundle `sync` command. The flag is not hidden
(although we still don't show any output by default, if the flag is not
set).
VSCode Extension PR:
https://github.com/databricks/databricks-vscode/pull/1401
## Tests
Manually
Change the default-python template to not set the pipeline's `catalog`
field in workspaces that have `hive_metastore` as the default catalog.
The Pipelines service currently returns an error when that value is
used for the `catalog` field.
This is the simplest fix for this issue, which was reported by a
customer. As a follow-up, we should consider whether to prompt for a
catalog instead, possibly just for this specific scenario.
## Changes
We don't need to cancel existing runs when the job is continuous and
unpaused. The `/jobs/run-now` API will cancel the existing run and
trigger a new one automatically.
Cancelling the run manually can cause a race condition where the manual
trigger from the CLI and the continuous trigger from the job
configuration happen at the same time. This PR prevents that from
happening.
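A sketch of the guard, assuming the Go SDK's `jobs` types; the actual condition in the CLI may be written differently:
```go
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/jobs"
)

// shouldCancelExistingRuns is a hypothetical sketch: skip the manual
// cancellation when the job is continuous and unpaused, because
// /jobs/run-now cancels the active run and triggers a new one itself.
func shouldCancelExistingRuns(settings *jobs.JobSettings) bool {
	c := settings.Continuous
	return c == nil || c.PauseStatus == jobs.PauseStatusPaused
}

func main() {
	s := &jobs.JobSettings{
		Continuous: &jobs.Continuous{PauseStatus: jobs.PauseStatusUnpaused},
	}
	fmt.Println(shouldCancelExistingRuns(s)) // false: let run-now handle it
}
```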
## Tests
Unit tests and manually
## Changes
This change allows the `sync` command to work from [git
worktrees](https://git-scm.com/docs/git-worktree).
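Some background: in a worktree, `.git` is a plain file pointing at the real git directory instead of being a directory itself. A sketch of resolving it (illustrative; the actual traversal code handles more cases):
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveGitDir handles the worktree case where `.git` is a file
// containing a line like `gitdir: /repo/.git/worktrees/<name>`.
// In a regular checkout, `.git` is a directory and is returned as-is.
func resolveGitDir(root string) (string, error) {
	p := root + "/.git"
	fi, err := os.Stat(p)
	if err != nil {
		return "", err
	}
	if fi.IsDir() {
		return p, nil
	}
	data, err := os.ReadFile(p)
	if err != nil {
		return "", err
	}
	line := strings.TrimSpace(string(data))
	dir, ok := strings.CutPrefix(line, "gitdir: ")
	if !ok {
		return "", fmt.Errorf("unexpected contents in %s", p)
	}
	return dir, nil
}

func main() {
	dir, err := resolveGitDir(".")
	fmt.Println(dir, err)
}
```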
## Tests
* Added unit tests for traversal of worktree related files.
* Manually confirmed that synchronization of files from a main checkout,
as well as a worktree, observed the same ignore rules (both locally
defined as well as from `$GIT_DIR/info/exclude`).
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Convenience script to exec into a shell where a CLI build for a specific
branch is made available.
## Tests
Manually from `/tmp` with `bash <([path]) demo-dashboards`.
## Changes
This file is at `info/exclude`, and not `info/excludes`.
Also see https://git-scm.com/docs/gitignore.
## Tests
Manually confirmed that these ignore patterns are now picked up. I
created a repository with a pattern in this file and ran `sync` to
confirm it ignores files matching the pattern.
## Changes
Dashboards can be imported either via their own CRUD API or via the
workspace import API. This test encodes the assumptions we can make
about the API behavior. More specifically: the identity of the
dashboard is retained on workspace import, as are the properties that
aren't surfaced in the workspace import API.
## Tests
The integration test passes.
## Changes
In #1218, the `BundleToTerraform` function was replaced by a version
based on the dynamic configuration tree. Since then, it has only been
used in tests to confirm that the output of the old function was equal
to the output of the new function. We no longer need this and can
exclusively rely on the version based on the dynamic configuration tree.
## Tests
Tests pass.
## Changes
Added a warning when incorrect permissions are used for a
`/Workspace/Shared` bundle root.
## Tests
Added unit test
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
Adds a textual output to the `databricks bundle summary` command, which
includes URLs of deployed resources.
Example usage:
```
$ databricks bundle summary
Name: my_pipeline
Target: dev
Workspace:
  Host: https://domain.databricks.com
  User: user@databricks.com
  Path: /Users/user@databricks.com/.bundle/my_pipeline/dev
Resources:
  Jobs:
    my_project_job:
      Name: [dev lennart] my_project_job
      URL:  https://domain.databricks.com/jobs/206899209187287?o=6051921418418893
  Pipelines:
    my_project_pipeline:
      Name: [dev lennart] my_project_pipeline
      URL:  https://domain.databricks.com/pipelines/3f849fd5-ba7d-47fa-a34c-c6bf034b4f58?o=6051921418418893
```
Notes:
* The top headers of the output are the same as those from the existing
`bundle validate` command
* URLs are colored light blue in the output
* For resources that haven't been deployed yet, we show `(not deployed)`
in place of the URL
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
## Changes
I took the test examples from https://yaml.org/spec/1.2.2.
The required modifications to the loader are:
* Correctly parse floating point infinities and NaN
* Correctly parse octal numbers per the YAML 1.2 spec
* Treat "null" keys in a map as valid
## Tests
Existing and new unit tests pass.
## Changes
The issue reported in #1828 illustrates how using a YAML timestamp-like
value (a date in this case) causes an issue during conversion to and
from the typed configuration tree.
We use the `AsAny()` function on the `dyn.Value` when normalizing for
the `any` type. We only use the `any` type for variable values, because
they can assume every type. The `AsAny()` function returns a `time.Time`
for the time value during conversion **to** the typed configuration
tree. Upon conversion **from** the typed configuration tree back into
the dynamic configuration tree, we cannot distinguish a `time.Time`
struct from any other struct.
To address this, we use the underlying string value of the time value
when we normalize for the `any` type.
Fixes #1828.
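A small demonstration of the ambiguity (illustrative only):
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A YAML scalar like 2024-01-01 parses as a timestamp. Once stored
	// in an `any`, the resulting time.Time is just another struct, so
	// converting back into the dynamic tree can't recover "this was a
	// date string". Keeping the original string representation avoids
	// the round-trip loss.
	t, _ := time.Parse(time.DateOnly, "2024-01-01")
	var asTime any = t
	var asString any = "2024-01-01"
	fmt.Printf("%T vs %T\n", asTime, asString) // time.Time vs string
}
```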
## Tests
Existing unit tests pass
## Changes
This test passes on normal `azure-prod` but started to fail on
`azure-prod-is`, the isolated version of `azure-prod`. This PR patches
the test to also accept the error returned from the cloud setup in
`azure-prod-is`.
## Tests
The test passes now on `azure-prod-is`.
## Changes
Fixed unmarshalling JSON input into the `interface{}` type.
Commands like `api post` support free-form request input, so it should
be unmarshalled correctly.
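The expected behavior, sketched with the standard library: free-form JSON decodes into generic maps and slices when the target is `interface{}`:
```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Free-form request payloads for commands like `api post` should
	// decode into an interface{} as map[string]any / []any values.
	var v any
	if err := json.Unmarshal([]byte(`{"name": "abc", "count": 2}`), &v); err != nil {
		panic(err)
	}
	fmt.Printf("%#v\n", v)
	// map[string]interface {}{"count":2, "name":"abc"}
	// (JSON numbers decode as float64)
}
```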
## Tests
Added a regression test; E2E tests pass.
## Changes
Follow-up from
https://github.com/databricks/cli/pull/1809#discussion_r1790045086.
Users will see the following error if the SDK version does not match
during CLI code generation.
```
--- FAIL: TestConsistentDatabricksSdkVersion (1.34s)
info_test.go:53:
Error Trace: /Users/shreyas.goenka/cli/internal/build/info_test.go:53
Error: Not equal:
expected: "0c86ea6dbd9a730c24ff0d4e509603e476955ac5"
actual : "6f6b1371e640f2dfeba72d365ac566368656f6b6"
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-0c86ea6dbd9a730c24ff0d4e509603e476955ac5
+6f6b1371e640f2dfeba72d365ac566368656f6b6
Test: TestConsistentDatabricksSdkVersion
Messages: please update the SDK version before generating the CLI
```
## Tests
Manually asserted that:
1. Generating the CLI without the SDK bump causes an error.
2. Generating the CLI after the SDK bump works as expected.
3. The test works even when the SDK is pinned to a specific commit like
`v0.47.1-0.20241002195128-6cecc224cbf7`.
## Changes
Added JSON input validation for CLI commands. Now, when invalid JSON is
passed as a payload to a CLI command, the CLI performs input
normalisation and detects mismatches such as incorrect types, unknown
fields, etc.
This diagnostic information is printed to standard error output and
does not block command execution, so the change is backward compatible.
Fixes #1769, #1764, #1625, #1560.
## Tests
Added unit tests
```
andrew.nester@HFW9Y94129 ~ % databricks jobs create --json '{"seeti}'
Error: error decoding JSON at (inline):1:2: unexpected EOF
andrew.nester@HFW9Y94129 ~ % databricks jobs create --json '{"seeti": true}'
Warning: unknown field: seeti
in (inline):1:9
Error: Job settings must be specified.
```
---------
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
## Changes
This extends the `{{default_catalog}}` helper in templates to ignore any
`PERMISSION_DENIED` error. We're still reviewing when exactly this error
occurs, but if it does, it should not break templates. We should fall
back to assuming there's no default catalog (and no UC) instead.
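A sketch of the fallback, assuming the Go SDK's `apierr.APIError` type; the helper name is hypothetical:
```go
package main

import (
	"errors"
	"fmt"

	"github.com/databricks/databricks-sdk-go/apierr"
)

// fallbackOnPermissionDenied is an illustrative sketch: treat a
// PERMISSION_DENIED response as "no default catalog / no UC" rather
// than failing template initialization.
func fallbackOnPermissionDenied(catalog string, err error) (string, error) {
	var aerr *apierr.APIError
	if errors.As(err, &aerr) && aerr.ErrorCode == "PERMISSION_DENIED" {
		return "", nil
	}
	return catalog, err
}

func main() {
	apiErr := &apierr.APIError{ErrorCode: "PERMISSION_DENIED"}
	catalog, err := fallbackOnPermissionDenied("main", apiErr)
	fmt.Printf("catalog=%q err=%v\n", catalog, err) // catalog="" err=<nil>
}
```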
## Testing
I have not been able to reproduce this issue, but there is a customer
report about "access denied to clusters that don't have unity catalog
enabled" being returned on a non-UC workspace. The error code in this PR
corresponds to that message.
## Next steps
We'll work together with the UC team to review if this error even makes
sense for this API. If that discussion leads to a behavior change in the
API we can update the CLI code again.
## Changes
The two functions `GetShortUserName` and `IsServicePrincipal` are
unrelated to auth or the purpose of the auth package. This change moves
them into their own package and updates `IsServicePrincipal` to take an
`*iam.User` argument instead of a string username.
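For context, one common heuristic is that a service principal's user name is a UUID (its application ID), while a human user's is an email address. The sketch below assumes that heuristic; it is not a quote of the actual implementation:
```go
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/iam"
	"github.com/google/uuid"
)

// isServicePrincipal sketches the check against *iam.User: if the
// user name parses as a UUID, treat it as a service principal.
func isServicePrincipal(u *iam.User) bool {
	_, err := uuid.Parse(u.UserName)
	return err == nil
}

func main() {
	fmt.Println(isServicePrincipal(&iam.User{UserName: "ae39b1d4-0fcd-4dd1-8e32-6a1b9e3f1f1a"})) // true
	fmt.Println(isServicePrincipal(&iam.User{UserName: "alice@example.com"}))                    // false
}
```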
## Tests
Tests pass.
## Changes
This adds diagnostics for collaborative (production) deployment
scenarios, including:
- Bob deploys a bundle that is normally deployed by Alice, but this
fails because Bob can't write to `/Users/Alice/.bundle`.
- Charlie deploys a bundle that is normally deployed by Alice, but this
fails because he can't create a new pipeline where Alice would be the
owner.
- Alice deploys a bundle where she didn't list herself as one of the
CAN_MANAGE users in permissions. That can work, but is probably a
mistake.
## Tests
Unit tests, manual testing.