Compare commits

26 Commits

Author SHA1 Message Date
Shreyas Goenka faca8ccd7a
remove second py notebook 2024-10-31 14:00:38 +01:00
Shreyas Goenka 606c0c52a1
combine tests with and wo override flag 2024-10-31 13:56:41 +01:00
Shreyas Goenka 9602785347
- 2024-10-31 11:46:13 +01:00
Shreyas Goenka 7d5eca562f
remove extension type 2024-10-31 11:25:37 +01:00
Shreyas Goenka 865c6138f2
Merge remote-tracking branch 'origin' into support-non-python-ipynb 2024-10-31 11:21:32 +01:00
Andrew Nester a56d082910
[Release] Release v0.232.1 (#1869)
This patch release fixes the following error observed when deploying to the
/Shared root folder:
"Error: Path (/Shared/.bundle/.../resources) doesn't exist"

Bundles:
* Fixed adding /Workspace prefix for resource paths
([#1866](https://github.com/databricks/cli/pull/1866)).
2024-10-30 18:02:49 +00:00
Andrew Nester ac71d2e5ce
Fixed adding /Workspace prefix for resource paths (#1866)
## Changes
`/Workspace` prefix needs to be added to `resource_path` as well.

Fixes the issue mentioned here:
https://github.com/databricks/cli/pull/1822#issuecomment-2447697498

Fixes #1867 

## Tests
Added regression test
2024-10-30 17:34:11 +00:00
Andrew Nester 001a8da882
[Release] Release v0.232.0 (#1865)
**New features for Databricks Asset Bundles:**

This release adds support for managing AI/BI dashboards as part of your
bundle configuration. The `bundle generate` command is updated to
support producing dashboard bundle configuration as well as a serialized
JSON representation of the dashboard.
You can find an example configuration and walkthrough at
https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi

CLI:
* Add privacy notice to README
([#1841](https://github.com/databricks/cli/pull/1841)).

Bundles:
* Add support for AI/BI dashboards
([#1743](https://github.com/databricks/cli/pull/1743)).
* Added validator for folder permissions
([#1824](https://github.com/databricks/cli/pull/1824)).
* Add bundle generate variant for dashboards
([#1847](https://github.com/databricks/cli/pull/1847)).
* Use SetPermissions instead of UpdatePermissions when setting folder
permissions based on top-level ones
([#1822](https://github.com/databricks/cli/pull/1822)).

Internal:
* Attempt to reduce test flakiness on Windows
([#1845](https://github.com/databricks/cli/pull/1845)).
* Reuse resource resolution code for the run command
([#1858](https://github.com/databricks/cli/pull/1858)).
* [Internal] Automatically trigger integration tests on PR
([#1857](https://github.com/databricks/cli/pull/1857)).
* [Internal] Add test instructions for external contributors
([#1863](https://github.com/databricks/cli/pull/1863)).
* Add `libs/dyn/jsonsaver`
([#1862](https://github.com/databricks/cli/pull/1862)).


Dependency updates:
* Bump github.com/fatih/color from 1.17.0 to 1.18.0
([#1861](https://github.com/databricks/cli/pull/1861)).

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-30 14:39:02 +00:00
Pieter Noordhuis fa25b92ba1
Add `libs/dyn/jsonsaver` (#1862)
## Changes

This package can be used to marshal a `dyn.Value` as JSON while retaining the
ordering of keys in a mapping. Unlike the default behavior of
`json.Marshal`, the output does not escape HTML characters.

Otherwise, this is no different from using `json.Marshal` with
`v.AsAny()`.

## Tests

Unit tests.
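
For context, here is a minimal sketch of the stdlib behavior this package works around (the `jsonsaver` API itself is not shown here): `json.Marshal` sorts map keys and escapes HTML characters, while a `json.Encoder` with `SetEscapeHTML(false)` avoids the escaping; retaining the original key order of a `dyn.Value` mapping still requires walking the value manually, which is what the package adds.

```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	v := map[string]any{"html": "<b>&</b>", "z": 1, "a": 2}

	// Default behavior: HTML characters are escaped and map keys are sorted.
	b, _ := json.Marshal(v)
	fmt.Println(string(b)) // {"a":2,"html":"\u003cb\u003e\u0026\u003c/b\u003e","z":1}

	// An encoder can disable HTML escaping, but map keys are still sorted.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode(v)
	fmt.Print(buf.String()) // {"a":2,"html":"<b>&</b>","z":1}
}
```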
2024-10-29 15:32:33 +00:00
hectorcast-db 20fd401550
[Internal] Add test instructions for external contributors (#1863)
## Changes
Add test instructions for external contributors

## Tests
See the corresponding Go SDK change:
https://github.com/databricks/databricks-sdk-go/pull/1073
2024-10-29 13:42:36 +00:00
Andrew Nester f018daf413
Use SetPermissions instead of UpdatePermissions when setting folder permissions based on top-level ones (#1822)
## Changes
Changed to use SetPermissions() to configure the permissions which
remove other permissions on deployment folders.

## Tests
Added unit test
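
To illustrate the difference, here is a sketch with made-up types (not the SDK's actual permissions API): an update merges the desired grants into whatever is already there, while a set replaces the ACL wholesale, which is what removes the stray permissions.

```
package main

import "fmt"

// acl maps a principal to a permission level. Illustrative only.
type acl map[string]string

// updatePermissions merges desired grants into the existing ACL,
// leaving unrelated grants in place (the previous behavior).
func updatePermissions(existing, desired acl) acl {
	out := acl{}
	for k, v := range existing {
		out[k] = v
	}
	for k, v := range desired {
		out[k] = v
	}
	return out
}

// setPermissions replaces the ACL wholesale, dropping grants that are
// not part of the bundle's top-level permissions (the new behavior).
func setPermissions(_, desired acl) acl {
	out := acl{}
	for k, v := range desired {
		out[k] = v
	}
	return out
}

func main() {
	existing := acl{"alice": "CAN_MANAGE", "bob": "CAN_VIEW"}
	desired := acl{"alice": "CAN_MANAGE"}
	fmt.Println(updatePermissions(existing, desired)) // map[alice:CAN_MANAGE bob:CAN_VIEW]
	fmt.Println(setPermissions(existing, desired))    // map[alice:CAN_MANAGE]
}
```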
2024-10-29 12:06:38 +00:00
Pieter Noordhuis 1896b09350
Add bundle generate variant for dashboards (#1847)
## Changes

This change adds the `databricks bundle generate dashboard` command.

The command requires one of three flags:
* `--existing-id` to generate configuration for an existing dashboard by
its ID.
* `--existing-path` to generate configuration for an existing dashboard
by its path in the workspace file system.
* `--resource` to generate the `.lvdash.json` dashboard file for a
dashboard that's already defined in the bundle. This option does not
impact the YAML configuration.

A typical workflow could look like this:
1. Use the command with `--existing-id` or `--existing-path` for a
starting point
2. Run `bundle deploy` to deploy a copy of the dashboard
3. Run `bundle open` to open this copy in your browser
4. Navigate to draft mode and make modifications
5. Run `bundle generate dashboard` with `--resource` to update the local
`.lvdash.json` file with the remote modifications

## Tests

* Unit tests.
* Manual walkthrough as documented in the [Dashboard for NYC Taxi Trip
Analysis
example](https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi).
2024-10-29 11:51:59 +00:00
Pieter Noordhuis 11f75fd320
Add support for AI/BI dashboards (#1743)
## Changes

This change adds support for modeling [AI/BI dashboards][docs] in DABs.


[Example bundle configuration][example] is located in the
`bundle-examples` repository.

[docs]: https://docs.databricks.com/en/dashboards/index.html#dashboards
[example]:
https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi

## Tests

* Added unit tests for self-contained parts
* Integration test for e2e dashboard deployment and remote change
modification
2024-10-29 09:11:08 +00:00
dependabot[bot] 9c96f006c4
Bump github.com/fatih/color from 1.17.0 to 1.18.0 (#1861)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from
1.17.0 to 1.18.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/fatih/color/releases">github.com/fatih/color's
releases</a>.</em></p>
<blockquote>
<h2>v1.18.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Add RGB API support by <a
href="https://github.com/fatih"><code>@​fatih</code></a> in <a
href="https://redirect.github.com/fatih/color/pull/225">fatih/color#225</a></li>
<li>Bump GitHub workflow actions by <a
href="https://github.com/deining"><code>@​deining</code></a> in <a
href="https://redirect.github.com/fatih/color/pull/235">fatih/color#235</a></li>
<li>Bump golang.org/x/sys from 0.18.0 to 0.24.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/fatih/color/pull/236">fatih/color#236</a></li>
<li>Bump golang.org/x/sys from 0.24.0 to 0.25.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/fatih/color/pull/237">fatih/color#237</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/deining"><code>@​deining</code></a> made
their first contribution in <a
href="https://redirect.github.com/fatih/color/pull/235">fatih/color#235</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/fatih/color/compare/v1.17.0...v1.18.0">https://github.com/fatih/color/compare/v1.17.0...v1.18.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1c8d870660"><code>1c8d870</code></a>
Update README.md</li>
<li><a
href="546c2d0f9a"><code>546c2d0</code></a>
Merge pull request <a
href="https://redirect.github.com/fatih/color/issues/225">#225</a> from
fatih/add-rgb-api</li>
<li><a
href="1ff0f97908"><code>1ff0f97</code></a>
Apply suggestions from code review</li>
<li><a
href="5723903daa"><code>5723903</code></a>
Add RGB API support</li>
<li><a
href="f203fbcecb"><code>f203fbc</code></a>
Merge pull request <a
href="https://redirect.github.com/fatih/color/issues/237">#237</a> from
fatih/dependabot/go_modules/golang.org/x/sys-0.25.0</li>
<li><a
href="60aa7fb483"><code>60aa7fb</code></a>
Bump golang.org/x/sys from 0.24.0 to 0.25.0</li>
<li><a
href="741c2f4087"><code>741c2f4</code></a>
Merge pull request <a
href="https://redirect.github.com/fatih/color/issues/236">#236</a> from
fatih/dependabot/go_modules/golang.org/x/sys-0.24.0</li>
<li><a
href="0d24b42a27"><code>0d24b42</code></a>
Bump golang.org/x/sys from 0.18.0 to 0.24.0</li>
<li><a
href="cb154c0218"><code>cb154c0</code></a>
Merge pull request <a
href="https://redirect.github.com/fatih/color/issues/235">#235</a> from
deining/fix-typo</li>
<li><a
href="9b9653e8ce"><code>9b9653e</code></a>
Bump GitHub workflow actions</li>
<li>See full diff in <a
href="https://github.com/fatih/color/compare/v1.17.0...v1.18.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/fatih/color&package-manager=go_modules&previous-version=1.17.0&new-version=1.18.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-29 08:57:10 +00:00
shreyas-goenka 6f05931600
Add privacy notice to README (#1841)
## Changes
Adding this notice allows us to collect telemetry for the CLI from the
next CLI version onwards.

---------

Co-authored-by: Julia Crawford (Databricks) <julia.crawford@databricks.com>
2024-10-28 18:19:55 +00:00
hectorcast-db 5a555de503
[Internal] Automatically trigger integration tests on PR (#1857)
## Changes
Automatically trigger integration tests when a PR is opened or updated

## Tests
Workflow below.

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-25 09:15:24 +00:00
Pieter Noordhuis ed84a33b0a
Reuse resource resolution code for the run command (#1858)
## Changes

As of #1846 we have a generalized package for doing resource lookups and
completion.

This change updates the run command to use this instead of more specific
code under `bundle/run`.

## Tests

* Unit tests pass
* Manually confirmed that completion and prompting works
2024-10-24 13:24:30 +00:00
Andrew Nester eaea308254
Added validator for folder permissions (#1824)
## Changes
This validator checks the permissions defined in the top-level bundle config
against the permissions set in the workspace for the folders the bundle is
deployed to. It raises a warning if permissions that are defined in the
workspace are not defined in the bundle.

This validator is executed only during the `bundle validate` command.

## Tests

```
Warning: untracked permissions apply to target workspace path

The following permissions apply to the workspace folder at "/Workspace/Users/andrew.nester@databricks.com/.bundle/clusters/default" but are not configured in the bundle:
- level: CAN_MANAGE, user_name: andrew.nester@databricks.com
```
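
The core of such a check can be sketched as a set difference (the types here are illustrative stand-ins, not the validator's actual ones): collect the grants present on the workspace folder, subtract the ones configured in the bundle, and report the remainder.

```
package main

import "fmt"

// grant is an illustrative stand-in for a permission entry.
type grant struct {
	Level    string
	UserName string
}

// untrackedGrants returns workspace grants that are not present in the
// bundle's top-level permissions, mirroring the warning above.
func untrackedGrants(workspace, bundle []grant) []grant {
	configured := map[grant]bool{}
	for _, g := range bundle {
		configured[g] = true
	}
	var out []grant
	for _, g := range workspace {
		if !configured[g] {
			out = append(out, g)
		}
	}
	return out
}

func main() {
	ws := []grant{{"CAN_MANAGE", "andrew.nester@databricks.com"}}
	for _, g := range untrackedGrants(ws, nil) {
		fmt.Printf("- level: %s, user_name: %s\n", g.Level, g.UserName)
	}
}
```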

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-24 12:36:17 +00:00
Pieter Noordhuis 89ee7d8a99
Add command to open a resource in the browser (#1846)
## Changes

This builds on the functionality added in #1731 that produces a URL for
every resource.

Adds `bundle/resources` package to deal with resource lookups and
command completion. The new functionality is similar to the lookup and
command completion functionality located in `bundle/run`. It differs in
that it doesn't gracefully deal with ambiguous references to resources,
now that we explicitly validate this doesn't occur in the bundle
configuration. It still allows resources to be looked up with their
fully qualified key, `<plural type>.<key>`.

## Tests

* Added unit tests for resource lookup and completion
* Manually confirmed that `bundle open` prompts, accepts a key argument,
and opens a browser
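
A sketch of the two lookup forms follows (the data and names are hypothetical; the real implementation operates on the bundle configuration): a fully qualified `<plural type>.<key>` reference resolves directly, while a bare key must match exactly one resource across all types.

```
package main

import (
	"fmt"
	"strings"
)

// resources maps plural resource type -> resource key -> URL. Hypothetical data.
var resources = map[string]map[string]string{
	"jobs":      {"my_job": "https://example.com/jobs/123"},
	"pipelines": {"my_pipeline": "https://example.com/pipelines/456"},
}

// lookup resolves either a fully qualified "<plural type>.<key>" reference
// or a bare key, which must be unambiguous across all resource types.
func lookup(ref string) (string, error) {
	if typ, key, ok := strings.Cut(ref, "."); ok {
		if url, found := resources[typ][key]; found {
			return url, nil
		}
		return "", fmt.Errorf("resource %q not found", ref)
	}
	var matches []string
	for _, byKey := range resources {
		if url, found := byKey[ref]; found {
			matches = append(matches, url)
		}
	}
	switch len(matches) {
	case 1:
		return matches[0], nil
	case 0:
		return "", fmt.Errorf("resource %q not found", ref)
	default:
		return "", fmt.Errorf("reference %q is ambiguous", ref)
	}
}

func main() {
	url, err := lookup("jobs.my_job")
	fmt.Println(url, err)
}
```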
2024-10-24 12:20:33 +00:00
Pieter Noordhuis eddaddaf8b
Attempt to reduce test flakiness on Windows (#1845)
## Changes

Test failures indicate that both stdout and stderr are consumed, yet the
content of stdout doesn't end up in the intended output. This can happen
if the goroutines responsible for writing to the combined output buffer
attempt to write to the same underlying buffer concurrently.

Example failure:
```
=== RUN   TestBackgroundCombinedOutput
    background_test.go:65: 
        	Error Trace:	D:/a/cli/cli/libs/process/background_test.go:65
        	Error:      	elements differ
        	            	
        	            	extra elements in list A:
        	            	([]interface {}) (len=1) {
        	            	 (string) (len=1) "2"
        	            	}
        	            	
        	            	
        	            	listA:
        	            	([]string) (len=2) {
        	            	 (string) (len=1) "1",
        	            	 (string) (len=1) "2"
        	            	}
        	            	
        	            	
        	            	listB:
        	            	([]string) (len=1) {
        	            	 (string) (len=1) "1"
        	            	}
        	Test:       	TestBackgroundCombinedOutput
```

With the test body:

ca45e53f42/libs/process/background_test.go (L48-L66)

With the implementation of `WithCombinedOutput`:

ca45e53f42/libs/process/opts.go (L72-L78)

Notice that `c.Stdout` does receive the "2"; otherwise, the test failure
would have included the relevant assertion error. This leads me to believe
that there is a race on writing to `buf` from the two goroutines writing to
`c.Stdout` and `c.Stderr`.

## Tests

The test passes. Whether this PR has the intended effect remains to be
seen...
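
One way to remove such a race, as a sketch (not necessarily this PR's exact fix), is to serialize all writes to the shared buffer behind a mutex, since `bytes.Buffer` is not safe for concurrent use:

```
package main

import (
	"bytes"
	"io"
	"sync"
)

// syncWriter serializes concurrent writes to an underlying writer.
type syncWriter struct {
	mu sync.Mutex
	w  io.Writer
}

func (s *syncWriter) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(p)
}

func main() {
	var buf bytes.Buffer
	w := &syncWriter{w: &buf} // both goroutines share one guarded writer

	var wg sync.WaitGroup
	for _, line := range []string{"1\n", "2\n"} {
		wg.Add(1)
		go func(line string) {
			defer wg.Done()
			_, _ = io.WriteString(w, line)
		}(line)
	}
	wg.Wait()
	print(buf.String())
}
```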
2024-10-24 12:03:12 +00:00
Andrew Nester ab622e65bb
[Release] Release v0.231.0 (#1856)
CLI:
* Added JSON input validation for CLI commands
([#1771](https://github.com/databricks/cli/pull/1771)).
* Support Git worktrees for `sync`
([#1831](https://github.com/databricks/cli/pull/1831)).

Bundles:
* Add `bundle summary` to display URLs for deployed resources
([#1731](https://github.com/databricks/cli/pull/1731)).
* Added a warning when incorrect permissions used for
`/Workspace/Shared` bundle root
([#1821](https://github.com/databricks/cli/pull/1821)).
* Show actionable errors for collaborative deployment scenarios
([#1386](https://github.com/databricks/cli/pull/1386)).
* Fix path to repository-wide exclude file
([#1837](https://github.com/databricks/cli/pull/1837)).
* Fixed typo in converting cluster permissions
([#1826](https://github.com/databricks/cli/pull/1826)).
* Ignore metastore permission error during template generation
([#1819](https://github.com/databricks/cli/pull/1819)).
* Handle normalization of `dyn.KindTime` into an any type
([#1836](https://github.com/databricks/cli/pull/1836)).
* Added support for pip options in environment dependencies
([#1842](https://github.com/databricks/cli/pull/1842)).
* Fix race condition when restarting continuous jobs
([#1849](https://github.com/databricks/cli/pull/1849)).
* Fix pipeline in default-python template not working for certain
workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
* Add "output" flag to the bundle sync command
([#1853](https://github.com/databricks/cli/pull/1853)).

Internal:
* Move utility functions dealing with IAM to libs/iamutil
([#1820](https://github.com/databricks/cli/pull/1820)).
* Remove unused `IS_OWNER` constant
([#1823](https://github.com/databricks/cli/pull/1823)).
* Assert SDK version is consistent in the CLI generation process
([#1814](https://github.com/databricks/cli/pull/1814)).
* Fixed unmarshalling json input into `interface{}` type
([#1832](https://github.com/databricks/cli/pull/1832)).
* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure
environments ([#1833](https://github.com/databricks/cli/pull/1833)).
* Add behavioral tests for examples from the YAML spec
([#1835](https://github.com/databricks/cli/pull/1835)).
* Remove Terraform conversion function that's no longer used
([#1840](https://github.com/databricks/cli/pull/1840)).
* Encode assumptions about the dashboards API in a test
([#1839](https://github.com/databricks/cli/pull/1839)).
* Add script to make testing of code on branches easier
([#1844](https://github.com/databricks/cli/pull/1844)).

API Changes:
 * Added `databricks disable-legacy-dbfs` command group.

OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
Dependency updates:
* Upgrade TF provider to 1.54.0
([#1852](https://github.com/databricks/cli/pull/1852)).
* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0
([#1843](https://github.com/databricks/cli/pull/1843)).
2024-10-23 14:08:27 +00:00
Ilia Babanov 55a055d0f5
Add "output" flag to the bundle sync command (#1853)
## Changes
We want to use 'bundle sync' in the vscode extension before running a
file as an ad-hoc job (or through the context api). Right now we use
bundle deploy in these cases, but deploying bundle resources is not
always expected when you just want to quickly run a file. Sync makes
more sense in these cases, but we still want to have verbose output to
see what's happening.

The 'deploy' command has a hidden 'verbose' flag. For sync I've added an
'output' flag instead, handling both the json and text cases, similar to
how it's done in the non-bundle `sync` command. The flag is not hidden
(although we still don't show any output by default if the flag is not
set).

VSCode Extension PR:
https://github.com/databricks/databricks-vscode/pull/1401


## Tests
Manually
2024-10-23 11:08:12 +00:00
Lennart Kats (databricks) 60c153c0e7
Fix pipeline in default-python template not working for certain workspaces (#1854)
Change the default-python template to not set the `catalog` field for
the pipeline for workspaces that set `hive_metastore` as the default
catalog. The Pipelines service currently returns an error when that
value is used for the `catalog` field.

This is the simplest fix for this issue, which was reported by a
customer. As a follow-up, we should look at whether we want to prompt for
a catalog instead, possibly just for this specific scenario.
2024-10-22 15:52:46 +00:00
shreyas-goenka 3bab21e72e
Fix race condition when restarting continuous jobs (#1849)
## Changes
We don't need to cancel existing runs when the job is continuous and
unpaused. The `/jobs/run-now` API call will cancel the existing run and
trigger a new one automatically.

Cancelling the job manually can cause a race condition where the
manual trigger from the CLI and the continuous trigger from the job
configuration happen at the same time. This PR prevents that from
happening.

## Tests
Unit tests and manually
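
The decision the fix boils down to can be sketched as follows (the types are illustrative stand-ins for the jobs API types from databricks-sdk-go):

```
package main

import "fmt"

// jobSettings is an illustrative stand-in for the jobs API settings type.
type jobSettings struct {
	Continuous *struct{ PauseStatus string }
}

// shouldCancelExistingRuns mirrors the fix: for a continuous, unpaused
// job, skip manual cancellation and let /jobs/run-now replace the active
// run, avoiding the race between the two triggers.
func shouldCancelExistingRuns(s jobSettings) bool {
	if s.Continuous != nil && s.Continuous.PauseStatus == "UNPAUSED" {
		return false
	}
	return true
}

func main() {
	unpaused := jobSettings{Continuous: &struct{ PauseStatus string }{"UNPAUSED"}}
	fmt.Println(shouldCancelExistingRuns(unpaused))      // false
	fmt.Println(shouldCancelExistingRuns(jobSettings{})) // true
}
```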
2024-10-22 14:59:17 +00:00
Andrew Nester 68d69d6e0b
Upgrade TF provider to 1.54.0 (#1852)
## Changes
Upgrade TF provider to 1.54.0
2024-10-22 10:43:43 +00:00
dependabot[bot] f8bb3a8d72
Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 (#1843)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.48.0 to 0.49.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.49.0</h2>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DisableLegacyDbfsAPI">w.DisableLegacyDbfs</a>
workspace-level service.</li>
<li>Added <code>UnityCatalogProvisioningState</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li>
<li>Added <code>IsTruncated</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Result">dashboards.Result</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#BaseJob">jobs.BaseJob</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CreateJob">jobs.CreateJob</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#Job">jobs.Job</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobSettings">jobs.JobSettings</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li>
<li>Added <code>Report</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#IngestionConfig">pipelines.IngestionConfig</a>.</li>
<li>Added <code>SequenceBy</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#TableSpecificConfig">pipelines.TableSpecificConfig</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#Alert">sql.Alert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#CreateAlertRequestAlert">sql.CreateAlertRequestAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#ListAlertsResponseAlert">sql.ListAlertsResponseAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#UpdateAlertRequestAlert">sql.UpdateAlertRequestAlert</a>.</li>
</ul>
<p>OpenAPI SHA: cf9c61453990df0f9453670f2fe68e1b128647a2, Date:
2024-10-14</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md">github.com/databricks/databricks-sdk-go's
changelog</a>.</em></p>
<blockquote>
<h2>[Release] Release v0.49.0</h2>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DisableLegacyDbfsAPI">w.DisableLegacyDbfs</a>
workspace-level service.</li>
<li>Added <code>UnityCatalogProvisioningState</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li>
<li>Added <code>IsTruncated</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Result">dashboards.Result</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#BaseJob">jobs.BaseJob</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CreateJob">jobs.CreateJob</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#Job">jobs.Job</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobSettings">jobs.JobSettings</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li>
<li>Added <code>Report</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#IngestionConfig">pipelines.IngestionConfig</a>.</li>
<li>Added <code>SequenceBy</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#TableSpecificConfig">pipelines.TableSpecificConfig</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#Alert">sql.Alert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#CreateAlertRequestAlert">sql.CreateAlertRequestAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#ListAlertsResponseAlert">sql.ListAlertsResponseAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#UpdateAlertRequestAlert">sql.UpdateAlertRequestAlert</a>.</li>
</ul>
<p>OpenAPI SHA: cf9c61453990df0f9453670f2fe68e1b128647a2, Date:
2024-10-14</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b47ecac9ea"><code>b47ecac</code></a>
[Release] Release v0.49.0 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1062">#1062</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.48.0...v0.49.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.48.0&new-version=0.49.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-10-22 09:37:01 +00:00
96 changed files with 4412 additions and 379 deletions

@@ -1 +1 @@
-0c86ea6dbd9a730c24ff0d4e509603e476955ac5
+cf9c61453990df0f9453670f2fe68e1b128647a2

.gitattributes

@@ -54,6 +54,7 @@ cmd/workspace/dashboards/dashboards.go linguist-generated=true
 cmd/workspace/data-sources/data-sources.go linguist-generated=true
 cmd/workspace/default-namespace/default-namespace.go linguist-generated=true
 cmd/workspace/disable-legacy-access/disable-legacy-access.go linguist-generated=true
+cmd/workspace/disable-legacy-dbfs/disable-legacy-dbfs.go linguist-generated=true
 cmd/workspace/enhanced-security-monitoring/enhanced-security-monitoring.go linguist-generated=true
 cmd/workspace/experiments/experiments.go linguist-generated=true
 cmd/workspace/external-locations/external-locations.go linguist-generated=true

.github/workflows/external-message.yml (new file)

@@ -0,0 +1,114 @@
name: PR Comment

# WARNING:
# THIS WORKFLOW ALWAYS RUNS FOR EXTERNAL CONTRIBUTORS WITHOUT ANY APPROVAL.
# THIS WORKFLOW RUNS FROM MAIN BRANCH, NOT FROM THE PR BRANCH.
# DO NOT PULL THE PR OR EXECUTE ANY CODE FROM THE PR.

on:
  pull_request_target:
    types: [opened, reopened, synchronize]
    branches:
      - main

jobs:
  comment-on-pr:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      # NOTE: The following checks may not be accurate depending on Org or Repo settings.
      - name: Check user and potential secret access
        id: check-secrets-access
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          USER_LOGIN="${{ github.event.pull_request.user.login }}"
          REPO_OWNER="${{ github.repository_owner }}"
          REPO_NAME="${{ github.event.repository.name }}"

          echo "Pull request opened by: $USER_LOGIN"

          # Check if PR is from a fork
          IS_FORK=$([[ "${{ github.event.pull_request.head.repo.full_name }}" != "${{ github.repository }}" ]] && echo "true" || echo "false")

          HAS_ACCESS="false"

          # Check user's permission level on the repository
          USER_PERMISSION=$(gh api repos/$REPO_OWNER/$REPO_NAME/collaborators/$USER_LOGIN/permission --jq '.permission')

          if [[ "$USER_PERMISSION" == "admin" || "$USER_PERMISSION" == "write" ]]; then
            HAS_ACCESS="true"
          elif [[ "$USER_PERMISSION" == "read" ]]; then
            # For read access, we need to check if the user has been explicitly granted secret access
            # This information is not directly available via API, so we'll make an assumption
            # that read access does not imply secret access
            HAS_ACCESS="false"
          fi

          # Check if repo owner is an organization
          IS_ORG=$(gh api users/$REPO_OWNER --jq '.type == "Organization"')

          if [[ "$IS_ORG" == "true" && "$HAS_ACCESS" == "false" ]]; then
            # Check if user is a member of any team with write or admin access to the repo
            TEAMS_WITH_ACCESS=$(gh api repos/$REPO_OWNER/$REPO_NAME/teams --jq '.[] | select(.permission == "push" or .permission == "admin") | .slug')
            for team in $TEAMS_WITH_ACCESS; do
              IS_TEAM_MEMBER=$(gh api orgs/$REPO_OWNER/teams/$team/memberships/$USER_LOGIN --silent && echo "true" || echo "false")
              if [[ "$IS_TEAM_MEMBER" == "true" ]]; then
                HAS_ACCESS="true"
                break
              fi
            done
          fi

          # If it's a fork, set HAS_ACCESS to false regardless of other checks
          if [[ "$IS_FORK" == "true" ]]; then
            HAS_ACCESS="false"
          fi

          echo "has_secrets_access=$HAS_ACCESS" >> $GITHUB_OUTPUT
          if [[ "$HAS_ACCESS" == "true" ]]; then
            echo "User $USER_LOGIN likely has access to secrets"
          else
            echo "User $USER_LOGIN likely does not have access to secrets"
          fi

      - uses: actions/checkout@v4

      - name: Delete old comments
        if: steps.check-secrets-access.outputs.has_secrets_access != 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          # Delete previous comment if it exists
          previous_comment_ids=$(gh api "repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments" \
            --jq '.[] | select(.body | startswith("<!-- INTEGRATION_TESTS -->")) | .id')
          echo "Previous comment IDs: $previous_comment_ids"
          # Iterate over each comment ID and delete the comment
          if [ ! -z "$previous_comment_ids" ]; then
            echo "$previous_comment_ids" | while read -r comment_id; do
              echo "Deleting comment with ID: $comment_id"
              gh api "repos/${{ github.repository }}/issues/comments/$comment_id" -X DELETE
            done
          fi

      - name: Comment on PR
        if: steps.check-secrets-access.outputs.has_secrets_access != 'true'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          COMMIT_SHA: ${{ github.event.pull_request.head.sha }}
        run: |
          gh pr comment ${{ github.event.pull_request.number }} --body \
          "<!-- INTEGRATION_TESTS -->
          Run integration tests manually:
          [go/deco-tests-run/cli](https://go/deco-tests-run/cli)
          Inputs:
          * PR number: ${{github.event.pull_request.number}}
          * Commit SHA: \`${{ env.COMMIT_SHA }}\`
          Checks will be approved automatically on success.
          "

.github/workflows/integration-tests.yml (new file)

@@ -0,0 +1,77 @@
name: integration

on:
  pull_request:
    types: [opened, synchronize]
  merge_group:

jobs:
  check-token:
    runs-on: ubuntu-latest
    outputs:
      has_token: ${{ steps.set-token-status.outputs.has_token }}
    steps:
      - name: Check if GITHUB_TOKEN is set
        id: set-token-status
        run: |
          if [ -z "${{ secrets.GITHUB_TOKEN }}" ]; then
            echo "GITHUB_TOKEN is empty. User has no access to tokens."
            echo "::set-output name=has_token::false"
          else
            echo "GITHUB_TOKEN is set. User has access to tokens."
            echo "::set-output name=has_token::true"
          fi

  trigger-tests:
    runs-on: ubuntu-latest
    needs: check-token
    if: github.event_name == 'pull_request' && needs.check-token.outputs.has_token == 'true'
    environment: "test-trigger-is"
    steps:
      - uses: actions/checkout@v4

      - name: Generate GitHub App Token
        id: generate-token
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}
          private-key: ${{ secrets.DECO_WORKFLOW_TRIGGER_PRIVATE_KEY }}
          owner: ${{ secrets.ORG_NAME }}
          repositories: ${{secrets.REPO_NAME}}

      - name: Trigger Workflow in Another Repo
        env:
          GH_TOKEN: ${{ steps.generate-token.outputs.token }}
        run: |
          gh workflow run cli-isolated-pr.yml -R ${{ secrets.ORG_NAME }}/${{secrets.REPO_NAME}} \
            --ref main \
            -f pull_request_number=${{ github.event.pull_request.number }} \
            -f commit_sha=${{ github.event.pull_request.head.sha }}

  # Statuses and checks apply to specific commits (by hash).
  # Enforcement of required checks is done both at the PR level and the merge queue level.
  # In case of multiple commits in a single PR, the hash of the squashed commit
  # will not match the one for the latest (approved) commit in the PR.
  # We auto approve the check for the merge queue for two reasons:
  # * Queue times out due to duration of tests.
  # * Avoid running integration tests twice, since it was already run at the tip of the branch before squashing.
  auto-approve:
    if: github.event_name == 'merge_group'
    runs-on: ubuntu-latest
    steps:
      - name: Mark Check
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        shell: bash
        run: |
          gh api -X POST -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            /repos/${{ github.repository }}/statuses/${{ github.sha }} \
            -f 'state=success' \
            -f 'context=Integration Tests Check'

CHANGELOG.md

@@ -1,5 +1,79 @@
 # Version changelog

+## [Release] Release v0.232.1
+
+This patch release fixes the following error observed when deploying to /Shared root folder
+"Error: Path (/Shared/.bundle/.../resources) doesn't exist"
+
+Bundles:
+* Fixed adding /Workspace prefix for resource paths ([#1866](https://github.com/databricks/cli/pull/1866)).
+
+## [Release] Release v0.232.0
+
+**New features for Databricks Asset Bundles:**
+
+This release adds support for managing AI/BI dashboards as part of your bundle configuration. The `bundle generate` command is updated to support producing dashboard bundle configuration as well as a serialized JSON representation of the dashboard.
+You can find an example configuration and walkthrough at https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi
+
+CLI:
+* Add privacy notice to README ([#1841](https://github.com/databricks/cli/pull/1841)).
+
+Bundles:
+* Add support for AI/BI dashboards ([#1743](https://github.com/databricks/cli/pull/1743)).
+* Added validator for folder permissions ([#1824](https://github.com/databricks/cli/pull/1824)).
+* Add bundle generate variant for dashboards ([#1847](https://github.com/databricks/cli/pull/1847)).
+* Use SetPermissions instead of UpdatePermissions when setting folder permissions based on top-level ones ([#1822](https://github.com/databricks/cli/pull/1822)).
+
+Internal:
+* Attempt to reduce test flakiness on Windows ([#1845](https://github.com/databricks/cli/pull/1845)).
+* Reuse resource resolution code for the run command ([#1858](https://github.com/databricks/cli/pull/1858)).
+* [Internal] Automatically trigger integration tests on PR ([#1857](https://github.com/databricks/cli/pull/1857)).
+* [Internal] Add test instructions for external contributors ([#1863](https://github.com/databricks/cli/pull/1863)).
+* Add `libs/dyn/jsonsaver` ([#1862](https://github.com/databricks/cli/pull/1862)).
+
+Dependency updates:
+* Bump github.com/fatih/color from 1.17.0 to 1.18.0 ([#1861](https://github.com/databricks/cli/pull/1861)).
+
+## [Release] Release v0.231.0
+
+CLI:
+* Added JSON input validation for CLI commands ([#1771](https://github.com/databricks/cli/pull/1771)).
+* Support Git worktrees for `sync` ([#1831](https://github.com/databricks/cli/pull/1831)).
+
+Bundles:
+* Add `bundle summary` to display URLs for deployed resources ([#1731](https://github.com/databricks/cli/pull/1731)).
+* Added a warning when incorrect permissions used for `/Workspace/Shared` bundle root ([#1821](https://github.com/databricks/cli/pull/1821)).
+* Show actionable errors for collaborative deployment scenarios ([#1386](https://github.com/databricks/cli/pull/1386)).
+* Fix path to repository-wide exclude file ([#1837](https://github.com/databricks/cli/pull/1837)).
+* Fixed typo in converting cluster permissions ([#1826](https://github.com/databricks/cli/pull/1826)).
+* Ignore metastore permission error during template generation ([#1819](https://github.com/databricks/cli/pull/1819)).
+* Handle normalization of `dyn.KindTime` into an any type ([#1836](https://github.com/databricks/cli/pull/1836)).
+* Added support for pip options in environment dependencies ([#1842](https://github.com/databricks/cli/pull/1842)).
+* Fix race condition when restarting continuous jobs ([#1849](https://github.com/databricks/cli/pull/1849)).
+* Fix pipeline in default-python template not working for certain workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
+* Add "output" flag to the bundle sync command ([#1853](https://github.com/databricks/cli/pull/1853)).
+
+Internal:
+* Move utility functions dealing with IAM to libs/iamutil ([#1820](https://github.com/databricks/cli/pull/1820)).
+* Remove unused `IS_OWNER` constant ([#1823](https://github.com/databricks/cli/pull/1823)).
+* Assert SDK version is consistent in the CLI generation process ([#1814](https://github.com/databricks/cli/pull/1814)).
+* Fixed unmarshalling json input into `interface{}` type ([#1832](https://github.com/databricks/cli/pull/1832)).
+* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure environments ([#1833](https://github.com/databricks/cli/pull/1833)).
+* Add behavioral tests for examples from the YAML spec ([#1835](https://github.com/databricks/cli/pull/1835)).
+* Remove Terraform conversion function that's no longer used ([#1840](https://github.com/databricks/cli/pull/1840)).
+* Encode assumptions about the dashboards API in a test ([#1839](https://github.com/databricks/cli/pull/1839)).
+* Add script to make testing of code on branches easier ([#1844](https://github.com/databricks/cli/pull/1844)).
+
+API Changes:
+* Added `databricks disable-legacy-dbfs` command group.
+
+OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
+
+Dependency updates:
+* Upgrade TF provider to 1.54.0 ([#1852](https://github.com/databricks/cli/pull/1852)).
+* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 ([#1843](https://github.com/databricks/cli/pull/1843)).
+
 ## [Release] Release v0.230.0

 Notable changes for Databricks Asset Bundles:

README.md

@@ -35,3 +35,6 @@ docker run -e DATABRICKS_HOST=$YOUR_HOST_URL -e DATABRICKS_TOKEN=$YOUR_TOKEN ghc
 This CLI follows the Databricks Unified Authentication principles.
 You can find a detailed description at https://github.com/databricks/databricks-sdk-go#authentication.
+
+## Privacy Notice
+Databricks CLI use is subject to the [Databricks License](https://github.com/databricks/cli/blob/main/LICENSE) and [Databricks Privacy Notice](https://www.databricks.com/legal/privacynotice), including any Usage Data provisions.

@@ -0,0 +1,18 @@
package generate

import (
	"github.com/databricks/cli/libs/dyn"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

func ConvertDashboardToValue(dashboard *dashboards.Dashboard, filePath string) (dyn.Value, error) {
	// The majority of fields of the dashboard struct are read-only.
	// We copy the relevant fields manually.
	dv := map[string]dyn.Value{
		"display_name": dyn.NewValue(dashboard.DisplayName, []dyn.Location{{Line: 1}}),
		"warehouse_id": dyn.NewValue(dashboard.WarehouseId, []dyn.Location{{Line: 2}}),
		"file_path":    dyn.NewValue(filePath, []dyn.Location{{Line: 3}}),
	}

	return dyn.V(dv), nil
}

@@ -212,6 +212,15 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
 		}
 	}

+	// Dashboards: Prefix
+	for key, dashboard := range r.Dashboards {
+		if dashboard == nil || dashboard.CreateDashboardRequest == nil {
+			diags = diags.Extend(diag.Errorf("dashboard %s is not defined", key))
+			continue
+		}
+		dashboard.DisplayName = prefix + dashboard.DisplayName
+	}
+
 	return diags
 }

@@ -0,0 +1,70 @@
package mutator

import (
	"context"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/libs/diag"
	"github.com/databricks/cli/libs/dyn"
)

type configureDashboardDefaults struct{}

func ConfigureDashboardDefaults() bundle.Mutator {
	return &configureDashboardDefaults{}
}

func (m *configureDashboardDefaults) Name() string {
	return "ConfigureDashboardDefaults"
}

func (m *configureDashboardDefaults) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
	var diags diag.Diagnostics

	pattern := dyn.NewPattern(
		dyn.Key("resources"),
		dyn.Key("dashboards"),
		dyn.AnyKey(),
	)

	// Configure defaults for all dashboards.
	err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
		return dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
			var err error
			v, err = setIfNotExists(v, dyn.NewPath(dyn.Key("parent_path")), dyn.V(b.Config.Workspace.ResourcePath))
			if err != nil {
				return dyn.InvalidValue, err
			}
			v, err = setIfNotExists(v, dyn.NewPath(dyn.Key("embed_credentials")), dyn.V(false))
			if err != nil {
				return dyn.InvalidValue, err
			}
			return v, nil
		})
	})

	diags = diags.Extend(diag.FromErr(err))
	return diags
}

func setIfNotExists(v dyn.Value, path dyn.Path, defaultValue dyn.Value) (dyn.Value, error) {
	// Get the field at the specified path (if set).
	_, err := dyn.GetByPath(v, path)
	switch {
	case dyn.IsNoSuchKeyError(err):
		// OK, we'll set the default value.
		break
	case dyn.IsCannotTraverseNilError(err):
		// Cannot traverse the value, skip it.
		return v, nil
	case err == nil:
		// The field is set, skip it.
		return v, nil
	default:
		// Return the error.
		return v, err
	}

	// Set the field at the specified path.
	return dyn.SetByPath(v, path, defaultValue)
}

@@ -0,0 +1,130 @@
package mutator_test

import (
	"context"
	"testing"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/config"
	"github.com/databricks/cli/bundle/config/mutator"
	"github.com/databricks/cli/bundle/config/resources"
	"github.com/databricks/cli/bundle/internal/bundletest"
	"github.com/databricks/cli/libs/dyn"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestConfigureDashboardDefaultsParentPath(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				ResourcePath: "/foo/bar",
			},
			Resources: config.Resources{
				Dashboards: map[string]*resources.Dashboard{
					"d1": {
						// Empty string is skipped.
						// See below for how it is set.
						CreateDashboardRequest: &dashboards.CreateDashboardRequest{
							ParentPath: "",
						},
					},
					"d2": {
						// Non-empty string is skipped.
						CreateDashboardRequest: &dashboards.CreateDashboardRequest{
							ParentPath: "already-set",
						},
					},
					"d3": {
						// No parent path set.
					},
					"d4": nil,
				},
			},
		},
	}

	// We can't set an empty string in the typed configuration.
	// Do it on the dyn.Value directly.
	bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
		return dyn.Set(v, "resources.dashboards.d1.parent_path", dyn.V(""))
	})

	diags := bundle.Apply(context.Background(), b, mutator.ConfigureDashboardDefaults())
	require.NoError(t, diags.Error())

	var v dyn.Value
	var err error

	// Set to empty string; unchanged.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d1.parent_path")
	if assert.NoError(t, err) {
		assert.Equal(t, "", v.MustString())
	}

	// Set to "already-set"; unchanged.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d2.parent_path")
	if assert.NoError(t, err) {
		assert.Equal(t, "already-set", v.MustString())
	}

	// Not set; now set to the workspace resource path.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d3.parent_path")
	if assert.NoError(t, err) {
		assert.Equal(t, "/foo/bar", v.MustString())
	}

	// No valid dashboard; no change.
	_, err = dyn.Get(b.Config.Value(), "resources.dashboards.d4.parent_path")
	assert.True(t, dyn.IsCannotTraverseNilError(err))
}

func TestConfigureDashboardDefaultsEmbedCredentials(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Resources: config.Resources{
				Dashboards: map[string]*resources.Dashboard{
					"d1": {
						EmbedCredentials: true,
					},
					"d2": {
						EmbedCredentials: false,
					},
					"d3": {
						// No parent path set.
					},
					"d4": nil,
				},
			},
		},
	}

	diags := bundle.Apply(context.Background(), b, mutator.ConfigureDashboardDefaults())
	require.NoError(t, diags.Error())

	var v dyn.Value
	var err error

	// Set to true; still true.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d1.embed_credentials")
	if assert.NoError(t, err) {
		assert.Equal(t, true, v.MustBool())
	}

	// Set to false; still false.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d2.embed_credentials")
	if assert.NoError(t, err) {
		assert.Equal(t, false, v.MustBool())
	}

	// Not set; now false.
	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d3.embed_credentials")
	if assert.NoError(t, err) {
		assert.Equal(t, false, v.MustBool())
	}

	// No valid dashboard; no change.
	_, err = dyn.Get(b.Config.Value(), "resources.dashboards.d4.embed_credentials")
	assert.True(t, dyn.IsCannotTraverseNilError(err))
}

@@ -8,6 +8,7 @@ import (
 	"github.com/databricks/cli/bundle/config/resources"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/databricks/databricks-sdk-go/service/ml"
 	"github.com/databricks/databricks-sdk-go/service/pipelines"
@@ -85,6 +86,14 @@ func TestInitializeURLs(t *testing.T) {
 				},
 			},
 		},
+		Dashboards: map[string]*resources.Dashboard{
+			"dashboard1": {
+				ID: "01ef8d56871e1d50ae30ce7375e42478",
+				CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+					DisplayName: "My special dashboard",
+				},
+			},
+		},
 		},
 	},
 }
@@ -99,6 +108,7 @@ func TestInitializeURLs(t *testing.T) {
 	"qualityMonitor1": "https://mycompany.databricks.com/explore/data/catalog/schema/qualityMonitor1?o=123456",
 	"schema1":         "https://mycompany.databricks.com/explore/data/catalog/schema?o=123456",
 	"cluster1":        "https://mycompany.databricks.com/compute/clusters/1017-103929-vlr7jzcf?o=123456",
+	"dashboard1":      "https://mycompany.databricks.com/dashboardsv3/01ef8d56871e1d50ae30ce7375e42478/published?o=123456",
 }

 initializeForWorkspace(b, "123456", "https://mycompany.databricks.com/")

@@ -32,6 +32,7 @@ func (m *prependWorkspacePrefix) Apply(ctx context.Context, b *bundle.Bundle) di
 		dyn.NewPattern(dyn.Key("workspace"), dyn.Key("file_path")),
 		dyn.NewPattern(dyn.Key("workspace"), dyn.Key("artifact_path")),
 		dyn.NewPattern(dyn.Key("workspace"), dyn.Key("state_path")),
+		dyn.NewPattern(dyn.Key("workspace"), dyn.Key("resource_path")),
 	}

 	err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {

@@ -41,6 +41,7 @@ func TestPrependWorkspacePrefix(t *testing.T) {
 				ArtifactPath: tc.path,
 				FilePath:     tc.path,
 				StatePath:    tc.path,
+				ResourcePath: tc.path,
 			},
 		},
 	}
@@ -51,6 +52,7 @@ func TestPrependWorkspacePrefix(t *testing.T) {
 		require.Equal(t, tc.expected, b.Config.Workspace.ArtifactPath)
 		require.Equal(t, tc.expected, b.Config.Workspace.FilePath)
 		require.Equal(t, tc.expected, b.Config.Workspace.StatePath)
+		require.Equal(t, tc.expected, b.Config.Workspace.ResourcePath)
 	}
 }
@@ -76,4 +78,5 @@ func TestPrependWorkspaceForDefaultConfig(t *testing.T) {
 	require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/artifacts", b.Config.Workspace.ArtifactPath)
 	require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/files", b.Config.Workspace.FilePath)
 	require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/state", b.Config.Workspace.StatePath)
+	require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/resources", b.Config.Workspace.ResourcePath)
 }

@@ -14,6 +14,7 @@ import (
 	sdkconfig "github.com/databricks/databricks-sdk-go/config"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/databricks/databricks-sdk-go/service/iam"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/databricks/databricks-sdk-go/service/ml"
@@ -123,6 +124,13 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
 			Clusters: map[string]*resources.Cluster{
 				"cluster1": {ClusterSpec: &compute.ClusterSpec{ClusterName: "cluster1", SparkVersion: "13.2.x", NumWorkers: 1}},
 			},
+			Dashboards: map[string]*resources.Dashboard{
+				"dashboard1": {
+					CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+						DisplayName: "dashboard1",
+					},
+				},
+			},
 		},
 	},
 	// Use AWS implementation for testing.
@@ -184,6 +192,9 @@ func TestProcessTargetModeDevelopment(t *testing.T) {
 	// Clusters
 	assert.Equal(t, "[dev lennart] cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
+
+	// Dashboards
+	assert.Equal(t, "[dev lennart] dashboard1", b.Config.Resources.Dashboards["dashboard1"].DisplayName)
 }

 func TestProcessTargetModeDevelopmentTagNormalizationForAws(t *testing.T) {
@@ -272,6 +283,7 @@ func TestValidateDevelopmentMode(t *testing.T) {
 	b.Config.Workspace.StatePath = "/Users/lennart@company.com/.bundle/x/y/state"
 	b.Config.Workspace.FilePath = "/Users/lennart@company.com/.bundle/x/y/files"
 	b.Config.Workspace.ArtifactPath = "/Users/lennart@company.com/.bundle/x/y/artifacts"
+	b.Config.Workspace.ResourcePath = "/Users/lennart@company.com/.bundle/x/y/resources"
 	diags = validateDevelopmentMode(b)
 	require.NoError(t, diags.Error())
 }
@@ -300,6 +312,7 @@ func TestProcessTargetModeProduction(t *testing.T) {
 	b.Config.Workspace.StatePath = "/Shared/.bundle/x/y/state"
 	b.Config.Workspace.ArtifactPath = "/Shared/.bundle/x/y/artifacts"
 	b.Config.Workspace.FilePath = "/Shared/.bundle/x/y/files"
+	b.Config.Workspace.ResourcePath = "/Shared/.bundle/x/y/resources"
 	diags = validateProductionMode(context.Background(), b, false)
 	require.ErrorContains(t, diags.Error(), "production")

@@ -110,6 +110,16 @@ func validateRunAs(b *bundle.Bundle) diag.Diagnostics {
 		))
 	}

+	// Dashboards do not support run_as in the API.
+	if len(b.Config.Resources.Dashboards) > 0 {
+		diags = diags.Extend(reportRunAsNotSupported(
+			"dashboards",
+			b.Config.GetLocation("resources.dashboards"),
+			b.Config.Workspace.CurrentUser.UserName,
+			identity,
+		))
+	}
+
 	return diags
 }

@@ -33,6 +33,7 @@ func allResourceTypes(t *testing.T) []string {
 	// also update this check when adding a new resource
 	require.Equal(t, []string{
 		"clusters",
+		"dashboards",
 		"experiments",
 		"jobs",
 		"model_serving_endpoints",
@@ -188,6 +189,7 @@ func TestRunAsErrorForUnsupportedResources(t *testing.T) {
 			Config: *r,
 		}
 		diags := bundle.Apply(context.Background(), b, SetRunAs())
+		require.Error(t, diags.Error())
 		assert.Contains(t, diags.Error().Error(), "do not support a setting a run_as user that is different from the owner.\n"+
 			"Current identity: alice. Run as identity: bob.\n"+
 			"See https://docs.databricks.com/dev-tools/bundles/run-as.html to learn more about the run_as property.", rt)

View File

@@ -162,6 +162,20 @@ func (t *translateContext) translateNoOp(literal, localFullPath, localRelPath, remotePath string) (string, error) {
return localRelPath, nil
}
func (t *translateContext) retainLocalAbsoluteFilePath(literal, localFullPath, localRelPath, remotePath string) (string, error) {
info, err := t.b.SyncRoot.Stat(localRelPath)
if errors.Is(err, fs.ErrNotExist) {
return "", fmt.Errorf("file %s not found", literal)
}
if err != nil {
return "", fmt.Errorf("unable to determine if %s is a file: %w", localFullPath, err)
}
if info.IsDir() {
return "", fmt.Errorf("expected %s to be a file but found a directory", literal)
}
return localFullPath, nil
}
func (t *translateContext) translateNoOpWithPrefix(literal, localFullPath, localRelPath, remotePath string) (string, error) {
if !strings.HasPrefix(localRelPath, ".") {
localRelPath = "." + string(filepath.Separator) + localRelPath
@@ -215,6 +229,7 @@ func (m *translatePaths) Apply(_ context.Context, b *bundle.Bundle) diag.Diagnostics {
t.applyJobTranslations,
t.applyPipelineTranslations,
t.applyArtifactTranslations,
t.applyDashboardTranslations,
} {
v, err = fn(v)
if err != nil {

View File

@@ -0,0 +1,28 @@
package mutator
import (
"fmt"
"github.com/databricks/cli/libs/dyn"
)
func (t *translateContext) applyDashboardTranslations(v dyn.Value) (dyn.Value, error) {
// Convert the `file_path` field to a local absolute path.
// We load the file at this path and use its contents for the dashboard contents.
pattern := dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("dashboards"),
dyn.AnyKey(),
dyn.Key("file_path"),
)
return dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
key := p[2].Key()
dir, err := v.Location().Directory()
if err != nil {
return dyn.InvalidValue, fmt.Errorf("unable to determine directory for dashboard %s: %w", key, err)
}
return t.rewriteRelativeTo(p, v, t.retainLocalAbsoluteFilePath, dir, "")
})
}
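For orientation, a minimal sketch of driving the same pattern machinery directly; cfg stands for some dyn.Value holding bundle configuration, and the replacement value is hypothetical, only illustrating which nodes the resources.dashboards.*.file_path pattern selects:
pattern := dyn.NewPattern(dyn.Key("resources"), dyn.Key("dashboards"), dyn.AnyKey(), dyn.Key("file_path"))
out, err := dyn.MapByPattern(cfg, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
	// p[2].Key() is the resource key, e.g. "dashboard1" for resources.dashboards.dashboard1.file_path.
	return dyn.V("/tmp/" + p[2].Key() + ".lvdash.json"), nil // hypothetical replacement value
})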

View File

@@ -21,6 +21,7 @@ type Resources struct {
QualityMonitors map[string]*resources.QualityMonitor `json:"quality_monitors,omitempty"`
Schemas map[string]*resources.Schema `json:"schemas,omitempty"`
Clusters map[string]*resources.Cluster `json:"clusters,omitempty"`
Dashboards map[string]*resources.Dashboard `json:"dashboards,omitempty"`
}
type ConfigResource interface {
@@ -77,6 +78,7 @@ func (r *Resources) AllResources() []ResourceGroup {
collectResourceMap(descriptions["quality_monitors"], r.QualityMonitors),
collectResourceMap(descriptions["schemas"], r.Schemas),
collectResourceMap(descriptions["clusters"], r.Clusters),
collectResourceMap(descriptions["dashboards"], r.Dashboards),
}
}
@@ -175,5 +177,11 @@ func SupportedResources() map[string]ResourceDescription {
SingularTitle: "Cluster",
PluralTitle: "Clusters",
},
"dashboards": {
SingularName: "dashboard",
PluralName: "dashboards",
SingularTitle: "Dashboard",
PluralTitle: "Dashboards",
},
}
}

View File

@@ -0,0 +1,81 @@
package resources
import (
"context"
"fmt"
"net/url"
"github.com/databricks/cli/libs/log"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/marshal"
"github.com/databricks/databricks-sdk-go/service/dashboards"
)
type Dashboard struct {
ID string `json:"id,omitempty" bundle:"readonly"`
Permissions []Permission `json:"permissions,omitempty"`
ModifiedStatus ModifiedStatus `json:"modified_status,omitempty" bundle:"internal"`
URL string `json:"url,omitempty" bundle:"internal"`
*dashboards.CreateDashboardRequest
// =========================
// === Additional fields ===
// =========================
// SerializedDashboard holds the contents of the dashboard in serialized JSON form.
// We override the field's type from the SDK struct here to allow for inlining as YAML.
// If the value is a string, it is used as is.
// If it is not a string, its contents are marshalled as JSON.
SerializedDashboard any `json:"serialized_dashboard,omitempty"`
// EmbedCredentials is a flag to indicate if the publisher's credentials should
// be embedded in the published dashboard. These embedded credentials will be used
// to execute the published dashboard's queries.
//
// Defaults to false if not set.
EmbedCredentials bool `json:"embed_credentials,omitempty"`
// FilePath points to the local `.lvdash.json` file containing the dashboard definition.
FilePath string `json:"file_path,omitempty"`
}
func (r *Dashboard) UnmarshalJSON(b []byte) error {
return marshal.Unmarshal(b, r)
}
func (r Dashboard) MarshalJSON() ([]byte, error) {
return marshal.Marshal(r)
}
func (*Dashboard) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
_, err := w.Lakeview.Get(ctx, dashboards.GetDashboardRequest{
DashboardId: id,
})
if err != nil {
log.Debugf(ctx, "dashboard %s does not exist", id)
return false, err
}
return true, nil
}
func (*Dashboard) TerraformResourceName() string {
return "databricks_dashboard"
}
func (r *Dashboard) InitializeURL(baseURL url.URL) {
if r.ID == "" {
return
}
baseURL.Path = fmt.Sprintf("dashboardsv3/%s/published", r.ID)
r.URL = baseURL.String()
}
func (r *Dashboard) GetName() string {
return r.DisplayName
}
func (r *Dashboard) GetURL() string {
return r.URL
}
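As a quick illustration of how InitializeURL composes the published URL from a caller's perspective; the workspace host and dashboard ID below are made up:
base, _ := url.Parse("https://my-workspace.cloud.databricks.com/") // hypothetical host
d := resources.Dashboard{ID: "01ef0123456789"}                     // hypothetical ID
d.InitializeURL(*base)
// d.URL == "https://my-workspace.cloud.databricks.com/dashboardsv3/01ef0123456789/published"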

View File

@@ -1,5 +1,7 @@
package resources
import "fmt"
// Permission holds the permission level setting for a single principal.
// Multiple of these can be defined on any resource.
type Permission struct {
@@ -9,3 +11,19 @@ type Permission struct {
ServicePrincipalName string `json:"service_principal_name,omitempty"`
GroupName string `json:"group_name,omitempty"`
}
func (p Permission) String() string {
if p.UserName != "" {
return fmt.Sprintf("level: %s, user_name: %s", p.Level, p.UserName)
}
if p.ServicePrincipalName != "" {
return fmt.Sprintf("level: %s, service_principal_name: %s", p.Level, p.ServicePrincipalName)
}
if p.GroupName != "" {
return fmt.Sprintf("level: %s, group_name: %s", p.Level, p.GroupName)
}
return fmt.Sprintf("level: %s", p.Level)
}
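For example, with a made-up permission:
p := resources.Permission{Level: "CAN_MANAGE", GroupName: "eng"}
fmt.Println(p) // prints: level: CAN_MANAGE, group_name: eng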

View File

@@ -0,0 +1,106 @@
package validate
import (
"context"
"fmt"
"path"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/libraries"
"github.com/databricks/cli/bundle/paths"
"github.com/databricks/cli/bundle/permissions"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/workspace"
"golang.org/x/sync/errgroup"
)
type folderPermissions struct {
}
// Apply implements bundle.ReadOnlyMutator.
func (f *folderPermissions) Apply(ctx context.Context, b bundle.ReadOnlyBundle) diag.Diagnostics {
if len(b.Config().Permissions) == 0 {
return nil
}
bundlePaths := paths.CollectUniqueWorkspacePathPrefixes(b.Config().Workspace)
var diags diag.Diagnostics
g, ctx := errgroup.WithContext(ctx)
results := make([]diag.Diagnostics, len(bundlePaths))
for i, p := range bundlePaths {
g.Go(func() error {
results[i] = checkFolderPermission(ctx, b, p)
return nil
})
}
if err := g.Wait(); err != nil {
return diag.FromErr(err)
}
for _, r := range results {
diags = diags.Extend(r)
}
return diags
}
func checkFolderPermission(ctx context.Context, b bundle.ReadOnlyBundle, folderPath string) diag.Diagnostics {
// If the folder is shared, we don't need to check permissions here: they were already checked by an earlier mutator.
if libraries.IsWorkspaceSharedPath(folderPath) {
return nil
}
w := b.WorkspaceClient().Workspace
obj, err := getClosestExistingObject(ctx, w, folderPath)
if err != nil {
return diag.FromErr(err)
}
objPermissions, err := w.GetPermissions(ctx, workspace.GetWorkspaceObjectPermissionsRequest{
WorkspaceObjectId: fmt.Sprint(obj.ObjectId),
WorkspaceObjectType: "directories",
})
if err != nil {
return diag.FromErr(err)
}
p := permissions.ObjectAclToResourcePermissions(folderPath, objPermissions.AccessControlList)
return p.Compare(b.Config().Permissions)
}
func getClosestExistingObject(ctx context.Context, w workspace.WorkspaceInterface, folderPath string) (*workspace.ObjectInfo, error) {
for {
obj, err := w.GetStatusByPath(ctx, folderPath)
if err == nil {
return obj, nil
}
if !apierr.IsMissing(err) {
return nil, err
}
parent := path.Dir(folderPath)
// If the parent is the same as the current folder, then we have reached the root
if folderPath == parent {
break
}
folderPath = parent
}
return nil, fmt.Errorf("folder %s and its parent folders do not exist", folderPath)
}
// Name implements bundle.ReadOnlyMutator.
func (f *folderPermissions) Name() string {
return "validate:folder_permissions"
}
// ValidateFolderPermissions validates that the permissions on the bundle's folders in the workspace
// file system match the permissions in the top-level permissions section of the bundle.
func ValidateFolderPermissions() bundle.ReadOnlyMutator {
return &folderPermissions{}
}

View File

@@ -0,0 +1,208 @@
package validate
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/permissions"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestFolderPermissionsInheritedWhenRootPathDoesNotExist(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/foo@bar.com",
ArtifactPath: "/Workspace/Users/otherfoo@bar.com/artifacts",
FilePath: "/Workspace/Users/foo@bar.com/files",
StatePath: "/Workspace/Users/foo@bar.com/state",
ResourcePath: "/Workspace/Users/foo@bar.com/resources",
},
Permissions: []resources.Permission{
{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockWorkspaceAPI()
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/otherfoo@bar.com/artifacts").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/otherfoo@bar.com").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace").Return(&workspace.ObjectInfo{
ObjectId: 1234,
}, nil)
api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
WorkspaceObjectId: "1234",
WorkspaceObjectType: "directories",
}).Return(&workspace.WorkspaceObjectPermissions{
ObjectId: "1234",
AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
}, nil)
b.SetWorkpaceClient(m.WorkspaceClient)
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
require.Empty(t, diags)
}
func TestValidateFolderPermissionsFailsOnMissingBundlePermission(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/foo@bar.com",
ArtifactPath: "/Workspace/Users/foo@bar.com/artifacts",
FilePath: "/Workspace/Users/foo@bar.com/files",
StatePath: "/Workspace/Users/foo@bar.com/state",
ResourcePath: "/Workspace/Users/foo@bar.com/resources",
},
Permissions: []resources.Permission{
{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockWorkspaceAPI()
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(&workspace.ObjectInfo{
ObjectId: 1234,
}, nil)
api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
WorkspaceObjectId: "1234",
WorkspaceObjectType: "directories",
}).Return(&workspace.WorkspaceObjectPermissions{
ObjectId: "1234",
AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
{
UserName: "foo2@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
}, nil)
b.SetWorkpaceClient(m.WorkspaceClient)
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
require.Len(t, diags, 1)
require.Equal(t, "untracked permissions apply to target workspace path", diags[0].Summary)
require.Equal(t, diag.Warning, diags[0].Severity)
require.Equal(t, "The following permissions apply to the workspace folder at \"/Workspace/Users/foo@bar.com\" but are not configured in the bundle:\n- level: CAN_MANAGE, user_name: foo2@bar.com\n", diags[0].Detail)
}
func TestValidateFolderPermissionsFailsOnPermissionMismatch(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/foo@bar.com",
ArtifactPath: "/Workspace/Users/foo@bar.com/artifacts",
FilePath: "/Workspace/Users/foo@bar.com/files",
StatePath: "/Workspace/Users/foo@bar.com/state",
ResourcePath: "/Workspace/Users/foo@bar.com/resources",
},
Permissions: []resources.Permission{
{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockWorkspaceAPI()
api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(&workspace.ObjectInfo{
ObjectId: 1234,
}, nil)
api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
WorkspaceObjectId: "1234",
WorkspaceObjectType: "directories",
}).Return(&workspace.WorkspaceObjectPermissions{
ObjectId: "1234",
AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo2@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
}, nil)
b.SetWorkpaceClient(m.WorkspaceClient)
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
require.Len(t, diags, 1)
require.Equal(t, "untracked permissions apply to target workspace path", diags[0].Summary)
require.Equal(t, diag.Warning, diags[0].Severity)
}
func TestValidateFolderPermissionsFailsOnNoRootFolder(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/NotExisting",
ArtifactPath: "/NotExisting/artifacts",
FilePath: "/NotExisting/files",
StatePath: "/NotExisting/state",
ResourcePath: "/NotExisting/resources",
},
Permissions: []resources.Permission{
{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockWorkspaceAPI()
api.EXPECT().GetStatusByPath(mock.Anything, "/NotExisting").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
api.EXPECT().GetStatusByPath(mock.Anything, "/").Return(nil, &apierr.APIError{
StatusCode: 404,
ErrorCode: "RESOURCE_DOES_NOT_EXIST",
})
b.SetWorkpaceClient(m.WorkspaceClient)
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
require.Len(t, diags, 1)
require.Equal(t, "folder / and its parent folders do not exist", diags[0].Summary)
require.Equal(t, diag.Error, diags[0].Severity)
}

View File

@@ -35,6 +35,7 @@ func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
FilesToSync(),
ValidateSyncPatterns(),
JobTaskClusterSpec(),
ValidateFolderPermissions(),
))
}

View File

@@ -0,0 +1,117 @@
package terraform
import (
"context"
"fmt"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
tfjson "github.com/hashicorp/terraform-json"
)
type dashboardState struct {
Name string
ID string
ETag string
}
func collectDashboardsFromState(ctx context.Context, b *bundle.Bundle) ([]dashboardState, error) {
state, err := ParseResourcesState(ctx, b)
if err != nil && state == nil {
return nil, err
}
var dashboards []dashboardState
for _, resource := range state.Resources {
if resource.Mode != tfjson.ManagedResourceMode {
continue
}
for _, instance := range resource.Instances {
switch resource.Type {
case "databricks_dashboard":
dashboards = append(dashboards, dashboardState{
Name: resource.Name,
ID: instance.Attributes.ID,
ETag: instance.Attributes.ETag,
})
}
}
}
return dashboards, nil
}
type checkDashboardsModifiedRemotely struct {
}
func (l *checkDashboardsModifiedRemotely) Name() string {
return "CheckDashboardsModifiedRemotely"
}
func (l *checkDashboardsModifiedRemotely) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
// This mutator is relevant only if the bundle includes dashboards.
if len(b.Config.Resources.Dashboards) == 0 {
return nil
}
// If the user has forced the deployment, skip this check.
if b.Config.Bundle.Force {
return nil
}
dashboards, err := collectDashboardsFromState(ctx, b)
if err != nil {
return diag.FromErr(err)
}
var diags diag.Diagnostics
for _, dashboard := range dashboards {
// Skip dashboards that are not defined in the bundle.
// These will be destroyed upon deployment.
if _, ok := b.Config.Resources.Dashboards[dashboard.Name]; !ok {
continue
}
path := dyn.MustPathFromString(fmt.Sprintf("resources.dashboards.%s", dashboard.Name))
loc := b.Config.GetLocation(path.String())
actual, err := b.WorkspaceClient().Lakeview.GetByDashboardId(ctx, dashboard.ID)
if err != nil {
diags = diags.Append(diag.Diagnostic{
Severity: diag.Error,
Summary: fmt.Sprintf("failed to get dashboard %q", dashboard.Name),
Detail: err.Error(),
Paths: []dyn.Path{path},
Locations: []dyn.Location{loc},
})
continue
}
// If the ETag is the same, the dashboard has not been modified.
if actual.Etag == dashboard.ETag {
continue
}
diags = diags.Append(diag.Diagnostic{
Severity: diag.Error,
Summary: fmt.Sprintf("dashboard %q has been modified remotely", dashboard.Name),
Detail: "" +
"This dashboard has been modified remotely since the last bundle deployment.\n" +
"These modifications are untracked and will be overwritten on deploy.\n" +
"\n" +
"Make sure that the local dashboard definition matches what you intend to deploy\n" +
"before proceeding with the deployment.\n" +
"\n" +
"Run `databricks bundle deploy --force` to bypass this error." +
"",
Paths: []dyn.Path{path},
Locations: []dyn.Location{loc},
})
}
return diags
}
func CheckDashboardsModifiedRemotely() *checkDashboardsModifiedRemotely {
return &checkDashboardsModifiedRemotely{}
}

View File

@@ -0,0 +1,191 @@
package terraform
import (
"context"
"fmt"
"path/filepath"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func mockDashboardBundle(t *testing.T) *bundle.Bundle {
dir := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: dir,
Config: config.Root{
Bundle: config.Bundle{
Target: "test",
},
Resources: config.Resources{
Dashboards: map[string]*resources.Dashboard{
"dash1": {
CreateDashboardRequest: &dashboards.CreateDashboardRequest{
DisplayName: "My Special Dashboard",
},
},
},
},
},
}
return b
}
func TestCheckDashboardsModifiedRemotely_NoDashboards(t *testing.T) {
dir := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: dir,
Config: config.Root{
Bundle: config.Bundle{
Target: "test",
},
Resources: config.Resources{},
},
}
diags := bundle.Apply(context.Background(), b, CheckDashboardsModifiedRemotely())
assert.Empty(t, diags)
}
func TestCheckDashboardsModifiedRemotely_FirstDeployment(t *testing.T) {
b := mockDashboardBundle(t)
diags := bundle.Apply(context.Background(), b, CheckDashboardsModifiedRemotely())
assert.Empty(t, diags)
}
func TestCheckDashboardsModifiedRemotely_ExistingStateNoChange(t *testing.T) {
ctx := context.Background()
b := mockDashboardBundle(t)
writeFakeDashboardState(t, ctx, b)
// Mock the call to the API.
m := mocks.NewMockWorkspaceClient(t)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().
GetByDashboardId(mock.Anything, "id1").
Return(&dashboards.Dashboard{
DisplayName: "My Special Dashboard",
Etag: "1000",
}, nil).
Once()
b.SetWorkpaceClient(m.WorkspaceClient)
// No changes, so no diags.
diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
assert.Empty(t, diags)
}
func TestCheckDashboardsModifiedRemotely_ExistingStateChange(t *testing.T) {
ctx := context.Background()
b := mockDashboardBundle(t)
writeFakeDashboardState(t, ctx, b)
// Mock the call to the API.
m := mocks.NewMockWorkspaceClient(t)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().
GetByDashboardId(mock.Anything, "id1").
Return(&dashboards.Dashboard{
DisplayName: "My Special Dashboard",
Etag: "1234",
}, nil).
Once()
b.SetWorkpaceClient(m.WorkspaceClient)
// The dashboard has changed, so expect an error.
diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
if assert.Len(t, diags, 1) {
assert.Equal(t, diag.Error, diags[0].Severity)
assert.Equal(t, `dashboard "dash1" has been modified remotely`, diags[0].Summary)
}
}
func TestCheckDashboardsModifiedRemotely_ExistingStateFailureToGet(t *testing.T) {
ctx := context.Background()
b := mockDashboardBundle(t)
writeFakeDashboardState(t, ctx, b)
// Mock the call to the API.
m := mocks.NewMockWorkspaceClient(t)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().
GetByDashboardId(mock.Anything, "id1").
Return(nil, fmt.Errorf("failure")).
Once()
b.SetWorkpaceClient(m.WorkspaceClient)
// Unable to get the dashboard, so expect an error.
diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
if assert.Len(t, diags, 1) {
assert.Equal(t, diag.Error, diags[0].Severity)
assert.Equal(t, `failed to get dashboard "dash1"`, diags[0].Summary)
}
}
func writeFakeDashboardState(t *testing.T, ctx context.Context, b *bundle.Bundle) {
tfDir, err := Dir(ctx, b)
require.NoError(t, err)
// Write fake state file.
testutil.WriteFile(t, `
{
"version": 4,
"terraform_version": "1.5.5",
"resources": [
{
"mode": "managed",
"type": "databricks_dashboard",
"name": "dash1",
"instances": [
{
"schema_version": 0,
"attributes": {
"etag": "1000",
"id": "id1"
}
}
]
},
{
"mode": "managed",
"type": "databricks_job",
"name": "job",
"instances": [
{
"schema_version": 0,
"attributes": {
"id": "1234"
}
}
]
},
{
"mode": "managed",
"type": "databricks_dashboard",
"name": "dash2",
"instances": [
{
"schema_version": 0,
"attributes": {
"etag": "1001",
"id": "id2"
}
}
]
}
]
}
`, filepath.Join(tfDir, TerraformStateFileName))
}

View File

@@ -176,6 +176,16 @@ func TerraformToBundle(state *resourcesState, config *config.Root) error {
}
cur.ID = instance.Attributes.ID
config.Resources.Clusters[resource.Name] = cur
case "databricks_dashboard":
if config.Resources.Dashboards == nil {
config.Resources.Dashboards = make(map[string]*resources.Dashboard)
}
cur := config.Resources.Dashboards[resource.Name]
if cur == nil {
cur = &resources.Dashboard{ModifiedStatus: resources.ModifiedStatusDeleted}
}
cur.ID = instance.Attributes.ID
config.Resources.Dashboards[resource.Name] = cur
case "databricks_permissions": case "databricks_permissions":
case "databricks_grants": case "databricks_grants":
// Ignore; no need to pull these back into the configuration. // Ignore; no need to pull these back into the configuration.
@@ -230,6 +240,11 @@ func TerraformToBundle(state *resourcesState, config *config.Root) error {
src.ModifiedStatus = resources.ModifiedStatusCreated
}
}
for _, src := range config.Resources.Dashboards {
if src.ModifiedStatus == "" && src.ID == "" {
src.ModifiedStatus = resources.ModifiedStatusCreated
}
}
return nil
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/ml"
"github.com/databricks/databricks-sdk-go/service/pipelines"
@@ -677,6 +678,14 @@ func TestTerraformToBundleEmptyLocalResources(t *testing.T) {
{Attributes: stateInstanceAttributes{ID: "1"}},
},
},
{
Type: "databricks_dashboard",
Mode: "managed",
Name: "test_dashboard",
Instances: []stateResourceInstance{
{Attributes: stateInstanceAttributes{ID: "1"}},
},
},
},
}
err := TerraformToBundle(&tfState, &config)
@@ -709,6 +718,9 @@ func TestTerraformToBundleEmptyLocalResources(t *testing.T) {
assert.Equal(t, "1", config.Resources.Clusters["test_cluster"].ID)
assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Clusters["test_cluster"].ModifiedStatus)
assert.Equal(t, "1", config.Resources.Dashboards["test_dashboard"].ID)
assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Dashboards["test_dashboard"].ModifiedStatus)
AssertFullResourceCoverage(t, &config)
}
@@ -778,6 +790,13 @@ func TestTerraformToBundleEmptyRemoteResources(t *testing.T) {
},
},
},
Dashboards: map[string]*resources.Dashboard{
"test_dashboard": {
CreateDashboardRequest: &dashboards.CreateDashboardRequest{
DisplayName: "test_dashboard",
},
},
},
},
}
var tfState = resourcesState{
@@ -813,6 +832,9 @@ func TestTerraformToBundleEmptyRemoteResources(t *testing.T) {
assert.Equal(t, "", config.Resources.Clusters["test_cluster"].ID)
assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Clusters["test_cluster"].ModifiedStatus)
assert.Equal(t, "", config.Resources.Dashboards["test_dashboard"].ID)
assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Dashboards["test_dashboard"].ModifiedStatus)
AssertFullResourceCoverage(t, &config)
}
@@ -927,6 +949,18 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
},
},
},
Dashboards: map[string]*resources.Dashboard{
"test_dashboard": {
CreateDashboardRequest: &dashboards.CreateDashboardRequest{
DisplayName: "test_dashboard",
},
},
"test_dashboard_new": {
CreateDashboardRequest: &dashboards.CreateDashboardRequest{
DisplayName: "test_dashboard_new",
},
},
},
},
}
var tfState = resourcesState{
@@ -1075,6 +1109,22 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
{Attributes: stateInstanceAttributes{ID: "2"}},
},
},
{
Type: "databricks_dashboard",
Mode: "managed",
Name: "test_dashboard",
Instances: []stateResourceInstance{
{Attributes: stateInstanceAttributes{ID: "1"}},
},
},
{
Type: "databricks_dashboard",
Mode: "managed",
Name: "test_dashboard_old",
Instances: []stateResourceInstance{
{Attributes: stateInstanceAttributes{ID: "2"}},
},
},
},
}
err := TerraformToBundle(&tfState, &config)
@@ -1143,6 +1193,13 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
assert.Equal(t, "", config.Resources.Clusters["test_cluster_new"].ID)
assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Clusters["test_cluster_new"].ModifiedStatus)
assert.Equal(t, "1", config.Resources.Dashboards["test_dashboard"].ID)
assert.Equal(t, "", config.Resources.Dashboards["test_dashboard"].ModifiedStatus)
assert.Equal(t, "2", config.Resources.Dashboards["test_dashboard_old"].ID)
assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Dashboards["test_dashboard_old"].ModifiedStatus)
assert.Equal(t, "", config.Resources.Dashboards["test_dashboard_new"].ID)
assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Dashboards["test_dashboard_new"].ModifiedStatus)
AssertFullResourceCoverage(t, &config)
}

View File

@@ -60,6 +60,8 @@ func (m *interpolateMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
path = dyn.NewPath(dyn.Key("databricks_schema")).Append(path[2:]...)
case dyn.Key("clusters"):
path = dyn.NewPath(dyn.Key("databricks_cluster")).Append(path[2:]...)
case dyn.Key("dashboards"):
path = dyn.NewPath(dyn.Key("databricks_dashboard")).Append(path[2:]...)
default:
// Trigger "key not found" for unknown resource types.
return dyn.GetByPath(root, path)

View File

@@ -32,6 +32,7 @@ func TestInterpolate(t *testing.T) {
"other_registered_model": "${resources.registered_models.other_registered_model.id}",
"other_schema": "${resources.schemas.other_schema.id}",
"other_cluster": "${resources.clusters.other_cluster.id}",
"other_dashboard": "${resources.dashboards.other_dashboard.id}",
},
Tasks: []jobs.Task{
{
@@ -69,6 +70,7 @@ func TestInterpolate(t *testing.T) {
assert.Equal(t, "${databricks_registered_model.other_registered_model.id}", j.Tags["other_registered_model"])
assert.Equal(t, "${databricks_schema.other_schema.id}", j.Tags["other_schema"])
assert.Equal(t, "${databricks_cluster.other_cluster.id}", j.Tags["other_cluster"])
assert.Equal(t, "${databricks_dashboard.other_dashboard.id}", j.Tags["other_dashboard"])
m := b.Config.Resources.Models["my_model"]
assert.Equal(t, "my_model", m.Model.Name)

View File

@@ -0,0 +1,109 @@
package tfdyn
import (
"context"
"encoding/json"
"fmt"
"github.com/databricks/cli/bundle/internal/tf/schema"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/log"
)
const (
filePathFieldName = "file_path"
serializedDashboardFieldName = "serialized_dashboard"
)
// Marshal "serialized_dashboard" as JSON if it is set in the input but not in the output.
func marshalSerializedDashboard(vin dyn.Value, vout dyn.Value) (dyn.Value, error) {
// Skip if the "serialized_dashboard" field is already set.
if v := vout.Get(serializedDashboardFieldName); v.IsValid() {
return vout, nil
}
// Skip if the "serialized_dashboard" field on the input is not set.
v := vin.Get(serializedDashboardFieldName)
if !v.IsValid() {
return vout, nil
}
// Marshal the "serialized_dashboard" field as JSON.
data, err := json.Marshal(v.AsAny())
if err != nil {
return dyn.InvalidValue, fmt.Errorf("failed to marshal serialized_dashboard: %w", err)
}
// Set the "serialized_dashboard" field on the output.
return dyn.Set(vout, serializedDashboardFieldName, dyn.V(string(data)))
}
func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, error) {
var err error
// Normalize the output value to the target schema.
vout, diags := convert.Normalize(schema.ResourceDashboard{}, vin)
for _, diag := range diags {
log.Debugf(ctx, "dashboard normalization diagnostic: %s", diag.Summary)
}
// Include "serialized_dashboard" field if "file_path" is set.
// Note: the Terraform resource supports "file_path" natively, but its
// change detection mechanism doesn't work as expected at the time of writing (Sep 30).
if path, ok := vout.Get(filePathFieldName).AsString(); ok {
vout, err = dyn.Set(vout, serializedDashboardFieldName, dyn.V(fmt.Sprintf("${file(%q)}", path)))
if err != nil {
return dyn.InvalidValue, fmt.Errorf("failed to set serialized_dashboard: %w", err)
}
// Drop the "file_path" field. It is mutually exclusive with "serialized_dashboard".
vout, err = dyn.Walk(vout, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
switch len(p) {
case 0:
return v, nil
case 1:
if p[0] == dyn.Key(filePathFieldName) {
return v, dyn.ErrDrop
}
}
// Skip everything else.
return v, dyn.ErrSkip
})
if err != nil {
return dyn.InvalidValue, fmt.Errorf("failed to drop file_path: %w", err)
}
}
// Marshal "serialized_dashboard" as JSON if it is set in the input but not in the output.
vout, err = marshalSerializedDashboard(vin, vout)
if err != nil {
return dyn.InvalidValue, err
}
return vout, nil
}
type dashboardConverter struct{}
func (dashboardConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
vout, err := convertDashboardResource(ctx, vin)
if err != nil {
return err
}
// Add the converted resource to the output.
out.Dashboard[key] = vout.AsAny()
// Configure permissions for this resource.
if permissions := convertPermissionsResource(ctx, vin); permissions != nil {
permissions.DashboardId = fmt.Sprintf("${databricks_dashboard.%s.id}", key)
out.Permissions["dashboard_"+key] = permissions
}
return nil
}
func init() {
registerConverter("dashboards", dashboardConverter{})
}

View File

@@ -0,0 +1,153 @@
package tfdyn
import (
"context"
"testing"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/tf/schema"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConvertDashboard(t *testing.T) {
var src = resources.Dashboard{
CreateDashboardRequest: &dashboards.CreateDashboardRequest{
DisplayName: "my dashboard",
WarehouseId: "f00dcafe",
ParentPath: "/some/path",
},
EmbedCredentials: true,
Permissions: []resources.Permission{
{
Level: "CAN_VIEW",
UserName: "jane@doe.com",
},
},
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert equality on the dashboard
assert.Equal(t, map[string]any{
"display_name": "my dashboard",
"warehouse_id": "f00dcafe",
"parent_path": "/some/path",
"embed_credentials": true,
}, out.Dashboard["my_dashboard"])
// Assert equality on the permissions
assert.Equal(t, &schema.ResourcePermissions{
DashboardId: "${databricks_dashboard.my_dashboard.id}",
AccessControl: []schema.ResourcePermissionsAccessControl{
{
PermissionLevel: "CAN_VIEW",
UserName: "jane@doe.com",
},
},
}, out.Permissions["dashboard_my_dashboard"])
}
func TestConvertDashboardFilePath(t *testing.T) {
var src = resources.Dashboard{
FilePath: "some/path",
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert that the "serialized_dashboard" is included.
assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
"serialized_dashboard": "${file(\"some/path\")}",
})
// Assert that the "file_path" doesn't carry over.
assert.NotSubset(t, out.Dashboard["my_dashboard"], map[string]any{
"file_path": "some/path",
})
}
func TestConvertDashboardFilePathQuoted(t *testing.T) {
var src = resources.Dashboard{
FilePath: `C:\foo\bar\baz\dashboard.lvdash.json`,
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert that the "serialized_dashboard" is included.
assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
"serialized_dashboard": `${file("C:\\foo\\bar\\baz\\dashboard.lvdash.json")}`,
})
// Assert that the "file_path" doesn't carry over.
assert.NotSubset(t, out.Dashboard["my_dashboard"], map[string]any{
"file_path": `C:\foo\bar\baz\dashboard.lvdash.json`,
})
}
func TestConvertDashboardSerializedDashboardString(t *testing.T) {
var src = resources.Dashboard{
SerializedDashboard: `{ "json": true }`,
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert that the "serialized_dashboard" is included.
assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
"serialized_dashboard": `{ "json": true }`,
})
}
func TestConvertDashboardSerializedDashboardAny(t *testing.T) {
var src = resources.Dashboard{
SerializedDashboard: map[string]any{
"pages": []map[string]any{
{
"displayName": "New Page",
"layout": []map[string]any{},
},
},
},
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert that the "serialized_dashboard" is included.
assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
"serialized_dashboard": `{"pages":[{"displayName":"New Page","layout":[]}]}`,
})
}

View File

@@ -13,7 +13,7 @@ import (
// Partial representation of the Terraform state file format.
// We are only interested in the global version and serial numbers,
// plus resource types, names, modes, IDs, and ETags (for dashboards).
type resourcesState struct {
Version int `json:"version"`
Resources []stateResource `json:"resources"`
@@ -34,6 +34,7 @@ type stateResourceInstance struct {
type stateInstanceAttributes struct {
ID string `json:"id"`
ETag string `json:"etag,omitempty"`
}
func ParseResourcesState(ctx context.Context, b *bundle.Bundle) (*resourcesState, error) {

View File

@@ -0,0 +1,20 @@
package bundletest
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/stretchr/testify/require"
)
func Mutate(t *testing.T, b *bundle.Bundle, f func(v dyn.Value) (dyn.Value, error)) {
diags := bundle.ApplyFunc(context.Background(), b, func(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
err := b.Config.Mutate(f)
require.NoError(t, err)
return nil
})
require.NoError(t, diags.Error())
}
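A hypothetical usage sketch; the dashboard key and fixture path are made up, and this assumes dyn.Set accepts a dotted path string in addition to the single-key form used elsewhere in this change:
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
	// Point a dashboard at a local test fixture before applying mutators.
	return dyn.Set(v, "resources.dashboards.dash1.file_path", dyn.V("./testdata/dashboard.lvdash.json"))
})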

View File

@@ -1,3 +1,3 @@
package schema
const ProviderVersion = "1.54.0"

View File

@@ -0,0 +1,15 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type DataSourceNotificationDestinationsNotificationDestinations struct {
DestinationType string `json:"destination_type,omitempty"`
DisplayName string `json:"display_name,omitempty"`
Id string `json:"id,omitempty"`
}
type DataSourceNotificationDestinations struct {
DisplayNameContains string `json:"display_name_contains,omitempty"`
Type string `json:"type,omitempty"`
NotificationDestinations []DataSourceNotificationDestinationsNotificationDestinations `json:"notification_destinations,omitempty"`
}

View File

@@ -0,0 +1,32 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type DataSourceRegisteredModelModelInfoAliases struct {
AliasName string `json:"alias_name,omitempty"`
VersionNum int `json:"version_num,omitempty"`
}
type DataSourceRegisteredModelModelInfo struct {
BrowseOnly bool `json:"browse_only,omitempty"`
CatalogName string `json:"catalog_name,omitempty"`
Comment string `json:"comment,omitempty"`
CreatedAt int `json:"created_at,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
FullName string `json:"full_name,omitempty"`
MetastoreId string `json:"metastore_id,omitempty"`
Name string `json:"name,omitempty"`
Owner string `json:"owner,omitempty"`
SchemaName string `json:"schema_name,omitempty"`
StorageLocation string `json:"storage_location,omitempty"`
UpdatedAt int `json:"updated_at,omitempty"`
UpdatedBy string `json:"updated_by,omitempty"`
Aliases []DataSourceRegisteredModelModelInfoAliases `json:"aliases,omitempty"`
}
type DataSourceRegisteredModel struct {
FullName string `json:"full_name"`
IncludeAliases bool `json:"include_aliases,omitempty"`
IncludeBrowse bool `json:"include_browse,omitempty"`
ModelInfo []DataSourceRegisteredModelModelInfo `json:"model_info,omitempty"`
}

View File

@@ -36,7 +36,9 @@ type DataSources struct {
NodeType map[string]any `json:"databricks_node_type,omitempty"`
Notebook map[string]any `json:"databricks_notebook,omitempty"`
NotebookPaths map[string]any `json:"databricks_notebook_paths,omitempty"`
NotificationDestinations map[string]any `json:"databricks_notification_destinations,omitempty"`
Pipelines map[string]any `json:"databricks_pipelines,omitempty"`
RegisteredModel map[string]any `json:"databricks_registered_model,omitempty"`
Schema map[string]any `json:"databricks_schema,omitempty"`
Schemas map[string]any `json:"databricks_schemas,omitempty"`
ServicePrincipal map[string]any `json:"databricks_service_principal,omitempty"`
@@ -92,7 +94,9 @@ func NewDataSources() *DataSources {
NodeType: make(map[string]any),
Notebook: make(map[string]any),
NotebookPaths: make(map[string]any),
NotificationDestinations: make(map[string]any),
Pipelines: make(map[string]any),
RegisteredModel: make(map[string]any),
Schema: make(map[string]any),
Schemas: make(map[string]any),
ServicePrincipal: make(map[string]any),

View File

@@ -1448,6 +1448,7 @@ type ResourceJobWebhookNotifications struct {
type ResourceJob struct {
AlwaysRunning bool `json:"always_running,omitempty"`
BudgetPolicyId string `json:"budget_policy_id,omitempty"`
ControlRunState bool `json:"control_run_state,omitempty"`
Description string `json:"description,omitempty"`
EditMode string `json:"edit_mode,omitempty"`

View File

@@ -23,5 +23,6 @@ type ResourceOnlineTable struct {
Name string `json:"name"`
Status []any `json:"status,omitempty"`
TableServingUrl string `json:"table_serving_url,omitempty"`
UnityCatalogProvisioningState string `json:"unity_catalog_provisioning_state,omitempty"`
Spec *ResourceOnlineTableSpec `json:"spec,omitempty"`
}

View File

@@ -142,10 +142,26 @@ type ResourcePipelineGatewayDefinition struct {
GatewayStorageSchema string `json:"gateway_storage_schema,omitempty"`
}
type ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration struct {
PrimaryKeys []string `json:"primary_keys,omitempty"`
SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
ScdType string `json:"scd_type,omitempty"`
SequenceBy []string `json:"sequence_by,omitempty"`
}
type ResourcePipelineIngestionDefinitionObjectsReport struct {
DestinationCatalog string `json:"destination_catalog,omitempty"`
DestinationSchema string `json:"destination_schema,omitempty"`
DestinationTable string `json:"destination_table,omitempty"`
SourceUrl string `json:"source_url,omitempty"`
TableConfiguration *ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration `json:"table_configuration,omitempty"`
}
type ResourcePipelineIngestionDefinitionObjectsSchemaTableConfiguration struct {
PrimaryKeys []string `json:"primary_keys,omitempty"`
SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
ScdType string `json:"scd_type,omitempty"`
SequenceBy []string `json:"sequence_by,omitempty"`
}
type ResourcePipelineIngestionDefinitionObjectsSchema struct {
@@ -160,6 +176,7 @@ type ResourcePipelineIngestionDefinitionObjectsTableTableConfiguration struct {
PrimaryKeys []string `json:"primary_keys,omitempty"`
SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
ScdType string `json:"scd_type,omitempty"`
SequenceBy []string `json:"sequence_by,omitempty"`
}
type ResourcePipelineIngestionDefinitionObjectsTable struct {
@@ -173,6 +190,7 @@ type ResourcePipelineIngestionDefinitionObjectsTable struct {
}
type ResourcePipelineIngestionDefinitionObjects struct {
Report *ResourcePipelineIngestionDefinitionObjectsReport `json:"report,omitempty"`
Schema *ResourcePipelineIngestionDefinitionObjectsSchema `json:"schema,omitempty"`
Table *ResourcePipelineIngestionDefinitionObjectsTable `json:"table,omitempty"`
}
@@ -181,6 +199,7 @@ type ResourcePipelineIngestionDefinitionTableConfiguration struct {
PrimaryKeys []string `json:"primary_keys,omitempty"`
SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
ScdType string `json:"scd_type,omitempty"`
SequenceBy []string `json:"sequence_by,omitempty"`
}
type ResourcePipelineIngestionDefinition struct {

View File

@@ -21,7 +21,7 @@ type Root struct {
const ProviderHost = "registry.terraform.io"
const ProviderSource = "databricks/databricks"
const ProviderVersion = "1.54.0"
func NewRoot() *Root {
return &Root{

View File

@@ -36,3 +36,13 @@ func IsWorkspaceLibrary(library *compute.Library) bool {
return IsWorkspacePath(path)
}
// IsVolumesPath returns true if the specified path indicates that
// it should be interpreted as a Databricks Volumes path.
func IsVolumesPath(path string) bool {
return strings.HasPrefix(path, "/Volumes/")
}
func IsWorkspaceSharedPath(path string) bool {
return strings.HasPrefix(path, "/Workspace/Shared/")
}

View File

@@ -31,3 +31,13 @@ func TestIsWorkspaceLibrary(t *testing.T) {
// Empty.
assert.False(t, IsWorkspaceLibrary(&compute.Library{}))
}
func TestIsVolumesPath(t *testing.T) {
// Absolute paths with particular prefixes.
assert.True(t, IsVolumesPath("/Volumes/path/to/package"))
// Relative paths.
assert.False(t, IsVolumesPath("myfile.txt"))
assert.False(t, IsVolumesPath("./myfile.txt"))
assert.False(t, IsVolumesPath("../myfile.txt"))
}

bundle/paths/paths.go Normal file
View File

@@ -0,0 +1,39 @@
package paths
import (
"strings"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/libraries"
)
func CollectUniqueWorkspacePathPrefixes(workspace config.Workspace) []string {
rootPath := workspace.RootPath
paths := []string{}
if !libraries.IsVolumesPath(rootPath) {
paths = append(paths, rootPath)
}
if !strings.HasSuffix(rootPath, "/") {
rootPath += "/"
}
for _, p := range []string{
workspace.ArtifactPath,
workspace.FilePath,
workspace.StatePath,
workspace.ResourcePath,
} {
if libraries.IsVolumesPath(p) {
continue
}
if strings.HasPrefix(p, rootPath) {
continue
}
paths = append(paths, p)
}
return paths
}
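A quick illustration with made-up paths; anything under the root or on /Volumes is filtered out:
ws := config.Workspace{
	RootPath:     "/Workspace/Users/foo@bar.com",
	ArtifactPath: "/Workspace/Users/other@bar.com/artifacts", // outside the root: kept
	FilePath:     "/Workspace/Users/foo@bar.com/files",       // under the root: skipped
	StatePath:    "/Volumes/main/default/vol/state",          // Volumes path: skipped
	ResourcePath: "/Workspace/Users/foo@bar.com/resources",   // under the root: skipped
}
_ = paths.CollectUniqueWorkspacePathPrefixes(ws)
// -> ["/Workspace/Users/foo@bar.com", "/Workspace/Users/other@bar.com/artifacts"]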

View File

@@ -39,6 +39,10 @@ var levelsMap = map[string](map[string]string){
CAN_VIEW: "CAN_VIEW",
CAN_RUN: "CAN_QUERY",
},
"dashboards": {
CAN_MANAGE: "CAN_MANAGE",
CAN_VIEW: "CAN_READ",
},
}
type bundlePermissions struct{}

View File

@@ -3,9 +3,9 @@ package permissions
import (
"context"
"fmt"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/libraries"
"github.com/databricks/cli/libs/diag"
)
@@ -21,17 +21,13 @@
}
func (*validateSharedRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
if libraries.IsWorkspaceSharedPath(b.Config.Workspace.RootPath) {
return isUsersGroupPermissionSet(b)
}
return nil
}
func isWorkspaceSharedRoot(path string) bool {
return strings.HasPrefix(path, "/Workspace/Shared/")
}
// isUsersGroupPermissionSet checks that top-level permissions set for bundle contain group_name: users with CAN_MANAGE permission.
func isUsersGroupPermissionSet(b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics

View File

@@ -0,0 +1,89 @@
package permissions
import (
"fmt"
"strings"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/service/workspace"
)
type WorkspacePathPermissions struct {
Path string
Permissions []resources.Permission
}
func ObjectAclToResourcePermissions(path string, acl []workspace.WorkspaceObjectAccessControlResponse) *WorkspacePathPermissions {
permissions := make([]resources.Permission, 0)
for _, a := range acl {
// Skip the admin group because it's added to all resources by default.
if a.GroupName == "admins" {
continue
}
for _, pl := range a.AllPermissions {
permissions = append(permissions, resources.Permission{
Level: convertWorkspaceObjectPermissionLevel(pl.PermissionLevel),
GroupName: a.GroupName,
UserName: a.UserName,
ServicePrincipalName: a.ServicePrincipalName,
})
}
}
return &WorkspacePathPermissions{Permissions: permissions, Path: path}
}
func (p WorkspacePathPermissions) Compare(perms []resources.Permission) diag.Diagnostics {
var diags diag.Diagnostics
// Check the permissions in the workspace and see if they are all set in the bundle.
ok, missing := containsAll(p.Permissions, perms)
if !ok {
diags = diags.Append(diag.Diagnostic{
Severity: diag.Warning,
Summary: "untracked permissions apply to target workspace path",
Detail: fmt.Sprintf("The following permissions apply to the workspace folder at %q but are not configured in the bundle:\n%s", p.Path, toString(missing)),
})
}
return diags
}
// containsAll checks if every permission in permA is also present in permB.
func containsAll(permA []resources.Permission, permB []resources.Permission) (bool, []resources.Permission) {
missing := make([]resources.Permission, 0)
for _, a := range permA {
found := false
for _, b := range permB {
if a == b {
found = true
break
}
}
if !found {
missing = append(missing, a)
}
}
return len(missing) == 0, missing
}
// convertWorkspaceObjectPermissionLevel converts matching object permission levels to bundle ones.
// If there is no matching permission level, it returns the permission level as is, for example, CAN_EDIT.
func convertWorkspaceObjectPermissionLevel(level workspace.WorkspaceObjectPermissionLevel) string {
switch level {
case workspace.WorkspaceObjectPermissionLevelCanRead:
return CAN_VIEW
default:
return string(level)
}
}
func toString(p []resources.Permission) string {
var sb strings.Builder
for _, perm := range p {
sb.WriteString(fmt.Sprintf("- %s\n", perm.String()))
}
return sb.String()
}

View File

@@ -0,0 +1,121 @@
package permissions
import (
"testing"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
)
func TestWorkspacePathPermissionsCompare(t *testing.T) {
testCases := []struct {
perms []resources.Permission
acl []workspace.WorkspaceObjectAccessControlResponse
expected diag.Diagnostics
}{
{
perms: []resources.Permission{
{Level: CAN_MANAGE, UserName: "foo@bar.com"},
},
acl: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
expected: nil,
},
{
perms: []resources.Permission{
{Level: CAN_MANAGE, UserName: "foo@bar.com"},
},
acl: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
{
GroupName: "admins",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
expected: nil,
},
{
perms: []resources.Permission{
{Level: CAN_VIEW, UserName: "foo@bar.com"},
{Level: CAN_MANAGE, ServicePrincipalName: "sp.com"},
},
acl: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_READ"},
},
},
},
expected: nil,
},
{
perms: []resources.Permission{
{Level: CAN_MANAGE, UserName: "foo@bar.com"},
},
acl: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
{
GroupName: "foo",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
expected: diag.Diagnostics{
{
Severity: diag.Warning,
Summary: "untracked permissions apply to target workspace path",
Detail: "The following permissions apply to the workspace folder at \"path\" but are not configured in the bundle:\n- level: CAN_MANAGE, group_name: foo\n",
},
},
},
{
perms: []resources.Permission{
{Level: CAN_MANAGE, UserName: "foo@bar.com"},
},
acl: []workspace.WorkspaceObjectAccessControlResponse{
{
UserName: "foo2@bar.com",
AllPermissions: []workspace.WorkspaceObjectPermission{
{PermissionLevel: "CAN_MANAGE"},
},
},
},
expected: diag.Diagnostics{
{
Severity: diag.Warning,
Summary: "untracked permissions apply to target workspace path",
Detail: "The following permissions apply to the workspace folder at \"path\" but are not configured in the bundle:\n- level: CAN_MANAGE, user_name: foo2@bar.com\n",
},
},
},
}
for _, tc := range testCases {
wp := ObjectAclToResourcePermissions("path", tc.acl)
diags := wp.Compare(tc.perms)
require.Equal(t, tc.expected, diags)
}
}

View File

@ -5,8 +5,11 @@ import (
"fmt" "fmt"
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/libraries"
"github.com/databricks/cli/bundle/paths"
"github.com/databricks/cli/libs/diag" "github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/service/workspace" "github.com/databricks/databricks-sdk-go/service/workspace"
"golang.org/x/sync/errgroup"
) )
type workspaceRootPermissions struct { type workspaceRootPermissions struct {
@ -34,7 +37,7 @@ func giveAccessForWorkspaceRoot(ctx context.Context, b *bundle.Bundle) error {
permissions := make([]workspace.WorkspaceObjectAccessControlRequest, 0)
for _, p := range b.Config.Permissions {
level, err := GetWorkspaceObjectPermissionLevel(p.Level)
if err != nil {
return err
}
@ -52,20 +55,39 @@ func giveAccessForWorkspaceRoot(ctx context.Context, b *bundle.Bundle) error {
}
w := b.WorkspaceClient().Workspace
bundlePaths := paths.CollectUniqueWorkspacePathPrefixes(b.Config.Workspace)
g, ctx := errgroup.WithContext(ctx)
for _, p := range bundlePaths {
g.Go(func() error {
return setPermissions(ctx, w, p, permissions)
})
}
return g.Wait()
}
func setPermissions(ctx context.Context, w workspace.WorkspaceInterface, path string, permissions []workspace.WorkspaceObjectAccessControlRequest) error {
// If the folder is shared, we don't need to set permissions: they are always set for all users, and earlier mutators have already validated this.
if libraries.IsWorkspaceSharedPath(path) {
return nil
}
obj, err := w.GetStatusByPath(ctx, path)
if err != nil {
return err
}
_, err = w.SetPermissions(ctx, workspace.WorkspaceObjectPermissionsRequest{
WorkspaceObjectId: fmt.Sprint(obj.ObjectId),
WorkspaceObjectType: "directories",
AccessControlList: permissions,
})
return err
}
func GetWorkspaceObjectPermissionLevel(bundlePermission string) (workspace.WorkspaceObjectPermissionLevel, error) {
switch bundlePermission {
case CAN_MANAGE:
return workspace.WorkspaceObjectPermissionLevelCanManage, nil
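One subtlety in the errgroup fan-out above: each goroutine closes over the loop variable p. With Go 1.22 and later, every iteration gets a fresh p, so the code is correct as written; on older toolchains the variable would have to be shadowed. A sketch of the defensive form, using the same names:

    for _, p := range bundlePaths {
        p := p // redundant on Go 1.22+, required earlier so each goroutine captures its own copy
        g.Go(func() error {
            return setPermissions(ctx, w, p, permissions)
        })
    }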

View File

@ -22,6 +22,10 @@ func TestApplyWorkspaceRootPermissions(t *testing.T) {
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Users/foo@bar.com",
ArtifactPath: "/Users/foo@bar.com/artifacts",
FilePath: "/Users/foo@bar.com/files",
StatePath: "/Users/foo@bar.com/state",
ResourcePath: "/Users/foo@bar.com/resources",
},
Permissions: []resources.Permission{
{Level: CAN_MANAGE, UserName: "TestUser"},
@ -59,7 +63,7 @@ func TestApplyWorkspaceRootPermissions(t *testing.T) {
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Users/foo@bar.com").Return(&workspace.ObjectInfo{
ObjectId: 1234,
}, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
@ -72,3 +76,116 @@ func TestApplyWorkspaceRootPermissions(t *testing.T) {
diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions(), ApplyWorkspaceRootPermissions()))
require.Empty(t, diags)
}
func TestApplyWorkspaceRootPermissionsForAllPaths(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Some/Root/Path",
ArtifactPath: "/Users/foo@bar.com/artifacts",
FilePath: "/Users/foo@bar.com/files",
StatePath: "/Users/foo@bar.com/state",
ResourcePath: "/Users/foo@bar.com/resources",
},
Permissions: []resources.Permission{
{Level: CAN_MANAGE, UserName: "TestUser"},
{Level: CAN_VIEW, GroupName: "TestGroup"},
{Level: CAN_RUN, ServicePrincipalName: "TestServicePrincipal"},
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job_1": {JobSettings: &jobs.JobSettings{Name: "job_1"}},
"job_2": {JobSettings: &jobs.JobSettings{Name: "job_2"}},
},
Pipelines: map[string]*resources.Pipeline{
"pipeline_1": {PipelineSpec: &pipelines.PipelineSpec{}},
"pipeline_2": {PipelineSpec: &pipelines.PipelineSpec{}},
},
Models: map[string]*resources.MlflowModel{
"model_1": {Model: &ml.Model{}},
"model_2": {Model: &ml.Model{}},
},
Experiments: map[string]*resources.MlflowExperiment{
"experiment_1": {Experiment: &ml.Experiment{}},
"experiment_2": {Experiment: &ml.Experiment{}},
},
ModelServingEndpoints: map[string]*resources.ModelServingEndpoint{
"endpoint_1": {CreateServingEndpoint: &serving.CreateServingEndpoint{}},
"endpoint_2": {CreateServingEndpoint: &serving.CreateServingEndpoint{}},
},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
workspaceApi := m.GetMockWorkspaceAPI()
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Some/Root/Path").Return(&workspace.ObjectInfo{
ObjectId: 1,
}, nil)
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Users/foo@bar.com/artifacts").Return(&workspace.ObjectInfo{
ObjectId: 2,
}, nil)
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Users/foo@bar.com/files").Return(&workspace.ObjectInfo{
ObjectId: 3,
}, nil)
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Users/foo@bar.com/state").Return(&workspace.ObjectInfo{
ObjectId: 4,
}, nil)
workspaceApi.EXPECT().GetStatusByPath(mock.Anything, "/Users/foo@bar.com/resources").Return(&workspace.ObjectInfo{
ObjectId: 5,
}, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
{ServicePrincipalName: "TestServicePrincipal", PermissionLevel: "CAN_RUN"},
},
WorkspaceObjectId: "1",
WorkspaceObjectType: "directories",
}).Return(nil, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
{ServicePrincipalName: "TestServicePrincipal", PermissionLevel: "CAN_RUN"},
},
WorkspaceObjectId: "2",
WorkspaceObjectType: "directories",
}).Return(nil, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
{ServicePrincipalName: "TestServicePrincipal", PermissionLevel: "CAN_RUN"},
},
WorkspaceObjectId: "3",
WorkspaceObjectType: "directories",
}).Return(nil, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
{ServicePrincipalName: "TestServicePrincipal", PermissionLevel: "CAN_RUN"},
},
WorkspaceObjectId: "4",
WorkspaceObjectType: "directories",
}).Return(nil, nil)
workspaceApi.EXPECT().SetPermissions(mock.Anything, workspace.WorkspaceObjectPermissionsRequest{
AccessControlList: []workspace.WorkspaceObjectAccessControlRequest{
{UserName: "TestUser", PermissionLevel: "CAN_MANAGE"},
{GroupName: "TestGroup", PermissionLevel: "CAN_READ"},
{ServicePrincipalName: "TestServicePrincipal", PermissionLevel: "CAN_RUN"},
},
WorkspaceObjectId: "5",
WorkspaceObjectType: "directories",
}).Return(nil, nil)
diags := bundle.Apply(context.Background(), b, ApplyWorkspaceRootPermissions())
require.NoError(t, diags.Error())
}

View File

@ -152,6 +152,7 @@ func Deploy(outputHandler sync.OutputHandler) bundle.Mutator {
bundle.Defer(
bundle.Seq(
terraform.StatePull(),
terraform.CheckDashboardsModifiedRemotely(),
deploy.StatePull(),
mutator.ValidateGitDetails(),
artifacts.CleanUp(),

View File

@ -66,6 +66,7 @@ func Initialize() bundle.Mutator {
permissions.PermissionDiagnostics(),
mutator.SetRunAs(),
mutator.OverrideCompute(),
mutator.ConfigureDashboardDefaults(),
mutator.ProcessTargetMode(),
mutator.ApplyPresets(),
mutator.DefaultQueueing(),

View File

@ -0,0 +1,17 @@
package resources
import "github.com/databricks/cli/bundle"
// Completions returns the same as [References] except
// that every key maps directly to a single reference.
func Completions(b *bundle.Bundle, filters ...Filter) map[string]Reference {
out := make(map[string]Reference)
keyOnlyRefs, _ := References(b, filters...)
for k, refs := range keyOnlyRefs {
if len(refs) != 1 {
continue
}
out[k] = refs[0]
}
return out
}
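These unambiguous keys feed shell completion. A sketch of the consumption side (mirroring the dashboard completion function later in this diff; the filter argument is an illustrative example):

    // Inside a cobra completion function, given a configured *bundle.Bundle b:
    keys := maps.Keys(Completions(b, filter))
    return keys, cobra.ShellCompDirectiveNoFileComp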

View File

@ -0,0 +1,58 @@
package resources
import (
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/stretchr/testify/assert"
)
func TestCompletions_SkipDuplicates(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
"bar": {},
},
Pipelines: map[string]*resources.Pipeline{
"foo": {},
},
},
},
}
// Test that this skips duplicates and only includes unambiguous completions.
out := Completions(b)
if assert.Len(t, out, 1) {
assert.Contains(t, out, "bar")
}
}
func TestCompletions_Filter(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
},
Pipelines: map[string]*resources.Pipeline{
"bar": {},
},
},
},
}
includeJobs := func(ref Reference) bool {
_, ok := ref.Resource.(*resources.Job)
return ok
}
// Test that this does not include the pipeline.
out := Completions(b, includeJobs)
if assert.Len(t, out, 1) {
assert.Contains(t, out, "foo")
}
}

View File

@ -0,0 +1,98 @@
package resources
import (
"fmt"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
)
// Reference is a reference to a resource.
// It includes the resource type description, and a reference to the resource itself.
type Reference struct {
// Key is the unique key of the resource, e.g. "my_job".
Key string
// KeyWithType is the unique key of the resource, including the resource type, e.g. "jobs.my_job".
KeyWithType string
// Description is the resource type description.
Description config.ResourceDescription
// Resource is the resource itself.
Resource config.ConfigResource
}
// Map is the core type for resource lookup and completion.
type Map map[string][]Reference
// Filter defines the function signature for filtering resources.
type Filter func(Reference) bool
// includeReference checks if the specified reference passes all filters.
// If the list of filters is empty, the reference is always included.
func includeReference(filters []Filter, ref Reference) bool {
for _, filter := range filters {
if !filter(ref) {
return false
}
}
return true
}
// References returns maps of resource keys to a slice of [Reference].
//
// The first map is indexed by the resource key only.
// The second map is indexed by the resource type name and its key.
//
// While the return types allow for multiple resources to share the same key,
// this is confirmed not to happen in the [validate.UniqueResourceKeys] mutator.
func References(b *bundle.Bundle, filters ...Filter) (Map, Map) {
keyOnly := make(Map)
keyWithType := make(Map)
// Collect map of resource references indexed by their keys.
for _, group := range b.Config.Resources.AllResources() {
for k, v := range group.Resources {
ref := Reference{
Key: k,
KeyWithType: fmt.Sprintf("%s.%s", group.Description.PluralName, k),
Description: group.Description,
Resource: v,
}
// Skip resources that do not pass all filters.
if !includeReference(filters, ref) {
continue
}
keyOnly[ref.Key] = append(keyOnly[ref.Key], ref)
keyWithType[ref.KeyWithType] = append(keyWithType[ref.KeyWithType], ref)
}
}
return keyOnly, keyWithType
}
// Lookup returns the resource with the specified key.
// If the key maps to more than one resource, an error is returned.
// If the key does not map to any resource, an error is returned.
func Lookup(b *bundle.Bundle, key string, filters ...Filter) (Reference, error) {
keyOnlyRefs, keyWithTypeRefs := References(b, filters...)
refs, ok := keyOnlyRefs[key]
if !ok {
refs, ok = keyWithTypeRefs[key]
if !ok {
return Reference{}, fmt.Errorf("resource with key %q not found", key)
}
}
switch {
case len(refs) == 1:
return refs[0], nil
case len(refs) > 1:
return Reference{}, fmt.Errorf("multiple resources with key %q found", key)
default:
panic("unreachable")
}
}
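Both probe orders are worth seeing side by side; for a hypothetical job with key my_job:

    ref, err := Lookup(b, "my_job")     // matched by key alone when the key is unambiguous
    ref, err = Lookup(b, "jobs.my_job") // type-qualified form, resolves cross-type key collisions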

View File

@ -0,0 +1,117 @@
package resources
import (
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLookup_EmptyBundle(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{},
},
}
_, err := Lookup(b, "foo")
require.Error(t, err)
assert.ErrorContains(t, err, "resource with key \"foo\" not found")
}
func TestLookup_NotFound(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
"bar": {},
},
},
},
}
_, err := Lookup(b, "qux")
require.Error(t, err)
assert.ErrorContains(t, err, `resource with key "qux" not found`)
}
func TestLookup_MultipleFound(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
},
Pipelines: map[string]*resources.Pipeline{
"foo": {},
},
},
},
}
_, err := Lookup(b, "foo")
require.Error(t, err)
assert.ErrorContains(t, err, `multiple resources with key "foo" found`)
}
func TestLookup_Nominal(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Name: "Foo job",
},
},
},
},
},
}
// Lookup by key only.
out, err := Lookup(b, "foo")
if assert.NoError(t, err) {
assert.Equal(t, "Foo job", out.Resource.GetName())
}
// Lookup by type and key.
out, err = Lookup(b, "jobs.foo")
if assert.NoError(t, err) {
assert.Equal(t, "Foo job", out.Resource.GetName())
}
}
func TestLookup_NominalWithFilters(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
},
Pipelines: map[string]*resources.Pipeline{
"bar": {},
},
},
},
}
includeJobs := func(ref Reference) bool {
_, ok := ref.Resource.(*resources.Job)
return ok
}
// This should succeed because the filter includes jobs.
_, err := Lookup(b, "foo", includeJobs)
require.NoError(t, err)
// This should fail because the filter excludes pipelines.
_, err = Lookup(b, "bar", includeJobs)
require.Error(t, err)
assert.ErrorContains(t, err, `resource with key "bar" not found`)
}

View File

@ -317,6 +317,29 @@ func (r *jobRunner) Cancel(ctx context.Context) error {
return errGroup.Wait()
}
func (r *jobRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
// We don't need to cancel existing runs if the job is continuous and unpaused.
// The /jobs/run-now API will automatically cancel any existing runs before starting a new one.
//
// /jobs/run-now will not cancel existing runs if the job is continuous and paused.
// New job runs will be queued instead and will wait for existing runs to finish.
// In this case, we need to cancel the existing runs before starting a new one.
continuous := r.job.JobSettings.Continuous
if continuous != nil && continuous.PauseStatus == jobs.PauseStatusUnpaused {
return r.Run(ctx, opts)
}
s := cmdio.Spinner(ctx)
s <- "Cancelling all active job runs"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *jobRunner) ParseArgs(args []string, opts *Options) error {
return r.posArgsHandler().ParseArgs(args, opts)
}
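The branch at the top of Restart reduces to a single predicate; spelled out with the same fields:

    // Skip cancellation only for continuous jobs that are unpaused;
    // for those, the /jobs/run-now API cancels the active run itself.
    c := r.job.JobSettings.Continuous
    skipCancel := c != nil && c.PauseStatus == jobs.PauseStatusUnpaused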

View File

@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,6 +9,8 @@ import (
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/jobs" "github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/mock" "github.com/stretchr/testify/mock"
@ -126,3 +129,132 @@ func TestJobRunnerCancelWithNoActiveRuns(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestJobRunnerRestart(t *testing.T) {
for _, jobSettings := range []*jobs.JobSettings{
{},
{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusPaused,
},
},
} {
job := &resources.Job{
ID: "123",
JobSettings: jobSettings,
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", ""))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
jobApi.EXPECT().ListRunsAll(mock.Anything, jobs.ListRunsRequest{
ActiveOnly: true,
JobId: 123,
}).Return([]jobs.BaseRun{
{RunId: 1},
{RunId: 2},
}, nil)
// Mock the runner cancelling existing job runs.
mockWait := &jobs.WaitGetRunJobTerminatedOrSkipped[struct{}]{
Poll: func(time time.Duration, f func(j *jobs.Run)) (*jobs.Run, error) {
return nil, nil
},
}
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 1,
}).Return(mockWait, nil)
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 2,
}).Return(mockWait, nil)
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}
}
func TestJobRunnerRestartForContinuousUnpausedJobs(t *testing.T) {
job := &resources.Job{
ID: "123",
JobSettings: &jobs.JobSettings{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusUnpaused,
},
},
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
// The runner should not try to cancel existing job runs for unpaused continuous jobs.
jobApi.AssertNotCalled(t, "ListRunsAll")
jobApi.AssertNotCalled(t, "CancelRun")
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}

View File

@ -1,69 +0,0 @@
package run
import (
"fmt"
"github.com/databricks/cli/bundle"
"golang.org/x/exp/maps"
)
// RunnerLookup maps identifiers to a list of workloads that match that identifier.
// The list can have more than 1 entry if resources of different types use the
// same key. When this happens, the user should disambiguate between them.
type RunnerLookup map[string][]Runner
// ResourceKeys computes maps from resource key to runner, indexed by key alone and by type-qualified key.
func ResourceKeys(b *bundle.Bundle) (keyOnly RunnerLookup, keyWithType RunnerLookup) {
keyOnly = make(RunnerLookup)
keyWithType = make(RunnerLookup)
r := b.Config.Resources
for k, v := range r.Jobs {
kt := fmt.Sprintf("jobs.%s", k)
w := jobRunner{key: key(kt), bundle: b, job: v}
keyOnly[k] = append(keyOnly[k], &w)
keyWithType[kt] = append(keyWithType[kt], &w)
}
for k, v := range r.Pipelines {
kt := fmt.Sprintf("pipelines.%s", k)
w := pipelineRunner{key: key(kt), bundle: b, pipeline: v}
keyOnly[k] = append(keyOnly[k], &w)
keyWithType[kt] = append(keyWithType[kt], &w)
}
return
}
// ResourceCompletionMap returns a map of resource keys to their respective names.
func ResourceCompletionMap(b *bundle.Bundle) map[string]string {
out := make(map[string]string)
keyOnly, keyWithType := ResourceKeys(b)
// Keep track of resources we have seen by their fully qualified key.
seen := make(map[string]bool)
// First add resources that can be identified by key alone.
for k, v := range keyOnly {
// Invariant: len(v) >= 1. See [ResourceKeys].
if len(v) == 1 {
seen[v[0].Key()] = true
out[k] = v[0].Name()
}
}
// Then add resources that can only be identified by their type and key.
for k, v := range keyWithType {
// Invariant: len(v) == 1. See [ResourceKeys].
_, ok := seen[v[0].Key()]
if ok {
continue
}
out[k] = v[0].Name()
}
return out
}
// ResourceCompletions returns a list of keys that unambiguously reference resources in the bundle.
func ResourceCompletions(b *bundle.Bundle) []string {
return maps.Keys(ResourceCompletionMap(b))
}

View File

@ -1,25 +0,0 @@
package run
import (
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/stretchr/testify/assert"
)
func TestResourceCompletionsUnique(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
"bar": {},
},
},
},
}
assert.ElementsMatch(t, []string{"foo", "bar"}, ResourceCompletions(b))
}

View File

@ -183,6 +183,18 @@ func (r *pipelineRunner) Cancel(ctx context.Context) error {
return err
}
func (r *pipelineRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
s := cmdio.Spinner(ctx)
s <- "Cancelling the active pipeline update"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *pipelineRunner) ParseArgs(args []string, opts *Options) error {
if len(args) == 0 {
return nil

View File

@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,8 +9,12 @@ import (
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
sdk_config "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/pipelines" "github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -47,3 +52,68 @@ func TestPipelineRunnerCancel(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestPipelineRunnerRestart(t *testing.T) {
pipeline := &resources.Pipeline{
ID: "123",
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"test_pipeline": pipeline,
},
},
},
}
runner := pipelineRunner{key: "test", bundle: b, pipeline: pipeline}
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdk_config.Config{
Host: "https://test.com",
}
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
mockWait := &pipelines.WaitGetPipelineIdle[struct{}]{
Poll: func(time.Duration, func(*pipelines.GetPipelineResponse)) (*pipelines.GetPipelineResponse, error) {
return nil, nil
},
}
pipelineApi := m.GetMockPipelinesAPI()
pipelineApi.EXPECT().Stop(mock.Anything, pipelines.StopRequest{
PipelineId: "123",
}).Return(mockWait, nil)
pipelineApi.EXPECT().GetByPipelineId(mock.Anything, "123").Return(&pipelines.GetPipelineResponse{}, nil)
// Mock runner starting a new update
pipelineApi.EXPECT().StartUpdate(mock.Anything, pipelines.StartUpdate{
PipelineId: "123",
}).Return(&pipelines.StartUpdateResponse{
UpdateId: "456",
}, nil)
// Mock runner polling for events
pipelineApi.EXPECT().ListPipelineEventsAll(mock.Anything, pipelines.ListPipelineEventsRequest{
Filter: `update_id = '456'`,
MaxResults: 100,
PipelineId: "123",
}).Return([]pipelines.PipelineEvent{}, nil)
// Mock runner polling for update status
pipelineApi.EXPECT().GetUpdateByPipelineIdAndUpdateId(mock.Anything, "123", "456").
Return(&pipelines.GetUpdateResponse{
Update: &pipelines.UpdateInfo{
State: pipelines.UpdateInfoStateCompleted,
},
}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}

View File

@ -3,9 +3,10 @@ package run
import (
"context"
"fmt"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/resources"
refs "github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/bundle/run/output"
)
@ -27,6 +28,10 @@ type Runner interface {
// Run the underlying workflow.
Run(ctx context.Context, opts *Options) (output.RunOutput, error)
// Restart the underlying workflow by cancelling any existing runs before
// starting a new one.
Restart(ctx context.Context, opts *Options) (output.RunOutput, error)
// Cancel the underlying workflow.
Cancel(ctx context.Context) error
@ -34,34 +39,24 @@ type Runner interface {
argsHandler
}
// IsRunnable returns a filter that only allows runnable resources.
func IsRunnable(ref refs.Reference) bool {
switch ref.Resource.(type) {
case *resources.Job, *resources.Pipeline:
return true
default:
return false
}
}
// ToRunner converts a resource reference to a runnable resource.
func ToRunner(b *bundle.Bundle, ref refs.Reference) (Runner, error) {
switch resource := ref.Resource.(type) {
case *resources.Job:
return &jobRunner{key: key(ref.KeyWithType), bundle: b, job: resource}, nil
case *resources.Pipeline:
return &pipelineRunner{key: key(ref.KeyWithType), bundle: b, pipeline: resource}, nil
default:
return nil, fmt.Errorf("unsupported resource type: %T", resource)
}
}
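Combined with the bundle/resources package introduced earlier in this diff, the run command can resolve its positional argument roughly as follows (a sketch of the intended wiring, not a verbatim excerpt):

    ref, err := refs.Lookup(b, arg, IsRunnable) // accepts "my_job" or "jobs.my_job"
    if err != nil {
        return nil, err
    }
    return ToRunner(b, ref)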

View File

@ -3,82 +3,14 @@ package run
import (
"testing"
"github.com/databricks/cli/bundle/config/resources"
refs "github.com/databricks/cli/bundle/resources"
"github.com/stretchr/testify/assert"
)
func TestRunner_IsRunnable(t *testing.T) {
assert.True(t, IsRunnable(refs.Reference{Resource: &resources.Job{}}))
assert.True(t, IsRunnable(refs.Reference{Resource: &resources.Pipeline{}}))
assert.False(t, IsRunnable(refs.Reference{Resource: &resources.MlflowModel{}}))
assert.False(t, IsRunnable(refs.Reference{Resource: &resources.MlflowExperiment{}}))
}

View File

@ -180,6 +180,48 @@
}
]
},
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"properties": {
"display_name": {
"description": "The display name of the dashboard.",
"$ref": "#/$defs/string"
},
"embed_credentials": {
"$ref": "#/$defs/bool"
},
"file_path": {
"$ref": "#/$defs/string"
},
"parent_path": {
"description": "The workspace path of the folder containing the dashboard. Includes leading slash and no\ntrailing slash.\nThis field is excluded in List Dashboards responses.",
"$ref": "#/$defs/string"
},
"permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission"
},
"serialized_dashboard": {
"description": "The contents of the dashboard in serialized string form.\nThis field is excluded in List Dashboards responses.\nUse the [get dashboard API](https://docs.databricks.com/api/workspace/lakeview/get)\nto retrieve an example response, which includes the `serialized_dashboard` field.\nThis field provides the structure of the JSON string that represents the dashboard's\nlayout and components.",
"$ref": "#/$defs/interface"
},
"warehouse_id": {
"description": "The warehouse ID used to run the dashboard.",
"$ref": "#/$defs/string"
}
},
"additionalProperties": false,
"required": [
"display_name"
]
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Grant": { "resources.Grant": {
"anyOf": [ "anyOf": [
{ {
@ -209,6 +251,10 @@
{
"type": "object",
"properties": {
"budget_policy_id": {
"description": "The id of the user specified budget policy to use for this job.\nIf not specified, a default budget policy may be applied when creating or modifying the job.\nSee `effective_budget_policy_id` for the budget policy used by this workload.",
"$ref": "#/$defs/string"
},
"continuous": { "continuous": {
"description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.", "description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous"
@ -1050,6 +1096,9 @@
"clusters": { "clusters": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster"
}, },
"dashboards": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Dashboard"
},
"experiments": { "experiments": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment"
}, },
@ -3901,6 +3950,10 @@
{
"type": "object",
"properties": {
"report": {
"description": "Select tables from a specific source report.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.ReportSpec"
},
"schema": { "schema": {
"description": "Select tables from a specific source schema.", "description": "Select tables from a specific source schema.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec"
@ -4233,6 +4286,40 @@
}
]
},
"pipelines.ReportSpec": {
"anyOf": [
{
"type": "object",
"properties": {
"destination_catalog": {
"description": "Required. Destination catalog to store table.",
"$ref": "#/$defs/string"
},
"destination_schema": {
"description": "Required. Destination schema to store table.",
"$ref": "#/$defs/string"
},
"destination_table": {
"description": "Required. Destination table name. The pipeline fails if a table with that name already exists.",
"$ref": "#/$defs/string"
},
"source_url": {
"description": "Required. Report URL in the source system.",
"$ref": "#/$defs/string"
},
"table_configuration": {
"description": "Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfig"
}
},
"additionalProperties": false
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"pipelines.SchemaSpec": { "pipelines.SchemaSpec": {
"anyOf": [ "anyOf": [
{ {
@ -4281,7 +4368,7 @@
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"destination_table": { "destination_table": {
"description": "Optional. Destination table name. The pipeline fails If a table with that name already exists. If not set, the source table name is used.", "description": "Optional. Destination table name. The pipeline fails if a table with that name already exists. If not set, the source table name is used.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"source_catalog": { "source_catalog": {
@ -4329,6 +4416,10 @@
"SCD_TYPE_1", "SCD_TYPE_1",
"SCD_TYPE_2" "SCD_TYPE_2"
] ]
},
"sequence_by": {
"description": "The column names specifying the logical order of events in the source data. Delta Live Tables uses this sequencing to handle change events that arrive out of order.",
"$ref": "#/$defs/slice/string"
} }
},
"additionalProperties": false
@ -5246,6 +5337,20 @@
}
]
},
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"additionalProperties": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/resources.Dashboard"
}
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Job": { "resources.Job": {
"anyOf": [ "anyOf": [
{ {

View File

@ -27,5 +27,6 @@ func New() *cobra.Command {
cmd.AddCommand(newGenerateCommand())
cmd.AddCommand(newDebugCommand())
cmd.AddCommand(deployment.NewDeploymentCommand())
cmd.AddCommand(newOpenCommand())
return cmd
}

View File

@ -16,6 +16,7 @@ func newGenerateCommand() *cobra.Command {
cmd.AddCommand(generate.NewGenerateJobCommand())
cmd.AddCommand(generate.NewGeneratePipelineCommand())
cmd.AddCommand(generate.NewGenerateDashboardCommand())
cmd.PersistentFlags().StringVar(&key, "key", "", `resource key to use for the generated configuration`)
return cmd
}

View File

@ -0,0 +1,467 @@
package generate
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"os"
"path"
"path/filepath"
"strings"
"time"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/generate"
"github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/render"
"github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/yamlsaver"
"github.com/databricks/cli/libs/textutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
"gopkg.in/yaml.v3"
)
type dashboard struct {
// Lookup flags for one-time generate.
existingPath string
existingID string
// Lookup flag for existing bundle resource.
resource string
// Where to write the configuration and dashboard representation.
resourceDir string
dashboardDir string
// Force overwrite of existing files.
force bool
// Watch for changes to the dashboard.
watch bool
// Relative path from the resource directory to the dashboard directory.
relativeDashboardDir string
}
func (d *dashboard) resolveID(ctx context.Context, b *bundle.Bundle) (string, diag.Diagnostics) {
switch {
case d.existingPath != "":
return d.resolveFromPath(ctx, b)
case d.existingID != "":
return d.resolveFromID(ctx, b)
}
return "", diag.Errorf("expected one of --dashboard-path, --dashboard-id")
}
func (d *dashboard) resolveFromPath(ctx context.Context, b *bundle.Bundle) (string, diag.Diagnostics) {
w := b.WorkspaceClient()
obj, err := w.Workspace.GetStatusByPath(ctx, d.existingPath)
if err != nil {
if apierr.IsMissing(err) {
return "", diag.Errorf("dashboard %q not found", path.Base(d.existingPath))
}
// Emit a more descriptive error message for legacy dashboards.
if errors.Is(err, apierr.ErrBadRequest) && strings.HasPrefix(err.Error(), "dbsqlDashboard ") {
return "", diag.Diagnostics{
{
Severity: diag.Error,
Summary: fmt.Sprintf("dashboard %q is a legacy dashboard", path.Base(d.existingPath)),
Detail: "" +
"Databricks Asset Bundles work exclusively with AI/BI dashboards.\n" +
"\n" +
"Instructions on how to convert a legacy dashboard to an AI/BI dashboard\n" +
"can be found at: https://docs.databricks.com/en/dashboards/clone-legacy-to-aibi.html.",
},
}
}
return "", diag.FromErr(err)
}
if obj.ObjectType != workspace.ObjectTypeDashboard {
found := strings.ToLower(obj.ObjectType.String())
return "", diag.Diagnostics{
{
Severity: diag.Error,
Summary: fmt.Sprintf("expected a dashboard, found a %s", found),
},
}
}
if obj.ResourceId == "" {
return "", diag.Diagnostics{
{
Severity: diag.Error,
Summary: "expected a non-empty dashboard resource ID",
},
}
}
return obj.ResourceId, nil
}
func (d *dashboard) resolveFromID(ctx context.Context, b *bundle.Bundle) (string, diag.Diagnostics) {
w := b.WorkspaceClient()
obj, err := w.Lakeview.GetByDashboardId(ctx, d.existingID)
if err != nil {
if apierr.IsMissing(err) {
return "", diag.Errorf("dashboard with ID %s not found", d.existingID)
}
return "", diag.FromErr(err)
}
return obj.DashboardId, nil
}
func remarshalJSON(data []byte) ([]byte, error) {
var tmp any
var err error
err = json.Unmarshal(data, &tmp)
if err != nil {
return nil, err
}
// Remarshal the data to ensure its formatting is stable.
// The result will have alphabetically sorted keys and be indented.
// HTML escaping is disabled to retain characters such as &, <, and >.
var buf bytes.Buffer
enc := json.NewEncoder(&buf)
enc.SetIndent("", " ")
enc.SetEscapeHTML(false)
err = enc.Encode(tmp)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func (d *dashboard) saveSerializedDashboard(_ context.Context, b *bundle.Bundle, dashboard *dashboards.Dashboard, filename string) error {
// Unmarshal and remarshal the serialized dashboard to ensure it is formatted correctly.
// The result will have alphabetically sorted keys and be indented.
data, err := remarshalJSON([]byte(dashboard.SerializedDashboard))
if err != nil {
return err
}
// Make sure the output directory exists.
if err := os.MkdirAll(filepath.Dir(filename), 0755); err != nil {
return err
}
// Clean the filename to ensure it is a valid path (and can be used on this OS).
filename = filepath.Clean(filename)
// Attempt to make the path relative to the bundle root.
rel, err := filepath.Rel(b.BundleRootPath, filename)
if err != nil {
rel = filename
}
// Verify that the file does not already exist.
info, err := os.Stat(filename)
if err == nil {
if info.IsDir() {
return fmt.Errorf("%s is a directory", rel)
}
if !d.force {
return fmt.Errorf("%s already exists. Use --force to overwrite", rel)
}
}
fmt.Printf("Writing dashboard to %q\n", rel)
return os.WriteFile(filename, data, 0644)
}
func (d *dashboard) saveConfiguration(ctx context.Context, b *bundle.Bundle, dashboard *dashboards.Dashboard, key string) error {
// Save serialized dashboard definition to the dashboard directory.
dashboardBasename := fmt.Sprintf("%s.lvdash.json", key)
dashboardPath := filepath.Join(d.dashboardDir, dashboardBasename)
err := d.saveSerializedDashboard(ctx, b, dashboard, dashboardPath)
if err != nil {
return err
}
// Synthesize resource configuration.
v, err := generate.ConvertDashboardToValue(dashboard, path.Join(d.relativeDashboardDir, dashboardBasename))
if err != nil {
return err
}
result := map[string]dyn.Value{
"resources": dyn.V(map[string]dyn.Value{
"dashboards": dyn.V(map[string]dyn.Value{
key: v,
}),
}),
}
// Make sure the output directory exists.
if err := os.MkdirAll(d.resourceDir, 0755); err != nil {
return err
}
// Save the configuration to the resource directory.
resourcePath := filepath.Join(d.resourceDir, fmt.Sprintf("%s.dashboard.yml", key))
saver := yamlsaver.NewSaverWithStyle(map[string]yaml.Style{
"display_name": yaml.DoubleQuotedStyle,
})
// Attempt to make the path relative to the bundle root.
rel, err := filepath.Rel(b.BundleRootPath, resourcePath)
if err != nil {
rel = resourcePath
}
fmt.Printf("Writing configuration to %q\n", rel)
err = saver.SaveAsYAML(result, resourcePath, d.force)
if err != nil {
return err
}
return nil
}
func waitForChanges(ctx context.Context, w *databricks.WorkspaceClient, dashboard *dashboards.Dashboard) diag.Diagnostics {
// Compute [time.Time] for the most recent update.
tref, err := time.Parse(time.RFC3339, dashboard.UpdateTime)
if err != nil {
return diag.FromErr(err)
}
for {
obj, err := w.Workspace.GetStatusByPath(ctx, dashboard.Path)
if err != nil {
return diag.FromErr(err)
}
// Compute [time.Time] from timestamp in millis since epoch.
tcur := time.Unix(0, obj.ModifiedAt*int64(time.Millisecond))
if tcur.After(tref) {
break
}
time.Sleep(1 * time.Second)
}
return nil
}
func (d *dashboard) updateDashboardForResource(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
resource, ok := b.Config.Resources.Dashboards[d.resource]
if !ok {
return diag.Errorf("dashboard resource %q is not defined", d.resource)
}
if resource.FilePath == "" {
return diag.Errorf("dashboard resource %q has no file path defined", d.resource)
}
// Resolve the dashboard ID from the resource.
dashboardID := resource.ID
// Overwrite the dashboard at the path referenced from the resource.
dashboardPath := resource.FilePath
w := b.WorkspaceClient()
// Start polling the underlying dashboard for changes.
var etag string
for {
dashboard, err := w.Lakeview.GetByDashboardId(ctx, dashboardID)
if err != nil {
return diag.FromErr(err)
}
if etag != dashboard.Etag {
err = d.saveSerializedDashboard(ctx, b, dashboard, dashboardPath)
if err != nil {
return diag.FromErr(err)
}
}
// Abort if we are not watching for changes.
if !d.watch {
return nil
}
// Update the etag for the next iteration.
etag = dashboard.Etag
// Now poll the workspace API for changes.
// This is much more efficient than polling the dashboard API because it
// includes the entire serialized dashboard whereas we're only interested
// in the last modified time of the dashboard here.
waitForChanges(ctx, w, dashboard)
}
}
func (d *dashboard) generateForExisting(ctx context.Context, b *bundle.Bundle, dashboardID string) diag.Diagnostics {
w := b.WorkspaceClient()
dashboard, err := w.Lakeview.GetByDashboardId(ctx, dashboardID)
if err != nil {
return diag.FromErr(err)
}
key := textutil.NormalizeString(dashboard.DisplayName)
err = d.saveConfiguration(ctx, b, dashboard, key)
if err != nil {
return diag.FromErr(err)
}
return nil
}
func (d *dashboard) initialize(b *bundle.Bundle) diag.Diagnostics {
// Make the paths absolute if they aren't already.
if !filepath.IsAbs(d.resourceDir) {
d.resourceDir = filepath.Join(b.BundleRootPath, d.resourceDir)
}
if !filepath.IsAbs(d.dashboardDir) {
d.dashboardDir = filepath.Join(b.BundleRootPath, d.dashboardDir)
}
// Make sure we know how the dashboard path is relative to the resource path.
rel, err := filepath.Rel(d.resourceDir, d.dashboardDir)
if err != nil {
return diag.FromErr(err)
}
d.relativeDashboardDir = filepath.ToSlash(rel)
return nil
}
func (d *dashboard) runForResource(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
diags := bundle.Apply(ctx, b, bundle.Seq(
phases.Initialize(),
terraform.Interpolate(),
terraform.Write(),
terraform.StatePull(),
terraform.Load(),
))
if diags.HasError() {
return diags
}
return d.updateDashboardForResource(ctx, b)
}
func (d *dashboard) runForExisting(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
// Resolve the ID of the dashboard to generate configuration for.
dashboardID, diags := d.resolveID(ctx, b)
if diags.HasError() {
return diags
}
return d.generateForExisting(ctx, b, dashboardID)
}
func (d *dashboard) RunE(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
b, diags := root.MustConfigureBundle(cmd)
if diags.HasError() {
return diags.Error()
}
diags = d.initialize(b)
if diags.HasError() {
return diags.Error()
}
if d.resource != "" {
diags = d.runForResource(ctx, b)
} else {
diags = d.runForExisting(ctx, b)
}
renderOpts := render.RenderOptions{RenderSummaryTable: false}
err := render.RenderDiagnostics(cmd.OutOrStdout(), b, diags, renderOpts)
if err != nil {
return fmt.Errorf("failed to render output: %w", err)
}
if diags.HasError() {
return root.ErrAlreadyPrinted
}
return nil
}
// filterDashboards returns a filter that only includes dashboards.
func filterDashboards(ref resources.Reference) bool {
return ref.Description.SingularName == "dashboard"
}
// dashboardResourceCompletion executes to autocomplete the argument to the resource flag.
func dashboardResourceCompletion(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
b, diags := root.MustConfigureBundle(cmd)
if err := diags.Error(); err != nil {
cobra.CompErrorln(err.Error())
return nil, cobra.ShellCompDirectiveError
}
if b == nil {
return nil, cobra.ShellCompDirectiveNoFileComp
}
return maps.Keys(resources.Completions(b, filterDashboards)), cobra.ShellCompDirectiveNoFileComp
}
func NewGenerateDashboardCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "dashboard",
Short: "Generate configuration for a dashboard",
}
d := &dashboard{}
// Lookup flags.
cmd.Flags().StringVar(&d.existingPath, "existing-path", "", `workspace path of the dashboard to generate configuration for`)
cmd.Flags().StringVar(&d.existingID, "existing-id", "", `ID of the dashboard to generate configuration for`)
cmd.Flags().StringVar(&d.resource, "resource", "", `resource key of dashboard to watch for changes`)
// Alias lookup flags that include the resource type name.
// Included for symmetry with the other generate commands, but we prefer the shorter flags.
cmd.Flags().StringVar(&d.existingPath, "existing-dashboard-path", "", `workspace path of the dashboard to generate configuration for`)
cmd.Flags().StringVar(&d.existingID, "existing-dashboard-id", "", `ID of the dashboard to generate configuration for`)
cmd.Flags().MarkHidden("existing-dashboard-path")
cmd.Flags().MarkHidden("existing-dashboard-id")
// Output flags.
cmd.Flags().StringVarP(&d.resourceDir, "resource-dir", "d", "./resources", `directory to write the configuration to`)
cmd.Flags().StringVarP(&d.dashboardDir, "dashboard-dir", "s", "./src", `directory to write the dashboard representation to`)
cmd.Flags().BoolVarP(&d.force, "force", "f", false, `force overwrite existing files in the output directory`)
// Exactly one of the lookup flags must be provided.
cmd.MarkFlagsOneRequired(
"existing-path",
"existing-id",
"resource",
)
// Watch flag. This is relevant only in combination with the resource flag.
cmd.Flags().BoolVar(&d.watch, "watch", false, `watch for changes to the dashboard and update the configuration`)
// Make sure the watch flag is only used together with the resource flag.
cmd.MarkFlagsMutuallyExclusive("watch", "existing-path")
cmd.MarkFlagsMutuallyExclusive("watch", "existing-id")
// Completion for the resource flag.
cmd.RegisterFlagCompletionFunc("resource", dashboardResourceCompletion)
cmd.RunE = d.RunE
return cmd
}
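One behavior worth making concrete is the normalization performed by remarshalJSON above; a small worked example (the input is made up):

    in := []byte(`{"b":1,"a":"<&>"}`)
    out, _ := remarshalJSON(in)
    // out has alphabetically sorted keys, two-space indentation, and unescaped HTML characters:
    // {
    //   "a": "<&>",
    //   "b": 1
    // }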

View File

@ -0,0 +1,182 @@
package generate
import (
"context"
"os"
"path/filepath"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestDashboard_ErrorOnLegacyDashboard(t *testing.T) {
// Response to a GetStatus request on a path pointing to a legacy dashboard.
//
// < HTTP/2.0 400 Bad Request
// < {
// < "error_code": "BAD_REQUEST",
// < "message": "dbsqlDashboard is not user-facing."
// < }
d := dashboard{
existingPath: "/path/to/legacy dashboard",
}
m := mocks.NewMockWorkspaceClient(t)
w := m.GetMockWorkspaceAPI()
w.On("GetStatusByPath", mock.Anything, "/path/to/legacy dashboard").Return(nil, &apierr.APIError{
StatusCode: 400,
ErrorCode: "BAD_REQUEST",
Message: "dbsqlDashboard is not user-facing.",
})
ctx := context.Background()
b := &bundle.Bundle{}
b.SetWorkpaceClient(m.WorkspaceClient)
_, diags := d.resolveID(ctx, b)
require.Len(t, diags, 1)
assert.Equal(t, diags[0].Summary, "dashboard \"legacy dashboard\" is a legacy dashboard")
}
func TestDashboard_ExistingID_Nominal(t *testing.T) {
root := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: root,
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().GetByDashboardId(mock.Anything, "f00dcafe").Return(&dashboards.Dashboard{
DashboardId: "f00dcafe",
DisplayName: "This is a test dashboard",
SerializedDashboard: `{"pages":[{"displayName":"New Page","layout":[],"name":"12345678"}]}`,
WarehouseId: "w4r3h0us3",
}, nil)
ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx)
cmd.Flag("existing-id").Value.Set("f00dcafe")
err := cmd.RunE(cmd, []string{})
require.NoError(t, err)
// Assert the contents of the generated configuration
data, err := os.ReadFile(filepath.Join(root, "resources", "this_is_a_test_dashboard.dashboard.yml"))
require.NoError(t, err)
assert.Equal(t, `resources:
dashboards:
this_is_a_test_dashboard:
display_name: "This is a test dashboard"
warehouse_id: w4r3h0us3
file_path: ../src/this_is_a_test_dashboard.lvdash.json
`, string(data))
data, err = os.ReadFile(filepath.Join(root, "src", "this_is_a_test_dashboard.lvdash.json"))
require.NoError(t, err)
assert.JSONEq(t, `{"pages":[{"displayName":"New Page","layout":[],"name":"12345678"}]}`, string(data))
}
func TestDashboard_ExistingID_NotFound(t *testing.T) {
root := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: root,
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().GetByDashboardId(mock.Anything, "f00dcafe").Return(nil, &apierr.APIError{
StatusCode: 404,
})
ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx)
cmd.Flag("existing-id").Value.Set("f00dcafe")
err := cmd.RunE(cmd, []string{})
require.Error(t, err)
}
func TestDashboard_ExistingPath_Nominal(t *testing.T) {
root := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: root,
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
workspaceAPI := m.GetMockWorkspaceAPI()
workspaceAPI.EXPECT().GetStatusByPath(mock.Anything, "/path/to/dashboard").Return(&workspace.ObjectInfo{
ObjectType: workspace.ObjectTypeDashboard,
ResourceId: "f00dcafe",
}, nil)
dashboardsAPI := m.GetMockLakeviewAPI()
dashboardsAPI.EXPECT().GetByDashboardId(mock.Anything, "f00dcafe").Return(&dashboards.Dashboard{
DashboardId: "f00dcafe",
DisplayName: "This is a test dashboard",
SerializedDashboard: `{"pages":[{"displayName":"New Page","layout":[],"name":"12345678"}]}`,
WarehouseId: "w4r3h0us3",
}, nil)
ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx)
cmd.Flag("existing-path").Value.Set("/path/to/dashboard")
err := cmd.RunE(cmd, []string{})
require.NoError(t, err)
// Assert the contents of the generated configuration
data, err := os.ReadFile(filepath.Join(root, "resources", "this_is_a_test_dashboard.dashboard.yml"))
require.NoError(t, err)
assert.Equal(t, `resources:
dashboards:
this_is_a_test_dashboard:
display_name: "This is a test dashboard"
warehouse_id: w4r3h0us3
file_path: ../src/this_is_a_test_dashboard.lvdash.json
`, string(data))
data, err = os.ReadFile(filepath.Join(root, "src", "this_is_a_test_dashboard.lvdash.json"))
require.NoError(t, err)
assert.JSONEq(t, `{"pages":[{"displayName":"New Page","layout":[],"name":"12345678"}]}`, string(data))
}
func TestDashboard_ExistingPath_NotFound(t *testing.T) {
root := t.TempDir()
b := &bundle.Bundle{
BundleRootPath: root,
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
workspaceAPI := m.GetMockWorkspaceAPI()
workspaceAPI.EXPECT().GetStatusByPath(mock.Anything, "/path/to/dashboard").Return(nil, &apierr.APIError{
StatusCode: 404,
})
ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx)
cmd.Flag("existing-path").Value.Set("/path/to/dashboard")
err := cmd.RunE(cmd, []string{})
require.Error(t, err)
}

View File

@@ -71,7 +71,7 @@ func (n *downloader) markNotebookForDownload(ctx context.Context, notebookPath *
 	ext := notebook.GetExtensionByLanguage(info)
-	filename := path.Base(*notebookPath) + string(ext)
+	filename := path.Base(*notebookPath) + ext
 	targetPath := filepath.Join(n.sourceDir, filename)
 	n.files[targetPath] = *notebookPath

cmd/bundle/open.go Normal file (144 lines)
View File

@@ -0,0 +1,144 @@
package bundle
import (
"context"
"errors"
"fmt"
"os"
"path/filepath"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
"github.com/pkg/browser"
)
func promptOpenArgument(ctx context.Context, b *bundle.Bundle) (string, error) {
// Compute map of "Human readable name of resource" -> "resource key".
inv := make(map[string]string)
for k, ref := range resources.Completions(b) {
title := fmt.Sprintf("%s: %s", ref.Description.SingularTitle, ref.Resource.GetName())
inv[title] = k
}
key, err := cmdio.Select(ctx, inv, "Resource to open")
if err != nil {
return "", err
}
return key, nil
}
func resolveOpenArgument(ctx context.Context, b *bundle.Bundle, args []string) (string, error) {
// If no arguments are specified, prompt the user to select the resource to open.
if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
return promptOpenArgument(ctx, b)
}
if len(args) < 1 {
return "", fmt.Errorf("expected a KEY of the resource to open")
}
return args[0], nil
}
func newOpenCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "open",
Short: "Open a resource in the browser",
Args: root.MaximumNArgs(1),
}
var forcePull bool
cmd.Flags().BoolVar(&forcePull, "force-pull", false, "Skip local cache and load the state from the remote workspace")
cmd.RunE = func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
b, diags := utils.ConfigureBundleWithVariables(cmd)
if err := diags.Error(); err != nil {
return diags.Error()
}
diags = bundle.Apply(ctx, b, phases.Initialize())
if err := diags.Error(); err != nil {
return err
}
arg, err := resolveOpenArgument(ctx, b, args)
if err != nil {
return err
}
cacheDir, err := terraform.Dir(ctx, b)
if err != nil {
return err
}
_, stateFileErr := os.Stat(filepath.Join(cacheDir, terraform.TerraformStateFileName))
_, configFileErr := os.Stat(filepath.Join(cacheDir, terraform.TerraformConfigFileName))
noCache := errors.Is(stateFileErr, os.ErrNotExist) || errors.Is(configFileErr, os.ErrNotExist)
if forcePull || noCache {
diags = bundle.Apply(ctx, b, bundle.Seq(
terraform.StatePull(),
terraform.Interpolate(),
terraform.Write(),
))
if err := diags.Error(); err != nil {
return err
}
}
diags = bundle.Apply(ctx, b, bundle.Seq(
terraform.Load(),
mutator.InitializeURLs(),
))
if err := diags.Error(); err != nil {
return err
}
// Locate resource to open.
ref, err := resources.Lookup(b, arg)
if err != nil {
return err
}
// Confirm that the resource has a URL.
url := ref.Resource.GetURL()
if url == "" {
return fmt.Errorf("resource does not have a URL associated with it (has it been deployed?)")
}
return browser.OpenURL(url)
}
cmd.ValidArgsFunction = func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
b, diags := root.MustConfigureBundle(cmd)
if err := diags.Error(); err != nil {
cobra.CompErrorln(err.Error())
return nil, cobra.ShellCompDirectiveError
}
// No completion in the absence of a configured bundle.
if b == nil {
return nil, cobra.ShellCompDirectiveNoFileComp
}
if len(args) == 0 {
completions := resources.Completions(b)
return maps.Keys(completions), cobra.ShellCompDirectiveNoFileComp
} else {
return nil, cobra.ShellCompDirectiveNoFileComp
}
}
return cmd
}
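The force-pull logic above only hits the workspace when the local Terraform state or config file is missing. A minimal sketch of that staleness check in isolation (the cache path and file names are illustrative; the real command reads them from the terraform package's constants):

package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cacheDir := "/tmp/bundle-cache" // hypothetical cache location
	_, stateErr := os.Stat(filepath.Join(cacheDir, "terraform.tfstate"))
	_, configErr := os.Stat(filepath.Join(cacheDir, "bundle.tf.json"))

	// Treat the cache as missing if either file does not exist.
	noCache := errors.Is(stateErr, os.ErrNotExist) || errors.Is(configErr, os.ErrNotExist)
	fmt.Println("pull state from workspace:", noCache)
}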

View File

@@ -1,20 +1,69 @@
 package bundle

 import (
+	"context"
 	"encoding/json"
 	"fmt"

 	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/bundle/deploy/terraform"
 	"github.com/databricks/cli/bundle/phases"
+	"github.com/databricks/cli/bundle/resources"
 	"github.com/databricks/cli/bundle/run"
+	"github.com/databricks/cli/bundle/run/output"
 	"github.com/databricks/cli/cmd/bundle/utils"
 	"github.com/databricks/cli/cmd/root"
 	"github.com/databricks/cli/libs/cmdio"
 	"github.com/databricks/cli/libs/flags"
 	"github.com/spf13/cobra"
+	"golang.org/x/exp/maps"
 )

+func promptRunArgument(ctx context.Context, b *bundle.Bundle) (string, error) {
+	// Compute map of "Human readable name of resource" -> "resource key".
+	inv := make(map[string]string)
+	for k, ref := range resources.Completions(b, run.IsRunnable) {
+		title := fmt.Sprintf("%s: %s", ref.Description.SingularTitle, ref.Resource.GetName())
+		inv[title] = k
+	}
+
+	key, err := cmdio.Select(ctx, inv, "Resource to run")
+	if err != nil {
+		return "", err
+	}
+
+	return key, nil
+}
+
+func resolveRunArgument(ctx context.Context, b *bundle.Bundle, args []string) (string, error) {
+	// If no arguments are specified, prompt the user to select something to run.
+	if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
+		return promptRunArgument(ctx, b)
+	}
+
+	if len(args) < 1 {
+		return "", fmt.Errorf("expected a KEY of the resource to run")
+	}
+
+	return args[0], nil
+}
+
+func keyToRunner(b *bundle.Bundle, arg string) (run.Runner, error) {
+	// Locate the resource to run.
+	ref, err := resources.Lookup(b, arg, run.IsRunnable)
+	if err != nil {
+		return nil, err
+	}
+
+	// Convert the resource to a runnable resource.
+	runner, err := run.ToRunner(b, ref)
+	if err != nil {
+		return nil, err
+	}
+
+	return runner, nil
+}
+
 func newRunCommand() *cobra.Command {
 	cmd := &cobra.Command{
 		Use:   "run [flags] KEY",

@@ -60,23 +109,10 @@ task or a Python wheel task, the second example applies.
 			return err
 		}

-		// If no arguments are specified, prompt the user to select something to run.
-		if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
-			// Invert completions from KEY -> NAME, to NAME -> KEY.
-			inv := make(map[string]string)
-			for k, v := range run.ResourceCompletionMap(b) {
-				inv[v] = k
-			}
-			id, err := cmdio.Select(ctx, inv, "Resource to run")
-			if err != nil {
-				return err
-			}
-			args = append(args, id)
-		}
-
-		if len(args) < 1 {
-			return fmt.Errorf("expected a KEY of the resource to run")
-		}
+		arg, err := resolveRunArgument(ctx, b, args)
+		if err != nil {
+			return err
+		}

 		diags = bundle.Apply(ctx, b, bundle.Seq(
 			terraform.Interpolate(),

@@ -88,7 +124,7 @@ task or a Python wheel task, the second example applies.
 			return err
 		}

-		runner, err := run.Find(b, args[0])
+		runner, err := keyToRunner(b, arg)
 		if err != nil {
 			return err
 		}

@@ -100,19 +136,16 @@ task or a Python wheel task, the second example applies.
 		}
 		runOptions.NoWait = noWait

+		var output output.RunOutput
 		if restart {
-			s := cmdio.Spinner(ctx)
-			s <- "Cancelling all runs"
-			err := runner.Cancel(ctx)
-			close(s)
-			if err != nil {
-				return err
-			}
-		}
-
-		output, err := runner.Run(ctx, &runOptions)
+			output, err = runner.Restart(ctx, &runOptions)
+		} else {
+			output, err = runner.Run(ctx, &runOptions)
+		}
 		if err != nil {
 			return err
 		}
+
 		if output != nil {
 			switch root.OutputType(cmd) {
 			case flags.OutputText:

@@ -148,10 +181,11 @@ task or a Python wheel task, the second example applies.
 		}
 		if len(args) == 0 {
-			return run.ResourceCompletions(b), cobra.ShellCompDirectiveNoFileComp
+			completions := resources.Completions(b, run.IsRunnable)
+			return maps.Keys(completions), cobra.ShellCompDirectiveNoFileComp
 		} else {
 			// If we know the resource to run, we can complete additional positional arguments.
-			runner, err := run.Find(b, args[0])
+			runner, err := keyToRunner(b, args[0])
 			if err != nil {
 				return nil, cobra.ShellCompDirectiveError
 			}

View File

@@ -1,7 +1,9 @@
 package bundle

 import (
+	"context"
 	"fmt"
+	"io"
 	"time"

 	"github.com/databricks/cli/bundle"

@@ -9,6 +11,7 @@ import (
 	"github.com/databricks/cli/bundle/phases"
 	"github.com/databricks/cli/cmd/bundle/utils"
 	"github.com/databricks/cli/cmd/root"
+	"github.com/databricks/cli/libs/flags"
 	"github.com/databricks/cli/libs/log"
 	"github.com/databricks/cli/libs/sync"
 	"github.com/spf13/cobra"

@@ -18,6 +21,7 @@ type syncFlags struct {
 	interval time.Duration
 	full     bool
 	watch    bool
+	output   flags.Output
 }

 func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle) (*sync.SyncOptions, error) {

@@ -26,6 +30,21 @@ func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle)
 		return nil, fmt.Errorf("cannot get sync options: %w", err)
 	}

+	if f.output != "" {
+		var outputFunc func(context.Context, <-chan sync.Event, io.Writer)
+		switch f.output {
+		case flags.OutputText:
+			outputFunc = sync.TextOutput
+		case flags.OutputJSON:
+			outputFunc = sync.JsonOutput
+		}
+		if outputFunc != nil {
+			opts.OutputHandler = func(ctx context.Context, c <-chan sync.Event) {
+				outputFunc(ctx, c, cmd.OutOrStdout())
+			}
+		}
+	}
+
 	opts.Full = f.full
 	opts.PollInterval = f.interval
 	return opts, nil

@@ -42,6 +61,7 @@ func newSyncCommand() *cobra.Command {
 	cmd.Flags().DurationVar(&f.interval, "interval", 1*time.Second, "file system polling interval (for --watch)")
 	cmd.Flags().BoolVar(&f.full, "full", false, "perform full synchronization (default is incremental)")
 	cmd.Flags().BoolVar(&f.watch, "watch", false, "watch local file system for changes")
+	cmd.Flags().Var(&f.output, "output", "type of the output format")

 	cmd.RunE = func(cmd *cobra.Command, args []string) error {
 		ctx := cmd.Context()

@@ -65,6 +85,7 @@ func newSyncCommand() *cobra.Command {
 		if err != nil {
 			return err
 		}
+		defer s.Close()

 		log.Infof(ctx, "Remote file sync location: %v", opts.RemotePath)
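The --output handling above adapts a three-argument formatter to the two-argument handler the sync library expects by capturing the writer in a closure. A generic sketch of that adapter pattern, with a hypothetical event type standing in for sync.Event:

package main

import (
	"context"
	"fmt"
	"io"
	"os"
)

type event struct{ Path string }

// textOutput formats events to an arbitrary writer.
func textOutput(ctx context.Context, ch <-chan event, w io.Writer) {
	for e := range ch {
		fmt.Fprintln(w, "synced:", e.Path)
	}
}

func main() {
	ch := make(chan event, 2)
	ch <- event{Path: "src/notebook.py"}
	ch <- event{Path: "src/query.sql"}
	close(ch)

	// The handler only takes (ctx, channel); the writer is captured here.
	handler := func(ctx context.Context, c <-chan event) {
		textOutput(ctx, c, os.Stdout)
	}
	handler(context.Background(), ch)
}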

View File

@@ -28,9 +28,6 @@ func New() *cobra.Command {
 		Annotations: map[string]string{
 			"package": "apps",
 		},
-
-		// This service is being previewed; hide from help output.
-		Hidden: true,
 	}

 	// Add methods

View File

@@ -0,0 +1,220 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package disable_legacy_dbfs
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "disable-legacy-dbfs",
Short: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as well as creation of new mounts).`,
Long: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as
well as creation of new mounts). When the setting is off, all DBFS
functionality is enabled`,
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteDisableLegacyDbfsRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete the disable legacy DBFS setting.`
cmd.Long = `Delete the disable legacy DBFS setting.
Deletes the disable legacy DBFS setting for a workspace, reverting back to the
default.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*settings.GetDisableLegacyDbfsRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq settings.GetDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&getReq.Etag, "etag", getReq.Etag, `etag used for versioning.`)
cmd.Use = "get"
cmd.Short = `Get the disable legacy DBFS setting.`
cmd.Long = `Get the disable legacy DBFS setting.
Gets the disable legacy DBFS setting.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*settings.UpdateDisableLegacyDbfsRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq settings.UpdateDisableLegacyDbfsRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Use = "update"
cmd.Short = `Update the disable legacy DBFS setting.`
cmd.Long = `Update the disable legacy DBFS setting.
Updates the disable legacy DBFS setting for the workspace.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := w.Settings.DisableLegacyDbfs().Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service DisableLegacyDbfs

View File

@@ -1557,6 +1557,7 @@ func newSubmit() *cobra.Command {
 	cmd.Flags().Var(&submitJson, "json", `either inline JSON string or @path/to/file.json with request body`)
 	// TODO: array: access_control_list
+	cmd.Flags().StringVar(&submitReq.BudgetPolicyId, "budget-policy-id", submitReq.BudgetPolicyId, `The user specified id of the budget policy to use for this one-time run.`)
 	// TODO: complex arg: email_notifications
 	// TODO: array: environments
 	// TODO: complex arg: git_source

View File

@@ -9,6 +9,7 @@ import (
 	compliance_security_profile "github.com/databricks/cli/cmd/workspace/compliance-security-profile"
 	default_namespace "github.com/databricks/cli/cmd/workspace/default-namespace"
 	disable_legacy_access "github.com/databricks/cli/cmd/workspace/disable-legacy-access"
+	disable_legacy_dbfs "github.com/databricks/cli/cmd/workspace/disable-legacy-dbfs"
 	enhanced_security_monitoring "github.com/databricks/cli/cmd/workspace/enhanced-security-monitoring"
 	restrict_workspace_admins "github.com/databricks/cli/cmd/workspace/restrict-workspace-admins"
 )

@@ -33,6 +34,7 @@ func New() *cobra.Command {
 	cmd.AddCommand(compliance_security_profile.New())
 	cmd.AddCommand(default_namespace.New())
 	cmd.AddCommand(disable_legacy_access.New())
+	cmd.AddCommand(disable_legacy_dbfs.New())
 	cmd.AddCommand(enhanced_security_monitoring.New())
 	cmd.AddCommand(restrict_workspace_admins.New())

View File

@@ -48,7 +48,7 @@ func (opts exportDirOptions) callback(ctx context.Context, workspaceFiler filer.
 			return err
 		}
 		objectInfo := info.Sys().(workspace.ObjectInfo)
-		targetPath += string(notebook.GetExtensionByLanguage(&objectInfo))
+		targetPath += notebook.GetExtensionByLanguage(&objectInfo)

 		// Skip file if a file already exists in path.
 		// os.Stat returns a fs.ErrNotExist if a file does not exist at path.

go.mod (4 changes)
View File

@@ -7,8 +7,8 @@ toolchain go1.22.7
 require (
 	github.com/Masterminds/semver/v3 v3.3.0 // MIT
 	github.com/briandowns/spinner v1.23.1 // Apache 2.0
-	github.com/databricks/databricks-sdk-go v0.48.0 // Apache 2.0
-	github.com/fatih/color v1.17.0 // MIT
+	github.com/databricks/databricks-sdk-go v0.49.0 // Apache 2.0
+	github.com/fatih/color v1.18.0 // MIT
 	github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
 	github.com/google/uuid v1.6.0 // BSD-3-Clause
 	github.com/hashicorp/go-version v1.7.0 // MPL 2.0

go.sum generated (8 changes)
View File

@@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
 github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
 github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
 github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.48.0 h1:46KtsnRo+FGhC3izUXbpL0PXBNomvsdignYDhJZlm9s=
-github.com/databricks/databricks-sdk-go v0.48.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
+github.com/databricks/databricks-sdk-go v0.49.0 h1:VBTeZZMLIuBSM4kxOCfUcW9z4FUQZY2QeNRD5qm9FUQ=
+github.com/databricks/databricks-sdk-go v0.49.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=

@@ -44,8 +44,8 @@ github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.m
 github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
 github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
 github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
-github.com/fatih/color v1.17.0 h1:GlRw1BRJxkpqUCBKzKOw098ed57fEsKeNjpTe3cSjK4=
-github.com/fatih/color v1.17.0/go.mod h1:YZ7TlrGPkiz6ku9fK3TLD/pl3CpsiFyu8N92HLgmosI=
+github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
+github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
 github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
 github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
 github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
View File

@@ -0,0 +1,12 @@
{
"properties": {
"unique_id": {
"type": "string",
"description": "Unique ID for job name"
},
"warehouse_id": {
"type": "string",
"description": "The SQL warehouse ID to use for the dashboard"
}
}
}

View File

@@ -0,0 +1,34 @@
{
"pages": [
{
"displayName": "New Page",
"layout": [
{
"position": {
"height": 2,
"width": 6,
"x": 0,
"y": 0
},
"widget": {
"name": "82eb9107",
"textbox_spec": "# I'm a title"
}
},
{
"position": {
"height": 2,
"width": 6,
"x": 0,
"y": 2
},
"widget": {
"name": "ffa6de4f",
"textbox_spec": "Text"
}
}
],
"name": "fdd21a3c"
}
]
}

View File

@@ -0,0 +1,12 @@
bundle:
name: dashboards
workspace:
root_path: "~/.bundle/{{.unique_id}}"
resources:
dashboards:
file_reference:
display_name: test-dashboard-{{.unique_id}}
file_path: ./dashboard.lvdash.json
warehouse_id: {{.warehouse_id}}

View File

@@ -0,0 +1,63 @@
package bundle
import (
"fmt"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAccDashboards(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
warehouseID := acc.GetEnvOrSkipTest(t, "TEST_DEFAULT_WAREHOUSE_ID")
uniqueID := uuid.New().String()
root, err := initTestTemplate(t, ctx, "dashboards", map[string]any{
"unique_id": uniqueID,
"warehouse_id": warehouseID,
})
require.NoError(t, err)
t.Cleanup(func() {
err = destroyBundle(t, ctx, root)
require.NoError(t, err)
})
err = deployBundle(t, ctx, root)
require.NoError(t, err)
// Load bundle configuration by running the validate command.
b := unmarshalConfig(t, mustValidateBundle(t, ctx, root))
// Assert that the dashboard exists at the expected path and is, indeed, a dashboard.
oi, err := wt.W.Workspace.GetStatusByPath(ctx, fmt.Sprintf("%s/test-dashboard-%s.lvdash.json", b.Config.Workspace.ResourcePath, uniqueID))
require.NoError(t, err)
assert.EqualValues(t, workspace.ObjectTypeDashboard, oi.ObjectType)
// Load the dashboard by its ID and confirm its display name.
dashboard, err := wt.W.Lakeview.GetByDashboardId(ctx, oi.ResourceId)
require.NoError(t, err)
assert.Equal(t, fmt.Sprintf("test-dashboard-%s", uniqueID), dashboard.DisplayName)
// Make an out of band modification to the dashboard and confirm that it is detected.
_, err = wt.W.Lakeview.Update(ctx, dashboards.UpdateDashboardRequest{
DashboardId: oi.ResourceId,
SerializedDashboard: dashboard.SerializedDashboard,
})
require.NoError(t, err)
// Try to redeploy the bundle and confirm that the out of band modification is detected.
stdout, _, err := deployBundleWithArgs(t, ctx, root)
require.Error(t, err)
assert.Contains(t, stdout, `Error: dashboard "file_reference" has been modified remotely`+"\n")
// Redeploy the bundle with the --force flag and confirm that the out of band modification is ignored.
_, stderr, err := deployBundleWithArgs(t, ctx, root, "--force")
require.NoError(t, err)
assert.Contains(t, stderr, `Deployment complete!`+"\n")
}

View File

@@ -11,6 +11,7 @@ import (
 	"strings"
 	"testing"

+	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/cmd/root"
 	"github.com/databricks/cli/internal"
 	"github.com/databricks/cli/libs/cmdio"

@@ -66,6 +67,19 @@ func validateBundle(t *testing.T, ctx context.Context, path string) ([]byte, err
 	return stdout.Bytes(), err
 }

+func mustValidateBundle(t *testing.T, ctx context.Context, path string) []byte {
+	data, err := validateBundle(t, ctx, path)
+	require.NoError(t, err)
+	return data
+}
+
+func unmarshalConfig(t *testing.T, data []byte) *bundle.Bundle {
+	bundle := &bundle.Bundle{}
+	err := json.Unmarshal(data, &bundle.Config)
+	require.NoError(t, err)
+	return bundle
+}
+
 func deployBundle(t *testing.T, ctx context.Context, path string) error {
 	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
 	c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--auto-approve")

@@ -73,6 +87,14 @@ func deployBundle(t *testing.T, ctx context.Context, path string) error {
 	return err
 }

+func deployBundleWithArgs(t *testing.T, ctx context.Context, path string, args ...string) (string, string, error) {
+	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
+	args = append([]string{"bundle", "deploy"}, args...)
+	c := internal.NewCobraTestRunnerWithContext(t, ctx, args...)
+	stdout, stderr, err := c.Run()
+	return stdout.String(), stderr.String(), err
+}
+
 func deployBundleWithFlags(t *testing.T, ctx context.Context, path string, flags []string) error {
 	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
 	args := []string{"bundle", "deploy", "--force-lock"}

View File

@@ -361,104 +361,13 @@ func TestAccFilerReadDir(t *testing.T) {
 	}
 }

-func TestAccFilerWorkspaceNotebookConflict(t *testing.T) {
+func TestAccFilerWorkspaceNotebook(t *testing.T) {
 	t.Parallel()
-	f, _ := setupWsfsFiler(t)
 	ctx := context.Background()
 	var err error

-	for _, tc := range []struct {
-		name           string
-		nameWithoutExt string
-		content1       string
-		expected1      string
-		content2       string
-	}{
-		{
-			name:           "pyNb.py",
-			nameWithoutExt: "pyNb",
-			content1:       "# Databricks notebook source\nprint('first upload')",
-			expected1:      "# Databricks notebook source\nprint('first upload')",
-			content2:       "# Databricks notebook source\nprint('second upload')",
-		},
-		{
-			name:           "rNb.r",
-			nameWithoutExt: "rNb",
-			content1:       "# Databricks notebook source\nprint('first upload')",
-			expected1:      "# Databricks notebook source\nprint('first upload')",
-			content2:       "# Databricks notebook source\nprint('second upload')",
-		},
-		{
-			name:           "sqlNb.sql",
-			nameWithoutExt: "sqlNb",
-			content1:       "-- Databricks notebook source\n SELECT \"first upload\"",
-			expected1:      "-- Databricks notebook source\n SELECT \"first upload\"",
-			content2:       "-- Databricks notebook source\n SELECT \"second upload\"",
-		},
-		{
-			name:           "scalaNb.scala",
-			nameWithoutExt: "scalaNb",
-			content1:       "// Databricks notebook source\n println(\"first upload\")",
-			expected1:      "// Databricks notebook source\n println(\"first upload\")",
-			content2:       "// Databricks notebook source\n println(\"second upload\")",
-		},
-		{
-			name:           "pythonJupyterNb.ipynb",
-			nameWithoutExt: "pythonJupyterNb",
-			content1:       readFile(t, "testdata/notebooks/py1.ipynb"),
-			expected1:      "# Databricks notebook source\nprint(1)",
-			content2:       readFile(t, "testdata/notebooks/py2.ipynb"),
-		},
-		{
-			name:           "rJupyterNb.ipynb",
-			nameWithoutExt: "rJupyterNb",
-			content1:       readFile(t, "testdata/notebooks/r1.ipynb"),
-			expected1:      "# Databricks notebook source\nprint(1)",
-			content2:       readFile(t, "testdata/notebooks/r2.ipynb"),
-		},
-		{
-			name:           "scalaJupyterNb.ipynb",
-			nameWithoutExt: "scalaJupyterNb",
-			content1:       readFile(t, "testdata/notebooks/scala1.ipynb"),
-			expected1:      "// Databricks notebook source\nprintln(1)",
-			content2:       readFile(t, "testdata/notebooks/scala2.ipynb"),
-		},
-		{
-			name:           "sqlJupyterNotebook.ipynb",
-			nameWithoutExt: "sqlJupyterNotebook",
-			content1:       readFile(t, "testdata/notebooks/sql1.ipynb"),
-			expected1:      "-- Databricks notebook source\nselect 1",
-			content2:       readFile(t, "testdata/notebooks/sql2.ipynb"),
-		},
-	} {
-		t.Run(tc.name, func(t *testing.T) {
-			t.Parallel()
-
-			// Upload the notebook
-			err = f.Write(ctx, tc.name, strings.NewReader(tc.content1))
-			require.NoError(t, err)
-
-			// Assert contents after initial upload
-			filerTest{t, f}.assertContents(ctx, tc.nameWithoutExt, tc.expected1)
-
-			// Assert uploading a second time fails due to overwrite mode missing
-			err = f.Write(ctx, tc.name, strings.NewReader(tc.content2))
-			assert.ErrorIs(t, err, fs.ErrExist)
-			assert.Regexp(t, regexp.MustCompile(`file already exists: .*/`+tc.nameWithoutExt+`$`), err.Error())
-		})
-	}
-}
-
-func TestAccFilerWorkspaceNotebookWithOverwriteFlag(t *testing.T) {
-	t.Parallel()
-	f, _ := setupWsfsFiler(t)
-	ctx := context.Background()
-	var err error
-
-	for _, tcases := range []struct {
+	tcases := []struct {
 		name           string
 		nameWithoutExt string
 		content1       string

@@ -530,25 +439,36 @@ func TestAccFilerWorkspaceNotebookWithOverwriteFlag(t *testing.T) {
 			content2:       readFile(t, "testdata/notebooks/sql2.ipynb"),
 			expected2:      "-- Databricks notebook source\nselect 2",
 		},
-	} {
-		t.Run(tcases.name, func(t *testing.T) {
+	}
+
+	for _, tc := range tcases {
+		f, _ := setupWsfsFiler(t)
+		t.Run(tc.name, func(t *testing.T) {
 			t.Parallel()

 			// Upload the notebook
-			err = f.Write(ctx, tcases.name, strings.NewReader(tcases.content1))
+			err = f.Write(ctx, tc.name, strings.NewReader(tc.content1))
 			require.NoError(t, err)

-			// Assert contents after initial upload
-			filerTest{t, f}.assertContents(ctx, tcases.nameWithoutExt, tcases.expected1)
+			// Assert contents after initial upload. Note that we expect the content
+			// for jupyter notebooks to be of type source because the workspace files
+			// client always uses the source format to read notebooks from the workspace.
+			filerTest{t, f}.assertContents(ctx, tc.nameWithoutExt, tc.expected1)

-			// Upload the notebook a second time with overwrite flag
-			err = f.Write(ctx, tcases.name, strings.NewReader(tcases.content2), filer.OverwriteIfExists)
+			// Assert uploading a second time fails due to overwrite mode missing
+			err = f.Write(ctx, tc.name, strings.NewReader(tc.content2))
+			assert.ErrorIs(t, err, fs.ErrExist)
+			assert.Regexp(t, regexp.MustCompile(`file already exists: .*/`+tc.nameWithoutExt+`$`), err.Error())
+
+			// Try uploading the notebook again with overwrite flag. This time it should succeed.
+			err = f.Write(ctx, tc.name, strings.NewReader(tc.content2), filer.OverwriteIfExists)
 			require.NoError(t, err)

 			// Assert contents after second upload
-			filerTest{t, f}.assertContents(ctx, tcases.nameWithoutExt, tcases.expected2)
+			filerTest{t, f}.assertContents(ctx, tc.nameWithoutExt, tc.expected2)
 		})
 	}
 }

 func TestAccFilerWorkspaceFilesExtensionsReadDir(t *testing.T) {

@@ -565,7 +485,6 @@ func TestAccFilerWorkspaceFilesExtensionsReadDir(t *testing.T) {
 		{"foo.scala", "println('foo')"},
 		{"foo.sql", "SELECT 'foo'"},
 		{"py1.ipynb", readFile(t, "testdata/notebooks/py1.ipynb")},
-		{"py2.ipynb", readFile(t, "testdata/notebooks/py2.ipynb")},
 		{"r1.ipynb", readFile(t, "testdata/notebooks/r1.ipynb")},
 		{"pyNb.py", "# Databricks notebook source\nprint('first upload'))"},
 		{"rNb.r", "# Databricks notebook source\nprint('first upload'))"},

@@ -607,7 +526,6 @@ func TestAccFilerWorkspaceFilesExtensionsReadDir(t *testing.T) {
 		"foo.scala",
 		"foo.sql",
 		"py1.ipynb",
-		"py2.ipynb",
 		"pyNb.py",
 		"r1.ipynb",
 		"rNb.r",

View File

@@ -111,3 +111,7 @@ func PanicsWithError(t assert.TestingT, errString string, f func(), msgAndArgs .
 func NotPanics(t assert.TestingT, f func(), msgAndArgs ...interface{}) bool {
 	return assert.NotPanics(t, f, msgAndArgs...)
 }
+
+func JSONEq(t assert.TestingT, expected string, actual string, msgAndArgs ...interface{}) bool {
+	return assert.JSONEq(t, expected, actual, msgAndArgs...)
+}

View File

@@ -0,0 +1,39 @@
package jsonsaver
import (
"bytes"
"encoding/json"
)
// The encoder type encapsulates a [json.Encoder] and its target buffer.
// Escaping of HTML characters in the output is disabled.
type encoder struct {
*json.Encoder
*bytes.Buffer
}
func newEncoder() encoder {
var buf bytes.Buffer
enc := json.NewEncoder(&buf)
// By default, json.Encoder escapes HTML characters, converting symbols like '<' to '\u003c'.
// This behavior helps prevent XSS attacks when JSON is embedded within HTML.
// However, we disable this feature since we're not dealing with HTML context.
// Keeping the escapes enabled would result in unnecessary differences when processing JSON payloads
// that already contain escaped characters.
enc.SetEscapeHTML(false)
return encoder{enc, &buf}
}
func marshalNoEscape(v any) ([]byte, error) {
enc := newEncoder()
err := enc.Encode(v)
return enc.Bytes(), err
}
func marshalIndentNoEscape(v any, prefix, indent string) ([]byte, error) {
enc := newEncoder()
enc.SetIndent(prefix, indent)
err := enc.Encode(v)
return enc.Bytes(), err
}
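To see what SetEscapeHTML(false) buys, compare the two encodings of the same string; a self-contained sketch using only the standard library:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	v := "1 < 2"

	// json.Marshal escapes HTML characters by default.
	std, _ := json.Marshal(v)
	fmt.Println(string(std)) // "1 \u003c 2"

	// An encoder with HTML escaping disabled leaves '<' as-is.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	_ = enc.Encode(v)
	fmt.Print(buf.String()) // "1 < 2" (plus a trailing newline)
}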

View File

@@ -0,0 +1,41 @@
package jsonsaver
import (
"testing"
assert "github.com/databricks/cli/libs/dyn/dynassert"
)
func TestEncoder_MarshalNoEscape(t *testing.T) {
out, err := marshalNoEscape("1 < 2")
if !assert.NoError(t, err) {
return
}
// Confirm the output.
assert.JSONEq(t, `"1 < 2"`, string(out))
// Confirm that HTML escaping is disabled.
assert.NotContains(t, string(out), "\\u003c")
// Confirm that the encoder writes a trailing newline.
assert.Contains(t, string(out), "\n")
}
func TestEncoder_MarshalIndentNoEscape(t *testing.T) {
out, err := marshalIndentNoEscape([]string{"1 < 2", "2 < 3"}, "", " ")
if !assert.NoError(t, err) {
return
}
// Confirm the output.
assert.JSONEq(t, `["1 < 2", "2 < 3"]`, string(out))
// Confirm that HTML escaping is disabled.
assert.NotContains(t, string(out), "\\u003c")
// Confirm that the encoder performs indenting and writes a trailing newline.
assert.Contains(t, string(out), "[\n")
assert.Contains(t, string(out), " \"1 < 2\",\n")
assert.Contains(t, string(out), "]\n")
}

View File

@@ -0,0 +1,89 @@
package jsonsaver
import (
"bytes"
"fmt"
"github.com/databricks/cli/libs/dyn"
)
// Marshal is a version of [json.Marshal] for [dyn.Value].
//
// Objects in the output retain the order of keys as they appear in the underlying [dyn.Value].
// The output does not escape HTML characters in strings.
func Marshal(v dyn.Value) ([]byte, error) {
return marshalNoEscape(wrap{v})
}
// MarshalIndent is a version of [json.MarshalIndent] for [dyn.Value].
//
// Objects in the output retain the order of keys as they appear in the underlying [dyn.Value].
// The output does not escape HTML characters in strings.
func MarshalIndent(v dyn.Value, prefix, indent string) ([]byte, error) {
return marshalIndentNoEscape(wrap{v}, prefix, indent)
}
// Wrapper type for [dyn.Value] to expose the [json.Marshaler] interface.
type wrap struct {
v dyn.Value
}
// MarshalJSON implements the [json.Marshaler] interface for the [dyn.Value] wrapper type.
func (w wrap) MarshalJSON() ([]byte, error) {
var buf bytes.Buffer
if err := marshalValue(&buf, w.v); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// marshalValue recursively writes JSON for a [dyn.Value] to the buffer.
func marshalValue(buf *bytes.Buffer, v dyn.Value) error {
switch v.Kind() {
case dyn.KindString, dyn.KindBool, dyn.KindInt, dyn.KindFloat, dyn.KindTime, dyn.KindNil:
out, err := marshalNoEscape(v.AsAny())
if err != nil {
return err
}
// The encoder writes a trailing newline, so we need to remove it
// to avoid adding extra newlines when embedding this JSON.
out = out[:len(out)-1]
buf.Write(out)
case dyn.KindMap:
buf.WriteByte('{')
for i, pair := range v.MustMap().Pairs() {
if i > 0 {
buf.WriteByte(',')
}
// Require keys to be strings.
if pair.Key.Kind() != dyn.KindString {
return fmt.Errorf("map key must be a string, got %s", pair.Key.Kind())
}
// Marshal the key
if err := marshalValue(buf, pair.Key); err != nil {
return err
}
buf.WriteByte(':')
// Marshal the value
if err := marshalValue(buf, pair.Value); err != nil {
return err
}
}
buf.WriteByte('}')
case dyn.KindSequence:
buf.WriteByte('[')
for i, item := range v.MustSequence() {
if i > 0 {
buf.WriteByte(',')
}
if err := marshalValue(buf, item); err != nil {
return err
}
}
buf.WriteByte(']')
default:
return fmt.Errorf("unsupported kind: %d", v.Kind())
}
return nil
}
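The point of the custom marshaller is key order: encoding/json sorts map keys alphabetically, so the insertion order carried by a dyn.Value mapping would otherwise be lost. A minimal standard-library demonstration of the behavior being worked around:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// encoding/json sorts map keys, discarding insertion order.
	m := map[string]int{"zeta": 1, "alpha": 2}
	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // {"alpha":2,"zeta":1}
}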

View File

@@ -0,0 +1,100 @@
package jsonsaver
import (
"testing"
"github.com/databricks/cli/libs/dyn"
assert "github.com/databricks/cli/libs/dyn/dynassert"
)
func TestMarshal_String(t *testing.T) {
b, err := Marshal(dyn.V("string"))
if assert.NoError(t, err) {
assert.JSONEq(t, `"string"`, string(b))
}
}
func TestMarshal_Bool(t *testing.T) {
b, err := Marshal(dyn.V(true))
if assert.NoError(t, err) {
assert.JSONEq(t, `true`, string(b))
}
}
func TestMarshal_Int(t *testing.T) {
b, err := Marshal(dyn.V(42))
if assert.NoError(t, err) {
assert.JSONEq(t, `42`, string(b))
}
}
func TestMarshal_Float(t *testing.T) {
b, err := Marshal(dyn.V(42.1))
if assert.NoError(t, err) {
assert.JSONEq(t, `42.1`, string(b))
}
}
func TestMarshal_Time(t *testing.T) {
b, err := Marshal(dyn.V(dyn.MustTime("2021-01-01T00:00:00Z")))
if assert.NoError(t, err) {
assert.JSONEq(t, `"2021-01-01T00:00:00Z"`, string(b))
}
}
func TestMarshal_Map(t *testing.T) {
m := dyn.NewMapping()
m.Set(dyn.V("key1"), dyn.V("value1"))
m.Set(dyn.V("key2"), dyn.V("value2"))
b, err := Marshal(dyn.V(m))
if assert.NoError(t, err) {
assert.JSONEq(t, `{"key1":"value1","key2":"value2"}`, string(b))
}
}
func TestMarshal_Sequence(t *testing.T) {
var s []dyn.Value
s = append(s, dyn.V("value1"))
s = append(s, dyn.V("value2"))
b, err := Marshal(dyn.V(s))
if assert.NoError(t, err) {
assert.JSONEq(t, `["value1","value2"]`, string(b))
}
}
func TestMarshal_Complex(t *testing.T) {
map1 := dyn.NewMapping()
map1.Set(dyn.V("str1"), dyn.V("value1"))
map1.Set(dyn.V("str2"), dyn.V("value2"))
seq1 := []dyn.Value{}
seq1 = append(seq1, dyn.V("value1"))
seq1 = append(seq1, dyn.V("value2"))
root := dyn.NewMapping()
root.Set(dyn.V("map1"), dyn.V(map1))
root.Set(dyn.V("seq1"), dyn.V(seq1))
// Marshal without indent.
b, err := Marshal(dyn.V(root))
if assert.NoError(t, err) {
assert.Equal(t, `{"map1":{"str1":"value1","str2":"value2"},"seq1":["value1","value2"]}`+"\n", string(b))
}
// Marshal with indent.
b, err = MarshalIndent(dyn.V(root), "", " ")
if assert.NoError(t, err) {
assert.Equal(t, `{
"map1": {
"str1": "value1",
"str2": "value2"
},
"seq1": [
"value1",
"value2"
]
}`+"\n", string(b))
}
}

View File

@@ -43,11 +43,11 @@ func (w *workspaceFilesExtensionsClient) stat(ctx context.Context, name string)
 // This function returns the stat for the provided notebook. The stat object itself contains the path
 // with the extension since it is meant to be used in the context of a fs.FileInfo.
 func (w *workspaceFilesExtensionsClient) getNotebookStatByNameWithExt(ctx context.Context, name string) (*workspaceFileStatus, error) {
-	ext := notebook.Extension(path.Ext(name))
-	nameWithoutExt := strings.TrimSuffix(name, string(ext))
+	ext := path.Ext(name)
+	nameWithoutExt := strings.TrimSuffix(name, ext)

 	// File name does not have an extension associated with Databricks notebooks, return early.
-	if !slices.Contains([]notebook.Extension{
+	if !slices.Contains([]string{
 		notebook.ExtensionPython,
 		notebook.ExtensionR,
 		notebook.ExtensionScala,

@@ -82,7 +82,7 @@ func (w *workspaceFilesExtensionsClient) getNotebookStatByNameWithExt(ctx contex
 	// When the extension is one of .py, .r, .scala or .sql we expect the export format to be source.
 	// If it's not, return early.
-	if slices.Contains([]notebook.Extension{
+	if slices.Contains([]string{
 		notebook.ExtensionPython,
 		notebook.ExtensionR,
 		notebook.ExtensionScala,

@@ -101,7 +101,7 @@ func (w *workspaceFilesExtensionsClient) getNotebookStatByNameWithExt(ctx contex
 	// Modify the stat object path to include the extension. This stat object will be used
 	// to return the fs.FileInfo object in the stat method.
-	stat.Path = stat.Path + string(ext)
+	stat.Path = stat.Path + ext
 	return &workspaceFileStatus{
 		wsfsFileInfo:        stat,
 		nameForWorkspaceAPI: nameWithoutExt,

@@ -130,7 +130,7 @@ func (w *workspaceFilesExtensionsClient) getNotebookStatByNameWithoutExt(ctx con
 	// Modify the stat object path to include the extension. This stat object will be used
 	// to return the fs.DirEntry object in the ReadDir method.
-	stat.Path = stat.Path + string(ext)
+	stat.Path = stat.Path + ext
 	return &workspaceFileStatus{
 		wsfsFileInfo:        stat,
 		nameForWorkspaceAPI: name,

View File

@@ -5,15 +5,15 @@ import "github.com/databricks/databricks-sdk-go/service/workspace"
 type Extension string

 const (
-	ExtensionNone    Extension = ""
-	ExtensionPython  Extension = ".py"
-	ExtensionR       Extension = ".r"
-	ExtensionScala   Extension = ".scala"
-	ExtensionSql     Extension = ".sql"
-	ExtensionJupyter Extension = ".ipynb"
+	ExtensionNone    string = ""
+	ExtensionPython  string = ".py"
+	ExtensionR       string = ".r"
+	ExtensionScala   string = ".scala"
+	ExtensionSql     string = ".sql"
+	ExtensionJupyter string = ".ipynb"
 )

-var ExtensionToLanguage = map[Extension]workspace.Language{
+var ExtensionToLanguage = map[string]workspace.Language{
 	ExtensionPython: workspace.LanguagePython,
 	ExtensionR:      workspace.LanguageR,
 	ExtensionScala:  workspace.LanguageScala,

@@ -22,7 +22,7 @@ var ExtensionToLanguage = map[Extension]workspace.Language{
 	// The platform supports all languages (Python, R, Scala, and SQL) for Jupyter notebooks.
 }

-func GetExtensionByLanguage(objectInfo *workspace.ObjectInfo) Extension {
+func GetExtensionByLanguage(objectInfo *workspace.ObjectInfo) string {
 	if objectInfo.ObjectType != workspace.ObjectTypeNotebook {
 		return ExtensionNone
 	}

View File

@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io"
 	"os/exec"
+	"sync"
 )

 type execOption func(context.Context, *exec.Cmd) error

@@ -69,10 +70,24 @@ func WithStdoutWriter(dst io.Writer) execOption {
 	}
 }

+// safeWriter is a writer that is safe to use concurrently.
+// It serializes writes to the underlying writer.
+type safeWriter struct {
+	w io.Writer
+	m sync.Mutex
+}
+
+func (s *safeWriter) Write(p []byte) (n int, err error) {
+	s.m.Lock()
+	defer s.m.Unlock()
+	return s.w.Write(p)
+}
+
 func WithCombinedOutput(buf *bytes.Buffer) execOption {
+	sw := &safeWriter{w: buf}
 	return func(_ context.Context, c *exec.Cmd) error {
-		c.Stdout = io.MultiWriter(buf, c.Stdout)
-		c.Stderr = io.MultiWriter(buf, c.Stderr)
+		c.Stdout = io.MultiWriter(sw, c.Stdout)
+		c.Stderr = io.MultiWriter(sw, c.Stderr)
 		return nil
 	}
}
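Why the mutex matters: a command's stdout and stderr pipes are drained by separate goroutines, so two unsynchronized writers can race on a shared buffer. A self-contained sketch of the same pattern (not the CLI's code, just the idea):

package main

import (
	"bytes"
	"fmt"
	"io"
	"sync"
)

type safeWriter struct {
	w io.Writer
	m sync.Mutex
}

func (s *safeWriter) Write(p []byte) (int, error) {
	s.m.Lock()
	defer s.m.Unlock()
	return s.w.Write(p)
}

func main() {
	var buf bytes.Buffer
	sw := &safeWriter{w: &buf}

	// Two goroutines write concurrently; the mutex serializes each Write call.
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for j := 0; j < 100; j++ {
				fmt.Fprintf(sw, "writer %d line %d\n", id, j)
			}
		}(i)
	}
	wg.Wait()
	fmt.Println(buf.Len(), "bytes written without a data race")
}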

View File

@@ -3,7 +3,7 @@ resources:
   pipelines:
     {{.project_name}}_pipeline:
       name: {{.project_name}}_pipeline
-      {{- if eq default_catalog ""}}
+      {{- if or (eq default_catalog "") (eq default_catalog "hive_metastore")}}
       ## Specify the 'catalog' field to configure this pipeline to make use of Unity Catalog:
       # catalog: catalog_name
       {{- else}}
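The template change widens the condition so that both an empty default catalog and hive_metastore fall back to the commented-out catalog field. A small sketch of the same or/eq logic using Go's text/template (here default_catalog is passed as dot data rather than as the template engine's helper function):

package main

import (
	"os"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("pipeline").Parse(
		"catalog for {{.default_catalog}}: " +
			"{{if or (eq .default_catalog \"\") (eq .default_catalog \"hive_metastore\")}}(commented out){{else}}{{.default_catalog}}{{end}}\n"))

	// Empty and hive_metastore take the fallback branch; anything else is emitted.
	for _, c := range []string{"", "hive_metastore", "main"} {
		_ = tmpl.Execute(os.Stdout, map[string]string{"default_catalog": c})
	}
}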