Compare commits

...

30 Commits

Author SHA1 Message Date
Pieter Noordhuis bfa9c3a2c3
Completion for the resource flag 2024-10-28 17:15:31 +01:00
Pieter Noordhuis 9397c03f41
Merge branch 'dashboards' into dashboards-generate 2024-10-25 15:28:44 +02:00
Pieter Noordhuis 2348e852ac
Update schema 2024-10-25 15:28:20 +02:00
Pieter Noordhuis 21dabe87fd
Quote path when passing to TF 2024-10-25 13:00:37 +02:00
Pieter Noordhuis 3be89dfc59
Merge remote-tracking branch 'origin/main' into dashboards 2024-10-25 12:16:27 +02:00
Pieter Noordhuis 72078bb09c
Add integration test 2024-10-25 12:16:16 +02:00
hectorcast-db 5a555de503
[Internal] Automatically trigger integration tests on PR (#1857)
## Changes
Automatically trigger integration tests when a PR is opened or updated

## Tests
Workflow below.

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-25 09:15:24 +00:00
Pieter Noordhuis a3e32c0adb
Marshal contents of 'serialized_dashboard' if not a string 2024-10-24 17:10:13 +02:00
Pieter Noordhuis c4df41c3d6
Rename remote modification check 2024-10-24 16:29:52 +02:00
Pieter Noordhuis 93155f1c77
Inline SDK struct 2024-10-24 16:24:06 +02:00
Pieter Noordhuis ed84a33b0a
Reuse resource resolution code for the run command (#1858)
## Changes

As of #1846 we have a generalized package for doing resource lookups and
completion.

This change updates the run command to use this package instead of the
more specific code under `bundle/run`.

## Tests

* Unit tests pass
* Manually confirmed that completion and prompting works
2024-10-24 13:24:30 +00:00
Andrew Nester eaea308254
Added validator for folder permissions (#1824)
## Changes
This validator checks that the permissions defined in the top-level bundle
config match the permissions set in the workspace for the folders the bundle
is deployed to. It raises a warning if permissions defined in the workspace
are not defined in the bundle.

This validator is executed only during the `bundle validate` command.
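
Conceptually, the check is a set difference: any permission that applies to
the workspace folder but is absent from the bundle's top-level `permissions`
section produces a warning. A minimal sketch of that comparison (illustrative
only; the real implementation lives in `bundle/config/validate`):

```go
package main

import "fmt"

// Permission mirrors the shape of a bundle permission entry (simplified).
type Permission struct {
	Level    string
	UserName string
}

// untracked returns workspace permissions that are not configured in the bundle.
func untracked(workspace, bundle []Permission) []Permission {
	configured := make(map[Permission]bool)
	for _, p := range bundle {
		configured[p] = true
	}

	var out []Permission
	for _, p := range workspace {
		if !configured[p] {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	ws := []Permission{
		{Level: "CAN_MANAGE", UserName: "andrew.nester@databricks.com"},
		{Level: "CAN_MANAGE", UserName: "someone.else@databricks.com"},
	}
	cfg := []Permission{
		{Level: "CAN_MANAGE", UserName: "andrew.nester@databricks.com"},
	}
	for _, p := range untracked(ws, cfg) {
		fmt.Printf("- level: %s, user_name: %s\n", p.Level, p.UserName)
	}
}
```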

## Tests

```
Warning: untracked permissions apply to target workspace path

The following permissions apply to the workspace folder at "/Workspace/Users/andrew.nester@databricks.com/.bundle/clusters/default" but are not configured in the bundle:
- level: CAN_MANAGE, user_name: andrew.nester@databricks.com
```

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-24 12:36:17 +00:00
Pieter Noordhuis 89ee7d8a99
Add command to open a resource in the browser (#1846)
## Changes

This builds on the functionality added in #1731 that produces a URL for
every resource.

Adds `bundle/resources` package to deal with resource lookups and
command completion. The new functionality is similar to the lookup and
command completion functionality located in `bundle/run`. It differs in
that it doesn't gracefully deal with ambiguous references to resources,
now that we explicitly validate this doesn't occur in the bundle
configuration. It still allows resources to be looked up with their
fully qualified key, `<plural type>.<key>`.
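
To illustrate the lookup semantics, here is a minimal sketch (assumed shape,
not the actual `bundle/resources` API): a fully qualified
`<plural type>.<key>` reference resolves directly, while a bare key must
match exactly one resource across all types.

```go
package main

import (
	"fmt"
	"strings"
)

// lookup resolves a reference like "jobs.my_job" into a resource.
func lookup(resources map[string]map[string]string, ref string) (string, error) {
	if typ, key, ok := strings.Cut(ref, "."); ok {
		// Fully qualified: "<plural type>.<key>".
		r, found := resources[typ][key]
		if !found {
			return "", fmt.Errorf("resource %q not found", ref)
		}
		return r, nil
	}

	// Bare key: must match exactly one resource across all types.
	var matches []string
	for _, coll := range resources {
		if r, found := coll[ref]; found {
			matches = append(matches, r)
		}
	}
	if len(matches) != 1 {
		return "", fmt.Errorf("reference %q is ambiguous or not found", ref)
	}
	return matches[0], nil
}

func main() {
	resources := map[string]map[string]string{
		"jobs":      {"my_job": "job resource"},
		"pipelines": {"my_pipeline": "pipeline resource"},
	}
	fmt.Println(lookup(resources, "jobs.my_job"))
	fmt.Println(lookup(resources, "my_pipeline"))
}
```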

## Tests

* Added unit tests for resource lookup and completion
* Manually confirmed that `bundle open` prompts, accepts a key argument,
and opens a browser
2024-10-24 12:20:33 +00:00
Pieter Noordhuis eddaddaf8b
Attempt to reduce test flakiness on Windows (#1845)
## Changes

Test failures indicate that both stdout and stderr are consumed, yet the
content of stdout doesn't end up in the intended output. This can happen
if the goroutines responsible for writing to the combined output buffer
attempt to write to the same underlying buffer concurrently.

Example failure:
```
=== RUN   TestBackgroundCombinedOutput
    background_test.go:65: 
        	Error Trace:	D:/a/cli/cli/libs/process/background_test.go:65
        	Error:      	elements differ
        	            	
        	            	extra elements in list A:
        	            	([]interface {}) (len=1) {
        	            	 (string) (len=1) "2"
        	            	}
        	            	
        	            	
        	            	listA:
        	            	([]string) (len=2) {
        	            	 (string) (len=1) "1",
        	            	 (string) (len=1) "2"
        	            	}
        	            	
        	            	
        	            	listB:
        	            	([]string) (len=1) {
        	            	 (string) (len=1) "1"
        	            	}
        	Test:       	TestBackgroundCombinedOutput
```

With the test body:

ca45e53f42/libs/process/background_test.go (L48-L66)

With the implementation of `WithCombinedOutput`:

ca45e53f42/libs/process/opts.go (L72-L78)

Notice that `c.Stdout` does receive the "2"; otherwise the test failure
would have included the relevant assertion error. This leads me to believe
that there is a race on writing to `buf` between the two goroutines writing
to `c.Stdout` and `c.Stderr`.
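
One general way to eliminate this kind of race (a minimal sketch, not
necessarily the exact change in this PR) is to funnel both goroutines through
a mutex-guarded writer, so writes to the shared buffer are serialized:

```go
package main

import (
	"bytes"
	"io"
	"sync"
)

// syncWriter serializes concurrent writes to an underlying writer.
type syncWriter struct {
	mu sync.Mutex
	w  io.Writer
}

func (s *syncWriter) Write(p []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(p)
}

func main() {
	var buf bytes.Buffer
	out := &syncWriter{w: &buf}

	// Both goroutines write through the same lock, so the buffer's
	// internal state cannot be corrupted by concurrent writes.
	var wg sync.WaitGroup
	for _, msg := range []string{"1\n", "2\n"} {
		wg.Add(1)
		go func(m string) {
			defer wg.Done()
			io.WriteString(out, m)
		}(msg)
	}
	wg.Wait()
}
```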

## Tests

The test passes. Whether this PR has the intended effect remains to be
seen...
2024-10-24 12:03:12 +00:00
Pieter Noordhuis 866bfc5be7
Include JSON schema for dashboards 2024-10-24 13:35:22 +02:00
Pieter Noordhuis b8e9a293a1
Merge remote-tracking branch 'origin/main' into dashboards 2024-10-24 13:29:01 +02:00
Andrew Nester ab622e65bb
[Release] Release v0.231.0 (#1856)
CLI:
* Added JSON input validation for CLI commands
([#1771](https://github.com/databricks/cli/pull/1771)).
* Support Git worktrees for `sync`
([#1831](https://github.com/databricks/cli/pull/1831)).

Bundles:
* Add `bundle summary` to display URLs for deployed resources
([#1731](https://github.com/databricks/cli/pull/1731)).
* Added a warning when incorrect permissions used for
`/Workspace/Shared` bundle root
([#1821](https://github.com/databricks/cli/pull/1821)).
* Show actionable errors for collaborative deployment scenarios
([#1386](https://github.com/databricks/cli/pull/1386)).
* Fix path to repository-wide exclude file
([#1837](https://github.com/databricks/cli/pull/1837)).
* Fixed typo in converting cluster permissions
([#1826](https://github.com/databricks/cli/pull/1826)).
* Ignore metastore permission error during template generation
([#1819](https://github.com/databricks/cli/pull/1819)).
* Handle normalization of `dyn.KindTime` into an any type
([#1836](https://github.com/databricks/cli/pull/1836)).
* Added support for pip options in environment dependencies
([#1842](https://github.com/databricks/cli/pull/1842)).
* Fix race condition when restarting continuous jobs
([#1849](https://github.com/databricks/cli/pull/1849)).
* Fix pipeline in default-python template not working for certain
workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
* Add "output" flag to the bundle sync command
([#1853](https://github.com/databricks/cli/pull/1853)).

Internal:
* Move utility functions dealing with IAM to libs/iamutil
([#1820](https://github.com/databricks/cli/pull/1820)).
* Remove unused `IS_OWNER` constant
([#1823](https://github.com/databricks/cli/pull/1823)).
* Assert SDK version is consistent in the CLI generation process
([#1814](https://github.com/databricks/cli/pull/1814)).
* Fixed unmarshalling json input into `interface{}` type
([#1832](https://github.com/databricks/cli/pull/1832)).
* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure
environments ([#1833](https://github.com/databricks/cli/pull/1833)).
* Add behavioral tests for examples from the YAML spec
([#1835](https://github.com/databricks/cli/pull/1835)).
* Remove Terraform conversion function that's no longer used
([#1840](https://github.com/databricks/cli/pull/1840)).
* Encode assumptions about the dashboards API in a test
([#1839](https://github.com/databricks/cli/pull/1839)).
* Add script to make testing of code on branches easier
([#1844](https://github.com/databricks/cli/pull/1844)).

API Changes:
 * Added `databricks disable-legacy-dbfs` command group.

OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
Dependency updates:
* Upgrade TF provider to 1.54.0
([#1852](https://github.com/databricks/cli/pull/1852)).
* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0
([#1843](https://github.com/databricks/cli/pull/1843)).
2024-10-23 14:08:27 +00:00
Ilia Babanov 55a055d0f5
Add "output" flag to the bundle sync command (#1853)
## Changes
We want to use 'bundle sync' in the vscode extension before running a
file as an ad-hoc job (or through the context api). Right now we use
bundle deploy in these cases, but deploying bundle resources is not
always expected when you just want to quickly run a file. Sync makes
more sense in these cases, but we still want verbose output to
see what's happening.

The 'deploy' command has a hidden 'verbose' flag. For sync I've added an
'output' flag instead, handling both json and text cases, similar to
how it's done in the non-bundle `sync` command. The flag is not hidden
(although we still don't show any output by default, if the flag is not
set).
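
A rough sketch of how such a flag can be wired up with cobra (names and
shapes here are my illustration, not the actual implementation):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	var output string

	cmd := &cobra.Command{
		Use: "sync",
		RunE: func(cmd *cobra.Command, args []string) error {
			// A stand-in for a sync progress event.
			event := map[string]string{"action": "put", "path": "foo.py"}
			switch output {
			case "json":
				buf, err := json.Marshal(event)
				if err != nil {
					return err
				}
				cmd.Println(string(buf))
			case "text":
				cmd.Println(fmt.Sprintf("Uploaded %s", event["path"]))
			case "":
				// No output by default, matching the behavior described above.
			default:
				return fmt.Errorf("unknown output format: %s", output)
			}
			return nil
		},
	}

	cmd.Flags().StringVar(&output, "output", "", "output format: text or json")
	_ = cmd.Execute()
}
```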

VSCode Extension PR:
https://github.com/databricks/databricks-vscode/pull/1401


## Tests
Manually
2024-10-23 11:08:12 +00:00
Lennart Kats (databricks) 60c153c0e7
Fix pipeline in default-python template not working for certain workspaces (#1854)
Change the default-python template to not set the `catalog` field for
the pipeline for workspaces that set `hive_metastore` as the default
catalog. The Pipelines service currently returns an error when that
value is used for the `catalog` field.

This is the simplest fix for this issue, which was reported by a
customer. As a followup, we should look at whether we want to prompt for
a catalog instead, possibly just for this specific scenario.
2024-10-22 15:52:46 +00:00
shreyas-goenka 3bab21e72e
Fix race condition when restarting continuous jobs (#1849)
## Changes
We don't need to cancel existing runs when the job is continuous and
unpaused. The `/jobs/run-now` command will cancel the existing run and
trigger a new one automatically.

Cancelling the active runs manually can cause a race condition where both
the manual trigger from the CLI and the continuous trigger from the job
configuration happen at the same time. This PR prevents that from
happening.
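
The resulting control flow, as a minimal sketch (types and helpers here are
illustrative assumptions, not the actual CLI code):

```go
package main

import "fmt"

type Job struct {
	Continuous  bool
	PauseStatus string // "PAUSED" or "UNPAUSED"
}

// restart skips the manual cancel for continuous, unpaused jobs, because
// the run-now request already cancels the active run in that case.
func restart(j Job) {
	if !(j.Continuous && j.PauseStatus == "UNPAUSED") {
		fmt.Println("cancelling active runs...")
	}
	// For continuous, unpaused jobs this both cancels the previous run and
	// triggers the new one, so the separate cancel above is not needed.
	fmt.Println("calling /jobs/run-now...")
}

func main() {
	restart(Job{Continuous: true, PauseStatus: "UNPAUSED"})
}
```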

## Tests
Unit tests and manually
2024-10-22 14:59:17 +00:00
Pieter Noordhuis 5b6a7b2e7c
Interpolate references to the dashboard resource correctly 2024-10-22 14:56:48 +02:00
Andrew Nester 68d69d6e0b
Upgrade TF provider to 1.54.0 (#1852)
## Changes
Upgrade TF provider to 1.54.0
2024-10-22 10:43:43 +00:00
dependabot[bot] f8bb3a8d72
Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 (#1843)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.48.0 to 0.49.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.49.0</h2>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DisableLegacyDbfsAPI">w.DisableLegacyDbfs</a>
workspace-level service.</li>
<li>Added <code>UnityCatalogProvisioningState</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li>
<li>Added <code>IsTruncated</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Result">dashboards.Result</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#BaseJob">jobs.BaseJob</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CreateJob">jobs.CreateJob</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#Job">jobs.Job</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobSettings">jobs.JobSettings</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li>
<li>Added <code>Report</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#IngestionConfig">pipelines.IngestionConfig</a>.</li>
<li>Added <code>SequenceBy</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#TableSpecificConfig">pipelines.TableSpecificConfig</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#Alert">sql.Alert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#CreateAlertRequestAlert">sql.CreateAlertRequestAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#ListAlertsResponseAlert">sql.ListAlertsResponseAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#UpdateAlertRequestAlert">sql.UpdateAlertRequestAlert</a>.</li>
</ul>
<p>OpenAPI SHA: cf9c61453990df0f9453670f2fe68e1b128647a2, Date:
2024-10-14</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md">github.com/databricks/databricks-sdk-go's
changelog</a>.</em></p>
<blockquote>
<h2>[Release] Release v0.49.0</h2>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DisableLegacyDbfsAPI">w.DisableLegacyDbfs</a>
workspace-level service.</li>
<li>Added <code>UnityCatalogProvisioningState</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li>
<li>Added <code>IsTruncated</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Result">dashboards.Result</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#BaseJob">jobs.BaseJob</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CreateJob">jobs.CreateJob</a>.</li>
<li>Added <code>EffectiveBudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#Job">jobs.Job</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobSettings">jobs.JobSettings</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li>
<li>Added <code>Report</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#IngestionConfig">pipelines.IngestionConfig</a>.</li>
<li>Added <code>SequenceBy</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#TableSpecificConfig">pipelines.TableSpecificConfig</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#Alert">sql.Alert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#CreateAlertRequestAlert">sql.CreateAlertRequestAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#ListAlertsResponseAlert">sql.ListAlertsResponseAlert</a>.</li>
<li>Added <code>NotifyOnOk</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#UpdateAlertRequestAlert">sql.UpdateAlertRequestAlert</a>.</li>
</ul>
<p>OpenAPI SHA: cf9c61453990df0f9453670f2fe68e1b128647a2, Date:
2024-10-14</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b47ecac9ea"><code>b47ecac</code></a>
[Release] Release v0.49.0 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1062">#1062</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.48.0...v0.49.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.48.0&new-version=0.49.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-10-22 09:37:01 +00:00
Stephen Macke 571076d5e1
Support Git worktrees for `sync` (#1831)
## Changes

This change allows the `sync` command to work from [git
worktrees](https://git-scm.com/docs/git-worktree).
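
For context on the mechanism: in a worktree, the checkout's `.git` entry is a
plain file containing a `gitdir:` pointer to the real git directory, rather
than a directory. A minimal sketch of resolving it (my illustration, not the
PR's actual code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// resolveGitDir returns the git directory for a checkout, following the
// "gitdir: <path>" pointer file that git writes into worktrees.
func resolveGitDir(root string) (string, error) {
	dotgit := filepath.Join(root, ".git")
	fi, err := os.Stat(dotgit)
	if err != nil {
		return "", err
	}

	// In a primary checkout, .git is a directory.
	if fi.IsDir() {
		return dotgit, nil
	}

	// In a worktree, .git is a file of the form "gitdir: <path>".
	buf, err := os.ReadFile(dotgit)
	if err != nil {
		return "", err
	}
	rest, ok := strings.CutPrefix(strings.TrimSpace(string(buf)), "gitdir: ")
	if !ok {
		return "", fmt.Errorf("invalid .git file at %s", dotgit)
	}
	return rest, nil
}

func main() {
	fmt.Println(resolveGitDir("."))
}
```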

## Tests

* Added unit tests for traversal of worktree related files.
* Manually confirmed that synchronization of files from a main checkout,
as well as a worktree, observed the same ignore rules (both locally
defined as well as from `$GIT_DIR/info/exclude`).

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-21 18:27:07 +00:00
Pieter Noordhuis ca45e53f42
Add script to make testing of code on branches easier (#1844)
## Changes

Convenience script to exec into a shell where a CLI build for a specific
branch is made available.

## Tests

Manually from `/tmp` with `bash <([path]) demo-dashboards`.
2024-10-21 17:56:17 +00:00
Andrew Nester ffdbec87cc
Added support for pip options in environment dependencies (#1842)
## Changes
Added support for specifying pip options, such as `--extra-index-url`,
in environment dependencies:

```
environments:
  - environment_key: Default
    spec:
      client: "1"
      dependencies:
        - --extra-index-url https://foo@bar.com/packages/smth somepackage
        - json==1.0.0
```

## Tests
Added regression test

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-21 11:45:39 +00:00
Pieter Noordhuis eefda8c198
Fix path to repository-wide exclude file (#1837)
## Changes

This file is at `info/exclude`, and not `info/excludes`.

Also see https://git-scm.com/docs/gitignore.
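
As a refresher, the repository-wide ignore patterns live in a plain
gitignore-format file at `$GIT_DIR/info/exclude`. A minimal sketch of reading
it (illustrative only, not the CLI's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// repoExcludePatterns reads ignore patterns from $GIT_DIR/info/exclude.
func repoExcludePatterns(gitDir string) ([]string, error) {
	buf, err := os.ReadFile(filepath.Join(gitDir, "info", "exclude"))
	if os.IsNotExist(err) {
		return nil, nil // the file is optional
	}
	if err != nil {
		return nil, err
	}

	var patterns []string
	for _, line := range strings.Split(string(buf), "\n") {
		line = strings.TrimSpace(line)
		// Skip blank lines and comments, as gitignore syntax does.
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		patterns = append(patterns, line)
	}
	return patterns, nil
}

func main() {
	patterns, err := repoExcludePatterns(".git")
	fmt.Println(patterns, err)
}
```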

## Tests

Manually confirmed that these ignore patterns are now picked up. I
created a repository with a pattern in this file and ran `sync` to
confirm it ignores files matching the pattern.
2024-10-18 15:46:39 +00:00
Pieter Noordhuis c9770503f8
Encode assumptions about the dashboards API in a test (#1839)
## Changes

Dashboards can be imported either via its own CRUD API, or via the
workspace import API. This test encodes the assumptions we can make
about the API behavior. More specifically, the identity of the dashboard
is retained on workspace import, as are the properties that aren't
surfaced in the workspace import API.

## Tests

The integration test passes.
2024-10-18 15:42:05 +00:00
Pieter Noordhuis 3b1fb6d2df
Remove Terraform conversion function that's no longer used (#1840)
## Changes

In #1218, the `BundleToTerraform` function was replaced by a version
based on the dynamic configuration tree. Since then, it has only been
used in tests to confirm that the output of the old function was equal
to the output of the new function. We no longer need this and can
exclusively rely on the version based on the dynamic configuration tree.

## Tests

Tests pass.
2024-10-18 15:38:10 +00:00
Andrew Nester 0c9c90208f
Added a warning when incorrect permissions used for `/Workspace/Shared` bundle root (#1821)
## Changes
Added a warning when incorrect permissions are used for the
`/Workspace/Shared` bundle root.

## Tests
Added unit test

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-18 15:37:16 +00:00
80 changed files with 3027 additions and 666 deletions

@@ -1 +1 @@
-0c86ea6dbd9a730c24ff0d4e509603e476955ac5
+cf9c61453990df0f9453670f2fe68e1b128647a2

.gitattributes

@@ -54,6 +54,7 @@ cmd/workspace/dashboards/dashboards.go linguist-generated=true
 cmd/workspace/data-sources/data-sources.go linguist-generated=true
 cmd/workspace/default-namespace/default-namespace.go linguist-generated=true
 cmd/workspace/disable-legacy-access/disable-legacy-access.go linguist-generated=true
+cmd/workspace/disable-legacy-dbfs/disable-legacy-dbfs.go linguist-generated=true
 cmd/workspace/enhanced-security-monitoring/enhanced-security-monitoring.go linguist-generated=true
 cmd/workspace/experiments/experiments.go linguist-generated=true
 cmd/workspace/external-locations/external-locations.go linguist-generated=true

.github/workflows/integration-tests.yml (new file)

@@ -0,0 +1,60 @@
name: integration

on:
  pull_request:
    types: [opened, synchronize]
  merge_group:

jobs:
  trigger-tests:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    environment: "test-trigger-is"

    steps:
      - uses: actions/checkout@v4

      - name: Generate GitHub App Token
        id: generate-token
        uses: actions/create-github-app-token@v1
        with:
          app-id: ${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}
          private-key: ${{ secrets.DECO_WORKFLOW_TRIGGER_PRIVATE_KEY }}
          owner: ${{ secrets.ORG_NAME }}
          repositories: ${{secrets.REPO_NAME}}

      - name: Trigger Workflow in Another Repo
        env:
          GH_TOKEN: ${{ steps.generate-token.outputs.token }}
        run: |
          gh workflow run cli-isolated-pr.yml -R ${{ secrets.ORG_NAME }}/${{secrets.REPO_NAME}} \
            --ref main \
            -f pull_request_number=${{ github.event.pull_request.number }} \
            -f commit_sha=${{ github.event.pull_request.head.sha }}

  # Statuses and checks apply to specific commits (by hash).
  # Enforcement of required checks is done both at the PR level and the merge queue level.
  # In case of multiple commits in a single PR, the hash of the squashed commit
  # will not match the one for the latest (approved) commit in the PR.
  # We auto approve the check for the merge queue for two reasons:
  # * Queue times out due to duration of tests.
  # * Avoid running integration tests twice, since it was already run at the tip of the branch before squashing.
  auto-approve:
    if: github.event_name == 'merge_group'
    runs-on: ubuntu-latest
    steps:
      - name: Mark Check
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        shell: bash
        run: |
          gh api -X POST -H "Accept: application/vnd.github+json" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            /repos/${{ github.repository }}/statuses/${{ github.sha }} \
            -f 'state=success' \
            -f 'context=Integration Tests Check'


@@ -1,5 +1,43 @@
 # Version changelog
 
+## [Release] Release v0.231.0
+
+CLI:
+* Added JSON input validation for CLI commands ([#1771](https://github.com/databricks/cli/pull/1771)).
+* Support Git worktrees for `sync` ([#1831](https://github.com/databricks/cli/pull/1831)).
+
+Bundles:
+* Add `bundle summary` to display URLs for deployed resources ([#1731](https://github.com/databricks/cli/pull/1731)).
+* Added a warning when incorrect permissions used for `/Workspace/Shared` bundle root ([#1821](https://github.com/databricks/cli/pull/1821)).
+* Show actionable errors for collaborative deployment scenarios ([#1386](https://github.com/databricks/cli/pull/1386)).
+* Fix path to repository-wide exclude file ([#1837](https://github.com/databricks/cli/pull/1837)).
+* Fixed typo in converting cluster permissions ([#1826](https://github.com/databricks/cli/pull/1826)).
+* Ignore metastore permission error during template generation ([#1819](https://github.com/databricks/cli/pull/1819)).
+* Handle normalization of `dyn.KindTime` into an any type ([#1836](https://github.com/databricks/cli/pull/1836)).
+* Added support for pip options in environment dependencies ([#1842](https://github.com/databricks/cli/pull/1842)).
+* Fix race condition when restarting continuous jobs ([#1849](https://github.com/databricks/cli/pull/1849)).
+* Fix pipeline in default-python template not working for certain workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
+* Add "output" flag to the bundle sync command ([#1853](https://github.com/databricks/cli/pull/1853)).
+
+Internal:
+* Move utility functions dealing with IAM to libs/iamutil ([#1820](https://github.com/databricks/cli/pull/1820)).
+* Remove unused `IS_OWNER` constant ([#1823](https://github.com/databricks/cli/pull/1823)).
+* Assert SDK version is consistent in the CLI generation process ([#1814](https://github.com/databricks/cli/pull/1814)).
+* Fixed unmarshalling json input into `interface{}` type ([#1832](https://github.com/databricks/cli/pull/1832)).
+* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure environments ([#1833](https://github.com/databricks/cli/pull/1833)).
+* Add behavioral tests for examples from the YAML spec ([#1835](https://github.com/databricks/cli/pull/1835)).
+* Remove Terraform conversion function that's no longer used ([#1840](https://github.com/databricks/cli/pull/1840)).
+* Encode assumptions about the dashboards API in a test ([#1839](https://github.com/databricks/cli/pull/1839)).
+* Add script to make testing of code on branches easier ([#1844](https://github.com/databricks/cli/pull/1844)).
+
+API Changes:
+ * Added `databricks disable-legacy-dbfs` command group.
+
+OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
+
+Dependency updates:
+* Upgrade TF provider to 1.54.0 ([#1852](https://github.com/databricks/cli/pull/1852)).
+* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 ([#1843](https://github.com/databricks/cli/pull/1843)).
+
 ## [Release] Release v0.230.0
 
 Notable changes for Databricks Asset Bundles:


@@ -214,7 +214,7 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 	// Dashboards: Prefix
 	for key, dashboard := range r.Dashboards {
-		if dashboard == nil {
+		if dashboard == nil || dashboard.CreateDashboardRequest == nil {
 			diags = diags.Extend(diag.Errorf("dashboard %s s is not defined", key))
 			continue
 		}


@@ -10,6 +10,7 @@ import (
 	"github.com/databricks/cli/bundle/config/resources"
 	"github.com/databricks/cli/bundle/internal/bundletest"
 	"github.com/databricks/cli/libs/dyn"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -25,11 +26,15 @@ func TestConfigureDashboardDefaultsParentPath(t *testing.T) {
 		"d1": {
 			// Empty string is skipped.
 			// See below for how it is set.
-			ParentPath: "",
+			CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+				ParentPath: "",
+			},
 		},
 		"d2": {
 			// Non-empty string is skipped.
-			ParentPath: "already-set",
+			CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+				ParentPath: "already-set",
+			},
 		},
 		"d3": {
 			// No parent path set.


@@ -8,6 +8,7 @@ import (
 	"github.com/databricks/cli/bundle/config/resources"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/databricks/databricks-sdk-go/service/ml"
 	"github.com/databricks/databricks-sdk-go/service/pipelines"
@@ -87,8 +88,10 @@ func TestInitializeURLs(t *testing.T) {
 			},
 			Dashboards: map[string]*resources.Dashboard{
 				"dashboard1": {
 					ID: "01ef8d56871e1d50ae30ce7375e42478",
-					DisplayName: "My special dashboard",
+					CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+						DisplayName: "My special dashboard",
+					},
 				},
 			},
 		},


@@ -14,6 +14,7 @@ import (
 	sdkconfig "github.com/databricks/databricks-sdk-go/config"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/databricks/databricks-sdk-go/service/iam"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/databricks/databricks-sdk-go/service/ml"
@@ -124,7 +125,11 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
 			"cluster1": {ClusterSpec: &compute.ClusterSpec{ClusterName: "cluster1", SparkVersion: "13.2.x", NumWorkers: 1}},
 		},
 		Dashboards: map[string]*resources.Dashboard{
-			"dashboard1": {DisplayName: "dashboard1"},
+			"dashboard1": {
+				CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+					DisplayName: "dashboard1",
+				},
+			},
 		},
 	},
 },


@@ -699,6 +699,9 @@ func TestTranslatePathJobEnvironments(t *testing.T) {
 				"../dist/env2.whl",
 				"simplejson",
 				"/Workspace/Users/foo@bar.com/test.whl",
+				"--extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple foobar",
+				"foobar --extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple",
+				"https://foo@bar.com/packages/pypi/simple",
 			},
 		},
 	},
@@ -719,6 +722,9 @@
 	assert.Equal(t, strings.Join([]string{".", "dist", "env2.whl"}, string(os.PathSeparator)), b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[1])
 	assert.Equal(t, "simplejson", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[2])
 	assert.Equal(t, "/Workspace/Users/foo@bar.com/test.whl", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[3])
+	assert.Equal(t, "--extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple foobar", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[4])
+	assert.Equal(t, "foobar --extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[5])
+	assert.Equal(t, "https://foo@bar.com/packages/pypi/simple", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[6])
 }
 
 func TestTranslatePathWithComplexVariables(t *testing.T) {


@@ -17,27 +17,18 @@ type Dashboard struct {
 	ModifiedStatus ModifiedStatus `json:"modified_status,omitempty" bundle:"internal"`
 	URL            string         `json:"url,omitempty" bundle:"internal"`
 
-	// ===========================
-	// === BEGIN OF API FIELDS ===
-	// ===========================
+	*dashboards.CreateDashboardRequest
 
-	// DisplayName is the display name of the dashboard (both as title and as basename in the workspace).
-	DisplayName string `json:"display_name"`
-
-	// WarehouseID is the ID of the SQL Warehouse used to run the dashboard's queries.
-	WarehouseID string `json:"warehouse_id"`
+	// =========================
+	// === Additional fields ===
+	// =========================
 
 	// SerializedDashboard holds the contents of the dashboard in serialized JSON form.
-	// Note: its type is any and not string such that it can be inlined as YAML.
-	// If it is not a string, its contents will be marshalled as JSON.
+	// We override the field's type from the SDK struct here to allow for inlining as YAML.
+	// If the value is a string, it is used as is.
+	// If it is not a string, its contents is marshalled as JSON.
 	SerializedDashboard any `json:"serialized_dashboard,omitempty"`
 
-	// ParentPath is the workspace path of the folder containing the dashboard.
-	// Includes leading slash and no trailing slash.
-	//
-	// Defaults to ${workspace.resource_path} if not set.
-	ParentPath string `json:"parent_path,omitempty"`
-
 	// EmbedCredentials is a flag to indicate if the publisher's credentials should
 	// be embedded in the published dashboard. These embedded credentials will be used
 	// to execute the published dashboard's queries.
@@ -45,10 +36,6 @@ type Dashboard struct {
 	// Defaults to false if not set.
 	EmbedCredentials bool `json:"embed_credentials,omitempty"`
 
-	// ===========================
-	// ==== END OF API FIELDS ====
-	// ===========================
-
 	// FilePath points to the local `.lvdash.json` file containing the dashboard definition.
 	FilePath string `json:"file_path,omitempty"`
 }


@@ -1,5 +1,7 @@
 package resources
 
+import "fmt"
+
 // Permission holds the permission level setting for a single principal.
 // Multiple of these can be defined on any resource.
 type Permission struct {
@@ -9,3 +11,19 @@ type Permission struct {
 	ServicePrincipalName string `json:"service_principal_name,omitempty"`
 	GroupName            string `json:"group_name,omitempty"`
 }
+
+func (p Permission) String() string {
+	if p.UserName != "" {
+		return fmt.Sprintf("level: %s, user_name: %s", p.Level, p.UserName)
+	}
+
+	if p.ServicePrincipalName != "" {
+		return fmt.Sprintf("level: %s, service_principal_name: %s", p.Level, p.ServicePrincipalName)
+	}
+
+	if p.GroupName != "" {
+		return fmt.Sprintf("level: %s, group_name: %s", p.Level, p.GroupName)
+	}
+
+	return fmt.Sprintf("level: %s", p.Level)
+}


@@ -0,0 +1,126 @@
package validate

import (
	"context"
	"fmt"
	"path"
	"strings"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/libraries"
	"github.com/databricks/cli/bundle/permissions"
	"github.com/databricks/cli/libs/diag"
	"github.com/databricks/databricks-sdk-go/apierr"
	"github.com/databricks/databricks-sdk-go/service/workspace"
	"golang.org/x/sync/errgroup"
)

type folderPermissions struct {
}

// Apply implements bundle.ReadOnlyMutator.
func (f *folderPermissions) Apply(ctx context.Context, b bundle.ReadOnlyBundle) diag.Diagnostics {
	if len(b.Config().Permissions) == 0 {
		return nil
	}

	rootPath := b.Config().Workspace.RootPath
	paths := []string{}
	if !libraries.IsVolumesPath(rootPath) && !libraries.IsWorkspaceSharedPath(rootPath) {
		paths = append(paths, rootPath)
	}

	if !strings.HasSuffix(rootPath, "/") {
		rootPath += "/"
	}

	for _, p := range []string{
		b.Config().Workspace.ArtifactPath,
		b.Config().Workspace.FilePath,
		b.Config().Workspace.StatePath,
		b.Config().Workspace.ResourcePath,
	} {
		if libraries.IsWorkspaceSharedPath(p) || libraries.IsVolumesPath(p) {
			continue
		}

		if strings.HasPrefix(p, rootPath) {
			continue
		}

		paths = append(paths, p)
	}

	var diags diag.Diagnostics
	g, ctx := errgroup.WithContext(ctx)
	results := make([]diag.Diagnostics, len(paths))
	for i, p := range paths {
		g.Go(func() error {
			results[i] = checkFolderPermission(ctx, b, p)
			return nil
		})
	}

	if err := g.Wait(); err != nil {
		return diag.FromErr(err)
	}

	for _, r := range results {
		diags = diags.Extend(r)
	}

	return diags
}

func checkFolderPermission(ctx context.Context, b bundle.ReadOnlyBundle, folderPath string) diag.Diagnostics {
	w := b.WorkspaceClient().Workspace
	obj, err := getClosestExistingObject(ctx, w, folderPath)
	if err != nil {
		return diag.FromErr(err)
	}

	objPermissions, err := w.GetPermissions(ctx, workspace.GetWorkspaceObjectPermissionsRequest{
		WorkspaceObjectId:   fmt.Sprint(obj.ObjectId),
		WorkspaceObjectType: "directories",
	})
	if err != nil {
		return diag.FromErr(err)
	}

	p := permissions.ObjectAclToResourcePermissions(folderPath, objPermissions.AccessControlList)
	return p.Compare(b.Config().Permissions)
}

func getClosestExistingObject(ctx context.Context, w workspace.WorkspaceInterface, folderPath string) (*workspace.ObjectInfo, error) {
	for {
		obj, err := w.GetStatusByPath(ctx, folderPath)
		if err == nil {
			return obj, nil
		}

		if !apierr.IsMissing(err) {
			return nil, err
		}

		parent := path.Dir(folderPath)
		// If the parent is the same as the current folder, then we have reached the root.
		if folderPath == parent {
			break
		}

		folderPath = parent
	}

	return nil, fmt.Errorf("folder %s and its parent folders do not exist", folderPath)
}

// Name implements bundle.ReadOnlyMutator.
func (f *folderPermissions) Name() string {
	return "validate:folder_permissions"
}

// ValidateFolderPermissions validates that permissions for the folders in Workspace file system matches
// the permissions in the top-level permissions section of the bundle.
func ValidateFolderPermissions() bundle.ReadOnlyMutator {
	return &folderPermissions{}
}


@@ -0,0 +1,208 @@
package validate

import (
	"context"
	"testing"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/config"
	"github.com/databricks/cli/bundle/config/resources"
	"github.com/databricks/cli/bundle/permissions"
	"github.com/databricks/cli/libs/diag"
	"github.com/databricks/databricks-sdk-go/apierr"
	"github.com/databricks/databricks-sdk-go/experimental/mocks"
	"github.com/databricks/databricks-sdk-go/service/workspace"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

func TestFolderPermissionsInheritedWhenRootPathDoesNotExist(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				RootPath:     "/Workspace/Users/foo@bar.com",
				ArtifactPath: "/Workspace/Users/otherfoo@bar.com/artifacts",
				FilePath:     "/Workspace/Users/foo@bar.com/files",
				StatePath:    "/Workspace/Users/foo@bar.com/state",
				ResourcePath: "/Workspace/Users/foo@bar.com/resources",
			},
			Permissions: []resources.Permission{
				{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
			},
		},
	}
	m := mocks.NewMockWorkspaceClient(t)
	api := m.GetMockWorkspaceAPI()
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/otherfoo@bar.com/artifacts").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/otherfoo@bar.com").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace").Return(&workspace.ObjectInfo{
		ObjectId: 1234,
	}, nil)
	api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
		WorkspaceObjectId:   "1234",
		WorkspaceObjectType: "directories",
	}).Return(&workspace.WorkspaceObjectPermissions{
		ObjectId: "1234",
		AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
			{
				UserName: "foo@bar.com",
				AllPermissions: []workspace.WorkspaceObjectPermission{
					{PermissionLevel: "CAN_MANAGE"},
				},
			},
		},
	}, nil)

	b.SetWorkpaceClient(m.WorkspaceClient)
	rb := bundle.ReadOnly(b)

	diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
	require.Empty(t, diags)
}

func TestValidateFolderPermissionsFailsOnMissingBundlePermission(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				RootPath:     "/Workspace/Users/foo@bar.com",
				ArtifactPath: "/Workspace/Users/foo@bar.com/artifacts",
				FilePath:     "/Workspace/Users/foo@bar.com/files",
				StatePath:    "/Workspace/Users/foo@bar.com/state",
				ResourcePath: "/Workspace/Users/foo@bar.com/resources",
			},
			Permissions: []resources.Permission{
				{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
			},
		},
	}
	m := mocks.NewMockWorkspaceClient(t)
	api := m.GetMockWorkspaceAPI()
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(&workspace.ObjectInfo{
		ObjectId: 1234,
	}, nil)
	api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
		WorkspaceObjectId:   "1234",
		WorkspaceObjectType: "directories",
	}).Return(&workspace.WorkspaceObjectPermissions{
		ObjectId: "1234",
		AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
			{
				UserName: "foo@bar.com",
				AllPermissions: []workspace.WorkspaceObjectPermission{
					{PermissionLevel: "CAN_MANAGE"},
				},
			},
			{
				UserName: "foo2@bar.com",
				AllPermissions: []workspace.WorkspaceObjectPermission{
					{PermissionLevel: "CAN_MANAGE"},
				},
			},
		},
	}, nil)

	b.SetWorkpaceClient(m.WorkspaceClient)
	rb := bundle.ReadOnly(b)

	diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
	require.Len(t, diags, 1)
	require.Equal(t, "untracked permissions apply to target workspace path", diags[0].Summary)
	require.Equal(t, diag.Warning, diags[0].Severity)
	require.Equal(t, "The following permissions apply to the workspace folder at \"/Workspace/Users/foo@bar.com\" but are not configured in the bundle:\n- level: CAN_MANAGE, user_name: foo2@bar.com\n", diags[0].Detail)
}

func TestValidateFolderPermissionsFailsOnPermissionMismatch(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				RootPath:     "/Workspace/Users/foo@bar.com",
				ArtifactPath: "/Workspace/Users/foo@bar.com/artifacts",
				FilePath:     "/Workspace/Users/foo@bar.com/files",
				StatePath:    "/Workspace/Users/foo@bar.com/state",
				ResourcePath: "/Workspace/Users/foo@bar.com/resources",
			},
			Permissions: []resources.Permission{
				{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
			},
		},
	}
	m := mocks.NewMockWorkspaceClient(t)
	api := m.GetMockWorkspaceAPI()
	api.EXPECT().GetStatusByPath(mock.Anything, "/Workspace/Users/foo@bar.com").Return(&workspace.ObjectInfo{
		ObjectId: 1234,
	}, nil)
	api.EXPECT().GetPermissions(mock.Anything, workspace.GetWorkspaceObjectPermissionsRequest{
		WorkspaceObjectId:   "1234",
		WorkspaceObjectType: "directories",
	}).Return(&workspace.WorkspaceObjectPermissions{
		ObjectId: "1234",
		AccessControlList: []workspace.WorkspaceObjectAccessControlResponse{
			{
				UserName: "foo2@bar.com",
				AllPermissions: []workspace.WorkspaceObjectPermission{
					{PermissionLevel: "CAN_MANAGE"},
				},
			},
		},
	}, nil)

	b.SetWorkpaceClient(m.WorkspaceClient)
	rb := bundle.ReadOnly(b)

	diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
	require.Len(t, diags, 1)
	require.Equal(t, "untracked permissions apply to target workspace path", diags[0].Summary)
	require.Equal(t, diag.Warning, diags[0].Severity)
}

func TestValidateFolderPermissionsFailsOnNoRootFolder(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				RootPath:     "/NotExisting",
				ArtifactPath: "/NotExisting/artifacts",
				FilePath:     "/NotExisting/files",
				StatePath:    "/NotExisting/state",
				ResourcePath: "/NotExisting/resources",
			},
			Permissions: []resources.Permission{
				{Level: permissions.CAN_MANAGE, UserName: "foo@bar.com"},
			},
		},
	}
	m := mocks.NewMockWorkspaceClient(t)
	api := m.GetMockWorkspaceAPI()
	api.EXPECT().GetStatusByPath(mock.Anything, "/NotExisting").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})
	api.EXPECT().GetStatusByPath(mock.Anything, "/").Return(nil, &apierr.APIError{
		StatusCode: 404,
		ErrorCode:  "RESOURCE_DOES_NOT_EXIST",
	})

	b.SetWorkpaceClient(m.WorkspaceClient)
	rb := bundle.ReadOnly(b)

	diags := bundle.ApplyReadOnly(context.Background(), rb, ValidateFolderPermissions())
	require.Len(t, diags, 1)
	require.Equal(t, "folder / and its parent folders do not exist", diags[0].Summary)
	require.Equal(t, diag.Error, diags[0].Severity)
}


@@ -35,6 +35,7 @@ func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		FilesToSync(),
 		ValidateSyncPatterns(),
 		JobTaskClusterSpec(),
+		ValidateFolderPermissions(),
 	))
 }


@@ -16,7 +16,7 @@ type dashboardState struct {
 	ETag string
 }
 
-func collectDashboards(ctx context.Context, b *bundle.Bundle) ([]dashboardState, error) {
+func collectDashboardsFromState(ctx context.Context, b *bundle.Bundle) ([]dashboardState, error) {
 	state, err := ParseResourcesState(ctx, b)
 	if err != nil && state == nil {
 		return nil, err
@@ -28,22 +28,12 @@ func collectDashboards(ctx context.Context, b *bundle.Bundle) ([]dashboardState, error) {
 			continue
 		}
 
 		for _, instance := range resource.Instances {
-			id := instance.Attributes.ID
-			if id == "" {
-				continue
-			}
-
 			switch resource.Type {
 			case "databricks_dashboard":
-				etag := instance.Attributes.ETag
-				if etag == "" {
-					continue
-				}
-
 				dashboards = append(dashboards, dashboardState{
 					Name: resource.Name,
-					ID:   id,
-					ETag: etag,
+					ID:   instance.Attributes.ID,
+					ETag: instance.Attributes.ETag,
 				})
 			}
 		}
@@ -52,14 +42,14 @@ func collectDashboards(ctx context.Context, b *bundle.Bundle) ([]dashboardState, error) {
 	return dashboards, nil
 }
 
-type checkModifiedDashboards struct {
+type checkDashboardsModifiedRemotely struct {
 }
 
-func (l *checkModifiedDashboards) Name() string {
-	return "CheckModifiedDashboards"
+func (l *checkDashboardsModifiedRemotely) Name() string {
+	return "CheckDashboardsModifiedRemotely"
 }
 
-func (l *checkModifiedDashboards) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
+func (l *checkDashboardsModifiedRemotely) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 	// This mutator is relevant only if the bundle includes dashboards.
 	if len(b.Config.Resources.Dashboards) == 0 {
 		return nil
@@ -70,7 +60,7 @@ func (l *checkDashboardsModifiedRemotely) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		return nil
 	}
 
-	dashboards, err := collectDashboards(ctx, b)
+	dashboards, err := collectDashboardsFromState(ctx, b)
 	if err != nil {
 		return diag.FromErr(err)
 	}
@@ -122,6 +112,6 @@ func (l *checkDashboardsModifiedRemotely) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 	return diags
 }
 
-func CheckModifiedDashboards() *checkModifiedDashboards {
-	return &checkModifiedDashboards{}
+func CheckDashboardsModifiedRemotely() *checkDashboardsModifiedRemotely {
+	return &checkDashboardsModifiedRemotely{}
 }


@@ -29,7 +29,9 @@ func mockDashboardBundle(t *testing.T) *bundle.Bundle {
 			Resources: config.Resources{
 				Dashboards: map[string]*resources.Dashboard{
 					"dash1": {
-						DisplayName: "My Special Dashboard",
+						CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+							DisplayName: "My Special Dashboard",
+						},
 					},
 				},
 			},
@@ -38,7 +40,7 @@ func mockDashboardBundle(t *testing.T) *bundle.Bundle {
 	return b
 }
 
-func TestCheckModifiedDashboards_NoDashboards(t *testing.T) {
+func TestCheckDashboardsModifiedRemotely_NoDashboards(t *testing.T) {
 	dir := t.TempDir()
 	b := &bundle.Bundle{
 		BundleRootPath: dir,
@@ -50,17 +52,17 @@ func TestCheckModifiedDashboards_NoDashboards(t *testing.T) {
 		},
 	}
 
-	diags := bundle.Apply(context.Background(), b, CheckModifiedDashboards())
+	diags := bundle.Apply(context.Background(), b, CheckDashboardsModifiedRemotely())
 	assert.Empty(t, diags)
 }
 
-func TestCheckModifiedDashboards_FirstDeployment(t *testing.T) {
+func TestCheckDashboardsModifiedRemotely_FirstDeployment(t *testing.T) {
 	b := mockDashboardBundle(t)
-	diags := bundle.Apply(context.Background(), b, CheckModifiedDashboards())
+	diags := bundle.Apply(context.Background(), b, CheckDashboardsModifiedRemotely())
 	assert.Empty(t, diags)
 }
 
-func TestCheckModifiedDashboards_ExistingStateNoChange(t *testing.T) {
+func TestCheckDashboardsModifiedRemotely_ExistingStateNoChange(t *testing.T) {
 	ctx := context.Background()
 	b := mockDashboardBundle(t)
@@ -79,11 +81,11 @@ func TestCheckModifiedDashboards_ExistingStateNoChange(t *testing.T) {
 	b.SetWorkpaceClient(m.WorkspaceClient)
 
 	// No changes, so no diags.
-	diags := bundle.Apply(ctx, b, CheckModifiedDashboards())
+	diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
 	assert.Empty(t, diags)
 }
 
-func TestCheckModifiedDashboards_ExistingStateChange(t *testing.T) {
+func TestCheckDashboardsModifiedRemotely_ExistingStateChange(t *testing.T) {
 	ctx := context.Background()
 	b := mockDashboardBundle(t)
@@ -102,14 +104,14 @@ func TestCheckModifiedDashboards_ExistingStateChange(t *testing.T) {
 	b.SetWorkpaceClient(m.WorkspaceClient)
 
 	// The dashboard has changed, so expect an error.
-	diags := bundle.Apply(ctx, b, CheckModifiedDashboards())
+	diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
 	if assert.Len(t, diags, 1) {
 		assert.Equal(t, diag.Error, diags[0].Severity)
 		assert.Equal(t, `dashboard "dash1" has been modified remotely`, diags[0].Summary)
 	}
 }
 
-func TestCheckModifiedDashboards_ExistingStateFailureToGet(t *testing.T) {
+func TestCheckDashboardsModifiedRemotely_ExistingStateFailureToGet(t *testing.T) {
 	ctx := context.Background()
 	b := mockDashboardBundle(t)
@@ -125,7 +127,7 @@ func TestCheckModifiedDashboards_ExistingStateFailureToGet(t *testing.T) {
 	b.SetWorkpaceClient(m.WorkspaceClient)
 
 	// Unable to get the dashboard, so expect an error.
-	diags := bundle.Apply(ctx, b, CheckModifiedDashboards())
+	diags := bundle.Apply(ctx, b, CheckDashboardsModifiedRemotely())
 	if assert.Len(t, diags, 1) {
 		assert.Equal(t, diag.Error, diags[0].Severity)
 		assert.Equal(t, `failed to get dashboard "dash1"`, diags[0].Summary)


@ -2,9 +2,7 @@ package terraform
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"sort"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
@ -14,244 +12,6 @@ import (
tfjson "github.com/hashicorp/terraform-json" tfjson "github.com/hashicorp/terraform-json"
) )
func conv(from any, to any) {
buf, _ := json.Marshal(from)
json.Unmarshal(buf, &to)
}
func convPermissions(acl []resources.Permission) *schema.ResourcePermissions {
if len(acl) == 0 {
return nil
}
resource := schema.ResourcePermissions{}
for _, ac := range acl {
resource.AccessControl = append(resource.AccessControl, convPermission(ac))
}
return &resource
}
func convPermission(ac resources.Permission) schema.ResourcePermissionsAccessControl {
dst := schema.ResourcePermissionsAccessControl{
PermissionLevel: ac.Level,
}
if ac.UserName != "" {
dst.UserName = ac.UserName
}
if ac.GroupName != "" {
dst.GroupName = ac.GroupName
}
if ac.ServicePrincipalName != "" {
dst.ServicePrincipalName = ac.ServicePrincipalName
}
return dst
}
func convGrants(acl []resources.Grant) *schema.ResourceGrants {
if len(acl) == 0 {
return nil
}
resource := schema.ResourceGrants{}
for _, ac := range acl {
resource.Grant = append(resource.Grant, schema.ResourceGrantsGrant{
Privileges: ac.Privileges,
Principal: ac.Principal,
})
}
return &resource
}
-// BundleToTerraform converts resources in a bundle configuration
-// to the equivalent Terraform JSON representation.
-//
-// Note: This function is an older implementation of the conversion logic. It is
-// no longer used in any code paths. It is kept around to be used in tests.
-// New resources do not need to modify this function and can instead define
-// the conversion logic in the tfdyn package.
-func BundleToTerraform(config *config.Root) *schema.Root {
-	tfroot := schema.NewRoot()
-	tfroot.Provider = schema.NewProviders()
-	tfroot.Resource = schema.NewResources()
-	noResources := true
-
-	for k, src := range config.Resources.Jobs {
-		noResources = false
-		var dst schema.ResourceJob
-		conv(src, &dst)
-
-		if src.JobSettings != nil {
-			sort.Slice(src.JobSettings.Tasks, func(i, j int) bool {
-				return src.JobSettings.Tasks[i].TaskKey < src.JobSettings.Tasks[j].TaskKey
-			})
-
-			for _, v := range src.Tasks {
-				var t schema.ResourceJobTask
-				conv(v, &t)
-
-				for _, v_ := range v.Libraries {
-					var l schema.ResourceJobTaskLibrary
-					conv(v_, &l)
-					t.Library = append(t.Library, l)
-				}
-
-				// Convert for_each_task libraries
-				if v.ForEachTask != nil {
-					for _, v_ := range v.ForEachTask.Task.Libraries {
-						var l schema.ResourceJobTaskForEachTaskTaskLibrary
-						conv(v_, &l)
-						t.ForEachTask.Task.Library = append(t.ForEachTask.Task.Library, l)
-					}
-				}
-
-				dst.Task = append(dst.Task, t)
-			}
-
-			for _, v := range src.JobClusters {
-				var t schema.ResourceJobJobCluster
-				conv(v, &t)
-				dst.JobCluster = append(dst.JobCluster, t)
-			}
-
-			// Unblock downstream work. To be addressed more generally later.
-			if git := src.GitSource; git != nil {
-				dst.GitSource = &schema.ResourceJobGitSource{
-					Url:      git.GitUrl,
-					Branch:   git.GitBranch,
-					Commit:   git.GitCommit,
-					Provider: string(git.GitProvider),
-					Tag:      git.GitTag,
-				}
-			}
-
-			for _, v := range src.Parameters {
-				var t schema.ResourceJobParameter
-				conv(v, &t)
-				dst.Parameter = append(dst.Parameter, t)
-			}
-		}
-
-		tfroot.Resource.Job[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convPermissions(src.Permissions); rp != nil {
-			rp.JobId = fmt.Sprintf("${databricks_job.%s.id}", k)
-			tfroot.Resource.Permissions["job_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.Pipelines {
-		noResources = false
-		var dst schema.ResourcePipeline
-		conv(src, &dst)
-
-		if src.PipelineSpec != nil {
-			for _, v := range src.Libraries {
-				var l schema.ResourcePipelineLibrary
-				conv(v, &l)
-				dst.Library = append(dst.Library, l)
-			}
-
-			for _, v := range src.Clusters {
-				var l schema.ResourcePipelineCluster
-				conv(v, &l)
-				dst.Cluster = append(dst.Cluster, l)
-			}
-
-			for _, v := range src.Notifications {
-				var l schema.ResourcePipelineNotification
-				conv(v, &l)
-				dst.Notification = append(dst.Notification, l)
-			}
-		}
-
-		tfroot.Resource.Pipeline[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convPermissions(src.Permissions); rp != nil {
-			rp.PipelineId = fmt.Sprintf("${databricks_pipeline.%s.id}", k)
-			tfroot.Resource.Permissions["pipeline_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.Models {
-		noResources = false
-		var dst schema.ResourceMlflowModel
-		conv(src, &dst)
-		tfroot.Resource.MlflowModel[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convPermissions(src.Permissions); rp != nil {
-			rp.RegisteredModelId = fmt.Sprintf("${databricks_mlflow_model.%s.registered_model_id}", k)
-			tfroot.Resource.Permissions["mlflow_model_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.Experiments {
-		noResources = false
-		var dst schema.ResourceMlflowExperiment
-		conv(src, &dst)
-		tfroot.Resource.MlflowExperiment[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convPermissions(src.Permissions); rp != nil {
-			rp.ExperimentId = fmt.Sprintf("${databricks_mlflow_experiment.%s.id}", k)
-			tfroot.Resource.Permissions["mlflow_experiment_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.ModelServingEndpoints {
-		noResources = false
-		var dst schema.ResourceModelServing
-		conv(src, &dst)
-		tfroot.Resource.ModelServing[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convPermissions(src.Permissions); rp != nil {
-			rp.ServingEndpointId = fmt.Sprintf("${databricks_model_serving.%s.serving_endpoint_id}", k)
-			tfroot.Resource.Permissions["model_serving_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.RegisteredModels {
-		noResources = false
-		var dst schema.ResourceRegisteredModel
-		conv(src, &dst)
-		tfroot.Resource.RegisteredModel[k] = &dst
-
-		// Configure permissions for this resource.
-		if rp := convGrants(src.Grants); rp != nil {
-			rp.Function = fmt.Sprintf("${databricks_registered_model.%s.id}", k)
-			tfroot.Resource.Grants["registered_model_"+k] = rp
-		}
-	}
-
-	for k, src := range config.Resources.QualityMonitors {
-		noResources = false
-		var dst schema.ResourceQualityMonitor
-		conv(src, &dst)
-		tfroot.Resource.QualityMonitor[k] = &dst
-	}
-
-	for k, src := range config.Resources.Clusters {
-		noResources = false
-		var dst schema.ResourceCluster
-		conv(src, &dst)
-		tfroot.Resource.Cluster[k] = &dst
-	}
-
-	// We explicitly set "resource" to nil to omit it from a JSON encoding.
-	// This is required because the terraform CLI requires >= 1 resources defined
-	// if the "resource" property is used in a .tf.json file.
-	if noResources {
-		tfroot.Resource = nil
-	}
-	return tfroot
-}
 // BundleToTerraformWithDynValue converts resources in a bundle configuration
 // to the equivalent Terraform JSON representation.
 func BundleToTerraformWithDynValue(ctx context.Context, root dyn.Value) (*schema.Root, error) {


@@ -2,7 +2,6 @@ package terraform

 import (
 	"context"
-	"encoding/json"
 	"reflect"
 	"testing"
@@ -13,6 +12,7 @@ import (
 	"github.com/databricks/cli/libs/dyn/convert"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/databricks/databricks-sdk-go/service/ml"
 	"github.com/databricks/databricks-sdk-go/service/pipelines"
@@ -21,6 +21,27 @@ import (
 	"github.com/stretchr/testify/require"
 )

+func produceTerraformConfiguration(t *testing.T, config config.Root) *schema.Root {
+	vin, err := convert.FromTyped(config, dyn.NilValue)
+	require.NoError(t, err)
+	out, err := BundleToTerraformWithDynValue(context.Background(), vin)
+	require.NoError(t, err)
+	return out
+}
+
+func convertToResourceStruct[T any](t *testing.T, resource *T, data any) {
+	require.NotNil(t, resource)
+	require.NotNil(t, data)
+
+	// Convert data to a dyn.Value.
+	vin, err := convert.FromTyped(data, dyn.NilValue)
+	require.NoError(t, err)
+
+	// Convert the dyn.Value to a struct.
+	err = convert.ToTyped(resource, vin)
+	require.NoError(t, err)
+}
+
 func TestBundleToTerraformJob(t *testing.T) {
 	var src = resources.Job{
 		JobSettings: &jobs.JobSettings{
@@ -58,8 +79,9 @@ func TestBundleToTerraformJob(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

 	assert.Equal(t, "my job", resource.Name)
 	assert.Len(t, resource.JobCluster, 1)
@@ -68,8 +90,6 @@ func TestBundleToTerraformJob(t *testing.T) {
 	assert.Equal(t, "param1", resource.Parameter[0].Name)
 	assert.Equal(t, "param2", resource.Parameter[1].Name)
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformJobPermissions(t *testing.T) {
@@ -90,15 +110,14 @@ func TestBundleToTerraformJobPermissions(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["job_my_job"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["job_my_job"])

 	assert.NotEmpty(t, resource.JobId)
 	assert.Len(t, resource.AccessControl, 1)
 	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
 	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformJobTaskLibraries(t *testing.T) {
@@ -128,15 +147,14 @@ func TestBundleToTerraformJobTaskLibraries(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

 	assert.Equal(t, "my job", resource.Name)
 	require.Len(t, resource.Task, 1)
 	require.Len(t, resource.Task[0].Library, 1)
 	assert.Equal(t, "mlflow", resource.Task[0].Library[0].Pypi.Package)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformForEachTaskLibraries(t *testing.T) {
@@ -172,15 +190,14 @@ func TestBundleToTerraformForEachTaskLibraries(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

 	assert.Equal(t, "my job", resource.Name)
 	require.Len(t, resource.Task, 1)
 	require.Len(t, resource.Task[0].ForEachTask.Task.Library, 1)
 	assert.Equal(t, "mlflow", resource.Task[0].ForEachTask.Task.Library[0].Pypi.Package)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }
 func TestBundleToTerraformPipeline(t *testing.T) {
@@ -230,8 +247,9 @@ func TestBundleToTerraformPipeline(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Pipeline["my_pipeline"].(*schema.ResourcePipeline)
+	var resource schema.ResourcePipeline
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Pipeline["my_pipeline"])

 	assert.Equal(t, "my pipeline", resource.Name)
 	assert.Len(t, resource.Library, 2)
@@ -241,8 +259,6 @@ func TestBundleToTerraformPipeline(t *testing.T) {
 	assert.Equal(t, resource.Notification[1].Alerts, []string{"on-update-failure", "on-flow-failure"})
 	assert.Equal(t, resource.Notification[1].EmailRecipients, []string{"jane@doe.com", "john@doe.com"})
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformPipelinePermissions(t *testing.T) {
@@ -263,15 +279,14 @@ func TestBundleToTerraformPipelinePermissions(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["pipeline_my_pipeline"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["pipeline_my_pipeline"])

 	assert.NotEmpty(t, resource.PipelineId)
 	assert.Len(t, resource.AccessControl, 1)
 	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
 	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformModel(t *testing.T) {
@@ -300,8 +315,9 @@ func TestBundleToTerraformModel(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.MlflowModel["my_model"].(*schema.ResourceMlflowModel)
+	var resource schema.ResourceMlflowModel
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.MlflowModel["my_model"])

 	assert.Equal(t, "name", resource.Name)
 	assert.Equal(t, "description", resource.Description)
@@ -311,8 +327,6 @@ func TestBundleToTerraformModel(t *testing.T) {
 	assert.Equal(t, "k2", resource.Tags[1].Key)
 	assert.Equal(t, "v2", resource.Tags[1].Value)
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformModelPermissions(t *testing.T) {
@@ -336,15 +350,14 @@ func TestBundleToTerraformModelPermissions(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["mlflow_model_my_model"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["mlflow_model_my_model"])

 	assert.NotEmpty(t, resource.RegisteredModelId)
 	assert.Len(t, resource.AccessControl, 1)
 	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
 	assert.Equal(t, "CAN_READ", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformExperiment(t *testing.T) {
@@ -362,13 +375,12 @@ func TestBundleToTerraformExperiment(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.MlflowExperiment["my_experiment"].(*schema.ResourceMlflowExperiment)
+	var resource schema.ResourceMlflowExperiment
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.MlflowExperiment["my_experiment"])

 	assert.Equal(t, "name", resource.Name)
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformExperimentPermissions(t *testing.T) {
@@ -392,15 +404,14 @@ func TestBundleToTerraformExperimentPermissions(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["mlflow_experiment_my_experiment"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["mlflow_experiment_my_experiment"])

 	assert.NotEmpty(t, resource.ExperimentId)
 	assert.Len(t, resource.AccessControl, 1)
 	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
 	assert.Equal(t, "CAN_READ", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }
 func TestBundleToTerraformModelServing(t *testing.T) {
@@ -436,8 +447,9 @@ func TestBundleToTerraformModelServing(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.ModelServing["my_model_serving_endpoint"].(*schema.ResourceModelServing)
+	var resource schema.ResourceModelServing
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.ModelServing["my_model_serving_endpoint"])

 	assert.Equal(t, "name", resource.Name)
 	assert.Equal(t, "model_name", resource.Config.ServedModels[0].ModelName)
@@ -447,8 +459,6 @@ func TestBundleToTerraformModelServing(t *testing.T) {
 	assert.Equal(t, "model_name-1", resource.Config.TrafficConfig.Routes[0].ServedModelName)
 	assert.Equal(t, 100, resource.Config.TrafficConfig.Routes[0].TrafficPercentage)
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformModelServingPermissions(t *testing.T) {
@@ -490,15 +500,14 @@ func TestBundleToTerraformModelServingPermissions(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["model_serving_my_model_serving_endpoint"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["model_serving_my_model_serving_endpoint"])

 	assert.NotEmpty(t, resource.ServingEndpointId)
 	assert.Len(t, resource.AccessControl, 1)
 	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
 	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformRegisteredModel(t *testing.T) {
@@ -519,16 +528,15 @@ func TestBundleToTerraformRegisteredModel(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.RegisteredModel["my_registered_model"].(*schema.ResourceRegisteredModel)
+	var resource schema.ResourceRegisteredModel
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.RegisteredModel["my_registered_model"])

 	assert.Equal(t, "name", resource.Name)
 	assert.Equal(t, "catalog", resource.CatalogName)
 	assert.Equal(t, "schema", resource.SchemaName)
 	assert.Equal(t, "comment", resource.Comment)
 	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }

 func TestBundleToTerraformRegisteredModelGrants(t *testing.T) {
@@ -554,15 +562,14 @@ func TestBundleToTerraformRegisteredModelGrants(t *testing.T) {
 		},
 	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Grants["registered_model_my_registered_model"].(*schema.ResourceGrants)
+	var resource schema.ResourceGrants
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Grants["registered_model_my_registered_model"])

 	assert.NotEmpty(t, resource.Function)
 	assert.Len(t, resource.Grant, 1)
 	assert.Equal(t, "jane@doe.com", resource.Grant[0].Principal)
 	assert.Equal(t, "EXECUTE", resource.Grant[0].Privileges[0])
-
-	bundleToTerraformEquivalenceTest(t, &config)
 }
 func TestBundleToTerraformDeletedResources(t *testing.T) {
@@ -785,7 +792,9 @@ func TestTerraformToBundleEmptyRemoteResources(t *testing.T) {
 			},
 			Dashboards: map[string]*resources.Dashboard{
 				"test_dashboard": {
-					DisplayName: "test_dashboard",
+					CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+						DisplayName: "test_dashboard",
+					},
 				},
 			},
 		},
@@ -942,10 +951,14 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
 			},
 			Dashboards: map[string]*resources.Dashboard{
 				"test_dashboard": {
-					DisplayName: "test_dashboard",
+					CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+						DisplayName: "test_dashboard",
+					},
 				},
 				"test_dashboard_new": {
-					DisplayName: "test_dashboard_new",
+					CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+						DisplayName: "test_dashboard_new",
+					},
 				},
 			},
 		},
@@ -1204,25 +1217,3 @@ func AssertFullResourceCoverage(t *testing.T, config *config.Root) {
 		}
 	}
 }
-
-func assertEqualTerraformRoot(t *testing.T, a, b *schema.Root) {
-	ba, err := json.Marshal(a)
-	require.NoError(t, err)
-	bb, err := json.Marshal(b)
-	require.NoError(t, err)
-	assert.JSONEq(t, string(ba), string(bb))
-}
-
-func bundleToTerraformEquivalenceTest(t *testing.T, config *config.Root) {
-	t.Run("dyn equivalence", func(t *testing.T) {
-		tf1 := BundleToTerraform(config)
-
-		vin, err := convert.FromTyped(config, dyn.NilValue)
-		require.NoError(t, err)
-		tf2, err := BundleToTerraformWithDynValue(context.Background(), vin)
-		require.NoError(t, err)
-
-		// Compare roots
-		assertEqualTerraformRoot(t, tf1, tf2)
-	})
-}


@@ -60,6 +60,8 @@ func (m *interpolateMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.D
 			path = dyn.NewPath(dyn.Key("databricks_schema")).Append(path[2:]...)
 		case dyn.Key("clusters"):
 			path = dyn.NewPath(dyn.Key("databricks_cluster")).Append(path[2:]...)
+		case dyn.Key("dashboards"):
+			path = dyn.NewPath(dyn.Key("databricks_dashboard")).Append(path[2:]...)
 		default:
 			// Trigger "key not found" for unknown resource types.
 			return dyn.GetByPath(root, path)
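
A minimal standalone sketch of the mapping this new case introduces. The mutator itself rewrites dyn paths rather than strings, so the string replacement below is illustration only:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Bundle reference as it would be written in the bundle configuration.
	ref := "${resources.dashboards.other_dashboard.id}"
	// The mutator resolves it to the corresponding Terraform resource address.
	out := strings.Replace(ref, "resources.dashboards", "databricks_dashboard", 1)
	fmt.Println(out) // ${databricks_dashboard.other_dashboard.id}
}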


@@ -32,6 +32,7 @@ func TestInterpolate(t *testing.T) {
 					"other_registered_model": "${resources.registered_models.other_registered_model.id}",
 					"other_schema":           "${resources.schemas.other_schema.id}",
 					"other_cluster":          "${resources.clusters.other_cluster.id}",
+					"other_dashboard":        "${resources.dashboards.other_dashboard.id}",
 				},
 				Tasks: []jobs.Task{
 					{
@@ -69,6 +70,7 @@ func TestInterpolate(t *testing.T) {
 	assert.Equal(t, "${databricks_registered_model.other_registered_model.id}", j.Tags["other_registered_model"])
 	assert.Equal(t, "${databricks_schema.other_schema.id}", j.Tags["other_schema"])
 	assert.Equal(t, "${databricks_cluster.other_cluster.id}", j.Tags["other_cluster"])
+	assert.Equal(t, "${databricks_dashboard.other_dashboard.id}", j.Tags["other_dashboard"])

 	m := b.Config.Resources.Models["my_model"]
 	assert.Equal(t, "my_model", m.Model.Name)


@@ -2,6 +2,7 @@ package tfdyn

 import (
 	"context"
+	"encoding/json"
 	"fmt"

 	"github.com/databricks/cli/bundle/internal/tf/schema"
@@ -10,6 +11,34 @@ import (
 	"github.com/databricks/cli/libs/log"
 )

+const (
+	filePathFieldName            = "file_path"
+	serializedDashboardFieldName = "serialized_dashboard"
+)
+
+// Marshal "serialized_dashboard" as JSON if it is set in the input but not in the output.
+func marshalSerializedDashboard(vin dyn.Value, vout dyn.Value) (dyn.Value, error) {
+	// Skip if the "serialized_dashboard" field is already set.
+	if v := vout.Get(serializedDashboardFieldName); v.IsValid() {
+		return vout, nil
+	}
+
+	// Skip if the "serialized_dashboard" field on the input is not set.
+	v := vin.Get(serializedDashboardFieldName)
+	if !v.IsValid() {
+		return vout, nil
+	}
+
+	// Marshal the "serialized_dashboard" field as JSON.
+	data, err := json.Marshal(v.AsAny())
+	if err != nil {
+		return dyn.InvalidValue, fmt.Errorf("failed to marshal serialized_dashboard: %w", err)
+	}
+
+	// Set the "serialized_dashboard" field on the output.
+	return dyn.Set(vout, serializedDashboardFieldName, dyn.V(string(data)))
+}
+
 func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, error) {
 	var err error
@@ -22,8 +51,8 @@ func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, er
 	// Include "serialized_dashboard" field if "file_path" is set.
 	// Note: the Terraform resource supports "file_path" natively, but its
 	// change detection mechanism doesn't work as expected at the time of writing (Sep 30).
-	if path, ok := vout.Get("file_path").AsString(); ok {
-		vout, err = dyn.Set(vout, "serialized_dashboard", dyn.V(fmt.Sprintf("${file(\"%s\")}", path)))
+	if path, ok := vout.Get(filePathFieldName).AsString(); ok {
+		vout, err = dyn.Set(vout, serializedDashboardFieldName, dyn.V(fmt.Sprintf("${file(%q)}", path)))
 		if err != nil {
 			return dyn.InvalidValue, fmt.Errorf("failed to set serialized_dashboard: %w", err)
 		}
@@ -33,7 +62,7 @@ func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, er
 		case 0:
 			return v, nil
 		case 1:
-			if p[0] == dyn.Key("file_path") {
+			if p[0] == dyn.Key(filePathFieldName) {
 				return v, dyn.ErrDrop
 			}
 		}
@@ -46,6 +75,12 @@ func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, er
 		}
 	}

+	// Marshal "serialized_dashboard" as JSON if it is set in the input but not in the output.
+	vout, err = marshalSerializedDashboard(vin, vout)
+	if err != nil {
+		return dyn.InvalidValue, err
+	}
+
 	return vout, nil
 }
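
A minimal standalone sketch of what the new marshaling step produces when the dashboard is defined inline as a map; this mirrors the TestConvertDashboardSerializedDashboardAny case further down:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Inline value as it would appear under "serialized_dashboard".
	in := map[string]any{
		"pages": []map[string]any{
			{"displayName": "New Page", "layout": []map[string]any{}},
		},
	}
	// json.Marshal yields the compact JSON string that is stored in the
	// generated Terraform configuration.
	data, err := json.Marshal(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // {"pages":[{"displayName":"New Page","layout":[]}]}
}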


@@ -8,15 +8,19 @@ import (
 	"github.com/databricks/cli/bundle/internal/tf/schema"
 	"github.com/databricks/cli/libs/dyn"
 	"github.com/databricks/cli/libs/dyn/convert"
+	"github.com/databricks/databricks-sdk-go/service/dashboards"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

 func TestConvertDashboard(t *testing.T) {
 	var src = resources.Dashboard{
-		DisplayName: "my dashboard",
-		WarehouseID: "f00dcafe",
-		ParentPath:  "/some/path",
+		CreateDashboardRequest: &dashboards.CreateDashboardRequest{
+			DisplayName: "my dashboard",
+			WarehouseId: "f00dcafe",
+			ParentPath:  "/some/path",
+		},
 		EmbedCredentials: true,

 		Permissions: []resources.Permission{
@@ -78,3 +82,72 @@ func TestConvertDashboardFilePath(t *testing.T) {
 		"file_path": "some/path",
 	})
 }
+
+func TestConvertDashboardFilePathQuoted(t *testing.T) {
+	var src = resources.Dashboard{
+		FilePath: `C:\foo\bar\baz\dashboard.lvdash.json`,
+	}
+
+	vin, err := convert.FromTyped(src, dyn.NilValue)
+	require.NoError(t, err)
+
+	ctx := context.Background()
+	out := schema.NewResources()
+	err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
+	require.NoError(t, err)
+
+	// Assert that the "serialized_dashboard" is included.
+	assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
+		"serialized_dashboard": `${file("C:\\foo\\bar\\baz\\dashboard.lvdash.json")}`,
+	})
+
+	// Assert that the "file_path" doesn't carry over.
+	assert.NotSubset(t, out.Dashboard["my_dashboard"], map[string]any{
+		"file_path": `C:\foo\bar\baz\dashboard.lvdash.json`,
+	})
+}
+
+func TestConvertDashboardSerializedDashboardString(t *testing.T) {
+	var src = resources.Dashboard{
+		SerializedDashboard: `{ "json": true }`,
+	}
+
+	vin, err := convert.FromTyped(src, dyn.NilValue)
+	require.NoError(t, err)
+
+	ctx := context.Background()
+	out := schema.NewResources()
+	err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
+	require.NoError(t, err)
+
+	// Assert that the "serialized_dashboard" is included.
+	assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
+		"serialized_dashboard": `{ "json": true }`,
+	})
+}
+
+func TestConvertDashboardSerializedDashboardAny(t *testing.T) {
+	var src = resources.Dashboard{
+		SerializedDashboard: map[string]any{
+			"pages": []map[string]any{
+				{
+					"displayName": "New Page",
+					"layout":      []map[string]any{},
+				},
+			},
+		},
+	}
+
+	vin, err := convert.FromTyped(src, dyn.NilValue)
+	require.NoError(t, err)
+
+	ctx := context.Background()
+	out := schema.NewResources()
+	err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
+	require.NoError(t, err)
+
+	// Assert that the "serialized_dashboard" is included.
+	assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
+		"serialized_dashboard": `{"pages":[{"displayName":"New Page","layout":[]}]}`,
+	})
+}
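
The Windows-path test above hinges on fmt's %q verb; a standalone sketch of why the quoting change matters:

package main

import "fmt"

func main() {
	path := `C:\foo\bar\baz\dashboard.lvdash.json`
	// %q adds surrounding quotes and escapes the backslashes, so the
	// Terraform file() expression receives a valid string literal.
	fmt.Printf("${file(%q)}\n", path)
	// Output: ${file("C:\\foo\\bar\\baz\\dashboard.lvdash.json")}
}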


@@ -1,3 +1,3 @@
 package schema

-const ProviderVersion = "1.53.0"
+const ProviderVersion = "1.54.0"


@@ -0,0 +1,15 @@
+// Generated from Databricks Terraform provider schema. DO NOT EDIT.
+package schema
+
+type DataSourceNotificationDestinationsNotificationDestinations struct {
+	DestinationType string `json:"destination_type,omitempty"`
+	DisplayName     string `json:"display_name,omitempty"`
+	Id              string `json:"id,omitempty"`
+}
+
+type DataSourceNotificationDestinations struct {
+	DisplayNameContains      string                                                       `json:"display_name_contains,omitempty"`
+	Type                     string                                                       `json:"type,omitempty"`
+	NotificationDestinations []DataSourceNotificationDestinationsNotificationDestinations `json:"notification_destinations,omitempty"`
+}


@@ -0,0 +1,32 @@
+// Generated from Databricks Terraform provider schema. DO NOT EDIT.
+package schema
+
+type DataSourceRegisteredModelModelInfoAliases struct {
+	AliasName  string `json:"alias_name,omitempty"`
+	VersionNum int    `json:"version_num,omitempty"`
+}
+
+type DataSourceRegisteredModelModelInfo struct {
+	BrowseOnly      bool   `json:"browse_only,omitempty"`
+	CatalogName     string `json:"catalog_name,omitempty"`
+	Comment         string `json:"comment,omitempty"`
+	CreatedAt       int    `json:"created_at,omitempty"`
+	CreatedBy       string `json:"created_by,omitempty"`
+	FullName        string `json:"full_name,omitempty"`
+	MetastoreId     string `json:"metastore_id,omitempty"`
+	Name            string `json:"name,omitempty"`
+	Owner           string `json:"owner,omitempty"`
+	SchemaName      string `json:"schema_name,omitempty"`
+	StorageLocation string `json:"storage_location,omitempty"`
+	UpdatedAt       int    `json:"updated_at,omitempty"`
+	UpdatedBy       string `json:"updated_by,omitempty"`
+	Aliases         []DataSourceRegisteredModelModelInfoAliases `json:"aliases,omitempty"`
+}
+
+type DataSourceRegisteredModel struct {
+	FullName       string `json:"full_name"`
+	IncludeAliases bool   `json:"include_aliases,omitempty"`
+	IncludeBrowse  bool   `json:"include_browse,omitempty"`
+	ModelInfo      []DataSourceRegisteredModelModelInfo `json:"model_info,omitempty"`
+}


@@ -36,7 +36,9 @@ type DataSources struct {
 	NodeType                 map[string]any `json:"databricks_node_type,omitempty"`
 	Notebook                 map[string]any `json:"databricks_notebook,omitempty"`
 	NotebookPaths            map[string]any `json:"databricks_notebook_paths,omitempty"`
+	NotificationDestinations map[string]any `json:"databricks_notification_destinations,omitempty"`
 	Pipelines                map[string]any `json:"databricks_pipelines,omitempty"`
+	RegisteredModel          map[string]any `json:"databricks_registered_model,omitempty"`
 	Schema                   map[string]any `json:"databricks_schema,omitempty"`
 	Schemas                  map[string]any `json:"databricks_schemas,omitempty"`
 	ServicePrincipal         map[string]any `json:"databricks_service_principal,omitempty"`
@@ -92,7 +94,9 @@ func NewDataSources() *DataSources {
 		NodeType:                 make(map[string]any),
 		Notebook:                 make(map[string]any),
 		NotebookPaths:            make(map[string]any),
+		NotificationDestinations: make(map[string]any),
 		Pipelines:                make(map[string]any),
+		RegisteredModel:          make(map[string]any),
 		Schema:                   make(map[string]any),
 		Schemas:                  make(map[string]any),
 		ServicePrincipal:         make(map[string]any),


@@ -1448,6 +1448,7 @@ type ResourceJobWebhookNotifications struct {
 type ResourceJob struct {
 	AlwaysRunning   bool   `json:"always_running,omitempty"`
+	BudgetPolicyId  string `json:"budget_policy_id,omitempty"`
 	ControlRunState bool   `json:"control_run_state,omitempty"`
 	Description     string `json:"description,omitempty"`
 	EditMode        string `json:"edit_mode,omitempty"`


@@ -19,9 +19,10 @@ type ResourceOnlineTableSpec struct {
 }

 type ResourceOnlineTable struct {
 	Id                            string `json:"id,omitempty"`
 	Name                          string `json:"name"`
 	Status                        []any  `json:"status,omitempty"`
 	TableServingUrl               string `json:"table_serving_url,omitempty"`
-	Spec *ResourceOnlineTableSpec `json:"spec,omitempty"`
+	UnityCatalogProvisioningState string `json:"unity_catalog_provisioning_state,omitempty"`
+	Spec *ResourceOnlineTableSpec `json:"spec,omitempty"`
 }


@@ -142,10 +142,26 @@ type ResourcePipelineGatewayDefinition struct {
 	GatewayStorageSchema string `json:"gateway_storage_schema,omitempty"`
 }

+type ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration struct {
+	PrimaryKeys                    []string `json:"primary_keys,omitempty"`
+	SalesforceIncludeFormulaFields bool     `json:"salesforce_include_formula_fields,omitempty"`
+	ScdType                        string   `json:"scd_type,omitempty"`
+	SequenceBy                     []string `json:"sequence_by,omitempty"`
+}
+
+type ResourcePipelineIngestionDefinitionObjectsReport struct {
+	DestinationCatalog string `json:"destination_catalog,omitempty"`
+	DestinationSchema  string `json:"destination_schema,omitempty"`
+	DestinationTable   string `json:"destination_table,omitempty"`
+	SourceUrl          string `json:"source_url,omitempty"`
+	TableConfiguration *ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration `json:"table_configuration,omitempty"`
+}
+
 type ResourcePipelineIngestionDefinitionObjectsSchemaTableConfiguration struct {
 	PrimaryKeys                    []string `json:"primary_keys,omitempty"`
 	SalesforceIncludeFormulaFields bool     `json:"salesforce_include_formula_fields,omitempty"`
 	ScdType                        string   `json:"scd_type,omitempty"`
+	SequenceBy                     []string `json:"sequence_by,omitempty"`
 }

 type ResourcePipelineIngestionDefinitionObjectsSchema struct {
@@ -160,6 +176,7 @@ type ResourcePipelineIngestionDefinitionObjectsTableTableConfiguration struct {
 	PrimaryKeys                    []string `json:"primary_keys,omitempty"`
 	SalesforceIncludeFormulaFields bool     `json:"salesforce_include_formula_fields,omitempty"`
 	ScdType                        string   `json:"scd_type,omitempty"`
+	SequenceBy                     []string `json:"sequence_by,omitempty"`
 }

 type ResourcePipelineIngestionDefinitionObjectsTable struct {
@@ -173,6 +190,7 @@ type ResourcePipelineIngestionDefinitionObjectsTable struct {
 }

 type ResourcePipelineIngestionDefinitionObjects struct {
+	Report *ResourcePipelineIngestionDefinitionObjectsReport `json:"report,omitempty"`
 	Schema *ResourcePipelineIngestionDefinitionObjectsSchema `json:"schema,omitempty"`
 	Table  *ResourcePipelineIngestionDefinitionObjectsTable  `json:"table,omitempty"`
 }
@@ -181,6 +199,7 @@ type ResourcePipelineIngestionDefinitionTableConfiguration struct {
 	PrimaryKeys                    []string `json:"primary_keys,omitempty"`
 	SalesforceIncludeFormulaFields bool     `json:"salesforce_include_formula_fields,omitempty"`
 	ScdType                        string   `json:"scd_type,omitempty"`
+	SequenceBy                     []string `json:"sequence_by,omitempty"`
 }

 type ResourcePipelineIngestionDefinition struct {


@@ -21,7 +21,7 @@ type Root struct {
 const ProviderHost = "registry.terraform.io"
 const ProviderSource = "databricks/databricks"

-const ProviderVersion = "1.53.0"
+const ProviderVersion = "1.54.0"

 func NewRoot() *Root {
 	return &Root{


@@ -57,6 +57,12 @@ func IsLibraryLocal(dep string) bool {
 		}
 	}

+	// If the dependency starts with --, it's a pip flag option which is a valid
+	// entry for environment dependencies but not a local path
+	if containsPipFlag(dep) {
+		return false
+	}
+
 	// If the dependency is a requirements file, it's not a valid local path
 	if strings.HasPrefix(dep, "-r") {
 		return false
@@ -70,6 +76,11 @@ func IsLibraryLocal(dep string) bool {
 	return IsLocalPath(dep)
 }

+func containsPipFlag(input string) bool {
+	re := regexp.MustCompile(`--[a-zA-Z0-9-]+`)
+	return re.MatchString(input)
+}
+
 // ^[a-zA-Z0-9\-_]+: Matches the package name, allowing alphanumeric characters, dashes (-), and underscores (_).
 // \[.*\])?: Optionally matches any extras specified in square brackets, e.g., [security].
 // ((==|!=|<=|>=|~=|>|<)\d+(\.\d+){0,2}(\.\*)?): Optionally matches version specifiers, supporting various operators (==, !=, etc.) followed by a version number (e.g., 2.25.1).
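
A standalone sketch of the pip-flag pattern used by containsPipFlag above, showing how it separates flag entries from candidate local paths:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as containsPipFlag: any token of the form --flag-name.
	re := regexp.MustCompile(`--[a-zA-Z0-9-]+`)
	fmt.Println(re.MatchString("--extra-index-url https://example.com/simple")) // true: pip flag, not a local path
	fmt.Println(re.MatchString("./dist/my_pkg-0.1-py3-none-any.whl"))           // false: may still be a local path
}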


@@ -36,3 +36,12 @@ func IsWorkspaceLibrary(library *compute.Library) bool {

 	return IsWorkspacePath(path)
 }
+
+// IsVolumesPath returns true if the specified path starts with /Volumes/.
+func IsVolumesPath(path string) bool {
+	return strings.HasPrefix(path, "/Volumes/")
+}
+
+// IsWorkspaceSharedPath returns true if the specified path starts with /Workspace/Shared/.
+func IsWorkspaceSharedPath(path string) bool {
+	return strings.HasPrefix(path, "/Workspace/Shared/")
+}


@@ -31,3 +31,13 @@ func TestIsWorkspaceLibrary(t *testing.T) {

 	// Empty.
 	assert.False(t, IsWorkspaceLibrary(&compute.Library{}))
 }
+
+func TestIsVolumesPath(t *testing.T) {
+	// Absolute paths with particular prefixes.
+	assert.True(t, IsVolumesPath("/Volumes/path/to/package"))
+
+	// Relative paths.
+	assert.False(t, IsVolumesPath("myfile.txt"))
+	assert.False(t, IsVolumesPath("./myfile.txt"))
+	assert.False(t, IsVolumesPath("../myfile.txt"))
+}


@@ -0,0 +1,52 @@
+package permissions
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/databricks/cli/bundle"
+	"github.com/databricks/cli/bundle/libraries"
+	"github.com/databricks/cli/libs/diag"
+)
+
+type validateSharedRootPermissions struct {
+}
+
+func ValidateSharedRootPermissions() bundle.Mutator {
+	return &validateSharedRootPermissions{}
+}
+
+func (*validateSharedRootPermissions) Name() string {
+	return "ValidateSharedRootPermissions"
+}
+
+func (*validateSharedRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
+	if libraries.IsWorkspaceSharedPath(b.Config.Workspace.RootPath) {
+		return isUsersGroupPermissionSet(b)
+	}
+
+	return nil
+}
+
+// isUsersGroupPermissionSet checks that top-level permissions set for bundle contain group_name: users with CAN_MANAGE permission.
+func isUsersGroupPermissionSet(b *bundle.Bundle) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	allUsers := false
+	for _, p := range b.Config.Permissions {
+		if p.GroupName == "users" && p.Level == CAN_MANAGE {
+			allUsers = true
+			break
+		}
+	}
+
+	if !allUsers {
+		diags = diags.Append(diag.Diagnostic{
+			Severity: diag.Warning,
+			Summary:  fmt.Sprintf("the bundle root path %s is writable by all workspace users", b.Config.Workspace.RootPath),
+			Detail:   "The bundle is configured to use /Workspace/Shared, which will give read/write access to all users. If this is intentional, add CAN_MANAGE for 'group_name: users' permission to your bundle configuration. If the deployment should be restricted, move it to a restricted folder such as /Workspace/Users/<username or principal name>.",
+		})
+	}
+
+	return diags
+}


@@ -0,0 +1,66 @@
+package permissions
+
+import (
+	"context"
+	"testing"
+
+	"github.com/databricks/cli/bundle"
+	"github.com/databricks/cli/bundle/config"
+	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/databricks/cli/libs/diag"
+	"github.com/databricks/databricks-sdk-go/experimental/mocks"
+	"github.com/databricks/databricks-sdk-go/service/jobs"
+	"github.com/stretchr/testify/require"
+)
+
+func TestValidateSharedRootPermissionsForShared(t *testing.T) {
+	b := &bundle.Bundle{
+		Config: config.Root{
+			Workspace: config.Workspace{
+				RootPath: "/Workspace/Shared/foo/bar",
+			},
+			Permissions: []resources.Permission{
+				{Level: CAN_MANAGE, GroupName: "users"},
+			},
+			Resources: config.Resources{
+				Jobs: map[string]*resources.Job{
+					"job_1": {JobSettings: &jobs.JobSettings{Name: "job_1"}},
+					"job_2": {JobSettings: &jobs.JobSettings{Name: "job_2"}},
+				},
+			},
+		},
+	}
+
+	m := mocks.NewMockWorkspaceClient(t)
+	b.SetWorkpaceClient(m.WorkspaceClient)
+
+	diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions()))
+	require.Empty(t, diags)
+}
+
+func TestValidateSharedRootPermissionsForSharedError(t *testing.T) {
+	b := &bundle.Bundle{
+		Config: config.Root{
+			Workspace: config.Workspace{
+				RootPath: "/Workspace/Shared/foo/bar",
+			},
+			Permissions: []resources.Permission{
+				{Level: CAN_MANAGE, UserName: "foo@bar.com"},
+			},
+			Resources: config.Resources{
+				Jobs: map[string]*resources.Job{
+					"job_1": {JobSettings: &jobs.JobSettings{Name: "job_1"}},
+					"job_2": {JobSettings: &jobs.JobSettings{Name: "job_2"}},
+				},
+			},
+		},
+	}
+
+	m := mocks.NewMockWorkspaceClient(t)
+	b.SetWorkpaceClient(m.WorkspaceClient)
+
+	diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions()))
+	require.Len(t, diags, 1)
+	require.Equal(t, "the bundle root path /Workspace/Shared/foo/bar is writable by all workspace users", diags[0].Summary)
+	require.Equal(t, diag.Warning, diags[0].Severity)
+}


@@ -0,0 +1,89 @@
+package permissions
+
+import (
+	"fmt"
+	"strings"
+
+	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/databricks/cli/libs/diag"
+	"github.com/databricks/databricks-sdk-go/service/workspace"
+)
+
+type WorkspacePathPermissions struct {
+	Path        string
+	Permissions []resources.Permission
+}
+
+func ObjectAclToResourcePermissions(path string, acl []workspace.WorkspaceObjectAccessControlResponse) *WorkspacePathPermissions {
+	permissions := make([]resources.Permission, 0)
+	for _, a := range acl {
+		// Skip the admin group because it's added to all resources by default.
+		if a.GroupName == "admins" {
+			continue
+		}
+
+		for _, pl := range a.AllPermissions {
+			permissions = append(permissions, resources.Permission{
+				Level:                convertWorkspaceObjectPermissionLevel(pl.PermissionLevel),
+				GroupName:            a.GroupName,
+				UserName:             a.UserName,
+				ServicePrincipalName: a.ServicePrincipalName,
+			})
+		}
+	}
+
+	return &WorkspacePathPermissions{Permissions: permissions, Path: path}
+}
+
+func (p WorkspacePathPermissions) Compare(perms []resources.Permission) diag.Diagnostics {
+	var diags diag.Diagnostics
+
+	// Check the permissions in the workspace and see if they are all set in the bundle.
+	ok, missing := containsAll(p.Permissions, perms)
+	if !ok {
+		diags = diags.Append(diag.Diagnostic{
+			Severity: diag.Warning,
+			Summary:  "untracked permissions apply to target workspace path",
+			Detail:   fmt.Sprintf("The following permissions apply to the workspace folder at %q but are not configured in the bundle:\n%s", p.Path, toString(missing)),
+		})
+	}
+
+	return diags
+}
+// containsAll checks that every permission in permA is also present in permB,
+// and returns the permissions from permA that are missing.
+func containsAll(permA []resources.Permission, permB []resources.Permission) (bool, []resources.Permission) {
+	missing := make([]resources.Permission, 0)
+	for _, a := range permA {
+		found := false
+		for _, b := range permB {
+			if a == b {
+				found = true
+				break
+			}
+		}
+		if !found {
+			missing = append(missing, a)
+		}
+	}
+	return len(missing) == 0, missing
+}
+
+// convertWorkspaceObjectPermissionLevel converts matching object permission levels to bundle ones.
+// If there is no matching permission level, it returns permission level as is, for example, CAN_EDIT.
+func convertWorkspaceObjectPermissionLevel(level workspace.WorkspaceObjectPermissionLevel) string {
+	switch level {
+	case workspace.WorkspaceObjectPermissionLevelCanRead:
+		return CAN_VIEW
+	default:
+		return string(level)
+	}
+}
+
+func toString(p []resources.Permission) string {
+	var sb strings.Builder
+	for _, perm := range p {
+		sb.WriteString(fmt.Sprintf("- %s\n", perm.String()))
+	}
+	return sb.String()
+}
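
Note the direction of the check: containsAll reports whether every workspace permission (permA) also appears in the bundle (permB), not the other way around. A standalone sketch with a local copy of the helper, using a hypothetical permission type for illustration:

package main

import "fmt"

type permission struct {
	Level, UserName, GroupName string
}

// containsAll mirrors the helper above: it reports whether every element of
// permA is present in permB, returning the ones that are missing.
func containsAll(permA, permB []permission) (bool, []permission) {
	var missing []permission
	for _, a := range permA {
		found := false
		for _, b := range permB {
			if a == b {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, a)
		}
	}
	return len(missing) == 0, missing
}

func main() {
	workspace := []permission{
		{Level: "CAN_MANAGE", UserName: "foo@bar.com"},
		{Level: "CAN_MANAGE", GroupName: "foo"},
	}
	bundlePerms := []permission{
		{Level: "CAN_MANAGE", UserName: "foo@bar.com"},
	}
	ok, missing := containsAll(workspace, bundlePerms)
	fmt.Println(ok, missing) // false [{CAN_MANAGE  foo}] -> triggers the "untracked permissions" warning
}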


@@ -0,0 +1,121 @@
+package permissions
+
+import (
+	"testing"
+
+	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/databricks/cli/libs/diag"
+	"github.com/databricks/databricks-sdk-go/service/workspace"
+	"github.com/stretchr/testify/require"
+)
+
+func TestWorkspacePathPermissionsCompare(t *testing.T) {
+	testCases := []struct {
+		perms    []resources.Permission
+		acl      []workspace.WorkspaceObjectAccessControlResponse
+		expected diag.Diagnostics
+	}{
+		{
+			perms: []resources.Permission{
+				{Level: CAN_MANAGE, UserName: "foo@bar.com"},
+			},
+			acl: []workspace.WorkspaceObjectAccessControlResponse{
+				{
+					UserName: "foo@bar.com",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+			},
+			expected: nil,
+		},
+		{
+			perms: []resources.Permission{
+				{Level: CAN_MANAGE, UserName: "foo@bar.com"},
+			},
+			acl: []workspace.WorkspaceObjectAccessControlResponse{
+				{
+					UserName: "foo@bar.com",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+				{
+					GroupName: "admins",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+			},
+			expected: nil,
+		},
+		{
+			perms: []resources.Permission{
+				{Level: CAN_VIEW, UserName: "foo@bar.com"},
+				{Level: CAN_MANAGE, ServicePrincipalName: "sp.com"},
+			},
+			acl: []workspace.WorkspaceObjectAccessControlResponse{
+				{
+					UserName: "foo@bar.com",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_READ"},
+					},
+				},
+			},
+			expected: nil,
+		},
+		{
+			perms: []resources.Permission{
+				{Level: CAN_MANAGE, UserName: "foo@bar.com"},
+			},
+			acl: []workspace.WorkspaceObjectAccessControlResponse{
+				{
+					UserName: "foo@bar.com",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+				{
+					GroupName: "foo",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+			},
+			expected: diag.Diagnostics{
+				{
+					Severity: diag.Warning,
+					Summary:  "untracked permissions apply to target workspace path",
+					Detail:   "The following permissions apply to the workspace folder at \"path\" but are not configured in the bundle:\n- level: CAN_MANAGE, group_name: foo\n",
+				},
+			},
+		},
+		{
+			perms: []resources.Permission{
+				{Level: CAN_MANAGE, UserName: "foo@bar.com"},
+			},
+			acl: []workspace.WorkspaceObjectAccessControlResponse{
+				{
+					UserName: "foo2@bar.com",
+					AllPermissions: []workspace.WorkspaceObjectPermission{
+						{PermissionLevel: "CAN_MANAGE"},
+					},
+				},
+			},
+			expected: diag.Diagnostics{
+				{
+					Severity: diag.Warning,
+					Summary:  "untracked permissions apply to target workspace path",
+					Detail:   "The following permissions apply to the workspace folder at \"path\" but are not configured in the bundle:\n- level: CAN_MANAGE, user_name: foo2@bar.com\n",
+				},
+			},
+		},
+	}
+
+	for _, tc := range testCases {
+		wp := ObjectAclToResourcePermissions("path", tc.acl)
+		diags := wp.Compare(tc.perms)
+		require.Equal(t, tc.expected, diags)
+	}
+}


@@ -16,6 +16,10 @@ func ApplyWorkspaceRootPermissions() bundle.Mutator {
 	return &workspaceRootPermissions{}
 }

+func (*workspaceRootPermissions) Name() string {
+	return "ApplyWorkspaceRootPermissions"
+}
+
 // Apply implements bundle.Mutator.
 func (*workspaceRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 	err := giveAccessForWorkspaceRoot(ctx, b)
@@ -26,15 +30,11 @@ func (*workspaceRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) di
 	return nil
 }

-func (*workspaceRootPermissions) Name() string {
-	return "ApplyWorkspaceRootPermissions"
-}
-
 func giveAccessForWorkspaceRoot(ctx context.Context, b *bundle.Bundle) error {
 	permissions := make([]workspace.WorkspaceObjectAccessControlRequest, 0)

 	for _, p := range b.Config.Permissions {
-		level, err := getWorkspaceObjectPermissionLevel(p.Level)
+		level, err := GetWorkspaceObjectPermissionLevel(p.Level)
 		if err != nil {
 			return err
 		}
@@ -65,7 +65,7 @@ func giveAccessForWorkspaceRoot(ctx context.Context, b *bundle.Bundle) error {
 	return err
 }

-func getWorkspaceObjectPermissionLevel(bundlePermission string) (workspace.WorkspaceObjectPermissionLevel, error) {
+func GetWorkspaceObjectPermissionLevel(bundlePermission string) (workspace.WorkspaceObjectPermissionLevel, error) {
 	switch bundlePermission {
 	case CAN_MANAGE:
 		return workspace.WorkspaceObjectPermissionLevelCanManage, nil


@@ -69,6 +69,6 @@ func TestApplyWorkspaceRootPermissions(t *testing.T) {
 		WorkspaceObjectType: "directories",
 	}).Return(nil, nil)

-	diags := bundle.Apply(context.Background(), b, ApplyWorkspaceRootPermissions())
-	require.NoError(t, diags.Error())
+	diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions(), ApplyWorkspaceRootPermissions()))
+	require.Empty(t, diags)
 }


@@ -152,7 +152,7 @@ func Deploy(outputHandler sync.OutputHandler) bundle.Mutator {
 		bundle.Defer(
 			bundle.Seq(
 				terraform.StatePull(),
-				terraform.CheckModifiedDashboards(),
+				terraform.CheckDashboardsModifiedRemotely(),
 				deploy.StatePull(),
 				mutator.ValidateGitDetails(),
 				artifacts.CleanUp(),


@@ -77,8 +77,11 @@ func Initialize() bundle.Mutator {
 			mutator.TranslatePaths(),
 			trampoline.WrapperWarning(),
+
+			permissions.ValidateSharedRootPermissions(),
 			permissions.ApplyBundlePermissions(),
 			permissions.FilterCurrentUser(),
+
 			metadata.AnnotateJobs(),
 			metadata.AnnotatePipelines(),
 			terraform.Initialize(),


@@ -0,0 +1,17 @@
+package resources
+
+import "github.com/databricks/cli/bundle"
+
+// Completions returns the same as [References] except
+// that every key maps directly to a single reference.
+func Completions(b *bundle.Bundle, filters ...Filter) map[string]Reference {
+	out := make(map[string]Reference)
+	keyOnlyRefs, _ := References(b, filters...)
+	for k, refs := range keyOnlyRefs {
+		if len(refs) != 1 {
+			continue
+		}
+		out[k] = refs[0]
+	}
+	return out
+}


@@ -0,0 +1,58 @@
+package resources
+
+import (
+	"testing"
+
+	"github.com/databricks/cli/bundle"
+	"github.com/databricks/cli/bundle/config"
+	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/stretchr/testify/assert"
+)
+
+func TestCompletions_SkipDuplicates(t *testing.T) {
+	b := &bundle.Bundle{
+		Config: config.Root{
+			Resources: config.Resources{
+				Jobs: map[string]*resources.Job{
+					"foo": {},
+					"bar": {},
+				},
+				Pipelines: map[string]*resources.Pipeline{
+					"foo": {},
+				},
+			},
+		},
+	}
+
+	// Test that this skips duplicates and only includes unambiguous completions.
+	out := Completions(b)
+	if assert.Len(t, out, 1) {
+		assert.Contains(t, out, "bar")
+	}
+}
+
+func TestCompletions_Filter(t *testing.T) {
+	b := &bundle.Bundle{
+		Config: config.Root{
+			Resources: config.Resources{
+				Jobs: map[string]*resources.Job{
+					"foo": {},
+				},
+				Pipelines: map[string]*resources.Pipeline{
+					"bar": {},
+				},
+			},
+		},
+	}
+
+	includeJobs := func(ref Reference) bool {
+		_, ok := ref.Resource.(*resources.Job)
+		return ok
+	}
+
+	// Test that this does not include the pipeline.
+	out := Completions(b, includeJobs)
+	if assert.Len(t, out, 1) {
+		assert.Contains(t, out, "foo")
+	}
+}


@@ -0,0 +1,98 @@
+package resources
+
+import (
+	"fmt"
+
+	"github.com/databricks/cli/bundle"
+	"github.com/databricks/cli/bundle/config"
+)
+
+// Reference is a reference to a resource.
+// It includes the resource type description, and a reference to the resource itself.
+type Reference struct {
+	// Key is the unique key of the resource, e.g. "my_job".
+	Key string
+
+	// KeyWithType is the unique key of the resource, including the resource type, e.g. "jobs.my_job".
+	KeyWithType string
+
+	// Description is the resource type description.
+	Description config.ResourceDescription
+
+	// Resource is the resource itself.
+	Resource config.ConfigResource
+}
+
+// Map is the core type for resource lookup and completion.
+type Map map[string][]Reference
+
+// Filter defines the function signature for filtering resources.
+type Filter func(Reference) bool
+
+// includeReference checks if the specified reference passes all filters.
+// If the list of filters is empty, the reference is always included.
+func includeReference(filters []Filter, ref Reference) bool {
+	for _, filter := range filters {
+		if !filter(ref) {
+			return false
+		}
+	}
+	return true
+}
+
+// References returns maps of resource keys to a slice of [Reference].
+//
+// The first map is indexed by the resource key only.
+// The second map is indexed by the resource type name and its key.
+//
// While the return types allow for multiple resources to share the same key,
// this is confirmed not to happen in the [validate.UniqueResourceKeys] mutator.
func References(b *bundle.Bundle, filters ...Filter) (Map, Map) {
keyOnly := make(Map)
keyWithType := make(Map)
// Collect map of resource references indexed by their keys.
for _, group := range b.Config.Resources.AllResources() {
for k, v := range group.Resources {
ref := Reference{
Key: k,
KeyWithType: fmt.Sprintf("%s.%s", group.Description.PluralName, k),
Description: group.Description,
Resource: v,
}
// Skip resources that do not pass all filters.
if !includeReference(filters, ref) {
continue
}
keyOnly[ref.Key] = append(keyOnly[ref.Key], ref)
keyWithType[ref.KeyWithType] = append(keyWithType[ref.KeyWithType], ref)
}
}
return keyOnly, keyWithType
}
// Lookup returns the resource with the specified key.
// If the key maps to more than one resource, an error is returned.
// If the key does not map to any resource, an error is returned.
func Lookup(b *bundle.Bundle, key string, filters ...Filter) (Reference, error) {
keyOnlyRefs, keyWithTypeRefs := References(b, filters...)
refs, ok := keyOnlyRefs[key]
if !ok {
refs, ok = keyWithTypeRefs[key]
if !ok {
return Reference{}, fmt.Errorf("resource with key %q not found", key)
}
}
switch {
case len(refs) == 1:
return refs[0], nil
case len(refs) > 1:
return Reference{}, fmt.Errorf("multiple resources with key %q found", key)
default:
panic("unreachable")
}
}

View File

@ -0,0 +1,117 @@
package resources
import (
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestLookup_EmptyBundle(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{},
},
}
_, err := Lookup(b, "foo")
require.Error(t, err)
assert.ErrorContains(t, err, "resource with key \"foo\" not found")
}
func TestLookup_NotFound(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
"bar": {},
},
},
},
}
_, err := Lookup(b, "qux")
require.Error(t, err)
assert.ErrorContains(t, err, `resource with key "qux" not found`)
}
func TestLookup_MultipleFound(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
},
Pipelines: map[string]*resources.Pipeline{
"foo": {},
},
},
},
}
_, err := Lookup(b, "foo")
require.Error(t, err)
assert.ErrorContains(t, err, `multiple resources with key "foo" found`)
}
func TestLookup_Nominal(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Name: "Foo job",
},
},
},
},
},
}
// Lookup by key only.
out, err := Lookup(b, "foo")
if assert.NoError(t, err) {
assert.Equal(t, "Foo job", out.Resource.GetName())
}
// Lookup by type and key.
out, err = Lookup(b, "jobs.foo")
if assert.NoError(t, err) {
assert.Equal(t, "Foo job", out.Resource.GetName())
}
}
func TestLookup_NominalWithFilters(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
},
Pipelines: map[string]*resources.Pipeline{
"bar": {},
},
},
},
}
includeJobs := func(ref Reference) bool {
_, ok := ref.Resource.(*resources.Job)
return ok
}
// This should succeed because the filter includes jobs.
_, err := Lookup(b, "foo", includeJobs)
require.NoError(t, err)
// This should fail because the filter excludes pipelines.
_, err = Lookup(b, "bar", includeJobs)
require.Error(t, err)
assert.ErrorContains(t, err, `resource with key "bar" not found`)
}

View File

@ -317,6 +317,29 @@ func (r *jobRunner) Cancel(ctx context.Context) error {
return errGroup.Wait()
}
func (r *jobRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
// We don't need to cancel existing runs if the job is continuous and unpaused.
// The /jobs/run-now API will automatically cancel any existing runs before starting a new one.
//
// /jobs/run-now will not cancel existing runs if the job is continuous and paused.
// New job runs will be queued instead and will wait for existing runs to finish.
// In this case, we need to cancel the existing runs before starting a new one.
continuous := r.job.JobSettings.Continuous
if continuous != nil && continuous.PauseStatus == jobs.PauseStatusUnpaused {
return r.Run(ctx, opts)
}
s := cmdio.Spinner(ctx)
s <- "Cancelling all active job runs"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *jobRunner) ParseArgs(args []string, opts *Options) error {
return r.posArgsHandler().ParseArgs(args, opts)
}
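
The Restart branch above hinges on a single predicate over the job settings. A standalone sketch of that check, with a hypothetical needsCancel helper that is not part of the change:

```go
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/jobs"
)

// needsCancel mirrors the check in jobRunner.Restart: only a continuous,
// unpaused job can rely on /jobs/run-now to cancel its own active runs,
// so every other job must have its runs cancelled explicitly first.
func needsCancel(s *jobs.JobSettings) bool {
	c := s.Continuous
	return c == nil || c.PauseStatus != jobs.PauseStatusUnpaused
}

func main() {
	fmt.Println(needsCancel(&jobs.JobSettings{})) // true: not continuous
	fmt.Println(needsCancel(&jobs.JobSettings{
		Continuous: &jobs.Continuous{PauseStatus: jobs.PauseStatusPaused},
	})) // true: continuous but paused
	fmt.Println(needsCancel(&jobs.JobSettings{
		Continuous: &jobs.Continuous{PauseStatus: jobs.PauseStatusUnpaused},
	})) // false: run-now cancels for us
}
```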

View File

@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,6 +9,8 @@ import (
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/jobs" "github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/mock" "github.com/stretchr/testify/mock"
@ -126,3 +129,132 @@ func TestJobRunnerCancelWithNoActiveRuns(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestJobRunnerRestart(t *testing.T) {
for _, jobSettings := range []*jobs.JobSettings{
{},
{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusPaused,
},
},
} {
job := &resources.Job{
ID: "123",
JobSettings: jobSettings,
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", ""))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
jobApi.EXPECT().ListRunsAll(mock.Anything, jobs.ListRunsRequest{
ActiveOnly: true,
JobId: 123,
}).Return([]jobs.BaseRun{
{RunId: 1},
{RunId: 2},
}, nil)
// Mock the runner cancelling existing job runs.
mockWait := &jobs.WaitGetRunJobTerminatedOrSkipped[struct{}]{
Poll: func(time time.Duration, f func(j *jobs.Run)) (*jobs.Run, error) {
return nil, nil
},
}
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 1,
}).Return(mockWait, nil)
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 2,
}).Return(mockWait, nil)
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}
}
func TestJobRunnerRestartForContinuousUnpausedJobs(t *testing.T) {
job := &resources.Job{
ID: "123",
JobSettings: &jobs.JobSettings{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusUnpaused,
},
},
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
// The runner should not try and cancel existing job runs for unpaused continuous jobs.
jobApi.AssertNotCalled(t, "ListRunsAll")
jobApi.AssertNotCalled(t, "CancelRun")
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}

View File

@ -1,69 +0,0 @@
package run
import (
"fmt"
"github.com/databricks/cli/bundle"
"golang.org/x/exp/maps"
)
// RunnerLookup maps identifiers to a list of workloads that match that identifier.
// The list can have more than 1 entry if resources of different types use the
// same key. When this happens, the user should disambiguate between them.
type RunnerLookup map[string][]Runner
// ResourceKeys computes maps of runners indexed by key only and by type-qualified key.
func ResourceKeys(b *bundle.Bundle) (keyOnly RunnerLookup, keyWithType RunnerLookup) {
keyOnly = make(RunnerLookup)
keyWithType = make(RunnerLookup)
r := b.Config.Resources
for k, v := range r.Jobs {
kt := fmt.Sprintf("jobs.%s", k)
w := jobRunner{key: key(kt), bundle: b, job: v}
keyOnly[k] = append(keyOnly[k], &w)
keyWithType[kt] = append(keyWithType[kt], &w)
}
for k, v := range r.Pipelines {
kt := fmt.Sprintf("pipelines.%s", k)
w := pipelineRunner{key: key(kt), bundle: b, pipeline: v}
keyOnly[k] = append(keyOnly[k], &w)
keyWithType[kt] = append(keyWithType[kt], &w)
}
return
}
// ResourceCompletionMap returns a map of resource keys to their respective names.
func ResourceCompletionMap(b *bundle.Bundle) map[string]string {
out := make(map[string]string)
keyOnly, keyWithType := ResourceKeys(b)
// Keep track of resources we have seen by their fully qualified key.
seen := make(map[string]bool)
// First add resources that can be identified by key alone.
for k, v := range keyOnly {
// Invariant: len(v) >= 1. See [ResourceKeys].
if len(v) == 1 {
seen[v[0].Key()] = true
out[k] = v[0].Name()
}
}
// Then add resources that can only be identified by their type and key.
for k, v := range keyWithType {
// Invariant: len(v) == 1. See [ResourceKeys].
_, ok := seen[v[0].Key()]
if ok {
continue
}
out[k] = v[0].Name()
}
return out
}
// ResourceCompletions returns a list of keys that unambiguously reference resources in the bundle.
func ResourceCompletions(b *bundle.Bundle) []string {
return maps.Keys(ResourceCompletionMap(b))
}

View File

@ -1,25 +0,0 @@
package run
import (
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/stretchr/testify/assert"
)
func TestResourceCompletionsUnique(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {},
"bar": {},
},
},
},
}
assert.ElementsMatch(t, []string{"foo", "bar"}, ResourceCompletions(b))
}

View File

@ -183,6 +183,18 @@ func (r *pipelineRunner) Cancel(ctx context.Context) error {
return err
}
func (r *pipelineRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
s := cmdio.Spinner(ctx)
s <- "Cancelling the active pipeline update"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *pipelineRunner) ParseArgs(args []string, opts *Options) error {
if len(args) == 0 {
return nil

View File

@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,8 +9,12 @@ import (
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
sdk_config "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks" "github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/pipelines" "github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -47,3 +52,68 @@ func TestPipelineRunnerCancel(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestPipelineRunnerRestart(t *testing.T) {
pipeline := &resources.Pipeline{
ID: "123",
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"test_pipeline": pipeline,
},
},
},
}
runner := pipelineRunner{key: "test", bundle: b, pipeline: pipeline}
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdk_config.Config{
Host: "https://test.com",
}
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
mockWait := &pipelines.WaitGetPipelineIdle[struct{}]{
Poll: func(time.Duration, func(*pipelines.GetPipelineResponse)) (*pipelines.GetPipelineResponse, error) {
return nil, nil
},
}
pipelineApi := m.GetMockPipelinesAPI()
pipelineApi.EXPECT().Stop(mock.Anything, pipelines.StopRequest{
PipelineId: "123",
}).Return(mockWait, nil)
pipelineApi.EXPECT().GetByPipelineId(mock.Anything, "123").Return(&pipelines.GetPipelineResponse{}, nil)
// Mock runner starting a new update
pipelineApi.EXPECT().StartUpdate(mock.Anything, pipelines.StartUpdate{
PipelineId: "123",
}).Return(&pipelines.StartUpdateResponse{
UpdateId: "456",
}, nil)
// Mock runner polling for events
pipelineApi.EXPECT().ListPipelineEventsAll(mock.Anything, pipelines.ListPipelineEventsRequest{
Filter: `update_id = '456'`,
MaxResults: 100,
PipelineId: "123",
}).Return([]pipelines.PipelineEvent{}, nil)
// Mock runner polling for update status
pipelineApi.EXPECT().GetUpdateByPipelineIdAndUpdateId(mock.Anything, "123", "456").
Return(&pipelines.GetUpdateResponse{
Update: &pipelines.UpdateInfo{
State: pipelines.UpdateInfoStateCompleted,
},
}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}

View File

@ -3,9 +3,10 @@ package run
import (
"context"
"fmt"
-"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/resources"
refs "github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/bundle/run/output"
)
@ -27,6 +28,10 @@ type Runner interface {
// Run the underlying workflow.
Run(ctx context.Context, opts *Options) (output.RunOutput, error)
// Restart the underlying workflow by cancelling any existing runs before
// starting a new one.
Restart(ctx context.Context, opts *Options) (output.RunOutput, error)
// Cancel the underlying workflow.
Cancel(ctx context.Context) error
@ -34,34 +39,24 @@ type Runner interface {
argsHandler
}
-// Find locates a runner matching the specified argument.
-//
-// Its behavior is as follows:
-// 1. Try to find a resource with <key> identical to the argument.
-// 2. Try to find a resource with <type>.<key> identical to the argument.
-//
-// If an argument resolves to multiple resources, it returns an error.
-func Find(b *bundle.Bundle, arg string) (Runner, error) {
-keyOnly, keyWithType := ResourceKeys(b)
-if len(keyWithType) == 0 {
-return nil, fmt.Errorf("bundle defines no resources")
-}
-runners, ok := keyOnly[arg]
-if !ok {
-runners, ok = keyWithType[arg]
-if !ok {
-return nil, fmt.Errorf("no such resource: %s", arg)
-}
-}
-if len(runners) != 1 {
-var keys []string
-for _, runner := range runners {
-keys = append(keys, runner.Key())
-}
-return nil, fmt.Errorf("ambiguous: %s (can resolve to all of %s)", arg, strings.Join(keys, ", "))
-}
-return runners[0], nil
-}
+// IsRunnable returns a filter that only allows runnable resources.
+func IsRunnable(ref refs.Reference) bool {
+switch ref.Resource.(type) {
+case *resources.Job, *resources.Pipeline:
+return true
+default:
+return false
+}
+}
+// ToRunner converts a resource reference to a runnable resource.
+func ToRunner(b *bundle.Bundle, ref refs.Reference) (Runner, error) {
+switch resource := ref.Resource.(type) {
+case *resources.Job:
+return &jobRunner{key: key(ref.KeyWithType), bundle: b, job: resource}, nil
+case *resources.Pipeline:
+return &pipelineRunner{key: key(ref.KeyWithType), bundle: b, pipeline: resource}, nil
+default:
+return nil, fmt.Errorf("unsupported resource type: %T", resource)
+}
+}

View File

@ -3,82 +3,14 @@ package run
import (
"testing"
-"github.com/databricks/cli/bundle"
-"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
refs "github.com/databricks/cli/bundle/resources"
"github.com/stretchr/testify/assert"
)
-func TestFindNoResources(t *testing.T) {
-b := &bundle.Bundle{
-Config: config.Root{
-Resources: config.Resources{},
-},
-}
-_, err := Find(b, "foo")
-assert.ErrorContains(t, err, "bundle defines no resources")
-}
-func TestFindSingleArg(t *testing.T) {
-b := &bundle.Bundle{
-Config: config.Root{
-Resources: config.Resources{
-Jobs: map[string]*resources.Job{
-"foo": {},
-},
-},
-},
-}
-_, err := Find(b, "foo")
-assert.NoError(t, err)
-}
-func TestFindSingleArgNotFound(t *testing.T) {
-b := &bundle.Bundle{
-Config: config.Root{
-Resources: config.Resources{
-Jobs: map[string]*resources.Job{
-"foo": {},
-},
-},
-},
-}
-_, err := Find(b, "bar")
-assert.ErrorContains(t, err, "no such resource: bar")
-}
-func TestFindSingleArgAmbiguous(t *testing.T) {
-b := &bundle.Bundle{
-Config: config.Root{
-Resources: config.Resources{
-Jobs: map[string]*resources.Job{
-"key": {},
-},
-Pipelines: map[string]*resources.Pipeline{
-"key": {},
-},
-},
-},
-}
-_, err := Find(b, "key")
-assert.ErrorContains(t, err, "ambiguous: ")
-}
-func TestFindSingleArgWithType(t *testing.T) {
-b := &bundle.Bundle{
-Config: config.Root{
-Resources: config.Resources{
-Jobs: map[string]*resources.Job{
-"key": {},
-},
-},
-},
-}
-_, err := Find(b, "jobs.key")
-assert.NoError(t, err)
-}
+func TestRunner_IsRunnable(t *testing.T) {
+assert.True(t, IsRunnable(refs.Reference{Resource: &resources.Job{}}))
+assert.True(t, IsRunnable(refs.Reference{Resource: &resources.Pipeline{}}))
+assert.False(t, IsRunnable(refs.Reference{Resource: &resources.MlflowModel{}}))
+assert.False(t, IsRunnable(refs.Reference{Resource: &resources.MlflowExperiment{}}))
+}

View File

@ -180,6 +180,48 @@
}
]
},
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"properties": {
"display_name": {
"description": "The display name of the dashboard.",
"$ref": "#/$defs/string"
},
"embed_credentials": {
"$ref": "#/$defs/bool"
},
"file_path": {
"$ref": "#/$defs/string"
},
"parent_path": {
"description": "The workspace path of the folder containing the dashboard. Includes leading slash and no\ntrailing slash.\nThis field is excluded in List Dashboards responses.",
"$ref": "#/$defs/string"
},
"permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission"
},
"serialized_dashboard": {
"description": "The contents of the dashboard in serialized string form.\nThis field is excluded in List Dashboards responses.\nUse the [get dashboard API](https://docs.databricks.com/api/workspace/lakeview/get)\nto retrieve an example response, which includes the `serialized_dashboard` field.\nThis field provides the structure of the JSON string that represents the dashboard's\nlayout and components.",
"$ref": "#/$defs/interface"
},
"warehouse_id": {
"description": "The warehouse ID used to run the dashboard.",
"$ref": "#/$defs/string"
}
},
"additionalProperties": false,
"required": [
"display_name"
]
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Grant": { "resources.Grant": {
"anyOf": [ "anyOf": [
{ {
@ -209,6 +251,10 @@
{
"type": "object",
"properties": {
"budget_policy_id": {
"description": "The id of the user specified budget policy to use for this job.\nIf not specified, a default budget policy may be applied when creating or modifying the job.\nSee `effective_budget_policy_id` for the budget policy used by this workload.",
"$ref": "#/$defs/string"
},
"continuous": { "continuous": {
"description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.", "description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous"
@ -1050,6 +1096,9 @@
"clusters": { "clusters": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster"
}, },
"dashboards": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Dashboard"
},
"experiments": { "experiments": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment"
}, },
@ -3901,6 +3950,10 @@
{
"type": "object",
"properties": {
"report": {
"description": "Select tables from a specific source report.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.ReportSpec"
},
"schema": { "schema": {
"description": "Select tables from a specific source schema.", "description": "Select tables from a specific source schema.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec"
@ -4233,6 +4286,40 @@
}
]
},
"pipelines.ReportSpec": {
"anyOf": [
{
"type": "object",
"properties": {
"destination_catalog": {
"description": "Required. Destination catalog to store table.",
"$ref": "#/$defs/string"
},
"destination_schema": {
"description": "Required. Destination schema to store table.",
"$ref": "#/$defs/string"
},
"destination_table": {
"description": "Required. Destination table name. The pipeline fails if a table with that name already exists.",
"$ref": "#/$defs/string"
},
"source_url": {
"description": "Required. Report URL in the source system.",
"$ref": "#/$defs/string"
},
"table_configuration": {
"description": "Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfig"
}
},
"additionalProperties": false
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"pipelines.SchemaSpec": { "pipelines.SchemaSpec": {
"anyOf": [ "anyOf": [
{ {
@ -4281,7 +4368,7 @@
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"destination_table": { "destination_table": {
"description": "Optional. Destination table name. The pipeline fails If a table with that name already exists. If not set, the source table name is used.", "description": "Optional. Destination table name. The pipeline fails if a table with that name already exists. If not set, the source table name is used.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"source_catalog": { "source_catalog": {
@ -4329,6 +4416,10 @@
"SCD_TYPE_1", "SCD_TYPE_1",
"SCD_TYPE_2" "SCD_TYPE_2"
] ]
},
"sequence_by": {
"description": "The column names specifying the logical order of events in the source data. Delta Live Tables uses this sequencing to handle change events that arrive out of order.",
"$ref": "#/$defs/slice/string"
} }
},
"additionalProperties": false
@ -5246,6 +5337,20 @@
}
]
},
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"additionalProperties": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/resources.Dashboard"
}
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Job": { "resources.Job": {
"anyOf": [ "anyOf": [
{ {

View File

@ -27,5 +27,6 @@ func New() *cobra.Command {
cmd.AddCommand(newGenerateCommand())
cmd.AddCommand(newDebugCommand())
cmd.AddCommand(deployment.NewDeploymentCommand())
cmd.AddCommand(newOpenCommand())
return cmd
}

View File

@ -1,6 +1,7 @@
package generate
import (
"bytes"
"context"
"encoding/json"
"errors"
@ -16,6 +17,7 @@ import (
"github.com/databricks/cli/bundle/deploy/terraform" "github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases" "github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/render" "github.com/databricks/cli/bundle/render"
"github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/cmd/root" "github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/diag" "github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn" "github.com/databricks/cli/libs/dyn"
@ -25,6 +27,7 @@ import (
"github.com/databricks/databricks-sdk-go/service/dashboards" "github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace" "github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/exp/maps"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
) )
@ -129,11 +132,20 @@ func remarshalJSON(data []byte) ([]byte, error) {
if err != nil {
return nil, err
}
-out, err := json.MarshalIndent(tmp, "", " ")
// Remarshal the data to ensure its formatting is stable.
// The result will have alphabetically sorted keys and be indented.
// HTML escaping is disabled to retain characters such as &, <, and >.
var buf bytes.Buffer
enc := json.NewEncoder(&buf)
enc.SetIndent("", " ")
enc.SetEscapeHTML(false)
err = enc.Encode(tmp)
if err != nil {
return nil, err
}
-return out, nil
return buf.Bytes(), nil
}
func (d *dashboard) saveSerializedDashboard(_ context.Context, b *bundle.Bundle, dashboard *dashboards.Dashboard, filename string) error {
@ -378,6 +390,26 @@ func (d *dashboard) RunE(cmd *cobra.Command, args []string) error {
return nil
}
// filterDashboards returns a filter that only includes dashboards.
func filterDashboards(ref resources.Reference) bool {
return ref.Description.SingularName == "dashboard"
}
// dashboardResourceCompletion executes to autocomplete the argument to the resource flag.
func dashboardResourceCompletion(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
b, diags := root.MustConfigureBundle(cmd)
if err := diags.Error(); err != nil {
cobra.CompErrorln(err.Error())
return nil, cobra.ShellCompDirectiveError
}
if b == nil {
return nil, cobra.ShellCompDirectiveNoFileComp
}
return maps.Keys(resources.Completions(b, filterDashboards)), cobra.ShellCompDirectiveNoFileComp
}
func NewGenerateDashboardCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "dashboard",
@ -409,6 +441,9 @@ func NewGenerateDashboardCommand() *cobra.Command {
cmd.MarkFlagsMutuallyExclusive("watch", "existing-path")
cmd.MarkFlagsMutuallyExclusive("watch", "existing-id")
// Completion for the resource flag.
cmd.RegisterFlagCompletionFunc("resource", dashboardResourceCompletion)
cmd.RunE = d.RunE
return cmd
}
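
The switch to json.Encoder in remarshalJSON above is easiest to see in isolation. A small self-contained sketch with illustrative values, contrasting default marshaling (which escapes &, <, and >) with an encoder configured like the one above:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical dashboard fragment containing HTML-significant characters.
	v := map[string]string{"textbox_spec": "# Q&A <draft>"}

	// Default marshaling escapes &, <, and > to \u0026, \u003c, \u003e.
	out, _ := json.Marshal(v)
	fmt.Println(string(out))

	// The encoder used by remarshalJSON keeps them intact and indents output.
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetIndent("", "  ")
	enc.SetEscapeHTML(false)
	_ = enc.Encode(v)
	fmt.Print(buf.String())
}
```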

cmd/bundle/open.go
View File

@ -0,0 +1,144 @@
package bundle
import (
"context"
"errors"
"fmt"
"os"
"path/filepath"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
"github.com/pkg/browser"
)
func promptOpenArgument(ctx context.Context, b *bundle.Bundle) (string, error) {
// Compute map of "Human readable name of resource" -> "resource key".
inv := make(map[string]string)
for k, ref := range resources.Completions(b) {
title := fmt.Sprintf("%s: %s", ref.Description.SingularTitle, ref.Resource.GetName())
inv[title] = k
}
key, err := cmdio.Select(ctx, inv, "Resource to open")
if err != nil {
return "", err
}
return key, nil
}
func resolveOpenArgument(ctx context.Context, b *bundle.Bundle, args []string) (string, error) {
// If no arguments are specified, prompt the user to select the resource to open.
if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
return promptOpenArgument(ctx, b)
}
if len(args) < 1 {
return "", fmt.Errorf("expected a KEY of the resource to open")
}
return args[0], nil
}
func newOpenCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "open",
Short: "Open a resource in the browser",
Args: root.MaximumNArgs(1),
}
var forcePull bool
cmd.Flags().BoolVar(&forcePull, "force-pull", false, "Skip local cache and load the state from the remote workspace")
cmd.RunE = func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
b, diags := utils.ConfigureBundleWithVariables(cmd)
if err := diags.Error(); err != nil {
return diags.Error()
}
diags = bundle.Apply(ctx, b, phases.Initialize())
if err := diags.Error(); err != nil {
return err
}
arg, err := resolveOpenArgument(ctx, b, args)
if err != nil {
return err
}
cacheDir, err := terraform.Dir(ctx, b)
if err != nil {
return err
}
_, stateFileErr := os.Stat(filepath.Join(cacheDir, terraform.TerraformStateFileName))
_, configFileErr := os.Stat(filepath.Join(cacheDir, terraform.TerraformConfigFileName))
noCache := errors.Is(stateFileErr, os.ErrNotExist) || errors.Is(configFileErr, os.ErrNotExist)
if forcePull || noCache {
diags = bundle.Apply(ctx, b, bundle.Seq(
terraform.StatePull(),
terraform.Interpolate(),
terraform.Write(),
))
if err := diags.Error(); err != nil {
return err
}
}
diags = bundle.Apply(ctx, b, bundle.Seq(
terraform.Load(),
mutator.InitializeURLs(),
))
if err := diags.Error(); err != nil {
return err
}
// Locate resource to open.
ref, err := resources.Lookup(b, arg)
if err != nil {
return err
}
// Confirm that the resource has a URL.
url := ref.Resource.GetURL()
if url == "" {
return fmt.Errorf("resource does not have a URL associated with it (has it been deployed?)")
}
return browser.OpenURL(url)
}
cmd.ValidArgsFunction = func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
b, diags := root.MustConfigureBundle(cmd)
if err := diags.Error(); err != nil {
cobra.CompErrorln(err.Error())
return nil, cobra.ShellCompDirectiveError
}
// No completion if the bundle failed to load.
if b == nil {
return nil, cobra.ShellCompDirectiveNoFileComp
}
if len(args) == 0 {
completions := resources.Completions(b)
return maps.Keys(completions), cobra.ShellCompDirectiveNoFileComp
} else {
return nil, cobra.ShellCompDirectiveNoFileComp
}
}
return cmd
}

View File

@ -1,20 +1,69 @@
package bundle
import (
"context"
"encoding/json"
"fmt"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/resources"
"github.com/databricks/cli/bundle/run"
"github.com/databricks/cli/bundle/run/output"
"github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/spf13/cobra"
"golang.org/x/exp/maps"
)
func promptRunArgument(ctx context.Context, b *bundle.Bundle) (string, error) {
// Compute map of "Human readable name of resource" -> "resource key".
inv := make(map[string]string)
for k, ref := range resources.Completions(b, run.IsRunnable) {
title := fmt.Sprintf("%s: %s", ref.Description.SingularTitle, ref.Resource.GetName())
inv[title] = k
}
key, err := cmdio.Select(ctx, inv, "Resource to run")
if err != nil {
return "", err
}
return key, nil
}
func resolveRunArgument(ctx context.Context, b *bundle.Bundle, args []string) (string, error) {
// If no arguments are specified, prompt the user to select something to run.
if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
return promptRunArgument(ctx, b)
}
if len(args) < 1 {
return "", fmt.Errorf("expected a KEY of the resource to run")
}
return args[0], nil
}
func keyToRunner(b *bundle.Bundle, arg string) (run.Runner, error) {
// Locate the resource to run.
ref, err := resources.Lookup(b, arg, run.IsRunnable)
if err != nil {
return nil, err
}
// Convert the resource to a runnable resource.
runner, err := run.ToRunner(b, ref)
if err != nil {
return nil, err
}
return runner, nil
}
func newRunCommand() *cobra.Command { func newRunCommand() *cobra.Command {
cmd := &cobra.Command{ cmd := &cobra.Command{
Use: "run [flags] KEY", Use: "run [flags] KEY",
@ -60,22 +109,9 @@ task or a Python wheel task, the second example applies.
return err
}
-// If no arguments are specified, prompt the user to select something to run.
-if len(args) == 0 && cmdio.IsPromptSupported(ctx) {
-// Invert completions from KEY -> NAME, to NAME -> KEY.
-inv := make(map[string]string)
-for k, v := range run.ResourceCompletionMap(b) {
-inv[v] = k
-}
-id, err := cmdio.Select(ctx, inv, "Resource to run")
-if err != nil {
-return err
-}
-args = append(args, id)
-}
-if len(args) < 1 {
-return fmt.Errorf("expected a KEY of the resource to run")
-}
+arg, err := resolveRunArgument(ctx, b, args)
+if err != nil {
+return err
+}
diags = bundle.Apply(ctx, b, bundle.Seq(
@ -88,7 +124,7 @@ task or a Python wheel task, the second example applies.
return err
}
-runner, err := run.Find(b, args[0])
+runner, err := keyToRunner(b, arg)
if err != nil {
return err
}
@ -100,19 +136,16 @@ task or a Python wheel task, the second example applies.
}
runOptions.NoWait = noWait
var output output.RunOutput
-if restart {
-s := cmdio.Spinner(ctx)
-s <- "Cancelling all runs"
-err := runner.Cancel(ctx)
-close(s)
-if err != nil {
-return err
-}
-}
-output, err := runner.Run(ctx, &runOptions)
+if restart {
+output, err = runner.Restart(ctx, &runOptions)
+} else {
+output, err = runner.Run(ctx, &runOptions)
+}
if err != nil {
return err
}
if output != nil {
switch root.OutputType(cmd) {
case flags.OutputText:
@ -148,10 +181,11 @@ task or a Python wheel task, the second example applies.
}
if len(args) == 0 {
-return run.ResourceCompletions(b), cobra.ShellCompDirectiveNoFileComp
+completions := resources.Completions(b, run.IsRunnable)
+return maps.Keys(completions), cobra.ShellCompDirectiveNoFileComp
} else {
// If we know the resource to run, we can complete additional positional arguments.
-runner, err := run.Find(b, args[0])
+runner, err := keyToRunner(b, args[0])
if err != nil {
return nil, cobra.ShellCompDirectiveError
}

View File

@ -1,7 +1,9 @@
package bundle
import (
"context"
"fmt"
"io"
"time"
"github.com/databricks/cli/bundle"
@ -9,6 +11,7 @@ import (
"github.com/databricks/cli/bundle/phases" "github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/cmd/bundle/utils" "github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root" "github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/cli/libs/log" "github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/sync" "github.com/databricks/cli/libs/sync"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@ -18,6 +21,7 @@ type syncFlags struct {
interval time.Duration
full bool
watch bool
output flags.Output
}
func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle) (*sync.SyncOptions, error) {
@ -26,6 +30,21 @@ func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle)
return nil, fmt.Errorf("cannot get sync options: %w", err)
}
if f.output != "" {
var outputFunc func(context.Context, <-chan sync.Event, io.Writer)
switch f.output {
case flags.OutputText:
outputFunc = sync.TextOutput
case flags.OutputJSON:
outputFunc = sync.JsonOutput
}
if outputFunc != nil {
opts.OutputHandler = func(ctx context.Context, c <-chan sync.Event) {
outputFunc(ctx, c, cmd.OutOrStdout())
}
}
}
opts.Full = f.full
opts.PollInterval = f.interval
return opts, nil
@ -42,6 +61,7 @@ func newSyncCommand() *cobra.Command {
cmd.Flags().DurationVar(&f.interval, "interval", 1*time.Second, "file system polling interval (for --watch)")
cmd.Flags().BoolVar(&f.full, "full", false, "perform full synchronization (default is incremental)")
cmd.Flags().BoolVar(&f.watch, "watch", false, "watch local file system for changes")
cmd.Flags().Var(&f.output, "output", "type of the output format")
cmd.RunE = func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@ -65,6 +85,7 @@ func newSyncCommand() *cobra.Command {
if err != nil {
return err
}
defer s.Close()
log.Infof(ctx, "Remote file sync location: %v", opts.RemotePath)
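
The new --output flag selects one of the two built-in renderers, but anything matching the handler's shape can be assigned to OutputHandler. A hypothetical handler, assuming the event channel is closed when the sync loop ends:

```go
package main

import (
	"context"
	"fmt"

	"github.com/databricks/cli/libs/sync"
)

// countEvents is a hypothetical sync output handler: it drains the event
// channel and reports how many events were seen once it is closed.
func countEvents(ctx context.Context, events <-chan sync.Event) {
	n := 0
	for range events {
		n++
	}
	fmt.Printf("saw %d sync events\n", n)
}

func main() {
	opts := sync.SyncOptions{OutputHandler: countEvents}
	_ = opts // in real use, passed along with the rest of the sync options
}
```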

View File

@ -28,9 +28,6 @@ func New() *cobra.Command {
Annotations: map[string]string{
"package": "apps",
},
-// This service is being previewed; hide from help output.
-Hidden: true,
}
// Add methods

View File

@ -0,0 +1,220 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package disable_legacy_dbfs
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "disable-legacy-dbfs",
Short: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as well as creation of new mounts).`,
Long: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as
well as creation of new mounts). When the setting is off, all DBFS
functionality is enabled`,
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteDisableLegacyDbfsRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete the disable legacy DBFS setting.`
cmd.Long = `Delete the disable legacy DBFS setting.
Deletes the disable legacy DBFS setting for a workspace, reverting back to the
default.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*settings.GetDisableLegacyDbfsRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq settings.GetDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&getReq.Etag, "etag", getReq.Etag, `etag used for versioning.`)
cmd.Use = "get"
cmd.Short = `Get the disable legacy DBFS setting.`
cmd.Long = `Get the disable legacy DBFS setting.
Gets the disable legacy DBFS setting.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*settings.UpdateDisableLegacyDbfsRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq settings.UpdateDisableLegacyDbfsRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Use = "update"
cmd.Short = `Update the disable legacy DBFS setting.`
cmd.Long = `Update the disable legacy DBFS setting.
Updates the disable legacy DBFS setting for the workspace.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := w.Settings.DisableLegacyDbfs().Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service DisableLegacyDbfs

View File

@ -1557,6 +1557,7 @@ func newSubmit() *cobra.Command {
cmd.Flags().Var(&submitJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: array: access_control_list
cmd.Flags().StringVar(&submitReq.BudgetPolicyId, "budget-policy-id", submitReq.BudgetPolicyId, `The user specified id of the budget policy to use for this one-time run.`)
// TODO: complex arg: email_notifications
// TODO: array: environments
// TODO: complex arg: git_source

View File

@ -9,6 +9,7 @@ import (
compliance_security_profile "github.com/databricks/cli/cmd/workspace/compliance-security-profile"
default_namespace "github.com/databricks/cli/cmd/workspace/default-namespace"
disable_legacy_access "github.com/databricks/cli/cmd/workspace/disable-legacy-access"
disable_legacy_dbfs "github.com/databricks/cli/cmd/workspace/disable-legacy-dbfs"
enhanced_security_monitoring "github.com/databricks/cli/cmd/workspace/enhanced-security-monitoring"
restrict_workspace_admins "github.com/databricks/cli/cmd/workspace/restrict-workspace-admins"
)
@ -33,6 +34,7 @@ func New() *cobra.Command {
cmd.AddCommand(compliance_security_profile.New())
cmd.AddCommand(default_namespace.New())
cmd.AddCommand(disable_legacy_access.New())
cmd.AddCommand(disable_legacy_dbfs.New())
cmd.AddCommand(enhanced_security_monitoring.New())
cmd.AddCommand(restrict_workspace_admins.New())

go.mod
View File

@ -7,7 +7,7 @@ toolchain go1.22.7
require (
github.com/Masterminds/semver/v3 v3.3.0 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
-github.com/databricks/databricks-sdk-go v0.48.0 // Apache 2.0
+github.com/databricks/databricks-sdk-go v0.49.0 // Apache 2.0
github.com/fatih/color v1.17.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum
View File

@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.48.0 h1:46KtsnRo+FGhC3izUXbpL0PXBNomvsdignYDhJZlm9s=
-github.com/databricks/databricks-sdk-go v0.48.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
+github.com/databricks/databricks-sdk-go v0.49.0 h1:VBTeZZMLIuBSM4kxOCfUcW9z4FUQZY2QeNRD5qm9FUQ=
+github.com/databricks/databricks-sdk-go v0.49.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=

View File

@ -2,11 +2,14 @@ package acc
import (
"context"
"fmt"
"os"
"testing"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
)
@ -94,3 +97,30 @@ func (t *WorkspaceT) RunPython(code string) (string, error) {
require.True(t, ok, "unexpected type %T", results.Data)
return output, nil
}
func (t *WorkspaceT) TemporaryWorkspaceDir(name ...string) string {
ctx := context.Background()
me, err := t.W.CurrentUser.Me(ctx)
require.NoError(t, err)
basePath := fmt.Sprintf("/Users/%s/%s", me.UserName, RandomName(name...))
t.Logf("Creating %s", basePath)
err = t.W.Workspace.MkdirsByPath(ctx, basePath)
require.NoError(t, err)
// Remove test directory on test completion.
t.Cleanup(func() {
t.Logf("Removing %s", basePath)
err := t.W.Workspace.Delete(ctx, workspace.Delete{
Path: basePath,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary workspace directory %s: %#v", basePath, err)
})
return basePath
}
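
A hypothetical caller of the new helper, showing its lifecycle: the directory is created up front and removed recursively via t.Cleanup when the test completes:

```go
package acc_test

import (
	"testing"

	"github.com/databricks/cli/internal/acc"
)

func TestAccScratchDir(t *testing.T) {
	_, wt := acc.WorkspaceTest(t)

	// Created under /Users/<me>/ with a randomized suffix; deleted
	// automatically when the test completes.
	dir := wt.TemporaryWorkspaceDir("scratch")
	t.Logf("using workspace dir %s", dir)
}
```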

View File

@ -0,0 +1,13 @@
# Bugbash
The script in this directory can be used to conveniently exec into a shell
where a CLI build for a specific branch is made available.
## Usage
The script warns if you do NOT have at least Bash 5 installed; it still
works with earlier versions, but without command completion.
```shell
bash <(curl -fsSL https://raw.githubusercontent.com/databricks/cli/main/internal/bugbash/exec.sh) my-branch
```

internal/bugbash/exec.sh
View File

@ -0,0 +1,139 @@
#!/usr/bin/env bash
set -euo pipefail
# Set the GitHub repository for the Databricks CLI.
export GH_REPO="databricks/cli"
# Synthesize the directory name for the snapshot build.
function cli_snapshot_directory() {
dir="cli"
# Append OS
case "$(uname -s)" in
Linux)
dir="${dir}_linux"
;;
Darwin)
dir="${dir}_darwin"
;;
*)
echo "Unknown operating system: $os"
;;
esac
# Append architecture
case "$(uname -m)" in
x86_64)
dir="${dir}_amd64_v1"
;;
i386|i686)
dir="${dir}_386"
;;
arm64|aarch64)
dir="${dir}_arm64"
;;
armv7l|armv8l)
dir="${dir}_arm_6"
;;
*)
echo "Unknown architecture: $arch"
;;
esac
echo $dir
}
BRANCH=${1:-}
shift || true
# Default to the main branch if no branch is specified.
if [ -z "$BRANCH" ]; then
BRANCH=main
fi
# Check if the "gh" command is available.
if ! command -v gh &> /dev/null; then
echo "The GitHub CLI (gh) is required to download the snapshot build."
echo "Install and configure it with:"
echo ""
echo " brew install gh"
echo " gh auth login"
echo ""
exit 1
fi
echo "Looking for a snapshot build of the Databricks CLI on branch $BRANCH..."
# Find last successful build on $BRANCH.
last_successful_run_id=$(
gh run list -b "$BRANCH" -w release-snapshot --json 'databaseId,conclusion' |
jq 'limit(1; .[] | select(.conclusion == "success")) | .databaseId'
)
if [ -z "$last_successful_run_id" ]; then
echo "Unable to find last successful build of the release-snapshot workflow for branch $BRANCH."
exit 1
fi
# Determine artifact name with the right binaries for this runner.
case "$(uname -s)" in
Linux)
artifact="cli_linux_snapshot"
;;
Darwin)
artifact="cli_darwin_snapshot"
;;
esac
# Create a temporary directory to download the artifact.
dir=$(mktemp -d)
# Download the artifact.
echo "Downloading the snapshot build..."
gh run download "$last_successful_run_id" -n "$artifact" -D "$dir/.bin"
dir="$dir/.bin/$(cli_snapshot_directory)"
if [ ! -d "$dir" ]; then
echo "Directory does not exist: $dir"
exit 1
fi
# Make CLI available on $PATH.
chmod +x "$dir/databricks"
export PATH="$dir:$PATH"
# Set the prompt to indicate the bugbash environment and exec.
export PS1="(bugbash $BRANCH) \[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ "
# Display completion instructions.
echo ""
echo "=================================================================="
if [[ ${BASH_VERSINFO[0]} -lt 5 ]]; then
echo -en "\033[31m"
echo "You have Bash version < 5 installed... completion won't work."
echo -en "\033[0m"
echo ""
echo "Install it with:"
echo ""
echo " brew install bash bash-completion"
echo ""
echo "=================================================================="
fi
echo ""
echo "To load completions in your current shell session:"
echo ""
echo " source /opt/homebrew/etc/profile.d/bash_completion.sh"
echo " source <(databricks completion bash)"
echo ""
echo "=================================================================="
echo ""
# Exec into a new shell.
# Note: don't use zsh because on macOS it _always_ overwrites PS1.
exec /usr/bin/env bash

View File

@ -0,0 +1,12 @@
{
"properties": {
"unique_id": {
"type": "string",
"description": "Unique ID for job name"
},
"warehouse_id": {
"type": "string",
"description": "The SQL warehouse ID to use for the dashboard"
}
}
}

View File

@ -0,0 +1,34 @@
{
"pages": [
{
"displayName": "New Page",
"layout": [
{
"position": {
"height": 2,
"width": 6,
"x": 0,
"y": 0
},
"widget": {
"name": "82eb9107",
"textbox_spec": "# I'm a title"
}
},
{
"position": {
"height": 2,
"width": 6,
"x": 0,
"y": 2
},
"widget": {
"name": "ffa6de4f",
"textbox_spec": "Text"
}
}
],
"name": "fdd21a3c"
}
]
}


@ -0,0 +1,12 @@
bundle:
name: dashboards
workspace:
root_path: "~/.bundle/{{.unique_id}}"
resources:
dashboards:
file_reference:
display_name: test-dashboard-{{.unique_id}}
file_path: ./dashboard.lvdash.json
warehouse_id: {{.warehouse_id}}


@ -0,0 +1,63 @@
package bundle
import (
"fmt"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAccDashboards(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
warehouseID := acc.GetEnvOrSkipTest(t, "TEST_DEFAULT_WAREHOUSE_ID")
uniqueID := uuid.New().String()
root, err := initTestTemplate(t, ctx, "dashboards", map[string]any{
"unique_id": uniqueID,
"warehouse_id": warehouseID,
})
require.NoError(t, err)
t.Cleanup(func() {
err = destroyBundle(t, ctx, root)
require.NoError(t, err)
})
err = deployBundle(t, ctx, root)
require.NoError(t, err)
// Load bundle configuration by running the validate command.
b := unmarshalConfig(t, mustValidateBundle(t, ctx, root))
// Assert that the dashboard exists at the expected path and is, indeed, a dashboard.
oi, err := wt.W.Workspace.GetStatusByPath(ctx, fmt.Sprintf("%s/test-dashboard-%s.lvdash.json", b.Config.Workspace.ResourcePath, uniqueID))
require.NoError(t, err)
assert.EqualValues(t, workspace.ObjectTypeDashboard, oi.ObjectType)
// Load the dashboard by its ID and confirm its display name.
dashboard, err := wt.W.Lakeview.GetByDashboardId(ctx, oi.ResourceId)
require.NoError(t, err)
assert.Equal(t, fmt.Sprintf("test-dashboard-%s", uniqueID), dashboard.DisplayName)
// Make an out of band modification to the dashboard and confirm that it is detected.
_, err = wt.W.Lakeview.Update(ctx, dashboards.UpdateDashboardRequest{
DashboardId: oi.ResourceId,
SerializedDashboard: dashboard.SerializedDashboard,
})
require.NoError(t, err)
// Try to redeploy the bundle and confirm that the out of band modification is detected.
stdout, _, err := deployBundleWithArgs(t, ctx, root)
require.Error(t, err)
assert.Contains(t, stdout, `Error: dashboard "file_reference" has been modified remotely`+"\n")
// Redeploy the bundle with the --force flag and confirm that the out of band modification is ignored.
_, stderr, err := deployBundleWithArgs(t, ctx, root, "--force")
require.NoError(t, err)
assert.Contains(t, stderr, `Deployment complete!`+"\n")
}
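The redeploy assertions above hinge on the remote-modification check: state captured at deploy time is compared against what the Lakeview API currently reports. A minimal, hypothetical sketch of that comparison (`driftDetected` and its parameters are invented for illustration; the CLI's actual check lives in its deploy machinery):

```go
// driftDetected reports whether the dashboard changed remotely since the
// last deploy by comparing etags. The assumptions test later in this diff
// shows a workspace import updates only "etag" and "update_time", which is
// why an etag comparison suffices to detect out-of-band edits.
func driftDetected(deployedEtag, remoteEtag string) bool {
	return deployedEtag != "" && deployedEtag != remoteEtag
}
```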


@ -11,6 +11,7 @@ import (
 	"strings"
 	"testing"

+	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/cmd/root"
 	"github.com/databricks/cli/internal"
 	"github.com/databricks/cli/libs/cmdio"
@ -66,6 +67,19 @@ func validateBundle(t *testing.T, ctx context.Context, path string) ([]byte, error) {
 	return stdout.Bytes(), err
 }

+func mustValidateBundle(t *testing.T, ctx context.Context, path string) []byte {
+	data, err := validateBundle(t, ctx, path)
+	require.NoError(t, err)
+	return data
+}
+
+func unmarshalConfig(t *testing.T, data []byte) *bundle.Bundle {
+	bundle := &bundle.Bundle{}
+	err := json.Unmarshal(data, &bundle.Config)
+	require.NoError(t, err)
+	return bundle
+}
+
 func deployBundle(t *testing.T, ctx context.Context, path string) error {
 	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
 	c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--auto-approve")
@ -73,6 +87,14 @@ func deployBundle(t *testing.T, ctx context.Context, path string) error {
 	return err
 }

+func deployBundleWithArgs(t *testing.T, ctx context.Context, path string, args ...string) (string, string, error) {
+	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
+	args = append([]string{"bundle", "deploy"}, args...)
+	c := internal.NewCobraTestRunnerWithContext(t, ctx, args...)
+	stdout, stderr, err := c.Run()
+	return stdout.String(), stderr.String(), err
+}
+
 func deployBundleWithFlags(t *testing.T, ctx context.Context, path string, flags []string) error {
 	ctx = env.Set(ctx, "BUNDLE_ROOT", path)
 	args := []string{"bundle", "deploy", "--force-lock"}

@ -0,0 +1,110 @@
package internal
import (
"encoding/base64"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/dyn/merge"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Verify that importing a dashboard through the Workspace API retains the identity of the underlying resource,
// as well as properties exclusively accessible through the dashboards API.
func TestAccDashboardAssumptions_WorkspaceImport(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
t.Parallel()
dashboardName := "New Dashboard"
dashboardPayload := []byte(`{"pages":[{"name":"2506f97a","displayName":"New Page"}]}`)
warehouseId := acc.GetEnvOrSkipTest(t, "TEST_DEFAULT_WAREHOUSE_ID")
dir := wt.TemporaryWorkspaceDir("dashboard-assumptions-")
dashboard, err := wt.W.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
DisplayName: dashboardName,
ParentPath: dir,
SerializedDashboard: string(dashboardPayload),
WarehouseId: warehouseId,
})
require.NoError(t, err)
t.Logf("Dashboard ID (per Lakeview API): %s", dashboard.DashboardId)
// Overwrite the dashboard via the workspace API.
{
err := wt.W.Workspace.Import(ctx, workspace.Import{
Format: workspace.ImportFormatAuto,
Path: dashboard.Path,
Content: base64.StdEncoding.EncodeToString(dashboardPayload),
Overwrite: true,
})
require.NoError(t, err)
}
// Cross-check consistency with the workspace object.
{
obj, err := wt.W.Workspace.GetStatusByPath(ctx, dashboard.Path)
require.NoError(t, err)
// Confirm that the resource ID included in the response is equal to the dashboard ID.
require.Equal(t, dashboard.DashboardId, obj.ResourceId)
t.Logf("Dashboard ID (per workspace object status): %s", obj.ResourceId)
}
// Try to overwrite the dashboard via the Lakeview API (and expect failure).
{
_, err := wt.W.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
DisplayName: dashboardName,
ParentPath: dir,
SerializedDashboard: string(dashboardPayload),
})
require.ErrorIs(t, err, apierr.ErrResourceAlreadyExists)
}
// Retrieve the dashboard object and confirm that only select fields were updated by the import.
{
previousDashboard := dashboard
currentDashboard, err := wt.W.Lakeview.Get(ctx, dashboards.GetDashboardRequest{
DashboardId: dashboard.DashboardId,
})
require.NoError(t, err)
// Convert the dashboard object to a [dyn.Value] to make comparison easier.
previous, err := convert.FromTyped(previousDashboard, dyn.NilValue)
require.NoError(t, err)
current, err := convert.FromTyped(currentDashboard, dyn.NilValue)
require.NoError(t, err)
// Collect updated paths.
var updatedFieldPaths []string
_, err = merge.Override(previous, current, merge.OverrideVisitor{
VisitDelete: func(basePath dyn.Path, left dyn.Value) error {
assert.Fail(t, "unexpected delete operation")
return nil
},
VisitInsert: func(basePath dyn.Path, right dyn.Value) (dyn.Value, error) {
assert.Fail(t, "unexpected insert operation")
return right, nil
},
VisitUpdate: func(basePath dyn.Path, left dyn.Value, right dyn.Value) (dyn.Value, error) {
updatedFieldPaths = append(updatedFieldPaths, basePath.String())
return right, nil
},
})
require.NoError(t, err)
// Confirm that only the expected fields have been updated.
assert.ElementsMatch(t, []string{
"etag",
"update_time",
}, updatedFieldPaths)
}
}
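The visitor-based diff above is reusable beyond this test. Below is a small self-contained sketch against the same libs/dyn APIs shown in the test; the `widget` type, its fields, and `updatedPaths` are invented for illustration:

```go
package main

import (
	"fmt"

	"github.com/databricks/cli/libs/dyn"
	"github.com/databricks/cli/libs/dyn/convert"
	"github.com/databricks/cli/libs/dyn/merge"
)

// widget is a hypothetical type; any struct convertible by convert.FromTyped works.
type widget struct {
	Name string `json:"name"`
	Spec string `json:"spec"`
}

// updatedPaths converts two typed values to [dyn.Value] and records which
// paths the override visitor reports as updated, mirroring the test above.
func updatedPaths(prev, curr widget) ([]string, error) {
	p, err := convert.FromTyped(prev, dyn.NilValue)
	if err != nil {
		return nil, err
	}
	c, err := convert.FromTyped(curr, dyn.NilValue)
	if err != nil {
		return nil, err
	}
	var paths []string
	_, err = merge.Override(p, c, merge.OverrideVisitor{
		VisitDelete: func(basePath dyn.Path, left dyn.Value) error { return nil },
		VisitInsert: func(basePath dyn.Path, right dyn.Value) (dyn.Value, error) { return right, nil },
		VisitUpdate: func(basePath dyn.Path, left dyn.Value, right dyn.Value) (dyn.Value, error) {
			paths = append(paths, basePath.String())
			return right, nil
		},
	})
	return paths, err
}

func main() {
	paths, _ := updatedPaths(widget{Name: "a", Spec: "x"}, widget{Name: "a", Spec: "y"})
	fmt.Println(paths) // expected: [spec]
}
```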


@ -23,8 +23,21 @@ type Repository struct {
 	// directory where we process .gitignore files.
 	real bool

-	// root is the absolute path to the repository root.
-	root vfs.Path
+	// rootDir is the path to the root of the repository checkout.
+	// This can be either the main repository checkout or a worktree checkout.
+	// For more information about worktrees, see: https://git-scm.com/docs/git-worktree#_description.
+	rootDir vfs.Path
+
+	// gitDir is the equivalent of $GIT_DIR and points to the
+	// `.git` directory of a repository or a worktree directory.
+	// See https://git-scm.com/docs/git-worktree#_details for more information.
+	gitDir vfs.Path
+
+	// gitCommonDir is the equivalent of $GIT_COMMON_DIR and points to the
+	// `.git` directory of the main working tree (common between worktrees).
+	// This is equivalent to [gitDir] if this is the main working tree.
+	// See https://git-scm.com/docs/git-worktree#_details for more information.
+	gitCommonDir vfs.Path

 	// ignore contains a list of ignore patterns indexed by the
 	// path prefix relative to the repository root.
@ -44,12 +57,11 @@ type Repository struct {

 // Root returns the absolute path to the repository root.
 func (r *Repository) Root() string {
-	return r.root.Native()
+	return r.rootDir.Native()
 }

 func (r *Repository) CurrentBranch() (string, error) {
-	// load .git/HEAD
-	ref, err := LoadReferenceFile(r.root, path.Join(GitDirectoryName, "HEAD"))
+	ref, err := LoadReferenceFile(r.gitDir, "HEAD")
 	if err != nil {
 		return "", err
 	}
@ -65,8 +77,7 @@ func (r *Repository) CurrentBranch() (string, error) {
 }

 func (r *Repository) LatestCommit() (string, error) {
-	// load .git/HEAD
-	ref, err := LoadReferenceFile(r.root, path.Join(GitDirectoryName, "HEAD"))
+	ref, err := LoadReferenceFile(r.gitDir, "HEAD")
 	if err != nil {
 		return "", err
 	}
@ -80,12 +91,12 @@ func (r *Repository) LatestCommit() (string, error) {
 		return ref.Content, nil
 	}

-	// read reference from .git/HEAD
+	// Read reference from $GIT_DIR/HEAD
 	branchHeadPath, err := ref.ResolvePath()
 	if err != nil {
 		return "", err
 	}
-	branchHeadRef, err := LoadReferenceFile(r.root, path.Join(GitDirectoryName, branchHeadPath))
+	branchHeadRef, err := LoadReferenceFile(r.gitCommonDir, branchHeadPath)
 	if err != nil {
 		return "", err
 	}
@ -125,7 +136,7 @@ func (r *Repository) loadConfig() error {
 	if err != nil {
 		return fmt.Errorf("unable to load user specific gitconfig: %w", err)
 	}
-	err = config.loadFile(r.root, ".git/config")
+	err = config.loadFile(r.gitCommonDir, "config")
 	if err != nil {
 		return fmt.Errorf("unable to load repository specific gitconfig: %w", err)
 	}
@ -133,12 +144,6 @@ func (r *Repository) loadConfig() error {
 	return nil
 }

-// newIgnoreFile constructs a new [ignoreRules] implementation backed by
-// a file using the specified path relative to the repository root.
-func (r *Repository) newIgnoreFile(relativeIgnoreFilePath string) ignoreRules {
-	return newIgnoreFile(r.root, relativeIgnoreFilePath)
-}
-
 // getIgnoreRules returns a slice of [ignoreRules] that apply
 // for the specified prefix. The prefix must be cleaned by the caller.
 // It lazily initializes an entry for the specified prefix if it
@ -149,7 +154,7 @@ func (r *Repository) getIgnoreRules(prefix string) []ignoreRules {
 		return fs
 	}

-	r.ignore[prefix] = append(r.ignore[prefix], r.newIgnoreFile(path.Join(prefix, gitIgnoreFileName)))
+	r.ignore[prefix] = append(r.ignore[prefix], newIgnoreFile(r.rootDir, path.Join(prefix, gitIgnoreFileName)))
 	return r.ignore[prefix]
 }

@ -205,21 +210,30 @@ func (r *Repository) Ignore(relPath string) (bool, error) {
 func NewRepository(path vfs.Path) (*Repository, error) {
 	real := true
-	rootPath, err := vfs.FindLeafInTree(path, GitDirectoryName)
+	rootDir, err := vfs.FindLeafInTree(path, GitDirectoryName)
 	if err != nil {
 		if !errors.Is(err, fs.ErrNotExist) {
 			return nil, err
 		}

 		// Cannot find `.git` directory.
-		// Treat the specified path as a potential repository root.
+		// Treat the specified path as a potential repository root checkout.
 		real = false
-		rootPath = path
+		rootDir = path
+	}
+
+	// Derive $GIT_DIR and $GIT_COMMON_DIR paths if this is a real repository.
+	// If it isn't a real repository, they'll point to the (non-existent) `.git` directory.
+	gitDir, gitCommonDir, err := resolveGitDirs(rootDir)
+	if err != nil {
+		return nil, err
 	}

 	repo := &Repository{
-		real:   real,
-		root:   rootPath,
-		ignore: make(map[string][]ignoreRules),
+		real:         real,
+		rootDir:      rootDir,
+		gitDir:       gitDir,
+		gitCommonDir: gitCommonDir,
+		ignore:       make(map[string][]ignoreRules),
 	}

 	err = repo.loadConfig()
@ -252,10 +266,10 @@ func NewRepository(path vfs.Path) (*Repository, error) {
 		newStringIgnoreRules([]string{
 			".git",
 		}),
-		// Load repository-wide excludes file.
-		repo.newIgnoreFile(".git/info/excludes"),
+		// Load repository-wide exclude file.
+		newIgnoreFile(repo.gitCommonDir, "info/exclude"),
 		// Load root gitignore file.
-		repo.newIgnoreFile(".gitignore"),
+		newIgnoreFile(repo.rootDir, ".gitignore"),
 	}

 	return repo, nil

@ -80,7 +80,7 @@ func NewView(root vfs.Path) (*View, error) {

 	// Target path must be relative to the repository root path.
 	target := root.Native()
-	prefix := repo.root.Native()
+	prefix := repo.rootDir.Native()
 	if !strings.HasPrefix(target, prefix) {
 		return nil, fmt.Errorf("path %q is not within repository root %q", root.Native(), prefix)
 	}

libs/git/worktree.go Normal file

@ -0,0 +1,123 @@
package git
import (
"bufio"
"errors"
"fmt"
"io/fs"
"path/filepath"
"strings"
"github.com/databricks/cli/libs/vfs"
)
func readLines(root vfs.Path, name string) ([]string, error) {
file, err := root.Open(name)
if err != nil {
return nil, err
}
defer file.Close()
var lines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
return lines, scanner.Err()
}
// readGitDir reads the value of the `.git` file in a worktree.
func readGitDir(root vfs.Path) (string, error) {
lines, err := readLines(root, GitDirectoryName)
if err != nil {
return "", err
}
var gitDir string
for _, line := range lines {
parts := strings.SplitN(line, ": ", 2)
if len(parts) != 2 {
continue
}
if parts[0] == "gitdir" {
gitDir = strings.TrimSpace(parts[1])
}
}
if gitDir == "" {
return "", fmt.Errorf(`expected %q to contain a line with "gitdir: [...]"`, filepath.Join(root.Native(), GitDirectoryName))
}
return gitDir, nil
}
// readGitCommonDir reads the value of the `commondir` file in the `.git` directory of a worktree.
// This file typically contains "../.." to point to $GIT_COMMON_DIR.
func readGitCommonDir(gitDir vfs.Path) (string, error) {
lines, err := readLines(gitDir, "commondir")
if err != nil {
return "", err
}
if len(lines) == 0 {
return "", errors.New("file is empty")
}
return strings.TrimSpace(lines[0]), nil
}
// resolveGitDirs resolves the paths for $GIT_DIR and $GIT_COMMON_DIR.
// The path argument is the root of the checkout where (supposedly) a `.git` file or directory exists.
func resolveGitDirs(root vfs.Path) (vfs.Path, vfs.Path, error) {
fileInfo, err := root.Stat(GitDirectoryName)
if err != nil {
// If the `.git` file or directory does not exist, then this is not a git repository.
// Return paths that we know don't exist, so we do not need to perform nil checks in the caller.
if errors.Is(err, fs.ErrNotExist) {
gitDir := vfs.MustNew(filepath.Join(root.Native(), GitDirectoryName))
return gitDir, gitDir, nil
}
return nil, nil, err
}
// If the path is a directory, then it is the main working tree.
// Both $GIT_DIR and $GIT_COMMON_DIR point to the same directory.
if fileInfo.IsDir() {
gitDir := vfs.MustNew(filepath.Join(root.Native(), GitDirectoryName))
return gitDir, gitDir, nil
}
// If the path is not a directory, then it is a worktree.
// Read value for $GIT_DIR.
gitDirValue, err := readGitDir(root)
if err != nil {
return nil, nil, err
}
// Resolve $GIT_DIR.
var gitDir vfs.Path
if filepath.IsAbs(gitDirValue) {
gitDir = vfs.MustNew(gitDirValue)
} else {
gitDir = vfs.MustNew(filepath.Join(root.Native(), gitDirValue))
}
// Read value for $GIT_COMMON_DIR.
gitCommonDirValue, err := readGitCommonDir(gitDir)
if err != nil {
return nil, nil, fmt.Errorf(`expected "commondir" file in worktree git folder at %q: %w`, gitDir.Native(), err)
}
// Resolve $GIT_COMMON_DIR.
var gitCommonDir vfs.Path
if filepath.IsAbs(gitCommonDirValue) {
gitCommonDir = vfs.MustNew(gitCommonDirValue)
} else {
gitCommonDir = vfs.MustNew(filepath.Join(gitDir.Native(), gitCommonDirValue))
}
return gitDir, gitCommonDir, nil
}
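For reference, the on-disk layout that resolveGitDirs handles, per the git-worktree documentation (paths are illustrative; the test fixture in the next file mirrors this shape):

```
tmpDir/
├── .git/                      # $GIT_COMMON_DIR (the main working tree's .git)
│   └── worktrees/
│       └── my_worktree/       # $GIT_DIR for the linked worktree
│           └── commondir      # contains "../.." -> resolves to tmpDir/.git
└── my_worktree/               # linked working tree checkout
    └── .git                   # file containing "gitdir: ../.git/worktrees/my_worktree"
```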

libs/git/worktree_test.go Normal file

@ -0,0 +1,108 @@
package git
import (
"fmt"
"os"
"path/filepath"
"testing"
"github.com/databricks/cli/libs/vfs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setupWorktree(t *testing.T) string {
var err error
tmpDir := t.TempDir()
// Checkout path
err = os.MkdirAll(filepath.Join(tmpDir, "my_worktree"), os.ModePerm)
require.NoError(t, err)
// Main $GIT_COMMON_DIR
err = os.MkdirAll(filepath.Join(tmpDir, ".git"), os.ModePerm)
require.NoError(t, err)
// Worktree $GIT_DIR
err = os.MkdirAll(filepath.Join(tmpDir, ".git/worktrees/my_worktree"), os.ModePerm)
require.NoError(t, err)
return tmpDir
}
func writeGitDir(t *testing.T, dir, content string) {
err := os.WriteFile(filepath.Join(dir, "my_worktree/.git"), []byte(content), os.ModePerm)
require.NoError(t, err)
}
func writeGitCommonDir(t *testing.T, dir, content string) {
err := os.WriteFile(filepath.Join(dir, ".git/worktrees/my_worktree/commondir"), []byte(content), os.ModePerm)
require.NoError(t, err)
}
func verifyCorrectDirs(t *testing.T, dir string) {
gitDir, gitCommonDir, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
require.NoError(t, err)
assert.Equal(t, filepath.Join(dir, ".git/worktrees/my_worktree"), gitDir.Native())
assert.Equal(t, filepath.Join(dir, ".git"), gitCommonDir.Native())
}
func TestWorktreeResolveGitDir(t *testing.T) {
dir := setupWorktree(t)
writeGitCommonDir(t, dir, "../..")
t.Run("relative", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", "../.git/worktrees/my_worktree"))
verifyCorrectDirs(t, dir)
})
t.Run("absolute", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", filepath.Join(dir, ".git/worktrees/my_worktree")))
verifyCorrectDirs(t, dir)
})
t.Run("additional spaces", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s \n\n\n", "../.git/worktrees/my_worktree"))
verifyCorrectDirs(t, dir)
})
t.Run("empty", func(t *testing.T) {
writeGitDir(t, dir, "")
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, ` to contain a line with "gitdir: [...]"`)
})
}
func TestWorktreeResolveCommonDir(t *testing.T) {
dir := setupWorktree(t)
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", "../.git/worktrees/my_worktree"))
t.Run("relative", func(t *testing.T) {
writeGitCommonDir(t, dir, "../..")
verifyCorrectDirs(t, dir)
})
t.Run("absolute", func(t *testing.T) {
writeGitCommonDir(t, dir, filepath.Join(dir, ".git"))
verifyCorrectDirs(t, dir)
})
t.Run("additional spaces", func(t *testing.T) {
writeGitCommonDir(t, dir, " ../.. \n\n\n")
verifyCorrectDirs(t, dir)
})
t.Run("empty", func(t *testing.T) {
writeGitCommonDir(t, dir, "")
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, `expected "commondir" file in worktree git folder at `)
})
t.Run("missing", func(t *testing.T) {
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, `expected "commondir" file in worktree git folder at `)
})
}


@ -6,6 +6,7 @@ import (
 	"fmt"
 	"io"
 	"os/exec"
+	"sync"
 )

 type execOption func(context.Context, *exec.Cmd) error
@ -69,10 +70,24 @@ func WithStdoutWriter(dst io.Writer) execOption {
 	}
 }

+// safeWriter is a writer that is safe to use concurrently.
+// It serializes writes to the underlying writer.
+type safeWriter struct {
+	w io.Writer
+	m sync.Mutex
+}
+
+func (s *safeWriter) Write(p []byte) (n int, err error) {
+	s.m.Lock()
+	defer s.m.Unlock()
+	return s.w.Write(p)
+}
+
 func WithCombinedOutput(buf *bytes.Buffer) execOption {
+	sw := &safeWriter{w: buf}
 	return func(_ context.Context, c *exec.Cmd) error {
-		c.Stdout = io.MultiWriter(buf, c.Stdout)
-		c.Stderr = io.MultiWriter(buf, c.Stderr)
+		c.Stdout = io.MultiWriter(sw, c.Stdout)
+		c.Stderr = io.MultiWriter(sw, c.Stderr)
 		return nil
 	}
 }
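The race this change addresses is easy to reproduce in isolation: bytes.Buffer is not safe for concurrent use, so two io.MultiWriter chains feeding one shared buffer from separate goroutines corrupt it. A standalone sketch of the mutex-guarded writer pattern (names here are illustrative, not the package's API):

```go
package main

import (
	"bytes"
	"io"
	"sync"
)

// lockedWriter serializes Write calls to a shared writer, the same idea
// as safeWriter in the diff above.
type lockedWriter struct {
	mu sync.Mutex
	w  io.Writer
}

func (l *lockedWriter) Write(p []byte) (int, error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.w.Write(p)
}

func main() {
	var buf bytes.Buffer
	lw := &lockedWriter{w: &buf}

	// Two goroutines stand in for the stdout and stderr copiers.
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				io.WriteString(lw, "line\n") // safe: writes are serialized
			}
		}()
	}
	wg.Wait()
}
```

Running this with `go run -race` is clean; replacing `lw` with `&buf` directly makes the race detector fire.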


@ -3,7 +3,7 @@ resources:
   pipelines:
     {{.project_name}}_pipeline:
       name: {{.project_name}}_pipeline
-      {{- if eq default_catalog ""}}
+      {{- if or (eq default_catalog "") (eq default_catalog "hive_metastore")}}
       ## Specify the 'catalog' field to configure this pipeline to make use of Unity Catalog:
       # catalog: catalog_name
       {{- else}}
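To see how the widened condition behaves, here is a minimal text/template sketch. It assumes default_catalog is exposed to the template as a niladic function, as the bundle template helpers do; the stub value below is illustrative:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// With the stub returning "hive_metastore", the widened condition now
	// takes the commented-out branch, same as an empty default catalog.
	tmpl := template.Must(template.New("pipeline").Funcs(template.FuncMap{
		"default_catalog": func() string { return "hive_metastore" },
	}).Parse(`{{- if or (eq default_catalog "") (eq default_catalog "hive_metastore")}}
# catalog: catalog_name
{{- else}}
catalog: {{default_catalog}}
{{- end}}
`))
	_ = tmpl.Execute(os.Stdout, nil) // prints the commented-out catalog hint
}
```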