Compare commits

...

38 Commits

Author SHA1 Message Date
Pieter Noordhuis 37b0039c1c
Never swallow errors 2024-10-01 11:22:53 -07:00
Pieter Noordhuis b54c10b832
Remove embed_credentials default from tfdyn 2024-10-01 11:22:44 -07:00
Pieter Noordhuis b63e94366a
Rename mutator 2024-10-01 11:19:43 -07:00
Pieter Noordhuis 0301854a43
Comment 2024-10-01 10:47:20 -07:00
Pieter Noordhuis 8186da8e67
Simplify 2024-10-01 10:47:08 -07:00
Pieter Noordhuis a2a794e5fa
Configure the default parent path for dashboards 2024-10-01 10:46:53 -07:00
Pieter Noordhuis c911f7922e
Move test 2024-10-01 09:47:40 -07:00
Pieter Noordhuis 0e0f4217ec
Remove examples 2024-10-01 04:55:51 -07:00
Pieter Noordhuis 0f22ec6116
Remove generate changes from this branch 2024-10-01 04:51:50 -07:00
Pieter Noordhuis c123cca275
Merge branch 'workspace-resource-path' into dashboards 2024-10-01 04:49:53 -07:00
Pieter Noordhuis 08f7f3b6b7
Add resource path field to bundle workspace configuration 2024-09-30 14:29:16 +02:00
Pieter Noordhuis 802be90687
Coverage for conversion logic 2024-09-30 11:54:08 +02:00
Lennart Kats (databricks) da3b4f7c72
Fix panic in `apply_presets.go` (#1796)
## Changes

This fixes the user-reported panic in `apply_presets.go`. I'm still
unsure how to reproduce this, since the CLI just reports `job broken_job
is not defined` when I try to use `bundle deploy` with an empty job.
That said, we may as well be defensive here: we have lots of checks for
empty job/cluster/etc. settings scattered throughout our code base, so
at least this keeps us consistent.
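A minimal, self-contained sketch of the defensive pattern (simplified stand-in types; the actual fix is visible in the `apply_presets.go` diff further down this page):

```go
package main

import "fmt"

// JobSettings is a simplified stand-in for the SDK's job settings type.
type JobSettings struct{ Name string }

// Job mirrors the shape that matters here: an empty `broken_job:` entry
// in YAML produces a Job whose embedded settings pointer is nil.
type Job struct{ *JobSettings }

func applyPrefix(jobs map[string]*Job, prefix string) []error {
	var errs []error
	for key, j := range jobs {
		// Defensive check: report and skip instead of panicking.
		if j.JobSettings == nil {
			errs = append(errs, fmt.Errorf("job %s is not defined", key))
			continue
		}
		j.Name = prefix + j.Name
	}
	return errs
}

func main() {
	jobs := map[string]*Job{"broken_job": {}}
	fmt.Println(applyPrefix(jobs, "dev-")) // [job broken_job is not defined]
}
```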
2024-09-29 14:08:10 +00:00
Pieter Noordhuis 3a1d92c75c
Comments 2024-09-27 16:39:51 +02:00
Pieter Noordhuis ff15a046fc
Merge remote-tracking branch 'origin/main' into dashboards 2024-09-27 14:58:53 +02:00
Pieter Noordhuis 7a9355c02c
Merge remote-tracking branch 'origin/main' into dashboards 2024-09-27 14:58:37 +02:00
Pieter Noordhuis 1d1aa0a416
Rename `RootPath` -> `BundleRootPath` (#1792)
## Changes

After introducing the `SyncRootPath` field on the bundle (#1694), the
previous `RootPath` became ambiguous. Does it mean the bundle root path
or the sync root path? This PR renames the field to `BundleRootPath` to
remove the ambiguity.

## Tests

n/a

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2024-09-27 10:03:05 +00:00
Pieter Noordhuis 56cd96cb93
Move trampoline code into trampoline package (#1793)
## Changes

Doing this to make room for PyDABs under `bundle/python`.

## Tests

n/a
2024-09-27 09:32:54 +00:00
Pieter Noordhuis a1dca56abf
Trim trailing whitespace (#1794)
## Changes

Trailing whitespace is trimmed per the VS Code settings for this
repository.

## Tests

n/a
2024-09-27 09:30:39 +00:00
shreyas-goenka 4e8e027380
Sort tasks by `task_key` before generating the Terraform configuration (#1776)
## Changes
Sort the tasks in the resultant `bundle.tf.json`. This is important
because configuration from one task can leak into another if the tasks
are not sorted.

For more details see:
1.
https://github.com/databricks/terraform-provider-databricks/issues/3951
2.
https://github.com/databricks/terraform-provider-databricks/issues/4011
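
A minimal sketch of the fix (a simplified `Task` type stands in for the SDK's job task struct):

```go
package main

import (
	"fmt"
	"sort"
)

// Task is a simplified stand-in for the SDK's jobs.Task type.
type Task struct{ TaskKey string }

func main() {
	tasks := []Task{{"task-Z"}, {"task-1"}}
	// Sort by task_key so the generated bundle.tf.json is deterministic
	// and a task's configuration cannot shift onto a neighboring task
	// between deployments.
	sort.Slice(tasks, func(i, j int) bool {
		return tasks[i].TaskKey < tasks[j].TaskKey
	})
	fmt.Println(tasks) // [{task-1} {task-Z}]
}
```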

## Tests
Unit test and manually.

For manual testing I used the following configuration:
```
resources:
  jobs:
    foo:
      tasks: 
        - task_key: task-Z
          notebook_task: 
            notebook_path: nb.py
            source: GIT
          existing_cluster_id: 0715-133738-ju0ma84z

        - task_key: task-1
          notebook_task: 
            notebook_path: ${workspace.file_path}/local.py
            source: WORKSPACE
          existing_cluster_id: 0715-133738-ju0ma84z
          depends_on: 
            - task_key: task-Z


      git_source: 
        git_provider: gitHub
        git_url: https://github.com/shreyas-goenka/job-source-tmp.git
        git_branch: main
```

Steps (1):
1. Deploy this bundle.
2. Comment out "source: GIT"
3. Deploy again

Before:
Deploying this bundle twice would fail. This is because the "source:
GIT" would carry over to the next deployment.

After:
There was no error on the subsequent deployment.

Steps (2):
1. Deploy once
2. Deploy again

Before:
Works correctly but leads to an update API call every time.

After:
No diff is detected by terraform.
2024-09-26 13:22:22 +00:00
Andrew Nester 66f2ba64a8
Simplified isFullVariableOverrideDef implementation (#1791)
## Changes
Simplified isFullVariableOverrideDef implementation

Follow up on https://github.com/databricks/cli/pull/1787
2024-09-26 12:55:07 +00:00
dependabot[bot] 94d8c3ba1e
Bump github.com/hashicorp/hc-install from 0.7.0 to 0.9.0 (#1772)
Bumps
[github.com/hashicorp/hc-install](https://github.com/hashicorp/hc-install)
from 0.7.0 to 0.9.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/hashicorp/hc-install/releases">github.com/hashicorp/hc-install's
releases</a>.</em></p>
<blockquote>
<h2>v0.9.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Finish Release of 0.8.1 by updating VERSION by <a
href="https://github.com/mutahhir"><code>@​mutahhir</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/248">hashicorp/hc-install#248</a></li>
<li>build(deps): bump golang.org/x/mod from 0.20.0 to 0.21.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/242">hashicorp/hc-install#242</a></li>
<li>docs: Update release instructions by <a
href="https://github.com/radeksimko"><code>@​radeksimko</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/249">hashicorp/hc-install#249</a></li>
<li>Prepare for v0.9.0 release by <a
href="https://github.com/mutahhir"><code>@​mutahhir</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/250">hashicorp/hc-install#250</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/hashicorp/hc-install/compare/v0.8.1...v0.9.0">https://github.com/hashicorp/hc-install/compare/v0.8.1...v0.9.0</a></p>
<h2>v0.8.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Add artifacts manifest (automatically generated) by <a
href="https://github.com/jeanneryan"><code>@​jeanneryan</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/235">hashicorp/hc-install#235</a></li>
<li>build(deps): bump golang.org/x/mod from 0.19.0 to 0.20.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/236">hashicorp/hc-install#236</a></li>
<li>build(deps): Bump workflows to latest trusted versions by <a
href="https://github.com/hashicorp-tsccr"><code>@​hashicorp-tsccr</code></a>
in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/237">hashicorp/hc-install#237</a></li>
<li>build(deps): Bump workflows to latest trusted versions by <a
href="https://github.com/hashicorp-tsccr"><code>@​hashicorp-tsccr</code></a>
in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/238">hashicorp/hc-install#238</a></li>
<li>build(deps): bump hashicorp/action-setup-bob from 2.1.0 to 2.1.1 in
the github-actions-backward-compatible group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/241">hashicorp/hc-install#241</a></li>
<li>httpclient: Reuse existing configured logger by <a
href="https://github.com/radeksimko"><code>@​radeksimko</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/240">hashicorp/hc-install#240</a></li>
<li>build(deps): Bump workflows to latest trusted versions by <a
href="https://github.com/hashicorp-tsccr"><code>@​hashicorp-tsccr</code></a>
in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/243">hashicorp/hc-install#243</a></li>
<li>Nightly and PR builds: fix &quot;no space left on device&quot; on
macOS runner by <a
href="https://github.com/kmoe"><code>@​kmoe</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/245">hashicorp/hc-install#245</a></li>
<li>Update CONTRIBUTING.md to add clean up step by <a
href="https://github.com/mutahhir"><code>@​mutahhir</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/246">hashicorp/hc-install#246</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/jeanneryan"><code>@​jeanneryan</code></a> made
their first contribution in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/235">hashicorp/hc-install#235</a></li>
<li><a href="https://github.com/mutahhir"><code>@​mutahhir</code></a>
made their first contribution in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/246">hashicorp/hc-install#246</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/hashicorp/hc-install/compare/v0.8.0...v0.8.1">https://github.com/hashicorp/hc-install/compare/v0.8.0...v0.8.1</a></p>
<h2>v0.8.0</h2>
<p>ENHANCEMENTS:</p>
<ul>
<li>Add retries for HTTP operations by <a
href="https://github.com/james0209"><code>@​james0209</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/218">hashicorp/hc-install#218</a></li>
<li>Allow <code>LicenseDir</code> field for non-enterprise usage by <a
href="https://github.com/james0209"><code>@​james0209</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/214">hashicorp/hc-install#214</a></li>
</ul>
<p>BUG FIXES:</p>
<ul>
<li>[fix] include custom url's &quot;path&quot; when creating Archive
URL by <a
href="https://github.com/james0209"><code>@​james0209</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/234">hashicorp/hc-install#234</a></li>
</ul>
<p>INTERNAL:</p>
<ul>
<li>[chore] Remove unused variable by <a
href="https://github.com/james0209"><code>@​james0209</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/215">hashicorp/hc-install#215</a></li>
<li>build(deps): bump github.com/hashicorp/go-retryablehttp from 0.7.6
to 0.7.7 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/221">hashicorp/hc-install#221</a></li>
<li>build(deps): bump github.com/hashicorp/go-version from 1.6.0 to
1.7.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/216">hashicorp/hc-install#216</a></li>
<li>build(deps): bump golang.org/x/mod from 0.17.0 to 0.18.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/223">hashicorp/hc-install#223</a></li>
<li>build(deps): bump golang.org/x/mod from 0.18.0 to 0.19.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/229">hashicorp/hc-install#229</a></li>
<li>build(deps): bump hashicorp/action-setup-bob from 2.0.0 to 2.0.3 in
the github-actions-backward-compatible group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/220">hashicorp/hc-install#220</a></li>
<li>build(deps): bump hashicorp/action-setup-bob from 2.0.3 to 2.1.0 in
the github-actions-backward-compatible group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/222">hashicorp/hc-install#222</a></li>
<li>build(deps): bump hashicorp/actions-packaging-linux from 1.7 to 1.8
in the github-actions-backward-compatible group by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/224">hashicorp/hc-install#224</a></li>
<li>build(deps): Bump workflows to latest trusted versions by <a
href="https://github.com/hashicorp-tsccr"><code>@​hashicorp-tsccr</code></a>
in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/219">hashicorp/hc-install#219</a></li>
<li>build(deps): Bump workflows to latest trusted versions by <a
href="https://github.com/hashicorp-tsccr"><code>@​hashicorp-tsccr</code></a>
in <a
href="https://redirect.github.com/hashicorp/hc-install/pull/226">hashicorp/hc-install#226</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="157a802cb6"><code>157a802</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/hc-install/issues/250">#250</a>
from hashicorp/release-0.9.0</li>
<li><a
href="4c734fc034"><code>4c734fc</code></a>
Prepare for v0.9.0 release</li>
<li><a
href="d78b32850e"><code>d78b328</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/hc-install/issues/249">#249</a>
from hashicorp/d-contributing-md-update</li>
<li><a
href="34f38b0890"><code>34f38b0</code></a>
docs: Update release instructions</li>
<li><a
href="6a5aa830d9"><code>6a5aa83</code></a>
build(deps): bump golang.org/x/mod from 0.20.0 to 0.21.0 (<a
href="https://redirect.github.com/hashicorp/hc-install/issues/242">#242</a>)</li>
<li><a
href="1784fccf08"><code>1784fcc</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/hc-install/issues/248">#248</a>
from hashicorp/revert-version-contents</li>
<li><a
href="ea2c69b3af"><code>ea2c69b</code></a>
Finish Release of 0.8.1 by updating VERSION</li>
<li><a
href="4f3e00edd9"><code>4f3e00e</code></a>
Releasing 0.8.1</li>
<li><a
href="c6d1ced5b4"><code>c6d1ced</code></a>
Merge pull request <a
href="https://redirect.github.com/hashicorp/hc-install/issues/246">#246</a>
from hashicorp/update-contributing</li>
<li><a
href="eea12f14a6"><code>eea12f1</code></a>
Update CONTRIBUTING.md to add clean up step</li>
<li>Additional commits viewable in <a
href="https://github.com/hashicorp/hc-install/compare/v0.7.0...v0.9.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/hashicorp/hc-install | [>= 0.8.a, < 0.9] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/hashicorp/hc-install&package-manager=go_modules&previous-version=0.7.0&new-version=0.9.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-09-26 06:29:34 +00:00
dependabot[bot] 875b112f80
Bump golang.org/x/mod from 0.20.0 to 0.21.0 (#1758)
Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.20.0 to
0.21.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="46a3137dae"><code>46a3137</code></a>
zip: set GIT_DIR in test when using bare repositories</li>
<li><a
href="3afcd4e90a"><code>3afcd4e</code></a>
go.mod: set go version to 1.22.0</li>
<li><a
href="b1d336cfca"><code>b1d336c</code></a>
go.mod: update required go version to go1.22</li>
<li>See full diff in <a
href="https://github.com/golang/mod/compare/v0.20.0...v0.21.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/mod&package-manager=go_modules&previous-version=0.20.0&new-version=0.21.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-09-26 06:07:01 +00:00
shreyas-goenka 495040e4cd
Modify SetLocation test utility to take full locations as argument (#1788)
I plan to use this in https://github.com/databricks/cli/pull/1780, to
set the line and column numbers as well for the locations.

gopatch file used:
```
@@
var x expression
var y expression
var z expression
@@
-bundletest.SetLocation(x, y, z)
+bundletest.SetLocation(x, y, []dyn.Location{{File: z}})
```
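
For illustration, a call with the new signature that also pins a line and column could look like this (the path and numbers are made up; `b` is a `*bundle.Bundle` as in the repository's tests):

```go
// Hypothetical example: provide a full location, not just a file.
bundletest.SetLocation(b, "resources.jobs.foo", []dyn.Location{
	{File: "databricks.yml", Line: 10, Column: 3},
})
```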
2024-09-25 16:13:48 +00:00
Pieter Noordhuis 7f1121d8d8
Pin Go toolchain to 1.22.7 (#1790)
## Changes

Relates to https://github.com/databricks/cli/pull/1758.

More information about toolchains:
* https://go.dev/blog/toolchain
* https://go.dev/doc/toolchain

We need to specify the toolchain as we need to bump Go to 1.22.0 for the
`mod` upgrade and want to use the latest toolchain on the 1.22 series.

## Tests

The previous release was made with Go 1.22.7 so we should continue to
use it.
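
For reference, the pin takes this shape in `go.mod` (a sketch; the module path and require blocks are elided):

```
go 1.22.0

toolchain go1.22.7
```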
2024-09-25 15:45:28 +00:00
shreyas-goenka a4ba0bbe9f
Add sub-extension to resource files in built-in templates (#1777)
## Changes
We want to encourage a pattern of only specifying a single resource in a
YAML file when an `.<resource-type>.yml` extension (like `.job.yml`) is
used. This convention could allow us to bijectively map a resource YAML
file to its corresponding resource in the Databricks workspace.

This PR simply makes the built-in templates compliant to this format.
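
As an illustration of the convention (file and resource names here are hypothetical), a `resources/my_job.job.yml` holds exactly one job:

```yaml
# resources/my_job.job.yml: exactly one resource per .job.yml file
resources:
  jobs:
    my_job:
      name: my_job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ../src/notebook.ipynb
```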

## Tests
Existing tests.
2024-09-25 12:58:14 +00:00
Andrew Nester b3a3071086
Fixed full variable override detection (#1787)
## Changes
Fixes #1786 

## Tests
All valid override combinations are added as test cases
2024-09-25 12:35:16 +00:00
Gleb Kanterov 3d9decdda9
Add JobTaskClusterSpec validate mutator (#1784)
## Changes
Add JobTaskClusterSpec validate mutator. It catches the case when tasks
don't specify which cluster to use.

For example, we can get this error with minor modifications to
`default-python` template:

```yaml
      tasks:
        - task_key: python_file_task
          spark_python_task:
            python_file: ../src/my_project_10/main.py
```

```
 % databricks bundle validate
Error: Missing required cluster or environment settings
  at resources.jobs.my_project_10_job.tasks[0]
  in resources/my_project_10_job.yml:17:11

Task "print_github_stars" requires a cluster or an environment to run.
Specify one of the following fields: job_cluster_key, environment_key, existing_cluster_id, new_cluster.
```

We implicitly rely on "one of" validation, which does not exist. Many
bundle fields can't coexist; for instance,
`JobTask.{existing_cluster_id,job_cluster_key}`, `Library.{whl,pypi}`,
and `JobTask.{notebook_task,python_wheel_task}` are mutually exclusive.
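
A self-contained sketch of the check the mutator performs, with a simplified struct standing in for the SDK's `jobs.Task` (field names mirror the error message above):

```go
package main

import (
	"errors"
	"fmt"
)

// Task is a simplified stand-in for the SDK's jobs.Task type.
type Task struct {
	JobClusterKey     string
	EnvironmentKey    string
	ExistingClusterId string
	NewCluster        *struct{}
}

// validateCompute enforces the "one of" rule: every task must name
// at least one way to resolve its compute.
func validateCompute(t Task) error {
	if t.JobClusterKey == "" && t.EnvironmentKey == "" && t.ExistingClusterId == "" && t.NewCluster == nil {
		return errors.New("specify one of: job_cluster_key, environment_key, existing_cluster_id, new_cluster")
	}
	return nil
}

func main() {
	fmt.Println(validateCompute(Task{})) // fails: no compute specified
}
```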

## Tests
Unit tests

---------

Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
2024-09-25 11:30:14 +00:00
Gleb Kanterov 490259a14a
Refactor jobs path translation (#1782)
## Changes
Extract package for other modules to transform different kinds of paths
in job resources.

## Tests
Unit tests
2024-09-24 13:51:54 +00:00
shreyas-goenka 0cc35ca056
Assert tokens are redacted in origin URL when username is not specified (#1785)
TSIA
2024-09-23 12:42:30 +00:00
Andrew Nester 56ed9bebf3
Added support for creating all-purpose clusters (#1698)
## Changes
Added support for creating all-purpose clusters

Example of configuration

```
bundle:
  name: clusters

resources:
  clusters:
    test_cluster:
      cluster_name: "Test Cluster"
      num_workers: 2
      node_type_id: "i3.xlarge"
      autoscale:
        min_workers: 2
        max_workers: 7
      spark_version: "13.3.x-scala2.12"
      spark_conf:
        "spark.executor.memory": "2g"

  jobs:
    test_job:
      name: "Test Job"
      tasks:
        - task_key: test_task
          existing_cluster_id: ${resources.clusters.test_cluster.id}
          notebook_task:
            notebook_path: "./src/test.py"

targets:
    development:
      mode: development
      compute_id: ${resources.clusters.test_cluster.id}

```

## Tests
Added unit, config and E2E tests
2024-09-23 10:42:34 +00:00
Ilia Babanov ac80d3dfcb
Add verbose flag to the "bundle deploy" command (#1774)
## Changes
- Extract sync output logic from `cmd/sync` into `lib/sync`
- Add hidden `verbose` flag to the `bundle deploy` command; it's false
by default and hidden from the `--help` output
- Pass output handler to the `deploy/files/upload` mutator if the
verbose option is true

There was an idea to use in-place output, overwriting each past file sync
event in the output, but that won't work for the extension, since it
doesn't display deploy logs in the terminal.

Example output:
```
~/tmp/defpy: ~/cli/cli bundle deploy --sync-progress
Building defpy...
Uploading defpy-0.0.1+20240917.112755-py3-none-any.whl...
Uploading bundle files to /Users/ilia.babanov@databricks.com/.bundle/defpy/dev/files...
Action: PUT: requirements-dev.txt, resources/defpy_pipeline.yml, pytest.ini, src/defpy/main.py, src/defpy/__init__.py, src/dlt_pipeline.ipynb, tests/main_test.py, src/notebook.ipynb, setup.py, resources/defpy_job.yml, .vscode/extensions.json, .vscode/settings.json, fixtures/.gitkeep, .vscode/__builtins__.pyi, README.md, .gitignore, databricks.yml
Uploaded tests
Uploaded resources
Uploaded fixtures
Uploaded .vscode
Uploaded src/defpy
Uploaded requirements-dev.txt
Uploaded .gitignore
Uploaded fixtures/.gitkeep
Uploaded src/defpy/__init__.py
Uploaded databricks.yml
Uploaded README.md
Uploaded setup.py
Uploaded .vscode/__builtins__.pyi
Uploaded .vscode/extensions.json
Uploaded src/dlt_pipeline.ipynb
Uploaded .vscode/settings.json
Uploaded resources/defpy_job.yml
Uploaded pytest.ini
Uploaded src/defpy/main.py
Uploaded tests/main_test.py
Uploaded resources/defpy_pipeline.yml
Uploaded src/notebook.ipynb
Initial Sync Complete
Deploying resources...
Updating deployment state...
Deployment complete!
```

Output example in the extension:
<img width="1843" alt="Screenshot 2024-09-19 at 11 07 48"
src="https://github.com/user-attachments/assets/0fafd095-cdc6-44b8-b482-27a38ada0330">


## Tests
Manually for the `sync` and `bundle deploy` commands + vscode extension
sync and deploy flows
2024-09-23 10:09:11 +00:00
Lennart Kats (databricks) 7665c639bd
Use Unity Catalog for pipelines in the default-python template (#1766)
## Summary

Enables Unity Catalog for pipelines in the default template. Pipelines
will default to non-Unity Catalog pipelines if a catalog is not
specified.

*Small caveat*: there are cases where admins lock down the default
catalog of a workspace and don't allow the creation of a new schema
there. If that happens, the pipeline fails at runtime with a clear error
indicating what happened ("PERMISSION_DENIED: User does not have CREATE
SCHEMA on Catalog 'main'"). I've seen this with an internal Databricks
workspace, where creating new non-UC schemas wasn't locked down, but
creation in the `main` catalog was.
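
For context, opting a pipeline into Unity Catalog in bundle configuration comes down to setting its `catalog` field; a minimal hypothetical example:

```yaml
resources:
  pipelines:
    my_pipeline:
      name: my_pipeline
      catalog: main       # omit to fall back to a non-Unity Catalog pipeline
      target: my_schema
      libraries:
        - notebook:
            path: ../src/dlt_pipeline.ipynb
```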

## Testing

- Validated on a non-UC + UC workspace. The catalog selection logic here
is the same as applied for the SQL templates.
2024-09-23 09:52:04 +00:00
Lennart Kats (databricks) 6c57683dc6
Reduce time until the prompt is shown for bundle run (#1727)
## Summary

Makes the `databricks bundle run` command use local state before showing
the menu prompt, so the prompt appears more quickly. For large/busy
workspaces this means the prompt can show 2-3 seconds earlier.
2024-09-21 06:36:47 +00:00
Andrew Nester cf989a7e10
Upgrade to TF provider 1.52 (#1781)
## Changes
Upgrade to TF provider 1.52

We also temporarily skip generating plugin framework structs to unblock
the upgrade, as generation does not work yet and needs to be fixed separately
2024-09-19 11:21:32 +00:00
Andrew Nester e2c1d51d84
[Release] Release v0.228.1 (#1778)
Bundles:
* Added listing cluster filtering for cluster lookups
([#1754](https://github.com/databricks/cli/pull/1754)).
* Expand library globs relative to the sync root
([#1756](https://github.com/databricks/cli/pull/1756)).
* Fixed generated YAML missing 'default' for empty values
([#1765](https://github.com/databricks/cli/pull/1765)).
* Use periodic triggers in all templates
([#1739](https://github.com/databricks/cli/pull/1739)).
* Use the friendly name of service principals when shortening their name
([#1770](https://github.com/databricks/cli/pull/1770)).
* Fixed detecting full syntax variable override which includes type
field ([#1775](https://github.com/databricks/cli/pull/1775)).

Internal:
* Pass copy of `dyn.Path` to callback function
([#1747](https://github.com/databricks/cli/pull/1747)).
* Make bundle JSON schema modular with `$defs`
([#1700](https://github.com/databricks/cli/pull/1700)).
* Alias variables block in the `Target` struct
([#1748](https://github.com/databricks/cli/pull/1748)).
* Add end to end integration tests for bundle JSON schema
([#1726](https://github.com/databricks/cli/pull/1726)).
* Fix artifact upload integration tests
([#1767](https://github.com/databricks/cli/pull/1767)).

API Changes:
 * Added `databricks quality-monitors regenerate-dashboard` command.

OpenAPI commit d05898328669a3f8ab0c2ecee37db2673d3ea3f7 (2024-09-04)
Dependency updates:
* Bump golang.org/x/term from 0.23.0 to 0.24.0
([#1757](https://github.com/databricks/cli/pull/1757)).
* Bump golang.org/x/oauth2 from 0.22.0 to 0.23.0
([#1761](https://github.com/databricks/cli/pull/1761)).
* Bump golang.org/x/text from 0.17.0 to 0.18.0
([#1759](https://github.com/databricks/cli/pull/1759)).
* Bump github.com/databricks/databricks-sdk-go from 0.45.0 to 0.46.0
([#1760](https://github.com/databricks/cli/pull/1760)).
2024-09-18 11:26:16 +00:00
Andrew Nester bcab6ca37b
Fixed detecting full syntax variable override which includes type field (#1775)
## Changes
Fixes #1773 

## Tests
Confirmed manually
2024-09-18 10:23:07 +00:00
Lennart Kats (databricks) e220f9ddd6
Use the friendly name of service principals when shortening their name (#1770)
## Summary

Use the friendly name of service principals when shortening their name.

This change is helpful for the prefix in development mode. Instead of
adding a prefix like `[dev 1706906c-c0a2-4c25-9f57-3a7aa3cb8123]`, we'll
prefix like `[dev my_principal]`.
2024-09-16 18:35:07 +00:00
151 changed files with 2586 additions and 1638 deletions

View File

@@ -33,7 +33,7 @@ jobs:
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.22.x
+          go-version: 1.22.7

       - name: Setup Python
         uses: actions/setup-python@v5
@@ -68,7 +68,7 @@ jobs:
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.22.x
+          go-version: 1.22.7
         # No need to download cached dependencies when running gofmt.
         cache: false
@@ -100,7 +100,7 @@ jobs:
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.22.x
+          go-version: 1.22.7
       # Github repo: https://github.com/ajv-validator/ajv-cli
       - name: Install ajv-cli

View File

@@ -21,7 +21,7 @@ jobs:
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.22.x
+          go-version: 1.22.7
         # The default cache key for this action considers only the `go.sum` file.
         # We include .goreleaser.yaml here to differentiate from the cache used by the push action

View File

@@ -22,7 +22,7 @@ jobs:
       - name: Setup Go
         uses: actions/setup-go@v5
         with:
-          go-version: 1.22.x
+          go-version: 1.22.7
         # The default cache key for this action considers only the `go.sum` file.
         # We include .goreleaser.yaml here to differentiate from the cache used by the push action

View File

@@ -1,5 +1,32 @@
 # Version changelog

+## [Release] Release v0.228.1
+
+Bundles:
+ * Added listing cluster filtering for cluster lookups ([#1754](https://github.com/databricks/cli/pull/1754)).
+ * Expand library globs relative to the sync root ([#1756](https://github.com/databricks/cli/pull/1756)).
+ * Fixed generated YAML missing 'default' for empty values ([#1765](https://github.com/databricks/cli/pull/1765)).
+ * Use periodic triggers in all templates ([#1739](https://github.com/databricks/cli/pull/1739)).
+ * Use the friendly name of service principals when shortening their name ([#1770](https://github.com/databricks/cli/pull/1770)).
+ * Fixed detecting full syntax variable override which includes type field ([#1775](https://github.com/databricks/cli/pull/1775)).
+
+Internal:
+ * Pass copy of `dyn.Path` to callback function ([#1747](https://github.com/databricks/cli/pull/1747)).
+ * Make bundle JSON schema modular with `$defs` ([#1700](https://github.com/databricks/cli/pull/1700)).
+ * Alias variables block in the `Target` struct ([#1748](https://github.com/databricks/cli/pull/1748)).
+ * Add end to end integration tests for bundle JSON schema ([#1726](https://github.com/databricks/cli/pull/1726)).
+ * Fix artifact upload integration tests ([#1767](https://github.com/databricks/cli/pull/1767)).
+
+API Changes:
+ * Added `databricks quality-monitors regenerate-dashboard` command.
+
+OpenAPI commit d05898328669a3f8ab0c2ecee37db2673d3ea3f7 (2024-09-04)
+
+Dependency updates:
+ * Bump golang.org/x/term from 0.23.0 to 0.24.0 ([#1757](https://github.com/databricks/cli/pull/1757)).
+ * Bump golang.org/x/oauth2 from 0.22.0 to 0.23.0 ([#1761](https://github.com/databricks/cli/pull/1761)).
+ * Bump golang.org/x/text from 0.17.0 to 0.18.0 ([#1759](https://github.com/databricks/cli/pull/1759)).
+ * Bump github.com/databricks/databricks-sdk-go from 0.45.0 to 0.46.0 ([#1760](https://github.com/databricks/cli/pull/1760)).
+
 ## [Release] Release v0.228.0

 CLI:

View File

@@ -10,6 +10,7 @@ import (
 	"github.com/databricks/cli/bundle/config"
 	"github.com/databricks/cli/bundle/internal/bundletest"
 	"github.com/databricks/cli/internal/testutil"
+	"github.com/databricks/cli/libs/dyn"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -23,7 +24,7 @@ func TestExpandGlobs_Nominal(t *testing.T) {
 	testutil.Touch(t, tmpDir, "bc.txt")

 	b := &bundle.Bundle{
-		RootPath: tmpDir,
+		BundleRootPath: tmpDir,
 		Config: config.Root{
 			Artifacts: config.Artifacts{
 				"test": {
@@ -36,7 +37,7 @@ func TestExpandGlobs_Nominal(t *testing.T) {
 		},
 	}

-	bundletest.SetLocation(b, "artifacts", filepath.Join(tmpDir, "databricks.yml"))
+	bundletest.SetLocation(b, "artifacts", []dyn.Location{{File: filepath.Join(tmpDir, "databricks.yml")}})

 	ctx := context.Background()
 	diags := bundle.Apply(ctx, b, bundle.Seq(
@@ -62,7 +63,7 @@ func TestExpandGlobs_InvalidPattern(t *testing.T) {
 	tmpDir := t.TempDir()

 	b := &bundle.Bundle{
-		RootPath: tmpDir,
+		BundleRootPath: tmpDir,
 		Config: config.Root{
 			Artifacts: config.Artifacts{
 				"test": {
@@ -77,7 +78,7 @@ func TestExpandGlobs_InvalidPattern(t *testing.T) {
 		},
 	}

-	bundletest.SetLocation(b, "artifacts", filepath.Join(tmpDir, "databricks.yml"))
+	bundletest.SetLocation(b, "artifacts", []dyn.Location{{File: filepath.Join(tmpDir, "databricks.yml")}})

 	ctx := context.Background()
 	diags := bundle.Apply(ctx, b, bundle.Seq(
@@ -110,7 +111,7 @@ func TestExpandGlobs_NoMatches(t *testing.T) {
 	testutil.Touch(t, tmpDir, "b2.txt")

 	b := &bundle.Bundle{
-		RootPath: tmpDir,
+		BundleRootPath: tmpDir,
 		Config: config.Root{
 			Artifacts: config.Artifacts{
 				"test": {
@@ -125,7 +126,7 @@ func TestExpandGlobs_NoMatches(t *testing.T) {
 		},
 	}

-	bundletest.SetLocation(b, "artifacts", filepath.Join(tmpDir, "databricks.yml"))
+	bundletest.SetLocation(b, "artifacts", []dyn.Location{{File: filepath.Join(tmpDir, "databricks.yml")}})

 	ctx := context.Background()
 	diags := bundle.Apply(ctx, b, bundle.Seq(

View File

@@ -47,7 +47,7 @@ func (m *prepare) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics
 	// If artifact path is not provided, use bundle root dir
 	if artifact.Path == "" {
-		artifact.Path = b.RootPath
+		artifact.Path = b.BundleRootPath
 	}

 	if !filepath.IsAbs(artifact.Path) {

View File

@@ -35,21 +35,21 @@ func (m *detectPkg) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostic
 	log.Infof(ctx, "Detecting Python wheel project...")

 	// checking if there is setup.py in the bundle root
-	setupPy := filepath.Join(b.RootPath, "setup.py")
+	setupPy := filepath.Join(b.BundleRootPath, "setup.py")
 	_, err := os.Stat(setupPy)
 	if err != nil {
 		log.Infof(ctx, "No Python wheel project found at bundle root folder")
 		return nil
 	}

-	log.Infof(ctx, fmt.Sprintf("Found Python wheel project at %s", b.RootPath))
+	log.Infof(ctx, fmt.Sprintf("Found Python wheel project at %s", b.BundleRootPath))
 	module := extractModuleName(setupPy)

 	if b.Config.Artifacts == nil {
 		b.Config.Artifacts = make(map[string]*config.Artifact)
 	}

-	pkgPath, err := filepath.Abs(b.RootPath)
+	pkgPath, err := filepath.Abs(b.BundleRootPath)
 	if err != nil {
 		return diag.FromErr(err)
 	}

View File

@@ -31,22 +31,26 @@ import (
 const internalFolder = ".internal"

 type Bundle struct {
-	// RootPath contains the directory path to the root of the bundle.
+	// BundleRootPath is the local path to the root directory of the bundle.
 	// It is set when we instantiate a new bundle instance.
-	RootPath string
+	BundleRootPath string

-	// BundleRoot is a virtual filesystem path to the root of the bundle.
+	// BundleRoot is a virtual filesystem path to [BundleRootPath].
 	// Exclusively use this field for filesystem operations.
 	BundleRoot vfs.Path

-	// SyncRoot is a virtual filesystem path to the root directory of the files that are synchronized to the workspace.
-	// It can be an ancestor to [BundleRoot], but not a descendant; that is, [SyncRoot] must contain [BundleRoot].
-	SyncRoot vfs.Path
-
 	// SyncRootPath is the local path to the root directory of files that are synchronized to the workspace.
-	// It is equal to `SyncRoot.Native()` and included as dedicated field for convenient access.
+	// By default, it is the same as [BundleRootPath].
+	// If it is different, it must be an ancestor to [BundleRootPath].
+	// That is, [SyncRootPath] must contain [BundleRootPath].
 	SyncRootPath string

+	// SyncRoot is a virtual filesystem path to [SyncRootPath].
+	// Exclusively use this field for filesystem operations.
+	SyncRoot vfs.Path
+
+	// Config contains the bundle configuration.
+	// It is loaded from the bundle configuration files and mutators may update it.
 	Config config.Root

 	// Metadata about the bundle deployment. This is the interface Databricks services
@@ -84,14 +88,14 @@ type Bundle struct {
 func Load(ctx context.Context, path string) (*Bundle, error) {
 	b := &Bundle{
-		RootPath:   filepath.Clean(path),
+		BundleRootPath: filepath.Clean(path),
 		BundleRoot: vfs.MustNew(path),
 	}
 	configFile, err := config.FileNames.FindInPath(path)
 	if err != nil {
 		return nil, err
 	}
-	log.Debugf(ctx, "Found bundle root at %s (file %s)", b.RootPath, configFile)
+	log.Debugf(ctx, "Found bundle root at %s (file %s)", b.BundleRootPath, configFile)
 	return b, nil
 }
@@ -160,7 +164,7 @@ func (b *Bundle) CacheDir(ctx context.Context, paths ...string) (string, error)
 	if !exists || cacheDirName == "" {
 		cacheDirName = filepath.Join(
 			// Anchor at bundle root directory.
-			b.RootPath,
+			b.BundleRootPath,
 			// Static cache directory.
 			".databricks",
 			"bundle",
@@ -212,7 +216,7 @@ func (b *Bundle) GetSyncIncludePatterns(ctx context.Context) ([]string, error) {
 	if err != nil {
 		return nil, err
 	}
-	internalDirRel, err := filepath.Rel(b.RootPath, internalDir)
+	internalDirRel, err := filepath.Rel(b.BundleRootPath, internalDir)
 	if err != nil {
 		return nil, err
 	}

View File

@@ -21,7 +21,7 @@ func (r ReadOnlyBundle) Config() config.Root {
 }

 func (r ReadOnlyBundle) RootPath() string {
-	return r.b.RootPath
+	return r.b.BundleRootPath
 }

 func (r ReadOnlyBundle) BundleRoot() vfs.Path {

View File

@@ -79,7 +79,7 @@ func TestBundleMustLoadSuccess(t *testing.T) {
 	t.Setenv(env.RootVariable, "./tests/basic")
 	b, err := MustLoad(context.Background())
 	require.NoError(t, err)
-	assert.Equal(t, "tests/basic", filepath.ToSlash(b.RootPath))
+	assert.Equal(t, "tests/basic", filepath.ToSlash(b.BundleRootPath))
 }

 func TestBundleMustLoadFailureWithEnv(t *testing.T) {
@@ -98,7 +98,7 @@ func TestBundleTryLoadSuccess(t *testing.T) {
 	t.Setenv(env.RootVariable, "./tests/basic")
 	b, err := TryLoad(context.Background())
 	require.NoError(t, err)
-	assert.Equal(t, "tests/basic", filepath.ToSlash(b.RootPath))
+	assert.Equal(t, "tests/basic", filepath.ToSlash(b.BundleRootPath))
 }

 func TestBundleTryLoadFailureWithEnv(t *testing.T) {

View File

@@ -38,8 +38,11 @@ type Bundle struct {
 	// Annotated readonly as this should be set at the target level.
 	Mode Mode `json:"mode,omitempty" bundle:"readonly"`

-	// Overrides the compute used for jobs and other supported assets.
-	ComputeID string `json:"compute_id,omitempty"`
+	// DEPRECATED: Overrides the compute used for jobs and other supported assets.
+	ComputeId string `json:"compute_id,omitempty"`
+
+	// Overrides the cluster used for jobs and other supported assets.
+	ClusterId string `json:"cluster_id,omitempty"`

 	// Deployment section specifies deployment related configuration for bundle
 	Deployment Deployment `json:"deployment,omitempty"`

View File

@@ -1,19 +0,0 @@
-package generate
-
-import (
-	"github.com/databricks/cli/libs/dyn"
-	"github.com/databricks/databricks-sdk-go/service/dashboards"
-)
-
-func ConvertDashboardToValue(dashboard *dashboards.Dashboard, filePath string) (dyn.Value, error) {
-	// The majority of fields of the dashboard struct are read-only.
-	// We copy the relevant fields manually.
-	dv := map[string]dyn.Value{
-		"display_name":    dyn.NewValue(dashboard.DisplayName, []dyn.Location{{Line: 1}}),
-		"parent_path":     dyn.NewValue("${workspace.file_path}", []dyn.Location{{Line: 2}}),
-		"warehouse_id":    dyn.NewValue(dashboard.WarehouseId, []dyn.Location{{Line: 3}}),
-		"definition_path": dyn.NewValue(filePath, []dyn.Location{{Line: 4}}),
-	}
-
-	return dyn.V(dv), nil
-}

View File

@@ -20,7 +20,7 @@ func (m *entryPoint) Name() string {
 }

 func (m *entryPoint) Apply(_ context.Context, b *bundle.Bundle) diag.Diagnostics {
-	path, err := config.FileNames.FindInPath(b.RootPath)
+	path, err := config.FileNames.FindInPath(b.BundleRootPath)
 	if err != nil {
 		return diag.FromErr(err)
 	}

View File

@@ -18,7 +18,7 @@ func TestEntryPointNoRootPath(t *testing.T) {
 func TestEntryPoint(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: "testdata",
+		BundleRootPath: "testdata",
 	}
 	diags := bundle.Apply(context.Background(), b, loader.EntryPoint())
 	require.NoError(t, diags.Error())

View File

@@ -14,7 +14,7 @@ import (
 func TestProcessInclude(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: "testdata",
+		BundleRootPath: "testdata",
 		Config: config.Root{
 			Workspace: config.Workspace{
 				Host: "foo",
@@ -22,7 +22,7 @@ func TestProcessInclude(t *testing.T) {
 		},
 	}

-	m := loader.ProcessInclude(filepath.Join(b.RootPath, "host.yml"), "host.yml")
+	m := loader.ProcessInclude(filepath.Join(b.BundleRootPath, "host.yml"), "host.yml")
 	assert.Equal(t, "ProcessInclude(host.yml)", m.Name())

 	// Assert the host value prior to applying the mutator

View File

@@ -47,7 +47,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
 		}

 		// Anchor includes to the bundle root path.
-		matches, err := filepath.Glob(filepath.Join(b.RootPath, entry))
+		matches, err := filepath.Glob(filepath.Join(b.BundleRootPath, entry))
 		if err != nil {
 			return diag.FromErr(err)
 		}
@@ -61,7 +61,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
 		// Filter matches to ones we haven't seen yet.
 		var includes []string
 		for _, match := range matches {
-			rel, err := filepath.Rel(b.RootPath, match)
+			rel, err := filepath.Rel(b.BundleRootPath, match)
 			if err != nil {
 				return diag.FromErr(err)
 			}
@@ -76,7 +76,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
 		slices.Sort(includes)
 		files = append(files, includes...)
 		for _, include := range includes {
-			out = append(out, ProcessInclude(filepath.Join(b.RootPath, include), include))
+			out = append(out, ProcessInclude(filepath.Join(b.BundleRootPath, include), include))
 		}
 	}

View File

@@ -15,7 +15,7 @@ import (
 func TestProcessRootIncludesEmpty(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: ".",
+		BundleRootPath: ".",
 	}
 	diags := bundle.Apply(context.Background(), b, loader.ProcessRootIncludes())
 	require.NoError(t, diags.Error())
@@ -30,7 +30,7 @@ func TestProcessRootIncludesAbs(t *testing.T) {
 	}

 	b := &bundle.Bundle{
-		RootPath: ".",
+		BundleRootPath: ".",
 		Config: config.Root{
 			Include: []string{
 				"/tmp/*.yml",
@@ -44,7 +44,7 @@ func TestProcessRootIncludesAbs(t *testing.T) {
 func TestProcessRootIncludesSingleGlob(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: t.TempDir(),
+		BundleRootPath: t.TempDir(),
 		Config: config.Root{
 			Include: []string{
 				"*.yml",
@@ -52,9 +52,9 @@ func TestProcessRootIncludesSingleGlob(t *testing.T) {
 		},
 	}

-	testutil.Touch(t, b.RootPath, "databricks.yml")
-	testutil.Touch(t, b.RootPath, "a.yml")
-	testutil.Touch(t, b.RootPath, "b.yml")
+	testutil.Touch(t, b.BundleRootPath, "databricks.yml")
+	testutil.Touch(t, b.BundleRootPath, "a.yml")
+	testutil.Touch(t, b.BundleRootPath, "b.yml")

 	diags := bundle.Apply(context.Background(), b, loader.ProcessRootIncludes())
 	require.NoError(t, diags.Error())
@@ -63,7 +63,7 @@ func TestProcessRootIncludesSingleGlob(t *testing.T) {
 func TestProcessRootIncludesMultiGlob(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: t.TempDir(),
+		BundleRootPath: t.TempDir(),
 		Config: config.Root{
 			Include: []string{
 				"a*.yml",
@@ -72,8 +72,8 @@ func TestProcessRootIncludesMultiGlob(t *testing.T) {
 		},
 	}

-	testutil.Touch(t, b.RootPath, "a1.yml")
-	testutil.Touch(t, b.RootPath, "b1.yml")
+	testutil.Touch(t, b.BundleRootPath, "a1.yml")
+	testutil.Touch(t, b.BundleRootPath, "b1.yml")

 	diags := bundle.Apply(context.Background(), b, loader.ProcessRootIncludes())
 	require.NoError(t, diags.Error())
@@ -82,7 +82,7 @@ func TestProcessRootIncludesMultiGlob(t *testing.T) {
 func TestProcessRootIncludesRemoveDups(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: t.TempDir(),
+		BundleRootPath: t.TempDir(),
 		Config: config.Root{
 			Include: []string{
 				"*.yml",
@@ -91,7 +91,7 @@ func TestProcessRootIncludesRemoveDups(t *testing.T) {
 		},
 	}

-	testutil.Touch(t, b.RootPath, "a.yml")
+	testutil.Touch(t, b.BundleRootPath, "a.yml")

 	diags := bundle.Apply(context.Background(), b, loader.ProcessRootIncludes())
 	require.NoError(t, diags.Error())
@@ -100,7 +100,7 @@ func TestProcessRootIncludesNotExists(t *testing.T) {
 	b := &bundle.Bundle{
-		RootPath: t.TempDir(),
+		BundleRootPath: t.TempDir(),
 		Config: config.Root{
 			Include: []string{
 				"notexist.yml",

View File

@@ -35,8 +35,10 @@ func (m *applyPresets) Name() string {
 }

 func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
+	var diags diag.Diagnostics
+
 	if d := validatePauseStatus(b); d != nil {
-		return d
+		diags = diags.Extend(d)
 	}

 	r := b.Config.Resources
@@ -45,7 +47,11 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
 	tags := toTagArray(t.Tags)

 	// Jobs presets: Prefix, Tags, JobsMaxConcurrentRuns, TriggerPauseStatus
-	for _, j := range r.Jobs {
+	for key, j := range r.Jobs {
+		if j.JobSettings == nil {
+			diags = diags.Extend(diag.Errorf("job %s is not defined", key))
+			continue
+		}
 		j.Name = prefix + j.Name
 		if j.Tags == nil {
 			j.Tags = make(map[string]string)
@@ -77,20 +83,27 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
 	}

 	// Pipelines presets: Prefix, PipelinesDevelopment
-	for i := range r.Pipelines {
-		r.Pipelines[i].Name = prefix + r.Pipelines[i].Name
+	for key, p := range r.Pipelines {
+		if p.PipelineSpec == nil {
+			diags = diags.Extend(diag.Errorf("pipeline %s is not defined", key))
+			continue
+		}
+		p.Name = prefix + p.Name
 		if config.IsExplicitlyEnabled(t.PipelinesDevelopment) {
-			r.Pipelines[i].Development = true
+			p.Development = true
 		}
 		if t.TriggerPauseStatus == config.Paused {
-			r.Pipelines[i].Continuous = false
+			p.Continuous = false
 		}
 		// As of 2024-06, pipelines don't yet support tags
 	}

 	// Models presets: Prefix, Tags
-	for _, m := range r.Models {
+	for key, m := range r.Models {
+		if m.Model == nil {
+			diags = diags.Extend(diag.Errorf("model %s is not defined", key))
+			continue
+		}
 		m.Name = prefix + m.Name
 		for _, t := range tags {
 			exists := slices.ContainsFunc(m.Tags, func(modelTag ml.ModelTag) bool {
@@ -104,7 +117,11 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
 	}

 	// Experiments presets: Prefix, Tags
-	for _, e := range r.Experiments {
+	for key, e := range r.Experiments {
+		if e.Experiment == nil {
+			diags = diags.Extend(diag.Errorf("experiment %s is not defined", key))
+			continue
+		}
 		filepath := e.Name
 		dir := path.Dir(filepath)
 		base := path.Base(filepath)
@@ -128,44 +145,79 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
 	}

 	// Model serving endpoint presets: Prefix
-	for i := range r.ModelServingEndpoints {
-		r.ModelServingEndpoints[i].Name = normalizePrefix(prefix) + r.ModelServingEndpoints[i].Name
+	for key, e := range r.ModelServingEndpoints {
+		if e.CreateServingEndpoint == nil {
+			diags = diags.Extend(diag.Errorf("model serving endpoint %s is not defined", key))
+			continue
+		}
+		e.Name = normalizePrefix(prefix) + e.Name
 		// As of 2024-06, model serving endpoints don't yet support tags
 	}

 	// Registered models presets: Prefix
-	for i := range r.RegisteredModels {
-		r.RegisteredModels[i].Name = normalizePrefix(prefix) + r.RegisteredModels[i].Name
+	for key, m := range r.RegisteredModels {
+		if m.CreateRegisteredModelRequest == nil {
+			diags = diags.Extend(diag.Errorf("registered model %s is not defined", key))
+			continue
+		}
+		m.Name = normalizePrefix(prefix) + m.Name
 		// As of 2024-06, registered models don't yet support tags
 	}

-	// Quality monitors presets: Prefix
+	// Quality monitors presets: Schedule
 	if t.TriggerPauseStatus == config.Paused {
-		for i := range r.QualityMonitors {
+		for key, q := range r.QualityMonitors {
+			if q.CreateMonitor == nil {
+				diags = diags.Extend(diag.Errorf("quality monitor %s is not defined", key))
+				continue
+			}
 			// Remove all schedules from monitors, since they don't support pausing/unpausing.
 			// Quality monitors might support the "pause" property in the future, so at the
 			// CLI level we do respect that property if it is set to "unpaused."
-			if r.QualityMonitors[i].Schedule != nil && r.QualityMonitors[i].Schedule.PauseStatus != catalog.MonitorCronSchedulePauseStatusUnpaused {
-				r.QualityMonitors[i].Schedule = nil
+			if q.Schedule != nil && q.Schedule.PauseStatus != catalog.MonitorCronSchedulePauseStatusUnpaused {
+				q.Schedule = nil
 			}
 		}
 	}

 	// Schemas: Prefix
-	for i := range r.Schemas {
-		r.Schemas[i].Name = normalizePrefix(prefix) + r.Schemas[i].Name
+	for key, s := range r.Schemas {
+		if s.CreateSchema == nil {
+			diags = diags.Extend(diag.Errorf("schema %s is not defined", key))
+			continue
+		}
+		s.Name = normalizePrefix(prefix) + s.Name
 		// HTTP API for schemas doesn't yet support tags. It's only supported in
 		// the Databricks UI and via the SQL API.
 	}

+	// Clusters: Prefix, Tags
+	for key, c := range r.Clusters {
+		if c.ClusterSpec == nil {
+			diags = diags.Extend(diag.Errorf("cluster %s is not defined", key))
+			continue
+		}
+		c.ClusterName = prefix + c.ClusterName
+		if c.CustomTags == nil {
+			c.CustomTags = make(map[string]string)
+		}
+		for _, tag := range tags {
+			normalisedKey := b.Tagging.NormalizeKey(tag.Key)
+			normalisedValue := b.Tagging.NormalizeValue(tag.Value)
+			if _, ok := c.CustomTags[normalisedKey]; !ok {
+				c.CustomTags[normalisedKey] = normalisedValue
+			}
+		}
+	}
+
 	// Dashboards: Prefix
 	for i := range r.Dashboards {
 		r.Dashboards[i].DisplayName = prefix + r.Dashboards[i].DisplayName
 	}

-	return nil
+	return diags
 }

 func validatePauseStatus(b *bundle.Bundle) diag.Diagnostics {
func validatePauseStatus(b *bundle.Bundle) diag.Diagnostics { func validatePauseStatus(b *bundle.Bundle) diag.Diagnostics {
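The hunk above repeats one pattern per resource type: guard against a nil embedded spec (so a stub like `job1: {}` produces a diagnostic instead of a panic), apply the name prefix, and merge default tags without clobbering keys the user already set. A minimal standalone sketch of that tag-merge rule, using plain maps rather than the CLI's tagging types:

```go
package main

import "fmt"

// mergeDefaultTags copies default tags into dst only when a key is not
// already present, so user-defined custom tags always win.
func mergeDefaultTags(dst, defaults map[string]string) {
	for k, v := range defaults {
		if _, ok := dst[k]; !ok {
			dst[k] = v
		}
	}
}

func main() {
	userTags := map[string]string{"team": "data-eng"} // hypothetical user tag
	mergeDefaultTags(userTags, map[string]string{"team": "default", "env": "dev"})
	fmt.Println(userTags) // map[env:dev team:data-eng]; the user's "team" value is kept
}
```

Because each guard uses `continue` rather than returning, one malformed resource does not prevent presets from being applied to the others.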

View File

@ -251,3 +251,116 @@ func TestApplyPresetsJobsMaxConcurrentRuns(t *testing.T) {
		})
	}
}
func TestApplyPresetsPrefixWithoutJobSettings(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {}, // no jobsettings inside
},
},
Presets: config.Presets{
NamePrefix: "prefix-",
},
},
}
ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.ApplyPresets())
require.ErrorContains(t, diags.Error(), "job job1 is not defined")
}
func TestApplyPresetsResourceNotDefined(t *testing.T) {
tests := []struct {
resources config.Resources
error string
}{
{
resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {}, // no jobsettings inside
},
},
error: "job job1 is not defined",
},
{
resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"pipeline1": {}, // no pipelinespec inside
},
},
error: "pipeline pipeline1 is not defined",
},
{
resources: config.Resources{
Models: map[string]*resources.MlflowModel{
"model1": {}, // no model inside
},
},
error: "model model1 is not defined",
},
{
resources: config.Resources{
Experiments: map[string]*resources.MlflowExperiment{
"experiment1": {}, // no experiment inside
},
},
error: "experiment experiment1 is not defined",
},
{
resources: config.Resources{
ModelServingEndpoints: map[string]*resources.ModelServingEndpoint{
"endpoint1": {}, // no CreateServingEndpoint inside
},
RegisteredModels: map[string]*resources.RegisteredModel{
"model1": {}, // no CreateRegisteredModelRequest inside
},
},
error: "model serving endpoint endpoint1 is not defined",
},
{
resources: config.Resources{
QualityMonitors: map[string]*resources.QualityMonitor{
"monitor1": {}, // no CreateMonitor inside
},
},
error: "quality monitor monitor1 is not defined",
},
{
resources: config.Resources{
Schemas: map[string]*resources.Schema{
"schema1": {}, // no CreateSchema inside
},
},
error: "schema schema1 is not defined",
},
{
resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"cluster1": {}, // no ClusterSpec inside
},
},
error: "cluster cluster1 is not defined",
},
}
for _, tt := range tests {
t.Run(tt.error, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: tt.resources,
Presets: config.Presets{
TriggerPauseStatus: config.Paused,
},
},
}
ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.ApplyPresets())
require.ErrorContains(t, diags.Error(), tt.error)
})
}
}

View File

@ -0,0 +1,87 @@
package mutator
import (
"context"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
)
type computeIdToClusterId struct{}
func ComputeIdToClusterId() bundle.Mutator {
return &computeIdToClusterId{}
}
func (m *computeIdToClusterId) Name() string {
return "ComputeIdToClusterId"
}
func (m *computeIdToClusterId) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics
// The "compute_id" key is set; rewrite it to "cluster_id".
err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
v, d := rewriteComputeIdToClusterId(v, dyn.NewPath(dyn.Key("bundle")))
diags = diags.Extend(d)
// Check if the "compute_id" key is set in any target overrides.
return dyn.MapByPattern(v, dyn.NewPattern(dyn.Key("targets"), dyn.AnyKey()), func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
v, d := rewriteComputeIdToClusterId(v, dyn.Path{})
diags = diags.Extend(d)
return v, nil
})
})
diags = diags.Extend(diag.FromErr(err))
return diags
}
func rewriteComputeIdToClusterId(v dyn.Value, p dyn.Path) (dyn.Value, diag.Diagnostics) {
var diags diag.Diagnostics
computeIdPath := p.Append(dyn.Key("compute_id"))
computeId, err := dyn.GetByPath(v, computeIdPath)
// If the "compute_id" key is not set, we don't need to do anything.
if err != nil {
return v, nil
}
if computeId.Kind() == dyn.KindInvalid {
return v, nil
}
diags = diags.Append(diag.Diagnostic{
Severity: diag.Warning,
Summary: "compute_id is deprecated, please use cluster_id instead",
Locations: computeId.Locations(),
Paths: []dyn.Path{computeIdPath},
})
clusterIdPath := p.Append(dyn.Key("cluster_id"))
nv, err := dyn.SetByPath(v, clusterIdPath, computeId)
if err != nil {
return dyn.InvalidValue, diag.FromErr(err)
}
// Drop the "compute_id" key.
vout, err := dyn.Walk(nv, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
switch len(p) {
case 0:
return v, nil
case 1:
if p[0] == dyn.Key("compute_id") {
return v, dyn.ErrDrop
}
return v, nil
case 2:
if p[1] == dyn.Key("compute_id") {
return v, dyn.ErrDrop
}
}
return v, dyn.ErrSkip
})
diags = diags.Extend(diag.FromErr(err))
return vout, diags
}
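The mutator applies the rewrite twice: once under the top-level `bundle` key and once per target override, emitting the deprecation warning each time. A plain-map sketch of the rename-and-drop step (this does not use the dyn library, and the cluster ID is a placeholder):

```go
package main

import "fmt"

// renameKey moves the value at oldKey to newKey and drops oldKey,
// mirroring the compute_id -> cluster_id rewrite at a single level.
// It reports whether a rewrite happened so the caller can warn.
func renameKey(m map[string]any, oldKey, newKey string) bool {
	v, ok := m[oldKey]
	if !ok {
		return false // nothing to rewrite
	}
	m[newKey] = v
	delete(m, oldKey)
	return true
}

func main() {
	bundleCfg := map[string]any{"compute_id": "0000-000000-abcdefgh"} // placeholder ID
	if renameKey(bundleCfg, "compute_id", "cluster_id") {
		fmt.Println("warning: compute_id is deprecated, please use cluster_id instead")
	}
	fmt.Println(bundleCfg) // map[cluster_id:0000-000000-abcdefgh]
}
```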

View File

@ -0,0 +1,57 @@
package mutator_test
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/libs/diag"
"github.com/stretchr/testify/assert"
)
func TestComputeIdToClusterId(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Bundle: config.Bundle{
ComputeId: "compute-id",
},
},
}
diags := bundle.Apply(context.Background(), b, mutator.ComputeIdToClusterId())
assert.NoError(t, diags.Error())
assert.Equal(t, "compute-id", b.Config.Bundle.ClusterId)
assert.Empty(t, b.Config.Bundle.ComputeId)
assert.Len(t, diags, 1)
assert.Equal(t, "compute_id is deprecated, please use cluster_id instead", diags[0].Summary)
assert.Equal(t, diag.Warning, diags[0].Severity)
}
func TestComputeIdToClusterIdInTargetOverride(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Targets: map[string]*config.Target{
"dev": {
ComputeId: "compute-id-dev",
},
},
},
}
diags := bundle.Apply(context.Background(), b, mutator.ComputeIdToClusterId())
assert.NoError(t, diags.Error())
assert.Empty(t, b.Config.Targets["dev"].ComputeId)
diags = diags.Extend(bundle.Apply(context.Background(), b, mutator.SelectTarget("dev")))
assert.NoError(t, diags.Error())
assert.Equal(t, "compute-id-dev", b.Config.Bundle.ClusterId)
assert.Empty(t, b.Config.Bundle.ComputeId)
assert.Len(t, diags, 1)
assert.Equal(t, "compute_id is deprecated, please use cluster_id instead", diags[0].Summary)
assert.Equal(t, diag.Warning, diags[0].Severity)
}

View File

@ -0,0 +1,70 @@
package mutator
import (
"context"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
)
type configureDashboardDefaults struct{}
func ConfigureDashboardDefaults() bundle.Mutator {
return &configureDashboardDefaults{}
}
func (m *configureDashboardDefaults) Name() string {
return "ConfigureDashboardDefaults"
}
func (m *configureDashboardDefaults) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics
pattern := dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("dashboards"),
dyn.AnyKey(),
)
// Configure defaults for all dashboards.
err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
return dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
var err error
v, err = setIfNotExists(v, dyn.NewPath(dyn.Key("parent_path")), dyn.V(b.Config.Workspace.ResourcePath))
if err != nil {
return dyn.InvalidValue, err
}
v, err = setIfNotExists(v, dyn.NewPath(dyn.Key("embed_credentials")), dyn.V(false))
if err != nil {
return dyn.InvalidValue, err
}
return v, nil
})
})
diags = diags.Extend(diag.FromErr(err))
return diags
}
func setIfNotExists(v dyn.Value, path dyn.Path, defaultValue dyn.Value) (dyn.Value, error) {
// Get the field at the specified path (if set).
_, err := dyn.GetByPath(v, path)
switch {
case dyn.IsNoSuchKeyError(err):
// OK, we'll set the default value.
break
case dyn.IsCannotTraverseNilError(err):
// Cannot traverse the value, skip it.
return v, nil
case err == nil:
// The field is set, skip it.
return v, nil
default:
// Return the error.
return v, err
}
// Set the field at the specified path.
return dyn.SetByPath(v, path, defaultValue)
}
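Note the asymmetry in setIfNotExists: only a genuinely missing key receives the default, while a key that is set (even to an empty string) and a nil parent value are both left untouched. A minimal sketch of that rule over a plain map (the workspace path is a placeholder):

```go
package main

import "fmt"

// setDefault writes defaultValue under key only when the key is absent;
// an existing value, including an empty string, is preserved.
func setDefault(m map[string]any, key string, defaultValue any) {
	if _, ok := m[key]; ok {
		return
	}
	m[key] = defaultValue
}

func main() {
	dashboard := map[string]any{"parent_path": ""} // explicitly empty: kept as-is
	setDefault(dashboard, "parent_path", "/Workspace/Users/someone/resources")
	setDefault(dashboard, "embed_credentials", false)
	fmt.Println(dashboard) // map[embed_credentials:false parent_path:]
}
```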

View File

@ -0,0 +1,125 @@
package mutator_test
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dyn"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigureDashboardDefaultsParentPath(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
ResourcePath: "/foo/bar",
},
Resources: config.Resources{
Dashboards: map[string]*resources.Dashboard{
"d1": {
// Empty string is skipped.
// See below for how it is set.
ParentPath: "",
},
"d2": {
// Non-empty string is skipped.
ParentPath: "already-set",
},
"d3": {
// No parent path set.
},
"d4": nil,
},
},
},
}
// We can't set an empty string in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.dashboards.d1.parent_path", dyn.V(""))
})
diags := bundle.Apply(context.Background(), b, mutator.ConfigureDashboardDefaults())
require.NoError(t, diags.Error())
var v dyn.Value
var err error
// Set to empty string; unchanged.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d1.parent_path")
if assert.NoError(t, err) {
assert.Equal(t, "", v.MustString())
}
// Set to "already-set"; unchanged.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d2.parent_path")
if assert.NoError(t, err) {
assert.Equal(t, "already-set", v.MustString())
}
// Not set; now set to the workspace resource path.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d3.parent_path")
if assert.NoError(t, err) {
assert.Equal(t, "/foo/bar", v.MustString())
}
// No valid dashboard; no change.
_, err = dyn.Get(b.Config.Value(), "resources.dashboards.d4.parent_path")
assert.True(t, dyn.IsCannotTraverseNilError(err))
}
func TestConfigureDashboardDefaultsEmbedCredentials(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Dashboards: map[string]*resources.Dashboard{
"d1": {
EmbedCredentials: true,
},
"d2": {
EmbedCredentials: false,
},
"d3": {
// No embed_credentials set.
},
"d4": nil,
},
},
},
}
diags := bundle.Apply(context.Background(), b, mutator.ConfigureDashboardDefaults())
require.NoError(t, diags.Error())
var v dyn.Value
var err error
// Set to true; still true.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d1.embed_credentials")
if assert.NoError(t, err) {
assert.Equal(t, true, v.MustBool())
}
// Set to false; still false.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d2.embed_credentials")
if assert.NoError(t, err) {
assert.Equal(t, false, v.MustBool())
}
// Not set; now false.
v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d3.embed_credentials")
if assert.NoError(t, err) {
assert.Equal(t, false, v.MustBool())
}
// No valid dashboard; no change.
_, err = dyn.Get(b.Config.Value(), "resources.dashboards.d4.embed_credentials")
assert.True(t, dyn.IsCannotTraverseNilError(err))
}

View File

@ -29,6 +29,10 @@ func (m *defineDefaultWorkspacePaths) Apply(ctx context.Context, b *bundle.Bundl
		b.Config.Workspace.FilePath = path.Join(root, "files")
	}

	if b.Config.Workspace.ResourcePath == "" {
		b.Config.Workspace.ResourcePath = path.Join(root, "resources")
	}

	if b.Config.Workspace.ArtifactPath == "" {
		b.Config.Workspace.ArtifactPath = path.Join(root, "artifacts")
	}

View File

@ -22,6 +22,7 @@ func TestDefineDefaultWorkspacePaths(t *testing.T) {
	diags := bundle.Apply(context.Background(), b, mutator.DefineDefaultWorkspacePaths())
	require.NoError(t, diags.Error())
	assert.Equal(t, "/files", b.Config.Workspace.FilePath)
	assert.Equal(t, "/resources", b.Config.Workspace.ResourcePath)
	assert.Equal(t, "/artifacts", b.Config.Workspace.ArtifactPath)
	assert.Equal(t, "/state", b.Config.Workspace.StatePath)
}
@ -32,6 +33,7 @@ func TestDefineDefaultWorkspacePathsAlreadySet(t *testing.T) {
			Workspace: config.Workspace{
				RootPath:     "/",
				FilePath:     "/foo/bar",
				ResourcePath: "/foo/bar",
				ArtifactPath: "/foo/bar",
				StatePath:    "/foo/bar",
			},
@ -40,6 +42,7 @@ func TestDefineDefaultWorkspacePathsAlreadySet(t *testing.T) {
	diags := bundle.Apply(context.Background(), b, mutator.DefineDefaultWorkspacePaths())
	require.NoError(t, diags.Error())
	assert.Equal(t, "/foo/bar", b.Config.Workspace.FilePath)
	assert.Equal(t, "/foo/bar", b.Config.Workspace.ResourcePath)
	assert.Equal(t, "/foo/bar", b.Config.Workspace.ArtifactPath)
	assert.Equal(t, "/foo/bar", b.Config.Workspace.StatePath)
}

View File

@ -10,6 +10,7 @@ import (
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest" "github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/compute" "github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/pipelines" "github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
@ -41,7 +42,7 @@ func TestExpandGlobPathsInPipelines(t *testing.T) {
touchEmptyFile(t, filepath.Join(dir, "skip/test7.py")) touchEmptyFile(t, filepath.Join(dir, "skip/test7.py"))
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: dir, BundleRootPath: dir,
Config: config.Root{ Config: config.Root{
Resources: config.Resources{ Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{ Pipelines: map[string]*resources.Pipeline{
@ -105,8 +106,8 @@ func TestExpandGlobPathsInPipelines(t *testing.T) {
}, },
} }
bundletest.SetLocation(b, ".", filepath.Join(dir, "resource.yml")) bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})
bundletest.SetLocation(b, "resources.pipelines.pipeline.libraries[3]", filepath.Join(dir, "relative", "resource.yml")) bundletest.SetLocation(b, "resources.pipelines.pipeline.libraries[3]", []dyn.Location{{File: filepath.Join(dir, "relative", "resource.yml")}})
m := ExpandPipelineGlobPaths() m := ExpandPipelineGlobPaths()
diags := bundle.Apply(context.Background(), b, m) diags := bundle.Apply(context.Background(), b, m)

View File

@ -56,7 +56,7 @@ func (m *loadGitDetails) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagn
	}

	// Compute relative path of the bundle root from the Git repo root.
	absBundlePath, err := filepath.Abs(b.BundleRootPath)
	if err != nil {
		return diag.FromErr(err)
	}

View File

@ -23,6 +23,7 @@ func DefaultMutators() []bundle.Mutator {
		VerifyCliVersion(),
		EnvironmentsToTargets(),
		ComputeIdToClusterId(),
		InitializeVariables(),
		DefineDefaultTarget(),
		LoadGitDetails(),

View File

@ -39,22 +39,22 @@ func overrideJobCompute(j *resources.Job, compute string) {
func (m *overrideCompute) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
	if b.Config.Bundle.Mode != config.Development {
		if b.Config.Bundle.ClusterId != "" {
			return diag.Errorf("cannot override compute for an target that does not use 'mode: development'")
		}
		return nil
	}
	if v := env.Get(ctx, "DATABRICKS_CLUSTER_ID"); v != "" {
		b.Config.Bundle.ClusterId = v
	}

	if b.Config.Bundle.ClusterId == "" {
		return nil
	}

	r := b.Config.Resources
	for i := range r.Jobs {
		overrideJobCompute(r.Jobs[i], b.Config.Bundle.ClusterId)
	}

	return nil

View File

@ -20,7 +20,7 @@ func TestOverrideDevelopment(t *testing.T) {
		Config: config.Root{
			Bundle: config.Bundle{
				Mode:      config.Development,
				ClusterId: "newClusterID",
			},
			Resources: config.Resources{
				Jobs: map[string]*resources.Job{
@ -144,7 +144,7 @@ func TestOverrideProduction(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Bundle: config.Bundle{
				ClusterId: "newClusterID",
			},
			Resources: config.Resources{
				Jobs: map[string]*resources.Job{

View File

@ -0,0 +1,115 @@
package paths
import (
"github.com/databricks/cli/bundle/libraries"
"github.com/databricks/cli/libs/dyn"
)
type jobRewritePattern struct {
pattern dyn.Pattern
kind PathKind
skipRewrite func(string) bool
}
func noSkipRewrite(string) bool {
return false
}
func jobTaskRewritePatterns(base dyn.Pattern) []jobRewritePattern {
return []jobRewritePattern{
{
base.Append(dyn.Key("notebook_task"), dyn.Key("notebook_path")),
PathKindNotebook,
noSkipRewrite,
},
{
base.Append(dyn.Key("spark_python_task"), dyn.Key("python_file")),
PathKindWorkspaceFile,
noSkipRewrite,
},
{
base.Append(dyn.Key("dbt_task"), dyn.Key("project_directory")),
PathKindDirectory,
noSkipRewrite,
},
{
base.Append(dyn.Key("sql_task"), dyn.Key("file"), dyn.Key("path")),
PathKindWorkspaceFile,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("whl")),
PathKindLibrary,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("jar")),
PathKindLibrary,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("requirements")),
PathKindWorkspaceFile,
noSkipRewrite,
},
}
}
func jobRewritePatterns() []jobRewritePattern {
// Base pattern to match all tasks in all jobs.
base := dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("jobs"),
dyn.AnyKey(),
dyn.Key("tasks"),
dyn.AnyIndex(),
)
// Compile list of patterns and their respective rewrite functions.
jobEnvironmentsPatterns := []jobRewritePattern{
{
dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("jobs"),
dyn.AnyKey(),
dyn.Key("environments"),
dyn.AnyIndex(),
dyn.Key("spec"),
dyn.Key("dependencies"),
dyn.AnyIndex(),
),
PathKindWithPrefix,
func(s string) bool {
return !libraries.IsLibraryLocal(s)
},
},
}
taskPatterns := jobTaskRewritePatterns(base)
forEachPatterns := jobTaskRewritePatterns(base.Append(dyn.Key("for_each_task"), dyn.Key("task")))
allPatterns := append(taskPatterns, jobEnvironmentsPatterns...)
allPatterns = append(allPatterns, forEachPatterns...)
return allPatterns
}
// VisitJobPaths visits all paths in job resources and applies a function to each path.
func VisitJobPaths(value dyn.Value, fn VisitFunc) (dyn.Value, error) {
var err error
var newValue = value
for _, rewritePattern := range jobRewritePatterns() {
newValue, err = dyn.MapByPattern(newValue, rewritePattern.pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
if rewritePattern.skipRewrite(v.MustString()) {
return v, nil
}
return fn(p, rewritePattern.kind, v)
})
if err != nil {
return dyn.InvalidValue, err
}
}
return newValue, nil
}
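As a usage sketch (written as if it lived inside this paths package), a visitor that collects notebook paths and leaves every value unchanged:

```go
// collectNotebookPaths gathers every notebook path referenced by any job task,
// including tasks nested under for_each_task, by filtering on the path kind.
func collectNotebookPaths(v dyn.Value) ([]string, error) {
	var out []string
	_, err := VisitJobPaths(v, func(p dyn.Path, kind PathKind, v dyn.Value) (dyn.Value, error) {
		if kind == PathKindNotebook {
			out = append(out, v.MustString())
		}
		return v, nil // returning the value unchanged; a visitor may also rewrite it
	})
	return out, err
}
```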

View File

@ -0,0 +1,168 @@
package paths
import (
"testing"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/dyn"
assert "github.com/databricks/cli/libs/dyn/dynassert"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/require"
)
func TestVisitJobPaths(t *testing.T) {
task0 := jobs.Task{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "abc",
},
}
task1 := jobs.Task{
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: "abc",
},
}
task2 := jobs.Task{
DbtTask: &jobs.DbtTask{
ProjectDirectory: "abc",
},
}
task3 := jobs.Task{
SqlTask: &jobs.SqlTask{
File: &jobs.SqlTaskFile{
Path: "abc",
},
},
}
task4 := jobs.Task{
Libraries: []compute.Library{
{Whl: "dist/foo.whl"},
},
}
task5 := jobs.Task{
Libraries: []compute.Library{
{Jar: "dist/foo.jar"},
},
}
task6 := jobs.Task{
Libraries: []compute.Library{
{Requirements: "requirements.txt"},
},
}
job0 := &resources.Job{
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
task0,
task1,
task2,
task3,
task4,
task5,
task6,
},
},
}
root := config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job0": job0,
},
},
}
actual := visitJobPaths(t, root)
expected := []dyn.Path{
dyn.MustPathFromString("resources.jobs.job0.tasks[0].notebook_task.notebook_path"),
dyn.MustPathFromString("resources.jobs.job0.tasks[1].spark_python_task.python_file"),
dyn.MustPathFromString("resources.jobs.job0.tasks[2].dbt_task.project_directory"),
dyn.MustPathFromString("resources.jobs.job0.tasks[3].sql_task.file.path"),
dyn.MustPathFromString("resources.jobs.job0.tasks[4].libraries[0].whl"),
dyn.MustPathFromString("resources.jobs.job0.tasks[5].libraries[0].jar"),
dyn.MustPathFromString("resources.jobs.job0.tasks[6].libraries[0].requirements"),
}
assert.ElementsMatch(t, expected, actual)
}
func TestVisitJobPaths_environments(t *testing.T) {
environment0 := jobs.JobEnvironment{
Spec: &compute.Environment{
Dependencies: []string{
"dist_0/*.whl",
"dist_1/*.whl",
},
},
}
job0 := &resources.Job{
JobSettings: &jobs.JobSettings{
Environments: []jobs.JobEnvironment{
environment0,
},
},
}
root := config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job0": job0,
},
},
}
actual := visitJobPaths(t, root)
expected := []dyn.Path{
dyn.MustPathFromString("resources.jobs.job0.environments[0].spec.dependencies[0]"),
dyn.MustPathFromString("resources.jobs.job0.environments[0].spec.dependencies[1]"),
}
assert.ElementsMatch(t, expected, actual)
}
func TestVisitJobPaths_foreach(t *testing.T) {
task0 := jobs.Task{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "abc",
},
},
},
}
job0 := &resources.Job{
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
task0,
},
},
}
root := config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job0": job0,
},
},
}
actual := visitJobPaths(t, root)
expected := []dyn.Path{
dyn.MustPathFromString("resources.jobs.job0.tasks[0].for_each_task.task.notebook_task.notebook_path"),
}
assert.ElementsMatch(t, expected, actual)
}
func visitJobPaths(t *testing.T, root config.Root) []dyn.Path {
var actual []dyn.Path
err := root.Mutate(func(value dyn.Value) (dyn.Value, error) {
return VisitJobPaths(value, func(p dyn.Path, kind PathKind, v dyn.Value) (dyn.Value, error) {
actual = append(actual, p)
return v, nil
})
})
require.NoError(t, err)
return actual
}

View File

@ -0,0 +1,26 @@
package paths
import "github.com/databricks/cli/libs/dyn"
type PathKind int
const (
// PathKindLibrary is a path to a library file
PathKindLibrary = iota
// PathKindNotebook is a path to a notebook file
PathKindNotebook
// PathKindWorkspaceFile is a path to a regular workspace file,
// notebooks are not allowed because they are uploaded as a special
// kind of workspace object.
PathKindWorkspaceFile
// PathKindWithPrefix is a path that starts with './'
PathKindWithPrefix
// PathKindDirectory is a path to a directory
PathKindDirectory
)
type VisitFunc func(path dyn.Path, kind PathKind, value dyn.Value) (dyn.Value, error)

View File

@ -33,7 +33,7 @@ func (m *populateCurrentUser) Apply(ctx context.Context, b *bundle.Bundle) diag.
	}

	b.Config.Workspace.CurrentUser = &config.User{
		ShortName: auth.GetShortUserName(me),
		User:      me,
	}

View File

@ -118,15 +118,18 @@ func findNonUserPath(b *bundle.Bundle) string {
	if b.Config.Workspace.RootPath != "" && !containsName(b.Config.Workspace.RootPath) {
		return "root_path"
	}
	if b.Config.Workspace.StatePath != "" && !containsName(b.Config.Workspace.StatePath) {
		return "state_path"
	}
	if b.Config.Workspace.FilePath != "" && !containsName(b.Config.Workspace.FilePath) {
		return "file_path"
	}
	if b.Config.Workspace.ResourcePath != "" && !containsName(b.Config.Workspace.ResourcePath) {
		return "resource_path"
	}
	if b.Config.Workspace.ArtifactPath != "" && !containsName(b.Config.Workspace.ArtifactPath) {
		return "artifact_path"
	}
	return ""
}

View File

@ -13,6 +13,7 @@ import (
"github.com/databricks/cli/libs/tags" "github.com/databricks/cli/libs/tags"
sdkconfig "github.com/databricks/databricks-sdk-go/config" sdkconfig "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/service/catalog" "github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/iam" "github.com/databricks/databricks-sdk-go/service/iam"
"github.com/databricks/databricks-sdk-go/service/jobs" "github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/ml" "github.com/databricks/databricks-sdk-go/service/ml"
@ -119,6 +120,9 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
Schemas: map[string]*resources.Schema{ Schemas: map[string]*resources.Schema{
"schema1": {CreateSchema: &catalog.CreateSchema{Name: "schema1"}}, "schema1": {CreateSchema: &catalog.CreateSchema{Name: "schema1"}},
}, },
Clusters: map[string]*resources.Cluster{
"cluster1": {ClusterSpec: &compute.ClusterSpec{ClusterName: "cluster1", SparkVersion: "13.2.x", NumWorkers: 1}},
},
}, },
}, },
// Use AWS implementation for testing. // Use AWS implementation for testing.
@ -177,6 +181,9 @@ func TestProcessTargetModeDevelopment(t *testing.T) {
// Schema 1 // Schema 1
assert.Equal(t, "dev_lennart_schema1", b.Config.Resources.Schemas["schema1"].Name) assert.Equal(t, "dev_lennart_schema1", b.Config.Resources.Schemas["schema1"].Name)
// Clusters
assert.Equal(t, "[dev lennart] cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
} }
func TestProcessTargetModeDevelopmentTagNormalizationForAws(t *testing.T) { func TestProcessTargetModeDevelopmentTagNormalizationForAws(t *testing.T) {
@ -281,6 +288,7 @@ func TestProcessTargetModeDefault(t *testing.T) {
assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name) assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name)
assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name) assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name)
assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName) assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName)
assert.Equal(t, "cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
} }
func TestProcessTargetModeProduction(t *testing.T) { func TestProcessTargetModeProduction(t *testing.T) {
@ -312,6 +320,7 @@ func TestProcessTargetModeProduction(t *testing.T) {
b.Config.Resources.Experiments["experiment2"].Permissions = permissions b.Config.Resources.Experiments["experiment2"].Permissions = permissions
b.Config.Resources.Models["model1"].Permissions = permissions b.Config.Resources.Models["model1"].Permissions = permissions
b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Permissions = permissions b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Permissions = permissions
b.Config.Resources.Clusters["cluster1"].Permissions = permissions
diags = validateProductionMode(context.Background(), b, false) diags = validateProductionMode(context.Background(), b, false)
require.NoError(t, diags.Error()) require.NoError(t, diags.Error())
@ -322,6 +331,7 @@ func TestProcessTargetModeProduction(t *testing.T) {
assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name) assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name)
assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name) assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name)
assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName) assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName)
assert.Equal(t, "cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
} }
func TestProcessTargetModeProductionOkForPrincipal(t *testing.T) { func TestProcessTargetModeProductionOkForPrincipal(t *testing.T) {

View File

@ -108,7 +108,7 @@ func (m *pythonMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagno
		return dyn.InvalidValue, fmt.Errorf("failed to create cache dir: %w", err)
	}

	rightRoot, diags := m.runPythonMutator(ctx, cacheDir, b.BundleRootPath, pythonPath, leftRoot)
	mutateDiags = diags
	if diags.HasError() {
		return dyn.InvalidValue, mutateDiagsHasError

View File

@ -45,15 +45,15 @@ func (m *rewriteSyncPaths) makeRelativeTo(root string) dyn.MapFunc {
func (m *rewriteSyncPaths) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
	err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
		return dyn.Map(v, "sync", func(_ dyn.Path, v dyn.Value) (nv dyn.Value, err error) {
			v, err = dyn.Map(v, "paths", dyn.Foreach(m.makeRelativeTo(b.BundleRootPath)))
			if err != nil {
				return dyn.InvalidValue, err
			}
			v, err = dyn.Map(v, "include", dyn.Foreach(m.makeRelativeTo(b.BundleRootPath)))
			if err != nil {
				return dyn.InvalidValue, err
			}
			v, err = dyn.Map(v, "exclude", dyn.Foreach(m.makeRelativeTo(b.BundleRootPath)))
			if err != nil {
				return dyn.InvalidValue, err
			}

View File

@ -9,12 +9,13 @@ import (
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator" "github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/internal/bundletest" "github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dyn"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
func TestRewriteSyncPathsRelative(t *testing.T) { func TestRewriteSyncPathsRelative(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: ".", BundleRootPath: ".",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -33,12 +34,12 @@ func TestRewriteSyncPathsRelative(t *testing.T) {
}, },
} }
bundletest.SetLocation(b, "sync.paths[0]", "./databricks.yml") bundletest.SetLocation(b, "sync.paths[0]", []dyn.Location{{File: "./databricks.yml"}})
bundletest.SetLocation(b, "sync.paths[1]", "./databricks.yml") bundletest.SetLocation(b, "sync.paths[1]", []dyn.Location{{File: "./databricks.yml"}})
bundletest.SetLocation(b, "sync.include[0]", "./file.yml") bundletest.SetLocation(b, "sync.include[0]", []dyn.Location{{File: "./file.yml"}})
bundletest.SetLocation(b, "sync.include[1]", "./a/file.yml") bundletest.SetLocation(b, "sync.include[1]", []dyn.Location{{File: "./a/file.yml"}})
bundletest.SetLocation(b, "sync.exclude[0]", "./a/b/file.yml") bundletest.SetLocation(b, "sync.exclude[0]", []dyn.Location{{File: "./a/b/file.yml"}})
bundletest.SetLocation(b, "sync.exclude[1]", "./a/b/c/file.yml") bundletest.SetLocation(b, "sync.exclude[1]", []dyn.Location{{File: "./a/b/c/file.yml"}})
diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths()) diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths())
assert.NoError(t, diags.Error()) assert.NoError(t, diags.Error())
@ -53,7 +54,7 @@ func TestRewriteSyncPathsRelative(t *testing.T) {
func TestRewriteSyncPathsAbsolute(t *testing.T) { func TestRewriteSyncPathsAbsolute(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/dir", BundleRootPath: "/tmp/dir",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -72,12 +73,12 @@ func TestRewriteSyncPathsAbsolute(t *testing.T) {
}, },
} }
bundletest.SetLocation(b, "sync.paths[0]", "/tmp/dir/databricks.yml") bundletest.SetLocation(b, "sync.paths[0]", []dyn.Location{{File: "/tmp/dir/databricks.yml"}})
bundletest.SetLocation(b, "sync.paths[1]", "/tmp/dir/databricks.yml") bundletest.SetLocation(b, "sync.paths[1]", []dyn.Location{{File: "/tmp/dir/databricks.yml"}})
bundletest.SetLocation(b, "sync.include[0]", "/tmp/dir/file.yml") bundletest.SetLocation(b, "sync.include[0]", []dyn.Location{{File: "/tmp/dir/file.yml"}})
bundletest.SetLocation(b, "sync.include[1]", "/tmp/dir/a/file.yml") bundletest.SetLocation(b, "sync.include[1]", []dyn.Location{{File: "/tmp/dir/a/file.yml"}})
bundletest.SetLocation(b, "sync.exclude[0]", "/tmp/dir/a/b/file.yml") bundletest.SetLocation(b, "sync.exclude[0]", []dyn.Location{{File: "/tmp/dir/a/b/file.yml"}})
bundletest.SetLocation(b, "sync.exclude[1]", "/tmp/dir/a/b/c/file.yml") bundletest.SetLocation(b, "sync.exclude[1]", []dyn.Location{{File: "/tmp/dir/a/b/c/file.yml"}})
diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths()) diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths())
assert.NoError(t, diags.Error()) assert.NoError(t, diags.Error())
@ -93,7 +94,7 @@ func TestRewriteSyncPathsAbsolute(t *testing.T) {
func TestRewriteSyncPathsErrorPaths(t *testing.T) { func TestRewriteSyncPathsErrorPaths(t *testing.T) {
t.Run("no sync block", func(t *testing.T) { t.Run("no sync block", func(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: ".", BundleRootPath: ".",
} }
diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths()) diags := bundle.Apply(context.Background(), b, mutator.RewriteSyncPaths())
@ -102,7 +103,7 @@ func TestRewriteSyncPathsErrorPaths(t *testing.T) {
t.Run("empty include/exclude blocks", func(t *testing.T) { t.Run("empty include/exclude blocks", func(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: ".", BundleRootPath: ".",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Include: []string{}, Include: []string{},

View File

@ -32,6 +32,7 @@ func allResourceTypes(t *testing.T) []string {
	// the dyn library gives us the correct list of all resources supported. Please
	// also update this check when adding a new resource
	require.Equal(t, []string{
		"clusters",
		"experiments",
		"jobs",
		"model_serving_endpoints",
@ -133,6 +134,7 @@ func TestRunAsErrorForUnsupportedResources(t *testing.T) {
	// some point in the future. These resources are (implicitly) on the deny list, since
	// they are not on the allow list below.
	allowList := []string{
		"clusters",
		"jobs",
		"models",
		"registered_models",

View File

@ -15,8 +15,8 @@ import (
func TestSyncDefaultPath_DefaultIfUnset(t *testing.T) {
	b := &bundle.Bundle{
		BundleRootPath: "/tmp/some/dir",
		Config:         config.Root{},
	}

	ctx := context.Background()
@ -51,8 +51,8 @@ func TestSyncDefaultPath_SkipIfSet(t *testing.T) {
	for _, tcase := range tcases {
		t.Run(tcase.name, func(t *testing.T) {
			b := &bundle.Bundle{
				BundleRootPath: "/tmp/some/dir",
				Config:         config.Root{},
			}

			diags := bundle.ApplyFunc(context.Background(), b, func(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {

View File

@ -57,7 +57,7 @@ func (m *syncInferRoot) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagno
	var diags diag.Diagnostics

	// Use the bundle root path as the starting point for inferring the sync root path.
	bundleRootPath := filepath.Clean(b.BundleRootPath)

	// Infer the sync root path by looking at each one of the sync paths.
	// Every sync path must be a descendant of the final sync root path.

View File

@ -9,13 +9,14 @@ import (
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator" "github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/internal/bundletest" "github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dyn"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestSyncInferRoot_NominalAbsolute(t *testing.T) { func TestSyncInferRoot_NominalAbsolute(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/some/dir", BundleRootPath: "/tmp/some/dir",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -46,7 +47,7 @@ func TestSyncInferRoot_NominalAbsolute(t *testing.T) {
func TestSyncInferRoot_NominalRelative(t *testing.T) { func TestSyncInferRoot_NominalRelative(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "./some/dir", BundleRootPath: "./some/dir",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -77,7 +78,7 @@ func TestSyncInferRoot_NominalRelative(t *testing.T) {
func TestSyncInferRoot_ParentDirectory(t *testing.T) { func TestSyncInferRoot_ParentDirectory(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/some/dir", BundleRootPath: "/tmp/some/dir",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -108,7 +109,7 @@ func TestSyncInferRoot_ParentDirectory(t *testing.T) {
func TestSyncInferRoot_ManyParentDirectories(t *testing.T) { func TestSyncInferRoot_ManyParentDirectories(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/some/dir/that/is/very/deeply/nested", BundleRootPath: "/tmp/some/dir/that/is/very/deeply/nested",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -145,7 +146,7 @@ func TestSyncInferRoot_ManyParentDirectories(t *testing.T) {
func TestSyncInferRoot_MultiplePaths(t *testing.T) { func TestSyncInferRoot_MultiplePaths(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/some/bundle/root", BundleRootPath: "/tmp/some/bundle/root",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -172,7 +173,7 @@ func TestSyncInferRoot_MultiplePaths(t *testing.T) {
func TestSyncInferRoot_Error(t *testing.T) { func TestSyncInferRoot_Error(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
RootPath: "/tmp/some/dir", BundleRootPath: "/tmp/some/dir",
Config: config.Root{ Config: config.Root{
Sync: config.Sync{ Sync: config.Sync{
Paths: []string{ Paths: []string{
@ -184,7 +185,7 @@ func TestSyncInferRoot_Error(t *testing.T) {
}, },
} }
bundletest.SetLocation(b, "sync.paths", "databricks.yml") bundletest.SetLocation(b, "sync.paths", []dyn.Location{{File: "databricks.yml"}})
ctx := context.Background() ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.SyncInferRoot()) diags := bundle.Apply(ctx, b, mutator.SyncInferRoot())

View File

@ -6,45 +6,23 @@ import (
"github.com/databricks/cli/libs/dyn" "github.com/databricks/cli/libs/dyn"
) )
type dashboardRewritePattern struct { func (t *translateContext) applyDashboardTranslations(v dyn.Value) (dyn.Value, error) {
pattern dyn.Pattern // Convert the `file_path` field to a local absolute path.
fn rewriteFunc // Terraform will load the file at this path and use its contents for the dashboard contents.
} pattern := dyn.NewPattern(
func (t *translateContext) dashboardRewritePatterns() []dashboardRewritePattern {
// Base pattern to match all dashboards.
base := dyn.NewPattern(
dyn.Key("resources"), dyn.Key("resources"),
dyn.Key("dashboards"), dyn.Key("dashboards"),
dyn.AnyKey(), dyn.AnyKey(),
dyn.Key("file_path"),
) )
// Compile list of configuration paths to rewrite. return dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
return []dashboardRewritePattern{ key := p[2].Key()
{ dir, err := v.Location().Directory()
base.Append(dyn.Key("definition_path")),
t.retainLocalAbsoluteFilePath,
},
}
}
func (t *translateContext) applyDashboardTranslations(v dyn.Value) (dyn.Value, error) {
var err error
for _, rewritePattern := range t.dashboardRewritePatterns() {
v, err = dyn.MapByPattern(v, rewritePattern.pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
key := p[1].Key()
dir, err := v.Location().Directory()
if err != nil {
return dyn.InvalidValue, fmt.Errorf("unable to determine directory for dashboard %s: %w", key, err)
}
return t.rewriteRelativeTo(p, v, rewritePattern.fn, dir, "")
})
if err != nil { if err != nil {
return dyn.InvalidValue, err return dyn.InvalidValue, fmt.Errorf("unable to determine directory for dashboard %s: %w", key, err)
} }
}
return v, nil return t.rewriteRelativeTo(p, v, t.retainLocalAbsoluteFilePath, dir, "")
})
} }
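The `p[2].Key()` lookup depends on the shape of the matched pattern: component 0 is `resources`, 1 is `dashboards`, and 2 is the dashboard's map key. A quick standalone check (the dashboard key is a placeholder):

```go
package main

import (
	"fmt"

	"github.com/databricks/cli/libs/dyn"
)

func main() {
	// A path as it would be matched for resources.dashboards.my_dashboard.file_path.
	p := dyn.NewPath(dyn.Key("resources"), dyn.Key("dashboards"), dyn.Key("my_dashboard"), dyn.Key("file_path"))
	fmt.Println(p[2].Key()) // my_dashboard
}
```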

View File

@ -4,97 +4,11 @@ import (
"fmt" "fmt"
"slices" "slices"
"github.com/databricks/cli/bundle/libraries" "github.com/databricks/cli/bundle/config/mutator/paths"
"github.com/databricks/cli/libs/dyn" "github.com/databricks/cli/libs/dyn"
) )
type jobRewritePattern struct {
pattern dyn.Pattern
fn rewriteFunc
skipRewrite func(string) bool
}
func noSkipRewrite(string) bool {
return false
}
func rewritePatterns(t *translateContext, base dyn.Pattern) []jobRewritePattern {
return []jobRewritePattern{
{
base.Append(dyn.Key("notebook_task"), dyn.Key("notebook_path")),
t.translateNotebookPath,
noSkipRewrite,
},
{
base.Append(dyn.Key("spark_python_task"), dyn.Key("python_file")),
t.translateFilePath,
noSkipRewrite,
},
{
base.Append(dyn.Key("dbt_task"), dyn.Key("project_directory")),
t.translateDirectoryPath,
noSkipRewrite,
},
{
base.Append(dyn.Key("sql_task"), dyn.Key("file"), dyn.Key("path")),
t.translateFilePath,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("whl")),
t.translateNoOp,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("jar")),
t.translateNoOp,
noSkipRewrite,
},
{
base.Append(dyn.Key("libraries"), dyn.AnyIndex(), dyn.Key("requirements")),
t.translateFilePath,
noSkipRewrite,
},
}
}
func (t *translateContext) jobRewritePatterns() []jobRewritePattern {
// Base pattern to match all tasks in all jobs.
base := dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("jobs"),
dyn.AnyKey(),
dyn.Key("tasks"),
dyn.AnyIndex(),
)
// Compile list of patterns and their respective rewrite functions.
jobEnvironmentsPatterns := []jobRewritePattern{
{
dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("jobs"),
dyn.AnyKey(),
dyn.Key("environments"),
dyn.AnyIndex(),
dyn.Key("spec"),
dyn.Key("dependencies"),
dyn.AnyIndex(),
),
t.translateNoOpWithPrefix,
func(s string) bool {
return !libraries.IsLibraryLocal(s)
},
},
}
taskPatterns := rewritePatterns(t, base)
forEachPatterns := rewritePatterns(t, base.Append(dyn.Key("for_each_task"), dyn.Key("task")))
allPatterns := append(taskPatterns, jobEnvironmentsPatterns...)
allPatterns = append(allPatterns, forEachPatterns...)
return allPatterns
}
func (t *translateContext) applyJobTranslations(v dyn.Value) (dyn.Value, error) { func (t *translateContext) applyJobTranslations(v dyn.Value) (dyn.Value, error) {
var err error var err error
@ -111,30 +25,41 @@ func (t *translateContext) applyJobTranslations(v dyn.Value) (dyn.Value, error)
} }
} }
for _, rewritePattern := range t.jobRewritePatterns() { return paths.VisitJobPaths(v, func(p dyn.Path, kind paths.PathKind, v dyn.Value) (dyn.Value, error) {
v, err = dyn.MapByPattern(v, rewritePattern.pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) { key := p[2].Key()
key := p[2].Key()
// Skip path translation if the job is using git source. // Skip path translation if the job is using git source.
if slices.Contains(ignore, key) { if slices.Contains(ignore, key) {
return v, nil return v, nil
} }
dir, err := v.Location().Directory() dir, err := v.Location().Directory()
if err != nil { if err != nil {
return dyn.InvalidValue, fmt.Errorf("unable to determine directory for job %s: %w", key, err) return dyn.InvalidValue, fmt.Errorf("unable to determine directory for job %s: %w", key, err)
} }
sv := v.MustString() rewritePatternFn, err := t.getRewritePatternFn(kind)
if rewritePattern.skipRewrite(sv) {
return v, nil
}
return t.rewriteRelativeTo(p, v, rewritePattern.fn, dir, fallback[key])
})
if err != nil { if err != nil {
return dyn.InvalidValue, err return dyn.InvalidValue, err
} }
return t.rewriteRelativeTo(p, v, rewritePatternFn, dir, fallback[key])
})
}
func (t *translateContext) getRewritePatternFn(kind paths.PathKind) (rewriteFunc, error) {
switch kind {
case paths.PathKindLibrary:
return t.translateNoOp, nil
case paths.PathKindNotebook:
return t.translateNotebookPath, nil
case paths.PathKindWorkspaceFile:
return t.translateFilePath, nil
case paths.PathKindDirectory:
return t.translateDirectoryPath, nil
case paths.PathKindWithPrefix:
return t.translateNoOpWithPrefix, nil
} }
return v, nil return nil, fmt.Errorf("unsupported path kind: %d", kind)
} }

View File

@ -82,7 +82,7 @@ func TestTranslatePathsSkippedWithGitSource(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	require.NoError(t, diags.Error())
@ -210,7 +210,7 @@ func TestTranslatePaths(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	require.NoError(t, diags.Error())
@ -346,8 +346,8 @@ func TestTranslatePathsInSubdirectories(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, "resources.jobs", []dyn.Location{{File: filepath.Join(dir, "job/resource.yml")}})
	bundletest.SetLocation(b, "resources.pipelines", []dyn.Location{{File: filepath.Join(dir, "pipeline/resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	require.NoError(t, diags.Error())
@ -408,7 +408,7 @@ func TestTranslatePathsOutsideSyncRoot(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "../resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.ErrorContains(t, diags.Error(), "is not contained in sync root path")
@ -439,7 +439,7 @@ func TestJobNotebookDoesNotExistError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "fake.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.EqualError(t, diags.Error(), "notebook ./doesnt_exist.py not found")
@ -470,7 +470,7 @@ func TestJobFileDoesNotExistError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "fake.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.EqualError(t, diags.Error(), "file ./doesnt_exist.py not found")
@ -501,7 +501,7 @@ func TestPipelineNotebookDoesNotExistError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "fake.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.EqualError(t, diags.Error(), "notebook ./doesnt_exist.py not found")
@ -532,7 +532,7 @@ func TestPipelineFileDoesNotExistError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "fake.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.EqualError(t, diags.Error(), "file ./doesnt_exist.py not found")
@ -567,7 +567,7 @@ func TestJobSparkPythonTaskWithNotebookSourceError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.ErrorContains(t, diags.Error(), `expected a file for "resources.jobs.job.tasks[0].spark_python_task.python_file" but got a notebook`)
@ -602,7 +602,7 @@ func TestJobNotebookTaskWithFileSourceError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.ErrorContains(t, diags.Error(), `expected a notebook for "resources.jobs.job.tasks[0].notebook_task.notebook_path" but got a file`)
@ -637,7 +637,7 @@ func TestPipelineNotebookLibraryWithFileSourceError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.ErrorContains(t, diags.Error(), `expected a notebook for "resources.pipelines.pipeline.libraries[0].notebook.path" but got a file`)
@ -672,7 +672,7 @@ func TestPipelineFileLibraryWithNotebookSourceError(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	assert.ErrorContains(t, diags.Error(), `expected a file for "resources.pipelines.pipeline.libraries[0].file.path" but got a notebook`)
@ -710,7 +710,7 @@ func TestTranslatePathJobEnvironments(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, "resources.jobs", []dyn.Location{{File: filepath.Join(dir, "job/resource.yml")}})

	diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
	require.NoError(t, diags.Error())
@ -753,8 +753,8 @@ func TestTranslatePathWithComplexVariables(t *testing.T) {
		},
	}

	bundletest.SetLocation(b, "variables", []dyn.Location{{File: filepath.Join(dir, "variables/variables.yml")}})
	bundletest.SetLocation(b, "resources.jobs", []dyn.Location{{File: filepath.Join(dir, "job/resource.yml")}})

	ctx := context.Background()

	// Assign the variables to the dynamic configuration.

@@ -19,6 +19,7 @@ type Resources struct {
 RegisteredModels map[string]*resources.RegisteredModel `json:"registered_models,omitempty"`
 QualityMonitors map[string]*resources.QualityMonitor `json:"quality_monitors,omitempty"`
 Schemas map[string]*resources.Schema `json:"schemas,omitempty"`
+Clusters map[string]*resources.Cluster `json:"clusters,omitempty"`
 Dashboards map[string]*resources.Dashboard `json:"dashboards,omitempty"`
 }


@@ -0,0 +1,39 @@
package resources
import (
"context"
"github.com/databricks/cli/libs/log"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/marshal"
"github.com/databricks/databricks-sdk-go/service/compute"
)
type Cluster struct {
ID string `json:"id,omitempty" bundle:"readonly"`
Permissions []Permission `json:"permissions,omitempty"`
ModifiedStatus ModifiedStatus `json:"modified_status,omitempty" bundle:"internal"`
*compute.ClusterSpec
}
func (s *Cluster) UnmarshalJSON(b []byte) error {
return marshal.Unmarshal(b, s)
}
func (s Cluster) MarshalJSON() ([]byte, error) {
return marshal.Marshal(s)
}
func (s *Cluster) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
_, err := w.Clusters.GetByClusterId(ctx, id)
if err != nil {
log.Debugf(ctx, "cluster %s does not exist", id)
return false, err
}
return true, nil
}
func (s *Cluster) TerraformResourceName() string {
return "databricks_cluster"
}
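For context, a top-level cluster resource in bundle configuration would look roughly like this (a sketch only: the resource name, spec values, and permission level are illustrative; the spec fields are inherited from `compute.ClusterSpec`):

```
resources:
  clusters:
    my_cluster:
      cluster_name: "shared-dev-cluster"
      spark_version: "13.3.x-scala2.12"
      node_type_id: "m5.xlarge"
      num_workers: 2
      permissions:
        - level: CAN_MANAGE
          user_name: someone@example.com
```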


@@ -18,21 +18,36 @@ type Dashboard struct {
 // === BEGIN OF API FIELDS ===
 // ===========================
-// DisplayName is the name of the dashboard (both as title and as basename in the workspace).
-DisplayName string `json:"display_name,omitempty"`
-// ParentPath is the path to the parent directory of the dashboard.
+// DisplayName is the display name of the dashboard (both as title and as basename in the workspace).
+DisplayName string `json:"display_name"`
+// WarehouseID is the ID of the SQL Warehouse used to run the dashboard's queries.
+WarehouseID string `json:"warehouse_id"`
+// SerializedDashboard holds the contents of the dashboard in serialized JSON form.
+// Note: its type is any and not string such that it can be inlined as YAML.
+// If it is not a string, its contents will be marshalled as JSON.
+SerializedDashboard any `json:"serialized_dashboard,omitempty"`
+// ParentPath is the workspace path of the folder containing the dashboard.
+// Includes leading slash and no trailing slash.
+//
+// Defaults to ${workspace.resource_path} if not set.
 ParentPath string `json:"parent_path,omitempty"`
-// WarehouseID is the ID of the warehouse to use for the dashboard.
-WarehouseID string `json:"warehouse_id,omitempty"`
+// EmbedCredentials is a flag to indicate if the publisher's credentials should
+// be embedded in the published dashboard. These embedded credentials will be used
+// to execute the published dashboard's queries.
+//
+// Defaults to false if not set.
+EmbedCredentials bool `json:"embed_credentials,omitempty"`
 // ===========================
 // ==== END OF API FIELDS ====
 // ===========================
-// DefinitionPath points to the local `.lvdash.json` file containing the dashboard definition.
-DefinitionPath string `json:"definition_path,omitempty"`
+// FilePath points to the local `.lvdash.json` file containing the dashboard definition.
+FilePath string `json:"file_path,omitempty"`
 }
 func (s *Dashboard) UnmarshalJSON(b []byte) error {
@@ -43,7 +58,7 @@ func (s Dashboard) MarshalJSON() ([]byte, error) {
 return marshal.Marshal(s)
 }
-func (_ *Dashboard) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
+func (*Dashboard) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
 _, err := w.Lakeview.Get(ctx, dashboards.GetDashboardRequest{
 DashboardId: id,
 })
@@ -54,6 +69,6 @@ func (_ *Dashboard) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
 return true, nil
 }
-func (_ *Dashboard) TerraformResourceName() string {
+func (*Dashboard) TerraformResourceName() string {
 return "databricks_dashboard"
 }
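Putting the new fields together, a dashboard resource might be declared as follows (a sketch; the display name, warehouse ID, and paths are illustrative):

```
resources:
  dashboards:
    my_dashboard:
      display_name: "NYC Taxi Trip Analysis"
      warehouse_id: "abcdef1234567890"
      file_path: ./my_dashboard.lvdash.json
      # parent_path defaults to ${workspace.resource_path};
      # embed_credentials defaults to false.
```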


@@ -366,9 +366,9 @@ func (r *Root) MergeTargetOverrides(name string) error {
 }
 }
-// Merge `compute_id`. This field must be overwritten if set, not merged.
-if v := target.Get("compute_id"); v.Kind() != dyn.KindInvalid {
-root, err = dyn.SetByPath(root, dyn.NewPath(dyn.Key("bundle"), dyn.Key("compute_id")), v)
+// Merge `cluster_id`. This field must be overwritten if set, not merged.
+if v := target.Get("cluster_id"); v.Kind() != dyn.KindInvalid {
+root, err = dyn.SetByPath(root, dyn.NewPath(dyn.Key("bundle"), dyn.Key("cluster_id")), v)
 if err != nil {
 return err
 }
@@ -406,23 +406,45 @@ func (r *Root) MergeTargetOverrides(name string) error {
 return r.updateWithDynamicValue(root)
 }
-var variableKeywords = []string{"default", "lookup"}
+var allowedVariableDefinitions = []([]string){
+{"default", "type", "description"},
+{"default", "type"},
+{"default", "description"},
+{"lookup", "description"},
+{"default"},
+{"lookup"},
+}
 // isFullVariableOverrideDef checks if the given value is a full syntax variable override.
-// A full syntax variable override is a map with only one of the following
-// keys: "default", "lookup".
+// A full syntax variable override is a map whose keys exactly match one of the
+// combinations listed in allowedVariableDefinitions, e.g. {"default"}, {"lookup"},
+// or {"default", "type", "description"}.
 func isFullVariableOverrideDef(v dyn.Value) bool {
 mv, ok := v.AsMap()
 if !ok {
 return false
 }
-if mv.Len() != 1 {
+// If the map has more than 3 keys, it is not a full variable override.
+if mv.Len() > 3 {
 return false
 }
-for _, keyword := range variableKeywords {
-if _, ok := mv.GetByString(keyword); ok {
+for _, keys := range allowedVariableDefinitions {
+if len(keys) != mv.Len() {
+continue
+}
+// Check if the keys are the same.
+match := true
+for _, key := range keys {
+if _, ok := mv.GetByString(key); !ok {
+match = false
+break
+}
+}
+if match {
 return true
 }
 }
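To illustrate the distinction this function draws, consider a target override (a sketch; variable names and values are illustrative). A map whose keys match one of the allowed combinations is treated as a full variable definition; anything else is treated as the variable's value:

```
targets:
  prod:
    variables:
      # Short syntax: the string is the value.
      warehouse: "prod-warehouse"
      # Full syntax: keys match {"default", "description"},
      # so this is a variable definition.
      workers:
        default: 8
        description: "Worker count for prod"
```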


@@ -6,6 +6,7 @@ import (
 "testing"
 "github.com/databricks/cli/bundle/config/variable"
+"github.com/databricks/cli/libs/dyn"
 "github.com/stretchr/testify/assert"
 "github.com/stretchr/testify/require"
 )
@@ -169,3 +170,87 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
 assert.Equal(t, "complex var", root.Variables["complex"].Description)
 }
func TestIsFullVariableOverrideDef(t *testing.T) {
testCases := []struct {
value dyn.Value
expected bool
}{
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
"default": dyn.V("foo"),
"description": dyn.V("foo var"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
"lookup": dyn.V("foo"),
"description": dyn.V("foo var"),
}),
expected: false,
},
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
"default": dyn.V("foo"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
"lookup": dyn.V("foo"),
}),
expected: false,
},
{
value: dyn.V(map[string]dyn.Value{
"description": dyn.V("string"),
"default": dyn.V("foo"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"description": dyn.V("string"),
"lookup": dyn.V("foo"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"default": dyn.V("foo"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"lookup": dyn.V("foo"),
}),
expected: true,
},
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
}),
expected: false,
},
{
value: dyn.V(map[string]dyn.Value{
"type": dyn.V("string"),
"default": dyn.V("foo"),
"description": dyn.V("foo var"),
"lookup": dyn.V("foo"),
}),
expected: false,
},
}
for i, tc := range testCases {
assert.Equal(t, tc.expected, isFullVariableOverrideDef(tc.value), "test case %d", i)
}
}


@@ -24,8 +24,11 @@ type Target struct {
 // name prefix of deployed resources.
 Presets Presets `json:"presets,omitempty"`
-// Overrides the compute used for jobs and other supported assets.
-ComputeID string `json:"compute_id,omitempty"`
+// DEPRECATED: Overrides the compute used for jobs and other supported assets.
+ComputeId string `json:"compute_id,omitempty"`
+// Overrides the cluster used for jobs and other supported assets.
+ClusterId string `json:"cluster_id,omitempty"`
 Bundle *Bundle `json:"bundle,omitempty"`
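In target configuration this reads as below (a sketch; the target name and cluster ID are illustrative). `compute_id` keeps working for backwards compatibility but is deprecated in favor of `cluster_id`:

```
targets:
  dev:
    # Preferred over the deprecated compute_id.
    cluster_id: "0716-123456-abcdef12"
```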


@@ -0,0 +1,161 @@
package validate
import (
"context"
"fmt"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/jobs"
)
// JobTaskClusterSpec validates that job tasks have a cluster spec defined
// if the task requires a cluster.
func JobTaskClusterSpec() bundle.ReadOnlyMutator {
return &jobTaskClusterSpec{}
}
type jobTaskClusterSpec struct {
}
func (v *jobTaskClusterSpec) Name() string {
return "validate:job_task_cluster_spec"
}
func (v *jobTaskClusterSpec) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
diags := diag.Diagnostics{}
jobsPath := dyn.NewPath(dyn.Key("resources"), dyn.Key("jobs"))
for resourceName, job := range rb.Config().Resources.Jobs {
resourcePath := jobsPath.Append(dyn.Key(resourceName))
for taskIndex, task := range job.Tasks {
taskPath := resourcePath.Append(dyn.Key("tasks"), dyn.Index(taskIndex))
diags = diags.Extend(validateJobTask(rb, task, taskPath))
}
}
return diags
}
func validateJobTask(rb bundle.ReadOnlyBundle, task jobs.Task, taskPath dyn.Path) diag.Diagnostics {
diags := diag.Diagnostics{}
var specified []string
var unspecified []string
if task.JobClusterKey != "" {
specified = append(specified, "job_cluster_key")
} else {
unspecified = append(unspecified, "job_cluster_key")
}
if task.EnvironmentKey != "" {
specified = append(specified, "environment_key")
} else {
unspecified = append(unspecified, "environment_key")
}
if task.ExistingClusterId != "" {
specified = append(specified, "existing_cluster_id")
} else {
unspecified = append(unspecified, "existing_cluster_id")
}
if task.NewCluster != nil {
specified = append(specified, "new_cluster")
} else {
unspecified = append(unspecified, "new_cluster")
}
if task.ForEachTask != nil {
forEachTaskPath := taskPath.Append(dyn.Key("for_each_task"), dyn.Key("task"))
diags = diags.Extend(validateJobTask(rb, task.ForEachTask.Task, forEachTaskPath))
}
if isComputeTask(task) && len(specified) == 0 {
if task.NotebookTask != nil {
// notebook tasks without cluster spec will use notebook environment
} else {
// the path alone might not be very helpful; adding the user-specified task key clarifies the context
detail := fmt.Sprintf(
"Task %q requires a cluster or an environment to run.\nSpecify one of the following fields: %s.",
task.TaskKey,
strings.Join(unspecified, ", "),
)
diags = diags.Append(diag.Diagnostic{
Severity: diag.Error,
Summary: "Missing required cluster or environment settings",
Detail: detail,
Locations: rb.Config().GetLocations(taskPath.String()),
Paths: []dyn.Path{taskPath},
})
}
}
return diags
}
// isComputeTask returns true if the task runs on a cluster or serverless GC
func isComputeTask(task jobs.Task) bool {
if task.NotebookTask != nil {
// if warehouse_id is set, it's a SQL notebook that doesn't need a cluster or serverless GC
if task.NotebookTask.WarehouseId != "" {
return false
} else {
// the task settings don't require specifying a cluster/serverless GC, but the task itself can run on one;
// we handle that case separately in validateJobTask
return true
}
}
if task.PythonWheelTask != nil {
return true
}
if task.DbtTask != nil {
return true
}
if task.SparkJarTask != nil {
return true
}
if task.SparkSubmitTask != nil {
return true
}
if task.SparkPythonTask != nil {
return true
}
if task.SqlTask != nil {
return false
}
if task.PipelineTask != nil {
// while pipelines use clusters, pipeline tasks don't, they only trigger pipelines
return false
}
if task.RunJobTask != nil {
return false
}
if task.ConditionTask != nil {
return false
}
// a for_each_task doesn't use a cluster itself, though its underlying task(s) can
if task.ForEachTask != nil {
return false
}
return false
}
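To make the failure mode concrete, here is a sketch of a task this validator flags, next to a variant that passes (job, task, and cluster keys are illustrative):

```
resources:
  jobs:
    my_job:
      tasks:
        # Flagged: a python_wheel_task needs compute, but none of
        # job_cluster_key, environment_key, existing_cluster_id,
        # or new_cluster is set.
        - task_key: my_task
          python_wheel_task:
            package_name: my_package
            entry_point: main
        # Passes: the same task pinned to a job cluster.
        - task_key: my_task_with_cluster
          job_cluster_key: default
          python_wheel_task:
            package_name: my_package
            entry_point: main
```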


@@ -0,0 +1,203 @@
package validate
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/assert"
)
func TestJobTaskClusterSpec(t *testing.T) {
expectedSummary := "Missing required cluster or environment settings"
type testCase struct {
name string
task jobs.Task
errorPath string
errorDetail string
errorSummary string
}
testCases := []testCase{
{
name: "valid notebook task",
task: jobs.Task{
// while a cluster is needed, it will use notebook environment to create one
NotebookTask: &jobs.NotebookTask{},
},
},
{
name: "valid notebook task (job_cluster_key)",
task: jobs.Task{
JobClusterKey: "cluster1",
NotebookTask: &jobs.NotebookTask{},
},
},
{
name: "valid notebook task (new_cluster)",
task: jobs.Task{
NewCluster: &compute.ClusterSpec{},
NotebookTask: &jobs.NotebookTask{},
},
},
{
name: "valid notebook task (existing_cluster_id)",
task: jobs.Task{
ExistingClusterId: "cluster1",
NotebookTask: &jobs.NotebookTask{},
},
},
{
name: "valid SQL notebook task",
task: jobs.Task{
NotebookTask: &jobs.NotebookTask{
WarehouseId: "warehouse1",
},
},
},
{
name: "valid python wheel task",
task: jobs.Task{
JobClusterKey: "cluster1",
PythonWheelTask: &jobs.PythonWheelTask{},
},
},
{
name: "valid python wheel task (environment_key)",
task: jobs.Task{
EnvironmentKey: "environment1",
PythonWheelTask: &jobs.PythonWheelTask{},
},
},
{
name: "valid dbt task",
task: jobs.Task{
JobClusterKey: "cluster1",
DbtTask: &jobs.DbtTask{},
},
},
{
name: "valid spark jar task",
task: jobs.Task{
JobClusterKey: "cluster1",
SparkJarTask: &jobs.SparkJarTask{},
},
},
{
name: "valid spark submit",
task: jobs.Task{
NewCluster: &compute.ClusterSpec{},
SparkSubmitTask: &jobs.SparkSubmitTask{},
},
},
{
name: "valid spark python task",
task: jobs.Task{
JobClusterKey: "cluster1",
SparkPythonTask: &jobs.SparkPythonTask{},
},
},
{
name: "valid SQL task",
task: jobs.Task{
SqlTask: &jobs.SqlTask{},
},
},
{
name: "valid pipeline task",
task: jobs.Task{
PipelineTask: &jobs.PipelineTask{},
},
},
{
name: "valid run job task",
task: jobs.Task{
RunJobTask: &jobs.RunJobTask{},
},
},
{
name: "valid condition task",
task: jobs.Task{
ConditionTask: &jobs.ConditionTask{},
},
},
{
name: "valid for each task",
task: jobs.Task{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
JobClusterKey: "cluster1",
NotebookTask: &jobs.NotebookTask{},
},
},
},
},
{
name: "invalid python wheel task",
task: jobs.Task{
PythonWheelTask: &jobs.PythonWheelTask{},
TaskKey: "my_task",
},
errorPath: "resources.jobs.job1.tasks[0]",
errorDetail: `Task "my_task" requires a cluster or an environment to run.
Specify one of the following fields: job_cluster_key, environment_key, existing_cluster_id, new_cluster.`,
errorSummary: expectedSummary,
},
{
name: "invalid for each task",
task: jobs.Task{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
PythonWheelTask: &jobs.PythonWheelTask{},
TaskKey: "my_task",
},
},
},
errorPath: "resources.jobs.job1.tasks[0].for_each_task.task",
errorDetail: `Task "my_task" requires a cluster or an environment to run.
Specify one of the following fields: job_cluster_key, environment_key, existing_cluster_id, new_cluster.`,
errorSummary: expectedSummary,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
job := &resources.Job{
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{tc.task},
},
}
b := createBundle(map[string]*resources.Job{"job1": job})
diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), JobTaskClusterSpec())
if tc.errorPath != "" || tc.errorDetail != "" || tc.errorSummary != "" {
assert.Len(t, diags, 1)
assert.Len(t, diags[0].Paths, 1)
diag := diags[0]
assert.Equal(t, tc.errorPath, diag.Paths[0].String())
assert.Equal(t, tc.errorSummary, diag.Summary)
assert.Equal(t, tc.errorDetail, diag.Detail)
} else {
assert.ElementsMatch(t, []string{}, diags)
}
})
}
}
func createBundle(jobs map[string]*resources.Job) *bundle.Bundle {
return &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: jobs,
},
},
}
}


@@ -34,6 +34,7 @@ func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 JobClusterKeyDefined(),
 FilesToSync(),
 ValidateSyncPatterns(),
+JobTaskClusterSpec(),
 ))
 }


@@ -54,6 +54,11 @@ type Workspace struct {
 // This defaults to "${workspace.root}/files".
 FilePath string `json:"file_path,omitempty"`
+// Remote workspace path for resources with a presence in the workspace.
+// These are kept outside [FilePath] to avoid potential naming collisions.
+// This defaults to "${workspace.root}/resources".
+ResourcePath string `json:"resource_path,omitempty"`
 // Remote workspace path for build artifacts.
 // This defaults to "${workspace.root}/artifacts".
 ArtifactPath string `json:"artifact_path,omitempty"`
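Assuming the documented defaults, a bundle's workspace tree then splits into sibling folders for files, resources, and artifacts (a sketch; the root path and setting name are illustrative):

```
workspace:
  root_path: /Workspace/Users/someone@example.com/.bundle/my_bundle/dev
  # Defaults derived from the workspace root:
  #   file_path:     ${workspace.root}/files
  #   resource_path: ${workspace.root}/resources
  #   artifact_path: ${workspace.root}/artifacts
```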


@@ -8,9 +8,12 @@ import (
 "github.com/databricks/cli/libs/cmdio"
 "github.com/databricks/cli/libs/diag"
 "github.com/databricks/cli/libs/log"
+"github.com/databricks/cli/libs/sync"
 )
-type upload struct{}
+type upload struct {
+outputHandler sync.OutputHandler
+}
 func (m *upload) Name() string {
 return "files.Upload"
@@ -18,11 +21,18 @@ func (m *upload) Name() string {
 func (m *upload) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 cmdio.LogString(ctx, fmt.Sprintf("Uploading bundle files to %s...", b.Config.Workspace.FilePath))
-sync, err := GetSync(ctx, bundle.ReadOnly(b))
+opts, err := GetSyncOptions(ctx, bundle.ReadOnly(b))
 if err != nil {
 return diag.FromErr(err)
 }
+opts.OutputHandler = m.outputHandler
+sync, err := sync.New(ctx, *opts)
+if err != nil {
+return diag.FromErr(err)
+}
+defer sync.Close()
 b.Files, err = sync.RunOnce(ctx)
 if err != nil {
 return diag.FromErr(err)
@@ -32,6 +42,6 @@ func (m *upload) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 return nil
 }
-func Upload() bundle.Mutator {
-return &upload{}
+func Upload(outputHandler sync.OutputHandler) bundle.Mutator {
+return &upload{outputHandler}
 }


@@ -40,7 +40,7 @@ func (m *compute) Apply(_ context.Context, b *bundle.Bundle) diag.Diagnostics {
 // Compute config file path the job is defined in, relative to the bundle
 // root
 l := b.Config.GetLocation("resources.jobs." + name)
-relativePath, err := filepath.Rel(b.RootPath, l.File)
+relativePath, err := filepath.Rel(b.BundleRootPath, l.File)
 if err != nil {
 return diag.Errorf("failed to compute relative path for job %s: %v", name, err)
 }


@@ -9,6 +9,7 @@ import (
 "github.com/databricks/cli/bundle/config/resources"
 "github.com/databricks/cli/bundle/internal/bundletest"
 "github.com/databricks/cli/bundle/metadata"
+"github.com/databricks/cli/libs/dyn"
 "github.com/databricks/databricks-sdk-go/service/jobs"
 "github.com/stretchr/testify/assert"
 "github.com/stretchr/testify/require"
@@ -55,9 +56,9 @@ func TestComputeMetadataMutator(t *testing.T) {
 },
 }
-bundletest.SetLocation(b, "resources.jobs.my-job-1", "a/b/c")
-bundletest.SetLocation(b, "resources.jobs.my-job-2", "d/e/f")
-bundletest.SetLocation(b, "resources.pipelines.my-pipeline", "abc")
+bundletest.SetLocation(b, "resources.jobs.my-job-1", []dyn.Location{{File: "a/b/c"}})
+bundletest.SetLocation(b, "resources.jobs.my-job-2", []dyn.Location{{File: "d/e/f"}})
+bundletest.SetLocation(b, "resources.pipelines.my-pipeline", []dyn.Location{{File: "abc"}})
 expectedMetadata := metadata.Metadata{
 Version: metadata.Version,


@@ -62,8 +62,8 @@ func testStatePull(t *testing.T, opts statePullOpts) {
 tmpDir := t.TempDir()
 b := &bundle.Bundle{
-RootPath: tmpDir,
+BundleRootPath: tmpDir,
 BundleRoot: vfs.MustNew(tmpDir),
 SyncRootPath: tmpDir,
 SyncRoot: vfs.MustNew(tmpDir),
@@ -259,7 +259,7 @@ func TestStatePullNoState(t *testing.T) {
 }}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",
@@ -447,7 +447,7 @@ func TestStatePullNewerDeploymentStateVersion(t *testing.T) {
 }}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",


@@ -45,7 +45,7 @@ func TestStatePush(t *testing.T) {
 }}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",


@@ -27,7 +27,7 @@ func setupBundleForStateUpdate(t *testing.T) *bundle.Bundle {
 require.NoError(t, err)
 return &bundle.Bundle{
-RootPath: tmpDir,
+BundleRootPath: tmpDir,
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",


@@ -4,6 +4,7 @@ import (
 "context"
 "encoding/json"
 "fmt"
+"sort"
 "github.com/databricks/cli/bundle/config"
 "github.com/databricks/cli/bundle/config/resources"
@@ -82,6 +83,10 @@ func BundleToTerraform(config *config.Root) *schema.Root {
 conv(src, &dst)
 if src.JobSettings != nil {
+sort.Slice(src.JobSettings.Tasks, func(i, j int) bool {
+return src.JobSettings.Tasks[i].TaskKey < src.JobSettings.Tasks[j].TaskKey
+})
 for _, v := range src.Tasks {
 var t schema.ResourceJobTask
 conv(v, &t)
@@ -231,6 +236,13 @@ func BundleToTerraform(config *config.Root) *schema.Root {
 tfroot.Resource.QualityMonitor[k] = &dst
 }
+for k, src := range config.Resources.Clusters {
+noResources = false
+var dst schema.ResourceCluster
+conv(src, &dst)
+tfroot.Resource.Cluster[k] = &dst
+}
 // We explicitly set "resource" to nil to omit it from a JSON encoding.
 // This is required because the terraform CLI requires >= 1 resources defined
 // if the "resource" property is used in a .tf.json file.
@@ -394,6 +406,16 @@ func TerraformToBundle(state *resourcesState, config *config.Root) error {
 }
 cur.ID = instance.Attributes.ID
 config.Resources.Schemas[resource.Name] = cur
+case "databricks_cluster":
+if config.Resources.Clusters == nil {
+config.Resources.Clusters = make(map[string]*resources.Cluster)
+}
+cur := config.Resources.Clusters[resource.Name]
+if cur == nil {
+cur = &resources.Cluster{ModifiedStatus: resources.ModifiedStatusDeleted}
+}
+cur.ID = instance.Attributes.ID
+config.Resources.Clusters[resource.Name] = cur
 case "databricks_dashboard":
 if config.Resources.Dashboards == nil {
 config.Resources.Dashboards = make(map[string]*resources.Dashboard)
@@ -453,6 +475,11 @@ func TerraformToBundle(state *resourcesState, config *config.Root) error {
 src.ModifiedStatus = resources.ModifiedStatusCreated
 }
 }
+for _, src := range config.Resources.Clusters {
+if src.ModifiedStatus == "" && src.ID == "" {
+src.ModifiedStatus = resources.ModifiedStatusCreated
+}
+}
 return nil
 }


@@ -58,9 +58,12 @@ func TestBundleToTerraformJob(t *testing.T) {
 },
 }
-out := BundleToTerraform(&config)
-resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+vin, err := convert.FromTyped(config, dyn.NilValue)
+require.NoError(t, err)
+out, err := BundleToTerraformWithDynValue(context.Background(), vin)
+require.NoError(t, err)
+resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
 assert.Equal(t, "my job", resource.Name)
 assert.Len(t, resource.JobCluster, 1)
 assert.Equal(t, "https://github.com/foo/bar", resource.GitSource.Url)
@@ -663,6 +666,14 @@ func TestTerraformToBundleEmptyLocalResources(t *testing.T) {
 {Attributes: stateInstanceAttributes{ID: "1"}},
 },
 },
+{
+Type: "databricks_cluster",
+Mode: "managed",
+Name: "test_cluster",
+Instances: []stateResourceInstance{
+{Attributes: stateInstanceAttributes{ID: "1"}},
+},
+},
 },
 }
 err := TerraformToBundle(&tfState, &config)
@@ -692,6 +703,9 @@ func TestTerraformToBundleEmptyLocalResources(t *testing.T) {
 assert.Equal(t, "1", config.Resources.Schemas["test_schema"].ID)
 assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Schemas["test_schema"].ModifiedStatus)
+assert.Equal(t, "1", config.Resources.Clusters["test_cluster"].ID)
+assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Clusters["test_cluster"].ModifiedStatus)
 AssertFullResourceCoverage(t, &config)
 }
@@ -754,6 +768,13 @@ func TestTerraformToBundleEmptyRemoteResources(t *testing.T) {
 },
 },
 },
+Clusters: map[string]*resources.Cluster{
+"test_cluster": {
+ClusterSpec: &compute.ClusterSpec{
+ClusterName: "test_cluster",
+},
+},
+},
 },
 }
 var tfState = resourcesState{
@@ -786,6 +807,9 @@ func TestTerraformToBundleEmptyRemoteResources(t *testing.T) {
 assert.Equal(t, "", config.Resources.Schemas["test_schema"].ID)
 assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Schemas["test_schema"].ModifiedStatus)
+assert.Equal(t, "", config.Resources.Clusters["test_cluster"].ID)
+assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Clusters["test_cluster"].ModifiedStatus)
 AssertFullResourceCoverage(t, &config)
 }
@@ -888,6 +912,18 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
 },
 },
 },
+Clusters: map[string]*resources.Cluster{
+"test_cluster": {
+ClusterSpec: &compute.ClusterSpec{
+ClusterName: "test_cluster",
+},
+},
+"test_cluster_new": {
+ClusterSpec: &compute.ClusterSpec{
+ClusterName: "test_cluster_new",
+},
+},
+},
 },
 }
 var tfState = resourcesState{
@@ -1020,6 +1056,22 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
 {Attributes: stateInstanceAttributes{ID: "2"}},
 },
 },
+{
+Type: "databricks_cluster",
+Mode: "managed",
+Name: "test_cluster",
+Instances: []stateResourceInstance{
+{Attributes: stateInstanceAttributes{ID: "1"}},
+},
+},
+{
+Type: "databricks_cluster",
+Mode: "managed",
+Name: "test_cluster_old",
+Instances: []stateResourceInstance{
+{Attributes: stateInstanceAttributes{ID: "2"}},
+},
+},
 },
 }
 err := TerraformToBundle(&tfState, &config)
@@ -1081,6 +1133,13 @@ func TestTerraformToBundleModifiedResources(t *testing.T) {
 assert.Equal(t, "", config.Resources.Schemas["test_schema_new"].ID)
 assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Schemas["test_schema_new"].ModifiedStatus)
+assert.Equal(t, "1", config.Resources.Clusters["test_cluster"].ID)
+assert.Equal(t, "", config.Resources.Clusters["test_cluster"].ModifiedStatus)
+assert.Equal(t, "2", config.Resources.Clusters["test_cluster_old"].ID)
+assert.Equal(t, resources.ModifiedStatusDeleted, config.Resources.Clusters["test_cluster_old"].ModifiedStatus)
+assert.Equal(t, "", config.Resources.Clusters["test_cluster_new"].ID)
+assert.Equal(t, resources.ModifiedStatusCreated, config.Resources.Clusters["test_cluster_new"].ModifiedStatus)
 AssertFullResourceCoverage(t, &config)
 }


@@ -33,7 +33,7 @@ func TestInitEnvironmentVariables(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -60,7 +60,7 @@ func TestSetTempDirEnvVarsForUnixWithTmpDirSet(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -88,7 +88,7 @@ func TestSetTempDirEnvVarsForUnixWithTmpDirNotSet(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -114,7 +114,7 @@ func TestSetTempDirEnvVarsForWindowWithAllTmpDirEnvVarsSet(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -144,7 +144,7 @@ func TestSetTempDirEnvVarsForWindowWithUserProfileAndTempSet(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -174,7 +174,7 @@ func TestSetTempDirEnvVarsForWindowsWithoutAnyTempDirEnvVarsSet(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -202,7 +202,7 @@ func TestSetTempDirEnvVarsForWindowsWithoutAnyTempDirEnvVarsSet(t *testing.T) {
 func TestSetProxyEnvVars(t *testing.T) {
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -250,7 +250,7 @@ func TestSetProxyEnvVars(t *testing.T) {
 func TestSetUserAgentExtraEnvVar(t *testing.T) {
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Experimental: &config.Experimental{
 PyDABs: config.PyDABs{
@@ -333,7 +333,7 @@ func TestFindExecPathFromEnvironmentWithWrongVersion(t *testing.T) {
 ctx := context.Background()
 m := &initialize{}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -357,7 +357,7 @@ func TestFindExecPathFromEnvironmentWithCorrectVersionAndNoBinary(t *testing.T) {
 ctx := context.Background()
 m := &initialize{}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -380,7 +380,7 @@ func TestFindExecPathFromEnvironmentWithCorrectVersionAndBinary(t *testing.T) {
 ctx := context.Background()
 m := &initialize{}
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",


@@ -58,6 +58,8 @@ func (m *interpolateMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 path = dyn.NewPath(dyn.Key("databricks_quality_monitor")).Append(path[2:]...)
 case dyn.Key("schemas"):
 path = dyn.NewPath(dyn.Key("databricks_schema")).Append(path[2:]...)
+case dyn.Key("clusters"):
+path = dyn.NewPath(dyn.Key("databricks_cluster")).Append(path[2:]...)
 default:
 // Trigger "key not found" for unknown resource types.
 return dyn.GetByPath(root, path)
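With this case added, references to cluster resources interpolate the same way as the existing resource types; a sketch (names illustrative):

```
resources:
  jobs:
    my_job:
      tags:
        # Rewritten to ${databricks_cluster.other_cluster.id}
        # in the generated Terraform configuration.
        other_cluster: ${resources.clusters.other_cluster.id}
```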


@@ -31,6 +31,7 @@ func TestInterpolate(t *testing.T) {
 "other_model_serving": "${resources.model_serving_endpoints.other_model_serving.id}",
 "other_registered_model": "${resources.registered_models.other_registered_model.id}",
 "other_schema": "${resources.schemas.other_schema.id}",
+"other_cluster": "${resources.clusters.other_cluster.id}",
 },
 Tasks: []jobs.Task{
 {
@@ -67,6 +68,7 @@ func TestInterpolate(t *testing.T) {
 assert.Equal(t, "${databricks_model_serving.other_model_serving.id}", j.Tags["other_model_serving"])
 assert.Equal(t, "${databricks_registered_model.other_registered_model.id}", j.Tags["other_registered_model"])
 assert.Equal(t, "${databricks_schema.other_schema.id}", j.Tags["other_schema"])
+assert.Equal(t, "${databricks_cluster.other_cluster.id}", j.Tags["other_cluster"])
 m := b.Config.Resources.Models["my_model"]
 assert.Equal(t, "my_model", m.Model.Name)


@@ -17,7 +17,7 @@ func TestLoadWithNoState(t *testing.T) {
 }
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",


@@ -32,7 +32,7 @@ func mockStateFilerForPull(t *testing.T, contents map[string]any, merr error) filer.Filer {
 func statePullTestBundle(t *testing.T) *bundle.Bundle {
 return &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",


@@ -29,7 +29,7 @@ func mockStateFilerForPush(t *testing.T, fn func(body io.Reader)) filer.Filer {
 func statePushTestBundle(t *testing.T) *bundle.Bundle {
 return &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "default",


@@ -0,0 +1,52 @@
package tfdyn
import (
"context"
"fmt"
"github.com/databricks/cli/bundle/internal/tf/schema"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/log"
"github.com/databricks/databricks-sdk-go/service/compute"
)
func convertClusterResource(ctx context.Context, vin dyn.Value) (dyn.Value, error) {
// Normalize the output value to the target schema.
vout, diags := convert.Normalize(compute.ClusterSpec{}, vin)
for _, diag := range diags {
log.Debugf(ctx, "cluster normalization diagnostic: %s", diag.Summary)
}
return vout, nil
}
type clusterConverter struct{}
func (clusterConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
vout, err := convertClusterResource(ctx, vin)
if err != nil {
return err
}
// We always set no_wait as it allows DABs not to wait for the cluster to be started.
vout, err = dyn.Set(vout, "no_wait", dyn.V(true))
if err != nil {
return err
}
// Add the converted resource to the output.
out.Cluster[key] = vout.AsAny()
// Configure permissions for this resource.
if permissions := convertPermissionsResource(ctx, vin); permissions != nil {
permissions.JobId = fmt.Sprintf("${databricks_cluster.%s.id}", key)
out.Permissions["cluster_"+key] = permissions
}
return nil
}
func init() {
registerConverter("clusters", clusterConverter{})
}


@@ -0,0 +1,97 @@
package tfdyn
import (
"context"
"testing"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/tf/schema"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConvertCluster(t *testing.T) {
var src = resources.Cluster{
ClusterSpec: &compute.ClusterSpec{
NumWorkers: 3,
SparkVersion: "13.3.x-scala2.12",
ClusterName: "cluster",
SparkConf: map[string]string{
"spark.executor.memory": "2g",
},
AwsAttributes: &compute.AwsAttributes{
Availability: "ON_DEMAND",
},
AzureAttributes: &compute.AzureAttributes{
Availability: "SPOT",
},
DataSecurityMode: "USER_ISOLATION",
NodeTypeId: "m5.xlarge",
Autoscale: &compute.AutoScale{
MinWorkers: 1,
MaxWorkers: 10,
},
},
Permissions: []resources.Permission{
{
Level: "CAN_RUN",
UserName: "jack@gmail.com",
},
{
Level: "CAN_MANAGE",
ServicePrincipalName: "sp",
},
},
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = clusterConverter{}.Convert(ctx, "my_cluster", vin, out)
require.NoError(t, err)
cluster := out.Cluster["my_cluster"]
assert.Equal(t, map[string]any{
"num_workers": int64(3),
"spark_version": "13.3.x-scala2.12",
"cluster_name": "cluster",
"spark_conf": map[string]any{
"spark.executor.memory": "2g",
},
"aws_attributes": map[string]any{
"availability": "ON_DEMAND",
},
"azure_attributes": map[string]any{
"availability": "SPOT",
},
"data_security_mode": "USER_ISOLATION",
"no_wait": true,
"node_type_id": "m5.xlarge",
"autoscale": map[string]any{
"min_workers": int64(1),
"max_workers": int64(10),
},
}, cluster)
// Assert equality on the permissions
assert.Equal(t, &schema.ResourcePermissions{
JobId: "${databricks_cluster.my_cluster.id}",
AccessControl: []schema.ResourcePermissionsAccessControl{
{
PermissionLevel: "CAN_RUN",
UserName: "jack@gmail.com",
},
{
PermissionLevel: "CAN_MANAGE",
ServicePrincipalName: "sp",
},
},
}, out.Permissions["cluster_my_cluster"])
}


@@ -19,20 +19,39 @@ func convertDashboardResource(ctx context.Context, vin dyn.Value) (dyn.Value, error) {
 log.Debugf(ctx, "dashboard normalization diagnostic: %s", diag.Summary)
 }
-// Include "serialized_dashboard" field if "definition_path" is set.
-if path, ok := vin.Get("definition_path").AsString(); ok {
+// Include "serialized_dashboard" field if "file_path" is set.
+// Note: the Terraform resource supports "file_path" natively, but its
+// change detection mechanism doesn't work as expected at the time of writing (Sep 30).
+if path, ok := vout.Get("file_path").AsString(); ok {
 vout, err = dyn.Set(vout, "serialized_dashboard", dyn.V(fmt.Sprintf("${file(\"%s\")}", path)))
 if err != nil {
 return dyn.InvalidValue, fmt.Errorf("failed to set serialized_dashboard: %w", err)
 }
+// Drop the "file_path" field. It is mutually exclusive with "serialized_dashboard".
+vout, err = dyn.Walk(vout, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
+switch len(p) {
+case 0:
+return v, nil
+case 1:
+if p[0] == dyn.Key("file_path") {
+return v, dyn.ErrDrop
+}
+}
+// Skip everything else.
+return v, dyn.ErrSkip
+})
+if err != nil {
+return dyn.InvalidValue, fmt.Errorf("failed to drop file_path: %w", err)
+}
 }
 return vout, nil
 }
-type DashboardConverter struct{}
+type dashboardConverter struct{}
-func (DashboardConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
+func (dashboardConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
 vout, err := convertDashboardResource(ctx, vin)
 if err != nil {
 return err
@@ -51,5 +70,5 @@ func (DashboardConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
 }
 func init() {
-registerConverter("dashboards", DashboardConverter{})
+registerConverter("dashboards", dashboardConverter{})
 }
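Sketching the effect on a small example (paths illustrative): the local file reference is inlined through Terraform's `file()` function and `file_path` itself is dropped from the emitted `databricks_dashboard` resource:

```
resources:
  dashboards:
    my_dashboard:
      display_name: "My dashboard"
      warehouse_id: "abcdef1234567890"
      # Emitted as
      #   serialized_dashboard = "${file(\"./my_dashboard.lvdash.json\")}"
      # with no file_path key on the resource.
      file_path: ./my_dashboard.lvdash.json
```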


@@ -1,7 +1,80 @@
 package tfdyn
-import "testing"
+import (
+"context"
+"testing"
+"github.com/databricks/cli/bundle/config/resources"
+"github.com/databricks/cli/bundle/internal/tf/schema"
+"github.com/databricks/cli/libs/dyn"
+"github.com/databricks/cli/libs/dyn/convert"
+"github.com/stretchr/testify/assert"
+"github.com/stretchr/testify/require"
+)
 func TestConvertDashboard(t *testing.T) {
var src = resources.Dashboard{
DisplayName: "my dashboard",
WarehouseID: "f00dcafe",
ParentPath: "/some/path",
EmbedCredentials: true,
Permissions: []resources.Permission{
{
Level: "CAN_VIEW",
UserName: "jane@doe.com",
},
},
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert equality on the job
assert.Equal(t, map[string]any{
"display_name": "my dashboard",
"warehouse_id": "f00dcafe",
"parent_path": "/some/path",
"embed_credentials": true,
}, out.Dashboard["my_dashboard"])
// Assert equality on the permissions
assert.Equal(t, &schema.ResourcePermissions{
DashboardId: "${databricks_dashboard.my_dashboard.id}",
AccessControl: []schema.ResourcePermissionsAccessControl{
{
PermissionLevel: "CAN_VIEW",
UserName: "jane@doe.com",
},
},
}, out.Permissions["dashboard_my_dashboard"])
}
func TestConvertDashboardFilePath(t *testing.T) {
var src = resources.Dashboard{
FilePath: "some/path",
}
vin, err := convert.FromTyped(src, dyn.NilValue)
require.NoError(t, err)
ctx := context.Background()
out := schema.NewResources()
err = dashboardConverter{}.Convert(ctx, "my_dashboard", vin, out)
require.NoError(t, err)
// Assert that the "serialized_dashboard" is included.
assert.Subset(t, out.Dashboard["my_dashboard"], map[string]any{
"serialized_dashboard": "${file(\"some/path\")}",
})
// Assert that the "file_path" doesn't carry over.
assert.NotSubset(t, out.Dashboard["my_dashboard"], map[string]any{
"file_path": "some/path",
})
}


@@ -3,6 +3,7 @@ package tfdyn
 import (
 "context"
 "fmt"
+"sort"
 "github.com/databricks/cli/bundle/internal/tf/schema"
 "github.com/databricks/cli/libs/dyn"
@@ -19,8 +20,38 @@ func convertJobResource(ctx context.Context, vin dyn.Value) (dyn.Value, error) {
 log.Debugf(ctx, "job normalization diagnostic: %s", diag.Summary)
 }
+// Sort the tasks of each job in the bundle by task key. Sorting
+// the task keys ensures that the diff computed by terraform is correct and avoids
+// recreates. For more details see the NOTE at
+// https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job#example-usage
+// and https://github.com/databricks/terraform-provider-databricks/issues/4011
+// and https://github.com/databricks/cli/pull/1776
+vout := vin
+var err error
+tasks, ok := vin.Get("tasks").AsSequence()
+if ok {
+sort.Slice(tasks, func(i, j int) bool {
+// We sort the tasks by their task key. Tasks without task keys are ordered
+// before tasks with task keys. We do not error for those tasks
+// since the presence of a task_key is validated in the Jobs backend.
+tk1, ok := tasks[i].Get("task_key").AsString()
+if !ok {
+return true
+}
+tk2, ok := tasks[j].Get("task_key").AsString()
+if !ok {
+return false
+}
+return tk1 < tk2
+})
+vout, err = dyn.Set(vin, "tasks", dyn.V(tasks))
+if err != nil {
+return dyn.InvalidValue, err
+}
+}
 // Modify top-level keys.
-vout, err := renameKeys(vin, map[string]string{
+vout, err = renameKeys(vout, map[string]string{
 "tasks": "task",
 "job_clusters": "job_cluster",
 "parameters": "parameter",


@@ -42,8 +42,8 @@ func TestConvertJob(t *testing.T) {
 },
 Tasks: []jobs.Task{
 {
-TaskKey: "task_key",
-JobClusterKey: "job_cluster_key",
+TaskKey: "task_key_b",
+JobClusterKey: "job_cluster_key_b",
 Libraries: []compute.Library{
 {
 Pypi: &compute.PythonPyPiLibrary{
@@ -55,6 +55,17 @@
 },
 },
 },
 },
+{
+TaskKey: "task_key_a",
+JobClusterKey: "job_cluster_key_a",
+},
+{
+TaskKey: "task_key_c",
+JobClusterKey: "job_cluster_key_c",
+},
+{
+Description: "missing task key 😱",
+},
 },
 },
 Permissions: []resources.Permission{
@@ -100,8 +111,15 @@ func TestConvertJob(t *testing.T) {
 },
 "task": []any{
 map[string]any{
-"task_key": "task_key",
-"job_cluster_key": "job_cluster_key",
+"description": "missing task key 😱",
+},
+map[string]any{
+"task_key": "task_key_a",
+"job_cluster_key": "job_cluster_key_a",
+},
+map[string]any{
+"task_key": "task_key_b",
+"job_cluster_key": "job_cluster_key_b",
 "library": []any{
 map[string]any{
 "pypi": map[string]any{
@@ -113,6 +131,10 @@ func TestConvertJob(t *testing.T) {
 },
 },
 },
+map[string]any{
+"task_key": "task_key_c",
+"job_cluster_key": "job_cluster_key_c",
+},
 },
 }, out.Job["my_job"])


@@ -13,7 +13,7 @@ import (
 func TestParseResourcesStateWithNoFile(t *testing.T) {
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",
@@ -31,7 +31,7 @@ func TestParseResourcesStateWithNoFile(t *testing.T) {
 func TestParseResourcesStateWithExistingStateFile(t *testing.T) {
 ctx := context.Background()
 b := &bundle.Bundle{
-RootPath: t.TempDir(),
+BundleRootPath: t.TempDir(),
 Config: config.Root{
 Bundle: config.Bundle{
 Target: "whatever",


@@ -8,15 +8,13 @@ import (
 // SetLocation sets the location of all values in the bundle to the given path.
 // This is useful for testing where we need to associate configuration
 // with the path it is loaded from.
-func SetLocation(b *bundle.Bundle, prefix string, filePath string) {
+func SetLocation(b *bundle.Bundle, prefix string, locations []dyn.Location) {
 start := dyn.MustPathFromString(prefix)
 b.Config.Mutate(func(root dyn.Value) (dyn.Value, error) {
 return dyn.Walk(root, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
 // If the path has the given prefix, set the location.
 if p.HasPrefix(start) {
-return v.WithLocations([]dyn.Location{{
-File: filePath,
-}}), nil
+return v.WithLocations(locations), nil
 }

 // The path is not nested under the given prefix.

@@ -0,0 +1,20 @@
package bundletest
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/stretchr/testify/require"
)
func Mutate(t *testing.T, b *bundle.Bundle, f func(v dyn.Value) (dyn.Value, error)) {
diags := bundle.ApplyFunc(context.Background(), b, func(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
err := b.Config.Mutate(f)
require.NoError(t, err)
return nil
})
require.NoError(t, diags.Error())
}


@@ -51,9 +51,15 @@ func (r *root) Generate(path string) error {
 }
 func Run(ctx context.Context, schema *tfjson.ProviderSchema, path string) error {
-// Generate types for resources.
+// Generate types for resources
 var resources []*namedBlock
 for _, k := range sortKeys(schema.ResourceSchemas) {
+// Skipping all plugin framework struct generation.
+// TODO: This is a temporary fix, generation should be fixed in the future.
+if strings.HasSuffix(k, "_pluginframework") {
+continue
+}
 v := schema.ResourceSchemas[k]
 b := &namedBlock{
 filePattern: "resource_%s.go",
@@ -71,6 +77,12 @@ func Run(ctx context.Context, schema *tfjson.ProviderSchema, path string) error {
 // Generate types for data sources.
 var dataSources []*namedBlock
 for _, k := range sortKeys(schema.DataSourceSchemas) {
+// Skipping all plugin framework struct generation.
+// TODO: This is a temporary fix, generation should be fixed in the future.
+if strings.HasSuffix(k, "_pluginframework") {
+continue
+}
 v := schema.DataSourceSchemas[k]
 b := &namedBlock{
 filePattern: "data_source_%s.go",


@@ -1,3 +1,3 @@
 package schema
-const ProviderVersion = "1.50.0"
+const ProviderVersion = "1.52.0"


@@ -2,8 +2,16 @@
 package schema
-type DataSourceClusters struct {
-ClusterNameContains string `json:"cluster_name_contains,omitempty"`
-Id string `json:"id,omitempty"`
-Ids []string `json:"ids,omitempty"`
+type DataSourceClustersFilterBy struct {
+ClusterSources []string `json:"cluster_sources,omitempty"`
+ClusterStates []string `json:"cluster_states,omitempty"`
+IsPinned bool `json:"is_pinned,omitempty"`
+PolicyId string `json:"policy_id,omitempty"`
+}
+type DataSourceClusters struct {
+ClusterNameContains string `json:"cluster_name_contains,omitempty"`
+Id string `json:"id,omitempty"`
+Ids []string `json:"ids,omitempty"`
+FilterBy *DataSourceClustersFilterBy `json:"filter_by,omitempty"`
 }


@@ -19,6 +19,7 @@ type DataSourceExternalLocationExternalLocationInfo struct {
 CreatedBy string `json:"created_by,omitempty"`
 CredentialId string `json:"credential_id,omitempty"`
 CredentialName string `json:"credential_name,omitempty"`
+Fallback bool `json:"fallback,omitempty"`
 IsolationMode string `json:"isolation_mode,omitempty"`
 MetastoreId string `json:"metastore_id,omitempty"`
 Name string `json:"name,omitempty"`


@@ -18,12 +18,14 @@ type DataSourceShareObject struct {
 AddedBy string `json:"added_by,omitempty"`
 CdfEnabled bool `json:"cdf_enabled,omitempty"`
 Comment string `json:"comment,omitempty"`
+Content string `json:"content,omitempty"`
 DataObjectType string `json:"data_object_type"`
 HistoryDataSharingStatus string `json:"history_data_sharing_status,omitempty"`
 Name string `json:"name"`
 SharedAs string `json:"shared_as,omitempty"`
 StartVersion int `json:"start_version,omitempty"`
 Status string `json:"status,omitempty"`
+StringSharedAs string `json:"string_shared_as,omitempty"`
 Partition []DataSourceShareObjectPartition `json:"partition,omitempty"`
 }


```diff
@@ -2,20 +2,14 @@
 package schema
 
-type ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails struct {
-	ForcedForComplianceMode bool `json:"forced_for_compliance_mode,omitempty"`
-	UnavailableForDisabledEntitlement bool `json:"unavailable_for_disabled_entitlement,omitempty"`
-	UnavailableForNonEnterpriseTier bool `json:"unavailable_for_non_enterprise_tier,omitempty"`
-}
-
 type ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime struct {
-	Hours int `json:"hours,omitempty"`
-	Minutes int `json:"minutes,omitempty"`
+	Hours int `json:"hours"`
+	Minutes int `json:"minutes"`
 }
 
 type ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedSchedule struct {
-	DayOfWeek string `json:"day_of_week,omitempty"`
-	Frequency string `json:"frequency,omitempty"`
+	DayOfWeek string `json:"day_of_week"`
+	Frequency string `json:"frequency"`
 	WindowStartTime *ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindowWeekDayBasedScheduleWindowStartTime `json:"window_start_time,omitempty"`
 }
@@ -25,9 +19,9 @@ type ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspa
 type ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspace struct {
 	CanToggle bool `json:"can_toggle,omitempty"`
-	Enabled bool `json:"enabled,omitempty"`
+	Enabled bool `json:"enabled"`
+	EnablementDetails []any `json:"enablement_details,omitempty"`
 	RestartEvenIfNoUpdatesAvailable bool `json:"restart_even_if_no_updates_available,omitempty"`
-	EnablementDetails *ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceEnablementDetails `json:"enablement_details,omitempty"`
 	MaintenanceWindow *ResourceAutomaticClusterUpdateWorkspaceSettingAutomaticClusterUpdateWorkspaceMaintenanceWindow `json:"maintenance_window,omitempty"`
 }
```
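Several attributes in this file (and in the compliance and security-monitoring settings below) drop `,omitempty`, which changes serialization: zero values are now always emitted, matching the provider marking these attributes required. A minimal sketch of the difference, using a hypothetical one-field struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// optional mirrors the old tag; required mirrors the new one.
type optional struct {
	Hours int `json:"hours,omitempty"`
}

type required struct {
	Hours int `json:"hours"`
}

func main() {
	a, _ := json.Marshal(optional{}) // zero value is dropped
	b, _ := json.Marshal(required{}) // zero value is kept
	fmt.Println(string(a), string(b)) // prints: {} {"hours":0}
}
```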


```diff
@@ -176,6 +176,7 @@ type ResourceCluster struct {
 	IdempotencyToken string `json:"idempotency_token,omitempty"`
 	InstancePoolId string `json:"instance_pool_id,omitempty"`
 	IsPinned bool `json:"is_pinned,omitempty"`
+	NoWait bool `json:"no_wait,omitempty"`
 	NodeTypeId string `json:"node_type_id,omitempty"`
 	NumWorkers int `json:"num_workers,omitempty"`
 	PolicyId string `json:"policy_id,omitempty"`
```


```diff
@@ -3,8 +3,8 @@
 package schema
 
 type ResourceComplianceSecurityProfileWorkspaceSettingComplianceSecurityProfileWorkspace struct {
-	ComplianceStandards []string `json:"compliance_standards,omitempty"`
-	IsEnabled bool `json:"is_enabled,omitempty"`
+	ComplianceStandards []string `json:"compliance_standards"`
+	IsEnabled bool `json:"is_enabled"`
 }
 
 type ResourceComplianceSecurityProfileWorkspaceSetting struct {
```


```diff
@@ -3,7 +3,7 @@
 package schema
 
 type ResourceEnhancedSecurityMonitoringWorkspaceSettingEnhancedSecurityMonitoringWorkspace struct {
-	IsEnabled bool `json:"is_enabled,omitempty"`
+	IsEnabled bool `json:"is_enabled"`
 }
 
 type ResourceEnhancedSecurityMonitoringWorkspaceSetting struct {
```


```diff
@@ -95,14 +95,16 @@ type ResourceModelServingConfigServedEntities struct {
 }
 
 type ResourceModelServingConfigServedModels struct {
 	EnvironmentVars map[string]string `json:"environment_vars,omitempty"`
 	InstanceProfileArn string `json:"instance_profile_arn,omitempty"`
+	MaxProvisionedThroughput int `json:"max_provisioned_throughput,omitempty"`
+	MinProvisionedThroughput int `json:"min_provisioned_throughput,omitempty"`
 	ModelName string `json:"model_name"`
 	ModelVersion string `json:"model_version"`
 	Name string `json:"name,omitempty"`
 	ScaleToZeroEnabled bool `json:"scale_to_zero_enabled,omitempty"`
-	WorkloadSize string `json:"workload_size"`
+	WorkloadSize string `json:"workload_size,omitempty"`
 	WorkloadType string `json:"workload_type,omitempty"`
 }
 
 type ResourceModelServingConfigTrafficConfigRoutes struct {
```


```diff
@@ -18,20 +18,27 @@ type ResourceShareObject struct {
 	AddedBy string `json:"added_by,omitempty"`
 	CdfEnabled bool `json:"cdf_enabled,omitempty"`
 	Comment string `json:"comment,omitempty"`
+	Content string `json:"content,omitempty"`
 	DataObjectType string `json:"data_object_type"`
 	HistoryDataSharingStatus string `json:"history_data_sharing_status,omitempty"`
 	Name string `json:"name"`
 	SharedAs string `json:"shared_as,omitempty"`
 	StartVersion int `json:"start_version,omitempty"`
 	Status string `json:"status,omitempty"`
+	StringSharedAs string `json:"string_shared_as,omitempty"`
 	Partition []ResourceShareObjectPartition `json:"partition,omitempty"`
 }
 
 type ResourceShare struct {
+	Comment string `json:"comment,omitempty"`
 	CreatedAt int `json:"created_at,omitempty"`
 	CreatedBy string `json:"created_by,omitempty"`
 	Id string `json:"id,omitempty"`
 	Name string `json:"name"`
 	Owner string `json:"owner,omitempty"`
+	StorageLocation string `json:"storage_location,omitempty"`
+	StorageRoot string `json:"storage_root,omitempty"`
+	UpdatedAt int `json:"updated_at,omitempty"`
+	UpdatedBy string `json:"updated_by,omitempty"`
 	Object []ResourceShareObject `json:"object,omitempty"`
 }
```


```diff
@@ -15,6 +15,7 @@ type ResourceSqlTable struct {
 	ClusterKeys []string `json:"cluster_keys,omitempty"`
 	Comment string `json:"comment,omitempty"`
 	DataSourceFormat string `json:"data_source_format,omitempty"`
+	EffectiveProperties map[string]string `json:"effective_properties,omitempty"`
 	Id string `json:"id,omitempty"`
 	Name string `json:"name"`
 	Options map[string]string `json:"options,omitempty"`
```


```diff
@@ -21,7 +21,7 @@ type Root struct {
 const ProviderHost = "registry.terraform.io"
 const ProviderSource = "databricks/databricks"
-const ProviderVersion = "1.50.0"
+const ProviderVersion = "1.52.0"
 
 func NewRoot() *Root {
 	return &Root{
```
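This constant and the `ProviderVersion` in the schema file above must move in lockstep, since `NewRoot` bakes the pinned provider version into the generated Terraform root. A sketch of reading the pin (assumes code living inside this repository, as `bundle/internal/tf/schema` is an internal package):

```go
package example

import "github.com/databricks/cli/bundle/internal/tf/schema"

// providerPin returns the provider coordinates the CLI deploys with.
func providerPin() (host, source, version string) {
	return schema.ProviderHost, schema.ProviderSource, schema.ProviderVersion
}
```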


```diff
@@ -10,6 +10,7 @@ import (
 	"github.com/databricks/cli/bundle/config/resources"
 	"github.com/databricks/cli/bundle/internal/bundletest"
 	"github.com/databricks/cli/internal/testutil"
+	"github.com/databricks/cli/libs/dyn"
 	"github.com/databricks/databricks-sdk-go/service/compute"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/stretchr/testify/require"
@@ -61,7 +62,7 @@ func TestGlobReferencesExpandedForTaskLibraries(t *testing.T) {
 		},
 	}
 
-	bundletest.SetLocation(b, ".", filepath.Join(dir, "resource.yml"))
+	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})
 
 	diags := bundle.Apply(context.Background(), b, ExpandGlobReferences())
 	require.Empty(t, diags)
@@ -146,7 +147,7 @@ func TestGlobReferencesExpandedForForeachTaskLibraries(t *testing.T) {
 		},
 	}
 
-	bundletest.SetLocation(b, ".", filepath.Join(dir, "resource.yml"))
+	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})
 
 	diags := bundle.Apply(context.Background(), b, ExpandGlobReferences())
 	require.Empty(t, diags)
@@ -221,7 +222,7 @@ func TestGlobReferencesExpandedForEnvironmentsDeps(t *testing.T) {
 		},
 	}
 
-	bundletest.SetLocation(b, ".", filepath.Join(dir, "resource.yml"))
+	bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})
 
 	diags := bundle.Apply(context.Background(), b, ExpandGlobReferences())
 	require.Empty(t, diags)
```
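`bundletest.SetLocation` now takes a `[]dyn.Location` instead of a single file path, so a config path can map to one or more concrete source locations. A sketch of the updated call, extracted from the tests above (the `annotate` wrapper is illustrative, and `bundletest` is only importable from inside this repository):

```go
package example

import (
	"path/filepath"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/internal/bundletest"
	"github.com/databricks/cli/libs/dyn"
)

// annotate pins every value under the bundle's root config path to resource.yml.
func annotate(b *bundle.Bundle, dir string) {
	// Old call: bundletest.SetLocation(b, ".", filepath.Join(dir, "resource.yml"))
	bundletest.SetLocation(b, ".", []dyn.Location{
		{File: filepath.Join(dir, "resource.yml")},
	})
}
```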


```diff
@@ -15,9 +15,10 @@ import (
 	"github.com/databricks/cli/bundle/deploy/terraform"
 	"github.com/databricks/cli/bundle/libraries"
 	"github.com/databricks/cli/bundle/permissions"
-	"github.com/databricks/cli/bundle/python"
 	"github.com/databricks/cli/bundle/scripts"
+	"github.com/databricks/cli/bundle/trampoline"
 	"github.com/databricks/cli/libs/cmdio"
+	"github.com/databricks/cli/libs/sync"
 	terraformlib "github.com/databricks/cli/libs/terraform"
 	tfjson "github.com/hashicorp/terraform-json"
 )
@@ -128,7 +129,7 @@ properties such as the 'catalog' or 'storage' are changed:`
 }
 
 // The deploy phase deploys artifacts and resources.
-func Deploy() bundle.Mutator {
+func Deploy(outputHandler sync.OutputHandler) bundle.Mutator {
 	// Core mutators that CRUD resources and modify deployment state. These
 	// mutators need informed consent if they are potentially destructive.
 	deployCore := bundle.Defer(
@@ -156,8 +157,8 @@ func Deploy() bundle.Mutator {
 		artifacts.CleanUp(),
 		libraries.ExpandGlobReferences(),
 		libraries.Upload(),
-		python.TransformWheelTask(),
-		files.Upload(),
+		trampoline.TransformWheelTask(),
+		files.Upload(outputHandler),
 		deploy.StateUpdate(),
 		deploy.StatePush(),
 		permissions.ApplyWorkspaceRootPermissions(),
```
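`Deploy` now threads a `sync.OutputHandler` through to `files.Upload`, so the caller decides how file-sync progress is surfaced. A sketch of a call site, assuming a `nil` handler simply disables progress output:

```go
package example

import (
	"context"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/phases"
)

// deployQuietly runs the deploy phase without streaming sync output.
// Passing a non-nil sync.OutputHandler would receive upload events instead.
func deployQuietly(ctx context.Context, b *bundle.Bundle) error {
	diags := bundle.Apply(ctx, b, phases.Deploy(nil))
	return diags.Error()
}
```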


```diff
@@ -9,8 +9,8 @@ import (
 	"github.com/databricks/cli/bundle/deploy/metadata"
 	"github.com/databricks/cli/bundle/deploy/terraform"
 	"github.com/databricks/cli/bundle/permissions"
-	"github.com/databricks/cli/bundle/python"
 	"github.com/databricks/cli/bundle/scripts"
+	"github.com/databricks/cli/bundle/trampoline"
 )
 
 // The initialize phase fills in defaults and connects to the workspace.
@@ -57,6 +57,7 @@ func Initialize() bundle.Mutator {
 		),
 		mutator.SetRunAs(),
 		mutator.OverrideCompute(),
+		mutator.ConfigureDashboardDefaults(),
 		mutator.ProcessTargetMode(),
 		mutator.ApplyPresets(),
 		mutator.DefaultQueueing(),
@@ -66,7 +67,7 @@ func Initialize() bundle.Mutator {
 		mutator.ConfigureWSFS(),
 		mutator.TranslatePaths(),
-		python.WrapperWarning(),
+		trampoline.WrapperWarning(),
 		permissions.ApplyBundlePermissions(),
 		permissions.FilterCurrentUser(),
 		metadata.AnnotateJobs(),
```


```diff
@@ -148,7 +148,7 @@ func renderDiagnostics(out io.Writer, b *bundle.Bundle, diags diag.Diagnostics)
 			// Make location relative to bundle root
 			if d.Locations[i].File != "" {
-				out, err := filepath.Rel(b.RootPath, d.Locations[i].File)
+				out, err := filepath.Rel(b.BundleRootPath, d.Locations[i].File)
 				// if we can't relativize the path, just use path as-is
 				if err == nil {
 					d.Locations[i].File = out
```


```diff
@@ -30,7 +30,7 @@ func (m *script) Name() string {
 }
 
 func (m *script) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
-	executor, err := exec.NewCommandExecutor(b.RootPath)
+	executor, err := exec.NewCommandExecutor(b.BundleRootPath)
 	if err != nil {
 		return diag.FromErr(err)
 	}
```
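With the rename, script hooks are still executed with the bundle root as the working directory; only the field name changed. A sketch of the call under the new name (the `Exec` helper and the `echo` command are assumptions for illustration; see `libs/exec` for the actual executor surface):

```go
package example

import (
	"context"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/libs/exec"
)

// runInBundleRoot spawns a command rooted at the bundle's root path.
func runInBundleRoot(ctx context.Context, b *bundle.Bundle) ([]byte, error) {
	executor, err := exec.NewCommandExecutor(b.BundleRootPath)
	if err != nil {
		return nil, err
	}
	return executor.Exec(ctx, "echo hello") // assumed helper; see libs/exec
}
```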


```diff
@@ -23,7 +23,7 @@ func TestExecutesHook(t *testing.T) {
 		},
 	}
 
-	executor, err := exec.NewCommandExecutor(b.RootPath)
+	executor, err := exec.NewCommandExecutor(b.BundleRootPath)
 	require.NoError(t, err)
 
 	_, out, err := executeHook(context.Background(), executor, b, config.ScriptPreBuild)
 	require.NoError(t, err)
```


```diff
@@ -0,0 +1,36 @@
+bundle:
+  name: clusters
+
+workspace:
+  host: https://acme.cloud.databricks.com/
+
+resources:
+  clusters:
+    foo:
+      cluster_name: foo
+      num_workers: 2
+      node_type_id: "i3.xlarge"
+      autoscale:
+        min_workers: 2
+        max_workers: 7
+      spark_version: "13.3.x-scala2.12"
+      spark_conf:
+        "spark.executor.memory": "2g"
+
+targets:
+  default:
+
+  development:
+    resources:
+      clusters:
+        foo:
+          cluster_name: foo-override
+          num_workers: 3
+          node_type_id: "m5.xlarge"
+          autoscale:
+            min_workers: 1
+            max_workers: 3
+          spark_version: "15.2.x-scala2.12"
+          spark_conf:
+            "spark.executor.memory": "4g"
+            "spark.executor.memory2": "4g"
```


```diff
@@ -0,0 +1,36 @@
+package config_tests
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func TestClusters(t *testing.T) {
+	b := load(t, "./clusters")
+	assert.Equal(t, "clusters", b.Config.Bundle.Name)
+
+	cluster := b.Config.Resources.Clusters["foo"]
+	assert.Equal(t, "foo", cluster.ClusterName)
+	assert.Equal(t, "13.3.x-scala2.12", cluster.SparkVersion)
+	assert.Equal(t, "i3.xlarge", cluster.NodeTypeId)
+	assert.Equal(t, 2, cluster.NumWorkers)
+	assert.Equal(t, "2g", cluster.SparkConf["spark.executor.memory"])
+	assert.Equal(t, 2, cluster.Autoscale.MinWorkers)
+	assert.Equal(t, 7, cluster.Autoscale.MaxWorkers)
+}
+
+func TestClustersOverride(t *testing.T) {
+	b := loadTarget(t, "./clusters", "development")
+	assert.Equal(t, "clusters", b.Config.Bundle.Name)
+
+	cluster := b.Config.Resources.Clusters["foo"]
+	assert.Equal(t, "foo-override", cluster.ClusterName)
+	assert.Equal(t, "15.2.x-scala2.12", cluster.SparkVersion)
+	assert.Equal(t, "m5.xlarge", cluster.NodeTypeId)
+	assert.Equal(t, 3, cluster.NumWorkers)
+	assert.Equal(t, "4g", cluster.SparkConf["spark.executor.memory"])
+	assert.Equal(t, "4g", cluster.SparkConf["spark.executor.memory2"])
+	assert.Equal(t, 1, cluster.Autoscale.MinWorkers)
+	assert.Equal(t, 3, cluster.Autoscale.MaxWorkers)
+}
```

Some files were not shown because too many files have changed in this diff.