Compare commits

...

15 Commits

Author SHA1 Message Date
Pieter Noordhuis 866bfc5be7
Include JSON schema for dashboards 2024-10-24 13:35:22 +02:00
Pieter Noordhuis b8e9a293a1
Merge remote-tracking branch 'origin/main' into dashboards 2024-10-24 13:29:01 +02:00
Andrew Nester ab622e65bb
[Release] Release v0.231.0 (#1856)
CLI:
* Added JSON input validation for CLI commands
([#1771](https://github.com/databricks/cli/pull/1771)).
* Support Git worktrees for `sync`
([#1831](https://github.com/databricks/cli/pull/1831)).

Bundles:
* Add `bundle summary` to display URLs for deployed resources
([#1731](https://github.com/databricks/cli/pull/1731)).
* Added a warning when incorrect permissions are used for the
`/Workspace/Shared` bundle root
([#1821](https://github.com/databricks/cli/pull/1821)).
* Show actionable errors for collaborative deployment scenarios
([#1386](https://github.com/databricks/cli/pull/1386)).
* Fix path to repository-wide exclude file
([#1837](https://github.com/databricks/cli/pull/1837)).
* Fixed typo in converting cluster permissions
([#1826](https://github.com/databricks/cli/pull/1826)).
* Ignore metastore permission error during template generation
([#1819](https://github.com/databricks/cli/pull/1819)).
* Handle normalization of `dyn.KindTime` into an any type
([#1836](https://github.com/databricks/cli/pull/1836)).
* Added support for pip options in environment dependencies
([#1842](https://github.com/databricks/cli/pull/1842)).
* Fix race condition when restarting continuous jobs
([#1849](https://github.com/databricks/cli/pull/1849)).
* Fix pipeline in default-python template not working for certain
workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
* Add "output" flag to the bundle sync command
([#1853](https://github.com/databricks/cli/pull/1853)).

Internal:
* Move utility functions dealing with IAM to libs/iamutil
([#1820](https://github.com/databricks/cli/pull/1820)).
* Remove unused `IS_OWNER` constant
([#1823](https://github.com/databricks/cli/pull/1823)).
* Assert SDK version is consistent in the CLI generation process
([#1814](https://github.com/databricks/cli/pull/1814)).
* Fixed unmarshalling json input into `interface{}` type
([#1832](https://github.com/databricks/cli/pull/1832)).
* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure
environments ([#1833](https://github.com/databricks/cli/pull/1833)).
* Add behavioral tests for examples from the YAML spec
([#1835](https://github.com/databricks/cli/pull/1835)).
* Remove Terraform conversion function that's no longer used
([#1840](https://github.com/databricks/cli/pull/1840)).
* Encode assumptions about the dashboards API in a test
([#1839](https://github.com/databricks/cli/pull/1839)).
* Add script to make testing of code on branches easier
([#1844](https://github.com/databricks/cli/pull/1844)).

API Changes:
* Added `databricks disable-legacy-dbfs` command group.

OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
Dependency updates:
* Upgrade TF provider to 1.54.0
([#1852](https://github.com/databricks/cli/pull/1852)).
* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0
([#1843](https://github.com/databricks/cli/pull/1843)).
2024-10-23 14:08:27 +00:00
Ilia Babanov 55a055d0f5
Add "output" flag to the bundle sync command (#1853)
## Changes
We want to use 'bundle sync' in the VS Code extension before running a
file as an ad-hoc job (or through the context API). Right now we use
bundle deploy in these cases, but deploying bundle resources is not
always expected when you just want to quickly run a file. Sync makes
more sense in these cases, but we still want verbose output to see
what's happening.

The 'deploy' command has a hidden 'verbose' flag. For sync I've added an
'output' flag that handles both JSON and text formats, similar to how
it's done in the non-bundle `sync` command. The flag is not hidden
(although we still don't show any output by default if the flag is not
set).

VSCode Extension PR:
https://github.com/databricks/databricks-vscode/pull/1401


## Tests
Manually
2024-10-23 11:08:12 +00:00
Lennart Kats (databricks) 60c153c0e7
Fix pipeline in default-python template not working for certain workspaces (#1854)
Change the default-python template to not set the `catalog` field for
the pipeline for workspaces that set `hive_metastore` as the default
catalog. The Pipelines service currently returns an error when that
value is used for the `catalog` field.

This is the simplest fix for this issue, which was reported by a
customer. As a follow-up, we should look at whether we want to prompt for
a catalog instead, possibly just for this specific scenario.
2024-10-22 15:52:46 +00:00
shreyas-goenka 3bab21e72e
Fix race condition when restarting continuous jobs (#1849)
## Changes
We don't need to cancel existing runs when the job is continuous and
unpaused. The `/jobs/run-now` command will cancel the existing run and
trigger a new one automatically.

Cancelling the job manually can cause a race condition where the
manual trigger from the CLI and the continuous trigger from the job
configuration happen at the same time. This PR prevents that from
happening.

## Tests
Unit tests and manually
2024-10-22 14:59:17 +00:00
Andrew Nester 68d69d6e0b
Upgrade TF provider to 1.54.0 (#1852)
## Changes
Upgrade TF provider to 1.54.0
2024-10-22 10:43:43 +00:00
dependabot[bot] f8bb3a8d72
Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 (#1843)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.48.0 to 0.49.0.
Release notes (sourced from [github.com/databricks/databricks-sdk-go's releases](https://github.com/databricks/databricks-sdk-go/releases)):

v0.49.0

API Changes:

* Added [w.DisableLegacyDbfs](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DisableLegacyDbfsAPI) workspace-level service.
* Added `UnityCatalogProvisioningState` field for [catalog.OnlineTable](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable).
* Added `IsTruncated` field for [dashboards.Result](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Result).
* Added `EffectiveBudgetPolicyId` field for [jobs.BaseJob](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#BaseJob).
* Added `BudgetPolicyId` field for [jobs.CreateJob](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CreateJob).
* Added `EffectiveBudgetPolicyId` field for [jobs.Job](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#Job).
* Added `BudgetPolicyId` field for [jobs.JobSettings](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobSettings).
* Added `BudgetPolicyId` field for [jobs.SubmitRun](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun).
* Added `Report` field for [pipelines.IngestionConfig](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#IngestionConfig).
* Added `SequenceBy` field for [pipelines.TableSpecificConfig](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/pipelines#TableSpecificConfig).
* Added `NotifyOnOk` field for [sql.Alert](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#Alert).
* Added `NotifyOnOk` field for [sql.CreateAlertRequestAlert](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#CreateAlertRequestAlert).
* Added `NotifyOnOk` field for [sql.ListAlertsResponseAlert](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#ListAlertsResponseAlert).
* Added `NotifyOnOk` field for [sql.UpdateAlertRequestAlert](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sql#UpdateAlertRequestAlert).

OpenAPI SHA: cf9c61453990df0f9453670f2fe68e1b128647a2, Date: 2024-10-14
Commits:

* [b47ecac](b47ecac9ea) [Release] Release v0.49.0 ([#1062](https://redirect.github.com/databricks/databricks-sdk-go/issues/1062))
* See full diff in [compare view](https://github.com/databricks/databricks-sdk-go/compare/v0.48.0...v0.49.0)

Most Recent Ignore Conditions Applied to This Pull Request:

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |



Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-10-22 09:37:01 +00:00
Stephen Macke 571076d5e1
Support Git worktrees for `sync` (#1831)
## Changes

This change allows the `sync` command to work from [git
worktrees](https://git-scm.com/docs/git-worktree).

## Tests

* Added unit tests for traversal of worktree-related files.
* Manually confirmed that synchronization of files from a main checkout,
as well as a worktree, observed the same ignore rules (both locally
defined as well as from `$GIT_DIR/info/exclude`).

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-21 18:27:07 +00:00
Pieter Noordhuis ca45e53f42
Add script to make testing of code on branches easier (#1844)
## Changes

Convenience script to exec into a shell where a CLI build for a specific
branch is made available.

## Tests

Manually from `/tmp` with `bash <([path]) demo-dashboards`.
2024-10-21 17:56:17 +00:00
Andrew Nester ffdbec87cc
Added support for pip options in environment dependencies (#1842)
## Changes
Added support for specifying pip options, such as `--extra-index-url`,
in environment dependencies:

```
environments:
  - environment_key: Default
    spec:
      client: "1"
      dependencies:
        - --extra-index-url https://foo@bar.com/packages/smth somepackage
        - json==1.0.0
```

## Tests
Added regression test

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-21 11:45:39 +00:00
Pieter Noordhuis eefda8c198
Fix path to repository-wide exclude file (#1837)
## Changes

This file is at `info/exclude`, and not `info/excludes`.

Also see https://git-scm.com/docs/gitignore.

## Tests

Manually confirmed that these ignore patterns are now picked up. I
created a repository with a pattern in this file and ran `sync` to
confirm it ignores files matching the pattern.
2024-10-18 15:46:39 +00:00
Pieter Noordhuis c9770503f8
Encode assumptions about the dashboards API in a test (#1839)
## Changes

Dashboards can be imported either via their own CRUD API, or via the
workspace import API. This test encodes the assumptions we can make
about the API behavior. More specifically, the identity of the dashboard
is retained on workspace import, as are the properties that aren't
surfaced in the workspace import API.

## Tests

The integration test passes.
2024-10-18 15:42:05 +00:00
Pieter Noordhuis 3b1fb6d2df
Remove Terraform conversion function that's no longer used (#1840)
## Changes

In #1218, the `BundleToTerraform` function was replaced by a version
based on the dynamic configuration tree. Since then, it has only been
used in tests to confirm that the output of the old function was equal
to the output of the new function. We no longer need this and can
exclusively rely on the version based on the dynamic configuration tree.

## Tests

Tests pass.
2024-10-18 15:38:10 +00:00
Andrew Nester 0c9c90208f
Added a warning when incorrect permissions are used for `/Workspace/Shared` bundle root (#1821)
## Changes
Added a warning when incorrect permissions are used for the
`/Workspace/Shared` bundle root

## Tests
Added unit test

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-18 15:37:16 +00:00
43 changed files with 1492 additions and 376 deletions


@@ -1 +1 @@
-0c86ea6dbd9a730c24ff0d4e509603e476955ac5
+cf9c61453990df0f9453670f2fe68e1b128647a2

.gitattributes

@@ -54,6 +54,7 @@ cmd/workspace/dashboards/dashboards.go linguist-generated=true
cmd/workspace/data-sources/data-sources.go linguist-generated=true
cmd/workspace/default-namespace/default-namespace.go linguist-generated=true
cmd/workspace/disable-legacy-access/disable-legacy-access.go linguist-generated=true
+cmd/workspace/disable-legacy-dbfs/disable-legacy-dbfs.go linguist-generated=true
cmd/workspace/enhanced-security-monitoring/enhanced-security-monitoring.go linguist-generated=true
cmd/workspace/experiments/experiments.go linguist-generated=true
cmd/workspace/external-locations/external-locations.go linguist-generated=true


@@ -1,5 +1,43 @@
# Version changelog

+## [Release] Release v0.231.0
+
+CLI:
+* Added JSON input validation for CLI commands ([#1771](https://github.com/databricks/cli/pull/1771)).
+* Support Git worktrees for `sync` ([#1831](https://github.com/databricks/cli/pull/1831)).
+
+Bundles:
+* Add `bundle summary` to display URLs for deployed resources ([#1731](https://github.com/databricks/cli/pull/1731)).
+* Added a warning when incorrect permissions are used for the `/Workspace/Shared` bundle root ([#1821](https://github.com/databricks/cli/pull/1821)).
+* Show actionable errors for collaborative deployment scenarios ([#1386](https://github.com/databricks/cli/pull/1386)).
+* Fix path to repository-wide exclude file ([#1837](https://github.com/databricks/cli/pull/1837)).
+* Fixed typo in converting cluster permissions ([#1826](https://github.com/databricks/cli/pull/1826)).
+* Ignore metastore permission error during template generation ([#1819](https://github.com/databricks/cli/pull/1819)).
+* Handle normalization of `dyn.KindTime` into an any type ([#1836](https://github.com/databricks/cli/pull/1836)).
+* Added support for pip options in environment dependencies ([#1842](https://github.com/databricks/cli/pull/1842)).
+* Fix race condition when restarting continuous jobs ([#1849](https://github.com/databricks/cli/pull/1849)).
+* Fix pipeline in default-python template not working for certain workspaces ([#1854](https://github.com/databricks/cli/pull/1854)).
+* Add "output" flag to the bundle sync command ([#1853](https://github.com/databricks/cli/pull/1853)).
+
+Internal:
+* Move utility functions dealing with IAM to libs/iamutil ([#1820](https://github.com/databricks/cli/pull/1820)).
+* Remove unused `IS_OWNER` constant ([#1823](https://github.com/databricks/cli/pull/1823)).
+* Assert SDK version is consistent in the CLI generation process ([#1814](https://github.com/databricks/cli/pull/1814)).
+* Fixed unmarshalling json input into `interface{}` type ([#1832](https://github.com/databricks/cli/pull/1832)).
+* Fix `TestAccFsMkdirWhenFileExistsAtPath` in isolated Azure environments ([#1833](https://github.com/databricks/cli/pull/1833)).
+* Add behavioral tests for examples from the YAML spec ([#1835](https://github.com/databricks/cli/pull/1835)).
+* Remove Terraform conversion function that's no longer used ([#1840](https://github.com/databricks/cli/pull/1840)).
+* Encode assumptions about the dashboards API in a test ([#1839](https://github.com/databricks/cli/pull/1839)).
+* Add script to make testing of code on branches easier ([#1844](https://github.com/databricks/cli/pull/1844)).
+
+API Changes:
+* Added `databricks disable-legacy-dbfs` command group.
+
+OpenAPI commit cf9c61453990df0f9453670f2fe68e1b128647a2 (2024-10-14)
+
+Dependency updates:
+* Upgrade TF provider to 1.54.0 ([#1852](https://github.com/databricks/cli/pull/1852)).
+* Bump github.com/databricks/databricks-sdk-go from 0.48.0 to 0.49.0 ([#1843](https://github.com/databricks/cli/pull/1843)).
+
## [Release] Release v0.230.0

Notable changes for Databricks Asset Bundles:


@@ -699,6 +699,9 @@ func TestTranslatePathJobEnvironments(t *testing.T) {
				"../dist/env2.whl",
				"simplejson",
				"/Workspace/Users/foo@bar.com/test.whl",
+				"--extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple foobar",
+				"foobar --extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple",
+				"https://foo@bar.com/packages/pypi/simple",
			},
		},
	},
@@ -719,6 +722,9 @@ func TestTranslatePathJobEnvironments(t *testing.T) {
	assert.Equal(t, strings.Join([]string{".", "dist", "env2.whl"}, string(os.PathSeparator)), b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[1])
	assert.Equal(t, "simplejson", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[2])
	assert.Equal(t, "/Workspace/Users/foo@bar.com/test.whl", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[3])
+	assert.Equal(t, "--extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple foobar", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[4])
+	assert.Equal(t, "foobar --extra-index-url https://name:token@gitlab.com/api/v4/projects/9876/packages/pypi/simple", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[5])
+	assert.Equal(t, "https://foo@bar.com/packages/pypi/simple", b.Config.Resources.Jobs["job"].JobSettings.Environments[0].Spec.Dependencies[6])
}

func TestTranslatePathWithComplexVariables(t *testing.T) {


@@ -2,9 +2,7 @@ package terraform

import (
	"context"
-	"encoding/json"
	"fmt"
-	"sort"

	"github.com/databricks/cli/bundle/config"
	"github.com/databricks/cli/bundle/config/resources"
@@ -14,244 +12,6 @@ import (
	tfjson "github.com/hashicorp/terraform-json"
)

func conv(from any, to any) {
buf, _ := json.Marshal(from)
json.Unmarshal(buf, &to)
}
func convPermissions(acl []resources.Permission) *schema.ResourcePermissions {
if len(acl) == 0 {
return nil
}
resource := schema.ResourcePermissions{}
for _, ac := range acl {
resource.AccessControl = append(resource.AccessControl, convPermission(ac))
}
return &resource
}
func convPermission(ac resources.Permission) schema.ResourcePermissionsAccessControl {
dst := schema.ResourcePermissionsAccessControl{
PermissionLevel: ac.Level,
}
if ac.UserName != "" {
dst.UserName = ac.UserName
}
if ac.GroupName != "" {
dst.GroupName = ac.GroupName
}
if ac.ServicePrincipalName != "" {
dst.ServicePrincipalName = ac.ServicePrincipalName
}
return dst
}
func convGrants(acl []resources.Grant) *schema.ResourceGrants {
if len(acl) == 0 {
return nil
}
resource := schema.ResourceGrants{}
for _, ac := range acl {
resource.Grant = append(resource.Grant, schema.ResourceGrantsGrant{
Privileges: ac.Privileges,
Principal: ac.Principal,
})
}
return &resource
}
// BundleToTerraform converts resources in a bundle configuration
// to the equivalent Terraform JSON representation.
//
// Note: This function is an older implementation of the conversion logic. It is
// no longer used in any code paths. It is kept around to be used in tests.
// New resources do not need to modify this function and can instead define
// the conversion logic in the tfdyn package.
func BundleToTerraform(config *config.Root) *schema.Root {
tfroot := schema.NewRoot()
tfroot.Provider = schema.NewProviders()
tfroot.Resource = schema.NewResources()
noResources := true
for k, src := range config.Resources.Jobs {
noResources = false
var dst schema.ResourceJob
conv(src, &dst)
if src.JobSettings != nil {
sort.Slice(src.JobSettings.Tasks, func(i, j int) bool {
return src.JobSettings.Tasks[i].TaskKey < src.JobSettings.Tasks[j].TaskKey
})
for _, v := range src.Tasks {
var t schema.ResourceJobTask
conv(v, &t)
for _, v_ := range v.Libraries {
var l schema.ResourceJobTaskLibrary
conv(v_, &l)
t.Library = append(t.Library, l)
}
// Convert for_each_task libraries
if v.ForEachTask != nil {
for _, v_ := range v.ForEachTask.Task.Libraries {
var l schema.ResourceJobTaskForEachTaskTaskLibrary
conv(v_, &l)
t.ForEachTask.Task.Library = append(t.ForEachTask.Task.Library, l)
}
}
dst.Task = append(dst.Task, t)
}
for _, v := range src.JobClusters {
var t schema.ResourceJobJobCluster
conv(v, &t)
dst.JobCluster = append(dst.JobCluster, t)
}
// Unblock downstream work. To be addressed more generally later.
if git := src.GitSource; git != nil {
dst.GitSource = &schema.ResourceJobGitSource{
Url: git.GitUrl,
Branch: git.GitBranch,
Commit: git.GitCommit,
Provider: string(git.GitProvider),
Tag: git.GitTag,
}
}
for _, v := range src.Parameters {
var t schema.ResourceJobParameter
conv(v, &t)
dst.Parameter = append(dst.Parameter, t)
}
}
tfroot.Resource.Job[k] = &dst
// Configure permissions for this resource.
if rp := convPermissions(src.Permissions); rp != nil {
rp.JobId = fmt.Sprintf("${databricks_job.%s.id}", k)
tfroot.Resource.Permissions["job_"+k] = rp
}
}
for k, src := range config.Resources.Pipelines {
noResources = false
var dst schema.ResourcePipeline
conv(src, &dst)
if src.PipelineSpec != nil {
for _, v := range src.Libraries {
var l schema.ResourcePipelineLibrary
conv(v, &l)
dst.Library = append(dst.Library, l)
}
for _, v := range src.Clusters {
var l schema.ResourcePipelineCluster
conv(v, &l)
dst.Cluster = append(dst.Cluster, l)
}
for _, v := range src.Notifications {
var l schema.ResourcePipelineNotification
conv(v, &l)
dst.Notification = append(dst.Notification, l)
}
}
tfroot.Resource.Pipeline[k] = &dst
// Configure permissions for this resource.
if rp := convPermissions(src.Permissions); rp != nil {
rp.PipelineId = fmt.Sprintf("${databricks_pipeline.%s.id}", k)
tfroot.Resource.Permissions["pipeline_"+k] = rp
}
}
for k, src := range config.Resources.Models {
noResources = false
var dst schema.ResourceMlflowModel
conv(src, &dst)
tfroot.Resource.MlflowModel[k] = &dst
// Configure permissions for this resource.
if rp := convPermissions(src.Permissions); rp != nil {
rp.RegisteredModelId = fmt.Sprintf("${databricks_mlflow_model.%s.registered_model_id}", k)
tfroot.Resource.Permissions["mlflow_model_"+k] = rp
}
}
for k, src := range config.Resources.Experiments {
noResources = false
var dst schema.ResourceMlflowExperiment
conv(src, &dst)
tfroot.Resource.MlflowExperiment[k] = &dst
// Configure permissions for this resource.
if rp := convPermissions(src.Permissions); rp != nil {
rp.ExperimentId = fmt.Sprintf("${databricks_mlflow_experiment.%s.id}", k)
tfroot.Resource.Permissions["mlflow_experiment_"+k] = rp
}
}
for k, src := range config.Resources.ModelServingEndpoints {
noResources = false
var dst schema.ResourceModelServing
conv(src, &dst)
tfroot.Resource.ModelServing[k] = &dst
// Configure permissions for this resource.
if rp := convPermissions(src.Permissions); rp != nil {
rp.ServingEndpointId = fmt.Sprintf("${databricks_model_serving.%s.serving_endpoint_id}", k)
tfroot.Resource.Permissions["model_serving_"+k] = rp
}
}
for k, src := range config.Resources.RegisteredModels {
noResources = false
var dst schema.ResourceRegisteredModel
conv(src, &dst)
tfroot.Resource.RegisteredModel[k] = &dst
// Configure permissions for this resource.
if rp := convGrants(src.Grants); rp != nil {
rp.Function = fmt.Sprintf("${databricks_registered_model.%s.id}", k)
tfroot.Resource.Grants["registered_model_"+k] = rp
}
}
for k, src := range config.Resources.QualityMonitors {
noResources = false
var dst schema.ResourceQualityMonitor
conv(src, &dst)
tfroot.Resource.QualityMonitor[k] = &dst
}
for k, src := range config.Resources.Clusters {
noResources = false
var dst schema.ResourceCluster
conv(src, &dst)
tfroot.Resource.Cluster[k] = &dst
}
// We explicitly set "resource" to nil to omit it from a JSON encoding.
// This is required because the terraform CLI requires >= 1 resources defined
// if the "resource" property is used in a .tf.json file.
if noResources {
tfroot.Resource = nil
}
return tfroot
}

// BundleToTerraformWithDynValue converts resources in a bundle configuration
// to the equivalent Terraform JSON representation.
func BundleToTerraformWithDynValue(ctx context.Context, root dyn.Value) (*schema.Root, error) {


@@ -2,7 +2,6 @@ package terraform

import (
	"context"
-	"encoding/json"
	"reflect"
	"testing"

@@ -21,6 +20,27 @@ import (
	"github.com/stretchr/testify/require"
)

+func produceTerraformConfiguration(t *testing.T, config config.Root) *schema.Root {
+	vin, err := convert.FromTyped(config, dyn.NilValue)
+	require.NoError(t, err)
+	out, err := BundleToTerraformWithDynValue(context.Background(), vin)
+	require.NoError(t, err)
+	return out
+}
+
+func convertToResourceStruct[T any](t *testing.T, resource *T, data any) {
+	require.NotNil(t, resource)
+	require.NotNil(t, data)
+
+	// Convert data to a dyn.Value.
+	vin, err := convert.FromTyped(data, dyn.NilValue)
+	require.NoError(t, err)
+
+	// Convert the dyn.Value to a struct.
+	err = convert.ToTyped(resource, vin)
+	require.NoError(t, err)
+}
+
func TestBundleToTerraformJob(t *testing.T) {
	var src = resources.Job{
		JobSettings: &jobs.JobSettings{
@@ -58,8 +78,9 @@ func TestBundleToTerraformJob(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

	assert.Equal(t, "my job", resource.Name)
	assert.Len(t, resource.JobCluster, 1)
@@ -68,8 +89,6 @@ func TestBundleToTerraformJob(t *testing.T) {
	assert.Equal(t, "param1", resource.Parameter[0].Name)
	assert.Equal(t, "param2", resource.Parameter[1].Name)
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformJobPermissions(t *testing.T) {
@@ -90,15 +109,14 @@ func TestBundleToTerraformJobPermissions(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["job_my_job"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["job_my_job"])

	assert.NotEmpty(t, resource.JobId)
	assert.Len(t, resource.AccessControl, 1)
	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformJobTaskLibraries(t *testing.T) {
@@ -128,15 +146,14 @@ func TestBundleToTerraformJobTaskLibraries(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

	assert.Equal(t, "my job", resource.Name)
	require.Len(t, resource.Task, 1)
	require.Len(t, resource.Task[0].Library, 1)
	assert.Equal(t, "mlflow", resource.Task[0].Library[0].Pypi.Package)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformForEachTaskLibraries(t *testing.T) {
@@ -172,15 +189,14 @@ func TestBundleToTerraformForEachTaskLibraries(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Job["my_job"].(*schema.ResourceJob)
+	var resource schema.ResourceJob
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Job["my_job"])

	assert.Equal(t, "my job", resource.Name)
	require.Len(t, resource.Task, 1)
	require.Len(t, resource.Task[0].ForEachTask.Task.Library, 1)
	assert.Equal(t, "mlflow", resource.Task[0].ForEachTask.Task.Library[0].Pypi.Package)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformPipeline(t *testing.T) {
@@ -230,8 +246,9 @@ func TestBundleToTerraformPipeline(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Pipeline["my_pipeline"].(*schema.ResourcePipeline)
+	var resource schema.ResourcePipeline
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Pipeline["my_pipeline"])

	assert.Equal(t, "my pipeline", resource.Name)
	assert.Len(t, resource.Library, 2)
@@ -241,8 +258,6 @@ func TestBundleToTerraformPipeline(t *testing.T) {
	assert.Equal(t, resource.Notification[1].Alerts, []string{"on-update-failure", "on-flow-failure"})
	assert.Equal(t, resource.Notification[1].EmailRecipients, []string{"jane@doe.com", "john@doe.com"})
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformPipelinePermissions(t *testing.T) {
@@ -263,15 +278,14 @@ func TestBundleToTerraformPipelinePermissions(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["pipeline_my_pipeline"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["pipeline_my_pipeline"])

	assert.NotEmpty(t, resource.PipelineId)
	assert.Len(t, resource.AccessControl, 1)
	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformModel(t *testing.T) {
@@ -300,8 +314,9 @@ func TestBundleToTerraformModel(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.MlflowModel["my_model"].(*schema.ResourceMlflowModel)
+	var resource schema.ResourceMlflowModel
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.MlflowModel["my_model"])

	assert.Equal(t, "name", resource.Name)
	assert.Equal(t, "description", resource.Description)
@@ -311,8 +326,6 @@ func TestBundleToTerraformModel(t *testing.T) {
	assert.Equal(t, "k2", resource.Tags[1].Key)
	assert.Equal(t, "v2", resource.Tags[1].Value)
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformModelPermissions(t *testing.T) {
@@ -336,15 +349,14 @@ func TestBundleToTerraformModelPermissions(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["mlflow_model_my_model"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["mlflow_model_my_model"])

	assert.NotEmpty(t, resource.RegisteredModelId)
	assert.Len(t, resource.AccessControl, 1)
	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
	assert.Equal(t, "CAN_READ", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformExperiment(t *testing.T) {
@@ -362,13 +374,12 @@ func TestBundleToTerraformExperiment(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.MlflowExperiment["my_experiment"].(*schema.ResourceMlflowExperiment)
+	var resource schema.ResourceMlflowExperiment
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.MlflowExperiment["my_experiment"])

	assert.Equal(t, "name", resource.Name)
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformExperimentPermissions(t *testing.T) {
@@ -392,15 +403,14 @@ func TestBundleToTerraformExperimentPermissions(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["mlflow_experiment_my_experiment"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["mlflow_experiment_my_experiment"])

	assert.NotEmpty(t, resource.ExperimentId)
	assert.Len(t, resource.AccessControl, 1)
	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
	assert.Equal(t, "CAN_READ", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformModelServing(t *testing.T) {
@@ -436,8 +446,9 @@ func TestBundleToTerraformModelServing(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.ModelServing["my_model_serving_endpoint"].(*schema.ResourceModelServing)
+	var resource schema.ResourceModelServing
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.ModelServing["my_model_serving_endpoint"])

	assert.Equal(t, "name", resource.Name)
	assert.Equal(t, "model_name", resource.Config.ServedModels[0].ModelName)
@@ -447,8 +458,6 @@ func TestBundleToTerraformModelServing(t *testing.T) {
	assert.Equal(t, "model_name-1", resource.Config.TrafficConfig.Routes[0].ServedModelName)
	assert.Equal(t, 100, resource.Config.TrafficConfig.Routes[0].TrafficPercentage)
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformModelServingPermissions(t *testing.T) {
@@ -490,15 +499,14 @@ func TestBundleToTerraformModelServingPermissions(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Permissions["model_serving_my_model_serving_endpoint"].(*schema.ResourcePermissions)
+	var resource schema.ResourcePermissions
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Permissions["model_serving_my_model_serving_endpoint"])

	assert.NotEmpty(t, resource.ServingEndpointId)
	assert.Len(t, resource.AccessControl, 1)
	assert.Equal(t, "jane@doe.com", resource.AccessControl[0].UserName)
	assert.Equal(t, "CAN_VIEW", resource.AccessControl[0].PermissionLevel)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformRegisteredModel(t *testing.T) {
@@ -519,16 +527,15 @@ func TestBundleToTerraformRegisteredModel(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.RegisteredModel["my_registered_model"].(*schema.ResourceRegisteredModel)
+	var resource schema.ResourceRegisteredModel
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.RegisteredModel["my_registered_model"])

	assert.Equal(t, "name", resource.Name)
	assert.Equal(t, "catalog", resource.CatalogName)
	assert.Equal(t, "schema", resource.SchemaName)
	assert.Equal(t, "comment", resource.Comment)
	assert.Nil(t, out.Data)
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformRegisteredModelGrants(t *testing.T) {
@@ -554,15 +561,14 @@ func TestBundleToTerraformRegisteredModelGrants(t *testing.T) {
		},
	}

-	out := BundleToTerraform(&config)
-	resource := out.Resource.Grants["registered_model_my_registered_model"].(*schema.ResourceGrants)
+	var resource schema.ResourceGrants
+	out := produceTerraformConfiguration(t, config)
+	convertToResourceStruct(t, &resource, out.Resource.Grants["registered_model_my_registered_model"])

	assert.NotEmpty(t, resource.Function)
	assert.Len(t, resource.Grant, 1)
	assert.Equal(t, "jane@doe.com", resource.Grant[0].Principal)
	assert.Equal(t, "EXECUTE", resource.Grant[0].Privileges[0])
-
-	bundleToTerraformEquivalenceTest(t, &config)
}

func TestBundleToTerraformDeletedResources(t *testing.T) {
@@ -1204,25 +1210,3 @@ func AssertFullResourceCoverage(t *testing.T, config *config.Root) {
		}
	}
}
-
-func assertEqualTerraformRoot(t *testing.T, a, b *schema.Root) {
-	ba, err := json.Marshal(a)
-	require.NoError(t, err)
-	bb, err := json.Marshal(b)
-	require.NoError(t, err)
-	assert.JSONEq(t, string(ba), string(bb))
-}
-
-func bundleToTerraformEquivalenceTest(t *testing.T, config *config.Root) {
-	t.Run("dyn equivalence", func(t *testing.T) {
-		tf1 := BundleToTerraform(config)
-
-		vin, err := convert.FromTyped(config, dyn.NilValue)
-		require.NoError(t, err)
-		tf2, err := BundleToTerraformWithDynValue(context.Background(), vin)
-		require.NoError(t, err)
-
-		// Compare roots
-		assertEqualTerraformRoot(t, tf1, tf2)
-	})
-}


@@ -1,3 +1,3 @@
package schema

-const ProviderVersion = "1.53.0"
+const ProviderVersion = "1.54.0"


@@ -0,0 +1,15 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type DataSourceNotificationDestinationsNotificationDestinations struct {
DestinationType string `json:"destination_type,omitempty"`
DisplayName string `json:"display_name,omitempty"`
Id string `json:"id,omitempty"`
}
type DataSourceNotificationDestinations struct {
DisplayNameContains string `json:"display_name_contains,omitempty"`
Type string `json:"type,omitempty"`
NotificationDestinations []DataSourceNotificationDestinationsNotificationDestinations `json:"notification_destinations,omitempty"`
}


@@ -0,0 +1,32 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type DataSourceRegisteredModelModelInfoAliases struct {
AliasName string `json:"alias_name,omitempty"`
VersionNum int `json:"version_num,omitempty"`
}
type DataSourceRegisteredModelModelInfo struct {
BrowseOnly bool `json:"browse_only,omitempty"`
CatalogName string `json:"catalog_name,omitempty"`
Comment string `json:"comment,omitempty"`
CreatedAt int `json:"created_at,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
FullName string `json:"full_name,omitempty"`
MetastoreId string `json:"metastore_id,omitempty"`
Name string `json:"name,omitempty"`
Owner string `json:"owner,omitempty"`
SchemaName string `json:"schema_name,omitempty"`
StorageLocation string `json:"storage_location,omitempty"`
UpdatedAt int `json:"updated_at,omitempty"`
UpdatedBy string `json:"updated_by,omitempty"`
Aliases []DataSourceRegisteredModelModelInfoAliases `json:"aliases,omitempty"`
}
type DataSourceRegisteredModel struct {
FullName string `json:"full_name"`
IncludeAliases bool `json:"include_aliases,omitempty"`
IncludeBrowse bool `json:"include_browse,omitempty"`
ModelInfo []DataSourceRegisteredModelModelInfo `json:"model_info,omitempty"`
}


@@ -36,7 +36,9 @@ type DataSources struct {
	NodeType map[string]any `json:"databricks_node_type,omitempty"`
	Notebook map[string]any `json:"databricks_notebook,omitempty"`
	NotebookPaths map[string]any `json:"databricks_notebook_paths,omitempty"`
+	NotificationDestinations map[string]any `json:"databricks_notification_destinations,omitempty"`
	Pipelines map[string]any `json:"databricks_pipelines,omitempty"`
+	RegisteredModel map[string]any `json:"databricks_registered_model,omitempty"`
	Schema map[string]any `json:"databricks_schema,omitempty"`
	Schemas map[string]any `json:"databricks_schemas,omitempty"`
	ServicePrincipal map[string]any `json:"databricks_service_principal,omitempty"`
@@ -92,7 +94,9 @@ func NewDataSources() *DataSources {
	NodeType: make(map[string]any),
	Notebook: make(map[string]any),
	NotebookPaths: make(map[string]any),
+	NotificationDestinations: make(map[string]any),
	Pipelines: make(map[string]any),
+	RegisteredModel: make(map[string]any),
	Schema: make(map[string]any),
	Schemas: make(map[string]any),
	ServicePrincipal: make(map[string]any),


@@ -1448,6 +1448,7 @@ type ResourceJobWebhookNotifications struct {

type ResourceJob struct {
	AlwaysRunning bool `json:"always_running,omitempty"`
+	BudgetPolicyId string `json:"budget_policy_id,omitempty"`
	ControlRunState bool `json:"control_run_state,omitempty"`
	Description string `json:"description,omitempty"`
	EditMode string `json:"edit_mode,omitempty"`


@@ -23,5 +23,6 @@ type ResourceOnlineTable struct {
	Name string `json:"name"`
	Status []any `json:"status,omitempty"`
	TableServingUrl string `json:"table_serving_url,omitempty"`
+	UnityCatalogProvisioningState string `json:"unity_catalog_provisioning_state,omitempty"`
	Spec *ResourceOnlineTableSpec `json:"spec,omitempty"`
}


@@ -142,10 +142,26 @@ type ResourcePipelineGatewayDefinition struct {
	GatewayStorageSchema string `json:"gateway_storage_schema,omitempty"`
}

+type ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration struct {
+	PrimaryKeys []string `json:"primary_keys,omitempty"`
+	SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
+	ScdType string `json:"scd_type,omitempty"`
+	SequenceBy []string `json:"sequence_by,omitempty"`
+}
+
+type ResourcePipelineIngestionDefinitionObjectsReport struct {
+	DestinationCatalog string `json:"destination_catalog,omitempty"`
+	DestinationSchema string `json:"destination_schema,omitempty"`
+	DestinationTable string `json:"destination_table,omitempty"`
+	SourceUrl string `json:"source_url,omitempty"`
+	TableConfiguration *ResourcePipelineIngestionDefinitionObjectsReportTableConfiguration `json:"table_configuration,omitempty"`
+}
+
type ResourcePipelineIngestionDefinitionObjectsSchemaTableConfiguration struct {
	PrimaryKeys []string `json:"primary_keys,omitempty"`
	SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
	ScdType string `json:"scd_type,omitempty"`
+	SequenceBy []string `json:"sequence_by,omitempty"`
}

type ResourcePipelineIngestionDefinitionObjectsSchema struct {
@@ -160,6 +176,7 @@ type ResourcePipelineIngestionDefinitionObjectsTableTableConfiguration struct {
	PrimaryKeys []string `json:"primary_keys,omitempty"`
	SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
	ScdType string `json:"scd_type,omitempty"`
+	SequenceBy []string `json:"sequence_by,omitempty"`
}

type ResourcePipelineIngestionDefinitionObjectsTable struct {
@@ -173,6 +190,7 @@ type ResourcePipelineIngestionDefinitionObjectsTable struct {
}

type ResourcePipelineIngestionDefinitionObjects struct {
+	Report *ResourcePipelineIngestionDefinitionObjectsReport `json:"report,omitempty"`
	Schema *ResourcePipelineIngestionDefinitionObjectsSchema `json:"schema,omitempty"`
	Table *ResourcePipelineIngestionDefinitionObjectsTable `json:"table,omitempty"`
}
@@ -181,6 +199,7 @@ type ResourcePipelineIngestionDefinitionTableConfiguration struct {
	PrimaryKeys []string `json:"primary_keys,omitempty"`
	SalesforceIncludeFormulaFields bool `json:"salesforce_include_formula_fields,omitempty"`
	ScdType string `json:"scd_type,omitempty"`
+	SequenceBy []string `json:"sequence_by,omitempty"`
}

type ResourcePipelineIngestionDefinition struct {


@@ -21,7 +21,7 @@ type Root struct {

const ProviderHost = "registry.terraform.io"
const ProviderSource = "databricks/databricks"
-const ProviderVersion = "1.53.0"
+const ProviderVersion = "1.54.0"

func NewRoot() *Root {
	return &Root{


@@ -57,6 +57,12 @@ func IsLibraryLocal(dep string) bool {
		}
	}

+	// If the dependency starts with --, it's a pip flag option which is a valid
+	// entry for environment dependencies but not a local path
+	if containsPipFlag(dep) {
+		return false
+	}
+
	// If the dependency is a requirements file, it's not a valid local path
	if strings.HasPrefix(dep, "-r") {
		return false
@@ -70,6 +76,11 @@ func IsLibraryLocal(dep string) bool {
	return IsLocalPath(dep)
}

+func containsPipFlag(input string) bool {
+	re := regexp.MustCompile(`--[a-zA-Z0-9-]+`)
+	return re.MatchString(input)
+}
+
// ^[a-zA-Z0-9\-_]+: Matches the package name, allowing alphanumeric characters, dashes (-), and underscores (_).
// \[.*\])?: Optionally matches any extras specified in square brackets, e.g., [security].
// ((==|!=|<=|>=|~=|>|<)\d+(\.\d+){0,2}(\.\*)?): Optionally matches version specifiers, supporting various operators (==, !=, etc.) followed by a version number (e.g., 2.25.1).


@@ -0,0 +1,56 @@
package permissions
import (
"context"
"fmt"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
)
type validateSharedRootPermissions struct {
}
func ValidateSharedRootPermissions() bundle.Mutator {
return &validateSharedRootPermissions{}
}
func (*validateSharedRootPermissions) Name() string {
return "ValidateSharedRootPermissions"
}
func (*validateSharedRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
if isWorkspaceSharedRoot(b.Config.Workspace.RootPath) {
return isUsersGroupPermissionSet(b)
}
return nil
}
func isWorkspaceSharedRoot(path string) bool {
return strings.HasPrefix(path, "/Workspace/Shared/")
}
// isUsersGroupPermissionSet checks that the top-level permissions set for the bundle contains group_name: users with CAN_MANAGE permission.
func isUsersGroupPermissionSet(b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics
allUsers := false
for _, p := range b.Config.Permissions {
if p.GroupName == "users" && p.Level == CAN_MANAGE {
allUsers = true
break
}
}
if !allUsers {
diags = diags.Append(diag.Diagnostic{
Severity: diag.Warning,
Summary: fmt.Sprintf("the bundle root path %s is writable by all workspace users", b.Config.Workspace.RootPath),
Detail: "The bundle is configured to use /Workspace/Shared, which will give read/write access to all users. If this is intentional, add CAN_MANAGE for 'group_name: users' permission to your bundle configuration. If the deployment should be restricted, move it to a restricted folder such as /Workspace/Users/<username or principal name>.",
})
}
return diags
}


@ -0,0 +1,66 @@
package permissions
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/require"
)
func TestValidateSharedRootPermissionsForShared(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Shared/foo/bar",
},
Permissions: []resources.Permission{
{Level: CAN_MANAGE, GroupName: "users"},
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job_1": {JobSettings: &jobs.JobSettings{Name: "job_1"}},
"job_2": {JobSettings: &jobs.JobSettings{Name: "job_2"}},
},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions()))
require.Empty(t, diags)
}
func TestValidateSharedRootPermissionsForSharedError(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Shared/foo/bar",
},
Permissions: []resources.Permission{
{Level: CAN_MANAGE, UserName: "foo@bar.com"},
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job_1": {JobSettings: &jobs.JobSettings{Name: "job_1"}},
"job_2": {JobSettings: &jobs.JobSettings{Name: "job_2"}},
},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions()))
require.Len(t, diags, 1)
require.Equal(t, "the bundle root path /Workspace/Shared/foo/bar is writable by all workspace users", diags[0].Summary)
require.Equal(t, diag.Warning, diags[0].Severity)
}


@ -16,6 +16,10 @@ func ApplyWorkspaceRootPermissions() bundle.Mutator {
return &workspaceRootPermissions{}
}
func (*workspaceRootPermissions) Name() string {
return "ApplyWorkspaceRootPermissions"
}
// Apply implements bundle.Mutator.
func (*workspaceRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
err := giveAccessForWorkspaceRoot(ctx, b)
@ -26,10 +30,6 @@ func (*workspaceRootPermissions) Apply(ctx context.Context, b *bundle.Bundle) di
return nil
}
func (*workspaceRootPermissions) Name() string {
return "ApplyWorkspaceRootPermissions"
}
func giveAccessForWorkspaceRoot(ctx context.Context, b *bundle.Bundle) error {
permissions := make([]workspace.WorkspaceObjectAccessControlRequest, 0)


@ -69,6 +69,6 @@ func TestApplyWorkspaceRootPermissions(t *testing.T) {
WorkspaceObjectType: "directories",
}).Return(nil, nil)
diags := bundle.Apply(context.Background(), b, bundle.Seq(ValidateSharedRootPermissions(), ApplyWorkspaceRootPermissions()))
require.Empty(t, diags)
}


@ -77,8 +77,11 @@ func Initialize() bundle.Mutator {
mutator.TranslatePaths(),
trampoline.WrapperWarning(),
permissions.ValidateSharedRootPermissions(),
permissions.ApplyBundlePermissions(),
permissions.FilterCurrentUser(),
metadata.AnnotateJobs(),
metadata.AnnotatePipelines(),
terraform.Initialize(),


@ -317,6 +317,29 @@ func (r *jobRunner) Cancel(ctx context.Context) error {
return errGroup.Wait()
}
func (r *jobRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
// We don't need to cancel existing runs if the job is continuous and unpaused.
// The /jobs/run-now API will automatically cancel any existing runs before starting a new one.
//
// /jobs/run-now will not cancel existing runs if the job is continuous and paused.
// New job runs will be queued instead and will wait for existing runs to finish.
// In this case, we need to cancel the existing runs before starting a new one.
continuous := r.job.JobSettings.Continuous
if continuous != nil && continuous.PauseStatus == jobs.PauseStatusUnpaused {
return r.Run(ctx, opts)
}
s := cmdio.Spinner(ctx)
s <- "Cancelling all active job runs"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *jobRunner) ParseArgs(args []string, opts *Options) error {
return r.posArgsHandler().ParseArgs(args, opts)
}


@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,6 +9,8 @@ import (
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/mock"
@ -126,3 +129,132 @@ func TestJobRunnerCancelWithNoActiveRuns(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestJobRunnerRestart(t *testing.T) {
for _, jobSettings := range []*jobs.JobSettings{
{},
{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusPaused,
},
},
} {
job := &resources.Job{
ID: "123",
JobSettings: jobSettings,
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", ""))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
jobApi.EXPECT().ListRunsAll(mock.Anything, jobs.ListRunsRequest{
ActiveOnly: true,
JobId: 123,
}).Return([]jobs.BaseRun{
{RunId: 1},
{RunId: 2},
}, nil)
// Mock the runner cancelling existing job runs.
mockWait := &jobs.WaitGetRunJobTerminatedOrSkipped[struct{}]{
Poll: func(time time.Duration, f func(j *jobs.Run)) (*jobs.Run, error) {
return nil, nil
},
}
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 1,
}).Return(mockWait, nil)
jobApi.EXPECT().CancelRun(mock.Anything, jobs.CancelRun{
RunId: 2,
}).Return(mockWait, nil)
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}
}
func TestJobRunnerRestartForContinuousUnpausedJobs(t *testing.T) {
job := &resources.Job{
ID: "123",
JobSettings: &jobs.JobSettings{
Continuous: &jobs.Continuous{
PauseStatus: jobs.PauseStatusUnpaused,
},
},
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": job,
},
},
},
}
runner := jobRunner{key: "test", bundle: b, job: job}
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
jobApi := m.GetMockJobsAPI()
// The runner should not try to cancel existing job runs for unpaused continuous jobs.
jobApi.AssertNotCalled(t, "ListRunsAll")
jobApi.AssertNotCalled(t, "CancelRun")
// Mock the runner triggering a job run
mockWaitForRun := &jobs.WaitGetRunJobTerminatedOrSkipped[jobs.RunNowResponse]{
Poll: func(d time.Duration, f func(*jobs.Run)) (*jobs.Run, error) {
return &jobs.Run{
State: &jobs.RunState{
ResultState: jobs.RunResultStateSuccess,
},
}, nil
},
}
jobApi.EXPECT().RunNow(mock.Anything, jobs.RunNow{
JobId: 123,
}).Return(mockWaitForRun, nil)
// Mock the runner getting the job output
jobApi.EXPECT().GetRun(mock.Anything, jobs.GetRunRequest{}).Return(&jobs.Run{}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}


@ -183,6 +183,18 @@ func (r *pipelineRunner) Cancel(ctx context.Context) error {
return err
}
func (r *pipelineRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
s := cmdio.Spinner(ctx)
s <- "Cancelling the active pipeline update"
err := r.Cancel(ctx)
close(s)
if err != nil {
return nil, err
}
return r.Run(ctx, opts)
}
func (r *pipelineRunner) ParseArgs(args []string, opts *Options) error {
if len(args) == 0 {
return nil


@ -1,6 +1,7 @@
package run
import (
"bytes"
"context"
"testing"
"time"
@ -8,8 +9,12 @@ import (
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
sdk_config "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
@ -47,3 +52,68 @@ func TestPipelineRunnerCancel(t *testing.T) {
err := runner.Cancel(context.Background())
require.NoError(t, err)
}
func TestPipelineRunnerRestart(t *testing.T) {
pipeline := &resources.Pipeline{
ID: "123",
}
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"test_pipeline": pipeline,
},
},
},
}
runner := pipelineRunner{key: "test", bundle: b, pipeline: pipeline}
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdk_config.Config{
Host: "https://test.com",
}
b.SetWorkpaceClient(m.WorkspaceClient)
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
mockWait := &pipelines.WaitGetPipelineIdle[struct{}]{
Poll: func(time.Duration, func(*pipelines.GetPipelineResponse)) (*pipelines.GetPipelineResponse, error) {
return nil, nil
},
}
pipelineApi := m.GetMockPipelinesAPI()
pipelineApi.EXPECT().Stop(mock.Anything, pipelines.StopRequest{
PipelineId: "123",
}).Return(mockWait, nil)
pipelineApi.EXPECT().GetByPipelineId(mock.Anything, "123").Return(&pipelines.GetPipelineResponse{}, nil)
// Mock runner starting a new update
pipelineApi.EXPECT().StartUpdate(mock.Anything, pipelines.StartUpdate{
PipelineId: "123",
}).Return(&pipelines.StartUpdateResponse{
UpdateId: "456",
}, nil)
// Mock runner polling for events
pipelineApi.EXPECT().ListPipelineEventsAll(mock.Anything, pipelines.ListPipelineEventsRequest{
Filter: `update_id = '456'`,
MaxResults: 100,
PipelineId: "123",
}).Return([]pipelines.PipelineEvent{}, nil)
// Mock runner polling for update status
pipelineApi.EXPECT().GetUpdateByPipelineIdAndUpdateId(mock.Anything, "123", "456").
Return(&pipelines.GetUpdateResponse{
Update: &pipelines.UpdateInfo{
State: pipelines.UpdateInfoStateCompleted,
},
}, nil)
_, err := runner.Restart(ctx, &Options{})
require.NoError(t, err)
}


@ -27,6 +27,10 @@ type Runner interface {
// Run the underlying workflow.
Run(ctx context.Context, opts *Options) (output.RunOutput, error)
// Restart the underlying workflow by cancelling any existing runs before
// starting a new one.
Restart(ctx context.Context, opts *Options) (output.RunOutput, error)
// Cancel the underlying workflow.
Cancel(ctx context.Context) error
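The only caller that needs the new method is the `bundle run` command (diff further below); the dispatch reduces to a sketch like this (hypothetical helper, for illustration only):

```go
import (
	"context"

	"github.com/databricks/cli/bundle/run"
	"github.com/databricks/cli/bundle/run/output"
)

// runWorkflow picks Restart or Run depending on whether --restart was passed.
func runWorkflow(ctx context.Context, r run.Runner, opts *run.Options, restart bool) (output.RunOutput, error) {
	if restart {
		// Restart cancels existing runs first, except for continuous
		// unpaused jobs, where /jobs/run-now already does that server-side.
		return r.Restart(ctx, opts)
	}
	return r.Run(ctx, opts)
}
```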


@ -180,6 +180,45 @@
}
]
},
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"properties": {
"display_name": {
"$ref": "#/$defs/string"
},
"embed_credentials": {
"$ref": "#/$defs/bool"
},
"file_path": {
"$ref": "#/$defs/string"
},
"parent_path": {
"$ref": "#/$defs/string"
},
"permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission"
},
"serialized_dashboard": {
"$ref": "#/$defs/interface"
},
"warehouse_id": {
"$ref": "#/$defs/string"
}
},
"additionalProperties": false,
"required": [
"display_name",
"warehouse_id"
]
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Grant": { "resources.Grant": {
"anyOf": [ "anyOf": [
{ {
@ -209,6 +248,10 @@
{ {
"type": "object", "type": "object",
"properties": { "properties": {
"budget_policy_id": {
"description": "The id of the user specified budget policy to use for this job.\nIf not specified, a default budget policy may be applied when creating or modifying the job.\nSee `effective_budget_policy_id` for the budget policy used by this workload.",
"$ref": "#/$defs/string"
},
"continuous": { "continuous": {
"description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.", "description": "An optional continuous property for this job. The continuous property will ensure that there is always one run executing. Only one of `schedule` and `continuous` can be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.Continuous"
@ -1050,6 +1093,9 @@
"clusters": { "clusters": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster"
}, },
"dashboards": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Dashboard"
},
"experiments": { "experiments": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment"
}, },
@ -3901,6 +3947,10 @@
{ {
"type": "object", "type": "object",
"properties": { "properties": {
"report": {
"description": "Select tables from a specific source report.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.ReportSpec"
},
"schema": { "schema": {
"description": "Select tables from a specific source schema.", "description": "Select tables from a specific source schema.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.SchemaSpec"
@ -4233,6 +4283,40 @@
} }
] ]
}, },
"pipelines.ReportSpec": {
"anyOf": [
{
"type": "object",
"properties": {
"destination_catalog": {
"description": "Required. Destination catalog to store table.",
"$ref": "#/$defs/string"
},
"destination_schema": {
"description": "Required. Destination schema to store table.",
"$ref": "#/$defs/string"
},
"destination_table": {
"description": "Required. Destination table name. The pipeline fails if a table with that name already exists.",
"$ref": "#/$defs/string"
},
"source_url": {
"description": "Required. Report URL in the source system.",
"$ref": "#/$defs/string"
},
"table_configuration": {
"description": "Configuration settings to control the ingestion of tables. These settings override the table_configuration defined in the IngestionPipelineDefinition object.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.TableSpecificConfig"
}
},
"additionalProperties": false
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"pipelines.SchemaSpec": { "pipelines.SchemaSpec": {
"anyOf": [ "anyOf": [
{ {
@ -4281,7 +4365,7 @@
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"destination_table": { "destination_table": {
"description": "Optional. Destination table name. The pipeline fails If a table with that name already exists. If not set, the source table name is used.", "description": "Optional. Destination table name. The pipeline fails if a table with that name already exists. If not set, the source table name is used.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"source_catalog": { "source_catalog": {
@ -4329,6 +4413,10 @@
"SCD_TYPE_1", "SCD_TYPE_1",
"SCD_TYPE_2" "SCD_TYPE_2"
] ]
},
"sequence_by": {
"description": "The column names specifying the logical order of events in the source data. Delta Live Tables uses this sequencing to handle change events that arrive out of order.",
"$ref": "#/$defs/slice/string"
} }
}, },
"additionalProperties": false "additionalProperties": false
@ -5246,6 +5334,20 @@
}
]
}, },
"resources.Dashboard": {
"anyOf": [
{
"type": "object",
"additionalProperties": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/resources.Dashboard"
}
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Job": { "resources.Job": {
"anyOf": [ "anyOf": [
{ {

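Each `anyOf` in the schema accepts either the typed object or a string matching the variable-reference pattern, so fields can be set to references such as `${var.warehouse_id}`. A quick standalone check of that pattern (the sample strings are illustrative; the regex is copied verbatim from the schema above):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Variable-reference pattern from the schema's string alternatives.
	re := regexp.MustCompile(`\$\{(var(\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\[[0-9]+\])*)+)\}`)

	for _, s := range []string{"${var.warehouse_id}", "${var.clusters[0]}", "${workspace.root_path}"} {
		fmt.Printf("%-25s matches: %v\n", s, re.MatchString(s))
	}
}
```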

@ -8,6 +8,7 @@ import (
"github.com/databricks/cli/bundle/deploy/terraform" "github.com/databricks/cli/bundle/deploy/terraform"
"github.com/databricks/cli/bundle/phases" "github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/bundle/run" "github.com/databricks/cli/bundle/run"
"github.com/databricks/cli/bundle/run/output"
"github.com/databricks/cli/cmd/bundle/utils" "github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root" "github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio" "github.com/databricks/cli/libs/cmdio"
@ -100,19 +101,16 @@ task or a Python wheel task, the second example applies.
} }
runOptions.NoWait = noWait runOptions.NoWait = noWait
var output output.RunOutput
if restart { if restart {
s := cmdio.Spinner(ctx) output, err = runner.Restart(ctx, &runOptions)
s <- "Cancelling all runs" } else {
err := runner.Cancel(ctx) output, err = runner.Run(ctx, &runOptions)
close(s) }
if err != nil {
return err
}
}
output, err := runner.Run(ctx, &runOptions)
if err != nil { if err != nil {
return err return err
} }
if output != nil { if output != nil {
switch root.OutputType(cmd) { switch root.OutputType(cmd) {
case flags.OutputText: case flags.OutputText:


@ -1,7 +1,9 @@
package bundle
import (
"context"
"fmt"
"io"
"time"
"github.com/databricks/cli/bundle"
@ -9,6 +11,7 @@ import (
"github.com/databricks/cli/bundle/phases"
"github.com/databricks/cli/cmd/bundle/utils"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/sync"
"github.com/spf13/cobra"
@ -18,6 +21,7 @@ type syncFlags struct {
interval time.Duration
full bool
watch bool
output flags.Output
}
func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle) (*sync.SyncOptions, error) {
@ -26,6 +30,21 @@ func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, b *bundle.Bundle)
return nil, fmt.Errorf("cannot get sync options: %w", err)
}
if f.output != "" {
var outputFunc func(context.Context, <-chan sync.Event, io.Writer)
switch f.output {
case flags.OutputText:
outputFunc = sync.TextOutput
case flags.OutputJSON:
outputFunc = sync.JsonOutput
}
if outputFunc != nil {
opts.OutputHandler = func(ctx context.Context, c <-chan sync.Event) {
outputFunc(ctx, c, cmd.OutOrStdout())
}
}
}
opts.Full = f.full
opts.PollInterval = f.interval
return opts, nil
@ -42,6 +61,7 @@ func newSyncCommand() *cobra.Command {
cmd.Flags().DurationVar(&f.interval, "interval", 1*time.Second, "file system polling interval (for --watch)")
cmd.Flags().BoolVar(&f.full, "full", false, "perform full synchronization (default is incremental)")
cmd.Flags().BoolVar(&f.watch, "watch", false, "watch local file system for changes")
cmd.Flags().Var(&f.output, "output", "type of the output format")
cmd.RunE = func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
@ -65,6 +85,7 @@ func newSyncCommand() *cobra.Command {
if err != nil {
return err
}
defer s.Close()
log.Infof(ctx, "Remote file sync location: %v", opts.RemotePath)


@ -28,9 +28,6 @@ func New() *cobra.Command {
Annotations: map[string]string{
"package": "apps",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods


@ -0,0 +1,220 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package disable_legacy_dbfs
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "disable-legacy-dbfs",
Short: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as well as creation of new mounts).`,
Long: `When this setting is on, access to DBFS root and DBFS mounts is disallowed (as
well as creation of new mounts). When the setting is off, all DBFS
functionality is enabled`,
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteDisableLegacyDbfsRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete the disable legacy DBFS setting.`
cmd.Long = `Delete the disable legacy DBFS setting.
Deletes the disable legacy DBFS setting for a workspace, reverting back to the
default.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*settings.GetDisableLegacyDbfsRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq settings.GetDisableLegacyDbfsRequest
// TODO: short flags
cmd.Flags().StringVar(&getReq.Etag, "etag", getReq.Etag, `etag used for versioning.`)
cmd.Use = "get"
cmd.Short = `Get the disable legacy DBFS setting.`
cmd.Long = `Get the disable legacy DBFS setting.
Gets the disable legacy DBFS setting.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyDbfs().Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*settings.UpdateDisableLegacyDbfsRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq settings.UpdateDisableLegacyDbfsRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Use = "update"
cmd.Short = `Update the disable legacy DBFS setting.`
cmd.Long = `Update the disable legacy DBFS setting.
Updates the disable legacy DBFS setting for the workspace.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := w.Settings.DisableLegacyDbfs().Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service DisableLegacyDbfs


@ -1557,6 +1557,7 @@ func newSubmit() *cobra.Command {
cmd.Flags().Var(&submitJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: array: access_control_list
cmd.Flags().StringVar(&submitReq.BudgetPolicyId, "budget-policy-id", submitReq.BudgetPolicyId, `The user specified id of the budget policy to use for this one-time run.`)
// TODO: complex arg: email_notifications
// TODO: array: environments
// TODO: complex arg: git_source


@ -9,6 +9,7 @@ import (
compliance_security_profile "github.com/databricks/cli/cmd/workspace/compliance-security-profile"
default_namespace "github.com/databricks/cli/cmd/workspace/default-namespace"
disable_legacy_access "github.com/databricks/cli/cmd/workspace/disable-legacy-access"
disable_legacy_dbfs "github.com/databricks/cli/cmd/workspace/disable-legacy-dbfs"
enhanced_security_monitoring "github.com/databricks/cli/cmd/workspace/enhanced-security-monitoring"
restrict_workspace_admins "github.com/databricks/cli/cmd/workspace/restrict-workspace-admins"
)
@ -33,6 +34,7 @@ func New() *cobra.Command {
cmd.AddCommand(compliance_security_profile.New())
cmd.AddCommand(default_namespace.New())
cmd.AddCommand(disable_legacy_access.New())
cmd.AddCommand(disable_legacy_dbfs.New())
cmd.AddCommand(enhanced_security_monitoring.New())
cmd.AddCommand(restrict_workspace_admins.New())

go.mod

@ -7,7 +7,7 @@ toolchain go1.22.7
require (
github.com/Masterminds/semver/v3 v3.3.0 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.49.0 // Apache 2.0
github.com/fatih/color v1.17.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum

@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/databricks/databricks-sdk-go v0.49.0 h1:VBTeZZMLIuBSM4kxOCfUcW9z4FUQZY2QeNRD5qm9FUQ=
github.com/databricks/databricks-sdk-go v0.49.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@ -2,11 +2,14 @@ package acc
import (
"context"
"fmt"
"os"
"testing"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
)
@ -94,3 +97,30 @@ func (t *WorkspaceT) RunPython(code string) (string, error) {
require.True(t, ok, "unexpected type %T", results.Data)
return output, nil
}
func (t *WorkspaceT) TemporaryWorkspaceDir(name ...string) string {
ctx := context.Background()
me, err := t.W.CurrentUser.Me(ctx)
require.NoError(t, err)
basePath := fmt.Sprintf("/Users/%s/%s", me.UserName, RandomName(name...))
t.Logf("Creating %s", basePath)
err = t.W.Workspace.MkdirsByPath(ctx, basePath)
require.NoError(t, err)
// Remove test directory on test completion.
t.Cleanup(func() {
t.Logf("Removing %s", basePath)
err := t.W.Workspace.Delete(ctx, workspace.Delete{
Path: basePath,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary workspace directory %s: %#v", basePath, err)
})
return basePath
}
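A usage sketch for the new helper (a hypothetical test; the test name is illustrative):

```go
package internal

import (
	"testing"

	"github.com/databricks/cli/internal/acc"
)

func TestExampleWithTemporaryDir(t *testing.T) {
	_, wt := acc.WorkspaceTest(t)

	// The directory is created under the current user's workspace home with
	// a randomized suffix and removed via t.Cleanup when the test finishes.
	dir := wt.TemporaryWorkspaceDir("example-")
	t.Logf("working in %s", dir)
}
```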


@ -0,0 +1,13 @@
# Bugbash
The script in this directory can be used to conveniently exec into a shell
where a CLI build for a specific branch is made available.
## Usage
The script warns if you do not have at least Bash 5 installed; it still
works with earlier versions, but without command completion.
```shell
bash <(curl -fsSL https://raw.githubusercontent.com/databricks/cli/main/internal/bugbash/exec.sh) my-branch
```

internal/bugbash/exec.sh

@ -0,0 +1,139 @@
#!/usr/bin/env bash
set -euo pipefail
# Set the GitHub repository for the Databricks CLI.
export GH_REPO="databricks/cli"
# Synthesize the directory name for the snapshot build.
function cli_snapshot_directory() {
dir="cli"
# Append OS
case "$(uname -s)" in
Linux)
dir="${dir}_linux"
;;
Darwin)
dir="${dir}_darwin"
;;
*)
echo "Unknown operating system: $os"
;;
esac
# Append architecture
case "$(uname -m)" in
x86_64)
dir="${dir}_amd64_v1"
;;
i386|i686)
dir="${dir}_386"
;;
arm64|aarch64)
dir="${dir}_arm64"
;;
armv7l|armv8l)
dir="${dir}_arm_6"
;;
*)
echo "Unknown architecture: $arch"
;;
esac
echo "$dir"
}
# Default to the main branch if no branch is specified.
# With "set -u", use ${1:-} so a missing argument does not abort the script.
BRANCH=${1:-main}
# Check if the "gh" command is available.
if ! command -v gh &> /dev/null; then
echo "The GitHub CLI (gh) is required to download the snapshot build."
echo "Install and configure it with:"
echo ""
echo " brew install gh"
echo " gh auth login"
echo ""
exit 1
fi
echo "Looking for a snapshot build of the Databricks CLI on branch $BRANCH..."
# Find last successful build on $BRANCH.
last_successful_run_id=$(
gh run list -b "$BRANCH" -w release-snapshot --json 'databaseId,conclusion' |
jq 'limit(1; .[] | select(.conclusion == "success")) | .databaseId'
)
if [ -z "$last_successful_run_id" ]; then
echo "Unable to find last successful build of the release-snapshot workflow for branch $BRANCH."
exit 1
fi
# Determine artifact name with the right binaries for this runner.
case "$(uname -s)" in
Linux)
artifact="cli_linux_snapshot"
;;
Darwin)
artifact="cli_darwin_snapshot"
;;
esac
# Create a temporary directory to download the artifact.
dir=$(mktemp -d)
# Download the artifact.
echo "Downloading the snapshot build..."
gh run download "$last_successful_run_id" -n "$artifact" -D "$dir/.bin"
dir="$dir/.bin/$(cli_snapshot_directory)"
if [ ! -d "$dir" ]; then
echo "Directory does not exist: $dir"
exit 1
fi
# Make CLI available on $PATH.
chmod +x "$dir/databricks"
export PATH="$dir:$PATH"
# Set the prompt to indicate the bugbash environment and exec.
export PS1="(bugbash $BRANCH) \[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ "
# Display completion instructions.
echo ""
echo "=================================================================="
if [[ ${BASH_VERSINFO[0]} -lt 5 ]]; then
echo -en "\033[31m"
echo "You have Bash version < 5 installed... completion won't work."
echo -en "\033[0m"
echo ""
echo "Install it with:"
echo ""
echo " brew install bash bash-completion"
echo ""
echo "=================================================================="
fi
echo ""
echo "To load completions in your current shell session:"
echo ""
echo " source /opt/homebrew/etc/profile.d/bash_completion.sh"
echo " source <(databricks completion bash)"
echo ""
echo "=================================================================="
echo ""
# Exec into a new shell.
# Note: don't use zsh because on macOS it _always_ overwrites PS1.
exec /usr/bin/env bash


@ -0,0 +1,110 @@
package internal
import (
"encoding/base64"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/dyn/merge"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/dashboards"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Verify that importing a dashboard through the Workspace API retains the identity of the underlying resource,
// as well as properties exclusively accessible through the dashboards API.
func TestAccDashboardAssumptions_WorkspaceImport(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
t.Parallel()
dashboardName := "New Dashboard"
dashboardPayload := []byte(`{"pages":[{"name":"2506f97a","displayName":"New Page"}]}`)
warehouseId := acc.GetEnvOrSkipTest(t, "TEST_DEFAULT_WAREHOUSE_ID")
dir := wt.TemporaryWorkspaceDir("dashboard-assumptions-")
dashboard, err := wt.W.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
DisplayName: dashboardName,
ParentPath: dir,
SerializedDashboard: string(dashboardPayload),
WarehouseId: warehouseId,
})
require.NoError(t, err)
t.Logf("Dashboard ID (per Lakeview API): %s", dashboard.DashboardId)
// Overwrite the dashboard via the workspace API.
{
err := wt.W.Workspace.Import(ctx, workspace.Import{
Format: workspace.ImportFormatAuto,
Path: dashboard.Path,
Content: base64.StdEncoding.EncodeToString(dashboardPayload),
Overwrite: true,
})
require.NoError(t, err)
}
// Cross-check consistency with the workspace object.
{
obj, err := wt.W.Workspace.GetStatusByPath(ctx, dashboard.Path)
require.NoError(t, err)
// Confirm that the resource ID included in the response is equal to the dashboard ID.
require.Equal(t, dashboard.DashboardId, obj.ResourceId)
t.Logf("Dashboard ID (per workspace object status): %s", obj.ResourceId)
}
// Try to overwrite the dashboard via the Lakeview API (and expect failure).
{
_, err := wt.W.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
DisplayName: dashboardName,
ParentPath: dir,
SerializedDashboard: string(dashboardPayload),
})
require.ErrorIs(t, err, apierr.ErrResourceAlreadyExists)
}
// Retrieve the dashboard object and confirm that only select fields were updated by the import.
{
previousDashboard := dashboard
currentDashboard, err := wt.W.Lakeview.Get(ctx, dashboards.GetDashboardRequest{
DashboardId: dashboard.DashboardId,
})
require.NoError(t, err)
// Convert the dashboard object to a [dyn.Value] to make comparison easier.
previous, err := convert.FromTyped(previousDashboard, dyn.NilValue)
require.NoError(t, err)
current, err := convert.FromTyped(currentDashboard, dyn.NilValue)
require.NoError(t, err)
// Collect updated paths.
var updatedFieldPaths []string
_, err = merge.Override(previous, current, merge.OverrideVisitor{
VisitDelete: func(basePath dyn.Path, left dyn.Value) error {
assert.Fail(t, "unexpected delete operation")
return nil
},
VisitInsert: func(basePath dyn.Path, right dyn.Value) (dyn.Value, error) {
assert.Fail(t, "unexpected insert operation")
return right, nil
},
VisitUpdate: func(basePath dyn.Path, left dyn.Value, right dyn.Value) (dyn.Value, error) {
updatedFieldPaths = append(updatedFieldPaths, basePath.String())
return right, nil
},
})
require.NoError(t, err)
// Confirm that only the expected fields have been updated.
assert.ElementsMatch(t, []string{
"etag",
"update_time",
}, updatedFieldPaths)
}
}


@ -23,8 +23,21 @@ type Repository struct {
// directory where we process .gitignore files.
real bool
// rootDir is the path to the root of the repository checkout.
// This can be either the main repository checkout or a worktree checkout.
// For more information about worktrees, see: https://git-scm.com/docs/git-worktree#_description.
rootDir vfs.Path
// gitDir is the equivalent of $GIT_DIR and points to the
// `.git` directory of a repository or a worktree directory.
// See https://git-scm.com/docs/git-worktree#_details for more information.
gitDir vfs.Path
// gitCommonDir is the equivalent of $GIT_COMMON_DIR and points to the
// `.git` directory of the main working tree (common between worktrees).
// This is equivalent to [gitDir] if this is the main working tree.
// See https://git-scm.com/docs/git-worktree#_details for more information.
gitCommonDir vfs.Path
// ignore contains a list of ignore patterns indexed by the
// path prefix relative to the repository root.
@ -44,12 +57,11 @@ type Repository struct {
// Root returns the absolute path to the repository root.
func (r *Repository) Root() string {
return r.rootDir.Native()
}
func (r *Repository) CurrentBranch() (string, error) {
ref, err := LoadReferenceFile(r.gitDir, "HEAD")
if err != nil {
return "", err
}
@ -65,8 +77,7 @@ func (r *Repository) CurrentBranch() (string, error) {
}
func (r *Repository) LatestCommit() (string, error) {
ref, err := LoadReferenceFile(r.gitDir, "HEAD")
if err != nil {
return "", err
}
@ -80,12 +91,12 @@ func (r *Repository) LatestCommit() (string, error) {
return ref.Content, nil
}
// Read reference from $GIT_DIR/HEAD
branchHeadPath, err := ref.ResolvePath()
if err != nil {
return "", err
}
branchHeadRef, err := LoadReferenceFile(r.gitCommonDir, branchHeadPath)
if err != nil {
return "", err
}
@ -125,7 +136,7 @@ func (r *Repository) loadConfig() error {
if err != nil {
return fmt.Errorf("unable to load user specific gitconfig: %w", err)
}
err = config.loadFile(r.gitCommonDir, "config")
if err != nil {
return fmt.Errorf("unable to load repository specific gitconfig: %w", err)
}
@ -133,12 +144,6 @@ func (r *Repository) loadConfig() error {
return nil
}
// newIgnoreFile constructs a new [ignoreRules] implementation backed by
// a file using the specified path relative to the repository root.
func (r *Repository) newIgnoreFile(relativeIgnoreFilePath string) ignoreRules {
return newIgnoreFile(r.root, relativeIgnoreFilePath)
}
// getIgnoreRules returns a slice of [ignoreRules] that apply
// for the specified prefix. The prefix must be cleaned by the caller.
// It lazily initializes an entry for the specified prefix if it
@ -149,7 +154,7 @@ func (r *Repository) getIgnoreRules(prefix string) []ignoreRules {
return fs
}
r.ignore[prefix] = append(r.ignore[prefix], newIgnoreFile(r.rootDir, path.Join(prefix, gitIgnoreFileName)))
return r.ignore[prefix]
}
@ -205,20 +210,29 @@ func (r *Repository) Ignore(relPath string) (bool, error) {
func NewRepository(path vfs.Path) (*Repository, error) {
real := true
rootDir, err := vfs.FindLeafInTree(path, GitDirectoryName)
if err != nil {
if !errors.Is(err, fs.ErrNotExist) {
return nil, err
}
// Cannot find `.git` directory.
// Treat the specified path as a potential repository root checkout.
real = false
rootDir = path
}
// Derive $GIT_DIR and $GIT_COMMON_DIR paths if this is a real repository.
// If it isn't a real repository, they'll point to the (non-existent) `.git` directory.
gitDir, gitCommonDir, err := resolveGitDirs(rootDir)
if err != nil {
return nil, err
}
repo := &Repository{
real: real,
rootDir: rootDir,
gitDir: gitDir,
gitCommonDir: gitCommonDir,
ignore: make(map[string][]ignoreRules),
}
@ -252,10 +266,10 @@ func NewRepository(path vfs.Path) (*Repository, error) {
newStringIgnoreRules([]string{
".git",
}),
// Load repository-wide exclude file.
newIgnoreFile(repo.gitCommonDir, "info/exclude"),
// Load root gitignore file.
newIgnoreFile(repo.rootDir, ".gitignore"),
}
return repo, nil


@ -80,7 +80,7 @@ func NewView(root vfs.Path) (*View, error) {
// Target path must be relative to the repository root path.
target := root.Native()
prefix := repo.rootDir.Native()
if !strings.HasPrefix(target, prefix) {
return nil, fmt.Errorf("path %q is not within repository root %q", root.Native(), prefix)
}

libs/git/worktree.go

@ -0,0 +1,123 @@
package git
import (
"bufio"
"errors"
"fmt"
"io/fs"
"path/filepath"
"strings"
"github.com/databricks/cli/libs/vfs"
)
func readLines(root vfs.Path, name string) ([]string, error) {
file, err := root.Open(name)
if err != nil {
return nil, err
}
defer file.Close()
var lines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
return lines, scanner.Err()
}
// readGitDir reads the value of the `.git` file in a worktree.
func readGitDir(root vfs.Path) (string, error) {
lines, err := readLines(root, GitDirectoryName)
if err != nil {
return "", err
}
var gitDir string
for _, line := range lines {
parts := strings.SplitN(line, ": ", 2)
if len(parts) != 2 {
continue
}
if parts[0] == "gitdir" {
gitDir = strings.TrimSpace(parts[1])
}
}
if gitDir == "" {
return "", fmt.Errorf(`expected %q to contain a line with "gitdir: [...]"`, filepath.Join(root.Native(), GitDirectoryName))
}
return gitDir, nil
}
// readGitCommonDir reads the value of the `commondir` file in the `.git` directory of a worktree.
// This file typically contains "../.." to point to $GIT_COMMON_DIR.
func readGitCommonDir(gitDir vfs.Path) (string, error) {
lines, err := readLines(gitDir, "commondir")
if err != nil {
return "", err
}
if len(lines) == 0 {
return "", errors.New("file is empty")
}
return strings.TrimSpace(lines[0]), nil
}
// resolveGitDirs resolves the paths for $GIT_DIR and $GIT_COMMON_DIR.
// The path argument is the root of the checkout where (supposedly) a `.git` file or directory exists.
func resolveGitDirs(root vfs.Path) (vfs.Path, vfs.Path, error) {
fileInfo, err := root.Stat(GitDirectoryName)
if err != nil {
// If the `.git` file or directory does not exist, then this is not a git repository.
// Return paths that we know don't exist, so we do not need to perform nil checks in the caller.
if errors.Is(err, fs.ErrNotExist) {
gitDir := vfs.MustNew(filepath.Join(root.Native(), GitDirectoryName))
return gitDir, gitDir, nil
}
return nil, nil, err
}
// If the path is a directory, then it is the main working tree.
// Both $GIT_DIR and $GIT_COMMON_DIR point to the same directory.
if fileInfo.IsDir() {
gitDir := vfs.MustNew(filepath.Join(root.Native(), GitDirectoryName))
return gitDir, gitDir, nil
}
// If the path is not a directory, then it is a worktree.
// Read value for $GIT_DIR.
gitDirValue, err := readGitDir(root)
if err != nil {
return nil, nil, err
}
// Resolve $GIT_DIR.
var gitDir vfs.Path
if filepath.IsAbs(gitDirValue) {
gitDir = vfs.MustNew(gitDirValue)
} else {
gitDir = vfs.MustNew(filepath.Join(root.Native(), gitDirValue))
}
// Read value for $GIT_COMMON_DIR.
gitCommonDirValue, err := readGitCommonDir(gitDir)
if err != nil {
return nil, nil, fmt.Errorf(`expected "commondir" file in worktree git folder at %q: %w`, gitDir.Native(), err)
}
// Resolve $GIT_COMMON_DIR.
var gitCommonDir vfs.Path
if filepath.IsAbs(gitCommonDirValue) {
gitCommonDir = vfs.MustNew(gitCommonDirValue)
} else {
gitCommonDir = vfs.MustNew(filepath.Join(gitDir.Native(), gitCommonDirValue))
}
return gitDir, gitCommonDir, nil
}
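For orientation, the two on-disk layouts that `resolveGitDirs` distinguishes, sketched as a comment block (paths illustrative):

```go
// Main working tree: `.git` is a directory, so both paths coincide.
//
//   /repo/.git/  ->  gitDir == gitCommonDir == /repo/.git
//
// Linked worktree: `.git` is a file whose "gitdir:" line points at the
// per-worktree directory, and that directory's "commondir" file points
// back at the shared .git directory.
//
//   /repo/my_worktree/.git                      contains "gitdir: /repo/.git/worktrees/my_worktree"
//   /repo/.git/worktrees/my_worktree/commondir  contains "../.."
//
//   ->  gitDir       == /repo/.git/worktrees/my_worktree
//   ->  gitCommonDir == /repo/.git
```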

libs/git/worktree_test.go

@ -0,0 +1,108 @@
package git
import (
"fmt"
"os"
"path/filepath"
"testing"
"github.com/databricks/cli/libs/vfs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setupWorktree(t *testing.T) string {
var err error
tmpDir := t.TempDir()
// Checkout path
err = os.MkdirAll(filepath.Join(tmpDir, "my_worktree"), os.ModePerm)
require.NoError(t, err)
// Main $GIT_COMMON_DIR
err = os.MkdirAll(filepath.Join(tmpDir, ".git"), os.ModePerm)
require.NoError(t, err)
// Worktree $GIT_DIR
err = os.MkdirAll(filepath.Join(tmpDir, ".git/worktrees/my_worktree"), os.ModePerm)
require.NoError(t, err)
return tmpDir
}
func writeGitDir(t *testing.T, dir, content string) {
err := os.WriteFile(filepath.Join(dir, "my_worktree/.git"), []byte(content), os.ModePerm)
require.NoError(t, err)
}
func writeGitCommonDir(t *testing.T, dir, content string) {
err := os.WriteFile(filepath.Join(dir, ".git/worktrees/my_worktree/commondir"), []byte(content), os.ModePerm)
require.NoError(t, err)
}
func verifyCorrectDirs(t *testing.T, dir string) {
gitDir, gitCommonDir, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
require.NoError(t, err)
assert.Equal(t, filepath.Join(dir, ".git/worktrees/my_worktree"), gitDir.Native())
assert.Equal(t, filepath.Join(dir, ".git"), gitCommonDir.Native())
}
func TestWorktreeResolveGitDir(t *testing.T) {
dir := setupWorktree(t)
writeGitCommonDir(t, dir, "../..")
t.Run("relative", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", "../.git/worktrees/my_worktree"))
verifyCorrectDirs(t, dir)
})
t.Run("absolute", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", filepath.Join(dir, ".git/worktrees/my_worktree")))
verifyCorrectDirs(t, dir)
})
t.Run("additional spaces", func(t *testing.T) {
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s \n\n\n", "../.git/worktrees/my_worktree"))
verifyCorrectDirs(t, dir)
})
t.Run("empty", func(t *testing.T) {
writeGitDir(t, dir, "")
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, ` to contain a line with "gitdir: [...]"`)
})
}
func TestWorktreeResolveCommonDir(t *testing.T) {
dir := setupWorktree(t)
writeGitDir(t, dir, fmt.Sprintf("gitdir: %s", "../.git/worktrees/my_worktree"))
t.Run("relative", func(t *testing.T) {
writeGitCommonDir(t, dir, "../..")
verifyCorrectDirs(t, dir)
})
t.Run("absolute", func(t *testing.T) {
writeGitCommonDir(t, dir, filepath.Join(dir, ".git"))
verifyCorrectDirs(t, dir)
})
t.Run("additional spaces", func(t *testing.T) {
writeGitCommonDir(t, dir, " ../.. \n\n\n")
verifyCorrectDirs(t, dir)
})
t.Run("empty", func(t *testing.T) {
writeGitCommonDir(t, dir, "")
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, `expected "commondir" file in worktree git folder at `)
})
t.Run("missing", func(t *testing.T) {
_, _, err := resolveGitDirs(vfs.MustNew(filepath.Join(dir, "my_worktree")))
assert.ErrorContains(t, err, `expected "commondir" file in worktree git folder at `)
})
}


@ -3,7 +3,7 @@ resources:
pipelines:
{{.project_name}}_pipeline:
name: {{.project_name}}_pipeline
{{- if or (eq default_catalog "") (eq default_catalog "hive_metastore")}}
## Specify the 'catalog' field to configure this pipeline to make use of Unity Catalog:
# catalog: catalog_name
{{- else}}