Compare commits


96 Commits

Author SHA1 Message Date
Shreyas Goenka c296299ad2
Merge remote-tracking branch 'origin' into remove-uncessary-get 2024-12-31 15:00:18 +05:30
Denis Bilenko 1ce20a2612
lint.sh: read config for formatters; include gofmt (#2056)
As suggested here:
https://github.com/databricks/cli/pull/2051#discussion_r1899641273
2024-12-30 18:39:33 +00:00
Denis Bilenko 1306e5ec67
Add CODEOWNERS (#2055)
The goal is to have the DABs core team automatically added as reviewers so that
you don't have to add them manually.

Based on this example:
https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#example-of-a-codeowners-file
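A minimal sketch of such a file (the team handle below is a placeholder, not
the actual entry):

```
# .github/CODEOWNERS (sketch; the team handle is a placeholder)
# Request review from the DABs core team for every change.
* @databricks/dabs-core
```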
2024-12-30 17:41:45 +00:00
Denis Bilenko 261b7f4083
Move bulk of "golden tests" logic to libs/testdiff (#2054)
## Changes
- Detach "golden files" assertions from testcli runner. Now any output
can be compared, no matter how it is obtained.
- Move those assertion to libs/testdiff package.

This allows using "golden files" in non-integration tests.

## Tests
Existing tests
2024-12-30 15:26:21 +00:00
Denis Bilenko e088d0d996
Add lint.sh to run golangci-lint in 2 stages (#2051)
The first stage runs goimports and the formatter; the second runs the full suite.

This ensures that imports and formatting are fixed even in the presence of
other issues. Otherwise golangci-lint refuses to fix anything:
https://github.com/golangci/golangci-lint/issues/5257

This is helpful when running aider with a config like this - aider will use
it to autofix what it can after every update:

```
% cat .aider.conf.yml
lint-cmd:
  - "go: ./lint.sh"
```
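For reference, a minimal sketch of the two-stage script itself (the flags
shown are assumptions; the real lint.sh may differ):

```
#!/bin/sh
set -e
# Stage 1: fix imports and formatting even when other lint issues exist.
golangci-lint run --enable-only=gofumpt,goimports --fix "$@"
# Stage 2: run the full linter suite.
golangci-lint run --fix "$@"
```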
2024-12-30 15:18:57 +00:00
Lennart Kats (databricks) a002475a6a
Relax checks in builtin template tests (#2042)
## Changes
Relax the checks of `lib/template/builtin_test` so they don't fail for a
local development copy that has uncommitted draft templates. Right now
these tests fail because I have some git-ignored uncommitted templates
in my local dev copy.
2024-12-27 11:38:12 +00:00
Ilya Kuznetsov 793bf2b995
fix: Empty schema fields in OpenAPI spec (#2045)
## Changes

1. Removes default yaml fields during schema generation, caused by [this
PR](https://github.com/databricks/cli/pull/2032) (the current yaml package
can't read `json` annotations in struct fields)
2. Addresses missing annotations for fields from the OpenAPI spec, which are
named differently in the Go SDK
3. Adds filtering for annotations.yaml to include only CLI package
fields
4. Implements an alphabetical sort for yaml keys to avoid unnecessary diffs
in PRs

## Tests

Manually tested
2024-12-23 12:08:01 +00:00
Denis Bilenko e0952491c9
Add tests for default-python template on different Python versions (#2025)
## Changes
Add a new type of test helper that runs the command and compares the full
output (golden files approach).

In case of JSON, there is also an option to ignore certain paths.

Add tests for different versions of Python that go through bundle init
default-python / validate / deploy / summary.

## Tests
New integration tests.
2024-12-20 14:40:54 +00:00
Denis Bilenko dd9f59837e
Upgrade go to 1.23.4 (#2038)
## Changes
`git grep -l 1.23.2 | xargs -n 1 sed -i '' 's/1.23.2/1.23.4/'`

## Tests
Existing tests
2024-12-20 09:21:36 +00:00
Denis Bilenko 2fee243586
Fix finding Python within virtualenv on Windows (#2034)
## Changes
Simplify the logic for selecting which Python to run when calculating the
default whl build command: "python" on Windows and "python3" everywhere else.

Python installers from python.org do not install python3.exe, and in a
virtualenv there is no python3.exe either.
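A hedged sketch of the simplified selection logic (the actual helper in the
CLI may differ):

```
import "runtime"

// pythonBinary returns the interpreter name to invoke.
func pythonBinary() string {
	if runtime.GOOS == "windows" {
		return "python"
	}
	return "python3"
}
```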

## Tests
Added new unit tests that create a real venv with uv and simulate activation
by prepending venv/bin to PATH.
2024-12-20 07:45:32 +00:00
Denis Bilenko 07fff20eff
Remove "Publish test coverage" step on CI (#2036)
There is no token for codecov and no plans to get one.
2024-12-19 15:29:51 +00:00
Pieter Noordhuis f939e57f3a
Trigger integration tests on push to main (#2035)
## Changes

The existing workflow already had 2 trigger conditions, so instead of
adding a third (and seeing more "skipped" jobs), I split them up into
dedicated workflow files, each with their own trigger condition.

The integration test status is reported back via commit status.

## Tests

We can confirm that everything works as expected as this PR moves from
here to the merge group to main.
2024-12-19 11:50:59 +00:00
Pieter Noordhuis 965a3fcd53
Remove dependency on ghodss/yaml (#2032)
## Changes

I noticed that #1957 took a dep on this library even though we no longer
need it. This change removes the dep and cleans up other (unused) uses
of the library. We originally relied on this library to deserialize
bundle configuration and JSON payloads to non-bundle CLI commands.

Relevant commits:
* The YAML flag was added to support apps (very early), and is no
longer used: e408b701
* First use for bundle configuration loading: e47fa619
* Switch bundle configuration loading to use `libs/dyn`: 87dd46a3

## Tests

The build works without the dependency.
2024-12-19 08:23:05 +00:00
Andrew Nester 6b4b908682
[Release] Release v0.237.0 (#2031)
Bundles:
* Allow overriding compute for non-development mode targets
([#1899](https://github.com/databricks/cli/pull/1899)).
* Show an error when using a cluster override with 'mode: production'
([#1994](https://github.com/databricks/cli/pull/1994)).

API Changes:
* Added `databricks account federation-policy` command group.
* Added `databricks account service-principal-federation-policy` command
group.
* Added `databricks aibi-dashboard-embedding-access-policy delete`
command.
* Added `databricks aibi-dashboard-embedding-approved-domains delete`
command.

OpenAPI commit a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d (2024-12-16)
Dependency updates:
* Upgrade TF provider to 1.62.0
([#2030](https://github.com/databricks/cli/pull/2030)).
* Upgrade Go SDK to 0.54.0
([#2029](https://github.com/databricks/cli/pull/2029)).
* Bump TF codegen dependencies to latest
([#1961](https://github.com/databricks/cli/pull/1961)).
* Bump golang.org/x/term from 0.26.0 to 0.27.0
([#1983](https://github.com/databricks/cli/pull/1983)).
* Bump golang.org/x/sync from 0.9.0 to 0.10.0
([#1984](https://github.com/databricks/cli/pull/1984)).
* Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0
([#1985](https://github.com/databricks/cli/pull/1985)).
* Bump golang.org/x/crypto from 0.24.0 to 0.31.0
([#2006](https://github.com/databricks/cli/pull/2006)).
* Bump golang.org/x/crypto from 0.30.0 to 0.31.0 in
/bundle/internal/tf/codegen
([#2005](https://github.com/databricks/cli/pull/2005)).
2024-12-18 17:17:02 +01:00
Andrew Nester e3b256e753
Upgrade TF provider to 1.62.0 (#2030)
## Changes
* Added support for `IsSingleNode`, `Kind` and `UseMlRuntime` for
clusters
* Added support for `CleanRoomsNotebookTask`
* `DaysOfWeek` for pipeline restart window is now a list
2024-12-18 14:03:08 +00:00
Andrew Nester 59f0859e00
Upgrade Go SDK to 0.54.0 (#2029)
## Changes

* Added
[a.AccountFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#AccountFederationPolicyAPI)
account-level service and
[a.ServicePrincipalFederationPolicy](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/oauth2#ServicePrincipalFederationPolicyAPI)
account-level service.
* Added `IsSingleNode`, `Kind` and `UseMlRuntime` fields for Cluster
commands.
* Added `UpdateParameterSyntax` field for
[dashboards.MigrateDashboardRequest](https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MigrateDashboardRequest).
2024-12-18 12:43:27 +00:00
Ilya Kuznetsov 042c8d88c6
Custom annotations for bundle-specific JSON schema fields (#1957)
## Changes

Adds annotations to the JSON schema for fields that are not covered by the
OpenAPI spec.

Custom descriptions were copy-pasted from a documentation PR which is
still WIP, so descriptions for some fields are missing
Further improvements:
* documentation autogen based on json-schema
* fix missing descriptions

## Tests

This script is not part of the CLI package, so I didn't test all corner
cases. A few high-level tests were added to make sure that the schema
annotations are in sync with the actual config

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-18 10:19:14 +00:00
Andrew Nester 5b84856b17
Correctly handle required query params in CLI generation (#2027)
## Changes
If there are required query parameters, they are top-level fields of the
request object and not fields of the nested request body.

This is needed for upcoming OpenAPI spec changes where such query
parameters are introduced.

There are no changes after regenerating the CLI with the current spec and
the fix (it appears we haven't had such params before).
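A hedged illustration of the distinction (the types and tags below are
hypothetical, not the actual generated code):

```
// Hypothetical generated types, for illustration only.
type Widget struct {
	Name string `json:"name"`
}

type UpdateWidgetRequest struct {
	// Required query parameter: a top-level field of the request object,
	// serialized into the URL rather than the JSON body.
	AllowMissing bool `json:"-" url:"allow_missing"`
	// Nested request body.
	Widget Widget `json:"widget"`
}
```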
2024-12-17 20:05:42 +01:00
Pieter Noordhuis 13fa43e0f5
Remove superfluous helper (#2028)
## Changes

There was only one helper for AWS and not the other clouds. Found this
when looking through double calls to `acc.WorkspaceTest()` (see
`TestPythonWheelTaskDeployAndRunOnInteractiveCluster`).

## Tests

n/a
2024-12-17 17:34:09 +00:00
Pieter Noordhuis 23ddee8023
Skip job runs during integration testing for PRs (#2024)
## Changes

A small subset of tests trigger cluster creation to run jobs. These
tests comprise a substantial amount of the total integration test
runtime. We can skip them on PRs and only run them on the main branch.

## Tests

Confirmed the short runtime is ~20 mins.
2024-12-17 17:16:58 +00:00
Denis Bilenko 2fa3b48083
Remove 'make fmt' and 'fmt' workflow (#2026)
Remove an unnecessary make command and GitHub workflow - it's a subset of
"lint" now. However, keep "mod tidy" separate; I don't think the linter
does that.
2024-12-17 16:34:54 +01:00
Pieter Noordhuis d7eac598cd
Move integration test helpers to `integration/internal` (#2022)
## Changes

The `acc` package is exclusively used by integration tests, so it
belongs under `integration/internal`.

It's not the best name; we can rename it later.

## Tests

n/a
2024-12-17 08:45:58 +01:00
Andrew Nester e60fe1bff2
Fixed downloading arm64 binaries (#2021)
## Changes
Fixed downloading arm64 binaries

Go 1.23 changed the way built binaries are prefixed on arm64; more
details here: https://tip.golang.org/doc/go1.23#arm64
2024-12-16 17:34:22 +01:00
Denis Bilenko b6f299974f
Fix testutil.RandomName to use the full character set (#2020)
## Changes
It was using only the first 12 characters of the character set, which does
not seem intended.
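A hedged illustration of the bug class (the real helper's character set and
shape may differ):

```
import "math/rand"

const charset = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomName sketches the fix: draw from the full character set instead
// of only its first 12 characters.
func randomName(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = charset[rand.Intn(len(charset))] // was: rand.Intn(12)
	}
	return string(b)
}
```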

## Tests
Existing tests.
2024-12-16 17:21:20 +01:00
Denis Bilenko e5b836a6ac
Refactor initTestTemplate/deployBundle/destroyBundle to not return errors (#2017)
## Changes
These test helpers were updated to handle the error internally and not
return it. Since they have a testing.T object, they can do so directly. On
the caller side, these functions were always followed by
require.NoError(t, err), which was cleaned up.

This approach helps reduce the setup/teardown boilerplate in the test
cases.

## Tests
Existing tests.
2024-12-16 13:41:32 +01:00
Pieter Noordhuis 70b7bbfd81
Remove calls to `t.Setenv` from integration tests (#2018)
## Changes

The `Setenv` helper function configures an environment variable and
resets it to its original value when exiting the test scope. It is
incompatible with running tests in parallel because it modifies
process-wide state. The `libs/env` package defines functions to interact
with the environment but records `Setenv` calls on a `context.Context`.
This enables us to override/specialize the environment scoped to a
context.

Pre-requisites for removing the `t.Setenv` calls:
* Make `cmdio.NewIO` accept a context and use it with `libs/env`
* Make all `internal/testcli` functions use a context

The rest of this change:
* Modifies integration tests to initialize a context to use if there
wasn't already one
* Updates `t.Setenv` calls to use `env.Set` (see the sketch below)
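A hedged sketch of the resulting pattern (assuming `env.Set` returns a
derived context, as described above):

```
import (
	"context"
	"testing"

	"github.com/databricks/cli/libs/env"
)

func TestExample(t *testing.T) {
	ctx := context.Background()
	// Scope the variable to the context instead of mutating process-wide
	// state, so the test can run in parallel.
	ctx = env.Set(ctx, "DATABRICKS_HOST", "https://example.cloud.databricks.com")
	_ = ctx // pass ctx to the code under test
}
```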

## Tests

n/a
2024-12-16 12:34:37 +01:00
dependabot[bot] d929ea3eef
Bump golang.org/x/crypto from 0.30.0 to 0.31.0 in /bundle/internal/tf/codegen (#2005)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.30.0 to 0.31.0.
Commits:
* b4f1988 ssh: make the public key cache a 1-entry FIFO cache
* See full diff in https://github.com/golang/crypto/compare/v0.30.0...v0.31.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 08:55:33 +01:00
dependabot[bot] 9f9d892db9
Bump golang.org/x/crypto from 0.24.0 to 0.31.0 (#2006)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.24.0 to 0.31.0.
Commits:
* b4f1988 ssh: make the public key cache a 1-entry FIFO cache
* 7042ebc openpgp/clearsign: just use rand.Reader in tests
* 3e90321 go.mod: update golang.org/x dependencies
* 8c4e668 x509roots/fallback: update bundle
* 6018723 go.mod: update golang.org/x dependencies
* 71ed71b README: don't recommend go get
* 750a45f sha3: add MarshalBinary, AppendBinary, and UnmarshalBinary
* 36b1725 sha3: avoid trailing permutation
* 80ea76e sha3: fix padding for long cSHAKE parameters
* c17aa50 sha3: avoid buffer copy
* Additional commits viewable in https://github.com/golang/crypto/compare/v0.24.0...v0.31.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-16 08:55:19 +01:00
Ilia Babanov daf0f48143
Remove unused vscode settings in the templates (#2013)
## Changes
The VSCode extension no longer uses the `databricks.python.envFile` setting,
and older extension versions will use the same default value anyway.

## Tests
None
2024-12-13 16:13:21 +00:00
Pieter Noordhuis 3b00d7861e
Remove calls to `testutil.GetEnvOrSkipTest(t, "CLOUD_ENV")` (#2014)
## Changes

These calls are no longer necessary now that integration tests use a
main function that performs this check. This change updates integration
tests that call this function. Of those, the call sites that initialize
a workspace client are updated to use `acc.WorkspaceTest(t)` to get one.

## Tests

n/a
2024-12-13 16:09:51 +00:00
Andrew Nester 58dfa70e50
Upgrade TF provider to 1.61.0 (#2011)
## Changes
Upgraded to TF provider 1.61.0

### New Features and Improvements
- Added databricks_app resource and data source
(https://github.com/databricks/terraform-provider-databricks/pull/4099).
- Added databricks_credentials resource and data source
2024-12-13 15:55:49 +00:00
Pieter Noordhuis 4e95cb226c
Remove superfluous name prefix for integration tests (#2012)
## Changes

Mechanical rename of "TestAcc" -> "Test" in the test name prefix.

## Tests

n/a
2024-12-13 15:47:50 +01:00
Pieter Noordhuis c958702097
Move integration tests to `integration` package (#2009)
## Changes

Objectives:
* A dedicated directory for integration tests
* It is not picked up by `go test ./...`
* No need for a `TestAcc` test name prefix
* More granular packages to improve test selection (future)

The tree structure generally mirrors the source code tree structure.

Requirements for new files in this directory:
* Every package **must** be named after its directory with `_test` appended
  * Requiring a different package name for integration tests avoids
    aliasing with the main package.
* Every integration test package **must** include a `main_test.go` file
  (see the sketch below).

These requirements are enforced by a unit test in the `integration` package.
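A hedged sketch of such a `main_test.go` (the exact gating mechanism used in
this repository is an assumption):

```
// integration/<package>/main_test.go (sketch)
package mypackage_test

import (
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	// Only run integration tests when the environment is configured.
	if os.Getenv("CLOUD_ENV") == "" {
		os.Exit(0)
	}
	os.Exit(m.Run())
}
```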

## Tests

Integration tests pass.

The total run time regresses by about 10%. A follow-up change that
increases the degree of test parallelism will address this.
2024-12-13 15:38:58 +01:00
Pieter Noordhuis 61b0c59137
Move test helpers from internal to `acc` and `testutil` (#2008)
## Changes

This change moves fixture helpers to `internal/acc/fixtures.go`. These
helpers create an ephemeral path or resource for the duration of a test.
Call sites are updated to use `acc.WorkspaceTest()` to construct a
workspace-focused test wrapper as needed.

This change also moves the `GetNodeTypeID()` function to `testutil`.

## Tests

n/a
2024-12-12 21:28:04 +00:00
Pieter Noordhuis e472b5d888
Move the CLI test runner to `internal/testcli` package (#2004)
## Changes

The CLI test runner instantiates a new CLI "instance" through
`cmd.New()` and runs it with specified arguments. This is as close as we
get to running the real CLI **in-process**. This runner was located in
the `internal` package next to other helpers. This change moves it to
its own dedicated package.

Note: this runner transitively imports pretty much the entire
repository, which is why we intentionally keep it _separate_ from
`testutil`.

## Tests

n/a
2024-12-12 16:48:51 +00:00
Pieter Noordhuis dd3b7ec450
Define and use `testutil.TestingT` interface (#2003)
## Changes

Using an interface instead of a concrete type means we can pass
`*testing.T` directly or any wrapper type that implements a superset of
this interface. It prepares for broader use of `acc.WorkspaceT`,
which enhances the testing object with helper functions for using a
Databricks workspace.

This eliminates the need to dereference a `*testing.T` field on a
wrapper type.
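A trimmed-down sketch of the pattern (the actual method set of
`testutil.TestingT` may be larger):

```
// TestingT is a subset of *testing.T that test helpers can depend on.
// Both *testing.T and wrapper types can satisfy it.
type TestingT interface {
	Cleanup(func())
	Errorf(format string, args ...any)
	Fatalf(format string, args ...any)
	Helper()
	Logf(format string, args ...any)
}
```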

## Tests

n/a
2024-12-12 14:42:15 +00:00
dependabot[bot] cabdabf31e
Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0 (#1985)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.52.0 to 0.53.0.
Release notes (sourced from github.com/databricks/databricks-sdk-go's
releases):

v0.53.0

Bug Fixes:
* Update Changelog file (#1091).

Internal Changes:
* Update to latest OpenAPI spec (#1098).

Note: This release contains breaking changes, please see the API changes
below for more details.

API Changes:
* Added `cleanrooms` package.
* Added `DeletePublicWorkspaceSetting` method for
`w.AibiDashboardEmbeddingAccessPolicy` workspace-level service.
* Added `DeletePublicWorkspaceSetting` method for
`w.AibiDashboardEmbeddingApprovedDomains` workspace-level service.
* Added `jobs.CleanRoomTaskRunLifeCycleState`,
`jobs.CleanRoomTaskRunResultState` and `jobs.CleanRoomTaskRunState`.
* Added `dashboards.DataType`, `dashboards.QuerySchema` and
`dashboards.QuerySchemaColumn`.
* Added `catalog.DatabricksGcpServiceAccount` and
`catalog.GenerateTemporaryServiceCredentialGcpOptions`.
* Added `files.ContentLength` and `files.ContentRange`.
* Added `settings.DeleteAibiDashboardEmbeddingAccessPolicySettingRequest`,
`settings.DeleteAibiDashboardEmbeddingAccessPolicySettingResponse`,
`settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingRequest`,
`settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingResponse`,
`settings.EgressNetworkPolicy`,
`settings.EgressNetworkPolicyInternetAccessPolicy`,
`settings.EgressNetworkPolicyInternetAccessPolicyInternetDestination`,
`settings.EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationFilteringProtocol`,
`settings.EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationType`,
`settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyMode`,
`settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyModeLogOnlyModeType`,
`settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyModeWorkloadType`,
`settings.EgressNetworkPolicyInternetAccessPolicyRestrictionMode`,
`settings.EgressNetworkPolicyInternetAccessPolicyStorageDestination` and
`settings.EgressNetworkPolicyInternetAccessPolicyStorageDestinationStorageDestinationType`.
* Added `sharing.PartitionSpecificationPartition`.
* Added `DatabricksGcpServiceAccount` field for
`catalog.CreateCredentialRequest`.
* Added `DatabricksGcpServiceAccount` field for `catalog.CredentialInfo`.
* Added `GcpOptions` field for
`catalog.GenerateTemporaryServiceCredentialRequest`.
* Added `DatabricksGcpServiceAccount` field for
`catalog.UpdateCredentialRequest`.
* Added `CachedQuerySchema` field for `dashboards.QueryAttachment`.
* [Breaking] Changed `ContentLength` field for `files.DownloadResponse` to
`files.ContentLength`.
* [Breaking] Changed `ContentLength` field for `files.GetMetadataResponse`
to `files.ContentLength`.
* [Breaking] Removed `catalog.GcpServiceAccountKey`.
* [Breaking] Removed `files.FileSize`.
* [Breaking] Removed `GcpServiceAccountKey` field for
`catalog.CreateCredentialRequest`.

OpenAPI SHA: 7016dcbf2e011459416cf408ce21143bcc4b3a25, Date: 2024-12-05
Commits:
* 6f68afd [Release] Release v0.53.0 (#1099)
* 011bd5d [Internal] Update to latest OpenAPI spec (#1098)
* 8219c2c [Fix] Update Changelog file (#1091)
* See full diff in https://github.com/databricks/databricks-sdk-go/compare/v0.52.0...v0.53.0

Most Recent Ignore Conditions Applied to This Pull Request:

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |



---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-12-12 14:36:00 +00:00
Pieter Noordhuis 241fcfffb0
Consolidate helper functions to `internal/testutil` package (#2002)
## Changes

This is one step (of many) toward moving the integration tests around.

This change consolidates the following functions:

* `ReadFile` / `WriteFile`
* `GetEnvOrSkipTest`
* `RandomName`

## Tests

n/a
2024-12-12 12:35:38 +00:00
Denis Bilenko a7e91a5b68
Enable gofumpt in vscode (#2001)
## Tests
Verified that with this setting, VSCode reformats code on Save according
to gofumpt rules.
2024-12-12 11:06:34 +01:00
Denis Bilenko 7249b82bf7
Add .git-blame-ignore-revs with linter-related mass change commits (#2000)
This file is automatically recognized by GitHub. For git to recognize
it, run:

`git config blame.ignoreRevsFile .git-blame-ignore-revs`

## Tests
Run 'git blame' with and without this option.

Compare the views on GitHub:

https://github.com/databricks/cli/blame/2e018cf/libs/git/reference_test.go

https://github.com/databricks/cli/blame/denis.bilenko/ignore-linter-commits/libs/git/reference_test.go
2024-12-12 10:54:00 +01:00
Denis Bilenko 2e018cfaec
Enable gofumpt and goimports in golangci-lint (#1999)
## Changes
Enable gofumpt and goimports in golangci-lint and apply the autofix.

This makes 'make fmt' redundant; it will be cleaned up in a follow-up diff.

## Tests
Existing tests.
2024-12-12 10:28:42 +01:00
Denis Bilenko 592474880d
Enable 'govet' linter; expand log/diag with non-f functions (#1996)
## Changes
Fix all the govet-found issues and enable govet linter.

This prompted adding non-formatting variants of the logging functions
(Errorf -> Error).
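For illustration, a hedged sketch of the two variants (assuming `ctx` and
`path` are in scope):

```
// Non-formatting variant: no format string, so govet's printf analysis
// has nothing to flag.
log.Error(ctx, "failed to load configuration")

// Formatting variant, for when interpolation is needed.
log.Errorf(ctx, "failed to load configuration from %s", path)
```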

## Tests
Existing tests.
2024-12-11 16:42:03 +00:00
Lennart Kats (databricks) 2ee7d56ae6
Show an error when using a cluster override with 'mode: production' (#1994)
## Changes

We should show a warning when using a cluster override with 'mode:
production'. Right now, we inadvertently show an error for this state.
This is a followup based on
https://github.com/databricks/cli/pull/1899#discussion_r1877765148.
2024-12-11 14:57:31 +00:00
Andrew Nester aa0b6080a4
Fixed TestAccBundleDeployUcSchema test (#1997)
## Changes
It was failing because when schema.yml was removed, the `catalog: main`
option was left set in the base pipeline definition in databricks.yml,
which led to an incorrect config (empty target schema).
https://github.com/databricks/cli/pull/1413

Backend behaviour changed and DLT pipelines stopped accepting empty
targets, leading to the error, which was ignored before.

## Tests
```
--- PASS: TestAccBundleDeployUcSchema (41.75s)
PASS
coverage: 33.3% of statements in ./...
ok      github.com/databricks/cli/internal/bundle       42.210s coverage: 33.3% of statements in ./...
```
2024-12-11 14:00:43 +00:00
Denis Bilenko e39e94b12f
Make 'make lint' apply --fix (#1995)
Add 'make lintcheck' to lint without fixing. Fixing is what you usually
want.

No changes to github workflow since that does not call our Makefile.
2024-12-11 13:53:57 +01:00
Denis Bilenko 8d5351c1c3
Enable errcheck everywhere and fix or silent remaining issues (#1987)
## Changes
Enable errcheck linter for the whole codebase.

Fix remaining complaints:
- If we can propagate the error to the caller, do that
- If we are writing to stdout, continue ignoring errors (to avoid crashing
in the "cli | head" case)
- Add an exception for non-critical cobra API such as
MarkHidden/MarkDeprecated/RegisterFlagCompletionFunc. This keeps the current
code and behaviour; whether we want to change this is to be decided later.
- Continue ignoring errors where that is the desired behaviour (e.g.
git.loadConfig).
- Continue ignoring errors where panicking seems riskier than ignoring
the error.
- Annotate cases in libs/dyn with //nolint:errcheck - to be addressed
later.

Note, this PR is not meant to come up with the best strategy for each
case, but to be a relatively safe change to enable the errcheck linter.
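A hedged sketch of the two most common patterns (doThing is a placeholder):

```
// Propagate the error to the caller when possible.
if err := doThing(); err != nil {
	return err
}

// When writing to stdout, keep ignoring errors explicitly, so the CLI
// does not crash in the "cli | head" case.
_, _ = fmt.Fprintln(os.Stdout, "result")
```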
  
## Tests
Existing tests.
2024-12-11 13:26:00 +01:00
Denis Bilenko 4236e7122f
Switch to `folders.FindDirWithLeaf` (#1963)
## Changes
Remove two duplicate implementations of the same logic and switch
everywhere to folders.FindDirWithLeaf.

Add an Abs() call to FindDirWithLeaf; it cannot really work on relative
paths.
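A hedged usage sketch (the exact signature may differ):

```
// Walk up from dir to the closest ancestor containing a ".git" entry.
root, err := folders.FindDirWithLeaf(dir, ".git")
if err != nil {
	return err
}
```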

## Tests
Existing tests.
2024-12-11 09:44:22 +01:00
Denis Bilenko 67f08ba924
Avoid panic if Config.Workspace.CurrentUser.User is not set (#1993)
## Changes
Added an extra check to avoid a panic if /api/2.0/preview/scim/v2/Me returns `{}`.
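A hedged sketch of the kind of guard this adds (field names follow the
description above; `b` and `diag` are assumed from the surrounding code,
and the exact message may differ):

```
// Guard against an empty /Me response before dereferencing.
if b.Config.Workspace.CurrentUser == nil || b.Config.Workspace.CurrentUser.User == nil {
	return diag.Errorf("unable to determine current user")
}
```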

## Tests
Existing tests.
2024-12-11 09:40:14 +01:00
dependabot[bot] ad1359c1eb
Bump golang.org/x/sync from 0.9.0 to 0.10.0 (#1984)
Bumps [golang.org/x/sync](https://github.com/golang/sync) from 0.9.0 to
0.10.0.
Commits:
* 913fb63 singleflight: fix typo in singleflight_test.go
* See full diff in https://github.com/golang/sync/compare/v0.9.0...v0.10.0



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-10 17:14:20 +01:00
dependabot[bot] 070dc6813f
Bump golang.org/x/term from 0.26.0 to 0.27.0 (#1983)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.26.0 to
0.27.0.
Commits:
* 442846a go.mod: update golang.org/x dependencies
* See full diff in https://github.com/golang/term/compare/v0.26.0...v0.27.0



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-10 16:10:25 +01:00
Lennart Kats (databricks) f3c628e537
Allow overriding compute for non-development mode targets (#1899)
## Changes
Allow overriding compute for non-development targets. We previously had
a restriction in place where `--cluster-id` was only allowed for targets
that use `mode: development`. The intention was to prevent mistakes, but
this was overly restrictive.

## Tests
Updated unit tests.
2024-12-10 10:02:44 +00:00
Pieter Noordhuis 48d7c08a46
Bump TF codegen dependencies to latest (#1961)
## Changes

This updates the TF codegen dependencies to latest.

## Tests

Ran codegen and confirmed it still works.

See `bundle/internal/tf/codegen/README.md` for instructions.
2024-12-09 21:29:11 +01:00
Andrew Nester dc2cf3bc42
Process top-level permissions for resources using dynamic config (#1964)
## Changes
This PR ensures that when new resources are added they are handled by
top-level permissions mutator, either by supporting or not supporting
the resource type.

## Tests
Added unit tests
2024-12-09 15:26:41 +00:00
Pieter Noordhuis 3457c10c7f
Pin gotestsum version to v1.12.0 (#1981)
Make this robust against inadvertent upgrades.
2024-12-09 16:19:19 +01:00
Pieter Noordhuis e9fa7b7c0e
Remove global variable from clusters integration test (#1972)
## Changes

I saw this test fail on rerun because the global wasn't set.

Fix by removing the global and using a different approach to acquire a
valid cluster ID.

## Tests

Integration tests.
2024-12-09 14:25:06 +00:00
Denis Bilenko 1b2be1b2cb
Add error checking in tests and enable errcheck there (#1980)
## Changes
Fix all errcheck-found issues in tests and test helpers. Mostly this is
done by adding require.NoError(t, err), and sometimes panic() where a t
object is not available.

The initial change was obtained with aider+claude, then manually reviewed
and cleaned up.

## Tests
Existing tests.
2024-12-09 13:56:41 +01:00
shreyas-goenka e0d54f0bb6
Do not run mlops-stacks integration test in parallel (#1982)
## Changes
This test changes the cwd using the `testutil.Chdir` function. This
causes flakiness with other integration tests, like
`TestAccWorkspaceFilesExtensionsNotebooksAreNotDeletedAsFiles`, which
rely on the cwd being configured correctly to read test fixtures.

The `t.Setenv` call in `testutil.Chdir` ensures that it is not run from
a test whose upstream is executing in parallel.
2024-12-09 12:34:27 +00:00
Pieter Noordhuis 227a13556b
Add build rule for running integration tests (#1965)
## Changes

Make it possible to change what/how we run our integration tests from
within this repository.

## Tests

Integration tests all pass when run with this command.
2024-12-09 09:52:08 +00:00
Denis Bilenko 4c1042132b
Enable linter bodyclose (#1968)
## Changes
Enable linter '[bodyclose](https://github.com/timakin/bodyclose)' and
fix 2 cases in tests.

## Tests
Existing tests.
2024-12-05 19:11:49 +00:00
Pieter Noordhuis 62bc59a3a6
Fail filer integration test if error is nil (#1967)
## Changes

I found a race where this error is nil and the subsequent assert panics
on the error being nil. This change makes the test robust against this by
failing immediately if the error is different from the one we expect.

## Tests

n/a
2024-12-05 18:20:46 +00:00
Pieter Noordhuis 6e754d4f34
Rewrite 'interface{} -> any' (#1959)
## Changes

The `any` alias for `interface{}` has been around since Go 1.18.

Now that we're using golangci-lint (#1953), we can lint on it.

Existing commits can be updated with:
```
gofmt -w -r 'interface{} -> any' .
```

## Tests

n/a
2024-12-05 15:37:24 +00:00
Pieter Noordhuis 7ffe93e4d0
[Release] Release v0.236.0 (#1966)
**New features for Databricks Asset Bundles:**

This release adds support for managing Unity Catalog volumes as part of
your bundle configuration.

Bundles:
* Add DABs support for Unity Catalog volumes
([#1762](https://github.com/databricks/cli/pull/1762)).
* Support lookup by name of notification destinations
([#1922](https://github.com/databricks/cli/pull/1922)).
* Extend "notebook not found" error to warn about missing extension
([#1920](https://github.com/databricks/cli/pull/1920)).
* Skip sync warning if no sync paths are defined
([#1926](https://github.com/databricks/cli/pull/1926)).
* Add validation for single node clusters
([#1909](https://github.com/databricks/cli/pull/1909)).
* Fix segfault in bundle summary command
([#1937](https://github.com/databricks/cli/pull/1937)).
* Add the `bundle_uuid` helper function for templates
([#1947](https://github.com/databricks/cli/pull/1947)).
* Add default value for `volume_type` for DABs
([#1952](https://github.com/databricks/cli/pull/1952)).
* Properly read Git metadata when running inside workspace
([#1945](https://github.com/databricks/cli/pull/1945)).
* Upgrade TF provider to 1.59.0
([#1960](https://github.com/databricks/cli/pull/1960)).

Internal:
* Breakout variable lookup into separate files and tests
([#1921](https://github.com/databricks/cli/pull/1921)).
* Add golangci-lint v1.62.2
([#1953](https://github.com/databricks/cli/pull/1953)).

Dependency updates:
* Bump golang.org/x/term from 0.25.0 to 0.26.0
([#1907](https://github.com/databricks/cli/pull/1907)).
* Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1
([#1930](https://github.com/databricks/cli/pull/1930)).
* Bump github.com/stretchr/testify from 1.9.0 to 1.10.0
([#1932](https://github.com/databricks/cli/pull/1932)).
* Bump github.com/databricks/databricks-sdk-go from 0.51.0 to 0.52.0
([#1931](https://github.com/databricks/cli/pull/1931)).
2024-12-05 14:39:26 +00:00
Pieter Noordhuis 647b09e6e2
Upgrade TF provider to 1.59.0 (#1960)
## Changes

Notable changes:
* Fixes dashboard deployment if it was trashed out-of-band.
* Removes client-side validation for single-node cluster configuration
(also see #1546).

Beware: for the same reason as in #1900, this excludes the changes for
the quality monitor resource.

## Tests

Integration tests pass.
2024-12-05 12:09:45 +01:00
Denis Bilenko 0ad790e468
Properly read Git metadata when running inside workspace (#1945)
## Changes

Since there is no .git directory in the Workspace file system, we need
to make an API call to api/2.0/workspace/get-status?return_git_info=true
to fetch the Git root of the repo, the current branch, commit, and
origin.

Added new function FetchRepositoryInfo that either looks up and parses
.git or calls remote API depending on env.
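
A minimal sketch of that dispatch, with illustrative helper names and a
simplified signature (the actual function in the CLI may differ):

```go
package git

import (
	"context"
	"strings"
)

// RepositoryInfo holds the fields described above; the exact shape in
// the CLI may differ.
type RepositoryInfo struct {
	RepoRoot      string
	CurrentBranch string
	LatestCommit  string
	OriginURL     string
}

// FetchRepositoryInfo picks a lookup strategy based on where the CLI
// runs: the workspace file system has no .git directory, so we call
// the API there.
func FetchRepositoryInfo(ctx context.Context, path string) (RepositoryInfo, error) {
	if strings.HasPrefix(path, "/Workspace/") {
		// GET /api/2.0/workspace/get-status?return_git_info=true
		return fetchViaAPI(ctx, path)
	}
	// Locally, find and parse the .git directory as before.
	return fetchViaDotGit(ctx, path)
}

// Stubs standing in for the two strategies.
func fetchViaAPI(ctx context.Context, path string) (RepositoryInfo, error) {
	return RepositoryInfo{}, nil
}

func fetchViaDotGit(ctx context.Context, path string) (RepositoryInfo, error) {
	return RepositoryInfo{}, nil
}
```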

Refactor Repository/View/FileSet to accept repository root rather than
calculate it. This helps because:
- Repository is currently created in multiple places and finding the
repository root is becoming relatively expensive (API call needed).
- Repository/FileSet/View do not have access to the current Bundle,
which is where the WorkspaceClient is stored.

## Tests

- Tested manually by running "bundle validate --json" inside web
terminal within Databricks env.
- Added integration tests for the new API.

---------

Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-05 10:13:13 +00:00
Denis Bilenko 0a36681bef
Add golangci-lint v1.62.2 (#1953) 2024-12-04 17:40:19 +00:00
shreyas-goenka 0da17f6ec6
Add default value for `volume_type` for DABs (#1952)
## Changes

The Unity Catalog volumes API requires a `volume_type` argument when
creating volumes. In the context of DABs, it's unnecessary to require
users to specify the volume type every time. We can default to "MANAGED"
instead.
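
With the default in place, a minimal volume definition can omit the
field entirely (illustrative config; field names follow the UC volumes
API):

```yaml
resources:
  volumes:
    my_volume:
      catalog_name: main
      schema_name: default
      name: my_volume
      # volume_type: MANAGED  # now implied when omitted
```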

This PR is similar to https://github.com/databricks/cli/pull/1743 which
does the same for dashboards.

## Tests
Unit test
2024-12-04 11:05:54 +00:00
Denis Bilenko 0e088eb9f8
Simplify load_git_details.go; remove unnecessary Abs() call (#1950)
Suggested here
https://github.com/databricks/cli/pull/1945#discussion_r1866088579
2024-12-02 22:41:38 +00:00
shreyas-goenka 2847533e1e
Add DABs support for Unity Catalog volumes (#1762)
## Changes

This PR adds support for UC volumes to DABs.

### Can I use a UC volume managed by DABs in `artifact_path`?

Yes, but we require the volume to exist before being referenced in
`artifact_path`. Otherwise you'll see an error that the volume does not
exist. For this case, this PR also adds a warning if we detect that the
UC volume is defined in the DAB itself, which informs the user to deploy
the UC volume in a separate deployment first before using it in
`artifact_path`.
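
As a sketch, pointing `artifact_path` at a pre-existing volume looks
like this (using the same volume path as the manual tests below):

```yaml
workspace:
  artifact_path: /Volumes/main/caps/my_volume/artifacts
```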

We cannot create the UC volume and then upload the artifacts to it in
the same `bundle deploy` because `bundle deploy` always uploads the
artifacts to `artifact_path` before materializing any resources defined
in the bundle. Supporting this in a single deployment requires us to
migrate away from our dependency on the Databricks Terraform provider to
manage the CRUD lifecycle of DABs resources.

### Why do we not support `presets.name_prefix` for UC volumes?

UC volumes will not have a `dev_shreyas_goenka` prefix added in `mode:
development`. Configuring `presets.name_prefix` will be a no-op for UC
volumes. We have decided not to support prefixing for UC resources. This
is because:
1. UC provides its own namespace hierarchy that is independent of DABs.
2. Users can always manually use `${workspace.current_user.short_name}`
to configure the prefixes manually.
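
For instance, a user who wants per-developer isolation can configure it
explicitly (illustrative schema definition):

```yaml
resources:
  schemas:
    dev_schema:
      catalog_name: main
      name: ${workspace.current_user.short_name}_dev_schema
```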

Customers often manually set up a UC hierarchy for dev and prod,
including a schema or catalog per developer. Thus, it's often
unnecessary for us to add prefixing in `mode: development` by default
for UC resources.

In retrospect, supporting prefixing for UC schemas and registered models
was a mistake and will be removed in a future release of DABs.

## Tests
Unit, integration test, and manually. 

### Manual Testing cases:
 1. UC volume does not exist:
```
➜  bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/my_volume that is configured in the artifact_path: Not Found
```

2. UC Volume does not exist, but is defined in the DAB
```
➜  bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/managed_by_dab that is configured in the artifact_path: Not Found

Warning: You might be using a UC volume in your artifact_path that is managed by this bundle but which has not been deployed yet. Please deploy the UC volume in a separate bundle deploy before using it in the artifact_path.
  at resources.volumes.bar
  in databricks.yml:24:7

```

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-02 21:18:07 +00:00
shreyas-goenka f9d65f315f
Add comment for why we test two files for `bundle_uuid` (#1949)
## Changes
Addresses feedback from this thread
https://github.com/databricks/cli/pull/1947#discussion_r1865692479
2024-12-02 14:40:57 +00:00
Pieter Noordhuis 94b221b8ba
Build snapshot releases for bugbash branches (#1948)
## Changes

To be used with
https://github.com/databricks/cli/tree/main/internal/bugbash.

## Tests

n/a
2024-12-02 14:04:08 +00:00
shreyas-goenka e86a949d99
Add the `bundle_uuid` helper function for templates (#1947)
## Changes
This PR adds the `bundle_uuid` helper function that'll return a stable
identifier for the bundle for the duration of the `bundle init` command.

This is also the UUID that'll be set in the telemetry event sent during
`databricks bundle init` and would be used to correlate revenue from
bundle init with resource deployments.

Template authors should add the uuid field to the `databricks.yml` file
they generate:
```
bundle:
  # A stable identifier for your DAB project. We use this UUID in the Databricks backend
  # to correlate and identify multiple deployments of the same DAB project. 
  uuid: {{ bundle_uuid }}
```

## Tests
Unit test
2024-12-02 10:29:29 +00:00
Denis Bilenko 00bd98f898
Move loadGitDetails mutator to Initialize phase (#1944)
This will require an API call when run inside a workspace, which in
turn requires a workspace client (we don't have one at that point). We
want to keep the Load phase quick, since it's common across all
commands.
2024-12-02 09:49:32 +00:00
dependabot[bot] 7b9726dd64
Bump github.com/databricks/databricks-sdk-go from 0.51.0 to 0.52.0 (#1931)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.51.0 to 0.52.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.52.0</h2>
<h3>Internal Changes</h3>
<ul>
<li>Update Jobs GetRun API to support paginated responses for jobs and
ForEach tasks (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1089">#1089</a>).</li>
</ul>
<h3>API Changes:</h3>
<ul>
<li>Added <code>ServicePrincipalClientId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/apps#App">apps.App</a>.</li>
<li>Added <code>AzureServicePrincipal</code>,
<code>GcpServiceAccountKey</code> and <code>ReadOnly</code> fields for
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>.</li>
<li>Added <code>AzureServicePrincipal</code>, <code>ReadOnly</code> and
<code>UsedForManagedStorage</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CredentialInfo">catalog.CredentialInfo</a>.</li>
<li>Added <code>AzureServicePrincipal</code> and <code>ReadOnly</code>
fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateCredentialRequest">catalog.UpdateCredentialRequest</a>.</li>
<li>Added <code>ExternalLocationName</code>, <code>ReadOnly</code> and
<code>Url</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ValidateCredentialRequest">catalog.ValidateCredentialRequest</a>.</li>
<li>Added <code>IsDir</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ValidateCredentialResponse">catalog.ValidateCredentialResponse</a>.</li>
<li>Changed <code>CreateCredential</code> and
<code>GenerateTemporaryServiceCredential</code> methods for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CredentialsAPI">w.Credentials</a>
workspace-level service with new required argument order.</li>
<li>Changed <code>AccessConnectorId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#AzureManagedIdentity">catalog.AzureManagedIdentity</a>
to be required.</li>
<li>Changed <code>AccessConnectorId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#AzureManagedIdentity">catalog.AzureManagedIdentity</a>
to be required.</li>
<li>Changed <code>Name</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>
to be required.</li>
<li>Changed <code>CredentialName</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GenerateTemporaryServiceCredentialRequest">catalog.GenerateTemporaryServiceCredentialRequest</a>
to be required.</li>
</ul>
<p>OpenAPI SHA: f2385add116e3716c8a90a0b68e204deb40f996c, Date:
2024-11-15</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md">github.com/databricks/databricks-sdk-go's
changelog</a>.</em></p>
<blockquote>
<h2>[Release] Release v0.52.0</h2>
<h3>Internal Changes</h3>
<ul>
<li>Update Jobs GetRun API to support paginated responses for jobs and
ForEach tasks (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1089">#1089</a>).</li>
</ul>
<h3>API Changes:</h3>
<ul>
<li>Added <code>ServicePrincipalClientId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/apps#App">apps.App</a>.</li>
<li>Added <code>AzureServicePrincipal</code>,
<code>GcpServiceAccountKey</code> and <code>ReadOnly</code> fields for
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>.</li>
<li>Added <code>AzureServicePrincipal</code>, <code>ReadOnly</code> and
<code>UsedForManagedStorage</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CredentialInfo">catalog.CredentialInfo</a>.</li>
<li>Added <code>AzureServicePrincipal</code> and <code>ReadOnly</code>
fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateCredentialRequest">catalog.UpdateCredentialRequest</a>.</li>
<li>Added <code>ExternalLocationName</code>, <code>ReadOnly</code> and
<code>Url</code> fields for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ValidateCredentialRequest">catalog.ValidateCredentialRequest</a>.</li>
<li>Added <code>IsDir</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ValidateCredentialResponse">catalog.ValidateCredentialResponse</a>.</li>
<li>Changed <code>CreateCredential</code> and
<code>GenerateTemporaryServiceCredential</code> methods for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CredentialsAPI">w.Credentials</a>
workspace-level service with new required argument order.</li>
<li>Changed <code>AccessConnectorId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#AzureManagedIdentity">catalog.AzureManagedIdentity</a>
to be required.</li>
<li>Changed <code>AccessConnectorId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#AzureManagedIdentity">catalog.AzureManagedIdentity</a>
to be required.</li>
<li>Changed <code>Name</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>
to be required.</li>
<li>Changed <code>CredentialName</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GenerateTemporaryServiceCredentialRequest">catalog.GenerateTemporaryServiceCredentialRequest</a>
to be required.</li>
</ul>
<p>OpenAPI SHA: f2385add116e3716c8a90a0b68e204deb40f996c, Date:
2024-11-15</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="8fa2b93471"><code>8fa2b93</code></a>
[Release] Release v0.52.0 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1090">#1090</a>)</li>
<li><a
href="1981951cc1"><code>1981951</code></a>
[Internal] Update Jobs GetRun API to support paginated responses for
jobs and...</li>
<li><a
href="776b63cf7f"><code>776b63c</code></a>
[Internal] Refresh PR template (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1084">#1084</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.51.0...v0.52.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.51.0&new-version=0.52.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-11-28 15:33:51 +00:00
Andrew Nester 8053e9c4e4
Fix segfault in bundle summary command (#1937)
## Changes
This PR introduces a new `isNil` method. It ensures we filter out all
improperly defined resources in the `bundle summary` command. This
includes deleted resources and resources with incorrect configuration,
such as only defining the key of the resource and nothing else.
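
A sketch of the filtering idea, assuming a resource interface with an
`IsNil`-style check (names are illustrative, not the CLI's exact code):

```go
package example

// Resource is a stand-in for the bundle resource interface.
type Resource interface {
	IsNil() bool
}

// filterForSummary drops resources that were deleted or only partially
// defined (e.g. just a key with no body), which previously caused the
// segfault in `bundle summary`.
func filterForSummary(all map[string]Resource) map[string]Resource {
	out := make(map[string]Resource, len(all))
	for key, r := range all {
		if r == nil || r.IsNil() {
			continue
		}
		out[key] = r
	}
	return out
}
```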

Fixes #1919, #1913

## Tests
Added regression unit test case
2024-11-28 12:27:24 +00:00
Denis Bilenko 6fc2093a22
Remove unused method GitRepository (#1941) 2024-11-28 08:52:21 +00:00
Denis Bilenko e57cbf1273
Remove unused field: Repository.real (#1936) 2024-11-27 12:14:39 +00:00
Pieter Noordhuis fae1b6742d
Update target references to use `${bundle.target}` (#1935)
## Changes

The built-in template contains a reference to `${bundle.environment}`.

This property was deprecated in favor of `${bundle.target}` a long time
ago (#670), so we should no longer emit it. The environment field will
continue to be usable until we cut a new major version at some point in
the far future.

## Tests

* Unit tests
* The test `TestInterpolationWithTarget` still covers correct
interpolation of `${bundle.environment}`
2024-11-27 11:51:08 +00:00
dependabot[bot] 85c0d2d3ee
Bump github.com/stretchr/testify from 1.9.0 to 1.10.0 (#1932)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify)
from 1.9.0 to 1.10.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/stretchr/testify/releases">github.com/stretchr/testify's
releases</a>.</em></p>
<blockquote>
<h2>v1.10.0</h2>
<h2>What's Changed</h2>
<h3>Functional Changes</h3>
<ul>
<li>Add PanicAssertionFunc by <a
href="https://github.com/fahimbagar"><code>@​fahimbagar</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1337">stretchr/testify#1337</a></li>
<li>assert: deprecate CompareType by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1566">stretchr/testify#1566</a></li>
<li>assert: make YAML dependency pluggable via build tags by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1579">stretchr/testify#1579</a></li>
<li>assert: new assertion NotElementsMatch by <a
href="https://github.com/hendrywiranto"><code>@​hendrywiranto</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1600">stretchr/testify#1600</a></li>
<li>mock: in order mock calls by <a
href="https://github.com/ReyOrtiz"><code>@​ReyOrtiz</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1637">stretchr/testify#1637</a></li>
<li>Add assertion for NotErrorAs by <a
href="https://github.com/palsivertsen"><code>@​palsivertsen</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1129">stretchr/testify#1129</a></li>
<li>Record Return Arguments of a Call by <a
href="https://github.com/jayd3e"><code>@​jayd3e</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1636">stretchr/testify#1636</a></li>
<li>assert.EqualExportedValues: accepts everything by <a
href="https://github.com/redachl"><code>@​redachl</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1586">stretchr/testify#1586</a></li>
</ul>
<h3>Fixes</h3>
<ul>
<li>assert: make tHelper a type alias by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1562">stretchr/testify#1562</a></li>
<li>Do not get argument again unnecessarily in Arguments.Error() by <a
href="https://github.com/TomWright"><code>@​TomWright</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/820">stretchr/testify#820</a></li>
<li>Fix time.Time compare by <a
href="https://github.com/myxo"><code>@​myxo</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1582">stretchr/testify#1582</a></li>
<li>assert.Regexp: handle []byte array properly by <a
href="https://github.com/kevinburkesegment"><code>@​kevinburkesegment</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1587">stretchr/testify#1587</a></li>
<li>assert: collect.FailNow() should not panic by <a
href="https://github.com/marshall-lee"><code>@​marshall-lee</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1481">stretchr/testify#1481</a></li>
<li>mock: simplify implementation of FunctionalOptions by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1571">stretchr/testify#1571</a></li>
<li>mock: caller information for unexpected method call by <a
href="https://github.com/spirin"><code>@​spirin</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1644">stretchr/testify#1644</a></li>
<li>suite: fix test failures by <a
href="https://github.com/stevenh"><code>@​stevenh</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1421">stretchr/testify#1421</a></li>
<li>Fix issue <a
href="https://redirect.github.com/stretchr/testify/issues/1662">#1662</a>
(comparing infs should fail) by <a
href="https://github.com/ybrustin"><code>@​ybrustin</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1663">stretchr/testify#1663</a></li>
<li>NotSame should fail if args are not pointers <a
href="https://redirect.github.com/stretchr/testify/issues/1661">#1661</a>
by <a href="https://github.com/sikehish"><code>@​sikehish</code></a> in
<a
href="https://redirect.github.com/stretchr/testify/pull/1664">stretchr/testify#1664</a></li>
<li>Increase timeouts in Test_Mock_Called_blocks to reduce flakiness in
CI by <a href="https://github.com/sikehish"><code>@​sikehish</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1667">stretchr/testify#1667</a></li>
<li>fix: compare functional option names for indirect calls by <a
href="https://github.com/arjun-1"><code>@​arjun-1</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1626">stretchr/testify#1626</a></li>
</ul>
<h3>Documentation, Build &amp; CI</h3>
<ul>
<li>.gitignore: ignore &quot;go test -c&quot; binaries by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1565">stretchr/testify#1565</a></li>
<li>mock: improve doc by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1570">stretchr/testify#1570</a></li>
<li>mock: fix FunctionalOptions docs by <a
href="https://github.com/snirye"><code>@​snirye</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1433">stretchr/testify#1433</a></li>
<li>README: link out to the excellent testifylint by <a
href="https://github.com/brackendawson"><code>@​brackendawson</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1568">stretchr/testify#1568</a></li>
<li>assert: fix typo in comment by <a
href="https://github.com/JohnEndson"><code>@​JohnEndson</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1580">stretchr/testify#1580</a></li>
<li>Correct the EventuallyWithT and EventuallyWithTf example by <a
href="https://github.com/JonCrowther"><code>@​JonCrowther</code></a> in
<a
href="https://redirect.github.com/stretchr/testify/pull/1588">stretchr/testify#1588</a></li>
<li>CI: bump softprops/action-gh-release from 1 to 2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1575">stretchr/testify#1575</a></li>
<li>mock: document more alternatives to deprecated
AnythingOfTypeArgument by <a
href="https://github.com/dolmen"><code>@​dolmen</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1569">stretchr/testify#1569</a></li>
<li>assert: Correctly document EqualValues behavior by <a
href="https://github.com/brackendawson"><code>@​brackendawson</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1593">stretchr/testify#1593</a></li>
<li>fix: grammar in godoc by <a
href="https://github.com/miparnisari"><code>@​miparnisari</code></a> in
<a
href="https://redirect.github.com/stretchr/testify/pull/1607">stretchr/testify#1607</a></li>
<li>.github/workflows: Run tests for Go 1.22 by <a
href="https://github.com/HaraldNordgren"><code>@​HaraldNordgren</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1629">stretchr/testify#1629</a></li>
<li>Document suite's lack of support for t.Parallel by <a
href="https://github.com/brackendawson"><code>@​brackendawson</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1645">stretchr/testify#1645</a></li>
<li>assert: fix typos in comments by <a
href="https://github.com/alexandear"><code>@​alexandear</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1650">stretchr/testify#1650</a></li>
<li>mock: fix doc comment for NotBefore by <a
href="https://github.com/alexandear"><code>@​alexandear</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1651">stretchr/testify#1651</a></li>
<li>Generate better comments for require package by <a
href="https://github.com/Neokil"><code>@​Neokil</code></a> in <a
href="https://redirect.github.com/stretchr/testify/pull/1610">stretchr/testify#1610</a></li>
<li>README: replace Testify V2 notice with <a
href="https://github.com/dolmen"><code>@​dolmen</code></a>'s V2
manifesto by <a
href="https://github.com/hendrywiranto"><code>@​hendrywiranto</code></a>
in <a
href="https://redirect.github.com/stretchr/testify/pull/1518">stretchr/testify#1518</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/fahimbagar"><code>@​fahimbagar</code></a> made
their first contribution in <a
href="https://redirect.github.com/stretchr/testify/pull/1337">stretchr/testify#1337</a></li>
<li><a href="https://github.com/TomWright"><code>@​TomWright</code></a>
made their first contribution in <a
href="https://redirect.github.com/stretchr/testify/pull/820">stretchr/testify#820</a></li>
<li><a href="https://github.com/snirye"><code>@​snirye</code></a> made
their first contribution in <a
href="https://redirect.github.com/stretchr/testify/pull/1433">stretchr/testify#1433</a></li>
<li><a href="https://github.com/myxo"><code>@​myxo</code></a> made their
first contribution in <a
href="https://redirect.github.com/stretchr/testify/pull/1582">stretchr/testify#1582</a></li>
<li><a
href="https://github.com/JohnEndson"><code>@​JohnEndson</code></a> made
their first contribution in <a
href="https://redirect.github.com/stretchr/testify/pull/1580">stretchr/testify#1580</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="89cbdd9e7b"><code>89cbdd9</code></a>
Merge pull request <a
href="https://redirect.github.com/stretchr/testify/issues/1626">#1626</a>
from arjun-1/fix-functional-options-diff-indirect-calls</li>
<li><a
href="07bac606be"><code>07bac60</code></a>
Merge pull request <a
href="https://redirect.github.com/stretchr/testify/issues/1667">#1667</a>
from sikehish/flaky</li>
<li><a
href="716de8dff4"><code>716de8d</code></a>
Increase timeouts in Test_Mock_Called_blocks to reduce flakiness in
CI</li>
<li><a
href="118fb83466"><code>118fb83</code></a>
NotSame should fail if args are not pointers <a
href="https://redirect.github.com/stretchr/testify/issues/1661">#1661</a>
(<a
href="https://redirect.github.com/stretchr/testify/issues/1664">#1664</a>)</li>
<li><a
href="7d99b2b43d"><code>7d99b2b</code></a>
attempt 2</li>
<li><a
href="05f87c0160"><code>05f87c0</code></a>
more similar</li>
<li><a
href="ea7129e006"><code>ea7129e</code></a>
better fmt</li>
<li><a
href="a1b9c9efe3"><code>a1b9c9e</code></a>
Merge pull request <a
href="https://redirect.github.com/stretchr/testify/issues/1663">#1663</a>
from ybrustin/master</li>
<li><a
href="8302de98b1"><code>8302de9</code></a>
Merge branch 'master' into master</li>
<li><a
href="89352f7958"><code>89352f7</code></a>
Merge pull request <a
href="https://redirect.github.com/stretchr/testify/issues/1518">#1518</a>
from hendrywiranto/adjust-readme-remove-v2</li>
<li>Additional commits viewable in <a
href="https://github.com/stretchr/testify/compare/v1.9.0...v1.10.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/stretchr/testify&package-manager=go_modules&previous-version=1.9.0&new-version=1.10.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-26 17:12:11 +00:00
dependabot[bot] 026c5555b2
Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1 (#1930)
Bumps
[github.com/Masterminds/semver/v3](https://github.com/Masterminds/semver)
from 3.3.0 to 3.3.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/Masterminds/semver/releases">github.com/Masterminds/semver/v3's
releases</a>.</em></p>
<blockquote>
<h2>v3.3.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Fix for allowing some version that were invalid by <a
href="https://github.com/mattfarina"><code>@​mattfarina</code></a> in <a
href="https://redirect.github.com/Masterminds/semver/pull/253">Masterminds/semver#253</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/Masterminds/semver/compare/v3.3.0...v3.3.1">https://github.com/Masterminds/semver/compare/v3.3.0...v3.3.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Masterminds/semver/blob/master/CHANGELOG.md">github.com/Masterminds/semver/v3's
changelog</a>.</em></p>
<blockquote>
<h1>Changelog</h1>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1558ca3488"><code>1558ca3</code></a>
Merge pull request <a
href="https://redirect.github.com/Masterminds/semver/issues/253">#253</a>
from mattfarina/fix-bad-versions</li>
<li><a
href="252dd61dd3"><code>252dd61</code></a>
Fix for allowing some version that were invalid</li>
<li>See full diff in <a
href="https://github.com/Masterminds/semver/compare/v3.3.0...v3.3.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/Masterminds/semver/v3&package-manager=go_modules&previous-version=3.3.0&new-version=3.3.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-26 15:52:43 +00:00
dependabot[bot] 4b069bb6e1
Bump golang.org/x/term from 0.25.0 to 0.26.0 (#1907)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.25.0 to
0.26.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b725e362a8"><code>b725e36</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="54df7da90d"><code>54df7da</code></a>
README: don't recommend go get</li>
<li>See full diff in <a
href="https://github.com/golang/term/compare/v0.25.0...v0.26.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/term&package-manager=go_modules&previous-version=0.25.0&new-version=0.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 13:46:20 +00:00
shreyas-goenka b323703c1b
Add validation for single node clusters (#1909)
## Changes
This PR adds a warning that validates the configuration of single node
clusters for interactive, job, job-task, and pipeline clusters.

Note: We skip the validation if a cluster policy is configured because
the policy is likely to configure `spark_conf` / `custom_tags` itself.

Note: Terraform originally only had validation for interactive, job,
and job-task clusters. The validation this PR adds for pipeline
clusters is new.

This PR follows the same logic as we used to have in Terraform. The
validation was removed from Terraform because we had no way to demote
the error to a warning:
https://github.com/databricks/terraform-provider-databricks/pull/4222

### Background
Single-node clusters require `spark_conf` and `custom_tags` to be
correctly set in the cluster definition for them to function optimally.
The cluster will be created even if incorrectly configured, but its
performance will not be great.

For example, if both `spark_conf` and `custom_tags` are not set and
`num_workers` is 0, then only the driver process will be launched on
the cluster compute instance, leading to sub-optimal utilization of
available compute resources and no parallelization across worker
processes when processing a Spark query.
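
Putting the required properties together, a correctly configured single
node job cluster looks roughly like this (`spark_version` and
`node_type_id` are placeholders; the `spark_conf` and `custom_tags`
values match the warning below):

```yaml
resources:
  jobs:
    my_job:
      job_clusters:
        - job_cluster_key: single_node
          new_cluster:
            spark_version: 15.4.x-scala2.12
            node_type_id: i3.xlarge
            num_workers: 0
            spark_conf:
              spark.databricks.cluster.profile: singleNode
              spark.master: local[*]
            custom_tags:
              ResourceClass: SingleNode
```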

### Issue

This PR addresses some issues reported in
https://github.com/databricks/cli/issues/1546

## Tests
Unit tests and manually.

Example output of the warning:
```
➜  bundle-playground git:(master) ✗ cli bundle validate
Warning: Single node cluster is not correctly configured
  at resources.pipelines.bar.clusters[0]
  in databricks.yml:29:11

num_workers should be 0 only for single-node clusters. To create a
valid single node cluster please ensure that the following properties
are correctly set in the cluster specification:

  spark_conf:
    spark.databricks.cluster.profile: singleNode
    spark.master: local[*]

  custom_tags:
    ResourceClass: SingleNode
  

Name: foobar
Target: default
Workspace:
  User: shreyas.goenka@databricks.com
  Path: /Workspace/Users/shreyas.goenka@databricks.com/.bundle/foobar/default

Found 1 warning
```
2024-11-22 15:48:09 +00:00
Ilya Kuznetsov 490dd058aa
Extended message for warning when source-linked mode is used outside of the workspace (#1929)
## Changes

Added path and locations to the warning that is displayed when
source-linked mode is used outside of the workspace.
2024-11-22 14:44:33 +00:00
Pieter Noordhuis abfd1713e0
Skip sync warning if no sync paths are defined (#1926)
## Changes

Users can configure the bundle to not synchronize any files with:
```yaml
sync:
  paths: []
```

If it is explicitly configured as an empty list, the validate command
must not warn about not having any files to synchronize. The warning
exists to alert users who are unintentionally not synchronizing any
files (they might have a `.gitignore` pattern that matches everything).

Closes #1663.

## Tests

* New unit test.
2024-11-21 15:03:13 +00:00
Pieter Noordhuis a3cea07c9e
Support lookup by name of notification destinations (#1922)
## Changes

Add support for notification destinations in variable lookups.

More information:
https://docs.databricks.com/en/admin/workspace-settings/notification-destinations.html
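
Assuming the lookup key follows the same singular snake_case pattern as
the other resolvers, usage would look like this (sketch):

```yaml
variables:
  destination_id:
    description: Resolved by name at deploy time
    lookup:
      notification_destination: "My Teams Channel"
```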

Depends on #1921.

## Tests

* New unit test
* Manually confirmed that the lookup works
2024-11-21 15:52:14 +01:00
shreyas-goenka abc2f3c825
Fix `TestAccBundleInitOnMlopsStacks` (#1924)
## Changes
The ML production team modified mlops-stacks to use `mode: development`
for their development target here:
https://github.com/databricks/mlops-stacks/pull/174

This PR makes the integration test assertion agnostic of the prefix to
make it pass again.

## Tests
The test passes now
2024-11-21 10:46:24 +00:00
shreyas-goenka c2e2abcc35
Extend "notebook not found" error to warn about missing extension (#1920)
## Changes
The full workspace path for a notebook does not contain the notebook's
extension. If a user converts that file path to a relative path (like
`/Workspace/bundle_root/bar/nb` -> `./bar/nb`), they can be confused as
to why the new file path does not work.

The changes in this PR nudge them to add the appropriate file extension
(e.g., `./bar/nb.py` or `./bar/nb.ipynb`).

One common way users can end up in this scenario is by using the "view
job as YAML" functionality in the Databricks UI.

## Tests
Unit test and manually.

```
(.venv) ➜  bundle-playground git:(master) ✗ cli bundle validate 
Error: notebook ./foo not found. Local notebook references are expected
to contain one of the following file extensions: [.py, .r, .scala, .sql, .ipynb]
```
2024-11-21 16:21:21 +05:30
Pieter Noordhuis 14fe03dcb9
Breakout variable lookup into separate files and tests (#1921)
## Changes

While looking into adding variable lookups for notification destinations
([API][API]), I found the codegen approach for different classes of
variable lookups a bit complex. The template had a custom field override
(for service principals), the package had an override for the cluster
lookup, and it didn't produce tests.

The notification destinations API uses a default page size of 20 for
listing. I want to use a larger page size to limit the number of API
calls, so that would imply another customization on the template or a
manual override.

This code being rather mechanical, I used copilot to produce all
instances of the resolvers and their tests (after writing one of them
manually).

[api]: https://docs.databricks.com/api/workspace/notificationdestinations

## Tests

* Unit tests pass
* Manual confirmation that lookups of warehouses still work
2024-11-21 11:28:50 +01:00
shreyas-goenka 984c38e03e
Add unique ID to `root_path` for bundle integration test fixtures (#1917)
## Changes
Integration tests using these fixtures could have been flaky when run in
parallel using the same user's identity. They could also have picked up
state left over from previous runs.

This PR adds a UUID to the root_path to force independent bundle
deployments for every test run.

I have checked that all bundles in `internal/bundle/bundles` have
`root_path` namespaced to a UUID.

## Tests
Self testing.
2024-11-20 16:30:10 +00:00
Pieter Noordhuis ade95d9649
[Release] Release v0.235.0 (#1918)
**Note:** the `bundle generate` command now uses the `.<resource-type>.yml`
sub-extension for the configuration files it writes. Existing configuration
files that do not use this sub-extension are renamed to include it.

Bundles:
* Make `TableName` field part of quality monitor schema
([#1903](https://github.com/databricks/cli/pull/1903)).
* Do not prepend paths starting with ~ or variable reference
([#1905](https://github.com/databricks/cli/pull/1905)).
* Fix workspace extensions filer accidentally reading notebooks
([#1891](https://github.com/databricks/cli/pull/1891)).
* Fix template initialization when running on Databricks
([#1912](https://github.com/databricks/cli/pull/1912)).
* Source-linked deployments for bundles in the workspace
([#1884](https://github.com/databricks/cli/pull/1884)).
* Added integration test to deploy bundle to /Shared root path
([#1914](https://github.com/databricks/cli/pull/1914)).
* Update filenames used by bundle generate to use `.<resource-type>.yml`
([#1901](https://github.com/databricks/cli/pull/1901)).

Internal:
* Extract functionality to detect if the CLI is running on DBR
([#1889](https://github.com/databricks/cli/pull/1889)).
* Consolidate test helpers for `io/fs`
([#1906](https://github.com/databricks/cli/pull/1906)).
* Use `fs.FS` interface to read template
([#1910](https://github.com/databricks/cli/pull/1910)).
* Use `filer.Filer` to write template instantiation
([#1911](https://github.com/databricks/cli/pull/1911)).
2024-11-20 14:48:18 +00:00
Andrew Nester 592e1111b7
Update filenames used by bundle generate to use `.<resource-type>.yml` (#1901)
## Changes
Update filenames used by bundle generate to use `.<resource-type>.yml`.

Similar to [Add sub-extension to resource files in built-in templates by
shreyas-goenka · Pull Request #1777 ·
databricks/cli](https://github.com/databricks/cli/pull/1777)

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2024-11-20 13:53:25 +01:00
Andrew Nester fab3e8f168
Added integration test to deploy bundle to /Shared root path (#1914)
## Changes
Added integration test to deploy bundle to /Shared root path

## Tests
```
--- PASS: TestAccDeployBasicToSharedWorkspace (24.58s)
PASS
coverage: 31.2% of statements in ./...
ok  	github.com/databricks/cli/internal/bundle	25.572s	coverage: 31.2% of statements in ./...
```

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2024-11-20 12:20:39 +00:00
Ilya Kuznetsov 756e55fabc
Source-linked deployments for bundles in the workspace (#1884)
## Changes

This change adds a preset for source-linked deployments. It is enabled
by default for targets in `development` mode **if** the Databricks CLI
is running from the `/Workspace` directory on DBR. It does not have an
effect when running the CLI anywhere else.

Key highlights:
1. Files in this mode won't be uploaded to the workspace
2. Created resources will use references to source files instead of
their workspace copies (see the sketch below)
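
Assuming the preset is exposed under the usual snake_case key (an
assumption; the PR text does not name it), enabling it explicitly would
look like:

```yaml
targets:
  dev:
    mode: development
    presets:
      source_linked_deployment: true
```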

## Tests
1. Apply preset unit test covering conditional logic
2. High-level process target mode unit test for testing integration
between mutators

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-11-20 13:22:27 +01:00
Pieter Noordhuis 886e14910c
Fix template initialization when running on Databricks (#1912)
## Changes

When running the CLI on Databricks Runtime (DBR), use the
extension-aware filer to write an instantiated template if the instance
path is located in the workspace filesystem.

Notebooks cannot be written through the workspace filesystem's FUSE
mount. As a result, this is the only method for initializing templates
that contain notebooks when running the CLI on DBR and writing to the
workspace filesystem.

Depends on #1910 and #1911.

Supersedes #1744.

## Tests

* Manually confirmed I can initialize a template with notebooks when
running the CLI from the web terminal.
2024-11-20 11:42:23 +00:00
Pieter Noordhuis 75b09ff230
Use `filer.Filer` to write template instantiation (#1911)
## Changes

Prior to this change, the output directory was part of the `renderer`
type and passed down to every `file` it produced. Every file knew its
absolute destination path. This is incompatible with the use of a filer,
where all operations are automatically anchored to some base path.

To make this compatible, this change updates:
* the `file` type to only know its own path relative to the instantiation root,
* the `renderer` type to no longer require or pass along the output directory,
* the `persistToDisk` function to take a context and filer argument,
* the `filer.WriteMode` to represent permission bits
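
In signature form, the reshaping looks roughly like this (a sketch: the
`file` fields and loop are illustrative, and we assume the filer's
`Write` method takes a context, a relative path, a reader, and optional
write-mode flags):

```go
package template

import (
	"bytes"
	"context"

	"github.com/databricks/cli/libs/filer"
)

// file now only knows its path relative to the instantiation root.
type file struct {
	relPath string
	content []byte
}

// persistToDisk takes a context and a filer; the filer anchors every
// relative path to its own base, so no absolute destinations are
// needed.
func persistToDisk(ctx context.Context, out filer.Filer, files []*file) error {
	for _, f := range files {
		err := out.Write(ctx, f.relPath, bytes.NewReader(f.content), filer.CreateParentDirectories)
		if err != nil {
			return err
		}
	}
	return nil
}
```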

## Tests

* Existing tests pass.
* Manually confirmed template initialization works as expected.
2024-11-20 11:11:31 +01:00
Pieter Noordhuis 4fea0219fd
Use `fs.FS` interface to read template (#1910)
## Changes

While working on the v2 of #1744, I found that:
* Template initialization first copies built-in templates to a temporary
directory before initializing them
* Reading a template's contents goes through a `filer.Filer` but is
hardcoded to a local one

This change updates the interface for reading templates to be `fs.FS`.
This is compatible with the `embed.FS` type for the built-in templates,
so they no longer have to be copied to a temporary directory before
being used.
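
The key property is that `embed.FS` already satisfies `fs.FS`, so the
built-in templates can be read in place (sketch; the directory and file
names are placeholders):

```go
package main

import (
	"embed"
	"fmt"
	"io/fs"
)

//go:embed templates
var builtin embed.FS

func main() {
	// Scope the embedded FS to one template; no copy to a temporary
	// directory is needed.
	tmpl, err := fs.Sub(builtin, "templates/default-python")
	if err != nil {
		panic(err)
	}
	b, err := fs.ReadFile(tmpl, "databricks_template_schema.json")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```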

The alternative is to use a `filer.Filer` throughout, but this would
have required even more plumbing, and we don't need to _read_ templates,
including notebooks, from the workspace filesystem (yet?).

As part of making `template.Materialize` take an `fs.FS` argument, the
logic to match a given argument to a particular built-in template in the
`init` command has moved to sit next to its implementation.

## Tests

Existing tests pass.
2024-11-20 09:28:35 +00:00
shreyas-goenka 72dde793d8
Fix workspace extensions filer accidentally reading notebooks (#1891)
## Changes
The workspace extensions filer should not read or stat a notebook called
`foo` if the user calls `.Stat(ctx, "foo")`.

Instead, the filer should return a file not found error. This is because
the contract for the workspace extensions filer is to only work for
notebooks when the file path / name includes the extension (example:
`foo.ipynb` or `foo.sql` instead of just `foo`)
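
Expressed as assertions, the contract reads as follows (sketch: `wf` is
a workspace extensions filer for a workspace containing a notebook
named `foo`, and we assume its not-found error matches
`fs.ErrNotExist`):

```go
// Stat without the extension must fail, even though notebook "foo" exists.
_, err := wf.Stat(ctx, "foo")
require.ErrorIs(t, err, fs.ErrNotExist)

// Stat with the extension resolves the notebook.
info, err := wf.Stat(ctx, "foo.ipynb")
require.NoError(t, err)
require.False(t, info.IsDir())
```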

## Tests
Integration tests.
2024-11-18 17:25:24 +00:00
619 changed files with 17112 additions and 4607 deletions

View File

@@ -5,14 +5,13 @@
},
"batch": {
".codegen/cmds-workspace.go.tmpl": "cmd/workspace/cmd.go",
".codegen/cmds-account.go.tmpl": "cmd/account/cmd.go",
".codegen/lookup.go.tmpl": "bundle/config/variable/lookup.go"
".codegen/cmds-account.go.tmpl": "cmd/account/cmd.go"
},
"toolchain": {
"required": ["go"],
"post_generate": [
"go test -timeout 240s -run TestConsistentDatabricksSdkVersion github.com/databricks/cli/internal/build",
"go run ./bundle/internal/schema/*.go ./bundle/schema/jsonschema.json",
"make schema",
"echo 'bundle/internal/tf/schema/\\*.go linguist-generated=true' >> ./.gitattributes",
"echo 'go.sum linguist-generated=true' >> ./.gitattributes",
"echo 'bundle/schema/jsonschema.json linguist-generated=true' >> ./.gitattributes"

View File

@@ -1 +1 @@
d25296d2f4aa7bd6195c816fdf82e0f960f775da
a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d

View File

@@ -1,134 +0,0 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package variable
{{ $allowlist :=
list
"alerts"
"clusters"
"cluster-policies"
"clusters"
"dashboards"
"instance-pools"
"jobs"
"metastores"
"pipelines"
"service-principals"
"queries"
"warehouses"
}}
{{ $customField :=
dict
"service-principals" "ApplicationId"
}}
import (
"context"
"fmt"
"strings"
"github.com/databricks/databricks-sdk-go"
)
type Lookup struct {
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
{{.Singular.PascalName}} string `json:"{{.Singular.SnakeName}},omitempty"`
{{end}}
{{- end}}
}
func LookupFromMap(m map[string]any) *Lookup {
l := &Lookup{}
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
if v, ok := m["{{.Singular.SnakeName}}"]; ok {
l.{{.Singular.PascalName}} = v.(string)
}
{{end -}}
{{- end}}
return l
}
func (l *Lookup) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
if err := l.validate(); err != nil {
return "", err
}
r := allResolvers()
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
if l.{{.Singular.PascalName}} != "" {
return r.{{.Singular.PascalName}}(ctx, w, l.{{.Singular.PascalName}})
}
{{end -}}
{{- end}}
return "", fmt.Errorf("no valid lookup fields provided")
}
func (l *Lookup) String() string {
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
if l.{{.Singular.PascalName}} != "" {
return fmt.Sprintf("{{.Singular.KebabName}}: %s", l.{{.Singular.PascalName}})
}
{{end -}}
{{- end}}
return ""
}
func (l *Lookup) validate() error {
// Validate that only one field is set
count := 0
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
if l.{{.Singular.PascalName}} != "" {
count++
}
{{end -}}
{{- end}}
if count != 1 {
return fmt.Errorf("exactly one lookup field must be provided")
}
if strings.Contains(l.String(), "${var") {
return fmt.Errorf("lookup fields cannot contain variable references")
}
return nil
}
type resolverFunc func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error)
type resolvers struct {
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
{{.Singular.PascalName}} resolverFunc
{{end -}}
{{- end}}
}
func allResolvers() *resolvers {
r := &resolvers{}
{{range .Services -}}
{{- if in $allowlist .KebabName -}}
r.{{.Singular.PascalName}} = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["{{.Singular.PascalName}}"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.{{.PascalName}}.GetBy{{range .NamedIdMap.NamePath}}{{.PascalName}}{{end}}(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.{{ getOrDefault $customField .KebabName ((index .NamedIdMap.IdPath 0).PascalName) }}), nil
}
{{end -}}
{{- end}}
return r
}

View File

@@ -411,5 +411,5 @@ func new{{.PascalName}}() *cobra.Command {
{{- define "request-body-obj" -}}
{{- $method := .Method -}}
{{- $field := .Field -}}
{{$method.CamelName}}Req{{ if (and $method.RequestBodyField (not $field.IsPath)) }}.{{$method.RequestBodyField.PascalName}}{{end}}.{{$field.PascalName}}
{{$method.CamelName}}Req{{ if (and $method.RequestBodyField (and (not $field.IsPath) (not $field.IsQuery))) }}.{{$method.RequestBodyField.PascalName}}{{end}}.{{$field.PascalName}}
{{- end -}}

8 .git-blame-ignore-revs Normal file

@@ -0,0 +1,8 @@
# Enable gofumpt and goimports in golangci-lint (#1999)
2e018cfaec200a02ee2bd5b389e7da3c6f15f460
# Enable errcheck everywhere and fix or silent remaining issues (#1987)
8d5351c1c3d7befda4baae5d6adb99367aa50b3c
# Add error checking in tests and enable errcheck there (#1980)
1b2be1b2cb4b7909df2a8ad4cb6a0f43e8fcf0c6

6 .gitattributes vendored

@@ -1,4 +1,3 @@
bundle/config/variable/lookup.go linguist-generated=true
cmd/account/access-control/access-control.go linguist-generated=true
cmd/account/billable-usage/billable-usage.go linguist-generated=true
cmd/account/budgets/budgets.go linguist-generated=true
@@ -9,6 +8,7 @@ cmd/account/custom-app-integration/custom-app-integration.go linguist-generated=
cmd/account/disable-legacy-features/disable-legacy-features.go linguist-generated=true
cmd/account/encryption-keys/encryption-keys.go linguist-generated=true
cmd/account/esm-enablement-account/esm-enablement-account.go linguist-generated=true
cmd/account/federation-policy/federation-policy.go linguist-generated=true
cmd/account/groups/groups.go linguist-generated=true
cmd/account/ip-access-lists/ip-access-lists.go linguist-generated=true
cmd/account/log-delivery/log-delivery.go linguist-generated=true
@@ -20,6 +20,7 @@ cmd/account/o-auth-published-apps/o-auth-published-apps.go linguist-generated=tr
cmd/account/personal-compute/personal-compute.go linguist-generated=true
cmd/account/private-access/private-access.go linguist-generated=true
cmd/account/published-app-integration/published-app-integration.go linguist-generated=true
cmd/account/service-principal-federation-policy/service-principal-federation-policy.go linguist-generated=true
cmd/account/service-principal-secrets/service-principal-secrets.go linguist-generated=true
cmd/account/service-principals/service-principals.go linguist-generated=true
cmd/account/settings/settings.go linguist-generated=true
@@ -38,6 +39,9 @@ cmd/workspace/apps/apps.go linguist-generated=true
cmd/workspace/artifact-allowlists/artifact-allowlists.go linguist-generated=true
cmd/workspace/automatic-cluster-update/automatic-cluster-update.go linguist-generated=true
cmd/workspace/catalogs/catalogs.go linguist-generated=true
cmd/workspace/clean-room-assets/clean-room-assets.go linguist-generated=true
cmd/workspace/clean-room-task-runs/clean-room-task-runs.go linguist-generated=true
cmd/workspace/clean-rooms/clean-rooms.go linguist-generated=true
cmd/workspace/cluster-policies/cluster-policies.go linguist-generated=true
cmd/workspace/clusters/clusters.go linguist-generated=true
cmd/workspace/cmd.go linguist-generated=true

.github/CODEOWNERS vendored Normal file

@ -0,0 +1 @@
* @pietern @andrewnester @shreyas-goenka @denik


@ -0,0 +1,32 @@
name: integration-approve
on:
merge_group:
jobs:
# Trigger for merge groups.
#
# Statuses and checks apply to specific commits (by hash).
# Enforcement of required checks is done both at the PR level and the merge queue level.
# In case of multiple commits in a single PR, the hash of the squashed commit
# will not match the one for the latest (approved) commit in the PR.
#
# We auto-approve the check for the merge queue for two reasons:
#
# * The queue times out due to the duration of the tests.
# * It avoids running the integration tests twice, since they were already run at the tip of the branch before squashing.
#
trigger:
runs-on: ubuntu-latest
steps:
- name: Auto-approve squashed commit
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
run: |
gh api -X POST -H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
/repos/${{ github.repository }}/statuses/${{ github.sha }} \
-f 'state=success' \
-f 'context=Integration Tests Check'

.github/workflows/integration-main.yml vendored Normal file

@ -0,0 +1,33 @@
name: integration-main
on:
push:
branches:
- main
jobs:
# Trigger for pushes to the main branch.
#
# This workflow triggers the integration test workflow in a different repository.
# It requires secrets from the "test-trigger-is" environment, which are only available to authorized users.
trigger:
runs-on: ubuntu-latest
environment: "test-trigger-is"
steps:
- name: Generate GitHub App Token
id: generate-token
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}
private-key: ${{ secrets.DECO_WORKFLOW_TRIGGER_PRIVATE_KEY }}
owner: ${{ secrets.ORG_NAME }}
repositories: ${{secrets.REPO_NAME}}
- name: Trigger Workflow in Another Repo
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
run: |
gh workflow run cli-isolated-nightly.yml -R ${{ secrets.ORG_NAME }}/${{secrets.REPO_NAME}} \
--ref main \
-f commit_sha=${{ github.event.after }}

.github/workflows/integration-pr.yml vendored Normal file

@ -0,0 +1,56 @@
name: integration-pr
on:
pull_request:
types: [opened, synchronize]
jobs:
check-token:
runs-on: ubuntu-latest
environment: "test-trigger-is"
outputs:
has_token: ${{ steps.set-token-status.outputs.has_token }}
steps:
- name: Check if DECO_WORKFLOW_TRIGGER_APP_ID is set
id: set-token-status
run: |
if [ -z "${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}" ]; then
echo "DECO_WORKFLOW_TRIGGER_APP_ID is empty. User has no access to secrets."
echo "::set-output name=has_token::false"
else
echo "DECO_WORKFLOW_TRIGGER_APP_ID is set. User has access to secrets."
echo "::set-output name=has_token::true"
fi
# Trigger for pull requests.
#
# This workflow triggers the integration test workflow in a different repository.
# It requires secrets from the "test-trigger-is" environment, which are only available to authorized users.
# It depends on the "check-token" workflow to confirm access to this environment to avoid failures.
trigger:
runs-on: ubuntu-latest
environment: "test-trigger-is"
if: needs.check-token.outputs.has_token == 'true'
needs: check-token
steps:
- name: Generate GitHub App Token
id: generate-token
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}
private-key: ${{ secrets.DECO_WORKFLOW_TRIGGER_PRIVATE_KEY }}
owner: ${{ secrets.ORG_NAME }}
repositories: ${{secrets.REPO_NAME}}
- name: Trigger Workflow in Another Repo
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
run: |
gh workflow run cli-isolated-pr.yml -R ${{ secrets.ORG_NAME }}/${{secrets.REPO_NAME}} \
--ref main \
-f pull_request_number=${{ github.event.pull_request.number }} \
-f commit_sha=${{ github.event.pull_request.head.sha }}


@ -1,78 +0,0 @@
name: integration
on:
pull_request:
types: [opened, synchronize]
merge_group:
jobs:
check-token:
runs-on: ubuntu-latest
environment: "test-trigger-is"
outputs:
has_token: ${{ steps.set-token-status.outputs.has_token }}
steps:
- name: Check if DECO_WORKFLOW_TRIGGER_APP_ID is set
id: set-token-status
run: |
if [ -z "${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}" ]; then
echo "DECO_WORKFLOW_TRIGGER_APP_ID is empty. User has no access to secrets."
echo "::set-output name=has_token::false"
else
echo "DECO_WORKFLOW_TRIGGER_APP_ID is set. User has access to secrets."
echo "::set-output name=has_token::true"
fi
trigger-tests:
runs-on: ubuntu-latest
needs: check-token
if: github.event_name == 'pull_request' && needs.check-token.outputs.has_token == 'true'
environment: "test-trigger-is"
steps:
- uses: actions/checkout@v4
- name: Generate GitHub App Token
id: generate-token
uses: actions/create-github-app-token@v1
with:
app-id: ${{ secrets.DECO_WORKFLOW_TRIGGER_APP_ID }}
private-key: ${{ secrets.DECO_WORKFLOW_TRIGGER_PRIVATE_KEY }}
owner: ${{ secrets.ORG_NAME }}
repositories: ${{secrets.REPO_NAME}}
- name: Trigger Workflow in Another Repo
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
run: |
gh workflow run cli-isolated-pr.yml -R ${{ secrets.ORG_NAME }}/${{secrets.REPO_NAME}} \
--ref main \
-f pull_request_number=${{ github.event.pull_request.number }} \
-f commit_sha=${{ github.event.pull_request.head.sha }}
# Statuses and checks apply to specific commits (by hash).
# Enforcement of required checks is done both at the PR level and the merge queue level.
# In case of multiple commits in a single PR, the hash of the squashed commit
# will not match the one for the latest (approved) commit in the PR.
# We auto approve the check for the merge queue for two reasons:
# * Queue times out due to duration of tests.
# * Avoid running integration tests twice, since it was already run at the tip of the branch before squashing.
auto-approve:
if: github.event_name == 'merge_group'
runs-on: ubuntu-latest
steps:
- name: Mark Check
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
run: |
gh api -X POST -H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
/repos/${{ github.repository }}/statuses/${{ github.sha }} \
-f 'state=success' \
-f 'context=Integration Tests Check'


@ -33,19 +33,21 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.4
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: '3.9'
- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Set go env
run: |
echo "GOPATH=$(go env GOPATH)" >> $GITHUB_ENV
echo "$(go env GOPATH)/bin" >> $GITHUB_PATH
go install gotest.tools/gotestsum@latest
go install honnef.co/go/tools/cmd/staticcheck@latest
go install gotest.tools/gotestsum@v1.12.0
- name: Pull external libraries
run: |
@ -53,42 +55,28 @@ jobs:
pip3 install wheel
- name: Run tests
run: make test
run: make testonly
- name: Publish test coverage
uses: codecov/codecov-action@v4
fmt:
golangci:
name: lint
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v5
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version: 1.23.2
# No need to download cached dependencies when running gofmt.
cache: false
- name: Install goimports
run: |
go install golang.org/x/tools/cmd/goimports@latest
- name: Run make fmt
run: |
make fmt
go-version: 1.23.4
- name: Run go mod tidy
run: |
go mod tidy
- name: Fail on differences
run: |
# Exit with status code 1 if there are differences (i.e. unformatted files)
git diff --exit-code
- name: golangci-lint
uses: golangci/golangci-lint-action@v6
with:
version: v1.62.2
args: --timeout=15m
validate-bundle-schema:
runs-on: ubuntu-latest
@ -100,7 +88,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.4
# Github repo: https://github.com/ajv-validator/ajv-cli
- name: Install ajv-cli
@ -111,14 +99,19 @@ jobs:
# By default the ajv-cli runs in strict mode which will fail if the schema
# itself is not valid. Strict mode is more strict than the JSON schema
# specification. See for details: https://ajv.js.org/options.html#strict-mode-options
# The ajv-cli is configured to use the markdownDescription keyword, which is not part of the JSON
# schema specification but is used in editors like VSCode to render markdown in the description field.
- name: Validate bundle schema
run: |
go run main.go bundle schema > schema.json
# Add markdownDescription keyword to ajv
echo "module.exports=function(a){a.addKeyword('markdownDescription')}" >> keywords.js
for file in ./bundle/internal/schema/testdata/pass/*.yml; do
ajv test -s schema.json -d $file --valid
ajv test -s schema.json -d $file --valid -c=./keywords.js
done
for file in ./bundle/internal/schema/testdata/fail/*.yml; do
ajv test -s schema.json -d $file --invalid
ajv test -s schema.json -d $file --invalid -c=./keywords.js
done


@ -5,6 +5,7 @@ on:
branches:
- "main"
- "demo-*"
- "bugbash-*"
# Confirm that snapshot builds work if this file is modified.
pull_request:
@ -30,7 +31,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.4
# The default cache key for this action considers only the `go.sum` file.
# We include .goreleaser.yaml here to differentiate from the cache used by the push action


@ -22,7 +22,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: 1.23.2
go-version: 1.23.4
# The default cache key for this action considers only the `go.sum` file.
# We include .goreleaser.yaml here to differentiate from the cache used by the push action

.golangci.yaml Normal file

@ -0,0 +1,38 @@
linters:
disable-all: true
enable:
- bodyclose
- errcheck
- gosimple
- govet
- ineffassign
- staticcheck
- unused
- gofmt
- gofumpt
- goimports
linters-settings:
govet:
enable-all: true
disable:
- fieldalignment
- shadow
gofmt:
rewrite-rules:
- pattern: 'a[b:len(a)]'
replacement: 'a[b:]'
- pattern: 'interface{}'
replacement: 'any'
errcheck:
exclude-functions:
- (*github.com/spf13/cobra.Command).RegisterFlagCompletionFunc
- (*github.com/spf13/cobra.Command).MarkFlagRequired
- (*github.com/spf13/pflag.FlagSet).MarkDeprecated
- (*github.com/spf13/pflag.FlagSet).MarkHidden
gofumpt:
module-path: github.com/databricks/cli
extra-rules: true
#goimports:
# local-prefixes: github.com/databricks/cli
issues:
exclude-dirs-use-default: false # recommended by docs https://golangci-lint.run/usage/false-positives/

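The two gofmt rewrite rules are mechanical source rewrites; a minimal before/after sketch:

```go
// Before the rewrite rules run:
buf := []byte("example")
tail := buf[1:len(buf)]            // rule 1 rewrites this to buf[1:]
var payload map[string]interface{} // rule 2 rewrites this to map[string]any
_, _ = tail, payload
```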

@ -3,11 +3,18 @@
"editor.insertSpaces": false,
"editor.formatOnSave": true
},
"go.lintTool": "golangci-lint",
"go.lintFlags": [
"--fast"
],
"go.useLanguageServer": true,
"gopls": {
"formatting.gofumpt": true
},
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.trimFinalNewlines": true,
"python.envFile": "${workspaceRoot}/.env",
"databricks.python.envFile": "${workspaceFolder}/.env",
"python.analysis.stubPath": ".vscode",
"jupyter.interactiveWindow.cellMarker.codeRegex": "^# COMMAND ----------|^# Databricks notebook source|^(#\\s*%%|#\\s*\\<codecell\\>|#\\s*In\\[\\d*?\\]|#\\s*In\\[ \\])",
"jupyter.interactiveWindow.cellMarker.default": "# COMMAND ----------"


@ -1,5 +1,78 @@
# Version changelog
## [Release] Release v0.237.0
Bundles:
* Allow overriding compute for non-development mode targets ([#1899](https://github.com/databricks/cli/pull/1899)).
* Show an error when using a cluster override with 'mode: production' ([#1994](https://github.com/databricks/cli/pull/1994)).
API Changes:
* Added `databricks account federation-policy` command group.
* Added `databricks account service-principal-federation-policy` command group.
* Added `databricks aibi-dashboard-embedding-access-policy delete` command.
* Added `databricks aibi-dashboard-embedding-approved-domains delete` command.
OpenAPI commit a6a317df8327c9b1e5cb59a03a42ffa2aabeef6d (2024-12-16)
Dependency updates:
* Upgrade TF provider to 1.62.0 ([#2030](https://github.com/databricks/cli/pull/2030)).
* Upgrade Go SDK to 0.54.0 ([#2029](https://github.com/databricks/cli/pull/2029)).
* Bump TF codegen dependencies to latest ([#1961](https://github.com/databricks/cli/pull/1961)).
* Bump golang.org/x/term from 0.26.0 to 0.27.0 ([#1983](https://github.com/databricks/cli/pull/1983)).
* Bump golang.org/x/sync from 0.9.0 to 0.10.0 ([#1984](https://github.com/databricks/cli/pull/1984)).
* Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0 ([#1985](https://github.com/databricks/cli/pull/1985)).
* Bump golang.org/x/crypto from 0.24.0 to 0.31.0 ([#2006](https://github.com/databricks/cli/pull/2006)).
* Bump golang.org/x/crypto from 0.30.0 to 0.31.0 in /bundle/internal/tf/codegen ([#2005](https://github.com/databricks/cli/pull/2005)).
## [Release] Release v0.236.0
**New features for Databricks Asset Bundles:**
This release adds support for managing Unity Catalog volumes as part of your bundle configuration.
Bundles:
* Add DABs support for Unity Catalog volumes ([#1762](https://github.com/databricks/cli/pull/1762)).
* Support lookup by name of notification destinations ([#1922](https://github.com/databricks/cli/pull/1922)).
* Extend "notebook not found" error to warn about missing extension ([#1920](https://github.com/databricks/cli/pull/1920)).
* Skip sync warning if no sync paths are defined ([#1926](https://github.com/databricks/cli/pull/1926)).
* Add validation for single node clusters ([#1909](https://github.com/databricks/cli/pull/1909)).
* Fix segfault in bundle summary command ([#1937](https://github.com/databricks/cli/pull/1937)).
* Add the `bundle_uuid` helper function for templates ([#1947](https://github.com/databricks/cli/pull/1947)).
* Add default value for `volume_type` for DABs ([#1952](https://github.com/databricks/cli/pull/1952)).
* Properly read Git metadata when running inside workspace ([#1945](https://github.com/databricks/cli/pull/1945)).
* Upgrade TF provider to 1.59.0 ([#1960](https://github.com/databricks/cli/pull/1960)).
Internal:
* Breakout variable lookup into separate files and tests ([#1921](https://github.com/databricks/cli/pull/1921)).
* Add golangci-lint v1.62.2 ([#1953](https://github.com/databricks/cli/pull/1953)).
Dependency updates:
* Bump golang.org/x/term from 0.25.0 to 0.26.0 ([#1907](https://github.com/databricks/cli/pull/1907)).
* Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1 ([#1930](https://github.com/databricks/cli/pull/1930)).
* Bump github.com/stretchr/testify from 1.9.0 to 1.10.0 ([#1932](https://github.com/databricks/cli/pull/1932)).
* Bump github.com/databricks/databricks-sdk-go from 0.51.0 to 0.52.0 ([#1931](https://github.com/databricks/cli/pull/1931)).
## [Release] Release v0.235.0
**Note:** the `bundle generate` command now uses the `.<resource-type>.yml`
sub-extension for the configuration files it writes. Existing configuration
files that do not use this sub-extension are renamed to include it.
Bundles:
* Make `TableName` field part of quality monitor schema ([#1903](https://github.com/databricks/cli/pull/1903)).
* Do not prepend paths starting with ~ or variable reference ([#1905](https://github.com/databricks/cli/pull/1905)).
* Fix workspace extensions filer accidentally reading notebooks ([#1891](https://github.com/databricks/cli/pull/1891)).
* Fix template initialization when running on Databricks ([#1912](https://github.com/databricks/cli/pull/1912)).
* Source-linked deployments for bundles in the workspace ([#1884](https://github.com/databricks/cli/pull/1884)).
* Added integration test to deploy bundle to /Shared root path ([#1914](https://github.com/databricks/cli/pull/1914)).
* Update filenames used by bundle generate to use `.<resource-type>.yml` ([#1901](https://github.com/databricks/cli/pull/1901)).
Internal:
* Extract functionality to detect if the CLI is running on DBR ([#1889](https://github.com/databricks/cli/pull/1889)).
* Consolidate test helpers for `io/fs` ([#1906](https://github.com/databricks/cli/pull/1906)).
* Use `fs.FS` interface to read template ([#1910](https://github.com/databricks/cli/pull/1910)).
* Use `filer.Filer` to write template instantiation ([#1911](https://github.com/databricks/cli/pull/1911)).
## [Release] Release v0.234.0
Bundles:


@ -1,16 +1,16 @@
default: build
fmt:
@echo "✓ Formatting source code with goimports ..."
@goimports -w $(shell find . -type f -name '*.go' -not -path "./vendor/*")
@echo "✓ Formatting source code with gofmt ..."
@gofmt -w $(shell find . -type f -name '*.go' -not -path "./vendor/*")
lint: vendor
@echo "✓ Linting source code with https://staticcheck.io/ ..."
@staticcheck ./...
@echo "✓ Linting source code with https://golangci-lint.run/ (with --fix)..."
@./lint.sh ./...
test: lint
lintcheck: vendor
@echo "✓ Linting source code with https://golangci-lint.run/ ..."
@golangci-lint run ./...
test: lint testonly
testonly:
@echo "✓ Running tests ..."
@gotestsum --format pkgname-and-test-fails --no-summary=skipped --raw-command go test -v -json -short -coverprofile=coverage.txt ./...
@ -29,6 +29,17 @@ snapshot:
vendor:
@echo "✓ Filling vendor folder with library code ..."
@go mod vendor
schema:
@echo "✓ Generating json-schema ..."
@go run ./bundle/internal/schema ./bundle/internal/schema ./bundle/schema/jsonschema.json
.PHONY: build vendor coverage test lint fmt
INTEGRATION = gotestsum --format github-actions --rerun-fails --jsonfile output.json --packages "./integration/..." -- -parallel 4 -timeout=2h
integration:
$(INTEGRATION)
integration-short:
$(INTEGRATION) -short
.PHONY: lint lintcheck test testonly coverage build snapshot vendor schema integration integration-short

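With this layout, `make lint` applies autofixes through lint.sh, `make lintcheck` is the check-only variant, and `make test` runs linting followed by the unit tests; `make integration` runs the suite behind the INTEGRATION variable, while `make integration-short` adds `-short` to skip the long-running cases.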
NOTICE

@ -73,10 +73,6 @@ fatih/color - https://github.com/fatih/color
Copyright (c) 2013 Fatih Arslan
License - https://github.com/fatih/color/blob/main/LICENSE.md
ghodss/yaml - https://github.com/ghodss/yaml
Copyright (c) 2014 Sam Ghods
License - https://github.com/ghodss/yaml/blob/master/LICENSE
Masterminds/semver - https://github.com/Masterminds/semver
Copyright (C) 2014-2019, Matt Butcher and Matt Farina
License - https://github.com/Masterminds/semver/blob/master/LICENSE.txt
@ -101,3 +97,11 @@ License - https://github.com/stretchr/testify/blob/master/LICENSE
whilp/git-urls - https://github.com/whilp/git-urls
Copyright (c) 2020 Will Maier
License - https://github.com/whilp/git-urls/blob/master/LICENSE
github.com/wI2L/jsondiff v0.6.1
Copyright (c) 2020-2024 William Poussier <william.poussier@gmail.com>
License - https://github.com/wI2L/jsondiff/blob/master/LICENSE
https://github.com/hexops/gotextdiff
Copyright (c) 2009 The Go Authors. All rights reserved.
License - https://github.com/hexops/gotextdiff/blob/main/LICENSE


@ -3,7 +3,6 @@ package artifacts
import (
"context"
"fmt"
"slices"
"github.com/databricks/cli/bundle"


@ -13,8 +13,7 @@ func DetectPackages() bundle.Mutator {
return &autodetect{}
}
type autodetect struct {
}
type autodetect struct{}
func (m *autodetect) Name() string {
return "artifacts.DetectPackages"


@ -96,7 +96,6 @@ func (m *expandGlobs) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnost
// Set the expanded globs back into the configuration.
return dyn.SetByPath(v, base, dyn.V(output))
})
if err != nil {
return diag.FromErr(err)
}


@ -21,18 +21,13 @@ func (m *cleanUp) Name() string {
}
func (m *cleanUp) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
uploadPath, err := libraries.GetUploadBasePath(b)
if err != nil {
return diag.FromErr(err)
}
client, err := libraries.GetFilerForLibraries(b.WorkspaceClient(), uploadPath)
if err != nil {
return diag.FromErr(err)
client, uploadPath, diags := libraries.GetFilerForLibraries(ctx, b)
if diags.HasError() {
return diags
}
// We intentionally ignore the error because it is not critical to the deployment
err = client.Delete(ctx, ".", filer.DeleteRecursively)
err := client.Delete(ctx, ".", filer.DeleteRecursively)
if err != nil {
log.Errorf(ctx, "failed to delete %s: %v", uploadPath, err)
}


@ -15,8 +15,7 @@ import (
"github.com/databricks/cli/libs/log"
)
type detectPkg struct {
}
type detectPkg struct{}
func DetectPackage() bundle.Mutator {
return &detectPkg{}
@ -42,7 +41,7 @@ func (m *detectPkg) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostic
return nil
}
log.Infof(ctx, fmt.Sprintf("Found Python wheel project at %s", b.BundleRootPath))
log.Infof(ctx, "Found Python wheel project at %s", b.BundleRootPath)
module := extractModuleName(setupPy)
if b.Config.Artifacts == nil {


@ -16,12 +16,6 @@ type infer struct {
func (m *infer) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
artifact := b.Config.Artifacts[m.name]
// TODO use python.DetectVEnvExecutable once bundle has a way to specify venv path
py, err := python.DetectExecutable(ctx)
if err != nil {
return diag.FromErr(err)
}
// Note: using --build-number (build tag) flag does not help with re-installing
// libraries on all-purpose clusters. The reason is that `pip` ignoring build tag
// when upgrading the library and only look at wheel version.
@ -36,7 +30,9 @@ func (m *infer) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
// version=datetime.datetime.utcnow().strftime("%Y%m%d.%H%M%S"),
// ...
//)
artifact.BuildCommand = fmt.Sprintf(`"%s" setup.py bdist_wheel`, py)
py := python.GetExecutable()
artifact.BuildCommand = fmt.Sprintf(`%s setup.py bdist_wheel`, py)
return nil
}

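python.GetExecutable itself is not shown in this diff; a minimal sketch of its plausible behavior, inferred from the test change later in this diff that expects python.exe (not python3.exe) inside a Windows virtualenv:

```go
// Hypothetical sketch, not the actual libs/python implementation.
func GetExecutable() string {
	if runtime.GOOS == "windows" {
		return "python" // Windows environments typically expose python.exe only
	}
	return "python3"
}
```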

@ -17,7 +17,6 @@ import (
"github.com/databricks/cli/bundle/env"
"github.com/databricks/cli/bundle/metadata"
"github.com/databricks/cli/libs/fileset"
"github.com/databricks/cli/libs/git"
"github.com/databricks/cli/libs/locker"
"github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/tags"
@ -49,6 +48,10 @@ type Bundle struct {
// Exclusively use this field for filesystem operations.
SyncRoot vfs.Path
// Path to the root of git worktree containing the bundle.
// https://git-scm.com/docs/git-worktree
WorktreeRoot vfs.Path
// Config contains the bundle configuration.
// It is loaded from the bundle configuration files and mutators may update it.
Config config.Root
@ -183,7 +186,7 @@ func (b *Bundle) CacheDir(ctx context.Context, paths ...string) (string, error)
// Make directory if it doesn't exist yet.
dir := filepath.Join(parts...)
err := os.MkdirAll(dir, 0700)
err := os.MkdirAll(dir, 0o700)
if err != nil {
return "", err
}
@ -200,7 +203,7 @@ func (b *Bundle) InternalDir(ctx context.Context) (string, error) {
}
dir := filepath.Join(cacheDir, internalFolder)
err = os.MkdirAll(dir, 0700)
err = os.MkdirAll(dir, 0o700)
if err != nil {
return dir, err
}
@ -223,15 +226,6 @@ func (b *Bundle) GetSyncIncludePatterns(ctx context.Context) ([]string, error) {
return append(b.Config.Sync.Include, filepath.ToSlash(filepath.Join(internalDirRel, "*.*"))), nil
}
func (b *Bundle) GitRepository() (*git.Repository, error) {
_, err := vfs.FindLeafInTree(b.BundleRoot, ".git")
if err != nil {
return nil, fmt.Errorf("unable to locate repository root: %w", err)
}
return git.NewRepository(b.BundleRoot)
}
// AuthEnv returns a map with environment variables and their values
// derived from the workspace client configuration that was resolved
// in the context of this bundle.


@ -32,6 +32,10 @@ func (r ReadOnlyBundle) SyncRoot() vfs.Path {
return r.b.SyncRoot
}
func (r ReadOnlyBundle) WorktreeRoot() vfs.Path {
return r.b.WorktreeRoot
}
func (r ReadOnlyBundle) WorkspaceClient() *databricks.WorkspaceClient {
return r.b.WorkspaceClient()
}


@ -49,4 +49,8 @@ type Bundle struct {
// Databricks CLI version constraints required to run the bundle.
DatabricksCliVersion string `json:"databricks_cli_version,omitempty"`
// A stable generated UUID for the bundle. This is normally serialized by
// Databricks first party template when a user runs bundle init.
Uuid string `json:"uuid,omitempty"`
}


@ -47,8 +47,10 @@ type PyDABs struct {
Import []string `json:"import,omitempty"`
}
type Command string
type ScriptHook string
type (
Command string
ScriptHook string
)
// These hook names are subject to change and currently experimental
const (


@ -6,8 +6,10 @@ import (
"github.com/databricks/databricks-sdk-go/service/jobs"
)
var jobOrder = yamlsaver.NewOrder([]string{"name", "job_clusters", "compute", "tasks"})
var taskOrder = yamlsaver.NewOrder([]string{"task_key", "depends_on", "existing_cluster_id", "new_cluster", "job_cluster_key"})
var (
jobOrder = yamlsaver.NewOrder([]string{"name", "job_clusters", "compute", "tasks"})
taskOrder = yamlsaver.NewOrder([]string{"task_key", "depends_on", "existing_cluster_id", "new_cluster", "job_cluster_key"})
)
func ConvertJobToValue(job *jobs.Job) (dyn.Value, error) {
value := make(map[string]dyn.Value)


@ -27,7 +27,7 @@ func (m *processRootIncludes) Apply(ctx context.Context, b *bundle.Bundle) diag.
var out []bundle.Mutator
// Map with files we've already seen to avoid loading them twice.
var seen = map[string]bool{}
seen := map[string]bool{}
for _, file := range config.FileNames {
seen[file] = true


@ -9,6 +9,7 @@ import (
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/textutil"
@ -221,6 +222,27 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
dashboard.DisplayName = prefix + dashboard.DisplayName
}
if config.IsExplicitlyEnabled((b.Config.Presets.SourceLinkedDeployment)) {
isDatabricksWorkspace := dbr.RunsOnRuntime(ctx) && strings.HasPrefix(b.SyncRootPath, "/Workspace/")
if !isDatabricksWorkspace {
target := b.Config.Bundle.Target
path := dyn.NewPath(dyn.Key("targets"), dyn.Key(target), dyn.Key("presets"), dyn.Key("source_linked_deployment"))
diags = diags.Append(
diag.Diagnostic{
Severity: diag.Warning,
Summary: "source-linked deployment is available only in the Databricks Workspace",
Paths: []dyn.Path{
path,
},
Locations: b.Config.GetLocations(path[2:].String()),
},
)
disabled := false
b.Config.Presets.SourceLinkedDeployment = &disabled
}
}
return diags
}


@ -2,12 +2,16 @@ package mutator_test
import (
"context"
"runtime"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/require"
@ -69,7 +73,7 @@ func TestApplyPresetsPrefix(t *testing.T) {
}
}
func TestApplyPresetsPrefixForUcSchema(t *testing.T) {
func TestApplyPresetsPrefixForSchema(t *testing.T) {
tests := []struct {
name string
prefix string
@ -125,6 +129,36 @@ func TestApplyPresetsPrefixForUcSchema(t *testing.T) {
}
}
func TestApplyPresetsVolumesShouldNotBePrefixed(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Volumes: map[string]*resources.Volume{
"volume1": {
CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
Name: "volume1",
CatalogName: "catalog1",
SchemaName: "schema1",
},
},
},
},
Presets: config.Presets{
NamePrefix: "[prefix]",
},
},
}
ctx := context.Background()
diag := bundle.Apply(ctx, b, mutator.ApplyPresets())
if diag.HasError() {
t.Fatalf("unexpected error: %v", diag)
}
require.Equal(t, "volume1", b.Config.Resources.Volumes["volume1"].Name)
}
func TestApplyPresetsTags(t *testing.T) {
tests := []struct {
name string
@ -364,3 +398,87 @@ func TestApplyPresetsResourceNotDefined(t *testing.T) {
})
}
}
func TestApplyPresetsSourceLinkedDeployment(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("this test is not applicable on Windows because source-linked mode works only in the Databricks Workspace")
}
testContext := context.Background()
enabled := true
disabled := false
workspacePath := "/Workspace/user.name@company.com"
tests := []struct {
bundlePath string
ctx context.Context
name string
initialValue *bool
expectedValue *bool
expectedWarning string
}{
{
name: "preset enabled, bundle in Workspace, databricks runtime",
bundlePath: workspacePath,
ctx: dbr.MockRuntime(testContext, true),
initialValue: &enabled,
expectedValue: &enabled,
},
{
name: "preset enabled, bundle not in Workspace, databricks runtime",
bundlePath: "/Users/user.name@company.com",
ctx: dbr.MockRuntime(testContext, true),
initialValue: &enabled,
expectedValue: &disabled,
expectedWarning: "source-linked deployment is available only in the Databricks Workspace",
},
{
name: "preset enabled, bundle in Workspace, not databricks runtime",
bundlePath: workspacePath,
ctx: dbr.MockRuntime(testContext, false),
initialValue: &enabled,
expectedValue: &disabled,
expectedWarning: "source-linked deployment is available only in the Databricks Workspace",
},
{
name: "preset disabled, bundle in Workspace, databricks runtime",
bundlePath: workspacePath,
ctx: dbr.MockRuntime(testContext, true),
initialValue: &disabled,
expectedValue: &disabled,
},
{
name: "preset nil, bundle in Workspace, databricks runtime",
bundlePath: workspacePath,
ctx: dbr.MockRuntime(testContext, true),
initialValue: nil,
expectedValue: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
b := &bundle.Bundle{
SyncRootPath: tt.bundlePath,
Config: config.Root{
Presets: config.Presets{
SourceLinkedDeployment: tt.initialValue,
},
},
}
bundletest.SetLocation(b, "presets.source_linked_deployment", []dyn.Location{{File: "databricks.yml"}})
diags := bundle.Apply(tt.ctx, b, mutator.ApplyPresets())
if diags.HasError() {
t.Fatalf("unexpected error: %v", diags)
}
if tt.expectedWarning != "" {
require.Equal(t, tt.expectedWarning, diags[0].Summary)
require.NotEmpty(t, diags[0].Locations)
}
require.Equal(t, tt.expectedValue, b.Config.Presets.SourceLinkedDeployment)
})
}
}


@ -42,7 +42,6 @@ func rewriteComputeIdToClusterId(v dyn.Value, p dyn.Path) (dyn.Value, diag.Diagn
var diags diag.Diagnostics
computeIdPath := p.Append(dyn.Key("compute_id"))
computeId, err := dyn.GetByPath(v, computeIdPath)
// If the "compute_id" key is not set, we don't need to do anything.
if err != nil {
return v, nil


@ -0,0 +1,44 @@
package mutator
import (
"context"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
)
type configureVolumeDefaults struct{}
func ConfigureVolumeDefaults() bundle.Mutator {
return &configureVolumeDefaults{}
}
func (m *configureVolumeDefaults) Name() string {
return "ConfigureVolumeDefaults"
}
func (m *configureVolumeDefaults) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics
pattern := dyn.NewPattern(
dyn.Key("resources"),
dyn.Key("volumes"),
dyn.AnyKey(),
)
// Configure defaults for all volumes.
err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
return dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
var err error
v, err = setIfNotExists(v, dyn.NewPath(dyn.Key("volume_type")), dyn.V("MANAGED"))
if err != nil {
return dyn.InvalidValue, err
}
return v, nil
})
})
diags = diags.Extend(diag.FromErr(err))
return diags
}

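setIfNotExists is not part of this hunk; a plausible shape, assuming it only writes the default when the key is absent (the test below confirms that an explicitly set empty string is left untouched):

```go
// Hypothetical sketch of the helper used above.
func setIfNotExists(v dyn.Value, path dyn.Path, defaultValue dyn.Value) (dyn.Value, error) {
	// If the key already exists (even as an empty string), keep it as-is.
	if _, err := dyn.GetByPath(v, path); err == nil {
		return v, nil
	}
	return dyn.SetByPath(v, path, defaultValue)
}
```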

@ -0,0 +1,75 @@
package mutator_test
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigureVolumeDefaultsVolumeType(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Volumes: map[string]*resources.Volume{
"v1": {
// Empty string is skipped.
// See below for how it is set.
CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
VolumeType: "",
},
},
"v2": {
// Non-empty string is skipped.
CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
VolumeType: "already-set",
},
},
"v3": {
// No volume type set.
},
"v4": nil,
},
},
},
}
// We can't set an empty string in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.volumes.v1.volume_type", dyn.V(""))
})
diags := bundle.Apply(context.Background(), b, mutator.ConfigureVolumeDefaults())
require.NoError(t, diags.Error())
var v dyn.Value
var err error
// Set to empty string; unchanged.
v, err = dyn.Get(b.Config.Value(), "resources.volumes.v1.volume_type")
require.NoError(t, err)
assert.Equal(t, "", v.MustString())
// Set to non-empty string; unchanged.
v, err = dyn.Get(b.Config.Value(), "resources.volumes.v2.volume_type")
require.NoError(t, err)
assert.Equal(t, "already-set", v.MustString())
// Not set; set to default.
v, err = dyn.Get(b.Config.Value(), "resources.volumes.v3.volume_type")
require.NoError(t, err)
assert.Equal(t, "MANAGED", v.MustString())
// No valid volume; no change.
_, err = dyn.Get(b.Config.Value(), "resources.volumes.v4.volume_type")
assert.True(t, dyn.IsCannotTraverseNilError(err))
}


@ -17,7 +17,7 @@ import (
)
func touchEmptyFile(t *testing.T, path string) {
err := os.MkdirAll(filepath.Dir(path), 0700)
err := os.MkdirAll(filepath.Dir(path), 0o700)
require.NoError(t, err)
f, err := os.Create(path)
require.NoError(t, err)


@ -28,7 +28,7 @@ func (m *expandWorkspaceRoot) Apply(ctx context.Context, b *bundle.Bundle) diag.
}
currentUser := b.Config.Workspace.CurrentUser
if currentUser == nil || currentUser.UserName == "" {
if currentUser == nil || currentUser.User == nil || currentUser.UserName == "" {
return diag.Errorf("unable to expand workspace root: current user not set")
}


@ -10,8 +10,7 @@ import (
"github.com/databricks/cli/libs/diag"
)
type initializeURLs struct {
}
type initializeURLs struct{}
// InitializeURLs makes sure the URL field of each resource is configured.
// NOTE: since this depends on an extra API call, this mutator adds some extra
@ -32,11 +31,14 @@ func (m *initializeURLs) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagn
}
orgId := strconv.FormatInt(workspaceId, 10)
host := b.WorkspaceClient().Config.CanonicalHostName()
initializeForWorkspace(b, orgId, host)
err = initializeForWorkspace(b, orgId, host)
if err != nil {
return diag.FromErr(err)
}
return nil
}
func initializeForWorkspace(b *bundle.Bundle, orgId string, host string) error {
func initializeForWorkspace(b *bundle.Bundle, orgId, host string) error {
baseURL, err := url.Parse(host)
if err != nil {
return err


@ -110,7 +110,8 @@ func TestInitializeURLs(t *testing.T) {
"dashboard1": "https://mycompany.databricks.com/dashboardsv3/01ef8d56871e1d50ae30ce7375e42478/published?o=123456",
}
initializeForWorkspace(b, "123456", "https://mycompany.databricks.com/")
err := initializeForWorkspace(b, "123456", "https://mycompany.databricks.com/")
require.NoError(t, err)
for _, group := range b.Config.Resources.AllResources() {
for key, r := range group.Resources {
@ -133,7 +134,8 @@ func TestInitializeURLsWithoutOrgId(t *testing.T) {
},
}
initializeForWorkspace(b, "123456", "https://adb-123456.azuredatabricks.net/")
err := initializeForWorkspace(b, "123456", "https://adb-123456.azuredatabricks.net/")
require.NoError(t, err)
require.Equal(t, "https://adb-123456.azuredatabricks.net/jobs/1", b.Config.Resources.Jobs["job1"].URL)
}


@ -2,12 +2,14 @@ package mutator
import (
"context"
"errors"
"os"
"path/filepath"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/git"
"github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/vfs"
)
type loadGitDetails struct{}
@ -21,50 +23,42 @@ func (m *loadGitDetails) Name() string {
}
func (m *loadGitDetails) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
// Load relevant git repository
repo, err := git.NewRepository(b.BundleRoot)
var diags diag.Diagnostics
info, err := git.FetchRepositoryInfo(ctx, b.BundleRoot.Native(), b.WorkspaceClient())
if err != nil {
return diag.FromErr(err)
if !errors.Is(err, os.ErrNotExist) {
diags = append(diags, diag.WarningFromErr(err)...)
}
}
// Read branch name of current checkout
branch, err := repo.CurrentBranch()
if err == nil {
b.Config.Bundle.Git.ActualBranch = branch
if b.Config.Bundle.Git.Branch == "" {
// Only load branch if there's no user defined value
b.Config.Bundle.Git.Inferred = true
b.Config.Bundle.Git.Branch = branch
}
if info.WorktreeRoot == "" {
b.WorktreeRoot = b.BundleRoot
} else {
log.Warnf(ctx, "failed to load current branch: %s", err)
b.WorktreeRoot = vfs.MustNew(info.WorktreeRoot)
}
b.Config.Bundle.Git.ActualBranch = info.CurrentBranch
if b.Config.Bundle.Git.Branch == "" {
// Only load branch if there's no user defined value
b.Config.Bundle.Git.Inferred = true
b.Config.Bundle.Git.Branch = info.CurrentBranch
}
// load commit hash if undefined
if b.Config.Bundle.Git.Commit == "" {
commit, err := repo.LatestCommit()
if err != nil {
log.Warnf(ctx, "failed to load latest commit: %s", err)
} else {
b.Config.Bundle.Git.Commit = commit
}
}
// load origin url if undefined
if b.Config.Bundle.Git.OriginURL == "" {
remoteUrl := repo.OriginUrl()
b.Config.Bundle.Git.OriginURL = remoteUrl
b.Config.Bundle.Git.Commit = info.LatestCommit
}
// Compute relative path of the bundle root from the Git repo root.
absBundlePath, err := filepath.Abs(b.BundleRootPath)
if err != nil {
return diag.FromErr(err)
// load origin url if undefined
if b.Config.Bundle.Git.OriginURL == "" {
b.Config.Bundle.Git.OriginURL = info.OriginURL
}
// repo.Root() returns the absolute path of the repo
relBundlePath, err := filepath.Rel(repo.Root(), absBundlePath)
relBundlePath, err := filepath.Rel(b.WorktreeRoot.Native(), b.BundleRoot.Native())
if err != nil {
return diag.FromErr(err)
diags = append(diags, diag.FromErr(err)...)
} else {
b.Config.Bundle.Git.BundleRootPath = filepath.ToSlash(relBundlePath)
}
b.Config.Bundle.Git.BundleRootPath = filepath.ToSlash(relBundlePath)
return nil
return diags
}

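The fields consumed above imply roughly this shape for the value returned by git.FetchRepositoryInfo (an inference from usage, not the verbatim declaration):

```go
type RepositoryInfo struct {
	CurrentBranch string // branch of the current checkout, if any
	LatestCommit  string // commit hash of HEAD
	OriginURL     string // URL of the "origin" remote, if configured
	WorktreeRoot  string // absolute worktree root; empty when not in a repository
}
```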

@ -26,7 +26,6 @@ func DefaultMutators() []bundle.Mutator {
ComputeIdToClusterId(),
InitializeVariables(),
DefineDefaultTarget(),
LoadGitDetails(),
pythonmutator.PythonMutator(pythonmutator.PythonMutatorPhaseLoad),
// Note: This mutator must run before the target overrides are merged.


@ -6,6 +6,7 @@ import (
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/env"
)
@ -22,7 +23,7 @@ func (m *overrideCompute) Name() string {
func overrideJobCompute(j *resources.Job, compute string) {
for i := range j.Tasks {
var task = &j.Tasks[i]
task := &j.Tasks[i]
if task.ForEachTask != nil {
task = &task.ForEachTask.Task
@ -38,18 +39,32 @@ func overrideJobCompute(j *resources.Job, compute string) {
}
func (m *overrideCompute) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
if b.Config.Bundle.Mode != config.Development {
var diags diag.Diagnostics
if b.Config.Bundle.Mode == config.Production {
if b.Config.Bundle.ClusterId != "" {
return diag.Errorf("cannot override compute for a target that does not use 'mode: development'")
// Overriding compute via a command-line flag for production works, but is not recommended.
diags = diags.Extend(diag.Diagnostics{{
Summary: "Setting a cluster override for a target that uses 'mode: production' is not recommended",
Detail: "It is recommended to always use the same compute for production target for consistency.",
Severity: diag.Warning,
}})
}
return nil
}
if v := env.Get(ctx, "DATABRICKS_CLUSTER_ID"); v != "" {
// For historical reasons, we allow setting the cluster ID via the DATABRICKS_CLUSTER_ID
// when development mode is used. Sometimes, this is done by accident, so we log an info message.
if b.Config.Bundle.Mode == config.Development {
cmdio.LogString(ctx, "Setting a cluster override because DATABRICKS_CLUSTER_ID is set. It is recommended to use --cluster-id instead, which works in any target mode.")
} else {
// We don't allow using DATABRICKS_CLUSTER_ID in any other mode, it's too error-prone.
return diag.Warningf("The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'")
}
b.Config.Bundle.ClusterId = v
}
if b.Config.Bundle.ClusterId == "" {
return nil
return diags
}
r := b.Config.Resources
@ -57,5 +72,5 @@ func (m *overrideCompute) Apply(ctx context.Context, b *bundle.Bundle) diag.Diag
overrideJobCompute(r.Jobs[i], b.Config.Bundle.ClusterId)
}
return nil
return diags
}


@ -8,13 +8,14 @@ import (
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestOverrideDevelopment(t *testing.T) {
func TestOverrideComputeModeDevelopment(t *testing.T) {
t.Setenv("DATABRICKS_CLUSTER_ID", "")
b := &bundle.Bundle{
Config: config.Root{
@ -62,10 +63,13 @@ func TestOverrideDevelopment(t *testing.T) {
assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[3].JobClusterKey)
}
func TestOverrideDevelopmentEnv(t *testing.T) {
func TestOverrideComputeModeDefaultIgnoresVariable(t *testing.T) {
t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
b := &bundle.Bundle{
Config: config.Root{
Bundle: config.Bundle{
Mode: "",
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {JobSettings: &jobs.JobSettings{
@ -86,11 +90,12 @@ func TestOverrideDevelopmentEnv(t *testing.T) {
m := mutator.OverrideCompute()
diags := bundle.Apply(context.Background(), b, m)
require.NoError(t, diags.Error())
require.Len(t, diags, 1)
assert.Equal(t, "The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'", diags[0].Summary)
assert.Equal(t, "cluster2", b.Config.Resources.Jobs["job1"].Tasks[1].ExistingClusterId)
}
func TestOverridePipelineTask(t *testing.T) {
func TestOverrideComputePipelineTask(t *testing.T) {
t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
b := &bundle.Bundle{
Config: config.Root{
@ -115,7 +120,7 @@ func TestOverridePipelineTask(t *testing.T) {
assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[0].ExistingClusterId)
}
func TestOverrideForEachTask(t *testing.T) {
func TestOverrideComputeForEachTask(t *testing.T) {
t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
b := &bundle.Bundle{
Config: config.Root{
@ -140,10 +145,11 @@ func TestOverrideForEachTask(t *testing.T) {
assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[0].ForEachTask.Task)
}
func TestOverrideProduction(t *testing.T) {
func TestOverrideComputeModeProduction(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Bundle: config.Bundle{
Mode: config.Production,
ClusterId: "newClusterID",
},
Resources: config.Resources{
@ -166,13 +172,19 @@ func TestOverrideProduction(t *testing.T) {
m := mutator.OverrideCompute()
diags := bundle.Apply(context.Background(), b, m)
require.True(t, diags.HasError())
require.Len(t, diags, 1)
assert.Equal(t, "Setting a cluster override for a target that uses 'mode: production' is not recommended", diags[0].Summary)
assert.Equal(t, diag.Warning, diags[0].Severity)
assert.Equal(t, "newClusterID", b.Config.Resources.Jobs["job1"].Tasks[0].ExistingClusterId)
}
func TestOverrideProductionEnv(t *testing.T) {
func TestOverrideComputeModeProductionIgnoresVariable(t *testing.T) {
t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
b := &bundle.Bundle{
Config: config.Root{
Bundle: config.Bundle{
Mode: config.Production,
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {JobSettings: &jobs.JobSettings{
@ -193,5 +205,7 @@ func TestOverrideProductionEnv(t *testing.T) {
m := mutator.OverrideCompute()
diags := bundle.Apply(context.Background(), b, m)
require.NoError(t, diags.Error())
require.Len(t, diags, 1)
assert.Equal(t, "The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'", diags[0].Summary)
assert.Equal(t, "cluster2", b.Config.Resources.Jobs["job1"].Tasks[1].ExistingClusterId)
}


@ -95,7 +95,7 @@ func jobRewritePatterns() []jobRewritePattern {
// VisitJobPaths visits all paths in job resources and applies a function to each path.
func VisitJobPaths(value dyn.Value, fn VisitFunc) (dyn.Value, error) {
var err error
var newValue = value
newValue := value
for _, rewritePattern := range jobRewritePatterns() {
newValue, err = dyn.MapByPattern(newValue, rewritePattern.pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
@ -105,7 +105,6 @@ func VisitJobPaths(value dyn.Value, fn VisitFunc) (dyn.Value, error) {
return fn(p, rewritePattern.kind, v)
})
if err != nil {
return dyn.InvalidValue, err
}

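A minimal usage sketch for VisitJobPaths, assuming VisitFunc is func(dyn.Path, PathKind, dyn.Value) (dyn.Value, error), as the fn(p, rewritePattern.kind, v) call suggests:

```go
// Collect every job path the visitor touches, leaving all values unchanged.
var visited []string
newValue, err := VisitJobPaths(value, func(p dyn.Path, kind PathKind, v dyn.Value) (dyn.Value, error) {
	visited = append(visited, p.String())
	return v, nil
})
```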

@ -57,14 +57,12 @@ func (m *prependWorkspacePrefix) Apply(ctx context.Context, b *bundle.Bundle) di
return dyn.NewValue(fmt.Sprintf("/Workspace%s", path), v.Locations()), nil
})
if err != nil {
return dyn.InvalidValue, err
}
}
return v, nil
})
if err != nil {
return diag.FromErr(err)
}


@ -6,6 +6,7 @@ import (
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/iamutil"
@ -57,6 +58,14 @@ func transformDevelopmentMode(ctx context.Context, b *bundle.Bundle) {
t.TriggerPauseStatus = config.Paused
}
if !config.IsExplicitlyDisabled(t.SourceLinkedDeployment) {
isInWorkspace := strings.HasPrefix(b.SyncRootPath, "/Workspace/")
if isInWorkspace && dbr.RunsOnRuntime(ctx) {
enabled := true
t.SourceLinkedDeployment = &enabled
}
}
if !config.IsExplicitlyDisabled(t.PipelinesDevelopment) {
enabled := true
t.PipelinesDevelopment = &enabled


@ -3,14 +3,17 @@ package mutator
import (
"context"
"reflect"
"strings"
"runtime"
"slices"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/tags"
"github.com/databricks/cli/libs/vfs"
sdkconfig "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/compute"
@ -128,6 +131,9 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
Schemas: map[string]*resources.Schema{
"schema1": {CreateSchema: &catalog.CreateSchema{Name: "schema1"}},
},
Volumes: map[string]*resources.Volume{
"volume1": {CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{Name: "volume1"}},
},
Clusters: map[string]*resources.Cluster{
"cluster1": {ClusterSpec: &compute.ClusterSpec{ClusterName: "cluster1", SparkVersion: "13.2.x", NumWorkers: 1}},
},
@ -140,6 +146,7 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
},
},
},
SyncRoot: vfs.MustNew("/Users/lennart.kats@databricks.com"),
// Use AWS implementation for testing.
Tagging: tags.ForCloud(&sdkconfig.Config{
Host: "https://company.cloud.databricks.com",
@ -307,6 +314,8 @@ func TestProcessTargetModeDefault(t *testing.T) {
assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name)
assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name)
assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName)
assert.Equal(t, "schema1", b.Config.Resources.Schemas["schema1"].Name)
assert.Equal(t, "volume1", b.Config.Resources.Volumes["volume1"].Name)
assert.Equal(t, "cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
}
@ -351,6 +360,8 @@ func TestProcessTargetModeProduction(t *testing.T) {
assert.Equal(t, "servingendpoint1", b.Config.Resources.ModelServingEndpoints["servingendpoint1"].Name)
assert.Equal(t, "registeredmodel1", b.Config.Resources.RegisteredModels["registeredmodel1"].Name)
assert.Equal(t, "qualityMonitor1", b.Config.Resources.QualityMonitors["qualityMonitor1"].TableName)
assert.Equal(t, "schema1", b.Config.Resources.Schemas["schema1"].Name)
assert.Equal(t, "volume1", b.Config.Resources.Volumes["volume1"].Name)
assert.Equal(t, "cluster1", b.Config.Resources.Clusters["cluster1"].ClusterName)
}
@ -384,10 +395,17 @@ func TestAllResourcesMocked(t *testing.T) {
}
}
// Make sure that we at least rename all resources
func TestAllResourcesRenamed(t *testing.T) {
// Make sure that we at least rename all non-UC resources
func TestAllNonUcResourcesAreRenamed(t *testing.T) {
b := mockBundle(config.Development)
// UC resources should not have a prefix added to their name. Right now
// this list only contains the Volume resource since we have yet to remove
// prefixing support for UC schemas and registered models.
ucFields := []reflect.Type{
reflect.TypeOf(&resources.Volume{}),
}
m := bundle.Seq(ProcessTargetMode(), ApplyPresets())
diags := bundle.Apply(context.Background(), b, m)
require.NoError(t, diags.Error())
@ -400,14 +418,14 @@ func TestAllResourcesRenamed(t *testing.T) {
for _, key := range field.MapKeys() {
resource := field.MapIndex(key)
nameField := resource.Elem().FieldByName("Name")
if nameField.IsValid() && nameField.Kind() == reflect.String {
assert.True(
t,
strings.Contains(nameField.String(), "dev"),
"process_target_mode should rename '%s' in '%s'",
key,
resources.Type().Field(i).Name,
)
if !nameField.IsValid() || nameField.Kind() != reflect.String {
continue
}
if slices.Contains(ucFields, resource.Type()) {
assert.NotContains(t, nameField.String(), "dev", "process_target_mode should not rename '%s' in '%s'", key, resources.Type().Field(i).Name)
} else {
assert.Contains(t, nameField.String(), "dev", "process_target_mode should rename '%s' in '%s'", key, resources.Type().Field(i).Name)
}
}
}
@ -522,3 +540,32 @@ func TestPipelinesDevelopmentDisabled(t *testing.T) {
assert.False(t, b.Config.Resources.Pipelines["pipeline1"].PipelineSpec.Development)
}
func TestSourceLinkedDeploymentEnabled(t *testing.T) {
b, diags := processSourceLinkedBundle(t, true)
require.NoError(t, diags.Error())
assert.True(t, *b.Config.Presets.SourceLinkedDeployment)
}
func TestSourceLinkedDeploymentDisabled(t *testing.T) {
b, diags := processSourceLinkedBundle(t, false)
require.NoError(t, diags.Error())
assert.False(t, *b.Config.Presets.SourceLinkedDeployment)
}
func processSourceLinkedBundle(t *testing.T, presetEnabled bool) (*bundle.Bundle, diag.Diagnostics) {
if runtime.GOOS == "windows" {
t.Skip("this test is not applicable on Windows because source-linked mode works only in the Databricks Workspace")
}
b := mockBundle(config.Development)
workspacePath := "/Workspace/lennart@company.com/"
b.SyncRootPath = workspacePath
b.Config.Presets.SourceLinkedDeployment = &presetEnabled
ctx := dbr.MockRuntime(context.Background(), true)
m := bundle.Seq(ProcessTargetMode(), ApplyPresets())
diags := bundle.Apply(ctx, b, m)
return b, diags
}


@ -30,7 +30,6 @@ type parsePythonDiagnosticsTest struct {
}
func TestParsePythonDiagnostics(t *testing.T) {
testCases := []parsePythonDiagnosticsTest{
{
name: "short error with location",


@ -9,12 +9,11 @@ import (
"io"
"os"
"path/filepath"
"strings"
"github.com/databricks/databricks-sdk-go/logger"
"github.com/fatih/color"
"strings"
"github.com/databricks/cli/libs/python"
"github.com/databricks/cli/bundle/env"
@ -94,11 +93,10 @@ func (m *pythonMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagno
// mutateDiags is used because Mutate returns 'error' instead of 'diag.Diagnostics'
var mutateDiags diag.Diagnostics
var mutateDiagsHasError = errors.New("unexpected error")
mutateDiagsHasError := errors.New("unexpected error")
err := b.Config.Mutate(func(leftRoot dyn.Value) (dyn.Value, error) {
pythonPath, err := detectExecutable(ctx, experimental.PyDABs.VEnvPath)
if err != nil {
return dyn.InvalidValue, fmt.Errorf("failed to get Python interpreter path: %w", err)
}
@ -141,7 +139,7 @@ func createCacheDir(ctx context.Context) (string, error) {
// use 'default' as target name
cacheDir := filepath.Join(tempDir, "default", "pydabs")
err := os.MkdirAll(cacheDir, 0700)
err := os.MkdirAll(cacheDir, 0o700)
if err != nil {
return "", err
}
@ -152,7 +150,7 @@ func createCacheDir(ctx context.Context) (string, error) {
return os.MkdirTemp("", "-pydabs")
}
func (m *pythonMutator) runPythonMutator(ctx context.Context, cacheDir string, rootPath string, pythonPath string, root dyn.Value) (dyn.Value, diag.Diagnostics) {
func (m *pythonMutator) runPythonMutator(ctx context.Context, cacheDir, rootPath, pythonPath string, root dyn.Value) (dyn.Value, diag.Diagnostics) {
inputPath := filepath.Join(cacheDir, "input.json")
outputPath := filepath.Join(cacheDir, "output.json")
diagnosticsPath := filepath.Join(cacheDir, "diagnostics.json")
@ -263,10 +261,10 @@ func writeInputFile(inputPath string, input dyn.Value) error {
return fmt.Errorf("failed to marshal input: %w", err)
}
return os.WriteFile(inputPath, rootConfigJson, 0600)
return os.WriteFile(inputPath, rootConfigJson, 0o600)
}
func loadOutputFile(rootPath string, outputPath string) (dyn.Value, diag.Diagnostics) {
func loadOutputFile(rootPath, outputPath string) (dyn.Value, diag.Diagnostics) {
outputFile, err := os.Open(outputPath)
if err != nil {
return dyn.InvalidValue, diag.FromErr(fmt.Errorf("failed to open output file: %w", err))
@ -381,7 +379,7 @@ func createLoadOverrideVisitor(ctx context.Context) merge.OverrideVisitor {
return right, nil
},
VisitUpdate: func(valuePath dyn.Path, left dyn.Value, right dyn.Value) (dyn.Value, error) {
VisitUpdate: func(valuePath dyn.Path, left, right dyn.Value) (dyn.Value, error) {
return dyn.InvalidValue, fmt.Errorf("unexpected change at %q (update)", valuePath.String())
},
}
@ -430,7 +428,7 @@ func createInitOverrideVisitor(ctx context.Context) merge.OverrideVisitor {
return right, nil
},
VisitUpdate: func(valuePath dyn.Path, left dyn.Value, right dyn.Value) (dyn.Value, error) {
VisitUpdate: func(valuePath dyn.Path, left, right dyn.Value) (dyn.Value, error) {
if !valuePath.HasPrefix(jobsPath) {
return dyn.InvalidValue, fmt.Errorf("unexpected change at %q (update)", valuePath.String())
}


@ -106,7 +106,6 @@ func TestPythonMutator_load(t *testing.T) {
Column: 5,
},
}, diags[0].Locations)
}
func TestPythonMutator_load_disallowed(t *testing.T) {
@ -542,7 +541,7 @@ func TestLoadDiagnosticsFile_nonExistent(t *testing.T) {
func TestInterpreterPath(t *testing.T) {
if runtime.GOOS == "windows" {
assert.Equal(t, "venv\\Scripts\\python3.exe", interpreterPath("venv"))
assert.Equal(t, "venv\\Scripts\\python.exe", interpreterPath("venv"))
} else {
assert.Equal(t, "venv/bin/python3", interpreterPath("venv"))
}
@ -588,7 +587,7 @@ or activate the environment before running CLI commands:
assert.Equal(t, expected, out)
}
func withProcessStub(t *testing.T, args []string, output string, diagnostics string) context.Context {
func withProcessStub(t *testing.T, args []string, output, diagnostics string) context.Context {
ctx := context.Background()
ctx, stub := process.WithStub(ctx)
@ -611,10 +610,10 @@ func withProcessStub(t *testing.T, args []string, output string, diagnostics str
assert.NoError(t, err)
if reflect.DeepEqual(actual.Args, args) {
err := os.WriteFile(outputPath, []byte(output), 0600)
err := os.WriteFile(outputPath, []byte(output), 0o600)
require.NoError(t, err)
err = os.WriteFile(diagnosticsPath, []byte(diagnostics), 0600)
err = os.WriteFile(diagnosticsPath, []byte(diagnostics), 0o600)
require.NoError(t, err)
return nil
@ -626,7 +625,7 @@ func withProcessStub(t *testing.T, args []string, output string, diagnostics str
return ctx
}
func loadYaml(name string, content string) *bundle.Bundle {
func loadYaml(name, content string) *bundle.Bundle {
v, diag := config.LoadFromBytes(name, []byte(content))
if diag.Error() != nil {
@ -650,17 +649,17 @@ func withFakeVEnv(t *testing.T, venvPath string) {
interpreterPath := interpreterPath(venvPath)
err = os.MkdirAll(filepath.Dir(interpreterPath), 0755)
err = os.MkdirAll(filepath.Dir(interpreterPath), 0o755)
if err != nil {
panic(err)
}
err = os.WriteFile(interpreterPath, []byte(""), 0755)
err = os.WriteFile(interpreterPath, []byte(""), 0o755)
if err != nil {
panic(err)
}
err = os.WriteFile(filepath.Join(venvPath, "pyvenv.cfg"), []byte(""), 0755)
err = os.WriteFile(filepath.Join(venvPath, "pyvenv.cfg"), []byte(""), 0o755)
if err != nil {
panic(err)
}
@ -674,7 +673,7 @@ func withFakeVEnv(t *testing.T, venvPath string) {
func interpreterPath(venvPath string) string {
if runtime.GOOS == "windows" {
return filepath.Join(venvPath, "Scripts", "python3.exe")
return filepath.Join(venvPath, "Scripts", "python.exe")
} else {
return filepath.Join(venvPath, "bin", "python3")
}

View File
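
The hunks above switch the Windows interpreter lookup from python3.exe to python.exe. The resulting platform split, extracted as a runnable program (mirroring the interpreterPath helper in the diff):

```
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// interpreterPath mirrors the helper above: virtualenvs place the
// interpreter under Scripts\ on Windows and under bin/ elsewhere.
func interpreterPath(venvPath string) string {
	if runtime.GOOS == "windows" {
		return filepath.Join(venvPath, "Scripts", "python.exe")
	}
	return filepath.Join(venvPath, "bin", "python3")
}

func main() {
	fmt.Println(interpreterPath("venv"))
}
```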

@ -36,8 +36,7 @@ func (m *resolveResourceReferences) Apply(ctx context.Context, b *bundle.Bundle)
return fmt.Errorf("failed to resolve %s, err: %w", v.Lookup, err)
}
v.Set(id)
return nil
return v.Set(id)
})
}

View File

@ -108,7 +108,8 @@ func TestNoLookupIfVariableIsSet(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(m.WorkspaceClient)
b.Config.Variables["my-cluster-id"].Set("random value")
err := b.Config.Variables["my-cluster-id"].Set("random value")
require.NoError(t, err)
diags := bundle.Apply(context.Background(), b, ResolveResourceReferences())
require.NoError(t, diags.Error())

View File

@ -32,11 +32,12 @@ func ResolveVariableReferencesInLookup() bundle.Mutator {
}
func ResolveVariableReferencesInComplexVariables() bundle.Mutator {
return &resolveVariableReferences{prefixes: []string{
"bundle",
"workspace",
"variables",
},
return &resolveVariableReferences{
prefixes: []string{
"bundle",
"workspace",
"variables",
},
pattern: dyn.NewPattern(dyn.Key("variables"), dyn.AnyKey(), dyn.Key("value")),
lookupFn: lookupForComplexVariables,
skipFn: skipResolvingInNonComplexVariables,
@ -173,7 +174,6 @@ func (m *resolveVariableReferences) Apply(ctx context.Context, b *bundle.Bundle)
return dyn.InvalidValue, dynvar.ErrSkipResolution
})
})
if err != nil {
return dyn.InvalidValue, err
}
@ -184,7 +184,6 @@ func (m *resolveVariableReferences) Apply(ctx context.Context, b *bundle.Bundle)
diags = diags.Extend(normaliseDiags)
return root, nil
})
if err != nil {
diags = diags.Extend(diag.FromErr(err))
}

View File

@ -63,7 +63,6 @@ func (m *rewriteWorkspacePrefix) Apply(ctx context.Context, b *bundle.Bundle) di
return v, nil
})
})
if err != nil {
return diag.FromErr(err)
}

View File

@ -81,5 +81,4 @@ func TestNoWorkspacePrefixUsed(t *testing.T) {
require.Equal(t, "${workspace.artifact_path}/jar1.jar", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[1].Libraries[0].Jar)
require.Equal(t, "${workspace.file_path}/notebook2", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[2].NotebookTask.NotebookPath)
require.Equal(t, "${workspace.artifact_path}/jar2.jar", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[2].Libraries[0].Jar)
}

View File

@ -12,8 +12,7 @@ import (
"github.com/databricks/databricks-sdk-go/service/jobs"
)
type setRunAs struct {
}
type setRunAs struct{}
// This mutator does two things:
//
@ -30,7 +29,7 @@ func (m *setRunAs) Name() string {
return "SetRunAs"
}
func reportRunAsNotSupported(resourceType string, location dyn.Location, currentUser string, runAsUser string) diag.Diagnostics {
func reportRunAsNotSupported(resourceType string, location dyn.Location, currentUser, runAsUser string) diag.Diagnostics {
return diag.Diagnostics{{
Summary: fmt.Sprintf("%s do not support a setting a run_as user that is different from the owner.\n"+
"Current identity: %s. Run as identity: %s.\n"+

View File

@ -42,6 +42,7 @@ func allResourceTypes(t *testing.T) []string {
"quality_monitors",
"registered_models",
"schemas",
"volumes",
},
resourceTypes,
)
@ -141,6 +142,7 @@ func TestRunAsErrorForUnsupportedResources(t *testing.T) {
"registered_models",
"experiments",
"schemas",
"volumes",
}
base := config.Root{

View File

@ -65,7 +65,6 @@ func setVariable(ctx context.Context, v dyn.Value, variable *variable.Variable,
// We should have had a value to set for the variable at this point.
return dyn.InvalidValue, fmt.Errorf(`no value assigned to required variable %s. Assignment can be done through the "--var" flag or by setting the %s environment variable`, name, bundleVarPrefix+name)
}
func (m *setVariables) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {

View File

@ -35,7 +35,7 @@ func (m *syncInferRoot) Name() string {
// If the path does not exist, it returns an empty string.
//
// See "sync_infer_root_internal_test.go" for examples.
func (m *syncInferRoot) computeRoot(path string, root string) string {
func (m *syncInferRoot) computeRoot(path, root string) string {
for !filepath.IsLocal(path) {
// Break if we have reached the root of the filesystem.
dir := filepath.Dir(root)

View File

@ -11,6 +11,7 @@ import (
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/notebook"
@ -103,8 +104,13 @@ func (t *translateContext) rewritePath(
return fmt.Errorf("path %s is not contained in sync root path", localPath)
}
// Prefix remote path with its remote root path.
remotePath := path.Join(t.b.Config.Workspace.FilePath, filepath.ToSlash(localRelPath))
var workspacePath string
if config.IsExplicitlyEnabled(t.b.Config.Presets.SourceLinkedDeployment) {
workspacePath = t.b.SyncRootPath
} else {
workspacePath = t.b.Config.Workspace.FilePath
}
remotePath := path.Join(workspacePath, filepath.ToSlash(localRelPath))
// Convert local path into workspace path via specified function.
interp, err := fn(*p, localPath, localRelPath, remotePath)
@ -120,7 +126,33 @@ func (t *translateContext) rewritePath(
func (t *translateContext) translateNotebookPath(literal, localFullPath, localRelPath, remotePath string) (string, error) {
nb, _, err := notebook.DetectWithFS(t.b.SyncRoot, filepath.ToSlash(localRelPath))
if errors.Is(err, fs.ErrNotExist) {
return "", fmt.Errorf("notebook %s not found", literal)
if filepath.Ext(localFullPath) != notebook.ExtensionNone {
return "", fmt.Errorf("notebook %s not found", literal)
}
extensions := []string{
notebook.ExtensionPython,
notebook.ExtensionR,
notebook.ExtensionScala,
notebook.ExtensionSql,
notebook.ExtensionJupyter,
}
// Check whether a file with a notebook extension already exists. This
// way we can provide a more targeted error message.
for _, ext := range extensions {
literalWithExt := literal + ext
localRelPathWithExt := filepath.ToSlash(localRelPath + ext)
if _, err := fs.Stat(t.b.SyncRoot, localRelPathWithExt); err == nil {
return "", fmt.Errorf(`notebook %s not found. Did you mean %s?
Local notebook references are expected to contain one of the following
file extensions: [%s]`, literal, literalWithExt, strings.Join(extensions, ", "))
}
}
// Return a generic error message if no matching possible file is found.
return "", fmt.Errorf(`notebook %s not found. Local notebook references are expected
to contain one of the following file extensions: [%s]`, literal, strings.Join(extensions, ", "))
}
if err != nil {
return "", fmt.Errorf("unable to determine if %s is a notebook: %w", localFullPath, err)
@ -243,8 +275,8 @@ func (m *translatePaths) Apply(_ context.Context, b *bundle.Bundle) diag.Diagnos
}
func gatherFallbackPaths(v dyn.Value, typ string) (map[string]string, error) {
var fallback = make(map[string]string)
var pattern = dyn.NewPattern(dyn.Key("resources"), dyn.Key(typ), dyn.AnyKey())
fallback := make(map[string]string)
pattern := dyn.NewPattern(dyn.Key("resources"), dyn.Key(typ), dyn.AnyKey())
// Previous behavior was to use a resource's location as the base path to resolve
// relative paths in its definition. With the introduction of [dyn.Value] throughout,

View File
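
The new error path above stats sibling files with known notebook extensions so the message can suggest the intended file. A condensed standalone sketch of the same lookup, using os.Stat on a plain directory instead of the bundle's sync-root filesystem (the function name is illustrative):

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

var notebookExtensions = []string{".py", ".r", ".scala", ".sql", ".ipynb"}

// suggestNotebook returns a targeted error when a notebook reference
// without an extension matches an on-disk file that carries one of the
// known notebook extensions; otherwise it returns the generic error.
func suggestNotebook(dir, literal string) error {
	for _, ext := range notebookExtensions {
		if _, err := os.Stat(filepath.Join(dir, literal+ext)); err == nil {
			return fmt.Errorf("notebook %s not found. Did you mean %s?", literal, literal+ext)
		}
	}
	return fmt.Errorf("notebook %s not found. Expected one of the extensions: [%s]",
		literal, strings.Join(notebookExtensions, ", "))
}

func main() {
	fmt.Println(suggestNotebook(".", "foo"))
}
```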

@ -2,8 +2,10 @@ package mutator_test
import (
"context"
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"testing"
@ -26,12 +28,13 @@ import (
func touchNotebookFile(t *testing.T, path string) {
f, err := os.Create(path)
require.NoError(t, err)
f.WriteString("# Databricks notebook source\n")
_, err = f.WriteString("# Databricks notebook source\n")
require.NoError(t, err)
f.Close()
}
func touchEmptyFile(t *testing.T, path string) {
err := os.MkdirAll(filepath.Dir(path), 0700)
err := os.MkdirAll(filepath.Dir(path), 0o700)
require.NoError(t, err)
f, err := os.Create(path)
require.NoError(t, err)
@ -507,6 +510,59 @@ func TestPipelineNotebookDoesNotExistError(t *testing.T) {
assert.EqualError(t, diags.Error(), "notebook ./doesnt_exist.py not found")
}
func TestPipelineNotebookDoesNotExistErrorWithoutExtension(t *testing.T) {
for _, ext := range []string{
".py",
".r",
".scala",
".sql",
".ipynb",
"",
} {
t.Run("case_"+ext, func(t *testing.T) {
dir := t.TempDir()
if ext != "" {
touchEmptyFile(t, filepath.Join(dir, "foo"+ext))
}
b := &bundle.Bundle{
SyncRootPath: dir,
SyncRoot: vfs.MustNew(dir),
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"pipeline": {
PipelineSpec: &pipelines.PipelineSpec{
Libraries: []pipelines.PipelineLibrary{
{
Notebook: &pipelines.NotebookLibrary{
Path: "./foo",
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "fake.yml")}})
diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
if ext == "" {
assert.EqualError(t, diags.Error(), `notebook ./foo not found. Local notebook references are expected
to contain one of the following file extensions: [.py, .r, .scala, .sql, .ipynb]`)
} else {
assert.EqualError(t, diags.Error(), fmt.Sprintf(`notebook ./foo not found. Did you mean ./foo%s?
Local notebook references are expected to contain one of the following
file extensions: [.py, .r, .scala, .sql, .ipynb]`, ext))
}
})
}
}
func TestPipelineFileDoesNotExistError(t *testing.T) {
dir := t.TempDir()
@ -787,3 +843,163 @@ func TestTranslatePathWithComplexVariables(t *testing.T) {
b.Config.Resources.Jobs["job"].Tasks[0].Libraries[0].Whl,
)
}
func TestTranslatePathsWithSourceLinkedDeployment(t *testing.T) {
if runtime.GOOS == "windows" {
t.Skip("this test is not applicable on Windows because source-linked mode works only in the Databricks Workspace")
}
dir := t.TempDir()
touchNotebookFile(t, filepath.Join(dir, "my_job_notebook.py"))
touchNotebookFile(t, filepath.Join(dir, "my_pipeline_notebook.py"))
touchEmptyFile(t, filepath.Join(dir, "my_python_file.py"))
touchEmptyFile(t, filepath.Join(dir, "dist", "task.jar"))
touchEmptyFile(t, filepath.Join(dir, "requirements.txt"))
enabled := true
b := &bundle.Bundle{
SyncRootPath: dir,
SyncRoot: vfs.MustNew(dir),
Config: config.Root{
Workspace: config.Workspace{
FilePath: "/bundle",
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "my_job_notebook.py",
},
Libraries: []compute.Library{
{Whl: "./dist/task.whl"},
},
},
{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "/Users/jane.doe@databricks.com/absolute_remote.py",
},
},
{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "my_job_notebook.py",
},
Libraries: []compute.Library{
{Requirements: "requirements.txt"},
},
},
{
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: "my_python_file.py",
},
},
{
SparkJarTask: &jobs.SparkJarTask{
MainClassName: "HelloWorld",
},
Libraries: []compute.Library{
{Jar: "./dist/task.jar"},
},
},
{
SparkJarTask: &jobs.SparkJarTask{
MainClassName: "HelloWorldRemote",
},
Libraries: []compute.Library{
{Jar: "dbfs:/bundle/dist/task_remote.jar"},
},
},
},
},
},
},
Pipelines: map[string]*resources.Pipeline{
"pipeline": {
PipelineSpec: &pipelines.PipelineSpec{
Libraries: []pipelines.PipelineLibrary{
{
Notebook: &pipelines.NotebookLibrary{
Path: "my_pipeline_notebook.py",
},
},
{
Notebook: &pipelines.NotebookLibrary{
Path: "/Users/jane.doe@databricks.com/absolute_remote.py",
},
},
{
File: &pipelines.FileLibrary{
Path: "my_python_file.py",
},
},
},
},
},
},
},
Presets: config.Presets{
SourceLinkedDeployment: &enabled,
},
},
}
bundletest.SetLocation(b, ".", []dyn.Location{{File: filepath.Join(dir, "resource.yml")}})
diags := bundle.Apply(context.Background(), b, mutator.TranslatePaths())
require.NoError(t, diags.Error())
// updated to source path
assert.Equal(
t,
filepath.Join(dir, "my_job_notebook"),
b.Config.Resources.Jobs["job"].Tasks[0].NotebookTask.NotebookPath,
)
assert.Equal(
t,
filepath.Join(dir, "requirements.txt"),
b.Config.Resources.Jobs["job"].Tasks[2].Libraries[0].Requirements,
)
assert.Equal(
t,
filepath.Join(dir, "my_python_file.py"),
b.Config.Resources.Jobs["job"].Tasks[3].SparkPythonTask.PythonFile,
)
assert.Equal(
t,
filepath.Join(dir, "my_pipeline_notebook"),
b.Config.Resources.Pipelines["pipeline"].Libraries[0].Notebook.Path,
)
assert.Equal(
t,
filepath.Join(dir, "my_python_file.py"),
b.Config.Resources.Pipelines["pipeline"].Libraries[2].File.Path,
)
// left as is
assert.Equal(
t,
filepath.Join("dist", "task.whl"),
b.Config.Resources.Jobs["job"].Tasks[0].Libraries[0].Whl,
)
assert.Equal(
t,
"/Users/jane.doe@databricks.com/absolute_remote.py",
b.Config.Resources.Jobs["job"].Tasks[1].NotebookTask.NotebookPath,
)
assert.Equal(
t,
filepath.Join("dist", "task.jar"),
b.Config.Resources.Jobs["job"].Tasks[4].Libraries[0].Jar,
)
assert.Equal(
t,
"dbfs:/bundle/dist/task_remote.jar",
b.Config.Resources.Jobs["job"].Tasks[5].Libraries[0].Jar,
)
assert.Equal(
t,
"/Users/jane.doe@databricks.com/absolute_remote.py",
b.Config.Resources.Pipelines["pipeline"].Libraries[1].Notebook.Path,
)
}

View File

@ -15,8 +15,7 @@ func VerifyCliVersion() bundle.Mutator {
return &verifyCliVersion{}
}
type verifyCliVersion struct {
}
type verifyCliVersion struct{}
func (v *verifyCliVersion) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
// No constraints specified, skip the check.

View File

@ -1,7 +1,9 @@
package config
const Paused = "PAUSED"
const Unpaused = "UNPAUSED"
const (
Paused = "PAUSED"
Unpaused = "UNPAUSED"
)
type Presets struct {
// NamePrefix to prepend to all resource names.
@ -17,6 +19,11 @@ type Presets struct {
// JobsMaxConcurrentRuns is the default value for the max concurrent runs of jobs.
JobsMaxConcurrentRuns int `json:"jobs_max_concurrent_runs,omitempty"`
// SourceLinkedDeployment indicates whether source-linked deployment is enabled. Works only in the Databricks Workspace.
// When set to true, resources created during deployment will point to source files in the workspace instead of their workspace copies.
// File synchronization to ${workspace.file_path} is skipped.
SourceLinkedDeployment *bool `json:"source_linked_deployment,omitempty"`
// Tags to add to all resources.
Tags map[string]string `json:"tags,omitempty"`
}

View File
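
SourceLinkedDeployment is a *bool so that an unset preset is distinguishable from an explicit false. A minimal sketch of the nil-safe check and the path decision it drives in the translate-paths hunk earlier (the helper names are illustrative, not the bundle's actual API):

```
package main

import (
	"fmt"
	"path"
)

// isExplicitlyEnabled is a nil-safe read of an optional *bool preset:
// nil means "unset", which is treated as disabled.
func isExplicitlyEnabled(flag *bool) bool {
	return flag != nil && *flag
}

// remotePathFor picks the root that relative paths are joined onto:
// the local sync root when source-linked deployment is enabled,
// otherwise the workspace file path.
func remotePathFor(sourceLinked *bool, syncRoot, workspaceFilePath, relPath string) string {
	root := workspaceFilePath
	if isExplicitlyEnabled(sourceLinked) {
		root = syncRoot
	}
	return path.Join(root, relPath)
}

func main() {
	enabled := true
	fmt.Println(remotePathFor(&enabled, "/Workspace/Users/me/src", "/bundle/files", "nb.py"))
	fmt.Println(remotePathFor(nil, "/Workspace/Users/me/src", "/bundle/files", "nb.py"))
}
```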

@ -20,6 +20,7 @@ type Resources struct {
RegisteredModels map[string]*resources.RegisteredModel `json:"registered_models,omitempty"`
QualityMonitors map[string]*resources.QualityMonitor `json:"quality_monitors,omitempty"`
Schemas map[string]*resources.Schema `json:"schemas,omitempty"`
Volumes map[string]*resources.Volume `json:"volumes,omitempty"`
Clusters map[string]*resources.Cluster `json:"clusters,omitempty"`
Dashboards map[string]*resources.Dashboard `json:"dashboards,omitempty"`
}
@ -41,6 +42,9 @@ type ConfigResource interface {
// InitializeURL initializes the URL field of the resource.
InitializeURL(baseURL url.URL)
// IsNil returns true if the resource is nil, for example, when it was removed from the bundle.
IsNil() bool
}
// ResourceGroup represents a group of resources of the same type.
@ -57,6 +61,9 @@ func collectResourceMap[T ConfigResource](
) ResourceGroup {
resources := make(map[string]ConfigResource)
for key, resource := range input {
if resource.IsNil() {
continue
}
resources[key] = resource
}
return ResourceGroup{
@ -79,6 +86,7 @@ func (r *Resources) AllResources() []ResourceGroup {
collectResourceMap(descriptions["schemas"], r.Schemas),
collectResourceMap(descriptions["clusters"], r.Clusters),
collectResourceMap(descriptions["dashboards"], r.Dashboards),
collectResourceMap(descriptions["volumes"], r.Volumes),
}
}
@ -183,5 +191,11 @@ func SupportedResources() map[string]ResourceDescription {
SingularTitle: "Dashboard",
PluralTitle: "Dashboards",
},
"volumes": {
SingularName: "volume",
PluralName: "volumes",
SingularTitle: "Volume",
PluralTitle: "Volumes",
},
}
}

View File
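
The IsNil method added to ConfigResource lets resource grouping skip entries whose payload was nulled out. The pattern, reduced to a self-contained sketch with stand-in types:

```
package main

import "fmt"

type nilable interface {
	IsNil() bool
}

type jobSettings struct{ Name string }

type job struct{ settings *jobSettings }

// IsNil reports whether the resource lost its payload, e.g. when a
// target override removed it by setting the resource to null.
func (j *job) IsNil() bool { return j.settings == nil }

// collect drops nil resources before grouping, mirroring the skip in
// collectResourceMap above.
func collect[T nilable](input map[string]T) map[string]nilable {
	out := make(map[string]nilable)
	for key, r := range input {
		if r.IsNil() {
			continue
		}
		out[key] = r
	}
	return out
}

func main() {
	jobs := map[string]*job{
		"kept":    {settings: &jobSettings{Name: "nightly"}},
		"removed": {},
	}
	fmt.Println(len(collect(jobs))) // prints 1
}
```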

@ -56,3 +56,7 @@ func (s *Cluster) GetName() string {
func (s *Cluster) GetURL() string {
return s.URL
}
func (s *Cluster) IsNil() bool {
return s.ClusterSpec == nil
}

View File

@ -79,3 +79,7 @@ func (r *Dashboard) GetName() string {
func (r *Dashboard) GetURL() string {
return r.URL
}
func (r *Dashboard) IsNil() bool {
return r.Dashboard == nil
}

View File

@ -63,3 +63,7 @@ func (j *Job) GetName() string {
func (j *Job) GetURL() string {
return j.URL
}
func (j *Job) IsNil() bool {
return j.JobSettings == nil
}

View File

@ -58,3 +58,7 @@ func (s *MlflowExperiment) GetName() string {
func (s *MlflowExperiment) GetURL() string {
return s.URL
}
func (s *MlflowExperiment) IsNil() bool {
return s.Experiment == nil
}

View File

@ -58,3 +58,7 @@ func (s *MlflowModel) GetName() string {
func (s *MlflowModel) GetURL() string {
return s.URL
}
func (s *MlflowModel) IsNil() bool {
return s.Model == nil
}

View File

@ -66,3 +66,7 @@ func (s *ModelServingEndpoint) GetName() string {
func (s *ModelServingEndpoint) GetURL() string {
return s.URL
}
func (s *ModelServingEndpoint) IsNil() bool {
return s.CreateServingEndpoint == nil
}

View File

@ -58,3 +58,7 @@ func (p *Pipeline) GetName() string {
func (s *Pipeline) GetURL() string {
return s.URL
}
func (s *Pipeline) IsNil() bool {
return s.PipelineSpec == nil
}

View File

@ -62,3 +62,7 @@ func (s *QualityMonitor) GetName() string {
func (s *QualityMonitor) GetURL() string {
return s.URL
}
func (s *QualityMonitor) IsNil() bool {
return s.CreateMonitor == nil
}

View File

@ -68,3 +68,7 @@ func (s *RegisteredModel) GetName() string {
func (s *RegisteredModel) GetURL() string {
return s.URL
}
func (s *RegisteredModel) IsNil() bool {
return s.CreateRegisteredModelRequest == nil
}

View File

@ -56,3 +56,7 @@ func (s *Schema) UnmarshalJSON(b []byte) error {
func (s Schema) MarshalJSON() ([]byte, error) {
return marshal.Marshal(s)
}
func (s *Schema) IsNil() bool {
return s.CreateSchema == nil
}

View File

@ -0,0 +1,62 @@
package resources
import (
"context"
"fmt"
"net/url"
"strings"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/marshal"
"github.com/databricks/databricks-sdk-go/service/catalog"
)
type Volume struct {
// List of grants to apply on this volume.
Grants []Grant `json:"grants,omitempty"`
// Full name of the volume (catalog_name.schema_name.volume_name). This value is read from
// the terraform state after deployment succeeds.
ID string `json:"id,omitempty" bundle:"readonly"`
*catalog.CreateVolumeRequestContent
ModifiedStatus ModifiedStatus `json:"modified_status,omitempty" bundle:"internal"`
URL string `json:"url,omitempty" bundle:"internal"`
}
func (v *Volume) UnmarshalJSON(b []byte) error {
return marshal.Unmarshal(b, v)
}
func (v Volume) MarshalJSON() ([]byte, error) {
return marshal.Marshal(v)
}
func (v *Volume) Exists(ctx context.Context, w *databricks.WorkspaceClient, id string) (bool, error) {
return false, fmt.Errorf("volume.Exists() is not supported")
}
func (v *Volume) TerraformResourceName() string {
return "databricks_volume"
}
func (v *Volume) InitializeURL(baseURL url.URL) {
if v.ID == "" {
return
}
baseURL.Path = fmt.Sprintf("explore/data/volumes/%s", strings.ReplaceAll(v.ID, ".", "/"))
v.URL = baseURL.String()
}
func (v *Volume) GetURL() string {
return v.URL
}
func (v *Volume) GetName() string {
return v.Name
}
func (v *Volume) IsNil() bool {
return v.CreateVolumeRequestContent == nil
}

View File
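
InitializeURL derives the volume page from the three-part ID by swapping dots for slashes under explore/data/volumes. A quick standalone check of that transformation (the host is a made-up example):

```
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	base := url.URL{Scheme: "https", Host: "adb-1234.azuredatabricks.net"}
	id := "main.default.my_volume" // catalog_name.schema_name.volume_name
	base.Path = fmt.Sprintf("explore/data/volumes/%s", strings.ReplaceAll(id, ".", "/"))
	fmt.Println(base.String())
	// https://adb-1234.azuredatabricks.net/explore/data/volumes/main/default/my_volume
}
```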

@ -49,7 +49,8 @@ func TestCustomMarshallerIsImplemented(t *testing.T) {
// Eg: resource.Job implements MarshalJSON
v := reflect.Zero(vt.Elem()).Interface()
assert.NotPanics(t, func() {
json.Marshal(v)
_, err := json.Marshal(v)
assert.NoError(t, err)
}, "Resource %s does not have a custom marshaller", field.Name)
// Unmarshalling a *resourceStruct will panic if the resource does not have a custom unmarshaller
@ -58,7 +59,8 @@ func TestCustomMarshallerIsImplemented(t *testing.T) {
// Eg: *resource.Job implements UnmarshalJSON
v = reflect.New(vt.Elem()).Interface()
assert.NotPanics(t, func() {
json.Unmarshal([]byte("{}"), v)
err := json.Unmarshal([]byte("{}"), v)
assert.NoError(t, err)
}, "Resource %s does not have a custom unmarshaller", field.Name)
}
}

View File

@ -100,7 +100,7 @@ func TestRootMergeTargetOverridesWithMode(t *testing.T) {
},
},
}
root.initializeDynamicValue()
require.NoError(t, root.initializeDynamicValue())
require.NoError(t, root.MergeTargetOverrides("development"))
assert.Equal(t, Development, root.Bundle.Mode)
}
@ -133,7 +133,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
"complex": {
Type: variable.VariableTypeComplex,
Description: "complex var",
Default: map[string]interface{}{
Default: map[string]any{
"key": "value",
},
},
@ -148,7 +148,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
"complex": {
Type: "wrong",
Description: "wrong",
Default: map[string]interface{}{
Default: map[string]any{
"key1": "value1",
},
},
@ -156,7 +156,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
},
},
}
root.initializeDynamicValue()
require.NoError(t, root.initializeDynamicValue())
require.NoError(t, root.MergeTargetOverrides("development"))
assert.Equal(t, "bar", root.Variables["foo"].Default)
assert.Equal(t, "foo var", root.Variables["foo"].Description)
@ -164,11 +164,10 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
assert.Equal(t, "foo2", root.Variables["foo2"].Default)
assert.Equal(t, "foo2 var", root.Variables["foo2"].Description)
assert.Equal(t, map[string]interface{}{
assert.Equal(t, map[string]any{
"key1": "value1",
}, root.Variables["complex"].Default)
assert.Equal(t, "complex var", root.Variables["complex"].Description)
}
func TestIsFullVariableOverrideDef(t *testing.T) {
@ -252,5 +251,4 @@ func TestIsFullVariableOverrideDef(t *testing.T) {
for i, tc := range testCases {
assert.Equal(t, tc.expected, isFullVariableOverrideDef(tc.value), "test case %d", i)
}
}

View File

@ -13,14 +13,19 @@ func FilesToSync() bundle.ReadOnlyMutator {
return &filesToSync{}
}
type filesToSync struct {
}
type filesToSync struct{}
func (v *filesToSync) Name() string {
return "validate:files_to_sync"
}
func (v *filesToSync) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
// The user may be intentional about not synchronizing any files.
// In this case, we should not show any warnings.
if len(rb.Config().Sync.Paths) == 0 {
return nil
}
sync, err := files.GetSync(ctx, rb)
if err != nil {
return diag.FromErr(err)
@ -31,6 +36,7 @@ func (v *filesToSync) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.
return diag.FromErr(err)
}
// If there are files to sync, we don't need to show any warnings.
if len(fl) != 0 {
return nil
}

View File

@ -0,0 +1,107 @@
package validate
import (
"context"
"path/filepath"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/vfs"
sdkconfig "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/iam"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestFilesToSync_NoPaths(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Sync: config.Sync{
Paths: []string{},
},
},
}
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
assert.Empty(t, diags)
}
func setupBundleForFilesToSyncTest(t *testing.T) *bundle.Bundle {
dir := t.TempDir()
testutil.Touch(t, dir, "file1")
testutil.Touch(t, dir, "file2")
b := &bundle.Bundle{
BundleRootPath: dir,
BundleRoot: vfs.MustNew(dir),
SyncRootPath: dir,
SyncRoot: vfs.MustNew(dir),
WorktreeRoot: vfs.MustNew(dir),
Config: config.Root{
Bundle: config.Bundle{
Target: "default",
},
Workspace: config.Workspace{
FilePath: "/this/doesnt/matter",
CurrentUser: &config.User{
User: &iam.User{},
},
},
Sync: config.Sync{
// Paths are relative to [SyncRootPath].
Paths: []string{"."},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdkconfig.Config{
Host: "https://foo.com",
}
// The initialization logic in [sync.New] performs a check on the destination path.
// Removing this check at initialization time is TBD.
m.GetMockWorkspaceAPI().EXPECT().GetStatusByPath(mock.Anything, "/this/doesnt/matter").Return(&workspace.ObjectInfo{
ObjectType: workspace.ObjectTypeDirectory,
}, nil)
b.SetWorkpaceClient(m.WorkspaceClient)
return b
}
func TestFilesToSync_EverythingIgnored(t *testing.T) {
b := setupBundleForFilesToSyncTest(t)
// Ignore all files.
testutil.WriteFile(t, filepath.Join(b.BundleRootPath, ".gitignore"), "*\n.*\n")
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
require.Len(t, diags, 1)
assert.Equal(t, diag.Warning, diags[0].Severity)
assert.Equal(t, "There are no files to sync, please check your .gitignore", diags[0].Summary)
}
func TestFilesToSync_EverythingExcluded(t *testing.T) {
b := setupBundleForFilesToSyncTest(t)
// Exclude all files.
b.Config.Sync.Exclude = []string{"*"}
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
require.Len(t, diags, 1)
assert.Equal(t, diag.Warning, diags[0].Severity)
assert.Equal(t, "There are no files to sync, please check your .gitignore and sync.exclude configuration", diags[0].Summary)
}

View File

@ -15,8 +15,7 @@ import (
"golang.org/x/sync/errgroup"
)
type folderPermissions struct {
}
type folderPermissions struct{}
// Apply implements bundle.ReadOnlyMutator.
func (f *folderPermissions) Apply(ctx context.Context, b bundle.ReadOnlyBundle) diag.Diagnostics {

View File

@ -13,8 +13,7 @@ func JobClusterKeyDefined() bundle.ReadOnlyMutator {
return &jobClusterKeyDefined{}
}
type jobClusterKeyDefined struct {
}
type jobClusterKeyDefined struct{}
func (v *jobClusterKeyDefined) Name() string {
return "validate:job_cluster_key_defined"

View File

@ -17,8 +17,7 @@ func JobTaskClusterSpec() bundle.ReadOnlyMutator {
return &jobTaskClusterSpec{}
}
type jobTaskClusterSpec struct {
}
type jobTaskClusterSpec struct{}
func (v *jobTaskClusterSpec) Name() string {
return "validate:job_task_cluster_spec"

View File

@ -0,0 +1,137 @@
package validate
import (
"context"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/log"
)
// Validates that any single node clusters defined in the bundle are correctly configured.
func SingleNodeCluster() bundle.ReadOnlyMutator {
return &singleNodeCluster{}
}
type singleNodeCluster struct{}
func (m *singleNodeCluster) Name() string {
return "validate:SingleNodeCluster"
}
const singleNodeWarningDetail = `num_workers should be 0 only for single-node clusters. To create a
valid single node cluster, please ensure that the following properties
are correctly set in the cluster specification:
spark_conf:
spark.databricks.cluster.profile: singleNode
spark.master: local[*]
custom_tags:
ResourceClass: SingleNode
`
const singleNodeWarningSummary = `Single node cluster is not correctly configured`
func showSingleNodeClusterWarning(ctx context.Context, v dyn.Value) bool {
// Check if the user has explicitly set the num_workers to 0. Skip the warning
// if that's not the case.
numWorkers, ok := v.Get("num_workers").AsInt()
if !ok || numWorkers > 0 {
return false
}
// Convenient type that contains the common fields from compute.ClusterSpec and
// pipelines.PipelineCluster that we are interested in.
type ClusterConf struct {
SparkConf map[string]string `json:"spark_conf"`
CustomTags map[string]string `json:"custom_tags"`
PolicyId string `json:"policy_id"`
}
conf := &ClusterConf{}
err := convert.ToTyped(conf, v)
if err != nil {
return false
}
// If the policy id is set, we don't want to show the warning. This is because
// the user might have configured `spark_conf` and `custom_tags` correctly
// in their cluster policy.
if conf.PolicyId != "" {
return false
}
profile, ok := conf.SparkConf["spark.databricks.cluster.profile"]
if !ok {
log.Debugf(ctx, "spark_conf spark.databricks.cluster.profile not found in single-node cluster spec")
return true
}
if profile != "singleNode" {
log.Debugf(ctx, "spark_conf spark.databricks.cluster.profile is not singleNode in single-node cluster spec: %s", profile)
return true
}
master, ok := conf.SparkConf["spark.master"]
if !ok {
log.Debugf(ctx, "spark_conf spark.master not found in single-node cluster spec")
return true
}
if !strings.HasPrefix(master, "local") {
log.Debugf(ctx, "spark_conf spark.master does not start with local in single-node cluster spec: %s", master)
return true
}
resourceClass, ok := conf.CustomTags["ResourceClass"]
if !ok {
log.Debugf(ctx, "custom_tag ResourceClass not found in single-node cluster spec")
return true
}
if resourceClass != "SingleNode" {
log.Debugf(ctx, "custom_tag ResourceClass is not SingleNode in single-node cluster spec: %s", resourceClass)
return true
}
return false
}
func (m *singleNodeCluster) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
diags := diag.Diagnostics{}
patterns := []dyn.Pattern{
// Interactive clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("clusters"), dyn.AnyKey()),
// Job clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("job_clusters"), dyn.AnyIndex(), dyn.Key("new_cluster")),
// Job task clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("tasks"), dyn.AnyIndex(), dyn.Key("new_cluster")),
// Job for each task clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("tasks"), dyn.AnyIndex(), dyn.Key("for_each_task"), dyn.Key("task"), dyn.Key("new_cluster")),
// Pipeline clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("pipelines"), dyn.AnyKey(), dyn.Key("clusters"), dyn.AnyIndex()),
}
for _, p := range patterns {
_, err := dyn.MapByPattern(rb.Config().Value(), p, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
warning := diag.Diagnostic{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: v.Locations(),
Paths: []dyn.Path{p},
}
if showSingleNodeClusterWarning(ctx, v) {
diags = append(diags, warning)
}
return v, nil
})
if err != nil {
log.Debugf(ctx, "Error while applying single node cluster validation: %s", err)
}
}
return diags
}

View File
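
For reference, a spec that clears every branch of showSingleNodeClusterWarning: num_workers at 0 together with the singleNode profile, a local master, and the SingleNode resource class. A sketch using the SDK's compute types:

```
package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	// Values the validator looks for. Note the tests below set
	// num_workers through dyn.Value because in the typed config a
	// zero value is indistinguishable from "unset".
	spec := compute.ClusterSpec{
		NumWorkers: 0,
		SparkConf: map[string]string{
			"spark.databricks.cluster.profile": "singleNode",
			"spark.master":                     "local[*]",
		},
		CustomTags: map[string]string{"ResourceClass": "SingleNode"},
	}
	fmt.Printf("%+v\n", spec)
}
```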

@ -0,0 +1,565 @@
package validate
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/assert"
)
func failCases() []struct {
name string
sparkConf map[string]string
customTags map[string]string
} {
return []struct {
name string
sparkConf map[string]string
customTags map[string]string
}{
{
name: "no tags or conf",
},
{
name: "no tags",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
},
{
name: "no conf",
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid spark cluster profile",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "invalid",
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid spark.master",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "invalid",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid tags",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "invalid"},
},
{
name: "missing ResourceClass tag",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{"what": "ever"},
},
{
name: "missing spark.master",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "missing spark.databricks.cluster.profile",
sparkConf: map[string]string{
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
}
}
func TestValidateSingleNodeClusterFailForInteractiveClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"foo": {
ClusterSpec: &compute.ClusterSpec{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.clusters.foo", []dyn.Location{{File: "a.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.clusters.foo.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "a.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.NewPath(dyn.Key("resources"), dyn.Key("clusters"), dyn.Key("foo"))},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
JobClusters: []jobs.JobCluster{
{
NewCluster: compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.job_clusters[0].new_cluster", []dyn.Location{{File: "b.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.job_clusters[0].new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "b.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.job_clusters[0].new_cluster")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobTaskClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.tasks[0].new_cluster", []dyn.Location{{File: "c.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "c.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.tasks[0].new_cluster")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForPipelineClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"foo": {
PipelineSpec: &pipelines.PipelineSpec{
Clusters: []pipelines.PipelineCluster{
{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.pipelines.foo.clusters[0]", []dyn.Location{{File: "d.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.pipelines.foo.clusters[0].num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "d.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.pipelines.foo.clusters[0]")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobForEachTaskCluster(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster", []dyn.Location{{File: "e.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "e.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.tasks[0].for_each_task.task.new_cluster")},
},
}, diags)
})
}
}
func passCases() []struct {
name string
numWorkers *int
sparkConf map[string]string
customTags map[string]string
policyId string
} {
zero := 0
one := 1
return []struct {
name string
numWorkers *int
sparkConf map[string]string
customTags map[string]string
policyId string
}{
{
name: "single node cluster",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{
"ResourceClass": "SingleNode",
},
numWorkers: &zero,
},
{
name: "num workers is not zero",
numWorkers: &one,
},
{
name: "num workers is not set",
},
{
name: "policy id is not empty",
policyId: "policy-abc",
numWorkers: &zero,
},
}
}
func TestValidateSingleNodeClusterPassInteractiveClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"foo": {
ClusterSpec: &compute.ClusterSpec{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.clusters.foo.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
JobClusters: []jobs.JobCluster{
{
NewCluster: compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.job_clusters[0].new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobTaskClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassPipelineClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"foo": {
PipelineSpec: &pipelines.PipelineSpec{
Clusters: []pipelines.PipelineCluster{
{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.pipelines.foo.clusters[0].num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobForEachTaskCluster(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}

View File

@ -8,8 +8,7 @@ import (
"github.com/databricks/cli/libs/dyn"
)
type validate struct {
}
type validate struct{}
type location struct {
path string
@ -36,6 +35,7 @@ func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics
ValidateSyncPatterns(),
JobTaskClusterSpec(),
ValidateFolderPermissions(),
SingleNodeCluster(),
))
}

View File

@ -17,8 +17,7 @@ func ValidateSyncPatterns() bundle.ReadOnlyMutator {
return &validateSyncPatterns{}
}
type validateSyncPatterns struct {
}
type validateSyncPatterns struct{}
func (v *validateSyncPatterns) Name() string {
return "validate:validate_sync_patterns"

View File

@ -1,11 +1,8 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package variable
import (
"context"
"fmt"
"strings"
"github.com/databricks/databricks-sdk-go"
)
@ -25,6 +22,8 @@ type Lookup struct {
Metastore string `json:"metastore,omitempty"`
NotificationDestination string `json:"notification_destination,omitempty"`
Pipeline string `json:"pipeline,omitempty"`
Query string `json:"query,omitempty"`
@ -34,323 +33,78 @@ type Lookup struct {
Warehouse string `json:"warehouse,omitempty"`
}
func LookupFromMap(m map[string]any) *Lookup {
l := &Lookup{}
if v, ok := m["alert"]; ok {
l.Alert = v.(string)
type resolver interface {
// Resolve resolves the underlying entity's ID.
Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error)
// String returns a human-readable representation of the resolver.
String() string
}
func (l *Lookup) constructResolver() (resolver, error) {
var resolvers []resolver
if l.Alert != "" {
resolvers = append(resolvers, resolveAlert{name: l.Alert})
}
if v, ok := m["cluster_policy"]; ok {
l.ClusterPolicy = v.(string)
if l.ClusterPolicy != "" {
resolvers = append(resolvers, resolveClusterPolicy{name: l.ClusterPolicy})
}
if v, ok := m["cluster"]; ok {
l.Cluster = v.(string)
if l.Cluster != "" {
resolvers = append(resolvers, resolveCluster{name: l.Cluster})
}
if v, ok := m["dashboard"]; ok {
l.Dashboard = v.(string)
if l.Dashboard != "" {
resolvers = append(resolvers, resolveDashboard{name: l.Dashboard})
}
if v, ok := m["instance_pool"]; ok {
l.InstancePool = v.(string)
if l.InstancePool != "" {
resolvers = append(resolvers, resolveInstancePool{name: l.InstancePool})
}
if v, ok := m["job"]; ok {
l.Job = v.(string)
if l.Job != "" {
resolvers = append(resolvers, resolveJob{name: l.Job})
}
if v, ok := m["metastore"]; ok {
l.Metastore = v.(string)
if l.Metastore != "" {
resolvers = append(resolvers, resolveMetastore{name: l.Metastore})
}
if v, ok := m["pipeline"]; ok {
l.Pipeline = v.(string)
if l.NotificationDestination != "" {
resolvers = append(resolvers, resolveNotificationDestination{name: l.NotificationDestination})
}
if v, ok := m["query"]; ok {
l.Query = v.(string)
if l.Pipeline != "" {
resolvers = append(resolvers, resolvePipeline{name: l.Pipeline})
}
if v, ok := m["service_principal"]; ok {
l.ServicePrincipal = v.(string)
if l.Query != "" {
resolvers = append(resolvers, resolveQuery{name: l.Query})
}
if v, ok := m["warehouse"]; ok {
l.Warehouse = v.(string)
if l.ServicePrincipal != "" {
resolvers = append(resolvers, resolveServicePrincipal{name: l.ServicePrincipal})
}
if l.Warehouse != "" {
resolvers = append(resolvers, resolveWarehouse{name: l.Warehouse})
}
return l
switch len(resolvers) {
case 0:
return nil, fmt.Errorf("no valid lookup fields provided")
case 1:
return resolvers[0], nil
default:
return nil, fmt.Errorf("exactly one lookup field must be provided")
}
}
func (l *Lookup) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
if err := l.validate(); err != nil {
r, err := l.constructResolver()
if err != nil {
return "", err
}
r := allResolvers()
if l.Alert != "" {
return r.Alert(ctx, w, l.Alert)
}
if l.ClusterPolicy != "" {
return r.ClusterPolicy(ctx, w, l.ClusterPolicy)
}
if l.Cluster != "" {
return r.Cluster(ctx, w, l.Cluster)
}
if l.Dashboard != "" {
return r.Dashboard(ctx, w, l.Dashboard)
}
if l.InstancePool != "" {
return r.InstancePool(ctx, w, l.InstancePool)
}
if l.Job != "" {
return r.Job(ctx, w, l.Job)
}
if l.Metastore != "" {
return r.Metastore(ctx, w, l.Metastore)
}
if l.Pipeline != "" {
return r.Pipeline(ctx, w, l.Pipeline)
}
if l.Query != "" {
return r.Query(ctx, w, l.Query)
}
if l.ServicePrincipal != "" {
return r.ServicePrincipal(ctx, w, l.ServicePrincipal)
}
if l.Warehouse != "" {
return r.Warehouse(ctx, w, l.Warehouse)
}
return "", fmt.Errorf("no valid lookup fields provided")
return r.Resolve(ctx, w)
}
func (l *Lookup) String() string {
if l.Alert != "" {
return fmt.Sprintf("alert: %s", l.Alert)
}
if l.ClusterPolicy != "" {
return fmt.Sprintf("cluster-policy: %s", l.ClusterPolicy)
}
if l.Cluster != "" {
return fmt.Sprintf("cluster: %s", l.Cluster)
}
if l.Dashboard != "" {
return fmt.Sprintf("dashboard: %s", l.Dashboard)
}
if l.InstancePool != "" {
return fmt.Sprintf("instance-pool: %s", l.InstancePool)
}
if l.Job != "" {
return fmt.Sprintf("job: %s", l.Job)
}
if l.Metastore != "" {
return fmt.Sprintf("metastore: %s", l.Metastore)
}
if l.Pipeline != "" {
return fmt.Sprintf("pipeline: %s", l.Pipeline)
}
if l.Query != "" {
return fmt.Sprintf("query: %s", l.Query)
}
if l.ServicePrincipal != "" {
return fmt.Sprintf("service-principal: %s", l.ServicePrincipal)
}
if l.Warehouse != "" {
return fmt.Sprintf("warehouse: %s", l.Warehouse)
r, _ := l.constructResolver()
if r == nil {
return ""
}
return ""
}
func (l *Lookup) validate() error {
// Validate that only one field is set
count := 0
if l.Alert != "" {
count++
}
if l.ClusterPolicy != "" {
count++
}
if l.Cluster != "" {
count++
}
if l.Dashboard != "" {
count++
}
if l.InstancePool != "" {
count++
}
if l.Job != "" {
count++
}
if l.Metastore != "" {
count++
}
if l.Pipeline != "" {
count++
}
if l.Query != "" {
count++
}
if l.ServicePrincipal != "" {
count++
}
if l.Warehouse != "" {
count++
}
if count != 1 {
return fmt.Errorf("exactly one lookup field must be provided")
}
if strings.Contains(l.String(), "${var") {
return fmt.Errorf("lookup fields cannot contain variable references")
}
return nil
}
type resolverFunc func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error)
type resolvers struct {
Alert resolverFunc
ClusterPolicy resolverFunc
Cluster resolverFunc
Dashboard resolverFunc
InstancePool resolverFunc
Job resolverFunc
Metastore resolverFunc
Pipeline resolverFunc
Query resolverFunc
ServicePrincipal resolverFunc
Warehouse resolverFunc
}
func allResolvers() *resolvers {
r := &resolvers{}
r.Alert = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Alert"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Alerts.GetByDisplayName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.Id), nil
}
r.ClusterPolicy = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["ClusterPolicy"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.ClusterPolicies.GetByName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.PolicyId), nil
}
r.Cluster = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Cluster"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Clusters.GetByClusterName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.ClusterId), nil
}
r.Dashboard = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Dashboard"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Dashboards.GetByName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.Id), nil
}
r.InstancePool = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["InstancePool"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.InstancePools.GetByInstancePoolName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.InstancePoolId), nil
}
r.Job = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Job"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Jobs.GetBySettingsName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.JobId), nil
}
r.Metastore = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Metastore"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Metastores.GetByName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.MetastoreId), nil
}
r.Pipeline = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Pipeline"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Pipelines.GetByName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.PipelineId), nil
}
r.Query = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Query"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Queries.GetByDisplayName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.Id), nil
}
r.ServicePrincipal = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["ServicePrincipal"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.ServicePrincipals.GetByDisplayName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.ApplicationId), nil
}
r.Warehouse = func(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
fn, ok := lookupOverrides["Warehouse"]
if ok {
return fn(ctx, w, name)
}
entity, err := w.Warehouses.GetByName(ctx, name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.Id), nil
}
return r
return r.String()
}

View File
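
The refactor replaces the resolverFunc table with one small struct per entity, so adding a lookup type means implementing two methods. A self-contained sketch of the shape, with an in-memory map standing in for the workspace client:

```
package main

import (
	"context"
	"fmt"
)

// resolver mirrors the two-method interface introduced above.
type resolver interface {
	Resolve(ctx context.Context) (string, error)
	String() string
}

// resolveByName is a stand-in implementation: it looks up an entity's
// ID by name in a map instead of calling a workspace API.
type resolveByName struct {
	kind, name string
	ids        map[string]string
}

func (r resolveByName) Resolve(ctx context.Context) (string, error) {
	id, ok := r.ids[r.name]
	if !ok {
		return "", fmt.Errorf("%s named '%s' does not exist", r.kind, r.name)
	}
	return id, nil
}

func (r resolveByName) String() string {
	return fmt.Sprintf("%s: %s", r.kind, r.name)
}

func main() {
	var r resolver = resolveByName{kind: "job", name: "nightly", ids: map[string]string{"nightly": "42"}}
	id, err := r.Resolve(context.Background())
	fmt.Println(r.String(), id, err)
}
```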

@ -0,0 +1,59 @@
package variable
import (
"context"
"reflect"
"testing"
"github.com/stretchr/testify/assert"
)
func TestLookup_Coverage(t *testing.T) {
var lookup Lookup
val := reflect.ValueOf(lookup)
typ := val.Type()
for i := 0; i < val.NumField(); i++ {
field := val.Field(i)
if field.Kind() != reflect.String {
t.Fatalf("Field %s is not a string", typ.Field(i).Name)
}
fieldType := typ.Field(i)
t.Run(fieldType.Name, func(t *testing.T) {
// Use a fresh instance of the struct in each test
var lookup Lookup
// Set the field to a non-empty string
reflect.ValueOf(&lookup).Elem().Field(i).SetString("value")
// Test the [String] function
assert.NotEmpty(t, lookup.String())
})
}
}
func TestLookup_Empty(t *testing.T) {
var lookup Lookup
// Resolve returns an error when no fields are provided
_, err := lookup.Resolve(context.Background(), nil)
assert.ErrorContains(t, err, "no valid lookup fields provided")
// No string representation for an invalid lookup
assert.Empty(t, lookup.String())
}
func TestLookup_Multiple(t *testing.T) {
lookup := Lookup{
Alert: "alert",
Query: "query",
}
// Resolve returns an error when multiple fields are provided
_, err := lookup.Resolve(context.Background(), nil)
assert.ErrorContains(t, err, "exactly one lookup field must be provided")
// No string representation for an invalid lookup
assert.Empty(t, lookup.String())
}

View File

@ -0,0 +1,24 @@
package variable
import (
"context"
"fmt"
"github.com/databricks/databricks-sdk-go"
)
type resolveAlert struct {
name string
}
func (l resolveAlert) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
entity, err := w.Alerts.GetByDisplayName(ctx, l.name)
if err != nil {
return "", err
}
return fmt.Sprint(entity.Id), nil
}
func (l resolveAlert) String() string {
return fmt.Sprintf("alert: %s", l.name)
}

View File

@ -0,0 +1,49 @@
package variable
import (
"context"
"testing"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/sql"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestResolveAlert_ResolveSuccess(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockAlertsAPI()
api.EXPECT().
GetByDisplayName(mock.Anything, "alert").
Return(&sql.ListAlertsResponseAlert{
Id: "1234",
}, nil)
ctx := context.Background()
l := resolveAlert{name: "alert"}
result, err := l.Resolve(ctx, m.WorkspaceClient)
require.NoError(t, err)
assert.Equal(t, "1234", result)
}
func TestResolveAlert_ResolveNotFound(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockAlertsAPI()
api.EXPECT().
GetByDisplayName(mock.Anything, "alert").
Return(nil, &apierr.APIError{StatusCode: 404})
ctx := context.Background()
l := resolveAlert{name: "alert"}
_, err := l.Resolve(ctx, m.WorkspaceClient)
require.ErrorIs(t, err, apierr.ErrNotFound)
}
func TestResolveAlert_String(t *testing.T) {
l := resolveAlert{name: "name"}
assert.Equal(t, "alert: name", l.String())
}

View File

@ -8,19 +8,18 @@ import (
"github.com/databricks/databricks-sdk-go/service/compute"
)
var lookupOverrides = map[string]resolverFunc{
"Cluster": resolveCluster,
type resolveCluster struct {
name string
}
// We added a custom resolver for clusters to filter by cluster source when listing all clusters.
// Without this filtering, listing can take a very long time (5-10 minutes), which leads to lookup timeouts.
func resolveCluster(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
func (l resolveCluster) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
result, err := w.Clusters.ListAll(ctx, compute.ListClustersRequest{
FilterBy: &compute.ListClustersFilterBy{
ClusterSources: []compute.ClusterSource{compute.ClusterSourceApi, compute.ClusterSourceUi},
},
})
if err != nil {
return "", err
}
@ -30,6 +29,8 @@ func resolveCluster(ctx context.Context, w *databricks.WorkspaceClient, name str
key := v.ClusterName
tmp[key] = append(tmp[key], v)
}
name := l.name
alternatives, ok := tmp[name]
if !ok || len(alternatives) == 0 {
return "", fmt.Errorf("cluster named '%s' does not exist", name)
@ -39,3 +40,7 @@ func resolveCluster(ctx context.Context, w *databricks.WorkspaceClient, name str
}
return alternatives[0].ClusterId, nil
}
func (l resolveCluster) String() string {
return fmt.Sprintf("cluster: %s", l.name)
}
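
Read across the elided hunks, the resolver's body stitches together as below; a sketch for readability that reproduces only lines shown in the diff and marks what the diff omits (the method name resolveSketch is illustrative, not the actual Resolve):

package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
    "github.com/databricks/databricks-sdk-go/service/compute"
)

func (l resolveCluster) resolveSketch(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
    result, err := w.Clusters.ListAll(ctx, compute.ListClustersRequest{
        FilterBy: &compute.ListClustersFilterBy{
            ClusterSources: []compute.ClusterSource{compute.ClusterSourceApi, compute.ClusterSourceUi},
        },
    })
    if err != nil {
        return "", err
    }

    // Group clusters by display name; the map setup is elided in the diff.
    tmp := map[string][]compute.ClusterDetails{}
    for _, v := range result {
        key := v.ClusterName
        tmp[key] = append(tmp[key], v)
    }

    name := l.name
    alternatives, ok := tmp[name]
    if !ok || len(alternatives) == 0 {
        return "", fmt.Errorf("cluster named '%s' does not exist", name)
    }
    // ...the diff elides the branch for multiple clusters sharing one name...
    return alternatives[0].ClusterId, nil
}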

View File

@ -0,0 +1,24 @@
package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
)

type resolveClusterPolicy struct {
    name string
}

func (l resolveClusterPolicy) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
    entity, err := w.ClusterPolicies.GetByName(ctx, l.name)
    if err != nil {
        return "", err
    }
    return fmt.Sprint(entity.PolicyId), nil
}

func (l resolveClusterPolicy) String() string {
    return fmt.Sprintf("cluster-policy: %s", l.name)
}

View File

@ -0,0 +1,49 @@
package variable

import (
    "context"
    "testing"

    "github.com/databricks/databricks-sdk-go/apierr"
    "github.com/databricks/databricks-sdk-go/experimental/mocks"
    "github.com/databricks/databricks-sdk-go/service/compute"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"
)

func TestResolveClusterPolicy_ResolveSuccess(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockClusterPoliciesAPI()
    api.EXPECT().
        GetByName(mock.Anything, "policy").
        Return(&compute.Policy{
            PolicyId: "1234",
        }, nil)

    ctx := context.Background()
    l := resolveClusterPolicy{name: "policy"}
    result, err := l.Resolve(ctx, m.WorkspaceClient)
    require.NoError(t, err)
    assert.Equal(t, "1234", result)
}

func TestResolveClusterPolicy_ResolveNotFound(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockClusterPoliciesAPI()
    api.EXPECT().
        GetByName(mock.Anything, "policy").
        Return(nil, &apierr.APIError{StatusCode: 404})

    ctx := context.Background()
    l := resolveClusterPolicy{name: "policy"}
    _, err := l.Resolve(ctx, m.WorkspaceClient)
    require.ErrorIs(t, err, apierr.ErrNotFound)
}

func TestResolveClusterPolicy_String(t *testing.T) {
    l := resolveClusterPolicy{name: "name"}
    assert.Equal(t, "cluster-policy: name", l.String())
}

View File

@ -0,0 +1,50 @@
package variable

import (
    "context"
    "testing"

    "github.com/databricks/databricks-sdk-go/experimental/mocks"
    "github.com/databricks/databricks-sdk-go/service/compute"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"
)

func TestResolveCluster_ResolveSuccess(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockClustersAPI()
    api.EXPECT().
        ListAll(mock.Anything, mock.Anything).
        Return([]compute.ClusterDetails{
            {ClusterId: "1234", ClusterName: "cluster1"},
            {ClusterId: "2345", ClusterName: "cluster2"},
        }, nil)

    ctx := context.Background()
    l := resolveCluster{name: "cluster2"}
    result, err := l.Resolve(ctx, m.WorkspaceClient)
    require.NoError(t, err)
    assert.Equal(t, "2345", result)
}

func TestResolveCluster_ResolveNotFound(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockClustersAPI()
    api.EXPECT().
        ListAll(mock.Anything, mock.Anything).
        Return([]compute.ClusterDetails{}, nil)

    ctx := context.Background()
    l := resolveCluster{name: "cluster"}
    _, err := l.Resolve(ctx, m.WorkspaceClient)
    require.Error(t, err)
    assert.Contains(t, err.Error(), "cluster named 'cluster' does not exist")
}

func TestResolveCluster_String(t *testing.T) {
    l := resolveCluster{name: "name"}
    assert.Equal(t, "cluster: name", l.String())
}

View File

@ -0,0 +1,24 @@
package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
)

type resolveDashboard struct {
    name string
}

func (l resolveDashboard) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
    entity, err := w.Dashboards.GetByName(ctx, l.name)
    if err != nil {
        return "", err
    }
    return fmt.Sprint(entity.Id), nil
}

func (l resolveDashboard) String() string {
    return fmt.Sprintf("dashboard: %s", l.name)
}

View File

@ -0,0 +1,49 @@
package variable

import (
    "context"
    "testing"

    "github.com/databricks/databricks-sdk-go/apierr"
    "github.com/databricks/databricks-sdk-go/experimental/mocks"
    "github.com/databricks/databricks-sdk-go/service/sql"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"
)

func TestResolveDashboard_ResolveSuccess(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockDashboardsAPI()
    api.EXPECT().
        GetByName(mock.Anything, "dashboard").
        Return(&sql.Dashboard{
            Id: "1234",
        }, nil)

    ctx := context.Background()
    l := resolveDashboard{name: "dashboard"}
    result, err := l.Resolve(ctx, m.WorkspaceClient)
    require.NoError(t, err)
    assert.Equal(t, "1234", result)
}

func TestResolveDashboard_ResolveNotFound(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockDashboardsAPI()
    api.EXPECT().
        GetByName(mock.Anything, "dashboard").
        Return(nil, &apierr.APIError{StatusCode: 404})

    ctx := context.Background()
    l := resolveDashboard{name: "dashboard"}
    _, err := l.Resolve(ctx, m.WorkspaceClient)
    require.ErrorIs(t, err, apierr.ErrNotFound)
}

func TestResolveDashboard_String(t *testing.T) {
    l := resolveDashboard{name: "name"}
    assert.Equal(t, "dashboard: name", l.String())
}

View File

@ -0,0 +1,24 @@
package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
)

type resolveInstancePool struct {
    name string
}

func (l resolveInstancePool) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
    entity, err := w.InstancePools.GetByInstancePoolName(ctx, l.name)
    if err != nil {
        return "", err
    }
    return fmt.Sprint(entity.InstancePoolId), nil
}

func (l resolveInstancePool) String() string {
    return fmt.Sprintf("instance-pool: %s", l.name)
}

View File

@ -0,0 +1,49 @@
package variable

import (
    "context"
    "testing"

    "github.com/databricks/databricks-sdk-go/apierr"
    "github.com/databricks/databricks-sdk-go/experimental/mocks"
    "github.com/databricks/databricks-sdk-go/service/compute"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/require"
)

func TestResolveInstancePool_ResolveSuccess(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockInstancePoolsAPI()
    api.EXPECT().
        GetByInstancePoolName(mock.Anything, "instance_pool").
        Return(&compute.InstancePoolAndStats{
            InstancePoolId: "5678",
        }, nil)

    ctx := context.Background()
    l := resolveInstancePool{name: "instance_pool"}
    result, err := l.Resolve(ctx, m.WorkspaceClient)
    require.NoError(t, err)
    assert.Equal(t, "5678", result)
}

func TestResolveInstancePool_ResolveNotFound(t *testing.T) {
    m := mocks.NewMockWorkspaceClient(t)

    api := m.GetMockInstancePoolsAPI()
    api.EXPECT().
        GetByInstancePoolName(mock.Anything, "instance_pool").
        Return(nil, &apierr.APIError{StatusCode: 404})

    ctx := context.Background()
    l := resolveInstancePool{name: "instance_pool"}
    _, err := l.Resolve(ctx, m.WorkspaceClient)
    require.ErrorIs(t, err, apierr.ErrNotFound)
}

func TestResolveInstancePool_String(t *testing.T) {
    l := resolveInstancePool{name: "name"}
    assert.Equal(t, "instance-pool: name", l.String())
}

View File

@ -0,0 +1,24 @@
package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
)

type resolveJob struct {
    name string
}

func (l resolveJob) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
    entity, err := w.Jobs.GetBySettingsName(ctx, l.name)
    if err != nil {
        return "", err
    }
    return fmt.Sprint(entity.JobId), nil
}

func (l resolveJob) String() string {
    return fmt.Sprintf("job: %s", l.name)
}
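
Taken together, a caller resolves a lookup by constructing the matching resolver and invoking it, using String() to label failures. An illustrative sketch (resolveJobID is a hypothetical wrapper, not the CLI's actual wiring):

package variable

import (
    "context"
    "fmt"

    "github.com/databricks/databricks-sdk-go"
)

// resolveJobID shows the end-to-end shape: job name in, job ID out.
func resolveJobID(ctx context.Context, w *databricks.WorkspaceClient, name string) (string, error) {
    r := resolveJob{name: name}
    id, err := r.Resolve(ctx, w)
    if err != nil {
        return "", fmt.Errorf("failed to resolve %s: %w", r.String(), err)
    }
    return id, nil
}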

Some files were not shown because too many files have changed in this diff.