Compare commits

...

4 Commits

Author SHA1 Message Date
Pieter Noordhuis 61b0c59137
Move test helpers from internal to `acc` and `testutil` (#2008)
## Changes

This change moves fixture helpers to `internal/acc/fixtures.go`. These
helpers create an ephemeral path or resource for the duration of a test.
Call sites are updated to use `acc.WorkspaceTest()` to construct a
workspace-focused test wrapper as needed.

This change also moves the `GetNodeTypeID()` function to `testutil`.
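For illustration, a minimal sketch of what a call site might look like after this change (only `acc.WorkspaceTest` and `GetNodeTypeID` are named by this change; the fixture helper name and the argument shape are assumptions):

```go
package example_test

import (
	"testing"

	"github.com/databricks/cli/internal/acc"
	"github.com/databricks/cli/internal/testutil"
)

func TestEphemeralFixture(t *testing.T) {
	// Construct the workspace-focused test wrapper.
	ctx, wt := acc.WorkspaceTest(t)
	_ = ctx

	// Fixture helpers create an ephemeral path or resource that lives
	// for the duration of the test (hypothetical helper name).
	dir := acc.TemporaryWorkspaceDir(wt)
	_ = dir

	// GetNodeTypeID now lives in testutil (argument is an assumption).
	nodeTypeID := testutil.GetNodeTypeID("aws")
	_ = nodeTypeID
}
```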

## Tests

n/a
2024-12-12 21:28:04 +00:00
Pieter Noordhuis e472b5d888
Move the CLI test runner to `internal/testcli` package (#2004)
## Changes

The CLI test runner instantiates a new CLI "instance" through
`cmd.New()` and runs it with specified arguments. This is as close as we
get to running the real CLI **in-process**. This runner was located in
the `internal` package next to other helpers. This change moves it to
its own dedicated package.

Note: this runner transitively imports pretty much the entire
repository, which is why we intentionally keep it _separate_ from
`testutil`.
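A minimal sketch of driving the runner from a test, based on the call sites updated in the diffs below (`NewRunnerWithContext`, `Run`, and the returned buffers appear there; the rest is illustrative):

```go
package example_test

import (
	"context"
	"testing"

	"github.com/databricks/cli/internal/testcli"
)

func TestCliVersion(t *testing.T) {
	ctx := context.Background()

	// Runs the real CLI in-process via cmd.New() with these arguments.
	r := testcli.NewRunnerWithContext(t, ctx, "version")

	stdout, _, err := r.Run()
	if err != nil {
		t.Fatal(err)
	}
	t.Log(stdout.String())
}
```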

## Tests

n/a
2024-12-12 16:48:51 +00:00
Pieter Noordhuis dd3b7ec450
Define and use `testutil.TestingT` interface (#2003)
## Changes

Using an interface instead of a concrete type means we can pass
`*testing.T` directly or any wrapper type that implements a superset of
this interface. It prepares for more broad use of `acc.WorkspaceT`,
which enhances the testing object with helper functions for using a
Databricks workspace.

This eliminates the need to dereference a `*testing.T` field on a
wrapper type.
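A minimal sketch of the idea (the actual method set of `testutil.TestingT` may differ):

```go
package testutil

// TestingT is satisfied by *testing.T and by wrapper types such as
// acc.WorkspaceT, so helpers can accept either directly.
type TestingT interface {
	Cleanup(func())
	Errorf(format string, args ...any)
	Fatalf(format string, args ...any)
	Helper()
	Log(args ...any)
	Name() string
}

// Example helper: no need to reach for a *testing.T field on a wrapper.
func RequireNonEmpty(t TestingT, s string) {
	t.Helper()
	if s == "" {
		t.Fatalf("expected a non-empty string")
	}
}
```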

## Tests

n/a
2024-12-12 14:42:15 +00:00
dependabot[bot] cabdabf31e
Bump github.com/databricks/databricks-sdk-go from 0.52.0 to 0.53.0 (#1985)
Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.52.0 to 0.53.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.53.0</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Update Changelog file (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1091">#1091</a>).</li>
</ul>
<h3>Internal Changes</h3>
<ul>
<li>Update to latest OpenAPI spec (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/1098">#1098</a>).</li>
</ul>
<p>Note: This release contains breaking changes, please see the API
changes below for more details.</p>
<h3>API Changes:</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/cleanrooms">cleanrooms</a>
package.</li>
<li>Added <code>DeletePublicWorkspaceSetting</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#AibiDashboardEmbeddingAccessPolicyAPI">w.AibiDashboardEmbeddingAccessPolicy</a>
workspace-level service.</li>
<li>Added <code>DeletePublicWorkspaceSetting</code> method for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#AibiDashboardEmbeddingApprovedDomainsAPI">w.AibiDashboardEmbeddingApprovedDomains</a>
workspace-level service.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CleanRoomTaskRunLifeCycleState">jobs.CleanRoomTaskRunLifeCycleState</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CleanRoomTaskRunResultState">jobs.CleanRoomTaskRunResultState</a>
and <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#CleanRoomTaskRunState">jobs.CleanRoomTaskRunState</a>.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DataType">dashboards.DataType</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#QuerySchema">dashboards.QuerySchema</a>
and <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#QuerySchemaColumn">dashboards.QuerySchemaColumn</a>.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#DatabricksGcpServiceAccount">catalog.DatabricksGcpServiceAccount</a>
and <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GenerateTemporaryServiceCredentialGcpOptions">catalog.GenerateTemporaryServiceCredentialGcpOptions</a>.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#ContentLength">files.ContentLength</a>
and <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#ContentRange">files.ContentRange</a>.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DeleteAibiDashboardEmbeddingAccessPolicySettingRequest">settings.DeleteAibiDashboardEmbeddingAccessPolicySettingRequest</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DeleteAibiDashboardEmbeddingAccessPolicySettingResponse">settings.DeleteAibiDashboardEmbeddingAccessPolicySettingResponse</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DeleteAibiDashboardEmbeddingApprovedDomainsSettingRequest">settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingRequest</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#DeleteAibiDashboardEmbeddingApprovedDomainsSettingResponse">settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingResponse</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicy">settings.EgressNetworkPolicy</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicy">settings.EgressNetworkPolicyInternetAccessPolicy</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyInternetDestination">settings.EgressNetworkPolicyInternetAccessPolicyInternetDestination</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationFilteringProtocol">settings.EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationFilteringProtocol</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationType">settings.EgressNetworkPolicyInternetAccessPolicyInternetDestinationInternetDestinationType</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyLogOnlyMode">settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyMode</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyLogOnlyModeLogOnlyModeType">settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyModeLogOnlyModeType</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyLogOnlyModeWorkloadType">settings.EgressNetworkPolicyInternetAccessPolicyLogOnlyModeWorkloadType</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyRestrictionMode">settings.EgressNetworkPolicyInternetAccessPolicyRestrictionMode</a>,
<a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyStorageDestination">settings.EgressNetworkPolicyInternetAccessPolicyStorageDestination</a>
and <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/settings#EgressNetworkPolicyInternetAccessPolicyStorageDestinationStorageDestinationType">settings.EgressNetworkPolicyInternetAccessPolicyStorageDestinationStorageDestinationType</a>.</li>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/sharing#PartitionSpecificationPartition">sharing.PartitionSpecificationPartition</a>.</li>
<li>Added <code>DatabricksGcpServiceAccount</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>.</li>
<li>Added <code>DatabricksGcpServiceAccount</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CredentialInfo">catalog.CredentialInfo</a>.</li>
<li>Added <code>GcpOptions</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GenerateTemporaryServiceCredentialRequest">catalog.GenerateTemporaryServiceCredentialRequest</a>.</li>
<li>Added <code>DatabricksGcpServiceAccount</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateCredentialRequest">catalog.UpdateCredentialRequest</a>.</li>
<li>Added <code>CachedQuerySchema</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#QueryAttachment">dashboards.QueryAttachment</a>.</li>
<li>[Breaking] Changed <code>ContentLength</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#DownloadResponse">files.DownloadResponse</a>
to <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#ContentLength">files.ContentLength</a>.</li>
<li>[Breaking] Changed <code>ContentLength</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#GetMetadataResponse">files.GetMetadataResponse</a>
to <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#ContentLength">files.ContentLength</a>.</li>
<li>[Breaking] Removed <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#GcpServiceAccountKey">catalog.GcpServiceAccountKey</a>.</li>
<li>[Breaking] Removed <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/files#FileSize">files.FileSize</a>.</li>
<li>[Breaking] Removed <code>GcpServiceAccountKey</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CreateCredentialRequest">catalog.CreateCredentialRequest</a>.</li>
</ul>
<p>OpenAPI SHA: 7016dcbf2e011459416cf408ce21143bcc4b3a25, Date:
2024-12-05</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="6f68afdd47"><code>6f68afd</code></a>
[Release] Release v0.53.0 (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1099">#1099</a>)</li>
<li><a
href="011bd5dab8"><code>011bd5d</code></a>
[Internal] Update to latest OpenAPI spec (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1098">#1098</a>)</li>
<li><a
href="8219c2cda9"><code>8219c2c</code></a>
[Fix] Update Changelog file (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1091">#1091</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.52.0...v0.53.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.52.0&new-version=0.53.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-12-12 14:36:00 +00:00
76 changed files with 2041 additions and 964 deletions


@@ -1 +1 @@
f2385add116e3716c8a90a0b68e204deb40f996c
7016dcbf2e011459416cf408ce21143bcc4b3a25

.gitattributes (vendored, 3 changes)

@@ -37,6 +37,9 @@ cmd/workspace/apps/apps.go linguist-generated=true
cmd/workspace/artifact-allowlists/artifact-allowlists.go linguist-generated=true
cmd/workspace/automatic-cluster-update/automatic-cluster-update.go linguist-generated=true
cmd/workspace/catalogs/catalogs.go linguist-generated=true
cmd/workspace/clean-room-assets/clean-room-assets.go linguist-generated=true
cmd/workspace/clean-room-task-runs/clean-room-task-runs.go linguist-generated=true
cmd/workspace/clean-rooms/clean-rooms.go linguist-generated=true
cmd/workspace/cluster-policies/cluster-policies.go linguist-generated=true
cmd/workspace/clusters/clusters.go linguist-generated=true
cmd/workspace/cmd.go linguist-generated=true


@@ -58,16 +58,6 @@ func TestJsonSchema(t *testing.T) {
assert.NotEmpty(t, pipeline.AnyOf[0].Properties[field].Description)
}
// Assert enum values are loaded
schedule := walk(s.Definitions, "github.com", "databricks", "databricks-sdk-go", "service", "pipelines.RestartWindow")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "MONDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "TUESDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "WEDNESDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "THURSDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "FRIDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "SATURDAY")
assert.Contains(t, schedule.AnyOf[0].Properties["days_of_week"].Enum, "SUNDAY")
providers := walk(s.Definitions, "github.com", "databricks", "databricks-sdk-go", "service", "jobs.GitProvider")
assert.Contains(t, providers.Enum, "gitHub")
assert.Contains(t, providers.Enum, "bitbucketCloud")


@@ -2888,7 +2888,7 @@
"anyOf": [
{
"type": "object",
"description": "Write-only setting. Specifies the user, service principal or group that the job/pipeline runs as. If not specified, the job/pipeline runs as the user who created the job/pipeline.\n\nEither `user_name` or `service_principal_name` should be specified. If not, an error is thrown.",
"description": "Write-only setting. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the job.\n\nEither `user_name` or `service_principal_name` should be specified. If not, an error is thrown.",
"properties": {
"service_principal_name": {
"description": "Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role.",
@@ -4436,16 +4436,7 @@
"properties": {
"days_of_week": {
"description": "Days of week in which the restart is allowed to happen (within a five-hour window starting at start_hour).\nIf not specified all days of the week will be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.RestartWindowDaysOfWeek",
"enum": [
"MONDAY",
"TUESDAY",
"WEDNESDAY",
"THURSDAY",
"FRIDAY",
"SATURDAY",
"SUNDAY"
]
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/pipelines.RestartWindowDaysOfWeek"
},
"start_hour": {
"description": "An integer between 0 and 23 denoting the start hour for the restart window in the 24-hour day.\nContinuous pipeline restart is triggered only within a five-hour window starting at this hour.",
@@ -4468,7 +4459,17 @@
]
},
"pipelines.RestartWindowDaysOfWeek": {
"type": "string"
"type": "string",
"description": "Days of week in which the restart is allowed to happen (within a five-hour window starting at start_hour).\nIf not specified all days of the week will be used.",
"enum": [
"MONDAY",
"TUESDAY",
"WEDNESDAY",
"THURSDAY",
"FRIDAY",
"SATURDAY",
"SUNDAY"
]
},
"pipelines.SchemaSpec": {
"anyOf": [


@@ -1,22 +1,22 @@
package config_tests
import (
"path/filepath"
"context"
"testing"
"github.com/databricks/cli/cmd/root"
assert "github.com/databricks/cli/libs/dyn/dynassert"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/stretchr/testify/require"
)
func TestSuggestTargetIfWrongPassed(t *testing.T) {
t.Setenv("BUNDLE_ROOT", filepath.Join("target_overrides", "workspace"))
stdoutBytes, _, err := internal.RequireErrorRun(t, "bundle", "validate", "-e", "incorrect")
stdout := stdoutBytes.String()
b := load(t, "target_overrides/workspace")
assert.Error(t, root.ErrAlreadyPrinted, err)
assert.Contains(t, stdout, "Available targets:")
assert.Contains(t, stdout, "development")
assert.Contains(t, stdout, "staging")
ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.SelectTarget("incorrect"))
err := diags.Error()
require.Error(t, err)
require.Contains(t, err.Error(), "Available targets:")
require.Contains(t, err.Error(), "development")
require.Contains(t, err.Error(), "staging")
}


@@ -4,14 +4,14 @@ import (
"context"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/env"
)
func TestListsInstalledProjects(t *testing.T) {
ctx := context.Background()
ctx = env.WithUserHomeDir(ctx, "project/testdata/installed-in-home")
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "installed")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "installed")
r.RunAndExpectOutput(`
Name Description Version
blueprint Blueprint Project v0.3.15


@@ -4,7 +4,7 @@ import (
"context"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/env"
"github.com/stretchr/testify/require"
)
@@ -12,7 +12,7 @@ import (
func TestListingWorks(t *testing.T) {
ctx := context.Background()
ctx = env.WithUserHomeDir(ctx, "project/testdata/installed-in-home")
c := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "list")
c := testcli.NewRunnerWithContext(t, ctx, "labs", "list")
stdout, _, err := c.Run()
require.NoError(t, err)
require.Contains(t, stdout.String(), "ucx")


@@ -6,7 +6,7 @@ import (
"testing"
"time"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/libs/python"
"github.com/databricks/databricks-sdk-go"
@@ -30,7 +30,7 @@ func devEnvContext(t *testing.T) context.Context {
func TestRunningBlueprintEcho(t *testing.T) {
ctx := devEnvContext(t)
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "blueprint", "echo")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "blueprint", "echo")
var out echoOut
r.RunAndParseJSON(&out)
assert.Equal(t, "echo", out.Command)
@@ -41,14 +41,14 @@ func TestRunningBlueprintEcho(t *testing.T) {
func TestRunningBlueprintEchoProfileWrongOverride(t *testing.T) {
ctx := devEnvContext(t)
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "blueprint", "echo", "--profile", "workspace-profile")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "blueprint", "echo", "--profile", "workspace-profile")
_, _, err := r.Run()
assert.ErrorIs(t, err, databricks.ErrNotAccountClient)
}
func TestRunningCommand(t *testing.T) {
ctx := devEnvContext(t)
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "blueprint", "foo")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "blueprint", "foo")
r.WithStdin()
defer r.CloseStdin()
@@ -60,7 +60,7 @@ func TestRenderingTable(t *testing.T) {
func TestRenderingTable(t *testing.T) {
ctx := devEnvContext(t)
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "blueprint", "table")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "blueprint", "table")
r.RunAndExpectOutput(`
Key Value
First Second


@@ -19,7 +19,7 @@ import (
"github.com/databricks/cli/cmd/labs/github"
"github.com/databricks/cli/cmd/labs/project"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/libs/process"
"github.com/databricks/cli/libs/python"
@@ -236,7 +236,7 @@ func TestInstallerWorksForReleases(t *testing.T) {
// │ │ │ └── site-packages
// │ │ │ ├── ...
// │ │ │ ├── distutils-precedence.pth
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "install", "blueprint", "--debug")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "install", "blueprint", "--debug")
r.RunAndExpectOutput("setting up important infrastructure")
}
@@ -356,7 +356,7 @@ account_id = abc
// └── databrickslabs-blueprint-releases.json
// `databricks labs install .` means "verify this installer i'm developing does work"
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "install", ".")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "install", ".")
r.WithStdin()
defer r.CloseStdin()
@@ -426,7 +426,7 @@ func TestUpgraderWorksForReleases(t *testing.T) {
ctx = env.Set(ctx, "DATABRICKS_CLUSTER_ID", "installer-cluster")
ctx = env.Set(ctx, "DATABRICKS_WAREHOUSE_ID", "installer-warehouse")
r := internal.NewCobraTestRunnerWithContext(t, ctx, "labs", "upgrade", "blueprint")
r := testcli.NewRunnerWithContext(t, ctx, "labs", "upgrade", "blueprint")
r.RunAndExpectOutput("setting up important infrastructure")
// Check if the stub was called with the 'python -m pip install' command


@@ -26,6 +26,7 @@ func New() *cobra.Command {
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
@@ -37,6 +38,62 @@ func New() *cobra.Command {
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteAibiDashboardEmbeddingAccessPolicySettingRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteAibiDashboardEmbeddingAccessPolicySettingRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete the AI/BI dashboard embedding access policy.`
cmd.Long = `Delete the AI/BI dashboard embedding access policy.
Delete the AI/BI dashboard embedding access policy, reverting back to the
default.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.AibiDashboardEmbeddingAccessPolicy().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.


@@ -26,6 +26,7 @@ func New() *cobra.Command {
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
@@ -37,6 +38,62 @@ func New() *cobra.Command {
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteAibiDashboardEmbeddingApprovedDomainsSettingRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete AI/BI dashboard embedding approved domains.`
cmd.Long = `Delete AI/BI dashboard embedding approved domains.
Delete the list of domains approved to host embedded AI/BI dashboards,
reverting back to the default empty list.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.AibiDashboardEmbeddingApprovedDomains().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.


@@ -0,0 +1,419 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package clean_room_assets
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/cleanrooms"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "clean-room-assets",
Short: `Clean room assets are data and code objects — Tables, volumes, and notebooks that are shared with the clean room.`,
Long: `Clean room assets are data and code objects — Tables, volumes, and notebooks
that are shared with the clean room.`,
GroupID: "cleanrooms",
Annotations: map[string]string{
"package": "cleanrooms",
},
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*cleanrooms.CreateCleanRoomAssetRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq cleanrooms.CreateCleanRoomAssetRequest
createReq.Asset = &cleanrooms.CleanRoomAsset{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().Var(&createReq.Asset.AssetType, "asset-type", `The type of the asset. Supported values: [FOREIGN_TABLE, NOTEBOOK_FILE, TABLE, VIEW, VOLUME]`)
// TODO: complex arg: foreign_table
// TODO: complex arg: foreign_table_local_details
cmd.Flags().StringVar(&createReq.Asset.Name, "name", createReq.Asset.Name, `A fully qualified name that uniquely identifies the asset within the clean room.`)
// TODO: complex arg: notebook
// TODO: complex arg: table
// TODO: complex arg: table_local_details
// TODO: complex arg: view
// TODO: complex arg: view_local_details
// TODO: complex arg: volume_local_details
cmd.Use = "create CLEAN_ROOM_NAME"
cmd.Short = `Create an asset.`
cmd.Long = `Create an asset.
Create a clean room asset — share an asset like a notebook or table into the
clean room. For each UC asset that is added through this method, the clean
room owner must also have enough privilege on the asset to consume it. The
privilege must be maintained indefinitely for the clean room to be able to
access the asset. Typically, you should use a group as the clean room owner.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.Asset)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
createReq.CleanRoomName = args[0]
response, err := w.CleanRoomAssets.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*cleanrooms.DeleteCleanRoomAssetRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq cleanrooms.DeleteCleanRoomAssetRequest
// TODO: short flags
cmd.Use = "delete CLEAN_ROOM_NAME ASSET_TYPE ASSET_FULL_NAME"
cmd.Short = `Delete an asset.`
cmd.Long = `Delete an asset.
Delete a clean room asset - unshare/remove the asset from the clean room
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.
ASSET_TYPE: The type of the asset.
ASSET_FULL_NAME: The fully qualified name of the asset, it is same as the name field in
CleanRoomAsset.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
deleteReq.CleanRoomName = args[0]
_, err = fmt.Sscan(args[1], &deleteReq.AssetType)
if err != nil {
return fmt.Errorf("invalid ASSET_TYPE: %s", args[1])
}
deleteReq.AssetFullName = args[2]
err = w.CleanRoomAssets.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*cleanrooms.GetCleanRoomAssetRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq cleanrooms.GetCleanRoomAssetRequest
// TODO: short flags
cmd.Use = "get CLEAN_ROOM_NAME ASSET_TYPE ASSET_FULL_NAME"
cmd.Short = `Get an asset.`
cmd.Long = `Get an asset.
Get the details of a clean room asset by its type and full name.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.
ASSET_TYPE: The type of the asset.
ASSET_FULL_NAME: The fully qualified name of the asset, it is same as the name field in
CleanRoomAsset.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
getReq.CleanRoomName = args[0]
_, err = fmt.Sscan(args[1], &getReq.AssetType)
if err != nil {
return fmt.Errorf("invalid ASSET_TYPE: %s", args[1])
}
getReq.AssetFullName = args[2]
response, err := w.CleanRoomAssets.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*cleanrooms.ListCleanRoomAssetsRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq cleanrooms.ListCleanRoomAssetsRequest
// TODO: short flags
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
cmd.Use = "list CLEAN_ROOM_NAME"
cmd.Short = `List assets.`
cmd.Long = `List assets.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
listReq.CleanRoomName = args[0]
response := w.CleanRoomAssets.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*cleanrooms.UpdateCleanRoomAssetRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq cleanrooms.UpdateCleanRoomAssetRequest
updateReq.Asset = &cleanrooms.CleanRoomAsset{}
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().Var(&updateReq.Asset.AssetType, "asset-type", `The type of the asset. Supported values: [FOREIGN_TABLE, NOTEBOOK_FILE, TABLE, VIEW, VOLUME]`)
// TODO: complex arg: foreign_table
// TODO: complex arg: foreign_table_local_details
cmd.Flags().StringVar(&updateReq.Asset.Name, "name", updateReq.Asset.Name, `A fully qualified name that uniquely identifies the asset within the clean room.`)
// TODO: complex arg: notebook
// TODO: complex arg: table
// TODO: complex arg: table_local_details
// TODO: complex arg: view
// TODO: complex arg: view_local_details
// TODO: complex arg: volume_local_details
cmd.Use = "update CLEAN_ROOM_NAME ASSET_TYPE NAME"
cmd.Short = `Update an asset.`
cmd.Long = `Update an asset.
Update a clean room asset. For example, updating the content of a notebook;
changing the shared partitions of a table; etc.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.
ASSET_TYPE: The type of the asset.
NAME: A fully qualified name that uniquely identifies the asset within the clean
room. This is also the name displayed in the clean room UI.
For UC securable assets (tables, volumes, etc.), the format is
*shared_catalog*.*shared_schema*.*asset_name*
For notebooks, the name is the notebook file name.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(3)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq.Asset)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
updateReq.CleanRoomName = args[0]
_, err = fmt.Sscan(args[1], &updateReq.AssetType)
if err != nil {
return fmt.Errorf("invalid ASSET_TYPE: %s", args[1])
}
updateReq.Name = args[2]
response, err := w.CleanRoomAssets.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service CleanRoomAssets
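The generated commands above expose per-command override slices (`createOverrides`, `deleteOverrides`, and so on). A minimal sketch of how a manually curated file in this directory could hook into them, following the pattern the generated comments describe (the body of the override is illustrative):

```go
// override.go (hypothetical manually curated file in this directory).
package clean_room_assets

import (
	"github.com/databricks/databricks-sdk-go/service/cleanrooms"
	"github.com/spf13/cobra"
)

func init() {
	// Registered functions run against the generated command after it is
	// constructed, so curated tweaks survive regeneration.
	listOverrides = append(listOverrides, func(
		cmd *cobra.Command,
		listReq *cleanrooms.ListCleanRoomAssetsRequest,
	) {
		cmd.Short = `List assets shared into a clean room.`
	})
}
```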


@@ -0,0 +1,97 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package clean_room_task_runs
import (
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/databricks-sdk-go/service/cleanrooms"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "clean-room-task-runs",
Short: `Clean room task runs are the executions of notebooks in a clean room.`,
Long: `Clean room task runs are the executions of notebooks in a clean room.`,
GroupID: "cleanrooms",
Annotations: map[string]string{
"package": "cleanrooms",
},
}
// Add methods
cmd.AddCommand(newList())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*cleanrooms.ListCleanRoomNotebookTaskRunsRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq cleanrooms.ListCleanRoomNotebookTaskRunsRequest
// TODO: short flags
cmd.Flags().StringVar(&listReq.NotebookName, "notebook-name", listReq.NotebookName, `Notebook name.`)
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, `The maximum number of task runs to return.`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
cmd.Use = "list CLEAN_ROOM_NAME"
cmd.Short = `List notebook task runs.`
cmd.Long = `List notebook task runs.
List all the historical notebook task runs in a clean room.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
listReq.CleanRoomName = args[0]
response := w.CleanRoomTaskRuns.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// end service CleanRoomTaskRuns
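The `list` command above wraps the SDK's paginated listing iterator and hands it to `cmdio.RenderIterator`. A minimal sketch of the same call made directly against the SDK (assuming a configured workspace client; the clean room name is a placeholder):

```go
package example

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/cleanrooms"
)

func listRuns(ctx context.Context, w *databricks.WorkspaceClient) error {
	it := w.CleanRoomTaskRuns.List(ctx, cleanrooms.ListCleanRoomNotebookTaskRunsRequest{
		CleanRoomName: "my-clean-room", // placeholder name
	})
	for it.HasNext(ctx) {
		run, err := it.Next(ctx)
		if err != nil {
			return err
		}
		_ = run // one historical notebook task run
	}
	return nil
}
```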

cmd/workspace/clean-rooms/clean-rooms.go (generated, executable, 450 changes)

@@ -0,0 +1,450 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package clean_rooms
import (
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/cleanrooms"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "clean-rooms",
Short: `A clean room uses Delta Sharing and serverless compute to provide a secure and privacy-protecting environment where multiple parties can work together on sensitive enterprise data without direct access to each other's data.`,
Long: `A clean room uses Delta Sharing and serverless compute to provide a secure and
privacy-protecting environment where multiple parties can work together on
sensitive enterprise data without direct access to each other's data.`,
GroupID: "cleanrooms",
Annotations: map[string]string{
"package": "cleanrooms",
},
}
// Add methods
cmd.AddCommand(newCreate())
cmd.AddCommand(newCreateOutputCatalog())
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newList())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOverrides []func(
*cobra.Command,
*cleanrooms.CreateCleanRoomRequest,
)
func newCreate() *cobra.Command {
cmd := &cobra.Command{}
var createReq cleanrooms.CreateCleanRoomRequest
createReq.CleanRoom = &cleanrooms.CleanRoom{}
var createJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createReq.CleanRoom.Comment, "comment", createReq.CleanRoom.Comment, ``)
cmd.Flags().StringVar(&createReq.CleanRoom.Name, "name", createReq.CleanRoom.Name, `The name of the clean room.`)
// TODO: complex arg: output_catalog
cmd.Flags().StringVar(&createReq.CleanRoom.Owner, "owner", createReq.CleanRoom.Owner, `This is Databricks username of the owner of the local clean room securable for permission management.`)
// TODO: complex arg: remote_detailed_info
cmd.Use = "create"
cmd.Short = `Create a clean room.`
cmd.Long = `Create a clean room.
Create a new clean room with the specified collaborators. This method is
asynchronous; the returned name field inside the clean_room field can be used
to poll the clean room status, using the :method:cleanrooms/get method. When
this method returns, the cluster will be in a PROVISIONING state. The cluster
will be usable once it enters an ACTIVE state.
The caller must be a metastore admin or have the **CREATE_CLEAN_ROOM**
privilege on the metastore.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := createJson.Unmarshal(&createReq.CleanRoom)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
response, err := w.CleanRooms.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOverrides {
fn(cmd, &createReq)
}
return cmd
}
// start create-output-catalog command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createOutputCatalogOverrides []func(
*cobra.Command,
*cleanrooms.CreateCleanRoomOutputCatalogRequest,
)
func newCreateOutputCatalog() *cobra.Command {
cmd := &cobra.Command{}
var createOutputCatalogReq cleanrooms.CreateCleanRoomOutputCatalogRequest
createOutputCatalogReq.OutputCatalog = &cleanrooms.CleanRoomOutputCatalog{}
var createOutputCatalogJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&createOutputCatalogJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createOutputCatalogReq.OutputCatalog.CatalogName, "catalog-name", createOutputCatalogReq.OutputCatalog.CatalogName, `The name of the output catalog in UC.`)
cmd.Use = "create-output-catalog CLEAN_ROOM_NAME"
cmd.Short = `Create an output catalog.`
cmd.Long = `Create an output catalog.
Create the output catalog of the clean room.
Arguments:
CLEAN_ROOM_NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := createOutputCatalogJson.Unmarshal(&createOutputCatalogReq.OutputCatalog)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
createOutputCatalogReq.CleanRoomName = args[0]
response, err := w.CleanRooms.CreateOutputCatalog(ctx, createOutputCatalogReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createOutputCatalogOverrides {
fn(cmd, &createOutputCatalogReq)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*cleanrooms.DeleteCleanRoomRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq cleanrooms.DeleteCleanRoomRequest
// TODO: short flags
cmd.Use = "delete NAME"
cmd.Short = `Delete a clean room.`
cmd.Long = `Delete a clean room.
Delete a clean room. After deletion, the clean room will be removed from the
metastore. If the other collaborators have not deleted the clean room, they
will still have the clean room in their metastore, but it will be in a DELETED
state and no operations other than deletion can be performed on it.
Arguments:
NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
deleteReq.Name = args[0]
err = w.CleanRooms.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*cleanrooms.GetCleanRoomRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq cleanrooms.GetCleanRoomRequest
// TODO: short flags
cmd.Use = "get NAME"
cmd.Short = `Get a clean room.`
cmd.Long = `Get a clean room.
Get the details of a clean room given its name.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
getReq.Name = args[0]
response, err := w.CleanRooms.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start list command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
*cobra.Command,
*cleanrooms.ListCleanRoomsRequest,
)
func newList() *cobra.Command {
cmd := &cobra.Command{}
var listReq cleanrooms.ListCleanRoomsRequest
// TODO: short flags
cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, `Maximum number of clean rooms to return (i.e., the page length).`)
cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
cmd.Use = "list"
cmd.Short = `List clean rooms.`
cmd.Long = `List clean rooms.
Get a list of all clean rooms of the metastore. Only clean rooms the caller
has access to are returned.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response := w.CleanRooms.List(ctx, listReq)
return cmdio.RenderIterator(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range listOverrides {
fn(cmd, &listReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*cleanrooms.UpdateCleanRoomRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq cleanrooms.UpdateCleanRoomRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: clean_room
cmd.Use = "update NAME"
cmd.Short = `Update a clean room.`
cmd.Long = `Update a clean room.
Update a clean room. The caller must be the owner of the clean room, have
**MODIFY_CLEAN_ROOM** privilege, or be metastore admin.
When the caller is a metastore admin, only the __owner__ field can be updated.
Arguments:
NAME: Name of the clean room.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := updateJson.Unmarshal(&updateReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
updateReq.Name = args[0]
response, err := w.CleanRooms.Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service CleanRooms
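The overrides slices above are this generated file's only extension point: each constructor applies every registered function to the finished command before returning it. Below is a minimal sketch of a manually curated override.go hooking into that mechanism; the package name clean_rooms is an assumption based on the import alias used later in this diff.

package clean_rooms

import (
	"github.com/databricks/databricks-sdk-go/service/cleanrooms"
	"github.com/spf13/cobra"
)

func init() {
	// Runs at package load time; newCreate() later applies this function
	// to the generated command and its request struct.
	createOverrides = append(createOverrides, func(cmd *cobra.Command, req *cleanrooms.CreateCleanRoomRequest) {
		cmd.Hidden = true // illustrative tweak: hide the command from help output
	})
}

At runtime the generated commands accept their request body via the --json flag, e.g. databricks clean-rooms create --json @path/to/file.json.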

cmd/workspace/cmd.go (generated)

@ -8,6 +8,9 @@ import (
apps "github.com/databricks/cli/cmd/workspace/apps"
artifact_allowlists "github.com/databricks/cli/cmd/workspace/artifact-allowlists"
catalogs "github.com/databricks/cli/cmd/workspace/catalogs"
clean_room_assets "github.com/databricks/cli/cmd/workspace/clean-room-assets"
clean_room_task_runs "github.com/databricks/cli/cmd/workspace/clean-room-task-runs"
clean_rooms "github.com/databricks/cli/cmd/workspace/clean-rooms"
cluster_policies "github.com/databricks/cli/cmd/workspace/cluster-policies"
clusters "github.com/databricks/cli/cmd/workspace/clusters"
connections "github.com/databricks/cli/cmd/workspace/connections"
@ -98,6 +101,9 @@ func All() []*cobra.Command {
out = append(out, apps.New())
out = append(out, artifact_allowlists.New())
out = append(out, catalogs.New())
out = append(out, clean_room_assets.New())
out = append(out, clean_room_task_runs.New())
out = append(out, clean_rooms.New())
out = append(out, cluster_policies.New())
out = append(out, clusters.New())
out = append(out, connections.New())


@ -27,7 +27,7 @@ func New() *cobra.Command {
To create credentials, you must be a Databricks account admin or have the
CREATE SERVICE CREDENTIAL privilege. The user who creates the credential can
delegate ownership to another user or group to manage permissions on it`,
delegate ownership to another user or group to manage permissions on it.`,
GroupID: "catalog",
Annotations: map[string]string{
"package": "catalog",
@ -73,7 +73,7 @@ func newCreateCredential() *cobra.Command {
// TODO: complex arg: azure_managed_identity
// TODO: complex arg: azure_service_principal
cmd.Flags().StringVar(&createCredentialReq.Comment, "comment", createCredentialReq.Comment, `Comment associated with the credential.`)
// TODO: complex arg: gcp_service_account_key
// TODO: complex arg: databricks_gcp_service_account
cmd.Flags().Var(&createCredentialReq.Purpose, "purpose", `Indicates the purpose of the credential. Supported values: [SERVICE, STORAGE]`)
cmd.Flags().BoolVar(&createCredentialReq.ReadOnly, "read-only", createCredentialReq.ReadOnly, `Whether the credential is usable only for read operations.`)
cmd.Flags().BoolVar(&createCredentialReq.SkipValidation, "skip-validation", createCredentialReq.SkipValidation, `Optional.`)
@ -227,6 +227,7 @@ func newGenerateTemporaryServiceCredential() *cobra.Command {
cmd.Flags().Var(&generateTemporaryServiceCredentialJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: azure_options
// TODO: complex arg: gcp_options
cmd.Use = "generate-temporary-service-credential CREDENTIAL_NAME"
cmd.Short = `Generate a temporary service credential.`
@ -434,6 +435,7 @@ func newUpdateCredential() *cobra.Command {
// TODO: complex arg: azure_managed_identity
// TODO: complex arg: azure_service_principal
cmd.Flags().StringVar(&updateCredentialReq.Comment, "comment", updateCredentialReq.Comment, `Comment associated with the credential.`)
// TODO: complex arg: databricks_gcp_service_account
cmd.Flags().BoolVar(&updateCredentialReq.Force, "force", updateCredentialReq.Force, `Force an update even if there are dependent services (when purpose is **SERVICE**) or dependent external locations and external tables (when purpose is **STORAGE**).`)
cmd.Flags().Var(&updateCredentialReq.IsolationMode, "isolation-mode", `Whether the current securable is accessible from all workspaces or a specific set of workspaces. Supported values: [ISOLATION_MODE_ISOLATED, ISOLATION_MODE_OPEN]`)
cmd.Flags().StringVar(&updateCredentialReq.NewName, "new-name", updateCredentialReq.NewName, `New name of credential.`)


@ -72,5 +72,9 @@ func Groups() []cobra.Group {
ID: "apps",
Title: "Apps",
},
{
ID: "cleanrooms",
Title: "Clean Rooms",
},
}
}


@ -160,9 +160,6 @@ func newCreateSchedule() *cobra.Command {
Arguments:
DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -242,9 +239,6 @@ func newCreateSubscription() *cobra.Command {
DASHBOARD_ID: UUID identifying the dashboard to which the subscription belongs.
SCHEDULE_ID: UUID identifying the schedule to which the subscription belongs.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -322,9 +316,6 @@ func newDeleteSchedule() *cobra.Command {
DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
SCHEDULE_ID: UUID identifying the schedule.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -384,9 +375,6 @@ func newDeleteSubscription() *cobra.Command {
SCHEDULE_ID: UUID identifying the schedule which the subscription belongs.
SUBSCRIPTION_ID: UUID identifying the subscription.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -562,9 +550,6 @@ func newGetSchedule() *cobra.Command {
DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
SCHEDULE_ID: UUID identifying the schedule.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -624,9 +609,6 @@ func newGetSubscription() *cobra.Command {
SCHEDULE_ID: UUID identifying the schedule which the subscription belongs.
SUBSCRIPTION_ID: UUID identifying the subscription.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -739,9 +721,6 @@ func newListSchedules() *cobra.Command {
Arguments:
DASHBOARD_ID: UUID identifying the dashboard to which the schedules belongs.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -798,9 +777,6 @@ func newListSubscriptions() *cobra.Command {
DASHBOARD_ID: UUID identifying the dashboard which the subscriptions belongs.
SCHEDULE_ID: UUID identifying the schedule which the subscriptions belongs.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -1215,9 +1191,6 @@ func newUpdateSchedule() *cobra.Command {
DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
SCHEDULE_ID: UUID identifying the schedule.`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {

go.mod

@ -7,7 +7,7 @@ toolchain go1.23.2
require (
github.com/Masterminds/semver/v3 v3.3.1 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.52.0 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.53.0 // Apache 2.0
github.com/fatih/color v1.18.0 // MIT
github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum (generated)

@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/databricks/databricks-sdk-go v0.52.0 h1:WKcj0F+pdx0gjI5xMicjYC4O43S2q5nyTpaGGMFmgHw=
github.com/databricks/databricks-sdk-go v0.52.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/databricks/databricks-sdk-go v0.53.0 h1:rZMXaTC3HNKZt+m4C4I/dY3EdZj+kl/sVd/Kdq55Qfo=
github.com/databricks/databricks-sdk-go v0.53.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@ -6,7 +6,8 @@ import (
"path"
"path/filepath"
"strings"
"testing"
"github.com/databricks/cli/internal/testutil"
)
// Detects if test is run from "debug test" feature in VS Code.
@ -16,7 +17,7 @@ func isInDebug() bool {
}
// Loads debug environment from ~/.databricks/debug-env.json.
func loadDebugEnvIfRunFromIDE(t *testing.T, key string) {
func loadDebugEnvIfRunFromIDE(t testutil.TestingT, key string) {
if !isInDebug() {
return
}
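Widening the parameter from *testing.T to testutil.TestingT works because these helpers only touch a narrow slice of the testing API. Judging from the call sites in this diff, the interface plausibly looks roughly like the sketch below; this is an assumption, and the real definition in internal/testutil may differ.

type TestingT interface {
	// Used directly by the helpers in this diff.
	Log(args ...any)
	Logf(format string, args ...any)
	Cleanup(func())
	Setenv(key, value string)
	TempDir() string
	Skipf(format string, args ...any)
	// Needed so testify's require/assert can report failures.
	Errorf(format string, args ...any)
	FailNow()
}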

internal/acc/fixtures.go (new file)

@ -0,0 +1,133 @@
package acc
import (
"fmt"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/files"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
)
func TemporaryWorkspaceDir(t *WorkspaceT, name ...string) string {
ctx := t.ctx
me, err := t.W.CurrentUser.Me(ctx)
require.NoError(t, err)
// Prefix the name with "integration-test-" to make it easier to identify.
name = append([]string{"integration-test-"}, name...)
basePath := fmt.Sprintf("/Users/%s/%s", me.UserName, testutil.RandomName(name...))
t.Logf("Creating workspace directory %s", basePath)
err = t.W.Workspace.MkdirsByPath(ctx, basePath)
require.NoError(t, err)
// Remove test directory on test completion.
t.Cleanup(func() {
t.Logf("Removing workspace directory %s", basePath)
err := t.W.Workspace.Delete(ctx, workspace.Delete{
Path: basePath,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary workspace directory %s: %#v", basePath, err)
})
return basePath
}
func TemporaryDbfsDir(t *WorkspaceT, name ...string) string {
ctx := t.ctx
// Prefix the name with "integration-test-" to make it easier to identify.
name = append([]string{"integration-test-"}, name...)
path := fmt.Sprintf("/tmp/%s", testutil.RandomName(name...))
t.Logf("Creating DBFS directory %s", path)
err := t.W.Dbfs.MkdirsByPath(ctx, path)
require.NoError(t, err)
t.Cleanup(func() {
t.Logf("Removing DBFS directory %s", path)
err := t.W.Dbfs.Delete(ctx, files.Delete{
Path: path,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary DBFS directory %s: %#v", path, err)
})
return path
}
func TemporaryRepo(t *WorkspaceT, url string) string {
ctx := t.ctx
me, err := t.W.CurrentUser.Me(ctx)
require.NoError(t, err)
// Prefix the path with "integration-test-" to make it easier to identify.
path := fmt.Sprintf("/Repos/%s/%s", me.UserName, testutil.RandomName("integration-test-"))
t.Logf("Creating repo: %s", path)
resp, err := t.W.Repos.Create(ctx, workspace.CreateRepoRequest{
Url: url,
Path: path,
Provider: "gitHub",
})
require.NoError(t, err)
t.Cleanup(func() {
t.Logf("Removing repo: %s", path)
err := t.W.Repos.Delete(ctx, workspace.DeleteRepoRequest{
RepoId: resp.Id,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove repo %s: %#v", path, err)
})
return path
}
// Create a new Unity Catalog volume in a catalog called "main" in the workspace.
func TemporaryVolume(t *WorkspaceT) string {
ctx := t.ctx
w := t.W
// Create a schema
schema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
CatalogName: "main",
Name: testutil.RandomName("test-schema-"),
})
require.NoError(t, err)
t.Cleanup(func() {
err := w.Schemas.Delete(ctx, catalog.DeleteSchemaRequest{
FullName: schema.FullName,
})
require.NoError(t, err)
})
// Create a volume
volume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
CatalogName: "main",
SchemaName: schema.Name,
Name: "my-volume",
VolumeType: catalog.VolumeTypeManaged,
})
require.NoError(t, err)
t.Cleanup(func() {
err := w.Volumes.Delete(ctx, catalog.DeleteVolumeRequest{
Name: volume.FullName,
})
require.NoError(t, err)
})
return fmt.Sprintf("/Volumes/%s/%s/%s", "main", schema.Name, volume.Name)
}
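These fixtures expect the WorkspaceT wrapper produced by acc.WorkspaceTest, as the rewritten call sites further down in this diff show. A hedged sketch of the resulting test shape, with an illustrative test name and name prefix:

func TestExampleFixtures(t *testing.T) {
	// Skips unless CLOUD_ENV is set; wt wraps t and carries a workspace client.
	ctx, wt := acc.WorkspaceTest(t)

	// Ephemeral directory under the current user's home; removed via t.Cleanup.
	dir := acc.TemporaryWorkspaceDir(wt, "example-")

	_ = ctx
	_ = dir
}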


@ -2,20 +2,16 @@ package acc
import (
"context"
"fmt"
"os"
"testing"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
)
type WorkspaceT struct {
*testing.T
testutil.TestingT
W *databricks.WorkspaceClient
@ -24,7 +20,7 @@ type WorkspaceT struct {
exec *compute.CommandExecutorV2
}
func WorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
func WorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
loadDebugEnvIfRunFromIDE(t, "workspace")
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
@ -33,7 +29,7 @@ func WorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
require.NoError(t, err)
wt := &WorkspaceT{
T: t,
TestingT: t,
W: w,
@ -44,7 +40,7 @@ func WorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
}
// Run the workspace test only on UC workspaces.
func UcWorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
func UcWorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
loadDebugEnvIfRunFromIDE(t, "workspace")
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
@ -60,7 +56,7 @@ func UcWorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
require.NoError(t, err)
wt := &WorkspaceT{
T: t,
TestingT: t,
W: w,
@ -71,7 +67,7 @@ func UcWorkspaceTest(t *testing.T) (context.Context, *WorkspaceT) {
}
func (t *WorkspaceT) TestClusterID() string {
clusterID := testutil.GetEnvOrSkipTest(t.T, "TEST_BRICKS_CLUSTER_ID")
clusterID := testutil.GetEnvOrSkipTest(t, "TEST_BRICKS_CLUSTER_ID")
err := t.W.Clusters.EnsureClusterIsRunning(t.ctx, clusterID)
require.NoError(t, err)
return clusterID
@ -98,30 +94,3 @@ func (t *WorkspaceT) RunPython(code string) (string, error) {
require.True(t, ok, "unexpected type %T", results.Data)
return output, nil
}
func (t *WorkspaceT) TemporaryWorkspaceDir(name ...string) string {
ctx := context.Background()
me, err := t.W.CurrentUser.Me(ctx)
require.NoError(t, err)
basePath := fmt.Sprintf("/Users/%s/%s", me.UserName, testutil.RandomName(name...))
t.Logf("Creating %s", basePath)
err = t.W.Workspace.MkdirsByPath(ctx, basePath)
require.NoError(t, err)
// Remove test directory on test completion.
t.Cleanup(func() {
t.Logf("Removing %s", basePath)
err := t.W.Workspace.Delete(ctx, workspace.Delete{
Path: basePath,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary workspace directory %s: %#v", basePath, err)
})
return basePath
}


@ -3,6 +3,7 @@ package internal
import (
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/stretchr/testify/assert"
)
@ -10,6 +11,6 @@ import (
func TestAccAlertsCreateErrWhenNoArguments(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
_, _, err := RequireErrorRun(t, "alerts-legacy", "create")
_, _, err := testcli.RequireErrorRun(t, "alerts-legacy", "create")
assert.Equal(t, "please provide command input in JSON format by specifying the --json flag", err.Error())
}


@ -11,13 +11,14 @@ import (
"github.com/stretchr/testify/require"
_ "github.com/databricks/cli/cmd/api"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
)
func TestAccApiGet(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
stdout, _ := RequireSuccessfulRun(t, "api", "get", "/api/2.0/preview/scim/v2/Me")
stdout, _ := testcli.RequireSuccessfulRun(t, "api", "get", "/api/2.0/preview/scim/v2/Me")
// Deserialize SCIM API response.
var out map[string]any
@ -44,11 +45,11 @@ func TestAccApiPost(t *testing.T) {
// Post to mkdir
{
RequireSuccessfulRun(t, "api", "post", "--json=@"+requestPath, "/api/2.0/dbfs/mkdirs")
testcli.RequireSuccessfulRun(t, "api", "post", "--json=@"+requestPath, "/api/2.0/dbfs/mkdirs")
}
// Post to delete
{
RequireSuccessfulRun(t, "api", "post", "--json=@"+requestPath, "/api/2.0/dbfs/delete")
testcli.RequireSuccessfulRun(t, "api", "post", "--json=@"+requestPath, "/api/2.0/dbfs/delete")
}
}


@ -5,6 +5,7 @@ import (
"fmt"
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/stretchr/testify/require"
@ -13,7 +14,7 @@ import (
func TestAuthDescribeSuccess(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
stdout, _ := RequireSuccessfulRun(t, "auth", "describe")
stdout, _ := testcli.RequireSuccessfulRun(t, "auth", "describe")
outStr := stdout.String()
w, err := databricks.NewWorkspaceClient(&databricks.Config{})
@ -34,7 +35,7 @@ func TestAuthDescribeSuccess(t *testing.T) {
func TestAuthDescribeFailure(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
stdout, _ := RequireSuccessfulRun(t, "auth", "describe", "--profile", "nonexistent")
stdout, _ := testcli.RequireSuccessfulRun(t, "auth", "describe", "--profile", "nonexistent")
outStr := stdout.String()
require.NotEmpty(t, outStr)


@ -12,8 +12,8 @@ import (
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/libraries"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/compute"
@ -33,12 +33,11 @@ func touchEmptyFile(t *testing.T, path string) {
func TestAccUploadArtifactFileToCorrectRemotePath(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
dir := t.TempDir()
whlPath := filepath.Join(dir, "dist", "test.whl")
touchEmptyFile(t, whlPath)
wsDir := internal.TemporaryWorkspaceDir(t, w)
wsDir := acc.TemporaryWorkspaceDir(wt, "artifact-")
b := &bundle.Bundle{
BundleRootPath: dir,
@ -98,12 +97,11 @@ func TestAccUploadArtifactFileToCorrectRemotePath(t *testing.T) {
func TestAccUploadArtifactFileToCorrectRemotePathWithEnvironments(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
dir := t.TempDir()
whlPath := filepath.Join(dir, "dist", "test.whl")
touchEmptyFile(t, whlPath)
wsDir := internal.TemporaryWorkspaceDir(t, w)
wsDir := acc.TemporaryWorkspaceDir(wt, "artifact-")
b := &bundle.Bundle{
BundleRootPath: dir,
@ -163,13 +161,12 @@ func TestAccUploadArtifactFileToCorrectRemotePathWithEnvironments(t *testing.T)
func TestAccUploadArtifactFileToCorrectRemotePathForVolumes(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
if os.Getenv("TEST_METASTORE_ID") == "" {
t.Skip("Skipping tests that require a UC Volume when metastore id is not set.")
}
volumePath := internal.TemporaryUcVolume(t, w)
volumePath := acc.TemporaryVolume(wt)
dir := t.TempDir()
whlPath := filepath.Join(dir, "dist", "test.whl")
@ -257,7 +254,7 @@ func TestAccUploadArtifactFileToVolumeThatDoesNotExist(t *testing.T) {
require.NoError(t, err)
t.Setenv("BUNDLE_ROOT", bundleRoot)
stdout, stderr, err := internal.RequireErrorRun(t, "bundle", "deploy")
stdout, stderr, err := testcli.RequireErrorRun(t, "bundle", "deploy")
assert.Error(t, err)
assert.Equal(t, fmt.Sprintf(`Error: volume /Volumes/main/%s/doesnotexist does not exist: Not Found
@ -294,7 +291,7 @@ func TestAccUploadArtifactToVolumeNotYetDeployed(t *testing.T) {
require.NoError(t, err)
t.Setenv("BUNDLE_ROOT", bundleRoot)
stdout, stderr, err := internal.RequireErrorRun(t, "bundle", "deploy")
stdout, stderr, err := testcli.RequireErrorRun(t, "bundle", "deploy")
assert.Error(t, err)
assert.Equal(t, fmt.Sprintf(`Error: volume /Volumes/main/%s/my_volume does not exist: Not Found


@ -5,9 +5,8 @@ import (
"path/filepath"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
)
@ -15,7 +14,7 @@ import (
func TestAccBasicBundleDeployWithFailOnActiveRuns(t *testing.T) {
ctx, _ := acc.WorkspaceTest(t)
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
root, err := initTestTemplate(t, ctx, "basic", map[string]any{
"unique_id": uniqueId,


@ -6,8 +6,8 @@ import (
"path/filepath"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/jobs"
@ -21,9 +21,9 @@ func TestAccBindJobToExistingJob(t *testing.T) {
t.Log(env)
ctx, wt := acc.WorkspaceTest(t)
gt := &generateJobTest{T: t, w: wt.W}
gt := &generateJobTest{T: wt, w: wt.W}
nodeTypeId := internal.GetNodeTypeId(env)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "basic", map[string]any{
"unique_id": uniqueId,
@ -39,7 +39,7 @@ func TestAccBindJobToExistingJob(t *testing.T) {
})
t.Setenv("BUNDLE_ROOT", bundleRoot)
c := internal.NewCobraTestRunner(t, "bundle", "deployment", "bind", "foo", fmt.Sprint(jobId), "--auto-approve")
c := testcli.NewRunner(t, "bundle", "deployment", "bind", "foo", fmt.Sprint(jobId), "--auto-approve")
_, _, err = c.Run()
require.NoError(t, err)
@ -61,7 +61,7 @@ func TestAccBindJobToExistingJob(t *testing.T) {
require.Equal(t, job.Settings.Name, fmt.Sprintf("test-job-basic-%s", uniqueId))
require.Contains(t, job.Settings.Tasks[0].SparkPythonTask.PythonFile, "hello_world.py")
c = internal.NewCobraTestRunner(t, "bundle", "deployment", "unbind", "foo")
c = testcli.NewRunner(t, "bundle", "deployment", "unbind", "foo")
_, _, err = c.Run()
require.NoError(t, err)
@ -86,9 +86,9 @@ func TestAccAbortBind(t *testing.T) {
t.Log(env)
ctx, wt := acc.WorkspaceTest(t)
gt := &generateJobTest{T: t, w: wt.W}
gt := &generateJobTest{T: wt, w: wt.W}
nodeTypeId := internal.GetNodeTypeId(env)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "basic", map[string]any{
"unique_id": uniqueId,
@ -107,7 +107,7 @@ func TestAccAbortBind(t *testing.T) {
// Bind should fail because prompting is not possible.
t.Setenv("BUNDLE_ROOT", bundleRoot)
t.Setenv("TERM", "dumb")
c := internal.NewCobraTestRunner(t, "bundle", "deployment", "bind", "foo", fmt.Sprint(jobId))
c := testcli.NewRunner(t, "bundle", "deployment", "bind", "foo", fmt.Sprint(jobId))
// Expect error suggesting to use --auto-approve
_, _, err = c.Run()
@ -135,7 +135,7 @@ func TestAccGenerateAndBind(t *testing.T) {
t.Log(env)
ctx, wt := acc.WorkspaceTest(t)
gt := &generateJobTest{T: t, w: wt.W}
gt := &generateJobTest{T: wt, w: wt.W}
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "with_includes", map[string]any{
@ -157,7 +157,7 @@ func TestAccGenerateAndBind(t *testing.T) {
})
t.Setenv("BUNDLE_ROOT", bundleRoot)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "generate", "job",
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "generate", "job",
"--key", "test_job_key",
"--existing-job-id", fmt.Sprint(jobId),
"--config-dir", filepath.Join(bundleRoot, "resources"),
@ -173,7 +173,7 @@ func TestAccGenerateAndBind(t *testing.T) {
require.Len(t, matches, 1)
c = internal.NewCobraTestRunner(t, "bundle", "deployment", "bind", "test_job_key", fmt.Sprint(jobId), "--auto-approve")
c = testcli.NewRunner(t, "bundle", "deployment", "bind", "test_job_key", fmt.Sprint(jobId), "--auto-approve")
_, _, err = c.Run()
require.NoError(t, err)


@ -4,10 +4,8 @@ import (
"fmt"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/env"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
@ -16,11 +14,11 @@ import (
func TestAccDeployBundleWithCluster(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
if testutil.IsAWSCloud(wt.T) {
if testutil.IsAWSCloud(wt) {
t.Skip("Skipping test for AWS cloud because it is not permitted to create clusters")
}
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
root, err := initTestTemplate(t, ctx, "clusters", map[string]any{
"unique_id": uniqueId,


@ -11,9 +11,9 @@ import (
"testing"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/catalog"
@ -120,7 +120,7 @@ func TestAccBundleDeployUcSchemaFailsWithoutAutoApprove(t *testing.T) {
// Redeploy the bundle
t.Setenv("BUNDLE_ROOT", bundleRoot)
t.Setenv("TERM", "dumb")
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock")
stdout, stderr, err := c.Run()
assert.EqualError(t, err, root.ErrAlreadyPrinted.Error())
@ -132,7 +132,7 @@ func TestAccBundlePipelineDeleteWithoutAutoApprove(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "deploy_then_remove_resources", map[string]any{
"unique_id": uniqueId,
@ -164,7 +164,7 @@ func TestAccBundlePipelineDeleteWithoutAutoApprove(t *testing.T) {
// Redeploy the bundle. Expect it to fail because deleting the pipeline requires --auto-approve.
t.Setenv("BUNDLE_ROOT", bundleRoot)
t.Setenv("TERM", "dumb")
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock")
stdout, stderr, err := c.Run()
assert.EqualError(t, err, root.ErrAlreadyPrinted.Error())
@ -203,7 +203,7 @@ func TestAccBundlePipelineRecreateWithoutAutoApprove(t *testing.T) {
// Redeploy the bundle, pointing the DLT pipeline to a different UC catalog.
t.Setenv("BUNDLE_ROOT", bundleRoot)
t.Setenv("TERM", "dumb")
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--var=\"catalog=whatever\"")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--var=\"catalog=whatever\"")
stdout, stderr, err := c.Run()
assert.EqualError(t, err, root.ErrAlreadyPrinted.Error())
@ -218,7 +218,7 @@ properties such as the 'catalog' or 'storage' are changed:
func TestAccDeployBasicBundleLogs(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
root, err := initTestTemplate(t, ctx, "basic", map[string]any{
"unique_id": uniqueId,
@ -284,7 +284,7 @@ func TestAccDeployUcVolume(t *testing.T) {
// Recreation of the volume without --auto-approve should fail since prompting is not possible
t.Setenv("TERM", "dumb")
t.Setenv("BUNDLE_ROOT", bundleRoot)
stdout, stderr, err := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--var=schema_name=${resources.schemas.schema2.name}").Run()
stdout, stderr, err := testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--var=schema_name=${resources.schemas.schema2.name}").Run()
assert.Error(t, err)
assert.Contains(t, stderr.String(), `This action will result in the deletion or recreation of the following volumes.
For managed volumes, the files stored in the volume are also deleted from your
@ -296,7 +296,7 @@ is removed from the catalog, but the underlying files are not deleted:
// Successfully recreate the volume with --auto-approve
t.Setenv("TERM", "dumb")
t.Setenv("BUNDLE_ROOT", bundleRoot)
_, _, err = internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--var=schema_name=${resources.schemas.schema2.name}", "--auto-approve").Run()
_, _, err = testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--var=schema_name=${resources.schemas.schema2.name}", "--auto-approve").Run()
assert.NoError(t, err)
// Assert the volume is updated successfully


@ -5,9 +5,8 @@ import (
"path/filepath"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -17,7 +16,7 @@ func TestAccBundleDeployThenRemoveResources(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "deploy_then_remove_resources", map[string]any{
"unique_id": uniqueId,


@ -4,9 +4,8 @@ import (
"fmt"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/google/uuid"
"github.com/stretchr/testify/require"
)
@ -14,7 +13,7 @@ import (
func TestAccDeployBasicToSharedWorkspacePath(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
currentUser, err := wt.W.CurrentUser.Me(ctx)
@ -29,10 +28,10 @@ func TestAccDeployBasicToSharedWorkspacePath(t *testing.T) {
require.NoError(t, err)
t.Cleanup(func() {
err = destroyBundle(wt.T, ctx, bundleRoot)
require.NoError(wt.T, err)
err = destroyBundle(wt, ctx, bundleRoot)
require.NoError(wt, err)
})
err = deployBundle(wt.T, ctx, bundleRoot)
require.NoError(wt.T, err)
err = deployBundle(wt, ctx, bundleRoot)
require.NoError(wt, err)
}


@ -7,7 +7,6 @@ import (
"testing"
"github.com/databricks/cli/bundle/deploy"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/google/uuid"
@ -21,7 +20,7 @@ func TestAccFilesAreSyncedCorrectlyWhenNoSnapshot(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
nodeTypeId := internal.GetNodeTypeId(env)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "basic", map[string]any{
"unique_id": uniqueId,


@ -6,9 +6,8 @@ import (
"path/filepath"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
@ -19,7 +18,7 @@ func TestAccBundleDestroy(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "deploy_then_remove_resources", map[string]any{
"unique_id": uniqueId,


@ -9,8 +9,8 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
@ -22,7 +22,7 @@ import (
func TestAccGenerateFromExistingJobAndDeploy(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
gt := &generateJobTest{T: t, w: wt.W}
gt := &generateJobTest{T: wt, w: wt.W}
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "with_includes", map[string]any{
@ -36,7 +36,7 @@ func TestAccGenerateFromExistingJobAndDeploy(t *testing.T) {
})
t.Setenv("BUNDLE_ROOT", bundleRoot)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "generate", "job",
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "generate", "job",
"--existing-job-id", fmt.Sprint(jobId),
"--config-dir", filepath.Join(bundleRoot, "resources"),
"--source-dir", filepath.Join(bundleRoot, "src"))
@ -69,7 +69,7 @@ func TestAccGenerateFromExistingJobAndDeploy(t *testing.T) {
}
type generateJobTest struct {
T *testing.T
T *acc.WorkspaceT
w *databricks.WorkspaceClient
}
@ -77,17 +77,7 @@ func (gt *generateJobTest) createTestJob(ctx context.Context) int64 {
t := gt.T
w := gt.w
var nodeTypeId string
switch testutil.GetCloud(t) {
case testutil.AWS:
nodeTypeId = "i3.xlarge"
case testutil.Azure:
nodeTypeId = "Standard_DS4_v2"
case testutil.GCP:
nodeTypeId = "n1-standard-4"
}
tmpdir := internal.TemporaryWorkspaceDir(t, w)
tmpdir := acc.TemporaryWorkspaceDir(t, "generate-job-")
f, err := filer.NewWorkspaceFilesClient(w, tmpdir)
require.NoError(t, err)
@ -102,7 +92,7 @@ func (gt *generateJobTest) createTestJob(ctx context.Context) int64 {
NewCluster: &compute.ClusterSpec{
SparkVersion: "13.3.x-scala2.12",
NumWorkers: 1,
NodeTypeId: nodeTypeId,
NodeTypeId: testutil.GetCloud(t).NodeTypeID(),
SparkConf: map[string]string{
"spark.databricks.enableWsfs": "true",
"spark.databricks.hive.metastore.glueCatalog.enabled": "true",


@ -9,8 +9,8 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
@ -21,7 +21,7 @@ import (
func TestAccGenerateFromExistingPipelineAndDeploy(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
gt := &generatePipelineTest{T: t, w: wt.W}
gt := &generatePipelineTest{T: wt, w: wt.W}
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "with_includes", map[string]any{
@ -35,7 +35,7 @@ func TestAccGenerateFromExistingPipelineAndDeploy(t *testing.T) {
})
t.Setenv("BUNDLE_ROOT", bundleRoot)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "generate", "pipeline",
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "generate", "pipeline",
"--existing-pipeline-id", fmt.Sprint(pipelineId),
"--config-dir", filepath.Join(bundleRoot, "resources"),
"--source-dir", filepath.Join(bundleRoot, "src"))
@ -77,7 +77,7 @@ func TestAccGenerateFromExistingPipelineAndDeploy(t *testing.T) {
}
type generatePipelineTest struct {
T *testing.T
T *acc.WorkspaceT
w *databricks.WorkspaceClient
}
@ -85,7 +85,7 @@ func (gt *generatePipelineTest) createTestPipeline(ctx context.Context) (string,
t := gt.T
w := gt.w
tmpdir := internal.TemporaryWorkspaceDir(t, w)
tmpdir := acc.TemporaryWorkspaceDir(t, "generate-pipeline-")
f, err := filer.NewWorkspaceFilesClient(w, tmpdir)
require.NoError(t, err)
@ -95,8 +95,7 @@ func (gt *generatePipelineTest) createTestPipeline(ctx context.Context) (string,
err = f.Write(ctx, "test.py", strings.NewReader("print('Hello!')"))
require.NoError(t, err)
env := testutil.GetEnvOrSkipTest(t, "CLOUD_ENV")
nodeTypeId := internal.GetNodeTypeId(env)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
name := testutil.RandomName("generated-pipeline-")
resp, err := w.Pipelines.Create(ctx, pipelines.CreatePipeline{


@ -9,11 +9,11 @@ import (
"os/exec"
"path/filepath"
"strings"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/libs/filer"
@ -26,12 +26,12 @@ import (
const defaultSparkVersion = "13.3.x-snapshot-scala2.12"
func initTestTemplate(t *testing.T, ctx context.Context, templateName string, config map[string]any) (string, error) {
func initTestTemplate(t testutil.TestingT, ctx context.Context, templateName string, config map[string]any) (string, error) {
bundleRoot := t.TempDir()
return initTestTemplateWithBundleRoot(t, ctx, templateName, config, bundleRoot)
}
func initTestTemplateWithBundleRoot(t *testing.T, ctx context.Context, templateName string, config map[string]any, bundleRoot string) (string, error) {
func initTestTemplateWithBundleRoot(t testutil.TestingT, ctx context.Context, templateName string, config map[string]any, bundleRoot string) (string, error) {
templateRoot := filepath.Join("bundles", templateName)
configFilePath, err := writeConfigFile(t, config)
@ -49,7 +49,7 @@ func initTestTemplateWithBundleRoot(t *testing.T, ctx context.Context, templateN
return bundleRoot, err
}
func writeConfigFile(t *testing.T, config map[string]any) (string, error) {
func writeConfigFile(t testutil.TestingT, config map[string]any) (string, error) {
bytes, err := json.Marshal(config)
if err != nil {
return "", err
@ -63,79 +63,79 @@ func writeConfigFile(t *testing.T, config map[string]any) (string, error) {
return filepath, err
}
func validateBundle(t *testing.T, ctx context.Context, path string) ([]byte, error) {
func validateBundle(t testutil.TestingT, ctx context.Context, path string) ([]byte, error) {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "validate", "--output", "json")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "validate", "--output", "json")
stdout, _, err := c.Run()
return stdout.Bytes(), err
}
func mustValidateBundle(t *testing.T, ctx context.Context, path string) []byte {
func mustValidateBundle(t testutil.TestingT, ctx context.Context, path string) []byte {
data, err := validateBundle(t, ctx, path)
require.NoError(t, err)
return data
}
func unmarshalConfig(t *testing.T, data []byte) *bundle.Bundle {
func unmarshalConfig(t testutil.TestingT, data []byte) *bundle.Bundle {
bundle := &bundle.Bundle{}
err := json.Unmarshal(data, &bundle.Config)
require.NoError(t, err)
return bundle
}
func deployBundle(t *testing.T, ctx context.Context, path string) error {
func deployBundle(t testutil.TestingT, ctx context.Context, path string) error {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--auto-approve")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "deploy", "--force-lock", "--auto-approve")
_, _, err := c.Run()
return err
}
func deployBundleWithArgs(t *testing.T, ctx context.Context, path string, args ...string) (string, string, error) {
func deployBundleWithArgs(t testutil.TestingT, ctx context.Context, path string, args ...string) (string, string, error) {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
args = append([]string{"bundle", "deploy"}, args...)
c := internal.NewCobraTestRunnerWithContext(t, ctx, args...)
c := testcli.NewRunnerWithContext(t, ctx, args...)
stdout, stderr, err := c.Run()
return stdout.String(), stderr.String(), err
}
func deployBundleWithFlags(t *testing.T, ctx context.Context, path string, flags []string) error {
func deployBundleWithFlags(t testutil.TestingT, ctx context.Context, path string, flags []string) error {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
args := []string{"bundle", "deploy", "--force-lock"}
args = append(args, flags...)
c := internal.NewCobraTestRunnerWithContext(t, ctx, args...)
c := testcli.NewRunnerWithContext(t, ctx, args...)
_, _, err := c.Run()
return err
}
func runResource(t *testing.T, ctx context.Context, path, key string) (string, error) {
func runResource(t testutil.TestingT, ctx context.Context, path, key string) (string, error) {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
ctx = cmdio.NewContext(ctx, cmdio.Default())
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "run", key)
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "run", key)
stdout, _, err := c.Run()
return stdout.String(), err
}
func runResourceWithParams(t *testing.T, ctx context.Context, path, key string, params ...string) (string, error) {
func runResourceWithParams(t testutil.TestingT, ctx context.Context, path, key string, params ...string) (string, error) {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
ctx = cmdio.NewContext(ctx, cmdio.Default())
args := make([]string, 0)
args = append(args, "bundle", "run", key)
args = append(args, params...)
c := internal.NewCobraTestRunnerWithContext(t, ctx, args...)
c := testcli.NewRunnerWithContext(t, ctx, args...)
stdout, _, err := c.Run()
return stdout.String(), err
}
func destroyBundle(t *testing.T, ctx context.Context, path string) error {
func destroyBundle(t testutil.TestingT, ctx context.Context, path string) error {
ctx = env.Set(ctx, "BUNDLE_ROOT", path)
c := internal.NewCobraTestRunnerWithContext(t, ctx, "bundle", "destroy", "--auto-approve")
c := testcli.NewRunnerWithContext(t, ctx, "bundle", "destroy", "--auto-approve")
_, _, err := c.Run()
return err
}
func getBundleRemoteRootPath(w *databricks.WorkspaceClient, t *testing.T, uniqueId string) string {
func getBundleRemoteRootPath(w *databricks.WorkspaceClient, t testutil.TestingT, uniqueId string) string {
// Compute root path for the bundle deployment
me, err := w.CurrentUser.Me(context.Background())
require.NoError(t, err)
@ -143,7 +143,7 @@ func getBundleRemoteRootPath(w *databricks.WorkspaceClient, t *testing.T, unique
return root
}
func blackBoxRun(t *testing.T, root string, args ...string) (stdout, stderr string) {
func blackBoxRun(t testutil.TestingT, root string, args ...string) (stdout, stderr string) {
gitRoot, err := folders.FindDirWithLeaf(".", ".git")
require.NoError(t, err)


@ -10,9 +10,8 @@ import (
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/metadata"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
@ -23,7 +22,7 @@ func TestAccJobsMetadataFile(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
bundleRoot, err := initTestTemplate(t, ctx, "job_metadata", map[string]any{
"unique_id": uniqueId,


@ -4,9 +4,8 @@ import (
"context"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/env"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/listing"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/google/uuid"
@ -25,7 +24,7 @@ func TestAccLocalStateStaleness(t *testing.T) {
// Because of deploy (2), the locally cached state of bundle instance A should be stale.
// Then for deploy (3), it must use the remote state over the stale local state.
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
uniqueId := uuid.New().String()
initialize := func() string {
root, err := initTestTemplate(t, ctx, "basic", map[string]any{


@ -3,7 +3,6 @@ package bundle
import (
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/env"
@ -14,7 +13,7 @@ import (
func runPythonWheelTest(t *testing.T, templateName, sparkVersion string, pythonWheelWrapper bool) {
ctx, _ := acc.WorkspaceTest(t)
nodeTypeId := internal.GetNodeTypeId(env.Get(ctx, "CLOUD_ENV"))
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
instancePoolId := env.Get(ctx, "TEST_INSTANCE_POOL_ID")
bundleRoot, err := initTestTemplate(t, ctx, templateName, map[string]any{
"node_type_id": nodeTypeId,
@ -57,7 +56,7 @@ func TestAccPythonWheelTaskDeployAndRunWithWrapper(t *testing.T) {
func TestAccPythonWheelTaskDeployAndRunOnInteractiveCluster(t *testing.T) {
_, wt := acc.WorkspaceTest(t)
if testutil.IsAWSCloud(wt.T) {
if testutil.IsAWSCloud(wt) {
t.Skip("Skipping test for AWS cloud because it is not permitted to create clusters")
}


@ -4,7 +4,6 @@ import (
"context"
"testing"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/env"
@ -13,8 +12,7 @@ import (
)
func runSparkJarTestCommon(t *testing.T, ctx context.Context, sparkVersion, artifactPath string) {
cloudEnv := testutil.GetEnvOrSkipTest(t, "CLOUD_ENV")
nodeTypeId := internal.GetNodeTypeId(cloudEnv)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
tmpDir := t.TempDir()
instancePoolId := env.Get(ctx, "TEST_INSTANCE_POOL_ID")
bundleRoot, err := initTestTemplateWithBundleRoot(t, ctx, "spark_jar_task", map[string]any{
@ -42,7 +40,7 @@ func runSparkJarTestCommon(t *testing.T, ctx context.Context, sparkVersion, arti
func runSparkJarTestFromVolume(t *testing.T, sparkVersion string) {
ctx, wt := acc.UcWorkspaceTest(t)
volumePath := internal.TemporaryUcVolume(t, wt.W)
volumePath := acc.TemporaryVolume(wt)
ctx = env.Set(ctx, "DATABRICKS_BUNDLE_TARGET", "volume")
runSparkJarTestCommon(t, ctx, sparkVersion, volumePath)
}


@ -6,6 +6,7 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/listing"
"github.com/databricks/databricks-sdk-go/service/compute"
@ -16,7 +17,7 @@ import (
func TestAccClustersList(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
stdout, stderr := RequireSuccessfulRun(t, "clusters", "list")
stdout, stderr := testcli.RequireSuccessfulRun(t, "clusters", "list")
outStr := stdout.String()
assert.Contains(t, outStr, "ID")
assert.Contains(t, outStr, "Name")
@ -32,14 +33,14 @@ func TestAccClustersGet(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
clusterId := findValidClusterID(t)
stdout, stderr := RequireSuccessfulRun(t, "clusters", "get", clusterId)
stdout, stderr := testcli.RequireSuccessfulRun(t, "clusters", "get", clusterId)
outStr := stdout.String()
assert.Contains(t, outStr, fmt.Sprintf(`"cluster_id":"%s"`, clusterId))
assert.Equal(t, "", stderr.String())
}
func TestClusterCreateErrorWhenNoArguments(t *testing.T) {
_, _, err := RequireErrorRun(t, "clusters", "create")
_, _, err := testcli.RequireErrorRun(t, "clusters", "create")
assert.Contains(t, err.Error(), "accepts 1 arg(s), received 0")
}


@ -7,6 +7,7 @@ import (
"testing"
_ "github.com/databricks/cli/cmd/fs"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/filer"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -21,7 +22,7 @@ func TestAccFsCompletion(t *testing.T) {
f, tmpDir := setupDbfsFiler(t)
setupCompletionFile(t, f)
stdout, _ := RequireSuccessfulRun(t, "__complete", "fs", "ls", tmpDir+"/")
stdout, _ := testcli.RequireSuccessfulRun(t, "__complete", "fs", "ls", tmpDir+"/")
expectedOutput := fmt.Sprintf("%s/dir1/\n:2\n", tmpDir)
assert.Equal(t, expectedOutput, stdout.String())
}


@ -28,7 +28,7 @@ func TestAccDashboardAssumptions_WorkspaceImport(t *testing.T) {
dashboardPayload := []byte(`{"pages":[{"name":"2506f97a","displayName":"New Page"}]}`)
warehouseId := testutil.GetEnvOrSkipTest(t, "TEST_DEFAULT_WAREHOUSE_ID")
dir := wt.TemporaryWorkspaceDir("dashboard-assumptions-")
dir := acc.TemporaryWorkspaceDir(wt, "dashboard-assumptions-")
dashboard, err := wt.W.Lakeview.Create(ctx, dashboards.CreateDashboardRequest{
Dashboard: &dashboards.Dashboard{


@ -122,7 +122,7 @@ func TestAccFilerRecursiveDelete(t *testing.T) {
for _, testCase := range []struct {
name string
f func(t *testing.T) (filer.Filer, string)
f func(t testutil.TestingT) (filer.Filer, string)
}{
{"local", setupLocalFiler},
{"workspace files", setupWsfsFiler},
@ -233,7 +233,7 @@ func TestAccFilerReadWrite(t *testing.T) {
for _, testCase := range []struct {
name string
f func(t *testing.T) (filer.Filer, string)
f func(t testutil.TestingT) (filer.Filer, string)
}{
{"local", setupLocalFiler},
{"workspace files", setupWsfsFiler},
@ -342,7 +342,7 @@ func TestAccFilerReadDir(t *testing.T) {
for _, testCase := range []struct {
name string
f func(t *testing.T) (filer.Filer, string)
f func(t testutil.TestingT) (filer.Filer, string)
}{
{"local", setupLocalFiler},
{"workspace files", setupWsfsFiler},


@ -7,9 +7,10 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -27,7 +28,7 @@ func TestAccFsCat(t *testing.T) {
err := f.Write(context.Background(), "hello.txt", strings.NewReader("abcd"), filer.CreateParentDirectories)
require.NoError(t, err)
stdout, stderr := RequireSuccessfulRun(t, "fs", "cat", path.Join(tmpDir, "hello.txt"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "cat", path.Join(tmpDir, "hello.txt"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "abcd", stdout.String())
})
@ -47,7 +48,7 @@ func TestAccFsCatOnADir(t *testing.T) {
err := f.Mkdir(context.Background(), "dir1")
require.NoError(t, err)
_, _, err = RequireErrorRun(t, "fs", "cat", path.Join(tmpDir, "dir1"))
_, _, err = testcli.RequireErrorRun(t, "fs", "cat", path.Join(tmpDir, "dir1"))
assert.ErrorAs(t, err, &filer.NotAFile{})
})
}
@ -64,7 +65,7 @@ func TestAccFsCatOnNonExistentFile(t *testing.T) {
_, tmpDir := tc.setupFiler(t)
_, _, err := RequireErrorRun(t, "fs", "cat", path.Join(tmpDir, "non-existent-file"))
_, _, err := testcli.RequireErrorRun(t, "fs", "cat", path.Join(tmpDir, "non-existent-file"))
assert.ErrorIs(t, err, fs.ErrNotExist)
})
}
@ -73,25 +74,21 @@ func TestAccFsCatOnNonExistentFile(t *testing.T) {
func TestAccFsCatForDbfsInvalidScheme(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
_, _, err := RequireErrorRun(t, "fs", "cat", "dab:/non-existent-file")
_, _, err := testcli.RequireErrorRun(t, "fs", "cat", "dab:/non-existent-file")
assert.ErrorContains(t, err, "invalid scheme: dab")
}
func TestAccFsCatDoesNotSupportOutputModeJson(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
tmpDir := TemporaryDbfsDir(t, w)
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
tmpDir := acc.TemporaryDbfsDir(wt, "fs-cat-")
f, err := filer.NewDbfsClient(w, tmpDir)
require.NoError(t, err)
err = f.Write(ctx, "hello.txt", strings.NewReader("abc"))
require.NoError(t, err)
_, _, err = RequireErrorRun(t, "fs", "cat", "dbfs:"+path.Join(tmpDir, "hello.txt"), "--output=json")
_, _, err = testcli.RequireErrorRun(t, "fs", "cat", "dbfs:"+path.Join(tmpDir, "hello.txt"), "--output=json")
assert.ErrorContains(t, err, "json output not supported")
}

View File

@ -10,6 +10,7 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/stretchr/testify/assert"
@ -62,8 +63,8 @@ func assertTargetDir(t *testing.T, ctx context.Context, f filer.Filer) {
type cpTest struct {
name string
setupSource func(*testing.T) (filer.Filer, string)
setupTarget func(*testing.T) (filer.Filer, string)
setupSource func(testutil.TestingT) (filer.Filer, string)
setupTarget func(testutil.TestingT) (filer.Filer, string)
}
func copyTests() []cpTest {
@ -134,7 +135,7 @@ func TestAccFsCpDir(t *testing.T) {
targetFiler, targetDir := tc.setupTarget(t)
setupSourceDir(t, context.Background(), sourceFiler)
RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive")
testcli.RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive")
assertTargetDir(t, context.Background(), targetFiler)
})
@ -154,7 +155,7 @@ func TestAccFsCpFileToFile(t *testing.T) {
targetFiler, targetDir := tc.setupTarget(t)
setupSourceFile(t, context.Background(), sourceFiler)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "foo.txt"), path.Join(targetDir, "bar.txt"))
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "foo.txt"), path.Join(targetDir, "bar.txt"))
assertTargetFile(t, context.Background(), targetFiler, "bar.txt")
})
@ -174,7 +175,7 @@ func TestAccFsCpFileToDir(t *testing.T) {
targetFiler, targetDir := tc.setupTarget(t)
setupSourceFile(t, context.Background(), sourceFiler)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "foo.txt"), targetDir)
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "foo.txt"), targetDir)
assertTargetFile(t, context.Background(), targetFiler, "foo.txt")
})
@ -193,7 +194,7 @@ func TestAccFsCpFileToDirForWindowsPaths(t *testing.T) {
windowsPath := filepath.Join(filepath.FromSlash(sourceDir), "foo.txt")
RequireSuccessfulRun(t, "fs", "cp", windowsPath, targetDir)
testcli.RequireSuccessfulRun(t, "fs", "cp", windowsPath, targetDir)
assertTargetFile(t, ctx, targetFiler, "foo.txt")
}
@ -214,7 +215,7 @@ func TestAccFsCpDirToDirFileNotOverwritten(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/hello.txt", strings.NewReader("this should not be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive")
testcli.RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive")
assertFileContent(t, context.Background(), targetFiler, "a/b/c/hello.txt", "this should not be overwritten")
assertFileContent(t, context.Background(), targetFiler, "query.sql", "SELECT 1")
assertFileContent(t, context.Background(), targetFiler, "pyNb.py", "# Databricks notebook source\nprint(123)")
@ -239,7 +240,7 @@ func TestAccFsCpFileToDirFileNotOverwritten(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/hello.txt", strings.NewReader("this should not be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c"))
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c"))
assertFileContent(t, context.Background(), targetFiler, "a/b/c/hello.txt", "this should not be overwritten")
})
}
@ -262,7 +263,7 @@ func TestAccFsCpFileToFileFileNotOverwritten(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/dontoverwrite.txt", strings.NewReader("this should not be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c/dontoverwrite.txt"))
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c/dontoverwrite.txt"))
assertFileContent(t, context.Background(), targetFiler, "a/b/c/dontoverwrite.txt", "this should not be overwritten")
})
}
@ -285,7 +286,7 @@ func TestAccFsCpDirToDirWithOverwriteFlag(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/hello.txt", strings.NewReader("this should be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive", "--overwrite")
testcli.RequireSuccessfulRun(t, "fs", "cp", sourceDir, targetDir, "--recursive", "--overwrite")
assertTargetDir(t, context.Background(), targetFiler)
})
}
@ -308,7 +309,7 @@ func TestAccFsCpFileToFileWithOverwriteFlag(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/overwritten.txt", strings.NewReader("this should be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c/overwritten.txt"), "--overwrite")
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c/overwritten.txt"), "--overwrite")
assertFileContent(t, context.Background(), targetFiler, "a/b/c/overwritten.txt", "hello, world\n")
})
}
@ -331,7 +332,7 @@ func TestAccFsCpFileToDirWithOverwriteFlag(t *testing.T) {
err := targetFiler.Write(context.Background(), "a/b/c/hello.txt", strings.NewReader("this should be overwritten"), filer.CreateParentDirectories)
require.NoError(t, err)
RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c"), "--overwrite")
testcli.RequireSuccessfulRun(t, "fs", "cp", path.Join(sourceDir, "a/b/c/hello.txt"), path.Join(targetDir, "a/b/c"), "--overwrite")
assertFileContent(t, context.Background(), targetFiler, "a/b/c/hello.txt", "hello, world\n")
})
}
@ -348,7 +349,7 @@ func TestAccFsCpErrorsWhenSourceIsDirWithoutRecursiveFlag(t *testing.T) {
_, tmpDir := tc.setupFiler(t)
_, _, err := RequireErrorRun(t, "fs", "cp", path.Join(tmpDir), path.Join(tmpDir, "foobar"))
_, _, err := testcli.RequireErrorRun(t, "fs", "cp", path.Join(tmpDir), path.Join(tmpDir, "foobar"))
r := regexp.MustCompile("source path .* is a directory. Please specify the --recursive flag")
assert.Regexp(t, r, err.Error())
})
@ -358,7 +359,7 @@ func TestAccFsCpErrorsWhenSourceIsDirWithoutRecursiveFlag(t *testing.T) {
func TestAccFsCpErrorsOnInvalidScheme(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
_, _, err := RequireErrorRun(t, "fs", "cp", "dbfs:/a", "https:/b")
_, _, err := testcli.RequireErrorRun(t, "fs", "cp", "dbfs:/a", "https:/b")
assert.Equal(t, "invalid scheme: https", err.Error())
}
@ -379,7 +380,7 @@ func TestAccFsCpSourceIsDirectoryButTargetIsFile(t *testing.T) {
err := targetFiler.Write(context.Background(), "my_target", strings.NewReader("I'll block any attempts to recursively copy"), filer.CreateParentDirectories)
require.NoError(t, err)
_, _, err = RequireErrorRun(t, "fs", "cp", sourceDir, path.Join(targetDir, "my_target"), "--recursive")
_, _, err = testcli.RequireErrorRun(t, "fs", "cp", sourceDir, path.Join(targetDir, "my_target"), "--recursive")
assert.Error(t, err)
})
}

View File

@ -10,6 +10,7 @@ import (
"testing"
_ "github.com/databricks/cli/cmd/fs"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/stretchr/testify/assert"
@ -18,7 +19,7 @@ import (
type fsTest struct {
name string
setupFiler func(t *testing.T) (filer.Filer, string)
setupFiler func(t testutil.TestingT) (filer.Filer, string)
}
var fsTests = []fsTest{
@ -51,7 +52,7 @@ func TestAccFsLs(t *testing.T) {
f, tmpDir := tc.setupFiler(t)
setupLsFiles(t, f)
stdout, stderr := RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json")
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json")
assert.Equal(t, "", stderr.String())
var parsedStdout []map[string]any
@ -84,7 +85,7 @@ func TestAccFsLsWithAbsolutePaths(t *testing.T) {
f, tmpDir := tc.setupFiler(t)
setupLsFiles(t, f)
stdout, stderr := RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json", "--absolute")
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json", "--absolute")
assert.Equal(t, "", stderr.String())
var parsedStdout []map[string]any
@ -116,7 +117,7 @@ func TestAccFsLsOnFile(t *testing.T) {
f, tmpDir := tc.setupFiler(t)
setupLsFiles(t, f)
_, _, err := RequireErrorRun(t, "fs", "ls", path.Join(tmpDir, "a", "hello.txt"), "--output=json")
_, _, err := testcli.RequireErrorRun(t, "fs", "ls", path.Join(tmpDir, "a", "hello.txt"), "--output=json")
assert.Regexp(t, regexp.MustCompile("not a directory: .*/a/hello.txt"), err.Error())
assert.ErrorAs(t, err, &filer.NotADirectory{})
})
@ -134,7 +135,7 @@ func TestAccFsLsOnEmptyDir(t *testing.T) {
_, tmpDir := tc.setupFiler(t)
stdout, stderr := RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json")
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "ls", tmpDir, "--output=json")
assert.Equal(t, "", stderr.String())
var parsedStdout []map[string]any
err := json.Unmarshal(stdout.Bytes(), &parsedStdout)
@ -157,7 +158,7 @@ func TestAccFsLsForNonexistingDir(t *testing.T) {
_, tmpDir := tc.setupFiler(t)
_, _, err := RequireErrorRun(t, "fs", "ls", path.Join(tmpDir, "nonexistent"), "--output=json")
_, _, err := testcli.RequireErrorRun(t, "fs", "ls", path.Join(tmpDir, "nonexistent"), "--output=json")
assert.ErrorIs(t, err, fs.ErrNotExist)
assert.Regexp(t, regexp.MustCompile("no such directory: .*/nonexistent"), err.Error())
})
@ -169,6 +170,6 @@ func TestAccFsLsWithoutScheme(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
_, _, err := RequireErrorRun(t, "fs", "ls", "/path-without-a-dbfs-scheme", "--output=json")
_, _, err := testcli.RequireErrorRun(t, "fs", "ls", "/path-without-a-dbfs-scheme", "--output=json")
assert.ErrorIs(t, err, fs.ErrNotExist)
}

View File

@ -7,6 +7,7 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/filer"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -24,7 +25,7 @@ func TestAccFsMkdir(t *testing.T) {
f, tmpDir := tc.setupFiler(t)
// create directory "a"
stdout, stderr := RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())
@ -49,7 +50,7 @@ func TestAccFsMkdirCreatesIntermediateDirectories(t *testing.T) {
f, tmpDir := tc.setupFiler(t)
// create directory "a/b/c"
stdout, stderr := RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a", "b", "c"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a", "b", "c"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())
@ -90,7 +91,7 @@ func TestAccFsMkdirWhenDirectoryAlreadyExists(t *testing.T) {
require.NoError(t, err)
// assert run is successful without any errors
stdout, stderr := RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "mkdir", path.Join(tmpDir, "a"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())
})
@ -110,7 +111,7 @@ func TestAccFsMkdirWhenFileExistsAtPath(t *testing.T) {
require.NoError(t, err)
// assert mkdir fails
_, _, err = RequireErrorRun(t, "fs", "mkdir", path.Join(tmpDir, "hello"))
_, _, err = testcli.RequireErrorRun(t, "fs", "mkdir", path.Join(tmpDir, "hello"))
// Different cloud providers or cloud configurations return different errors.
regex := regexp.MustCompile(`(^|: )Path is a file: .*$|(^|: )Cannot create directory .* because .* is an existing file\.$|(^|: )mkdirs\(hadoopPath: .*, permission: rwxrwxrwx\): failed$|(^|: )"The specified path already exists.".*$`)
@ -127,7 +128,7 @@ func TestAccFsMkdirWhenFileExistsAtPath(t *testing.T) {
require.NoError(t, err)
// assert mkdir fails
_, _, err = RequireErrorRun(t, "fs", "mkdir", path.Join(tmpDir, "hello"))
_, _, err = testcli.RequireErrorRun(t, "fs", "mkdir", path.Join(tmpDir, "hello"))
assert.ErrorAs(t, err, &filer.FileAlreadyExistsError{})
})

View File

@ -7,6 +7,7 @@ import (
"strings"
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/libs/filer"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -31,7 +32,7 @@ func TestAccFsRmFile(t *testing.T) {
assert.NoError(t, err)
// Run rm command
stdout, stderr := RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "hello.txt"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "hello.txt"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())
@ -61,7 +62,7 @@ func TestAccFsRmEmptyDir(t *testing.T) {
assert.NoError(t, err)
// Run rm command
stdout, stderr := RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "a"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "a"))
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())
@ -95,7 +96,7 @@ func TestAccFsRmNonEmptyDirectory(t *testing.T) {
assert.NoError(t, err)
// Run rm command
_, _, err = RequireErrorRun(t, "fs", "rm", path.Join(tmpDir, "a"))
_, _, err = testcli.RequireErrorRun(t, "fs", "rm", path.Join(tmpDir, "a"))
assert.ErrorIs(t, err, fs.ErrInvalid)
assert.ErrorAs(t, err, &filer.DirectoryNotEmptyError{})
})
@ -114,7 +115,7 @@ func TestAccFsRmForNonExistentFile(t *testing.T) {
_, tmpDir := tc.setupFiler(t)
// Expect error if file does not exist
_, _, err := RequireErrorRun(t, "fs", "rm", path.Join(tmpDir, "does-not-exist"))
_, _, err := testcli.RequireErrorRun(t, "fs", "rm", path.Join(tmpDir, "does-not-exist"))
assert.ErrorIs(t, err, fs.ErrNotExist)
})
}
@ -144,7 +145,7 @@ func TestAccFsRmDirRecursively(t *testing.T) {
assert.NoError(t, err)
// Run rm command
stdout, stderr := RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "a"), "--recursive")
stdout, stderr := testcli.RequireSuccessfulRun(t, "fs", "rm", path.Join(tmpDir, "a"), "--recursive")
assert.Equal(t, "", stderr.String())
assert.Equal(t, "", stdout.String())

View File

@ -8,6 +8,7 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/git"
@ -44,9 +45,9 @@ func TestAccFetchRepositoryInfoAPI_FromRepo(t *testing.T) {
require.NoError(t, err)
targetPath := testutil.RandomName(path.Join("/Workspace/Users", me.UserName, "/testing-clone-bundle-examples-"))
stdout, stderr := RequireSuccessfulRun(t, "repos", "create", examplesRepoUrl, examplesRepoProvider, "--path", targetPath)
stdout, stderr := testcli.RequireSuccessfulRun(t, "repos", "create", examplesRepoUrl, examplesRepoProvider, "--path", targetPath)
t.Cleanup(func() {
RequireSuccessfulRun(t, "repos", "delete", targetPath)
testcli.RequireSuccessfulRun(t, "repos", "delete", targetPath)
})
assert.Empty(t, stderr.String())
@ -71,9 +72,9 @@ func TestAccFetchRepositoryInfoAPI_FromNonRepo(t *testing.T) {
require.NoError(t, err)
rootPath := testutil.RandomName(path.Join("/Workspace/Users", me.UserName, "testing-nonrepo-"))
_, stderr := RequireSuccessfulRun(t, "workspace", "mkdirs", path.Join(rootPath, "a/b/c"))
_, stderr := testcli.RequireSuccessfulRun(t, "workspace", "mkdirs", path.Join(rootPath, "a/b/c"))
t.Cleanup(func() {
RequireSuccessfulRun(t, "workspace", "delete", "--recursive", rootPath)
testcli.RequireSuccessfulRun(t, "workspace", "delete", "--recursive", rootPath)
})
assert.Empty(t, stderr.String())

View File

@ -1,528 +1,21 @@
package internal
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"os"
"path"
"path/filepath"
"reflect"
"strings"
"sync"
"testing"
"time"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/cli/cmd"
_ "github.com/databricks/cli/cmd/version"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/files"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/stretchr/testify/require"
_ "github.com/databricks/cli/cmd/workspace"
)
// Helper for running the root command in the background.
// It ensures that the background goroutine terminates upon
// test completion through cancelling the command context.
type cobraTestRunner struct {
*testing.T
args []string
stdout bytes.Buffer
stderr bytes.Buffer
stdinR *io.PipeReader
stdinW *io.PipeWriter
ctx context.Context
// Line-by-line output.
// Background goroutines populate these channels by reading from stdout/stderr pipes.
stdoutLines <-chan string
stderrLines <-chan string
errch <-chan error
}
func consumeLines(ctx context.Context, wg *sync.WaitGroup, r io.Reader) <-chan string {
ch := make(chan string, 30000)
wg.Add(1)
go func() {
defer close(ch)
defer wg.Done()
scanner := bufio.NewScanner(r)
for scanner.Scan() {
// We expect to always be able to send these lines into the channel.
// If we can't, it means the channel is full and likely there is a problem
// in either the test or the code under test.
select {
case <-ctx.Done():
return
case ch <- scanner.Text():
continue
default:
panic("line buffer is full")
}
}
}()
return ch
}
func (t *cobraTestRunner) registerFlagCleanup(c *cobra.Command) {
// Find the target command that will be run. Example: if the command run is `databricks fs cp`,
// the target command is `cp`.
targetCmd, _, err := c.Find(t.args)
if err != nil && strings.HasPrefix(err.Error(), "unknown command") {
// even if the command is unknown, we can proceed
require.NotNil(t, targetCmd)
} else {
require.NoError(t, err)
}
// Force initialization of default flags.
// These are initialized by cobra at execution time and would otherwise
// not be cleaned up by the cleanup function below.
targetCmd.InitDefaultHelpFlag()
targetCmd.InitDefaultVersionFlag()
// Restore flag values to their original value on test completion.
targetCmd.Flags().VisitAll(func(f *pflag.Flag) {
v := reflect.ValueOf(f.Value)
if v.Kind() == reflect.Ptr {
v = v.Elem()
}
// Store copy of the current flag value.
reset := reflect.New(v.Type()).Elem()
reset.Set(v)
t.Cleanup(func() {
v.Set(reset)
})
})
}
// Like [cobraTestRunner.Eventually], but waits for the given text to appear on stdout.
func (t *cobraTestRunner) WaitForTextPrinted(text string, timeout time.Duration) {
t.Eventually(func() bool {
currentStdout := t.stdout.String()
return strings.Contains(currentStdout, text)
}, timeout, 50*time.Millisecond)
}
func (t *cobraTestRunner) WaitForOutput(text string, timeout time.Duration) {
require.Eventually(t.T, func() bool {
currentStdout := t.stdout.String()
currentErrout := t.stderr.String()
return strings.Contains(currentStdout, text) || strings.Contains(currentErrout, text)
}, timeout, 50*time.Millisecond)
}
func (t *cobraTestRunner) WithStdin() {
reader, writer := io.Pipe()
t.stdinR = reader
t.stdinW = writer
}
func (t *cobraTestRunner) CloseStdin() {
if t.stdinW == nil {
panic("no standard input configured")
}
t.stdinW.Close()
}
func (t *cobraTestRunner) SendText(text string) {
if t.stdinW == nil {
panic("no standard input configured")
}
_, err := t.stdinW.Write([]byte(text + "\n"))
if err != nil {
panic("Failed to to write to t.stdinW")
}
}
func (t *cobraTestRunner) RunBackground() {
var stdoutR, stderrR io.Reader
var stdoutW, stderrW io.WriteCloser
stdoutR, stdoutW = io.Pipe()
stderrR, stderrW = io.Pipe()
ctx := cmdio.NewContext(t.ctx, &cmdio.Logger{
Mode: flags.ModeAppend,
Reader: bufio.Reader{},
Writer: stderrW,
})
cli := cmd.New(ctx)
cli.SetOut(stdoutW)
cli.SetErr(stderrW)
cli.SetArgs(t.args)
if t.stdinW != nil {
cli.SetIn(t.stdinR)
}
// Register cleanup function to restore flags to their original values
// once the test has been executed. This is needed because flag values reside
// in a global singleton data structure, and thus subsequent tests might
// otherwise interfere with each other.
t.registerFlagCleanup(cli)
errch := make(chan error)
ctx, cancel := context.WithCancel(ctx)
// Tee stdout/stderr to buffers.
stdoutR = io.TeeReader(stdoutR, &t.stdout)
stderrR = io.TeeReader(stderrR, &t.stderr)
// Consume stdout/stderr line-by-line.
var wg sync.WaitGroup
t.stdoutLines = consumeLines(ctx, &wg, stdoutR)
t.stderrLines = consumeLines(ctx, &wg, stderrR)
// Run command in background.
go func() {
err := root.Execute(ctx, cli)
if err != nil {
t.Logf("Error running command: %s", err)
}
// Close pipes to signal EOF.
stdoutW.Close()
stderrW.Close()
// Wait for the [consumeLines] routines to finish now that
// the pipes they're reading from have closed.
wg.Wait()
if t.stdout.Len() > 0 {
// Make a copy of the buffer such that it remains "unread".
scanner := bufio.NewScanner(bytes.NewBuffer(t.stdout.Bytes()))
for scanner.Scan() {
t.Logf("[databricks stdout]: %s", scanner.Text())
}
}
if t.stderr.Len() > 0 {
// Make a copy of the buffer such that it remains "unread".
scanner := bufio.NewScanner(bytes.NewBuffer(t.stderr.Bytes()))
for scanner.Scan() {
t.Logf("[databricks stderr]: %s", scanner.Text())
}
}
// Reset context on command for the next test.
// These commands are globals so we have to clean up to the best of our ability after each run.
// See https://github.com/spf13/cobra/blob/a6f198b635c4b18fff81930c40d464904e55b161/command.go#L1062-L1066
//nolint:staticcheck // cobra sets the context and doesn't clear it
cli.SetContext(nil)
// Make caller aware of error.
errch <- err
close(errch)
}()
// Ensure command terminates upon test completion (success or failure).
t.Cleanup(func() {
// Signal termination of command.
cancel()
// Wait for goroutine to finish.
<-errch
})
t.errch = errch
}
func (t *cobraTestRunner) Run() (bytes.Buffer, bytes.Buffer, error) {
t.RunBackground()
err := <-t.errch
return t.stdout, t.stderr, err
}
// Like [require.Eventually] but errors if the underlying command has failed.
func (c *cobraTestRunner) Eventually(condition func() bool, waitFor, tick time.Duration, msgAndArgs ...any) {
ch := make(chan bool, 1)
timer := time.NewTimer(waitFor)
defer timer.Stop()
ticker := time.NewTicker(tick)
defer ticker.Stop()
// Kick off condition check immediately.
go func() { ch <- condition() }()
for tick := ticker.C; ; {
select {
case err := <-c.errch:
require.Fail(c, "Command failed", err)
return
case <-timer.C:
require.Fail(c, "Condition never satisfied", msgAndArgs...)
return
case <-tick:
tick = nil
go func() { ch <- condition() }()
case v := <-ch:
if v {
return
}
tick = ticker.C
}
}
}
func (t *cobraTestRunner) RunAndExpectOutput(heredoc string) {
stdout, _, err := t.Run()
require.NoError(t, err)
require.Equal(t, cmdio.Heredoc(heredoc), strings.TrimSpace(stdout.String()))
}
func (t *cobraTestRunner) RunAndParseJSON(v any) {
stdout, _, err := t.Run()
require.NoError(t, err)
err = json.Unmarshal(stdout.Bytes(), &v)
require.NoError(t, err)
}
func NewCobraTestRunner(t *testing.T, args ...string) *cobraTestRunner {
return &cobraTestRunner{
T: t,
ctx: context.Background(),
args: args,
}
}
func NewCobraTestRunnerWithContext(t *testing.T, ctx context.Context, args ...string) *cobraTestRunner {
return &cobraTestRunner{
T: t,
ctx: ctx,
args: args,
}
}
func RequireSuccessfulRun(t *testing.T, args ...string) (bytes.Buffer, bytes.Buffer) {
t.Logf("run args: [%s]", strings.Join(args, ", "))
c := NewCobraTestRunner(t, args...)
stdout, stderr, err := c.Run()
require.NoError(t, err)
return stdout, stderr
}
func RequireErrorRun(t *testing.T, args ...string) (bytes.Buffer, bytes.Buffer, error) {
c := NewCobraTestRunner(t, args...)
stdout, stderr, err := c.Run()
require.Error(t, err)
return stdout, stderr, err
}
func GenerateNotebookTasks(notebookPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("notebook_%s", strings.ReplaceAll(versions[i], ".", "_")),
NotebookTask: &jobs.NotebookTask{
NotebookPath: notebookPath,
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
}
tasks = append(tasks, task)
}
return tasks
}
func GenerateSparkPythonTasks(notebookPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("spark_%s", strings.ReplaceAll(versions[i], ".", "_")),
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: notebookPath,
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
}
tasks = append(tasks, task)
}
return tasks
}
func GenerateWheelTasks(wheelPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("whl_%s", strings.ReplaceAll(versions[i], ".", "_")),
PythonWheelTask: &jobs.PythonWheelTask{
PackageName: "my_test_code",
EntryPoint: "run",
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
Libraries: []compute.Library{
{Whl: wheelPath},
},
}
tasks = append(tasks, task)
}
return tasks
}
func TemporaryWorkspaceDir(t *testing.T, w *databricks.WorkspaceClient) string {
ctx := context.Background()
me, err := w.CurrentUser.Me(ctx)
require.NoError(t, err)
basePath := fmt.Sprintf("/Users/%s/%s", me.UserName, testutil.RandomName("integration-test-wsfs-"))
t.Logf("Creating %s", basePath)
err = w.Workspace.MkdirsByPath(ctx, basePath)
require.NoError(t, err)
// Remove test directory on test completion.
t.Cleanup(func() {
t.Logf("Removing %s", basePath)
err := w.Workspace.Delete(ctx, workspace.Delete{
Path: basePath,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("Unable to remove temporary workspace directory %s: %#v", basePath, err)
})
return basePath
}
func TemporaryDbfsDir(t *testing.T, w *databricks.WorkspaceClient) string {
ctx := context.Background()
path := fmt.Sprintf("/tmp/%s", testutil.RandomName("integration-test-dbfs-"))
t.Logf("Creating DBFS folder:%s", path)
err := w.Dbfs.MkdirsByPath(ctx, path)
require.NoError(t, err)
t.Cleanup(func() {
t.Logf("Removing DBFS folder:%s", path)
err := w.Dbfs.Delete(ctx, files.Delete{
Path: path,
Recursive: true,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("unable to remove temporary dbfs directory %s: %#v", path, err)
})
return path
}
// Create a new UC volume in a catalog called "main" in the workspace.
func TemporaryUcVolume(t *testing.T, w *databricks.WorkspaceClient) string {
ctx := context.Background()
// Create a schema
schema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
CatalogName: "main",
Name: testutil.RandomName("test-schema-"),
})
require.NoError(t, err)
t.Cleanup(func() {
err := w.Schemas.Delete(ctx, catalog.DeleteSchemaRequest{
FullName: schema.FullName,
})
require.NoError(t, err)
})
// Create a volume
volume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
CatalogName: "main",
SchemaName: schema.Name,
Name: "my-volume",
VolumeType: catalog.VolumeTypeManaged,
})
require.NoError(t, err)
t.Cleanup(func() {
err := w.Volumes.Delete(ctx, catalog.DeleteVolumeRequest{
Name: volume.FullName,
})
require.NoError(t, err)
})
return path.Join("/Volumes", "main", schema.Name, volume.Name)
}
func TemporaryRepo(t *testing.T, w *databricks.WorkspaceClient) string {
ctx := context.Background()
me, err := w.CurrentUser.Me(ctx)
require.NoError(t, err)
repoPath := fmt.Sprintf("/Repos/%s/%s", me.UserName, testutil.RandomName("integration-test-repo-"))
t.Logf("Creating repo:%s", repoPath)
repoInfo, err := w.Repos.Create(ctx, workspace.CreateRepoRequest{
Url: "https://github.com/databricks/cli",
Provider: "github",
Path: repoPath,
})
require.NoError(t, err)
t.Cleanup(func() {
t.Logf("Removing repo: %s", repoPath)
err := w.Repos.Delete(ctx, workspace.DeleteRepoRequest{
RepoId: repoInfo.Id,
})
if err == nil || apierr.IsMissing(err) {
return
}
t.Logf("unable to remove repo %s: %#v", repoPath, err)
})
return repoPath
}
func GetNodeTypeId(env string) string {
if env == "gcp" {
return "n1-standard-4"
} else if env == "aws" || env == "ucws" {
// aws-prod-ucws has CLOUD_ENV set to "ucws"
return "i3.xlarge"
}
return "Standard_DS4_v2"
}
func setupLocalFiler(t *testing.T) (filer.Filer, string) {
func setupLocalFiler(t testutil.TestingT) (filer.Filer, string) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
tmp := t.TempDir()
@ -532,10 +25,10 @@ func setupLocalFiler(t *testing.T) (filer.Filer, string) {
return f, path.Join(filepath.ToSlash(tmp))
}
func setupWsfsFiler(t *testing.T) (filer.Filer, string) {
func setupWsfsFiler(t testutil.TestingT) (filer.Filer, string) {
ctx, wt := acc.WorkspaceTest(t)
tmpdir := TemporaryWorkspaceDir(t, wt.W)
tmpdir := acc.TemporaryWorkspaceDir(wt)
f, err := filer.NewWorkspaceFilesClient(wt.W, tmpdir)
require.NoError(t, err)
@ -549,39 +42,34 @@ func setupWsfsFiler(t *testing.T) (filer.Filer, string) {
return f, tmpdir
}
func setupWsfsExtensionsFiler(t *testing.T) (filer.Filer, string) {
func setupWsfsExtensionsFiler(t testutil.TestingT) (filer.Filer, string) {
_, wt := acc.WorkspaceTest(t)
tmpdir := TemporaryWorkspaceDir(t, wt.W)
tmpdir := acc.TemporaryWorkspaceDir(wt)
f, err := filer.NewWorkspaceFilesExtensionsClient(wt.W, tmpdir)
require.NoError(t, err)
return f, tmpdir
}
func setupDbfsFiler(t *testing.T) (filer.Filer, string) {
func setupDbfsFiler(t testutil.TestingT) (filer.Filer, string) {
_, wt := acc.WorkspaceTest(t)
tmpDir := TemporaryDbfsDir(t, wt.W)
f, err := filer.NewDbfsClient(wt.W, tmpDir)
tmpdir := acc.TemporaryDbfsDir(wt)
f, err := filer.NewDbfsClient(wt.W, tmpdir)
require.NoError(t, err)
return f, path.Join("dbfs:/", tmpDir)
return f, path.Join("dbfs:/", tmpdir)
}
func setupUcVolumesFiler(t *testing.T) (filer.Filer, string) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
func setupUcVolumesFiler(t testutil.TestingT) (filer.Filer, string) {
_, wt := acc.WorkspaceTest(t)
if os.Getenv("TEST_METASTORE_ID") == "" {
t.Skip("Skipping tests that require a UC Volume when metastore id is not set.")
}
w, err := databricks.NewWorkspaceClient()
tmpdir := acc.TemporaryVolume(wt)
f, err := filer.NewFilesClient(wt.W, tmpdir)
require.NoError(t, err)
tmpDir := TemporaryUcVolume(t, w)
f, err := filer.NewFilesClient(w, tmpDir)
require.NoError(t, err)
return f, path.Join("dbfs:/", tmpDir)
return f, path.Join("dbfs:/", tmpdir)
}

View File

@ -11,6 +11,7 @@ import (
"testing"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/iamutil"
"github.com/databricks/databricks-sdk-go"
@ -22,7 +23,7 @@ func TestAccBundleInitErrorOnUnknownFields(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
tmpDir := t.TempDir()
_, _, err := RequireErrorRun(t, "bundle", "init", "./testdata/init/field-does-not-exist", "--output-dir", tmpDir)
_, _, err := testcli.RequireErrorRun(t, "bundle", "init", "./testdata/init/field-does-not-exist", "--output-dir", tmpDir)
assert.EqualError(t, err, "failed to compute file content for bar.tmpl. variable \"does_not_exist\" not defined")
}
@ -63,7 +64,7 @@ func TestAccBundleInitOnMlopsStacks(t *testing.T) {
// Run bundle init
assert.NoFileExists(t, filepath.Join(tmpDir2, "repo_name", projectName, "README.md"))
RequireSuccessfulRun(t, "bundle", "init", "mlops-stacks", "--output-dir", tmpDir2, "--config-file", filepath.Join(tmpDir1, "config.json"))
testcli.RequireSuccessfulRun(t, "bundle", "init", "mlops-stacks", "--output-dir", tmpDir2, "--config-file", filepath.Join(tmpDir1, "config.json"))
// Assert that the README.md file was created
assert.FileExists(t, filepath.Join(tmpDir2, "repo_name", projectName, "README.md"))
@ -71,17 +72,17 @@ func TestAccBundleInitOnMlopsStacks(t *testing.T) {
// Validate the stack
testutil.Chdir(t, filepath.Join(tmpDir2, "repo_name", projectName))
RequireSuccessfulRun(t, "bundle", "validate")
testcli.RequireSuccessfulRun(t, "bundle", "validate")
// Deploy the stack
RequireSuccessfulRun(t, "bundle", "deploy")
testcli.RequireSuccessfulRun(t, "bundle", "deploy")
t.Cleanup(func() {
// Delete the stack
RequireSuccessfulRun(t, "bundle", "destroy", "--auto-approve")
testcli.RequireSuccessfulRun(t, "bundle", "destroy", "--auto-approve")
})
// Get summary of the bundle deployment
stdout, _ := RequireSuccessfulRun(t, "bundle", "summary", "--output", "json")
stdout, _ := testcli.RequireSuccessfulRun(t, "bundle", "summary", "--output", "json")
summary := &config.Root{}
err = json.Unmarshal(stdout.Bytes(), summary)
require.NoError(t, err)
@ -159,7 +160,7 @@ func TestAccBundleInitHelpers(t *testing.T) {
require.NoError(t, err)
// Run bundle init.
RequireSuccessfulRun(t, "bundle", "init", tmpDir, "--output-dir", tmpDir2)
testcli.RequireSuccessfulRun(t, "bundle", "init", tmpDir, "--output-dir", tmpDir2)
// Assert that the helper function was correctly computed.
assertLocalFileContents(t, filepath.Join(tmpDir2, "foo.txt"), test.expected)

View File

@ -6,6 +6,7 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@ -17,10 +18,10 @@ func TestAccCreateJob(t *testing.T) {
if env != "azure" {
t.Skipf("Not running test on cloud %s", env)
}
stdout, stderr := RequireSuccessfulRun(t, "jobs", "create", "--json", "@testjsons/create_job_without_workers.json", "--log-level=debug")
stdout, stderr := testcli.RequireSuccessfulRun(t, "jobs", "create", "--json", "@testjsons/create_job_without_workers.json", "--log-level=debug")
assert.Empty(t, stderr.String())
var output map[string]int
err := json.Unmarshal(stdout.Bytes(), &output)
require.NoError(t, err)
RequireSuccessfulRun(t, "jobs", "delete", fmt.Sprint(output["job_id"]), "--log-level=debug")
testcli.RequireSuccessfulRun(t, "jobs", "delete", fmt.Sprint(output["job_id"]), "--log-level=debug")
}

View File

@ -11,6 +11,7 @@ import (
"testing"
"time"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
lockpkg "github.com/databricks/cli/libs/locker"
@ -164,14 +165,12 @@ func TestAccLock(t *testing.T) {
assert.True(t, lockers[indexOfAnInactiveLocker].Active)
}
func setupLockerTest(ctx context.Context, t *testing.T) (*lockpkg.Locker, filer.Filer) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
func setupLockerTest(t *testing.T) (context.Context, *lockpkg.Locker, filer.Filer) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
// create temp wsfs dir
tmpDir := TemporaryWorkspaceDir(t, w)
tmpDir := acc.TemporaryWorkspaceDir(wt, "locker-")
f, err := filer.NewWorkspaceFilesClient(w, tmpDir)
require.NoError(t, err)
@ -179,12 +178,11 @@ func setupLockerTest(ctx context.Context, t *testing.T) (*lockpkg.Locker, filer.
locker, err := lockpkg.CreateLocker("redfoo@databricks.com", tmpDir, w)
require.NoError(t, err)
return locker, f
return ctx, locker, f
}
func TestAccLockUnlockWithoutAllowsLockFileNotExist(t *testing.T) {
ctx := context.Background()
locker, f := setupLockerTest(ctx, t)
ctx, locker, f := setupLockerTest(t)
var err error
// Acquire lock on tmp directory
@ -205,8 +203,7 @@ func TestAccLockUnlockWithoutAllowsLockFileNotExist(t *testing.T) {
}
func TestAccLockUnlockWithAllowsLockFileNotExist(t *testing.T) {
ctx := context.Background()
locker, f := setupLockerTest(ctx, t)
ctx, locker, f := setupLockerTest(t)
var err error
// Acquire lock on tmp directory

View File

@ -14,10 +14,11 @@ import (
"time"
"github.com/databricks/cli/bundle/run/output"
"github.com/databricks/cli/internal"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/require"
@ -127,14 +128,14 @@ func runPythonTasks(t *testing.T, tw *testFiles, opts testOpts) {
w := tw.w
nodeTypeId := internal.GetNodeTypeId(env)
nodeTypeId := testutil.GetCloud(t).NodeTypeID()
tasks := make([]jobs.SubmitTask, 0)
if opts.includeNotebookTasks {
tasks = append(tasks, internal.GenerateNotebookTasks(tw.pyNotebookPath, sparkVersions, nodeTypeId)...)
tasks = append(tasks, GenerateNotebookTasks(tw.pyNotebookPath, sparkVersions, nodeTypeId)...)
}
if opts.includeSparkPythonTasks {
tasks = append(tasks, internal.GenerateSparkPythonTasks(tw.sparkPythonPath, sparkVersions, nodeTypeId)...)
tasks = append(tasks, GenerateSparkPythonTasks(tw.sparkPythonPath, sparkVersions, nodeTypeId)...)
}
if opts.includeWheelTasks {
@ -142,7 +143,7 @@ func runPythonTasks(t *testing.T, tw *testFiles, opts testOpts) {
if len(opts.wheelSparkVersions) > 0 {
versions = opts.wheelSparkVersions
}
tasks = append(tasks, internal.GenerateWheelTasks(tw.wheelPath, versions, nodeTypeId)...)
tasks = append(tasks, GenerateWheelTasks(tw.wheelPath, versions, nodeTypeId)...)
}
ctx := context.Background()
@ -179,13 +180,13 @@ func runPythonTasks(t *testing.T, tw *testFiles, opts testOpts) {
}
func prepareWorkspaceFiles(t *testing.T) *testFiles {
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
var err error
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
baseDir := acc.TemporaryWorkspaceDir(wt, "python-tasks-")
baseDir := internal.TemporaryWorkspaceDir(t, w)
pyNotebookPath := path.Join(baseDir, "test.py")
err = w.Workspace.Import(ctx, workspace.Import{
Path: pyNotebookPath,
Overwrite: true,
@ -225,11 +226,12 @@ func prepareWorkspaceFiles(t *testing.T) *testFiles {
}
func prepareDBFSFiles(t *testing.T) *testFiles {
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
var err error
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
baseDir := acc.TemporaryDbfsDir(wt, "python-tasks-")
baseDir := internal.TemporaryDbfsDir(t, w)
f, err := filer.NewDbfsClient(w, baseDir)
require.NoError(t, err)
@ -254,15 +256,83 @@ func prepareDBFSFiles(t *testing.T) *testFiles {
}
func prepareRepoFiles(t *testing.T) *testFiles {
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
_, wt := acc.WorkspaceTest(t)
w := wt.W
baseDir := acc.TemporaryRepo(wt, "https://github.com/databricks/cli")
repo := internal.TemporaryRepo(t, w)
packagePath := "internal/python/testdata"
return &testFiles{
w: w,
pyNotebookPath: path.Join(repo, packagePath, "test"),
sparkPythonPath: path.Join(repo, packagePath, "spark.py"),
wheelPath: path.Join(repo, packagePath, "my_test_code-0.0.1-py3-none-any.whl"),
pyNotebookPath: path.Join(baseDir, packagePath, "test"),
sparkPythonPath: path.Join(baseDir, packagePath, "spark.py"),
wheelPath: path.Join(baseDir, packagePath, "my_test_code-0.0.1-py3-none-any.whl"),
}
}
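// GenerateNotebookTasks returns one notebook task per Spark version, each on a fresh single-worker cluster with the given node type.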
func GenerateNotebookTasks(notebookPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("notebook_%s", strings.ReplaceAll(versions[i], ".", "_")),
NotebookTask: &jobs.NotebookTask{
NotebookPath: notebookPath,
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
}
tasks = append(tasks, task)
}
return tasks
}
func GenerateSparkPythonTasks(notebookPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("spark_%s", strings.ReplaceAll(versions[i], ".", "_")),
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: notebookPath,
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
}
tasks = append(tasks, task)
}
return tasks
}
func GenerateWheelTasks(wheelPath string, versions []string, nodeTypeId string) []jobs.SubmitTask {
tasks := make([]jobs.SubmitTask, 0)
for i := 0; i < len(versions); i++ {
task := jobs.SubmitTask{
TaskKey: fmt.Sprintf("whl_%s", strings.ReplaceAll(versions[i], ".", "_")),
PythonWheelTask: &jobs.PythonWheelTask{
PackageName: "my_test_code",
EntryPoint: "run",
},
NewCluster: &compute.ClusterSpec{
SparkVersion: versions[i],
NumWorkers: 1,
NodeTypeId: nodeTypeId,
DataSecurityMode: compute.DataSecurityModeUserIsolation,
},
Libraries: []compute.Library{
{Whl: wheelPath},
},
}
tasks = append(tasks, task)
}
return tasks
}

View File

@ -6,6 +6,7 @@ import (
"strconv"
"testing"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/apierr"
@ -52,7 +53,7 @@ func TestAccReposCreateWithProvider(t *testing.T) {
require.NoError(t, err)
repoPath := synthesizeTemporaryRepoPath(t, w, ctx)
_, stderr := RequireSuccessfulRun(t, "repos", "create", repoUrl, "gitHub", "--path", repoPath)
_, stderr := testcli.RequireSuccessfulRun(t, "repos", "create", repoUrl, "gitHub", "--path", repoPath)
assert.Equal(t, "", stderr.String())
// Confirm the repo was created.
@ -69,7 +70,7 @@ func TestAccReposCreateWithoutProvider(t *testing.T) {
require.NoError(t, err)
repoPath := synthesizeTemporaryRepoPath(t, w, ctx)
_, stderr := RequireSuccessfulRun(t, "repos", "create", repoUrl, "--path", repoPath)
_, stderr := testcli.RequireSuccessfulRun(t, "repos", "create", repoUrl, "--path", repoPath)
assert.Equal(t, "", stderr.String())
// Confirm the repo was created.
@ -88,22 +89,22 @@ func TestAccReposGet(t *testing.T) {
repoId, repoPath := createTemporaryRepo(t, w, ctx)
// Get by ID
byIdOutput, stderr := RequireSuccessfulRun(t, "repos", "get", strconv.FormatInt(repoId, 10), "--output=json")
byIdOutput, stderr := testcli.RequireSuccessfulRun(t, "repos", "get", strconv.FormatInt(repoId, 10), "--output=json")
assert.Equal(t, "", stderr.String())
// Get by path
byPathOutput, stderr := RequireSuccessfulRun(t, "repos", "get", repoPath, "--output=json")
byPathOutput, stderr := testcli.RequireSuccessfulRun(t, "repos", "get", repoPath, "--output=json")
assert.Equal(t, "", stderr.String())
// Output should be the same
assert.Equal(t, byIdOutput.String(), byPathOutput.String())
// Get by path fails
_, stderr, err = RequireErrorRun(t, "repos", "get", repoPath+"-doesntexist", "--output=json")
_, stderr, err = testcli.RequireErrorRun(t, "repos", "get", repoPath+"-doesntexist", "--output=json")
assert.ErrorContains(t, err, "failed to look up repo")
// Get by path resolves to something other than a repo
_, stderr, err = RequireErrorRun(t, "repos", "get", "/Repos", "--output=json")
_, stderr, err = testcli.RequireErrorRun(t, "repos", "get", "/Repos", "--output=json")
assert.ErrorContains(t, err, "is not a repo")
}
@ -117,11 +118,11 @@ func TestAccReposUpdate(t *testing.T) {
repoId, repoPath := createTemporaryRepo(t, w, ctx)
// Update by ID
byIdOutput, stderr := RequireSuccessfulRun(t, "repos", "update", strconv.FormatInt(repoId, 10), "--branch", "ide")
byIdOutput, stderr := testcli.RequireSuccessfulRun(t, "repos", "update", strconv.FormatInt(repoId, 10), "--branch", "ide")
assert.Equal(t, "", stderr.String())
// Update by path
byPathOutput, stderr := RequireSuccessfulRun(t, "repos", "update", repoPath, "--branch", "ide")
byPathOutput, stderr := testcli.RequireSuccessfulRun(t, "repos", "update", repoPath, "--branch", "ide")
assert.Equal(t, "", stderr.String())
// Output should be the same
@ -138,7 +139,7 @@ func TestAccReposDeleteByID(t *testing.T) {
repoId, _ := createTemporaryRepo(t, w, ctx)
// Delete by ID
stdout, stderr := RequireSuccessfulRun(t, "repos", "delete", strconv.FormatInt(repoId, 10))
stdout, stderr := testcli.RequireSuccessfulRun(t, "repos", "delete", strconv.FormatInt(repoId, 10))
assert.Equal(t, "", stdout.String())
assert.Equal(t, "", stderr.String())
@ -157,7 +158,7 @@ func TestAccReposDeleteByPath(t *testing.T) {
repoId, repoPath := createTemporaryRepo(t, w, ctx)
// Delete by path
stdout, stderr := RequireSuccessfulRun(t, "repos", "delete", repoPath)
stdout, stderr := testcli.RequireSuccessfulRun(t, "repos", "delete", repoPath)
assert.Equal(t, "", stdout.String())
assert.Equal(t, "", stderr.String())

View File

@ -7,6 +7,7 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
@ -14,7 +15,7 @@ import (
)
func TestSecretsCreateScopeErrWhenNoArguments(t *testing.T) {
_, _, err := RequireErrorRun(t, "secrets", "create-scope")
_, _, err := testcli.RequireErrorRun(t, "secrets", "create-scope")
assert.Contains(t, err.Error(), "accepts 1 arg(s), received 0")
}
@ -68,7 +69,7 @@ func TestAccSecretsPutSecretStringValue(tt *testing.T) {
key := "test-key"
value := "test-value\nwith-newlines\n"
stdout, stderr := RequireSuccessfulRun(t.T, "secrets", "put-secret", scope, key, "--string-value", value)
stdout, stderr := testcli.RequireSuccessfulRun(t, "secrets", "put-secret", scope, key, "--string-value", value)
assert.Empty(t, stdout)
assert.Empty(t, stderr)
@ -82,7 +83,7 @@ func TestAccSecretsPutSecretBytesValue(tt *testing.T) {
key := "test-key"
value := []byte{0x00, 0x01, 0x02, 0x03}
stdout, stderr := RequireSuccessfulRun(t.T, "secrets", "put-secret", scope, key, "--bytes-value", string(value))
stdout, stderr := testcli.RequireSuccessfulRun(t, "secrets", "put-secret", scope, key, "--bytes-value", string(value))
assert.Empty(t, stdout)
assert.Empty(t, stderr)

View File

@ -4,6 +4,7 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/stretchr/testify/assert"
)
@ -14,7 +15,7 @@ func TestAccStorageCredentialsListRendersResponse(t *testing.T) {
// Check if metastore is assigned for the workspace, otherwise test will fail
t.Log(testutil.GetEnvOrSkipTest(t, "TEST_METASTORE_ID"))
stdout, stderr := RequireSuccessfulRun(t, "storage-credentials", "list")
stdout, stderr := testcli.RequireSuccessfulRun(t, "storage-credentials", "list")
assert.NotEmpty(t, stdout)
assert.Empty(t, stderr)
}

View File

@ -15,7 +15,8 @@ import (
"testing"
"time"
_ "github.com/databricks/cli/cmd/sync"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/cli/libs/sync"
@ -64,7 +65,7 @@ func setupRepo(t *testing.T, wsc *databricks.WorkspaceClient, ctx context.Contex
type syncTest struct {
t *testing.T
c *cobraTestRunner
c *testcli.Runner
w *databricks.WorkspaceClient
f filer.Filer
localRoot string
@ -72,11 +73,11 @@ type syncTest struct {
}
func setupSyncTest(t *testing.T, args ...string) *syncTest {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
_, wt := acc.WorkspaceTest(t)
w := wt.W
w := databricks.Must(databricks.NewWorkspaceClient())
localRoot := t.TempDir()
remoteRoot := TemporaryWorkspaceDir(t, w)
remoteRoot := acc.TemporaryWorkspaceDir(wt, "sync-")
f, err := filer.NewWorkspaceFilesClient(w, remoteRoot)
require.NoError(t, err)
@ -89,7 +90,7 @@ func setupSyncTest(t *testing.T, args ...string) *syncTest {
"json",
}, args...)
c := NewCobraTestRunner(t, args...)
c := testcli.NewRunner(t, args...)
c.RunBackground()
return &syncTest{
@ -110,7 +111,7 @@ func (s *syncTest) waitForCompletionMarker() {
select {
case <-ctx.Done():
s.t.Fatal("timed out waiting for sync to complete")
case line := <-s.c.stdoutLines:
case line := <-s.c.StdoutLines:
var event sync.EventBase
err := json.Unmarshal([]byte(line), &event)
require.NoError(s.t, err)

View File

@ -1,33 +1,19 @@
package internal
import (
"context"
"strings"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/require"
)
func testTags(t *testing.T, tags map[string]string) error {
var nodeTypeId string
switch testutil.GetCloud(t) {
case testutil.AWS:
nodeTypeId = "i3.xlarge"
case testutil.Azure:
nodeTypeId = "Standard_DS4_v2"
case testutil.GCP:
nodeTypeId = "n1-standard-4"
}
w, err := databricks.NewWorkspaceClient()
require.NoError(t, err)
ctx := context.Background()
resp, err := w.Jobs.Create(ctx, jobs.CreateJob{
ctx, wt := acc.WorkspaceTest(t)
resp, err := wt.W.Jobs.Create(ctx, jobs.CreateJob{
Name: testutil.RandomName("test-tags-"),
Tasks: []jobs.Task{
{
@ -35,7 +21,7 @@ func testTags(t *testing.T, tags map[string]string) error {
NewCluster: &compute.ClusterSpec{
SparkVersion: "13.3.x-scala2.12",
NumWorkers: 1,
NodeTypeId: nodeTypeId,
NodeTypeId: testutil.GetCloud(t).NodeTypeID(),
},
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: "/doesnt_exist.py",
@ -47,7 +33,7 @@ func testTags(t *testing.T, tags map[string]string) error {
if resp != nil {
t.Cleanup(func() {
_ = w.Jobs.DeleteByJobId(ctx, resp.JobId)
_ = wt.W.Jobs.DeleteByJobId(ctx, resp.JobId)
// Cannot enable errchecking there, tests fail with:
// Error: Received unexpected error:
// Job 0 does not exist.

View File

@ -0,0 +1,7 @@
# testcli
This package provides a way to run the CLI from tests as if it were a separate process.
By running the CLI inline we can still set breakpoints and step through execution.
It transitively imports pretty much this entire repository, which is why we
intentionally keep this package _separate_ from `testutil`.
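A minimal usage sketch (hypothetical test; the `version` command is only an example):

```go
import (
	"testing"

	"github.com/databricks/cli/internal/testcli"
	"github.com/stretchr/testify/assert"
)

func TestVersion(t *testing.T) {
	// Runs the CLI in-process; *testing.T satisfies testutil.TestingT.
	stdout, stderr := testcli.RequireSuccessfulRun(t, "version")
	assert.Empty(t, stderr.String())
	assert.NotEmpty(t, stdout.String())
}
```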

internal/testcli/runner.go Normal file
View File

@ -0,0 +1,315 @@
package testcli
import (
"bufio"
"bytes"
"context"
"encoding/json"
"io"
"reflect"
"strings"
"sync"
"time"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/stretchr/testify/require"
"github.com/databricks/cli/cmd"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
)
// Helper for running the root command in the background.
// It ensures that the background goroutine terminates upon
// test completion through cancelling the command context.
type Runner struct {
testutil.TestingT
args []string
stdout bytes.Buffer
stderr bytes.Buffer
stdinR *io.PipeReader
stdinW *io.PipeWriter
ctx context.Context
// Line-by-line output.
// Background goroutines populate these channels by reading from stdout/stderr pipes.
StdoutLines <-chan string
StderrLines <-chan string
errch <-chan error
}
func consumeLines(ctx context.Context, wg *sync.WaitGroup, r io.Reader) <-chan string {
ch := make(chan string, 30000)
wg.Add(1)
go func() {
defer close(ch)
defer wg.Done()
scanner := bufio.NewScanner(r)
for scanner.Scan() {
// We expect to always be able to send these lines into the channel.
// If we can't, it means the channel is full and likely there is a problem
// in either the test or the code under test.
select {
case <-ctx.Done():
return
case ch <- scanner.Text():
continue
default:
panic("line buffer is full")
}
}
}()
return ch
}
func (r *Runner) registerFlagCleanup(c *cobra.Command) {
// Find the target command that will be run. Example: if the command run is `databricks fs cp`,
// the target command is `cp`.
targetCmd, _, err := c.Find(r.args)
if err != nil && strings.HasPrefix(err.Error(), "unknown command") {
// even if the command is unknown, we can proceed
require.NotNil(r, targetCmd)
} else {
require.NoError(r, err)
}
// Force initialization of default flags.
// These are initialized by cobra at execution time and would otherwise
// not be cleaned up by the cleanup function below.
targetCmd.InitDefaultHelpFlag()
targetCmd.InitDefaultVersionFlag()
// Restore flag values to their original value on test completion.
targetCmd.Flags().VisitAll(func(f *pflag.Flag) {
v := reflect.ValueOf(f.Value)
if v.Kind() == reflect.Ptr {
v = v.Elem()
}
// Store copy of the current flag value.
reset := reflect.New(v.Type()).Elem()
reset.Set(v)
r.Cleanup(func() {
v.Set(reset)
})
})
}
// Like [Runner.Eventually], but waits for the given text to appear on stdout.
func (r *Runner) WaitForTextPrinted(text string, timeout time.Duration) {
r.Eventually(func() bool {
currentStdout := r.stdout.String()
return strings.Contains(currentStdout, text)
}, timeout, 50*time.Millisecond)
}
func (r *Runner) WaitForOutput(text string, timeout time.Duration) {
require.Eventually(r, func() bool {
currentStdout := r.stdout.String()
currentErrout := r.stderr.String()
return strings.Contains(currentStdout, text) || strings.Contains(currentErrout, text)
}, timeout, 50*time.Millisecond)
}
func (r *Runner) WithStdin() {
reader, writer := io.Pipe()
r.stdinR = reader
r.stdinW = writer
}
func (r *Runner) CloseStdin() {
if r.stdinW == nil {
panic("no standard input configured")
}
r.stdinW.Close()
}
func (r *Runner) SendText(text string) {
if r.stdinW == nil {
panic("no standard input configured")
}
_, err := r.stdinW.Write([]byte(text + "\n"))
if err != nil {
panic("Failed to to write to t.stdinW")
}
}
func (r *Runner) RunBackground() {
var stdoutR, stderrR io.Reader
var stdoutW, stderrW io.WriteCloser
stdoutR, stdoutW = io.Pipe()
stderrR, stderrW = io.Pipe()
ctx := cmdio.NewContext(r.ctx, &cmdio.Logger{
Mode: flags.ModeAppend,
Reader: bufio.Reader{},
Writer: stderrW,
})
cli := cmd.New(ctx)
cli.SetOut(stdoutW)
cli.SetErr(stderrW)
cli.SetArgs(r.args)
if r.stdinW != nil {
cli.SetIn(r.stdinR)
}
// Register cleanup function to restore flags to their original values
// once the test has been executed. This is needed because flag values reside
// in a global singleton data structure, and thus subsequent tests might
// otherwise interfere with each other.
r.registerFlagCleanup(cli)
errch := make(chan error)
ctx, cancel := context.WithCancel(ctx)
// Tee stdout/stderr to buffers.
stdoutR = io.TeeReader(stdoutR, &r.stdout)
stderrR = io.TeeReader(stderrR, &r.stderr)
// Consume stdout/stderr line-by-line.
var wg sync.WaitGroup
r.StdoutLines = consumeLines(ctx, &wg, stdoutR)
r.StderrLines = consumeLines(ctx, &wg, stderrR)
// Run command in background.
go func() {
err := root.Execute(ctx, cli)
if err != nil {
r.Logf("Error running command: %s", err)
}
// Close pipes to signal EOF.
stdoutW.Close()
stderrW.Close()
// Wait for the [consumeLines] routines to finish now that
// the pipes they're reading from have closed.
wg.Wait()
if r.stdout.Len() > 0 {
// Make a copy of the buffer such that it remains "unread".
scanner := bufio.NewScanner(bytes.NewBuffer(r.stdout.Bytes()))
for scanner.Scan() {
r.Logf("[databricks stdout]: %s", scanner.Text())
}
}
if r.stderr.Len() > 0 {
// Make a copy of the buffer such that it remains "unread".
scanner := bufio.NewScanner(bytes.NewBuffer(r.stderr.Bytes()))
for scanner.Scan() {
r.Logf("[databricks stderr]: %s", scanner.Text())
}
}
// Reset context on command for the next test.
// These commands are globals so we have to clean up to the best of our ability after each run.
// See https://github.com/spf13/cobra/blob/a6f198b635c4b18fff81930c40d464904e55b161/command.go#L1062-L1066
//nolint:staticcheck // cobra sets the context and doesn't clear it
cli.SetContext(nil)
// Make caller aware of error.
errch <- err
close(errch)
}()
// Ensure command terminates upon test completion (success or failure).
r.Cleanup(func() {
// Signal termination of command.
cancel()
// Wait for goroutine to finish.
<-errch
})
r.errch = errch
}
func (r *Runner) Run() (bytes.Buffer, bytes.Buffer, error) {
r.RunBackground()
err := <-r.errch
return r.stdout, r.stderr, err
}
// Like [require.Eventually] but errors if the underlying command has failed.
func (r *Runner) Eventually(condition func() bool, waitFor, tick time.Duration, msgAndArgs ...any) {
ch := make(chan bool, 1)
timer := time.NewTimer(waitFor)
defer timer.Stop()
ticker := time.NewTicker(tick)
defer ticker.Stop()
// Kick off condition check immediately.
go func() { ch <- condition() }()
for tick := ticker.C; ; {
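// While a condition check is in flight, tick is nil so the ticker case is
// disabled; it is re-armed once the check's result arrives on ch.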
select {
case err := <-r.errch:
require.Fail(r, "Command failed", err)
return
case <-timer.C:
require.Fail(r, "Condition never satisfied", msgAndArgs...)
return
case <-tick:
tick = nil
go func() { ch <- condition() }()
case v := <-ch:
if v {
return
}
tick = ticker.C
}
}
}
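// Hypothetical usage sketch: poll for a side effect of a command started
// with RunBackground, failing fast if the command itself exits with an
// error. The command name and file path are assumptions for illustration.
//
//	r := NewRunner(t, "some-long-running-command")
//	r.RunBackground()
//	r.Eventually(func() bool {
//		_, err := os.Stat("out.txt")
//		return err == nil
//	}, 30*time.Second, 100*time.Millisecond, "expected out.txt to appear")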
func (r *Runner) RunAndExpectOutput(heredoc string) {
stdout, _, err := r.Run()
require.NoError(r, err)
require.Equal(r, cmdio.Heredoc(heredoc), strings.TrimSpace(stdout.String()))
}
func (r *Runner) RunAndParseJSON(v any) {
stdout, _, err := r.Run()
require.NoError(r, err)
err = json.Unmarshal(stdout.Bytes(), &v)
require.NoError(r, err)
}
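// Hypothetical usage sketch: run the CLI with JSON output and decode it
// into a generic map (the "version" command is used as an example).
//
//	var info map[string]any
//	NewRunner(t, "version", "--output", "json").RunAndParseJSON(&info)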
func NewRunner(t testutil.TestingT, args ...string) *Runner {
return &Runner{
TestingT: t,
ctx: context.Background(),
args: args,
}
}
func NewRunnerWithContext(t testutil.TestingT, ctx context.Context, args ...string) *Runner {
return &Runner{
TestingT: t,
ctx: ctx,
args: args,
}
}
func RequireSuccessfulRun(t testutil.TestingT, args ...string) (bytes.Buffer, bytes.Buffer) {
t.Logf("run args: [%s]", strings.Join(args, ", "))
r := NewRunner(t, args...)
stdout, stderr, err := r.Run()
require.NoError(t, err)
return stdout, stderr
}
func RequireErrorRun(t testutil.TestingT, args ...string) (bytes.Buffer, bytes.Buffer, error) {
r := NewRunner(t, args...)
stdout, stderr, err := r.Run()
require.Error(t, err)
return stdout, stderr, err
}

View File

@@ -1,9 +1,5 @@
package testutil
import (
"testing"
)
type Cloud int
const (
@@ -13,7 +9,7 @@ const (
)
// Implement [Requirement].
func (c Cloud) Verify(t *testing.T) {
func (c Cloud) Verify(t TestingT) {
if c != GetCloud(t) {
t.Skipf("Skipping %s-specific test", c)
}
@@ -32,7 +28,20 @@ func (c Cloud) String() string {
}
}
func GetCloud(t *testing.T) Cloud {
func (c Cloud) NodeTypeID() string {
switch c {
case AWS:
return "i3.xlarge"
case Azure:
return "Standard_DS4_v2"
case GCP:
return "n1-standard-4"
default:
return "unknown"
}
}
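// Hypothetical call site: derive a cloud-appropriate cluster node type
// for the environment the test runs against.
//
//	nodeTypeID := GetCloud(t).NodeTypeID()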
func GetCloud(t TestingT) Cloud {
env := GetEnvOrSkipTest(t, "CLOUD_ENV")
switch env {
case "aws":
@@ -50,6 +59,6 @@ func GetCloud(t *testing.T) Cloud {
return -1
}
func IsAWSCloud(t *testing.T) bool {
func IsAWSCloud(t TestingT) bool {
return GetCloud(t) == AWS
}

View File

@@ -5,14 +5,13 @@ import (
"io/fs"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
// CopyDirectory copies the contents of a directory to another directory.
// The destination directory is created if it does not exist.
func CopyDirectory(t *testing.T, src, dst string) {
func CopyDirectory(t TestingT, src, dst string) {
err := filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err

View File

@@ -5,7 +5,6 @@ import (
"path/filepath"
"runtime"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
@@ -13,7 +12,7 @@ import (
// CleanupEnvironment sets up a pristine environment containing only $PATH and $HOME.
// The original environment is restored upon test completion.
// Note: use of this function is incompatible with parallel execution.
func CleanupEnvironment(t *testing.T) {
func CleanupEnvironment(t TestingT) {
// Restore environment when test finishes.
environ := os.Environ()
t.Cleanup(func() {
@@ -41,7 +40,7 @@ func CleanupEnvironment(t *testing.T) {
// Changes into the specified directory for the duration of the test.
// Returns the current working directory.
func Chdir(t *testing.T, dir string) string {
func Chdir(t TestingT, dir string) string {
// Prevent parallel execution when changing the working directory.
// t.Setenv automatically fails if t.Parallel is set.
t.Setenv("DO_NOT_RUN_IN_PARALLEL", "true")

View File

@@ -3,12 +3,11 @@ package testutil
import (
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
func TouchNotebook(t *testing.T, elems ...string) string {
func TouchNotebook(t TestingT, elems ...string) string {
path := filepath.Join(elems...)
err := os.MkdirAll(filepath.Dir(path), 0o755)
require.NoError(t, err)
@@ -18,7 +17,7 @@ func TouchNotebook(t *testing.T, elems ...string) string {
return path
}
func Touch(t *testing.T, elems ...string) string {
func Touch(t TestingT, elems ...string) string {
path := filepath.Join(elems...)
err := os.MkdirAll(filepath.Dir(path), 0o755)
require.NoError(t, err)
@@ -32,7 +31,7 @@ func Touch(t *testing.T, elems ...string) string {
}
// WriteFile writes content to a file.
func WriteFile(t *testing.T, path, content string) {
func WriteFile(t TestingT, path, content string) {
err := os.MkdirAll(filepath.Dir(path), 0o755)
require.NoError(t, err)
@@ -47,7 +46,7 @@ func WriteFile(t *testing.T, path, content string) {
}
// ReadFile reads a file and returns its content as a string.
func ReadFile(t require.TestingT, path string) string {
func ReadFile(t TestingT, path string) string {
b, err := os.ReadFile(path)
require.NoError(t, err)

View File

@@ -5,11 +5,10 @@ import (
"math/rand"
"os"
"strings"
"testing"
)
// GetEnvOrSkipTest proceeds with the test only if the given environment variable is set; otherwise it skips the test.
func GetEnvOrSkipTest(t *testing.T, name string) string {
func GetEnvOrSkipTest(t TestingT, name string) string {
value := os.Getenv(name)
if value == "" {
t.Skipf("Environment variable %s is missing", name)

View File

@@ -0,0 +1,27 @@
package testutil
// TestingT is an interface wrapper around *testing.T that provides the methods
// that are used by the test package to convey information about test failures.
//
// We use an interface so we can wrap *testing.T and provide additional functionality.
type TestingT interface {
Log(args ...any)
Logf(format string, args ...any)
Error(args ...any)
Errorf(format string, args ...any)
Fatal(args ...any)
Fatalf(format string, args ...any)
Skip(args ...any)
Skipf(format string, args ...any)
FailNow()
Cleanup(func())
Setenv(key, value string)
TempDir() string
}
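// A minimal sketch (assumption, not part of this file) of a wrapper that
// satisfies TestingT: embedding *testing.T inherits every method listed
// above, so only extra helpers or state need to be added.
//
//	type wrapperT struct {
//		*testing.T
//		// extra fields, e.g. a workspace client
//	}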

View File

@@ -5,12 +5,11 @@ import (
"context"
"os/exec"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func RequireJDK(t *testing.T, ctx context.Context, version string) {
func RequireJDK(t TestingT, ctx context.Context, version string) {
var stderr bytes.Buffer
cmd := exec.Command("javac", "-version")

View File

@@ -1,18 +1,14 @@
package testutil
import (
"testing"
)
// Requirement is the interface for test requirements.
type Requirement interface {
Verify(t *testing.T)
Verify(t TestingT)
}
// Require should be called at the beginning of a test to ensure that all
// requirements are met before running the test.
// If any requirement is not met, the test will be skipped.
func Require(t *testing.T, requirements ...Requirement) {
func Require(t TestingT, requirements ...Requirement) {
for _, r := range requirements {
r.Verify(t)
}

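// Hypothetical usage sketch: combine Require with the Cloud requirement
// defined earlier so a test is skipped unless it runs against AWS.
//
//	func TestSomethingAWSOnly(t *testing.T) {
//		testutil.Require(t, testutil.AWS)
//		// ...
//	}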
View File

@@ -0,0 +1,36 @@
package testutil_test
import (
"go/parser"
"go/token"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestNoTestingImport checks that no file in the package imports the testing package.
// All exported functions must use the TestingT interface instead of *testing.T.
func TestNoTestingImport(t *testing.T) {
// Parse the package
fset := token.NewFileSet()
pkgs, err := parser.ParseDir(fset, ".", nil, parser.AllErrors)
require.NoError(t, err)
// Iterate through the files in the package
for _, pkg := range pkgs {
for _, file := range pkg.Files {
// Skip test files
if strings.HasSuffix(fset.Position(file.Pos()).Filename, "_test.go") {
continue
}
// Check the imports of each file
for _, imp := range file.Imports {
if imp.Path.Value == `"testing"` {
assert.Fail(t, "File imports the testing package", "File %s imports the testing package", fset.Position(file.Pos()).Filename)
}
}
}
}
}

View File

@@ -3,11 +3,12 @@ package internal
import (
"testing"
"github.com/databricks/cli/internal/testcli"
assert "github.com/databricks/cli/libs/dyn/dynassert"
)
func TestUnknownCommand(t *testing.T) {
stdout, stderr, err := RequireErrorRun(t, "unknown-command")
stdout, stderr, err := testcli.RequireErrorRun(t, "unknown-command")
assert.Error(t, err, "unknown command", `unknown command "unknown-command" for "databricks"`)
assert.Equal(t, "", stdout.String())

View File

@@ -6,31 +6,32 @@ import (
"testing"
"github.com/databricks/cli/internal/build"
"github.com/databricks/cli/internal/testcli"
"github.com/stretchr/testify/assert"
)
var expectedVersion = fmt.Sprintf("Databricks CLI v%s\n", build.GetInfo().Version)
func TestVersionFlagShort(t *testing.T) {
stdout, stderr := RequireSuccessfulRun(t, "-v")
stdout, stderr := testcli.RequireSuccessfulRun(t, "-v")
assert.Equal(t, expectedVersion, stdout.String())
assert.Equal(t, "", stderr.String())
}
func TestVersionFlagLong(t *testing.T) {
stdout, stderr := RequireSuccessfulRun(t, "--version")
stdout, stderr := testcli.RequireSuccessfulRun(t, "--version")
assert.Equal(t, expectedVersion, stdout.String())
assert.Equal(t, "", stderr.String())
}
func TestVersionCommand(t *testing.T) {
stdout, stderr := RequireSuccessfulRun(t, "version")
stdout, stderr := testcli.RequireSuccessfulRun(t, "version")
assert.Equal(t, expectedVersion, stdout.String())
assert.Equal(t, "", stderr.String())
}
func TestVersionCommandWithJSONOutput(t *testing.T) {
stdout, stderr := RequireSuccessfulRun(t, "version", "--output", "json")
stdout, stderr := testcli.RequireSuccessfulRun(t, "version", "--output", "json")
assert.NotEmpty(t, stdout.String())
assert.Equal(t, "", stderr.String())

View File

@@ -12,9 +12,9 @@ import (
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/internal/testcli"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -23,7 +23,7 @@ import (
func TestAccWorkspaceList(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
stdout, stderr := RequireSuccessfulRun(t, "workspace", "list", "/")
stdout, stderr := testcli.RequireSuccessfulRun(t, "workspace", "list", "/")
outStr := stdout.String()
assert.Contains(t, outStr, "ID")
assert.Contains(t, outStr, "Type")
@@ -33,21 +33,20 @@ func TestAccWorkspaceList(t *testing.T) {
}
func TestWorkspaceListErrorWhenNoArguments(t *testing.T) {
_, _, err := RequireErrorRun(t, "workspace", "list")
_, _, err := testcli.RequireErrorRun(t, "workspace", "list")
assert.Contains(t, err.Error(), "accepts 1 arg(s), received 0")
}
func TestWorkspaceGetStatusErrorWhenNoArguments(t *testing.T) {
_, _, err := RequireErrorRun(t, "workspace", "get-status")
_, _, err := testcli.RequireErrorRun(t, "workspace", "get-status")
assert.Contains(t, err.Error(), "accepts 1 arg(s), received 0")
}
func TestAccWorkspaceExportPrintsContents(t *testing.T) {
t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
ctx := context.Background()
w := databricks.Must(databricks.NewWorkspaceClient())
tmpdir := TemporaryWorkspaceDir(t, w)
tmpdir := acc.TemporaryWorkspaceDir(wt, "workspace-export-")
f, err := filer.NewWorkspaceFilesClient(w, tmpdir)
require.NoError(t, err)
@@ -57,16 +56,17 @@ func TestAccWorkspaceExportPrintsContents(t *testing.T) {
require.NoError(t, err)
// Run export
stdout, stderr := RequireSuccessfulRun(t, "workspace", "export", path.Join(tmpdir, "file-a"))
stdout, stderr := testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(tmpdir, "file-a"))
assert.Equal(t, contents, stdout.String())
assert.Equal(t, "", stderr.String())
}
func setupWorkspaceImportExportTest(t *testing.T) (context.Context, filer.Filer, string) {
ctx, wt := acc.WorkspaceTest(t)
w := wt.W
tmpdir := TemporaryWorkspaceDir(t, wt.W)
f, err := filer.NewWorkspaceFilesClient(wt.W, tmpdir)
tmpdir := acc.TemporaryWorkspaceDir(wt, "workspace-import-")
f, err := filer.NewWorkspaceFilesClient(w, tmpdir)
require.NoError(t, err)
return ctx, f, tmpdir
@@ -125,7 +125,7 @@ func TestAccExportDir(t *testing.T) {
}, "\n")
// Run Export
stdout, stderr := RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir)
stdout, stderr := testcli.RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir)
assert.Equal(t, expectedLogs, stdout.String())
assert.Equal(t, "", stderr.String())
@@ -153,7 +153,7 @@ func TestAccExportDirDoesNotOverwrite(t *testing.T) {
require.NoError(t, err)
// Run Export
RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir)
testcli.RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir)
// Assert file is not overwritten
assertLocalFileContents(t, filepath.Join(targetDir, "file-a"), "local content")
@@ -174,7 +174,7 @@ func TestAccExportDirWithOverwriteFlag(t *testing.T) {
require.NoError(t, err)
// Run Export
RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir, "--overwrite")
testcli.RequireSuccessfulRun(t, "workspace", "export-dir", sourceDir, targetDir, "--overwrite")
// Assert file has been overwritten
assertLocalFileContents(t, filepath.Join(targetDir, "file-a"), "content from workspace")
@@ -182,7 +182,7 @@
func TestAccImportDir(t *testing.T) {
ctx, workspaceFiler, targetDir := setupWorkspaceImportExportTest(t)
stdout, stderr := RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir, "--log-level=debug")
stdout, stderr := testcli.RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir, "--log-level=debug")
expectedLogs := strings.Join([]string{
fmt.Sprintf("Importing files from %s", "./testdata/import_dir"),
@@ -223,7 +223,7 @@ func TestAccImportDirDoesNotOverwrite(t *testing.T) {
assertFilerFileContents(t, ctx, workspaceFiler, "file-a", "old file")
assertFilerFileContents(t, ctx, workspaceFiler, "pyNotebook", "# Databricks notebook source\nprint(\"old notebook\")")
RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir)
testcli.RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir)
// Assert files are imported
assertFilerFileContents(t, ctx, workspaceFiler, "a/b/c/file-b", "file-in-dir")
@@ -251,7 +251,7 @@ func TestAccImportDirWithOverwriteFlag(t *testing.T) {
assertFilerFileContents(t, ctx, workspaceFiler, "file-a", "old file")
assertFilerFileContents(t, ctx, workspaceFiler, "pyNotebook", "# Databricks notebook source\nprint(\"old notebook\")")
RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir, "--overwrite")
testcli.RequireSuccessfulRun(t, "workspace", "import-dir", "./testdata/import_dir", targetDir, "--overwrite")
// Assert files are imported
assertFilerFileContents(t, ctx, workspaceFiler, "a/b/c/file-b", "file-in-dir")
@@ -273,7 +273,7 @@ func TestAccExport(t *testing.T) {
// Export vanilla file
err = f.Write(ctx, "file-a", strings.NewReader("abc"))
require.NoError(t, err)
stdout, _ := RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "file-a"))
stdout, _ := testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "file-a"))
b, err := io.ReadAll(&stdout)
require.NoError(t, err)
assert.Equal(t, "abc", string(b))
@@ -281,13 +281,13 @@
// Export python notebook
err = f.Write(ctx, "pyNotebook.py", strings.NewReader("# Databricks notebook source"))
require.NoError(t, err)
stdout, _ = RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"))
stdout, _ = testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"))
b, err = io.ReadAll(&stdout)
require.NoError(t, err)
assert.Equal(t, "# Databricks notebook source\n", string(b))
// Export python notebook as jupyter
stdout, _ = RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--format", "JUPYTER")
stdout, _ = testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--format", "JUPYTER")
b, err = io.ReadAll(&stdout)
require.NoError(t, err)
assert.Contains(t, string(b), `"cells":`, "jupyter notebooks contain the cells field")
@@ -303,7 +303,7 @@ func TestAccExportWithFileFlag(t *testing.T) {
// Export vanilla file
err = f.Write(ctx, "file-a", strings.NewReader("abc"))
require.NoError(t, err)
stdout, _ := RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "file-a"), "--file", filepath.Join(localTmpDir, "file.txt"))
stdout, _ := testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "file-a"), "--file", filepath.Join(localTmpDir, "file.txt"))
b, err := io.ReadAll(&stdout)
require.NoError(t, err)
// Expect nothing to be printed to stdout
@@ -313,14 +313,14 @@
// Export python notebook
err = f.Write(ctx, "pyNotebook.py", strings.NewReader("# Databricks notebook source"))
require.NoError(t, err)
stdout, _ = RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--file", filepath.Join(localTmpDir, "pyNb.py"))
stdout, _ = testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--file", filepath.Join(localTmpDir, "pyNb.py"))
b, err = io.ReadAll(&stdout)
require.NoError(t, err)
assert.Equal(t, "", string(b))
assertLocalFileContents(t, filepath.Join(localTmpDir, "pyNb.py"), "# Databricks notebook source\n")
// Export python notebook as jupyter
stdout, _ = RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--format", "JUPYTER", "--file", filepath.Join(localTmpDir, "jupyterNb.ipynb"))
stdout, _ = testcli.RequireSuccessfulRun(t, "workspace", "export", path.Join(sourceDir, "pyNotebook"), "--format", "JUPYTER", "--file", filepath.Join(localTmpDir, "jupyterNb.ipynb"))
b, err = io.ReadAll(&stdout)
require.NoError(t, err)
assert.Equal(t, "", string(b))
@@ -332,13 +332,13 @@ func TestAccImportFileUsingContentFormatSource(t *testing.T) {
ctx, workspaceFiler, targetDir := setupWorkspaceImportExportTest(t)
// Content = `print(1)`. Uploaded as a notebook by default
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyScript"),
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyScript"),
"--content", base64.StdEncoding.EncodeToString([]byte("print(1)")), "--language=PYTHON")
assertFilerFileContents(t, ctx, workspaceFiler, "pyScript", "print(1)")
assertWorkspaceFileType(t, ctx, workspaceFiler, "pyScript", workspace.ObjectTypeNotebook)
// Import with content = `# Databricks notebook source\nprint(1)`. Uploaded as a notebook with the content just being print(1)
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyNb"),
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyNb"),
"--content", base64.StdEncoding.EncodeToString([]byte("`# Databricks notebook source\nprint(1)")),
"--language=PYTHON")
assertFilerFileContents(t, ctx, workspaceFiler, "pyNb", "print(1)")
@@ -349,19 +349,19 @@ func TestAccImportFileUsingContentFormatAuto(t *testing.T) {
ctx, workspaceFiler, targetDir := setupWorkspaceImportExportTest(t)
// Content = `# Databricks notebook source\nprint(1)`. Upload as file if path has no extension.
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-file"),
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-file"),
"--content", base64.StdEncoding.EncodeToString([]byte("`# Databricks notebook source\nprint(1)")), "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "py-nb-as-file", "# Databricks notebook source\nprint(1)")
assertWorkspaceFileType(t, ctx, workspaceFiler, "py-nb-as-file", workspace.ObjectTypeFile)
// Content = `# Databricks notebook source\nprint(1)`. Upload as notebook if path has py extension
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-notebook.py"),
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-notebook.py"),
"--content", base64.StdEncoding.EncodeToString([]byte("`# Databricks notebook source\nprint(1)")), "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "py-nb-as-notebook", "# Databricks notebook source\nprint(1)")
assertWorkspaceFileType(t, ctx, workspaceFiler, "py-nb-as-notebook", workspace.ObjectTypeNotebook)
// Content = `print(1)`. Upload as file if content is not notebook (even if path has .py extension)
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "not-a-notebook.py"), "--content",
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "not-a-notebook.py"), "--content",
base64.StdEncoding.EncodeToString([]byte("print(1)")), "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "not-a-notebook.py", "print(1)")
assertWorkspaceFileType(t, ctx, workspaceFiler, "not-a-notebook.py", workspace.ObjectTypeFile)
@@ -369,15 +369,15 @@ func TestAccImportFileUsingContentFormatAuto(t *testing.T) {
func TestAccImportFileFormatSource(t *testing.T) {
ctx, workspaceFiler, targetDir := setupWorkspaceImportExportTest(t)
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyNotebook"), "--file", "./testdata/import_dir/pyNotebook.py", "--language=PYTHON")
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "pyNotebook"), "--file", "./testdata/import_dir/pyNotebook.py", "--language=PYTHON")
assertFilerFileContents(t, ctx, workspaceFiler, "pyNotebook", "# Databricks notebook source\nprint(\"python\")")
assertWorkspaceFileType(t, ctx, workspaceFiler, "pyNotebook", workspace.ObjectTypeNotebook)
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "scalaNotebook"), "--file", "./testdata/import_dir/scalaNotebook.scala", "--language=SCALA")
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "scalaNotebook"), "--file", "./testdata/import_dir/scalaNotebook.scala", "--language=SCALA")
assertFilerFileContents(t, ctx, workspaceFiler, "scalaNotebook", "// Databricks notebook source\nprintln(\"scala\")")
assertWorkspaceFileType(t, ctx, workspaceFiler, "scalaNotebook", workspace.ObjectTypeNotebook)
_, _, err := RequireErrorRun(t, "workspace", "import", path.Join(targetDir, "scalaNotebook"), "--file", "./testdata/import_dir/scalaNotebook.scala")
_, _, err := testcli.RequireErrorRun(t, "workspace", "import", path.Join(targetDir, "scalaNotebook"), "--file", "./testdata/import_dir/scalaNotebook.scala")
assert.ErrorContains(t, err, "The zip file may not be valid or may be an unsupported version. Hint: Objects imported using format=SOURCE are expected to be zip encoded databricks source notebook(s) by default. Please specify a language using the --language flag if you are trying to import a single uncompressed notebook")
}
@@ -385,18 +385,18 @@ func TestAccImportFileFormatAuto(t *testing.T) {
ctx, workspaceFiler, targetDir := setupWorkspaceImportExportTest(t)
// Upload as file if path has no extension
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-file"), "--file", "./testdata/import_dir/pyNotebook.py", "--format=AUTO")
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-file"), "--file", "./testdata/import_dir/pyNotebook.py", "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "py-nb-as-file", "# Databricks notebook source")
assertFilerFileContents(t, ctx, workspaceFiler, "py-nb-as-file", "print(\"python\")")
assertWorkspaceFileType(t, ctx, workspaceFiler, "py-nb-as-file", workspace.ObjectTypeFile)
// Upload as notebook if path has extension
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-notebook.py"), "--file", "./testdata/import_dir/pyNotebook.py", "--format=AUTO")
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "py-nb-as-notebook.py"), "--file", "./testdata/import_dir/pyNotebook.py", "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "py-nb-as-notebook", "# Databricks notebook source\nprint(\"python\")")
assertWorkspaceFileType(t, ctx, workspaceFiler, "py-nb-as-notebook", workspace.ObjectTypeNotebook)
// Upload as file if content is not notebook (even if path has .py extension)
RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "not-a-notebook.py"), "--file", "./testdata/import_dir/file-a", "--format=AUTO")
testcli.RequireSuccessfulRun(t, "workspace", "import", path.Join(targetDir, "not-a-notebook.py"), "--file", "./testdata/import_dir/file-a", "--format=AUTO")
assertFilerFileContents(t, ctx, workspaceFiler, "not-a-notebook.py", "hello, world")
assertWorkspaceFileType(t, ctx, workspaceFiler, "not-a-notebook.py", workspace.ObjectTypeFile)
}