Compare commits

...

13 Commits

Author SHA1 Message Date
Shreyas Goenka f98369d9e9
- 2025-02-11 17:58:15 +01:00
Shreyas Goenka 4cdcbd6b12
Merge remote-tracking branch 'origin' into async-logger-clean 2025-02-11 17:00:43 +01:00
shreyas-goenka 24ac8d8d59
Add acceptance tests for auth resolution (#2285)
## Changes

This PR adds acceptance tests for native Databricks auth methods: basic,
oauth, and pat.

In the future we could compare this with auth credentials used by
downstream tools like TF or the telemetry process to ensure consistent
auth credentials are picked up and used.

Note: 
We do not add acceptance tests for other auth methods like Azure because
they communicate with external endpoints. To test them locally, we would
need to set up a reverse proxy server, which is out of scope for this
change.

## Tests
N/A
2025-02-11 15:50:03 +00:00
Denis Bilenko 5d392acbef
acc: Allow mixing custom stubs with default server impl (#2334)
## Changes
- Previously, defining a [[Server]] block disabled the default server
implementation. With this change, a [[Server]] block takes precedence
over the default server, but the default server remains available.
- Switched the mux implementation to
[gorilla/mux](https://github.com/gorilla/mux) -- unlike the built-in
mux, it does not panic if you set two handlers on the same path
(instead, the earliest one wins). It also has no dependencies.
- Moved acceptance/selftest into acceptance/selftest/basic and added
acceptance/selftest/server, which demos the server override.
- Rewrote server setup to ensure that env vars and replacements are set
up correctly. Previously, replacements for DATABRICKS_HOST referred to
the default server, not to the custom server.
- Avoided calling CurrentUser.Me() in the local case. This allows
overriding /api/2.0/preview/scim/v2/Me, which we use in some tests (e.g.
bundle/templates-machinery/helpers-error). Previously the test only
passed because CurrentUser.Me() was hitting the default server, which
was incorrect but happened to make it pass.
- The default server is now available via the DATABRICKS_DEFAULT_HOST
env var.
- Rewrote the "not found" handler in the local test to handle errors
better (do not raise HTTP 500 when the header has already been written).

## Tests
New acceptance test selftest/server specifically tests that both custom
and default handlers are available in a single test.
2025-02-11 15:03:41 +00:00
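The precedence rule above can be sketched with a hypothetical test configuration (an illustrative sketch modeled on the selftest/server example, not a file from this change): the custom [[Server]] stub answers only for its pattern, and every other endpoint falls through to the default handlers, which are added last.

```toml
LocalOnly = true
RecordRequests = true

# Hypothetical stub: takes precedence for this pattern only; all other
# endpoints still hit the default server handlers.
[[Server]]
Pattern = "GET /custom/endpoint"
Response.Body = "response"
Response.StatusCode = 201
```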
Denis Bilenko 272ce61302
acc: Fix singleTest option to support forward slashes (#2336)
The filtering of tests needs to see forward slashes; otherwise it is
OS-dependent.

I've also switched to filepath.ToSlash, but it should be a no-op.
2025-02-11 15:26:46 +01:00
Denis Bilenko 878fa80322
acc: Fix RecordRequests to support requests without body (#2333)
## Changes
Do not paste the request body into the output if it is not valid JSON.

## Tests
While working on #2334 I found that if I tried to record a test that
calls /api/2.0/preview/scim/v2/Me, which has no request body, it crashed.
2025-02-11 10:50:52 +00:00
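The guard described here can be sketched as follows (an assumed shape for illustration, not the actual recorder code): treat the body as recordable only when it passes encoding/json's validity check, so an empty or non-JSON body is skipped instead of crashing the recording.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rawBodyField returns the request body for recording only when it is
// valid JSON; empty or non-JSON bodies report ok=false and are skipped.
func rawBodyField(body []byte) (raw string, ok bool) {
	if !json.Valid(body) {
		return "", false
	}
	return string(body), true
}

func main() {
	for _, b := range [][]byte{[]byte(`{"scope": "all-apis"}`), nil} {
		if s, ok := rawBodyField(b); ok {
			fmt.Println("record raw_body:", s)
		} else {
			fmt.Println("skip raw_body (empty or not JSON)")
		}
	}
}
```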
Denis Bilenko 8d849fe868
acc: Disable custom server on CLOUD_ENV (#2332)
We're not using the local server when CLOUD_ENV is set, so there is no
need to set up a custom one.
2025-02-11 10:37:48 +00:00
Denis Bilenko f2096eddcc
acc: Do not show all replacements on every failure (#2331)
## Changes
- Only print replacements if the VERBOSE_TEST flag is set.
- This is set on CI but not when you run "go test" or "make test".

Note: an env var is used so that it can be set in the Makefile.

## Tests
Manually.
2025-02-11 09:38:53 +00:00
dependabot[bot] e81ec4ee23
Bump golang.org/x/mod from 0.22.0 to 0.23.0 (#2324)
Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.22.0 to
0.23.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="52289f1fa7"><code>52289f1</code></a>
modfile: fix trailing empty lines in require blocks</li>
<li>See full diff in <a
href="https://github.com/golang/mod/compare/v0.22.0...v0.23.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/mod&package-manager=go_modules&previous-version=0.22.0&new-version=0.23.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 17:13:49 +01:00
dependabot[bot] 6f3dbaec4c
Bump golang.org/x/text from 0.21.0 to 0.22.0 (#2323)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.21.0 to
0.22.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3b64043c9e"><code>3b64043</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="1e59086680"><code>1e59086</code></a>
message/pipeline: add two Unalias calls</li>
<li>See full diff in <a
href="https://github.com/golang/text/compare/v0.21.0...v0.22.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/text&package-manager=go_modules&previous-version=0.21.0&new-version=0.22.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 16:13:38 +01:00
dependabot[bot] f6c50a6318
Bump golang.org/x/term from 0.28.0 to 0.29.0 (#2325)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.28.0 to
0.29.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="743b2709ab"><code>743b270</code></a>
go.mod: update golang.org/x dependencies</li>
<li>See full diff in <a
href="https://github.com/golang/term/compare/v0.28.0...v0.29.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/term&package-manager=go_modules&previous-version=0.28.0&new-version=0.29.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 15:49:52 +01:00
Andrew Nester f7a45d0c7e
Upgrade to TF provider 1.65.1 (#2328)
## Changes
Upgrade to TF provider 1.65.1

Notable changes:
- It is now possible to use the `run_as` field in `pipelines` definitions
- Added support for `performance_target` for `jobs`
2025-02-10 14:06:02 +00:00
dependabot[bot] 4bc231ad4f
Bump golang.org/x/oauth2 from 0.25.0 to 0.26.0 (#2322)
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from
0.25.0 to 0.26.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b9c813be7d"><code>b9c813b</code></a>
google: add warning about externally-provided credentials</li>
<li>See full diff in <a
href="https://github.com/golang/oauth2/compare/v0.25.0...v0.26.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/oauth2&package-manager=go_modules&previous-version=0.25.0&new-version=0.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-10 14:58:18 +01:00
44 changed files with 521 additions and 249 deletions

View File

@ -28,7 +28,7 @@ test:
 cover:
 	rm -fr ./acceptance/build/cover/
-	CLI_GOCOVERDIR=build/cover ${GOTESTSUM_CMD} -- -coverprofile=coverage.txt ${PACKAGES}
+	VERBOSE_TEST=1 CLI_GOCOVERDIR=build/cover ${GOTESTSUM_CMD} -- -coverprofile=coverage.txt ${PACKAGES}
 	rm -fr ./acceptance/build/cover-merged/
 	mkdir -p acceptance/build/cover-merged/
 	go tool covdata merge -i $$(printf '%s,' acceptance/build/cover/* | sed 's/,$$//') -o acceptance/build/cover-merged/
@ -61,6 +61,6 @@ integration: vendor
 	$(INTEGRATION)

 integration-short: vendor
-	$(INTEGRATION) -short
+	VERBOSE_TEST=1 $(INTEGRATION) -short

 .PHONY: lint tidy lintcheck fmt test cover showcover build snapshot vendor schema integration integration-short acc-cover acc-showcover docs

NOTICE
View File

@ -114,3 +114,7 @@ dario.cat/mergo
 Copyright (c) 2013 Dario Castañé. All rights reserved.
 Copyright (c) 2012 The Go Authors. All rights reserved.
 https://github.com/darccio/mergo/blob/master/LICENSE
+
+https://github.com/gorilla/mux
+Copyright (c) 2023 The Gorilla Authors. All rights reserved.
+https://github.com/gorilla/mux/blob/main/LICENSE

View File

@ -11,6 +11,7 @@ import (
 	"os"
 	"os/exec"
 	"path/filepath"
+	"regexp"
 	"runtime"
 	"slices"
 	"sort"
@ -26,12 +27,14 @@ import (
 	"github.com/databricks/cli/libs/testdiff"
 	"github.com/databricks/cli/libs/testserver"
 	"github.com/databricks/databricks-sdk-go"
+	"github.com/databricks/databricks-sdk-go/service/iam"
 	"github.com/stretchr/testify/require"
 )

 var (
 	KeepTmp bool
 	NoRepl  bool
+	VerboseTest bool = os.Getenv("VERBOSE_TEST") != ""
 )

 // In order to debug CLI running under acceptance test, set this to full subtest name, e.g. "bundle/variables/empty"
@ -71,7 +74,8 @@ func TestInprocessMode(t *testing.T) {
 	if InprocessMode {
 		t.Skip("Already tested by TestAccept")
 	}
-	require.Equal(t, 1, testAccept(t, true, "selftest"))
+	require.Equal(t, 1, testAccept(t, true, "selftest/basic"))
+	require.Equal(t, 1, testAccept(t, true, "selftest/server"))
 }

 func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
@ -117,14 +121,12 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
 	uvCache := getUVDefaultCacheDir(t)
 	t.Setenv("UV_CACHE_DIR", uvCache)

-	ctx := context.Background()
 	cloudEnv := os.Getenv("CLOUD_ENV")

 	if cloudEnv == "" {
 		defaultServer := testserver.New(t)
 		AddHandlers(defaultServer)
-		// Redirect API access to local server:
-		t.Setenv("DATABRICKS_HOST", defaultServer.URL)
+		t.Setenv("DATABRICKS_DEFAULT_HOST", defaultServer.URL)

 		homeDir := t.TempDir()
 		// Do not read user's ~/.databrickscfg
@ -147,27 +149,12 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
 	// do it last so that full paths match first:
 	repls.SetPath(buildDir, "[BUILD_DIR]")

-	var config databricks.Config
-	if cloudEnv == "" {
-		// use fake token for local tests
-		config = databricks.Config{Token: "dbapi1234"}
-	} else {
-		// non-local tests rely on environment variables
-		config = databricks.Config{}
-	}
-	workspaceClient, err := databricks.NewWorkspaceClient(&config)
-	require.NoError(t, err)
-
-	user, err := workspaceClient.CurrentUser.Me(ctx)
-	require.NoError(t, err)
-	require.NotNil(t, user)
-	testdiff.PrepareReplacementsUser(t, &repls, *user)
-	testdiff.PrepareReplacementsWorkspaceClient(t, &repls, workspaceClient)
-	testdiff.PrepareReplacementsUUID(t, &repls)
 	testdiff.PrepareReplacementsDevVersion(t, &repls)
 	testdiff.PrepareReplacementSdkVersion(t, &repls)
 	testdiff.PrepareReplacementsGoVersion(t, &repls)
+	repls.Repls = append(repls.Repls, testdiff.Replacement{Old: regexp.MustCompile("dbapi[0-9a-f]+"), New: "[DATABRICKS_TOKEN]"})

 	testDirs := getTests(t)
 	require.NotEmpty(t, testDirs)
@ -179,8 +166,7 @@ func testAccept(t *testing.T, InprocessMode bool, singleTest string) int {
 	}

 	for _, dir := range testDirs {
-		testName := strings.ReplaceAll(dir, "\\", "/")
-		t.Run(testName, func(t *testing.T) {
+		t.Run(dir, func(t *testing.T) {
 			if !InprocessMode {
 				t.Parallel()
 			}
@ -202,7 +188,8 @@ func getTests(t *testing.T) []string {
 		name := filepath.Base(path)
 		if name == EntryPointScript {
 			// Presence of 'script' marks a test case in this directory
-			testDirs = append(testDirs, filepath.Dir(path))
+			testName := filepath.ToSlash(filepath.Dir(path))
+			testDirs = append(testDirs, testName)
 		}
 		return nil
 	})
@ -238,7 +225,6 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
 	}
 	repls.SetPathWithParents(tmpDir, "[TMPDIR]")
-	repls.Repls = append(repls.Repls, config.Repls...)

 	scriptContents := readMergedScriptContents(t, dir)
 	testutil.WriteFile(t, filepath.Join(tmpDir, EntryPointScript), scriptContents)
@ -252,38 +238,79 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
 	cmd := exec.Command(args[0], args[1:]...)
 	cmd.Env = os.Environ()

+	var workspaceClient *databricks.WorkspaceClient
+	var user iam.User
+
 	// Start a new server with a custom configuration if the acceptance test
 	// specifies a custom server stubs.
 	var server *testserver.Server

-	// Start a new server for this test if either:
-	// 1. A custom server spec is defined in the test configuration.
-	// 2. The test is configured to record requests and assert on them. We need
-	// a duplicate of the default server to record requests because the default
-	// server otherwise is a shared resource.
-	if len(config.Server) > 0 || config.RecordRequests {
-		server = testserver.New(t)
-		server.RecordRequests = config.RecordRequests
-		server.IncludeRequestHeaders = config.IncludeRequestHeaders
-
-		// If no custom server stubs are defined, add the default handlers.
-		if len(config.Server) == 0 {
-			AddHandlers(server)
-		}
-
-		for _, stub := range config.Server {
-			require.NotEmpty(t, stub.Pattern)
-			server.Handle(stub.Pattern, func(fakeWorkspace *testserver.FakeWorkspace, req *http.Request) (any, int) {
-				statusCode := http.StatusOK
-				if stub.Response.StatusCode != 0 {
-					statusCode = stub.Response.StatusCode
-				}
-				return stub.Response.Body, statusCode
-			})
-		}
-
-		cmd.Env = append(cmd.Env, "DATABRICKS_HOST="+server.URL)
+	if cloudEnv == "" {
+		// Start a new server for this test if either:
+		// 1. A custom server spec is defined in the test configuration.
+		// 2. The test is configured to record requests and assert on them. We need
+		// a duplicate of the default server to record requests because the default
+		// server otherwise is a shared resource.
+		databricksLocalHost := os.Getenv("DATABRICKS_DEFAULT_HOST")
+
+		if len(config.Server) > 0 || config.RecordRequests {
+			server = testserver.New(t)
+			server.RecordRequests = config.RecordRequests
+			server.IncludeRequestHeaders = config.IncludeRequestHeaders
+
+			for _, stub := range config.Server {
+				require.NotEmpty(t, stub.Pattern)
+				items := strings.Split(stub.Pattern, " ")
+				require.Len(t, items, 2)
+				server.Handle(items[0], items[1], func(fakeWorkspace *testserver.FakeWorkspace, req *http.Request) (any, int) {
+					statusCode := http.StatusOK
+					if stub.Response.StatusCode != 0 {
+						statusCode = stub.Response.StatusCode
+					}
+					return stub.Response.Body, statusCode
+				})
+			}

+			// The earliest handlers take precedence, add default handlers last
+			AddHandlers(server)
+			databricksLocalHost = server.URL
+		}
+
+		// Each local test should use a new token that will result into a new fake workspace,
+		// so that test don't interfere with each other.
+		tokenSuffix := strings.ReplaceAll(uuid.NewString(), "-", "")
+		config := databricks.Config{
+			Host:  databricksLocalHost,
+			Token: "dbapi" + tokenSuffix,
+		}
+		workspaceClient, err = databricks.NewWorkspaceClient(&config)
+		require.NoError(t, err)
+
+		cmd.Env = append(cmd.Env, "DATABRICKS_HOST="+config.Host)
+		cmd.Env = append(cmd.Env, "DATABRICKS_TOKEN="+config.Token)
+
+		// For the purposes of replacements, use testUser.
+		// Note, users might have overriden /api/2.0/preview/scim/v2/Me but that should not affect the replacement:
+		user = testUser
+	} else {
+		// Use whatever authentication mechanism is configured by the test runner.
+		workspaceClient, err = databricks.NewWorkspaceClient(&databricks.Config{})
+		require.NoError(t, err)
+		pUser, err := workspaceClient.CurrentUser.Me(context.Background())
+		require.NoError(t, err, "Failed to get current user")
+		user = *pUser
 	}

+	testdiff.PrepareReplacementsUser(t, &repls, user)
+	testdiff.PrepareReplacementsWorkspaceClient(t, &repls, workspaceClient)
+
+	// Must be added PrepareReplacementsUser, otherwise conflicts with [USERNAME]
+	testdiff.PrepareReplacementsUUID(t, &repls)
+
+	// User replacements come last:
+	repls.Repls = append(repls.Repls, config.Repls...)
+
 	if coverDir != "" {
 		// Creating individual coverage directory for each test, because writing to the same one
 		// results in sporadic failures like this one (only if tests are running in parallel):
@ -294,15 +321,6 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
 		cmd.Env = append(cmd.Env, "GOCOVERDIR="+coverDir)
 	}

-	// Each local test should use a new token that will result into a new fake workspace,
-	// so that test don't interfere with each other.
-	if cloudEnv == "" {
-		tokenSuffix := strings.ReplaceAll(uuid.NewString(), "-", "")
-		token := "dbapi" + tokenSuffix
-		cmd.Env = append(cmd.Env, "DATABRICKS_TOKEN="+token)
-		repls.Set(token, "[DATABRICKS_TOKEN]")
-	}
-
 	// Write combined output to a file
 	out, err := os.Create(filepath.Join(tmpDir, "output.txt"))
 	require.NoError(t, err)
@ -319,7 +337,7 @@ func runTest(t *testing.T, dir, coverDir string, repls testdiff.ReplacementsCont
 	for _, req := range server.Requests {
 		reqJson, err := json.MarshalIndent(req, "", " ")
-		require.NoError(t, err)
+		require.NoErrorf(t, err, "Failed to indent: %#v", req)

 		reqJsonWithRepls := repls.Replace(string(reqJson))
 		_, err = f.WriteString(reqJsonWithRepls + "\n")
@ -412,7 +430,7 @@ func doComparison(t *testing.T, repls testdiff.ReplacementsContext, dirRef, dirN
 		testutil.WriteFile(t, pathRef, valueNew)
 	}

-	if !equal && printedRepls != nil && !*printedRepls {
+	if VerboseTest && !equal && printedRepls != nil && !*printedRepls {
 		*printedRepls = true
 		var items []string
 		for _, item := range repls.Repls {
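The replacement added above ("dbapi[0-9a-f]+" -> [DATABRICKS_TOKEN]) can be exercised in isolation. This standalone sketch uses a fixed UUID standing in for uuid.NewString(), so per-test random tokens never leak into golden output:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// tokenRe matches any fake token of the form "dbapi" + hex digits, the
// shape produced by "dbapi" + a dash-stripped UUID.
var tokenRe = regexp.MustCompile("dbapi[0-9a-f]+")

// redactToken rewrites matching tokens to the stable placeholder used in
// expected test output.
func redactToken(s string) string {
	return tokenRe.ReplaceAllString(s, "[DATABRICKS_TOKEN]")
}

func main() {
	// Fixed stand-in for uuid.NewString() in the real harness.
	tokenSuffix := strings.ReplaceAll("c0ffee00-0000-4000-8000-000000000001", "-", "")
	line := "Authorization: Bearer dbapi" + tokenSuffix
	fmt.Println(redactToken(line))
	// → Authorization: Bearer [DATABRICKS_TOKEN]
}
```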

View File

@ -13,13 +13,13 @@
 === Inside the bundle, profile flag not matching bundle host. Badness: should use profile from flag instead and not fail
 >>> errcode [CLI] current-user me -p profile_name
-Error: cannot resolve bundle auth configuration: config host mismatch: profile uses host https://non-existing-subdomain.databricks.com, but CLI configured to use [DATABRICKS_URL]
+Error: cannot resolve bundle auth configuration: config host mismatch: profile uses host https://non-existing-subdomain.databricks.com, but CLI configured to use [DATABRICKS_TARGET]

 Exit code: 1

 === Inside the bundle, target and not matching profile
 >>> errcode [CLI] current-user me -t dev -p profile_name
-Error: cannot resolve bundle auth configuration: config host mismatch: profile uses host https://non-existing-subdomain.databricks.com, but CLI configured to use [DATABRICKS_URL]
+Error: cannot resolve bundle auth configuration: config host mismatch: profile uses host https://non-existing-subdomain.databricks.com, but CLI configured to use [DATABRICKS_TARGET]

 Exit code: 1

View File

@ -5,4 +5,8 @@ Badness = "When -p flag is used inside the bundle folder for any CLI commands, C
 # This is a workaround to replace DATABRICKS_URL with DATABRICKS_HOST
 [[Repls]]
 Old='DATABRICKS_HOST'
-New='DATABRICKS_URL'
+New='DATABRICKS_TARGET'
+
+[[Repls]]
+Old='DATABRICKS_URL'
+New='DATABRICKS_TARGET'

View File

@ -0,0 +1,12 @@
{
"headers": {
"Authorization": [
"Basic [ENCODED_AUTH]"
],
"User-Agent": [
"cli/[DEV_VERSION] databricks-sdk-go/[SDK_VERSION] go/[GO_VERSION] os/[OS] cmd/current-user_me cmd-exec-id/[UUID] auth/basic"
]
},
"method": "GET",
"path": "/api/2.0/preview/scim/v2/Me"
}

View File

@ -0,0 +1,4 @@
{
"id":"[USERID]",
"userName":"[USERNAME]"
}

View File

@ -0,0 +1,8 @@
# Unset the token which is configured by default
# in acceptance tests
export DATABRICKS_TOKEN=""
export DATABRICKS_USERNAME=username
export DATABRICKS_PASSWORD=password
$CLI current-user me

View File

@ -0,0 +1,4 @@
# "username:password" in base64 is dXNlcm5hbWU6cGFzc3dvcmQ=, expect to see this in Authorization header
[[Repls]]
Old = "dXNlcm5hbWU6cGFzc3dvcmQ="
New = "[ENCODED_AUTH]"
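The pinned value here is just HTTP basic auth's base64 encoding of "username:password"; a quick standalone check of that arithmetic:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// basicAuthValue reproduces the value behind the [ENCODED_AUTH]
// replacement: HTTP basic auth sends base64("user:pass") in the
// Authorization header.
func basicAuthValue(user, pass string) string {
	return base64.StdEncoding.EncodeToString([]byte(user + ":" + pass))
}

func main() {
	fmt.Println("Authorization: Basic " + basicAuthValue("username", "password"))
	// → Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
}
```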

View File

@ -0,0 +1,34 @@
{
"headers": {
"User-Agent": [
"cli/[DEV_VERSION] databricks-sdk-go/[SDK_VERSION] go/[GO_VERSION] os/[OS]"
]
},
"method": "GET",
"path": "/oidc/.well-known/oauth-authorization-server"
}
{
"headers": {
"Authorization": [
"Basic [ENCODED_AUTH]"
],
"User-Agent": [
"cli/[DEV_VERSION] databricks-sdk-go/[SDK_VERSION] go/[GO_VERSION] os/[OS]"
]
},
"method": "POST",
"path": "/oidc/v1/token",
"raw_body": "grant_type=client_credentials\u0026scope=all-apis"
}
{
"headers": {
"Authorization": [
"Bearer oauth-token"
],
"User-Agent": [
"cli/[DEV_VERSION] databricks-sdk-go/[SDK_VERSION] go/[GO_VERSION] os/[OS] cmd/current-user_me cmd-exec-id/[UUID] auth/oauth-m2m"
]
},
"method": "GET",
"path": "/api/2.0/preview/scim/v2/Me"
}

View File

@ -0,0 +1,4 @@
{
"id":"[USERID]",
"userName":"[USERNAME]"
}

View File

@ -0,0 +1,8 @@
# Unset the token which is configured by default
# in acceptance tests
export DATABRICKS_TOKEN=""
export DATABRICKS_CLIENT_ID=client_id
export DATABRICKS_CLIENT_SECRET=client_secret
$CLI current-user me

View File

@ -0,0 +1,5 @@
# "client_id:client_secret" in base64 is Y2xpZW50X2lkOmNsaWVudF9zZWNyZXQ=, expect to
# see this in Authorization header
[[Repls]]
Old = "Y2xpZW50X2lkOmNsaWVudF9zZWNyZXQ="
New = "[ENCODED_AUTH]"

View File

@ -0,0 +1,12 @@
{
"headers": {
"Authorization": [
"Bearer dapi1234"
],
"User-Agent": [
"cli/[DEV_VERSION] databricks-sdk-go/[SDK_VERSION] go/[GO_VERSION] os/[OS] cmd/current-user_me cmd-exec-id/[UUID] auth/pat"
]
},
"method": "GET",
"path": "/api/2.0/preview/scim/v2/Me"
}

View File

@ -0,0 +1,4 @@
{
"id":"[USERID]",
"userName":"[USERNAME]"
}

View File

@ -0,0 +1,3 @@
export DATABRICKS_TOKEN=dapi1234
$CLI current-user me

View File

@ -0,0 +1,20 @@
LocalOnly = true
RecordRequests = true
IncludeRequestHeaders = ["Authorization", "User-Agent"]
[[Repls]]
Old = '(linux|darwin|windows)'
New = '[OS]'
[[Repls]]
Old = " upstream/[A-Za-z0-9.-]+"
New = ""
[[Repls]]
Old = " upstream-version/[A-Za-z0-9.-]+"
New = ""
[[Repls]]
Old = " cicd/[A-Za-z0-9.-]+"
New = ""

View File

@ -14,11 +14,7 @@ import (
 func StartCmdServer(t *testing.T) *testserver.Server {
 	server := testserver.New(t)

-	// {$} is a wildcard that only matches the end of the URL. We explicitly use
-	// /{$} to disambiguate it from the generic handler for '/' which is used to
-	// identify unhandled API endpoints in the test server.
-	server.Handle("/{$}", func(w *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/", func(_ *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		q := r.URL.Query()
 		args := strings.Split(q.Get("args"), " ")


@@ -0,0 +1,8 @@
{
"method": "GET",
"path": "/api/2.0/preview/scim/v2/Me"
}
{
"method": "GET",
"path": "/custom/endpoint"
}


@@ -0,0 +1,15 @@
>>> curl -s [DATABRICKS_URL]/api/2.0/preview/scim/v2/Me
{
"id": "[USERID]",
"userName": "[USERNAME]"
}
>>> curl -sD - [DATABRICKS_URL]/custom/endpoint?query=param
HTTP/1.1 201 Created
Content-Type: application/json
Date: (redacted)
Content-Length: (redacted)
custom
---
response


@@ -0,0 +1,2 @@
trace curl -s $DATABRICKS_HOST/api/2.0/preview/scim/v2/Me
trace curl -sD - $DATABRICKS_HOST/custom/endpoint?query=param


@@ -0,0 +1,18 @@
LocalOnly = true
RecordRequests = true
[[Server]]
Pattern = "GET /custom/endpoint"
Response.Body = '''custom
---
response
'''
Response.StatusCode = 201
[[Repls]]
Old = 'Date: .*'
New = 'Date: (redacted)'
[[Repls]]
Old = 'Content-Length: [0-9]*'
New = 'Content-Length: (redacted)'


@@ -8,6 +8,7 @@ import (
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/iam"
+	"github.com/gorilla/mux"
 
 	"github.com/databricks/databricks-sdk-go/service/compute"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
@@ -16,8 +17,13 @@ import (
 	"github.com/databricks/databricks-sdk-go/service/workspace"
 )
 
+var testUser = iam.User{
+	Id:       "1000012345",
+	UserName: "tester@databricks.com",
+}
+
 func AddHandlers(server *testserver.Server) {
-	server.Handle("GET /api/2.0/policies/clusters/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.0/policies/clusters/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		return compute.ListPoliciesResponse{
 			Policies: []compute.Policy{
 				{
@@ -32,7 +38,7 @@ func AddHandlers(server *testserver.Server) {
 		}, http.StatusOK
 	})
 
-	server.Handle("GET /api/2.0/instance-pools/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.0/instance-pools/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		return compute.ListInstancePools{
 			InstancePools: []compute.InstancePoolAndStats{
 				{
@@ -43,7 +49,7 @@ func AddHandlers(server *testserver.Server) {
 		}, http.StatusOK
 	})
 
-	server.Handle("GET /api/2.1/clusters/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.1/clusters/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		return compute.ListClustersResponse{
 			Clusters: []compute.ClusterDetails{
 				{
@@ -58,20 +64,17 @@ func AddHandlers(server *testserver.Server) {
 		}, http.StatusOK
 	})
 
-	server.Handle("GET /api/2.0/preview/scim/v2/Me", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
-		return iam.User{
-			Id:       "1000012345",
-			UserName: "tester@databricks.com",
-		}, http.StatusOK
+	server.Handle("GET", "/api/2.0/preview/scim/v2/Me", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+		return testUser, http.StatusOK
 	})
 
-	server.Handle("GET /api/2.0/workspace/get-status", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.0/workspace/get-status", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		path := r.URL.Query().Get("path")
 		return fakeWorkspace.WorkspaceGetStatus(path)
 	})
 
-	server.Handle("POST /api/2.0/workspace/mkdirs", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("POST", "/api/2.0/workspace/mkdirs", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		request := workspace.Mkdirs{}
 		decoder := json.NewDecoder(r.Body)
@@ -83,13 +86,13 @@ func AddHandlers(server *testserver.Server) {
 		return fakeWorkspace.WorkspaceMkdirs(request)
 	})
 
-	server.Handle("GET /api/2.0/workspace/export", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.0/workspace/export", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		path := r.URL.Query().Get("path")
 		return fakeWorkspace.WorkspaceExport(path)
 	})
 
-	server.Handle("POST /api/2.0/workspace/delete", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("POST", "/api/2.0/workspace/delete", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		path := r.URL.Query().Get("path")
 		recursiveStr := r.URL.Query().Get("recursive")
 		var recursive bool
@@ -103,8 +106,9 @@ func AddHandlers(server *testserver.Server) {
 		return fakeWorkspace.WorkspaceDelete(path, recursive)
 	})
 
-	server.Handle("POST /api/2.0/workspace-files/import-file/{path}", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
-		path := r.PathValue("path")
+	server.Handle("POST", "/api/2.0/workspace-files/import-file/{path:.*}", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+		vars := mux.Vars(r)
+		path := vars["path"]
 
 		body := new(bytes.Buffer)
 		_, err := body.ReadFrom(r.Body)
@@ -115,14 +119,15 @@ func AddHandlers(server *testserver.Server) {
 		return fakeWorkspace.WorkspaceFilesImportFile(path, body.Bytes())
 	})
 
-	server.Handle("GET /api/2.1/unity-catalog/current-metastore-assignment", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.1/unity-catalog/current-metastore-assignment", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		return catalog.MetastoreAssignment{
 			DefaultCatalogName: "main",
 		}, http.StatusOK
 	})
 
-	server.Handle("GET /api/2.0/permissions/directories/{objectId}", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
-		objectId := r.PathValue("objectId")
+	server.Handle("GET", "/api/2.0/permissions/directories/{objectId}", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+		vars := mux.Vars(r)
+		objectId := vars["objectId"]
 
 		return workspace.WorkspaceObjectPermissions{
 			ObjectId: objectId,
@@ -140,7 +145,7 @@ func AddHandlers(server *testserver.Server) {
 		}, http.StatusOK
 	})
 
-	server.Handle("POST /api/2.1/jobs/create", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("POST", "/api/2.1/jobs/create", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		request := jobs.CreateJob{}
 		decoder := json.NewDecoder(r.Body)
@@ -152,15 +157,31 @@ func AddHandlers(server *testserver.Server) {
 		return fakeWorkspace.JobsCreate(request)
 	})
 
-	server.Handle("GET /api/2.1/jobs/get", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.1/jobs/get", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		jobId := r.URL.Query().Get("job_id")
 		return fakeWorkspace.JobsGet(jobId)
 	})
 
-	server.Handle("GET /api/2.1/jobs/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+	server.Handle("GET", "/api/2.1/jobs/list", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
 		return fakeWorkspace.JobsList()
 	})
+
+	server.Handle("GET", "/oidc/.well-known/oauth-authorization-server", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+		return map[string]string{
+			"authorization_endpoint": server.URL + "oidc/v1/authorize",
+			"token_endpoint":         server.URL + "/oidc/v1/token",
+		}, http.StatusOK
+	})
+
+	server.Handle("POST", "/oidc/v1/token", func(fakeWorkspace *testserver.FakeWorkspace, r *http.Request) (any, int) {
+		return map[string]string{
+			"access_token": "oauth-token",
+			"expires_in":   "3600",
+			"scope":        "all-apis",
+			"token_type":   "Bearer",
+		}, http.StatusOK
+	})
 }
 
 func internalError(err error) (any, int) {


@@ -2,7 +2,7 @@ terraform {
   required_providers {
     databricks = {
       source  = "databricks/databricks"
-      version = "1.64.1"
+      version = "1.65.1"
     }
   }


@@ -4,9 +4,9 @@
 Initializing the backend...
 
 Initializing provider plugins...
-- Finding databricks/databricks versions matching "1.64.1"...
-- Installing databricks/databricks v1.64.1...
-- Installed databricks/databricks v1.64.1 (unauthenticated)
+- Finding databricks/databricks versions matching "1.65.1"...
+- Installing databricks/databricks v1.65.1...
+- Installed databricks/databricks v1.65.1 (unauthenticated)
 
 Terraform has created a lock file .terraform.lock.hcl to record the provider
 selections it made above. Include this file in your version control repository


@@ -122,6 +122,9 @@ func TestConvertPipeline(t *testing.T) {
 				"num_workers": int64(1),
 			},
 		},
+		"run_as": map[string]any{
+			"user_name": "foo@bar.com",
+		},
 	}, out.Pipeline["my_pipeline"])
 
 	// Assert equality on the permissions


@@ -19,3 +19,6 @@ How to regenerate Go structs from an updated terraform provider?
 2. Delete `./tmp` if it exists
 3. Run `go run .`
 4. Run `gofmt -s -w ../schema`
+5. Go back to the root of the repo.
+6. Update `/acceptance/terraform/main.tf` file to use new version of TF provider
+7. Run `go test ./acceptance -v -update -run TestAccept/terraform` to update test output with a new version of TF provider


@@ -1,3 +1,3 @@
 package schema
 
-const ProviderVersion = "1.64.1"
+const ProviderVersion = "1.65.1"


@@ -28,7 +28,6 @@ type DataSourceCatalogCatalogInfo struct {
 	Owner           string            `json:"owner,omitempty"`
 	Properties      map[string]string `json:"properties,omitempty"`
 	ProviderName    string            `json:"provider_name,omitempty"`
-	SecurableKind   string            `json:"securable_kind,omitempty"`
 	SecurableType   string            `json:"securable_type,omitempty"`
 	ShareName       string            `json:"share_name,omitempty"`
 	StorageLocation string            `json:"storage_location,omitempty"`


@@ -0,0 +1,14 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type ResourceAibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy struct {
AccessPolicyType string `json:"access_policy_type"`
}
type ResourceAibiDashboardEmbeddingAccessPolicySetting struct {
Etag string `json:"etag,omitempty"`
Id string `json:"id,omitempty"`
SettingName string `json:"setting_name,omitempty"`
AibiDashboardEmbeddingAccessPolicy *ResourceAibiDashboardEmbeddingAccessPolicySettingAibiDashboardEmbeddingAccessPolicy `json:"aibi_dashboard_embedding_access_policy,omitempty"`
}


@@ -0,0 +1,14 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema
type ResourceAibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains struct {
ApprovedDomains []string `json:"approved_domains"`
}
type ResourceAibiDashboardEmbeddingApprovedDomainsSetting struct {
Etag string `json:"etag,omitempty"`
Id string `json:"id,omitempty"`
SettingName string `json:"setting_name,omitempty"`
AibiDashboardEmbeddingApprovedDomains *ResourceAibiDashboardEmbeddingApprovedDomainsSettingAibiDashboardEmbeddingApprovedDomains `json:"aibi_dashboard_embedding_approved_domains,omitempty"`
}


@@ -8,16 +8,17 @@ type ResourceCustomAppIntegrationTokenAccessPolicy struct {
 }
 
 type ResourceCustomAppIntegration struct {
 	ClientId             string   `json:"client_id,omitempty"`
 	ClientSecret         string   `json:"client_secret,omitempty"`
 	Confidential         bool     `json:"confidential,omitempty"`
 	CreateTime           string   `json:"create_time,omitempty"`
 	CreatedBy            int      `json:"created_by,omitempty"`
 	CreatorUsername      string   `json:"creator_username,omitempty"`
 	Id                   string   `json:"id,omitempty"`
 	IntegrationId        string   `json:"integration_id,omitempty"`
 	Name                 string   `json:"name,omitempty"`
 	RedirectUrls         []string `json:"redirect_urls,omitempty"`
 	Scopes               []string `json:"scopes,omitempty"`
+	UserAuthorizedScopes []string `json:"user_authorized_scopes,omitempty"`
 	TokenAccessPolicy    *ResourceCustomAppIntegrationTokenAccessPolicy `json:"token_access_policy,omitempty"`
 }


@@ -1489,6 +1489,7 @@ type ResourceJob struct {
 	MaxRetries             int               `json:"max_retries,omitempty"`
 	MinRetryIntervalMillis int               `json:"min_retry_interval_millis,omitempty"`
 	Name                   string            `json:"name,omitempty"`
+	PerformanceTarget      string            `json:"performance_target,omitempty"`
 	RetryOnTimeout         bool              `json:"retry_on_timeout,omitempty"`
 	Tags                   map[string]string `json:"tags,omitempty"`
 	TimeoutSeconds         int               `json:"timeout_seconds,omitempty"`


@@ -249,6 +249,11 @@ type ResourcePipelineRestartWindow struct {
 	TimeZoneId string `json:"time_zone_id,omitempty"`
 }
 
+type ResourcePipelineRunAs struct {
+	ServicePrincipalName string `json:"service_principal_name,omitempty"`
+	UserName             string `json:"user_name,omitempty"`
+}
+
 type ResourcePipelineTriggerCron struct {
 	QuartzCronSchedule string `json:"quartz_cron_schedule,omitempty"`
 	TimezoneId         string `json:"timezone_id,omitempty"`
@@ -296,5 +301,6 @@ type ResourcePipeline struct {
 	Library       []ResourcePipelineLibrary      `json:"library,omitempty"`
 	Notification  []ResourcePipelineNotification `json:"notification,omitempty"`
 	RestartWindow *ResourcePipelineRestartWindow `json:"restart_window,omitempty"`
+	RunAs         *ResourcePipelineRunAs         `json:"run_as,omitempty"`
 	Trigger       *ResourcePipelineTrigger       `json:"trigger,omitempty"`
 }


@@ -3,115 +3,119 @@
 package schema
 
 type Resources struct {
 	AccessControlRuleSet map[string]any `json:"databricks_access_control_rule_set,omitempty"`
+	AibiDashboardEmbeddingAccessPolicySetting map[string]any `json:"databricks_aibi_dashboard_embedding_access_policy_setting,omitempty"`
+	AibiDashboardEmbeddingApprovedDomainsSetting map[string]any `json:"databricks_aibi_dashboard_embedding_approved_domains_setting,omitempty"`
 	Alert map[string]any `json:"databricks_alert,omitempty"`
 	App map[string]any `json:"databricks_app,omitempty"`
 	ArtifactAllowlist map[string]any `json:"databricks_artifact_allowlist,omitempty"`
 	AutomaticClusterUpdateWorkspaceSetting map[string]any `json:"databricks_automatic_cluster_update_workspace_setting,omitempty"`
 	AwsS3Mount map[string]any `json:"databricks_aws_s3_mount,omitempty"`
 	AzureAdlsGen1Mount map[string]any `json:"databricks_azure_adls_gen1_mount,omitempty"`
 	AzureAdlsGen2Mount map[string]any `json:"databricks_azure_adls_gen2_mount,omitempty"`
 	AzureBlobMount map[string]any `json:"databricks_azure_blob_mount,omitempty"`
 	Budget map[string]any `json:"databricks_budget,omitempty"`
 	Catalog map[string]any `json:"databricks_catalog,omitempty"`
 	CatalogWorkspaceBinding map[string]any `json:"databricks_catalog_workspace_binding,omitempty"`
 	Cluster map[string]any `json:"databricks_cluster,omitempty"`
 	ClusterPolicy map[string]any `json:"databricks_cluster_policy,omitempty"`
 	ComplianceSecurityProfileWorkspaceSetting map[string]any `json:"databricks_compliance_security_profile_workspace_setting,omitempty"`
 	Connection map[string]any `json:"databricks_connection,omitempty"`
 	Credential map[string]any `json:"databricks_credential,omitempty"`
 	CustomAppIntegration map[string]any `json:"databricks_custom_app_integration,omitempty"`
 	Dashboard map[string]any `json:"databricks_dashboard,omitempty"`
 	DbfsFile map[string]any `json:"databricks_dbfs_file,omitempty"`
 	DefaultNamespaceSetting map[string]any `json:"databricks_default_namespace_setting,omitempty"`
 	Directory map[string]any `json:"databricks_directory,omitempty"`
 	EnhancedSecurityMonitoringWorkspaceSetting map[string]any `json:"databricks_enhanced_security_monitoring_workspace_setting,omitempty"`
 	Entitlements map[string]any `json:"databricks_entitlements,omitempty"`
 	ExternalLocation map[string]any `json:"databricks_external_location,omitempty"`
 	File map[string]any `json:"databricks_file,omitempty"`
 	GitCredential map[string]any `json:"databricks_git_credential,omitempty"`
 	GlobalInitScript map[string]any `json:"databricks_global_init_script,omitempty"`
 	Grant map[string]any `json:"databricks_grant,omitempty"`
 	Grants map[string]any `json:"databricks_grants,omitempty"`
 	Group map[string]any `json:"databricks_group,omitempty"`
 	GroupInstanceProfile map[string]any `json:"databricks_group_instance_profile,omitempty"`
 	GroupMember map[string]any `json:"databricks_group_member,omitempty"`
 	GroupRole map[string]any `json:"databricks_group_role,omitempty"`
 	InstancePool map[string]any `json:"databricks_instance_pool,omitempty"`
 	InstanceProfile map[string]any `json:"databricks_instance_profile,omitempty"`
 	IpAccessList map[string]any `json:"databricks_ip_access_list,omitempty"`
 	Job map[string]any `json:"databricks_job,omitempty"`
 	LakehouseMonitor map[string]any `json:"databricks_lakehouse_monitor,omitempty"`
 	Library map[string]any `json:"databricks_library,omitempty"`
 	Metastore map[string]any `json:"databricks_metastore,omitempty"`
 	MetastoreAssignment map[string]any `json:"databricks_metastore_assignment,omitempty"`
 	MetastoreDataAccess map[string]any `json:"databricks_metastore_data_access,omitempty"`
 	MlflowExperiment map[string]any `json:"databricks_mlflow_experiment,omitempty"`
 	MlflowModel map[string]any `json:"databricks_mlflow_model,omitempty"`
 	MlflowWebhook map[string]any `json:"databricks_mlflow_webhook,omitempty"`
 	ModelServing map[string]any `json:"databricks_model_serving,omitempty"`
 	Mount map[string]any `json:"databricks_mount,omitempty"`
 	MwsCredentials map[string]any `json:"databricks_mws_credentials,omitempty"`
 	MwsCustomerManagedKeys map[string]any `json:"databricks_mws_customer_managed_keys,omitempty"`
 	MwsLogDelivery map[string]any `json:"databricks_mws_log_delivery,omitempty"`
 	MwsNccBinding map[string]any `json:"databricks_mws_ncc_binding,omitempty"`
 	MwsNccPrivateEndpointRule map[string]any `json:"databricks_mws_ncc_private_endpoint_rule,omitempty"`
 	MwsNetworkConnectivityConfig map[string]any `json:"databricks_mws_network_connectivity_config,omitempty"`
 	MwsNetworks map[string]any `json:"databricks_mws_networks,omitempty"`
 	MwsPermissionAssignment map[string]any `json:"databricks_mws_permission_assignment,omitempty"`
 	MwsPrivateAccessSettings map[string]any `json:"databricks_mws_private_access_settings,omitempty"`
 	MwsStorageConfigurations map[string]any `json:"databricks_mws_storage_configurations,omitempty"`
 	MwsVpcEndpoint map[string]any `json:"databricks_mws_vpc_endpoint,omitempty"`
 	MwsWorkspaces map[string]any `json:"databricks_mws_workspaces,omitempty"`
 	Notebook map[string]any `json:"databricks_notebook,omitempty"`
 	NotificationDestination map[string]any `json:"databricks_notification_destination,omitempty"`
 	OboToken map[string]any `json:"databricks_obo_token,omitempty"`
 	OnlineTable map[string]any `json:"databricks_online_table,omitempty"`
 	PermissionAssignment map[string]any `json:"databricks_permission_assignment,omitempty"`
 	Permissions map[string]any `json:"databricks_permissions,omitempty"`
 	Pipeline map[string]any `json:"databricks_pipeline,omitempty"`
 	Provider map[string]any `json:"databricks_provider,omitempty"`
 	QualityMonitor map[string]any `json:"databricks_quality_monitor,omitempty"`
 	Query map[string]any `json:"databricks_query,omitempty"`
 	Recipient map[string]any `json:"databricks_recipient,omitempty"`
 	RegisteredModel map[string]any `json:"databricks_registered_model,omitempty"`
 	Repo map[string]any `json:"databricks_repo,omitempty"`
 	RestrictWorkspaceAdminsSetting map[string]any `json:"databricks_restrict_workspace_admins_setting,omitempty"`
 	Schema map[string]any `json:"databricks_schema,omitempty"`
 	Secret map[string]any `json:"databricks_secret,omitempty"`
 	SecretAcl map[string]any `json:"databricks_secret_acl,omitempty"`
 	SecretScope map[string]any `json:"databricks_secret_scope,omitempty"`
 	ServicePrincipal map[string]any `json:"databricks_service_principal,omitempty"`
 	ServicePrincipalRole map[string]any `json:"databricks_service_principal_role,omitempty"`
 	ServicePrincipalSecret map[string]any `json:"databricks_service_principal_secret,omitempty"`
 	Share map[string]any `json:"databricks_share,omitempty"`
 	SqlAlert map[string]any `json:"databricks_sql_alert,omitempty"`
 	SqlDashboard map[string]any `json:"databricks_sql_dashboard,omitempty"`
 	SqlEndpoint map[string]any `json:"databricks_sql_endpoint,omitempty"`
 	SqlGlobalConfig map[string]any `json:"databricks_sql_global_config,omitempty"`
 	SqlPermissions map[string]any `json:"databricks_sql_permissions,omitempty"`
 	SqlQuery map[string]any `json:"databricks_sql_query,omitempty"`
 	SqlTable map[string]any `json:"databricks_sql_table,omitempty"`
 	SqlVisualization map[string]any `json:"databricks_sql_visualization,omitempty"`
 	SqlWidget map[string]any `json:"databricks_sql_widget,omitempty"`
 	StorageCredential map[string]any `json:"databricks_storage_credential,omitempty"`
 	SystemSchema map[string]any `json:"databricks_system_schema,omitempty"`
 	Table map[string]any `json:"databricks_table,omitempty"`
 	Token map[string]any `json:"databricks_token,omitempty"`
 	User map[string]any `json:"databricks_user,omitempty"`
 	UserInstanceProfile map[string]any `json:"databricks_user_instance_profile,omitempty"`
 	UserRole map[string]any `json:"databricks_user_role,omitempty"`
 	VectorSearchEndpoint map[string]any `json:"databricks_vector_search_endpoint,omitempty"`
 	VectorSearchIndex map[string]any `json:"databricks_vector_search_index,omitempty"`
 	Volume map[string]any `json:"databricks_volume,omitempty"`
 	WorkspaceBinding map[string]any `json:"databricks_workspace_binding,omitempty"`
 	WorkspaceConf map[string]any `json:"databricks_workspace_conf,omitempty"`
 	WorkspaceFile map[string]any `json:"databricks_workspace_file,omitempty"`
 }
 
 func NewResources() *Resources {
 	return &Resources{
 		AccessControlRuleSet: make(map[string]any),
+		AibiDashboardEmbeddingAccessPolicySetting:    make(map[string]any),
+		AibiDashboardEmbeddingApprovedDomainsSetting: make(map[string]any),
 		Alert: make(map[string]any),
 		App: make(map[string]any),
 		ArtifactAllowlist: make(map[string]any),

View File

@@ -21,7 +21,7 @@ type Root struct {
const ProviderHost = "registry.terraform.io"
const ProviderSource = "databricks/databricks"
-const ProviderVersion = "1.64.1"
+const ProviderVersion = "1.65.1"

func NewRoot() *Root {
	return &Root{

13
go.mod
View File

@@ -12,6 +12,7 @@ require (
	github.com/databricks/databricks-sdk-go v0.57.0 // Apache 2.0
	github.com/fatih/color v1.18.0 // MIT
	github.com/google/uuid v1.6.0 // BSD-3-Clause
+	github.com/gorilla/mux v1.8.1 // BSD 3-Clause
	github.com/hashicorp/go-version v1.7.0 // MPL 2.0
	github.com/hashicorp/hc-install v0.9.1 // MPL 2.0
	github.com/hashicorp/terraform-exec v0.22.0 // MPL 2.0
@@ -27,11 +28,11 @@ require (
	github.com/stretchr/testify v1.10.0 // MIT
	github.com/wI2L/jsondiff v0.6.1 // MIT
	golang.org/x/exp v0.0.0-20240222234643-814bf88cf225
-	golang.org/x/mod v0.22.0
-	golang.org/x/oauth2 v0.25.0
-	golang.org/x/sync v0.10.0
-	golang.org/x/term v0.28.0
-	golang.org/x/text v0.21.0
+	golang.org/x/mod v0.23.0
+	golang.org/x/oauth2 v0.26.0
+	golang.org/x/sync v0.11.0
+	golang.org/x/term v0.29.0
+	golang.org/x/text v0.22.0
	gopkg.in/ini.v1 v1.67.0 // Apache 2.0
	gopkg.in/yaml.v3 v3.0.1
)
@@ -71,7 +72,7 @@ require (
	go.opentelemetry.io/otel/trace v1.24.0 // indirect
	golang.org/x/crypto v0.31.0 // indirect
	golang.org/x/net v0.33.0 // indirect
-	golang.org/x/sys v0.29.0 // indirect
+	golang.org/x/sys v0.30.0 // indirect
	golang.org/x/time v0.5.0 // indirect
	google.golang.org/api v0.182.0 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20240521202816-d264139d666e // indirect

26
go.sum generated
View File

@@ -97,6 +97,8 @@ github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfF
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/gax-go/v2 v2.12.4 h1:9gWcmF85Wvq4ryPFvGFaOgPIs1AQX0d0bcbGw4Z96qg=
github.com/googleapis/gax-go/v2 v2.12.4/go.mod h1:KYEYLorsnIGDi/rPC8b5TdlB9kbKoFubselGIoBMCwI=
+github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
+github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
@@ -199,8 +201,8 @@ golang.org/x/exp v0.0.0-20240222234643-814bf88cf225/go.mod h1:CxmFvTBINI24O/j8iY
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
-golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
-golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
+golang.org/x/mod v0.23.0 h1:Zb7khfcRGKk+kqfxFaP5tZqCnDZMjC5VtUBs87Hr6QM=
+golang.org/x/mod v0.23.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -210,13 +212,13 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
-golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/oauth2 v0.26.0 h1:afQXWNNaeC4nvZ0Ed9XvCCzXM6UHJG7iCg0W4fPqSBE=
+golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
-golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
+golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -227,14 +229,14 @@ golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
-golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg=
-golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
+golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
+golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/term v0.29.0 h1:L6pJp37ocefwRRtYPKSWOWzOtWSxVajvz2ldH/xi3iU=
+golang.org/x/term v0.29.0/go.mod h1:6bl4lRlvVuDgSf3179VpIxBF0o10JUpXWOnI7nErv7s=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
-golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
+golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
+golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -20,7 +20,7 @@ func TestTelemetryUpload(t *testing.T) {
	t.Cleanup(server.Close)

	count := 0
-	server.Handle("POST /telemetry-ext", func(_ *testserver.FakeWorkspace, req *http.Request) (resp any, statusCode int) {
+	server.Handle("POST", "/telemetry-ext", func(_ *testserver.FakeWorkspace, req *http.Request) (resp any, statusCode int) {
		count++
		if count == 1 {
			return ResponseBody{

View File

@@ -9,6 +9,8 @@ import (
	"strings"
	"sync"

+	"github.com/gorilla/mux"
	"github.com/stretchr/testify/assert"

	"github.com/databricks/cli/internal/testutil"
@@ -17,7 +19,7 @@ import (
type Server struct {
	*httptest.Server
-	Mux *http.ServeMux
+	Router *mux.Router

	t testutil.TestingT
@@ -34,26 +36,25 @@ type Request struct {
	Headers http.Header `json:"headers,omitempty"`
	Method  string      `json:"method"`
	Path    string      `json:"path"`
-	Body    any         `json:"body"`
+	Body    any         `json:"body,omitempty"`
+	RawBody string      `json:"raw_body,omitempty"`
}

func New(t testutil.TestingT) *Server {
-	mux := http.NewServeMux()
-	server := httptest.NewServer(mux)
+	router := mux.NewRouter()
+	server := httptest.NewServer(router)
	t.Cleanup(server.Close)

	s := &Server{
		Server: server,
-		Mux: mux,
+		Router: router,
		t: t,
		mu: &sync.Mutex{},
		fakeWorkspaces: map[string]*FakeWorkspace{},
	}

-	// The server resolves conflicting handlers by using the one with higher
-	// specificity. This handler is the least specific, so it will be used as a
-	// fallback when no other handlers match.
-	s.Handle("/", func(fakeWorkspace *FakeWorkspace, r *http.Request) (any, int) {
+	// Set up the not found handler as fallback
+	router.NotFoundHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		pattern := r.Method + " " + r.URL.Path

		t.Errorf(`
@@ -74,9 +75,22 @@ Response.StatusCode = <response status-code here>
`, pattern, pattern)

-		return apierr.APIError{
+		w.Header().Set("Content-Type", "application/json")
+		w.WriteHeader(http.StatusNotImplemented)
+
+		resp := apierr.APIError{
			Message: "No stub found for pattern: " + pattern,
-		}, http.StatusNotImplemented
+		}
+		respBytes, err := json.Marshal(resp)
+		if err != nil {
+			t.Errorf("JSON encoding error: %s", err)
+			respBytes = []byte("{\"message\": \"JSON encoding error\"}")
+		}
+
+		if _, err := w.Write(respBytes); err != nil {
+			t.Errorf("Response write error: %s", err)
+		}
	})

	return s
@@ -84,8 +98,8 @@ Response.StatusCode = <response status-code here>
type HandlerFunc func(fakeWorkspace *FakeWorkspace, req *http.Request) (resp any, statusCode int)

-func (s *Server) Handle(pattern string, handler HandlerFunc) {
-	s.Mux.HandleFunc(pattern, func(w http.ResponseWriter, r *http.Request) {
+func (s *Server) Handle(method, path string, handler HandlerFunc) {
+	s.Router.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
		// For simplicity we process requests sequentially. It's fast enough because
		// we don't do any IO except reading and writing request/response bodies.
		s.mu.Lock()
@@ -119,13 +133,19 @@ func (s *Server) Handle(pattern string, handler HandlerFunc) {
			}
		}

-		s.Requests = append(s.Requests, Request{
+		req := Request{
			Headers: headers,
			Method:  r.Method,
			Path:    r.URL.Path,
-			Body:    json.RawMessage(body),
-		})
+		}
+
+		if json.Valid(body) {
+			req.Body = json.RawMessage(body)
+		} else {
+			req.RawBody = string(body)
+		}
+
+		s.Requests = append(s.Requests, req)
		}

		w.Header().Set("Content-Type", "application/json")
@@ -149,7 +169,7 @@ func (s *Server) Handle(pattern string, handler HandlerFunc) {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
-	})
+	}).Methods(method)
}

func getToken(r *http.Request) string {