Compare commits

24 Commits

Author SHA1 Message Date
Ilya Kuznetsov d2bfb67be2
fix: Few descriptions fixes 2024-12-10 16:55:07 +01:00
Ilya Kuznetsov 0171513338
fix: More details about markdownDescription in the comment 2024-12-10 16:44:46 +01:00
Ilya Kuznetsov 16042b5db7
fix: Adds 'markdownDescription' field to list of known keywords 2024-12-10 16:43:17 +01:00
Ilya Kuznetsov 9a5503755a
feat: Format markdown links to absolute 2024-12-10 16:13:51 +01:00
Ilya Kuznetsov e15107fbcc
feat: New annotations 2024-12-10 16:07:25 +01:00
Ilya Kuznetsov a579b1cd03
fix: Use `convert.FromTyped` to generate dyn.Value 2024-12-10 15:51:13 +01:00
Ilya Kuznetsov 00164a8dbf
fix: Linter fix for missing error handler 2024-12-10 14:54:31 +01:00
Ilya Kuznetsov d1649682e6
Merge branch 'main' of github.com:databricks/cli into feat/custom-annotations-json-schema 2024-12-10 14:40:53 +01:00
Ilya Kuznetsov a6c45d5bf7
feat: Yaml styles for open api overrides 2024-12-10 14:00:00 +01:00
Ilya Kuznetsov 073aeca5b7
feat: Add styles 2024-12-10 13:44:20 +01:00
Lennart Kats (databricks) f3c628e537
Allow overriding compute for non-development mode targets (#1899)
## Changes
Allow overriding compute for non-development targets. We previously had
a restriction in place where `--cluster-id` was only allowed for targets
that use `mode: development`. The intention was to prevent mistakes, but
this was overly restrictive.

## Tests
Updated unit tests.
2024-12-10 10:02:44 +00:00
Pieter Noordhuis 48d7c08a46
Bump TF codegen dependencies to latest (#1961)
## Changes

This updates the TF codegen dependencies to latest.

## Tests

Ran codegen and confirmed it still works.

See `bundle/internal/tf/codegen/README.md` for instructions.
2024-12-09 21:29:11 +01:00
Andrew Nester dc2cf3bc42
Process top-level permissions for resources using dynamic config (#1964)
## Changes
This PR ensures that when new resources are added, they are handled by the
top-level permissions mutator, which must either explicitly support or
explicitly not support the resource type.

## Tests
Added unit tests
2024-12-09 15:26:41 +00:00
Pieter Noordhuis 3457c10c7f
Pin gotestsum version to v1.12.0 (#1981)
Make this robust against inadvertent upgrades.
2024-12-09 16:19:19 +01:00
Pieter Noordhuis e9fa7b7c0e
Remove global variable from clusters integration test (#1972)
## Changes

I saw this test fail on rerun because the global wasn't set.

Fix by removing the global and using a different approach to acquire a
valid cluster ID.

## Tests

Integration tests.
2024-12-09 14:25:06 +00:00
Denis Bilenko 1b2be1b2cb
Add error checking in tests and enable errcheck there (#1980)
## Changes
Fix all errcheck-found issues in tests and test helpers. Mostly this is
done by adding require.NoError(t, err), or panic() where the t object
is not available.

The initial change was obtained with aider+claude, then manually reviewed and
cleaned up.

## Tests
Existing tests.
2024-12-09 13:56:41 +01:00
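For context on the change above: a minimal before/after sketch of the pattern applied throughout the tests, modeled on the `touchNotebookFile` fix shown in the diffs further down this page (function names here are illustrative):

```go
package example

import (
	"os"
	"testing"

	"github.com/stretchr/testify/require"
)

// Before: the error returned by WriteString is silently dropped; errcheck flags this.
func writeFixtureUnchecked(f *os.File) {
	f.WriteString("# Databricks notebook source\n") //nolint:errcheck
}

// After: the error is asserted where a *testing.T is available.
func writeFixtureChecked(t *testing.T, f *os.File) {
	_, err := f.WriteString("# Databricks notebook source\n")
	require.NoError(t, err)
}
```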
shreyas-goenka e0d54f0bb6
Do not run mlops-stacks integration test in parallel (#1982)
## Changes
This test changes the cwd using the `testutil.Chdir` function. This
causes flakiness with other integration tests, like
`TestAccWorkspaceFilesExtensionsNotebooksAreNotDeletedAsFiles`, which
rely on the cwd being configured correctly to read test fixtures.

The `t.Setenv` call in `testutil.Chdir` ensures that it cannot be called from
a test that is executing in parallel (or whose parent is), since `t.Setenv`
panics in that case.
2024-12-09 12:34:27 +00:00
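The serialization guarantee above relies on standard `testing` package behavior: `t.Setenv` panics if the test, or any of its ancestors, has called `t.Parallel`. A minimal sketch:

```go
package example

import "testing"

// Any helper that calls t.Setenv, like testutil.Chdir, inherits this guarantee:
// the test below panics at runtime because it declared itself parallel.
func TestCannotBeParallel(t *testing.T) {
	t.Parallel()
	t.Setenv("SOME_VAR", "value") // panics: t.Setenv is not allowed after t.Parallel
}
```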
Pieter Noordhuis 227a13556b
Add build rule for running integration tests (#1965)
## Changes

Make it possible to change what/how we run our integration tests from
within this repository.

## Tests

Integration tests all pass when run with this command.
2024-12-09 09:52:08 +00:00
Denis Bilenko 4c1042132b
Enable linter bodyclose (#1968)
## Changes
Enable linter '[bodyclose](https://github.com/timakin/bodyclose)' and
fix 2 cases in tests.

## Tests
Existing tests.
2024-12-05 19:11:49 +00:00
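For context, this is the class of bug bodyclose reports; a generic sketch, not the two cases fixed in this PR:

```go
package example

import (
	"io"
	"net/http"
)

// leaky never closes the response body; bodyclose reports this.
func leaky(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	_, err = io.ReadAll(resp.Body)
	return err
}

// fixed defers the close as soon as the error has been checked.
func fixed(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.ReadAll(resp.Body)
	return err
}
```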
Pieter Noordhuis 62bc59a3a6
Fail filer integration test if error is nil (#1967)
## Changes

I found a race where this error is nil and the subsequent assert panics
because the error is nil. This change makes the test robust by failing
immediately if the error is different from the one we expect.

## Tests

n/a
2024-12-05 18:20:46 +00:00
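A minimal sketch of the hardening pattern, assuming `fs.ErrNotExist` is the expected error; `doRead` stands in for the filer call under test:

```go
package example

import (
	"io/fs"
	"testing"

	"github.com/stretchr/testify/require"
)

// doRead stands in for the filer call under test; purely illustrative.
func doRead() error { return fs.ErrNotExist }

func TestFailsFastOnUnexpectedError(t *testing.T) {
	err := doRead()
	// Fails the test immediately if err is nil or a different error, instead
	// of letting a later assertion panic on a nil error.
	require.ErrorIs(t, err, fs.ErrNotExist)
}
```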
Pieter Noordhuis 6e754d4f34
Rewrite 'interface{} -> any' (#1959)
## Changes

The `any` alias for `interface{}` has been around since Go 1.18.

Now that we're using golangci-lint (#1953), we can lint on it.

Existing code can be updated with:
```
gofmt -w -r 'interface{} -> any' .
```

## Tests

n/a
2024-12-05 15:37:24 +00:00
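A before/after sketch of what the rewrite rule does (function names are illustrative):

```go
package example

// Before the rewrite:
func dumpOld(values map[string]interface{}) { _ = values }

// After running the gofmt rule above; `any` is a predeclared alias for
// interface{} since Go 1.18, so the change is purely textual.
func dumpNew(values map[string]any) { _ = values }
```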
Pieter Noordhuis 7ffe93e4d0
[Release] Release v0.236.0 (#1966)
**New features for Databricks Asset Bundles:**

This release adds support for managing Unity Catalog volumes as part of
your bundle configuration.

Bundles:
* Add DABs support for Unity Catalog volumes
([#1762](https://github.com/databricks/cli/pull/1762)).
* Support lookup by name of notification destinations
([#1922](https://github.com/databricks/cli/pull/1922)).
* Extend "notebook not found" error to warn about missing extension
([#1920](https://github.com/databricks/cli/pull/1920)).
* Skip sync warning if no sync paths are defined
([#1926](https://github.com/databricks/cli/pull/1926)).
* Add validation for single node clusters
([#1909](https://github.com/databricks/cli/pull/1909)).
* Fix segfault in bundle summary command
([#1937](https://github.com/databricks/cli/pull/1937)).
* Add the `bundle_uuid` helper function for templates
([#1947](https://github.com/databricks/cli/pull/1947)).
* Add default value for `volume_type` for DABs
([#1952](https://github.com/databricks/cli/pull/1952)).
* Properly read Git metadata when running inside workspace
([#1945](https://github.com/databricks/cli/pull/1945)).
* Upgrade TF provider to 1.59.0
([#1960](https://github.com/databricks/cli/pull/1960)).

Internal:
* Breakout variable lookup into separate files and tests
([#1921](https://github.com/databricks/cli/pull/1921)).
* Add golangci-lint v1.62.2
([#1953](https://github.com/databricks/cli/pull/1953)).

Dependency updates:
* Bump golang.org/x/term from 0.25.0 to 0.26.0
([#1907](https://github.com/databricks/cli/pull/1907)).
* Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1
([#1930](https://github.com/databricks/cli/pull/1930)).
* Bump github.com/stretchr/testify from 1.9.0 to 1.10.0
([#1932](https://github.com/databricks/cli/pull/1932)).
* Bump github.com/databricks/databricks-sdk-go from 0.51.0 to 0.52.0
([#1931](https://github.com/databricks/cli/pull/1931)).
2024-12-05 14:39:26 +00:00
Pieter Noordhuis 647b09e6e2
Upgrade TF provider to 1.59.0 (#1960)
## Changes

Notable changes:
* Fixes dashboard deployment if it was trashed out-of-band.
* Removes client-side validation for single-node cluster configuration
(also see #1546).

Beware: for the same reason as in #1900, this excludes the changes for
the quality monitor resource.

## Tests

Integration tests pass.
2024-12-05 12:09:45 +01:00
Denis Bilenko 0ad790e468
Properly read Git metadata when running inside workspace (#1945)
## Changes

Since there is no .git directory in the Workspace file system, we need to
make an API call to api/2.0/workspace/get-status?return_git_info=true to
fetch the root of the git repo, the current branch, commit, and origin.

Added a new function, FetchRepositoryInfo, that either looks up and parses
.git or calls the remote API, depending on the environment.

Refactor Repository/View/FileSet to accept repository root rather than
calculate it. This helps because:
- Repository is currently created in multiple places and finding the
repository root is becoming relatively expensive (API call needed).
- Repository/FileSet/View do not have access to the current Bundle, which is
where the WorkspaceClient is stored.

## Tests

- Tested manually by running "bundle validate --json" inside web
terminal within Databricks env.
- Added integration tests for the new API.

---------

Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-05 10:13:13 +00:00
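A minimal sketch of the workspace call described above, assuming a plain HTTP client with authentication injected elsewhere; the actual implementation goes through the Databricks SDK and the new FetchRepositoryInfo function shown in the diffs below:

```go
package example

import (
	"context"
	"fmt"
	"net/http"
	"net/url"
)

// fetchGitInfo issues the get-status call with return_git_info=true so the
// response includes the repo root, current branch, commit, and origin.
func fetchGitInfo(ctx context.Context, host, path string) (*http.Response, error) {
	u := fmt.Sprintf("%s/api/2.0/workspace/get-status?path=%s&return_git_info=true",
		host, url.QueryEscape(path))
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, u, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req) // caller must close resp.Body
}
```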
100 changed files with 2207 additions and 871 deletions

View File

@@ -44,7 +44,7 @@ jobs:
       run: |
         echo "GOPATH=$(go env GOPATH)" >> $GITHUB_ENV
         echo "$(go env GOPATH)/bin" >> $GITHUB_PATH
-        go install gotest.tools/gotestsum@latest
+        go install gotest.tools/gotestsum@v1.12.0
     - name: Pull external libraries
       run: |
@@ -124,14 +124,19 @@ jobs:
       # By default the ajv-cli runs in strict mode which will fail if the schema
       # itself is not valid. Strict mode is more strict than the JSON schema
       # specification. See for details: https://ajv.js.org/options.html#strict-mode-options
+      # The ajv-cli is configured to use the markdownDescription keyword which is not part of the JSON schema specification,
+      # but is used in editors like VSCode to render markdown in the description field
     - name: Validate bundle schema
       run: |
         go run main.go bundle schema > schema.json
+        # Add markdownDescription keyword to ajv
+        echo "module.exports=function(a){a.addKeyword('markdownDescription')}" >> keywords.js
         for file in ./bundle/internal/schema/testdata/pass/*.yml; do
-          ajv test -s schema.json -d $file --valid
+          ajv test -s schema.json -d $file --valid -c=./keywords.js
         done
         for file in ./bundle/internal/schema/testdata/fail/*.yml; do
-          ajv test -s schema.json -d $file --invalid
+          ajv test -s schema.json -d $file --invalid -c=./keywords.js
         done

View File

@@ -1,9 +1,8 @@
 linters:
   disable-all: true
   enable:
-    # errcheck and govet are part of default setup and should be included but give too many errors now
-    # once errors are fixed, they should be enabled here:
-    #- errcheck
+    - bodyclose
+    - errcheck
     - gosimple
     #- govet
     - ineffassign
@@ -15,5 +14,11 @@ linters-settings:
   rewrite-rules:
     - pattern: 'a[b:len(a)]'
       replacement: 'a[b:]'
+    - pattern: 'interface{}'
+      replacement: 'any'
 issues:
   exclude-dirs-use-default: false # recommended by docs https://golangci-lint.run/usage/false-positives/
+  exclude-rules:
+    - path-except: (_test\.go|internal)
+      linters:
+        - errcheck

View File

@@ -1,5 +1,32 @@
 # Version changelog

+## [Release] Release v0.236.0
+
+**New features for Databricks Asset Bundles:**
+
+This release adds support for managing Unity Catalog volumes as part of your bundle configuration.
+
+Bundles:
+* Add DABs support for Unity Catalog volumes ([#1762](https://github.com/databricks/cli/pull/1762)).
+* Support lookup by name of notification destinations ([#1922](https://github.com/databricks/cli/pull/1922)).
+* Extend "notebook not found" error to warn about missing extension ([#1920](https://github.com/databricks/cli/pull/1920)).
+* Skip sync warning if no sync paths are defined ([#1926](https://github.com/databricks/cli/pull/1926)).
+* Add validation for single node clusters ([#1909](https://github.com/databricks/cli/pull/1909)).
+* Fix segfault in bundle summary command ([#1937](https://github.com/databricks/cli/pull/1937)).
+* Add the `bundle_uuid` helper function for templates ([#1947](https://github.com/databricks/cli/pull/1947)).
+* Add default value for `volume_type` for DABs ([#1952](https://github.com/databricks/cli/pull/1952)).
+* Properly read Git metadata when running inside workspace ([#1945](https://github.com/databricks/cli/pull/1945)).
+* Upgrade TF provider to 1.59.0 ([#1960](https://github.com/databricks/cli/pull/1960)).
+
+Internal:
+* Breakout variable lookup into separate files and tests ([#1921](https://github.com/databricks/cli/pull/1921)).
+* Add golangci-lint v1.62.2 ([#1953](https://github.com/databricks/cli/pull/1953)).
+
+Dependency updates:
+* Bump golang.org/x/term from 0.25.0 to 0.26.0 ([#1907](https://github.com/databricks/cli/pull/1907)).
+* Bump github.com/Masterminds/semver/v3 from 3.3.0 to 3.3.1 ([#1930](https://github.com/databricks/cli/pull/1930)).
+* Bump github.com/stretchr/testify from 1.9.0 to 1.10.0 ([#1932](https://github.com/databricks/cli/pull/1932)).
+* Bump github.com/databricks/databricks-sdk-go from 0.51.0 to 0.52.0 ([#1931](https://github.com/databricks/cli/pull/1931)).
+
 ## [Release] Release v0.235.0

 **Note:** the `bundle generate` command now uses the `.<resource-type>.yml`

View File

@@ -36,8 +36,11 @@ vendor:
 	@echo "✓ Filling vendor folder with library code ..."
 	@go mod vendor

-.PHONY: build vendor coverage test lint fmt
+integration:
+	gotestsum --format github-actions --rerun-fails --jsonfile output.json --packages "./internal/..." -- -run "TestAcc.*" -parallel 4 -timeout=2h

 schema:
 	@echo "✓ Generating json-schema ..."
 	@go run ./bundle/internal/schema ./bundle/internal/schema ./bundle/schema/jsonschema.json
+
+.PHONY: fmt lint lintfix test testonly coverage build snapshot vendor integration schema

View File

@@ -48,6 +48,10 @@ type Bundle struct {
 	// Exclusively use this field for filesystem operations.
 	SyncRoot vfs.Path

+	// Path to the root of git worktree containing the bundle.
+	// https://git-scm.com/docs/git-worktree
+	WorktreeRoot vfs.Path
+
 	// Config contains the bundle configuration.
 	// It is loaded from the bundle configuration files and mutators may update it.
 	Config config.Root

View File

@@ -32,6 +32,10 @@ func (r ReadOnlyBundle) SyncRoot() vfs.Path {
 	return r.b.SyncRoot
 }

+func (r ReadOnlyBundle) WorktreeRoot() vfs.Path {
+	return r.b.WorktreeRoot
+}
+
 func (r ReadOnlyBundle) WorkspaceClient() *databricks.WorkspaceClient {
 	return r.b.WorkspaceClient()
 }

View File

@@ -110,7 +110,8 @@ func TestInitializeURLs(t *testing.T) {
 		"dashboard1": "https://mycompany.databricks.com/dashboardsv3/01ef8d56871e1d50ae30ce7375e42478/published?o=123456",
 	}

-	initializeForWorkspace(b, "123456", "https://mycompany.databricks.com/")
+	err := initializeForWorkspace(b, "123456", "https://mycompany.databricks.com/")
+	require.NoError(t, err)

 	for _, group := range b.Config.Resources.AllResources() {
 		for key, r := range group.Resources {
@@ -133,7 +134,8 @@ func TestInitializeURLsWithoutOrgId(t *testing.T) {
 		},
 	}

-	initializeForWorkspace(b, "123456", "https://adb-123456.azuredatabricks.net/")
+	err := initializeForWorkspace(b, "123456", "https://adb-123456.azuredatabricks.net/")
+	require.NoError(t, err)

 	require.Equal(t, "https://adb-123456.azuredatabricks.net/jobs/1", b.Config.Resources.Jobs["job1"].URL)
 }

View File

@@ -7,7 +7,7 @@ import (
 	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/libs/diag"
 	"github.com/databricks/cli/libs/git"
-	"github.com/databricks/cli/libs/log"
+	"github.com/databricks/cli/libs/vfs"
 )

 type loadGitDetails struct{}
@@ -21,45 +21,40 @@ func (m *loadGitDetails) Name() string {
 }

 func (m *loadGitDetails) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
-	// Load relevant git repository
-	repo, err := git.NewRepository(b.BundleRoot)
+	var diags diag.Diagnostics
+	info, err := git.FetchRepositoryInfo(ctx, b.BundleRoot.Native(), b.WorkspaceClient())
 	if err != nil {
-		return diag.FromErr(err)
+		diags = append(diags, diag.WarningFromErr(err)...)
 	}

-	// Read branch name of current checkout
-	branch, err := repo.CurrentBranch()
-	if err == nil {
-		b.Config.Bundle.Git.ActualBranch = branch
-		if b.Config.Bundle.Git.Branch == "" {
-			// Only load branch if there's no user defined value
-			b.Config.Bundle.Git.Inferred = true
-			b.Config.Bundle.Git.Branch = branch
-		}
-	} else {
-		log.Warnf(ctx, "failed to load current branch: %s", err)
+	if info.WorktreeRoot == "" {
+		b.WorktreeRoot = b.BundleRoot
+	} else {
+		b.WorktreeRoot = vfs.MustNew(info.WorktreeRoot)
+	}
+
+	b.Config.Bundle.Git.ActualBranch = info.CurrentBranch
+	if b.Config.Bundle.Git.Branch == "" {
+		// Only load branch if there's no user defined value
+		b.Config.Bundle.Git.Inferred = true
+		b.Config.Bundle.Git.Branch = info.CurrentBranch
 	}

 	// load commit hash if undefined
 	if b.Config.Bundle.Git.Commit == "" {
-		commit, err := repo.LatestCommit()
-		if err != nil {
-			log.Warnf(ctx, "failed to load latest commit: %s", err)
-		} else {
-			b.Config.Bundle.Git.Commit = commit
-		}
+		b.Config.Bundle.Git.Commit = info.LatestCommit
 	}
+
 	// load origin url if undefined
 	if b.Config.Bundle.Git.OriginURL == "" {
-		remoteUrl := repo.OriginUrl()
-		b.Config.Bundle.Git.OriginURL = remoteUrl
+		b.Config.Bundle.Git.OriginURL = info.OriginURL
 	}

-	// repo.Root() returns the absolute path of the repo
-	relBundlePath, err := filepath.Rel(repo.Root(), b.BundleRoot.Native())
+	relBundlePath, err := filepath.Rel(b.WorktreeRoot.Native(), b.BundleRoot.Native())
 	if err != nil {
-		return diag.FromErr(err)
+		diags = append(diags, diag.FromErr(err)...)
+	} else {
+		b.Config.Bundle.Git.BundleRootPath = filepath.ToSlash(relBundlePath)
 	}
-	b.Config.Bundle.Git.BundleRootPath = filepath.ToSlash(relBundlePath)
-	return nil
+	return diags
 }

View File

@@ -6,6 +6,7 @@ import (
 	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/bundle/config"
 	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/databricks/cli/libs/cmdio"
 	"github.com/databricks/cli/libs/diag"
 	"github.com/databricks/cli/libs/env"
 )
@@ -38,18 +39,31 @@ func overrideJobCompute(j *resources.Job, compute string) {
 }

 func (m *overrideCompute) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
-	if b.Config.Bundle.Mode != config.Development {
+	var diags diag.Diagnostics
+
+	if b.Config.Bundle.Mode == config.Production {
 		if b.Config.Bundle.ClusterId != "" {
-			return diag.Errorf("cannot override compute for an target that does not use 'mode: development'")
+			// Overriding compute via a command-line flag for production works, but is not recommended.
+			diags = diags.Extend(diag.Diagnostics{{
+				Summary: "Setting a cluster override for a target that uses 'mode: production' is not recommended",
+				Detail:  "It is recommended to always use the same compute for production target for consistency.",
+			}})
 		}
-		return nil
 	}
 	if v := env.Get(ctx, "DATABRICKS_CLUSTER_ID"); v != "" {
+		// For historical reasons, we allow setting the cluster ID via the DATABRICKS_CLUSTER_ID
+		// when development mode is used. Sometimes, this is done by accident, so we log an info message.
+		if b.Config.Bundle.Mode == config.Development {
+			cmdio.LogString(ctx, "Setting a cluster override because DATABRICKS_CLUSTER_ID is set. It is recommended to use --cluster-id instead, which works in any target mode.")
+		} else {
+			// We don't allow using DATABRICKS_CLUSTER_ID in any other mode, it's too error-prone.
+			return diag.Warningf("The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'")
+		}
 		b.Config.Bundle.ClusterId = v
 	}

 	if b.Config.Bundle.ClusterId == "" {
-		return nil
+		return diags
 	}

 	r := b.Config.Resources
@@ -57,5 +71,5 @@ func (m *overrideCompute) Apply(ctx context.Context, b *bundle.Bundle) diag.Diag
 		overrideJobCompute(r.Jobs[i], b.Config.Bundle.ClusterId)
 	}

-	return nil
+	return diags
 }

View File

@@ -14,7 +14,7 @@ import (
 	"github.com/stretchr/testify/require"
 )

-func TestOverrideDevelopment(t *testing.T) {
+func TestOverrideComputeModeDevelopment(t *testing.T) {
 	t.Setenv("DATABRICKS_CLUSTER_ID", "")
 	b := &bundle.Bundle{
 		Config: config.Root{
@@ -62,10 +62,13 @@ func TestOverrideDevelopment(t *testing.T) {
 	assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[3].JobClusterKey)
 }

-func TestOverrideDevelopmentEnv(t *testing.T) {
+func TestOverrideComputeModeDefaultIgnoresVariable(t *testing.T) {
 	t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
 	b := &bundle.Bundle{
 		Config: config.Root{
+			Bundle: config.Bundle{
+				Mode: "",
+			},
 			Resources: config.Resources{
 				Jobs: map[string]*resources.Job{
 					"job1": {JobSettings: &jobs.JobSettings{
@@ -86,11 +89,12 @@ func TestOverrideDevelopmentEnv(t *testing.T) {

 	m := mutator.OverrideCompute()
 	diags := bundle.Apply(context.Background(), b, m)
-	require.NoError(t, diags.Error())
+	require.Len(t, diags, 1)
+	assert.Equal(t, "The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'", diags[0].Summary)
 	assert.Equal(t, "cluster2", b.Config.Resources.Jobs["job1"].Tasks[1].ExistingClusterId)
 }

-func TestOverridePipelineTask(t *testing.T) {
+func TestOverrideComputePipelineTask(t *testing.T) {
 	t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
 	b := &bundle.Bundle{
 		Config: config.Root{
@@ -115,7 +119,7 @@ func TestOverridePipelineTask(t *testing.T) {
 	assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[0].ExistingClusterId)
 }

-func TestOverrideForEachTask(t *testing.T) {
+func TestOverrideComputeForEachTask(t *testing.T) {
 	t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
 	b := &bundle.Bundle{
 		Config: config.Root{
@@ -140,10 +144,11 @@ func TestOverrideForEachTask(t *testing.T) {
 	assert.Empty(t, b.Config.Resources.Jobs["job1"].Tasks[0].ForEachTask.Task)
 }

-func TestOverrideProduction(t *testing.T) {
+func TestOverrideComputeModeProduction(t *testing.T) {
 	b := &bundle.Bundle{
 		Config: config.Root{
 			Bundle: config.Bundle{
+				Mode:      config.Production,
 				ClusterId: "newClusterID",
 			},
 			Resources: config.Resources{
@@ -166,13 +171,18 @@ func TestOverrideProduction(t *testing.T) {

 	m := mutator.OverrideCompute()
 	diags := bundle.Apply(context.Background(), b, m)
-	require.True(t, diags.HasError())
+	require.Len(t, diags, 1)
+	assert.Equal(t, "Setting a cluster override for a target that uses 'mode: production' is not recommended", diags[0].Summary)
+	assert.Equal(t, "newClusterID", b.Config.Resources.Jobs["job1"].Tasks[0].ExistingClusterId)
 }

-func TestOverrideProductionEnv(t *testing.T) {
+func TestOverrideComputeModeProductionIgnoresVariable(t *testing.T) {
 	t.Setenv("DATABRICKS_CLUSTER_ID", "newClusterId")
 	b := &bundle.Bundle{
 		Config: config.Root{
+			Bundle: config.Bundle{
+				Mode: config.Production,
+			},
 			Resources: config.Resources{
 				Jobs: map[string]*resources.Job{
 					"job1": {JobSettings: &jobs.JobSettings{
@@ -193,5 +203,7 @@ func TestOverrideProductionEnv(t *testing.T) {

 	m := mutator.OverrideCompute()
 	diags := bundle.Apply(context.Background(), b, m)
-	require.NoError(t, diags.Error())
+	require.Len(t, diags, 1)
+	assert.Equal(t, "The DATABRICKS_CLUSTER_ID variable is set but is ignored since the current target does not use 'mode: development'", diags[0].Summary)
+	assert.Equal(t, "cluster2", b.Config.Resources.Jobs["job1"].Tasks[1].ExistingClusterId)
 }

View File

@@ -108,7 +108,8 @@ func TestNoLookupIfVariableIsSet(t *testing.T) {
 	m := mocks.NewMockWorkspaceClient(t)
 	b.SetWorkpaceClient(m.WorkspaceClient)

-	b.Config.Variables["my-cluster-id"].Set("random value")
+	err := b.Config.Variables["my-cluster-id"].Set("random value")
+	require.NoError(t, err)

 	diags := bundle.Apply(context.Background(), b, ResolveResourceReferences())
 	require.NoError(t, diags.Error())

View File

@@ -28,7 +28,8 @@ import (
 func touchNotebookFile(t *testing.T, path string) {
 	f, err := os.Create(path)
 	require.NoError(t, err)
-	f.WriteString("# Databricks notebook source\n")
+	_, err = f.WriteString("# Databricks notebook source\n")
+	require.NoError(t, err)
 	f.Close()
 }

View File

@@ -49,7 +49,8 @@ func TestCustomMarshallerIsImplemented(t *testing.T) {
 			// Eg: resource.Job implements MarshalJSON
 			v := reflect.Zero(vt.Elem()).Interface()
 			assert.NotPanics(t, func() {
-				json.Marshal(v)
+				_, err := json.Marshal(v)
+				assert.NoError(t, err)
 			}, "Resource %s does not have a custom marshaller", field.Name)

 			// Unmarshalling a *resourceStruct will panic if the resource does not have a custom unmarshaller
@@ -58,7 +59,8 @@ func TestCustomMarshallerIsImplemented(t *testing.T) {
 			// Eg: *resource.Job implements UnmarshalJSON
 			v = reflect.New(vt.Elem()).Interface()
 			assert.NotPanics(t, func() {
-				json.Unmarshal([]byte("{}"), v)
+				err := json.Unmarshal([]byte("{}"), v)
+				assert.NoError(t, err)
 			}, "Resource %s does not have a custom unmarshaller", field.Name)
 		}
 }

View File

@@ -100,7 +100,7 @@ func TestRootMergeTargetOverridesWithMode(t *testing.T) {
 			},
 		},
 	}
-	root.initializeDynamicValue()
+	require.NoError(t, root.initializeDynamicValue())
 	require.NoError(t, root.MergeTargetOverrides("development"))
 	assert.Equal(t, Development, root.Bundle.Mode)
 }
@@ -133,7 +133,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
 			"complex": {
 				Type:        variable.VariableTypeComplex,
 				Description: "complex var",
-				Default: map[string]interface{}{
+				Default: map[string]any{
 					"key": "value",
 				},
 			},
@@ -148,7 +148,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
 				"complex": {
 					Type:        "wrong",
 					Description: "wrong",
-					Default: map[string]interface{}{
+					Default: map[string]any{
 						"key1": "value1",
 					},
 				},
@@ -156,7 +156,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
 			},
 		},
 	}
-	root.initializeDynamicValue()
+	require.NoError(t, root.initializeDynamicValue())
 	require.NoError(t, root.MergeTargetOverrides("development"))
 	assert.Equal(t, "bar", root.Variables["foo"].Default)
 	assert.Equal(t, "foo var", root.Variables["foo"].Description)
@@ -164,7 +164,7 @@ func TestRootMergeTargetOverridesWithVariables(t *testing.T) {
 	assert.Equal(t, "foo2", root.Variables["foo2"].Default)
 	assert.Equal(t, "foo2 var", root.Variables["foo2"].Description)

-	assert.Equal(t, map[string]interface{}{
+	assert.Equal(t, map[string]any{
 		"key1": "value1",
 	}, root.Variables["complex"].Default)
 	assert.Equal(t, "complex var", root.Variables["complex"].Description)

View File

@@ -44,6 +44,7 @@ func setupBundleForFilesToSyncTest(t *testing.T) *bundle.Bundle {
 		BundleRoot:   vfs.MustNew(dir),
 		SyncRootPath: dir,
 		SyncRoot:     vfs.MustNew(dir),
+		WorktreeRoot: vfs.MustNew(dir),
 		Config: config.Root{
 			Bundle: config.Bundle{
 				Target: "default",

View File

@@ -11,6 +11,7 @@ import (
 	"github.com/databricks/cli/libs/databrickscfg"
 	"github.com/databricks/databricks-sdk-go/config"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )

 func setupWorkspaceTest(t *testing.T) string {
@@ -42,11 +43,12 @@ func TestWorkspaceResolveProfileFromHost(t *testing.T) {
 		setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			Profile: "default",
 			Host:    "https://abc.cloud.databricks.com",
 			Token:   "123",
 		})
+		require.NoError(t, err)

 		client, err := w.Client()
 		assert.NoError(t, err)
@@ -57,12 +59,13 @@ func TestWorkspaceResolveProfileFromHost(t *testing.T) {
 		home := setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			ConfigFile: filepath.Join(home, "customcfg"),
 			Profile:    "custom",
 			Host:       "https://abc.cloud.databricks.com",
 			Token:      "123",
 		})
+		require.NoError(t, err)

 		t.Setenv("DATABRICKS_CONFIG_FILE", filepath.Join(home, "customcfg"))
 		client, err := w.Client()
@@ -90,12 +93,13 @@ func TestWorkspaceVerifyProfileForHost(t *testing.T) {
 		setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			Profile: "abc",
 			Host:    "https://abc.cloud.databricks.com",
 		})
+		require.NoError(t, err)

-		_, err := w.Client()
+		_, err = w.Client()
 		assert.NoError(t, err)
 	})
@@ -103,12 +107,13 @@ func TestWorkspaceVerifyProfileForHost(t *testing.T) {
 		setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			Profile: "abc",
 			Host:    "https://def.cloud.databricks.com",
 		})
+		require.NoError(t, err)

-		_, err := w.Client()
+		_, err = w.Client()
 		assert.ErrorContains(t, err, "config host mismatch")
 	})
@@ -116,14 +121,15 @@ func TestWorkspaceVerifyProfileForHost(t *testing.T) {
 		home := setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			ConfigFile: filepath.Join(home, "customcfg"),
 			Profile:    "abc",
 			Host:       "https://abc.cloud.databricks.com",
 		})
+		require.NoError(t, err)

 		t.Setenv("DATABRICKS_CONFIG_FILE", filepath.Join(home, "customcfg"))
-		_, err := w.Client()
+		_, err = w.Client()
 		assert.NoError(t, err)
 	})
@@ -131,14 +137,15 @@ func TestWorkspaceVerifyProfileForHost(t *testing.T) {
 		home := setupWorkspaceTest(t)

 		// This works if there is a config file with a matching profile.
-		databrickscfg.SaveToProfile(context.Background(), &config.Config{
+		err := databrickscfg.SaveToProfile(context.Background(), &config.Config{
 			ConfigFile: filepath.Join(home, "customcfg"),
 			Profile:    "abc",
 			Host:       "https://def.cloud.databricks.com",
 		})
+		require.NoError(t, err)

 		t.Setenv("DATABRICKS_CONFIG_FILE", filepath.Join(home, "customcfg"))
-		_, err := w.Client()
+		_, err = w.Client()
 		assert.ErrorContains(t, err, "config host mismatch")
 	})
 }

View File

@@ -28,6 +28,7 @@ func GetSyncOptions(ctx context.Context, rb bundle.ReadOnlyBundle) (*sync.SyncOp
 	}

 	opts := &sync.SyncOptions{
+		WorktreeRoot: rb.WorktreeRoot(),
 		LocalRoot:    rb.SyncRoot(),
 		Paths:        rb.Config().Sync.Paths,
 		Include:      includes,

View File

@@ -10,7 +10,7 @@ import (
 // with the path it is loaded from.
 func SetLocation(b *bundle.Bundle, prefix string, locations []dyn.Location) {
 	start := dyn.MustPathFromString(prefix)
-	b.Config.Mutate(func(root dyn.Value) (dyn.Value, error) {
+	err := b.Config.Mutate(func(root dyn.Value) (dyn.Value, error) {
 		return dyn.Walk(root, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
 			// If the path has the given prefix, set the location.
 			if p.HasPrefix(start) {
@@ -27,4 +27,7 @@ func SetLocation(b *bundle.Bundle, prefix string, locations []dyn.Location) {
 			return v, dyn.ErrSkip
 		})
 	})
+	if err != nil {
+		panic("Mutate() failed: " + err.Error())
+	}
 }

View File

@@ -2,11 +2,13 @@ package main

 import (
 	"bytes"
+	"fmt"
 	"os"
 	"reflect"
+	"regexp"
 	"strings"

-	"github.com/ghodss/yaml"
+	yaml3 "gopkg.in/yaml.v3"

 	"github.com/databricks/cli/libs/dyn"
 	"github.com/databricks/cli/libs/dyn/convert"
@@ -104,32 +106,25 @@ func (d *annotationHandler) addAnnotations(typ reflect.Type, s jsonschema.Schema
 // Adds empty annotations with placeholders to the annotation file
 func (d *annotationHandler) sync(outputPath string) error {
-	file, err := os.ReadFile(outputPath)
+	existingFile, err := os.ReadFile(outputPath)
 	if err != nil {
 		return err
 	}
-	existing, err := yamlloader.LoadYAML(outputPath, bytes.NewBuffer(file))
+	existing, err := yamlloader.LoadYAML("", bytes.NewBuffer(existingFile))
 	if err != nil {
 		return err
 	}
-	emptyB, err := yaml.Marshal(d.empty)
+	missingAnnotations, err := convert.FromTyped(&d.empty, dyn.NilValue)
 	if err != nil {
 		return err
 	}
-	empty, err := yamlloader.LoadYAML("", bytes.NewBuffer(emptyB))
-	if err != nil {
-		return err
-	}
-	mergedFile, err := merge.Merge(existing, empty)
+	output, err := merge.Merge(existing, missingAnnotations)
 	if err != nil {
 		return err
 	}
-	saver := yamlsaver.NewSaver()
-	config, _ := mergedFile.AsMap()
-	err = saver.SaveAsYAML(config, outputPath, true)
+	err = saveYamlWithStyle(outputPath, output)
 	if err != nil {
 		return err
 	}
@@ -148,7 +143,65 @@ func assingAnnotation(s *jsonschema.Schema, a annotation) {
 	if a.Default != nil {
 		s.Default = a.Default
 	}
-	s.MarkdownDescription = a.MarkdownDescription
+	s.MarkdownDescription = convertLinksToAbsoluteUrl(a.MarkdownDescription)
 	s.Title = a.Title
 	s.Enum = a.Enum
 }
+
+func saveYamlWithStyle(outputPath string, input dyn.Value) error {
+	style := map[string]yaml3.Style{}
+	file, _ := input.AsMap()
+	for _, v := range file.Keys() {
+		style[v.MustString()] = yaml3.LiteralStyle
+	}
+
+	saver := yamlsaver.NewSaverWithStyle(style)
+	err := saver.SaveAsYAML(file, outputPath, true)
+	if err != nil {
+		return err
+	}
+	return nil
+}
+
+func convertLinksToAbsoluteUrl(s string) string {
+	if s == "" {
+		return s
+	}
+	base := "https://docs.databricks.com"
+	referencePage := "/dev-tools/bundles/reference.html"
+
+	// Regular expression to match Markdown-style links
+	re := regexp.MustCompile(`\[_\]\(([^)]+)\)`)
+	result := re.ReplaceAllStringFunc(s, func(match string) string {
+		// Extract the URL inside parentheses
+		matches := re.FindStringSubmatch(match)
+		if len(matches) < 2 {
+			return match // Return original if no match found
+		}
+		link := matches[1]
+		var text, absoluteURL string
+
+		if strings.HasPrefix(link, "#") {
+			text = strings.TrimPrefix(link, "#")
+			absoluteURL = fmt.Sprintf("%s%s%s", base, referencePage, link)
+		} else if strings.HasPrefix(link, "/") {
+			// Handle relative paths like /dev-tools/bundles/resources.html#dashboard
+			if strings.Contains(link, "#") {
+				parts := strings.Split(link, "#")
+				text = parts[1]
+				absoluteURL = fmt.Sprintf("%s%s", base, link)
+			} else {
+				text = "link"
+				absoluteURL = fmt.Sprintf("%s%s", base, link)
+			}
+			absoluteURL = strings.ReplaceAll(absoluteURL, ".md", ".html")
+		} else {
+			return match
+		}
+
+		return fmt.Sprintf("[%s](%s)", text, absoluteURL)
+	})
+
+	return result
+}

View File

@ -1,289 +1,495 @@
github.com/databricks/cli/bundle/config.Artifact: github.com/databricks/cli/bundle/config.Artifact:
build: "build":
description: An optional set of non-default build commands that you want to run locally before deployment. For Python wheel builds, the Databricks CLI assumes that it can find a local install of the Python wheel package to run builds, and it runs the command python setup.py bdist_wheel by default during each bundle deployment. To specify multiple build commands, separate each command with double-ampersand (&&) characters. "description": |-
executable: An optional set of non-default build commands that you want to run locally before deployment.
description: PLACEHOLDER
files: For Python wheel builds, the Databricks CLI assumes that it can find a local install of the Python wheel package to run builds, and it runs the command python setup.py bdist_wheel by default during each bundle deployment.
description: PLACEHOLDER
path: To specify multiple build commands, separate each command with double-ampersand (&&) characters.
description: PLACEHOLDER "executable":
type: "description": |-
description: PLACEHOLDER The executable type.
"files":
"description": |-
The source files for the artifact.
"markdown_description": |-
The source files for the artifact, defined as an [_](#artifact_file).
"path":
"description": |-
The location where the built artifact will be saved.
"type":
"description": |-
The type of the artifact. Valid values are wheel or jar.
github.com/databricks/cli/bundle/config.ArtifactFile: github.com/databricks/cli/bundle/config.ArtifactFile:
source: "source":
description: PLACEHOLDER "description": |-
The path of the files used to build the artifact.
github.com/databricks/cli/bundle/config.Bundle: github.com/databricks/cli/bundle/config.Bundle:
cluster_id: "cluster_id":
description: PLACEHOLDER "description": |-
compute_id: The ID of a cluster to use to run the bundle.
description: PLACEHOLDER "markdown_description": |-
databricks_cli_version: The ID of a cluster to use to run the bundle. See [_](/dev-tools/bundles/settings.md#cluster_id).
description: PLACEHOLDER "compute_id":
deployment: "description": |-
description: PLACEHOLDER PLACEHOLDER
git: "databricks_cli_version":
description: PLACEHOLDER "description": |-
name: The Databricks CLI version to use for the bundle.
description: PLACEHOLDER "markdown_description": |-
uuid: The Databricks CLI version to use for the bundle. See [_](/dev-tools/bundles/settings.md#databricks_cli_version).
description: PLACEHOLDER "deployment":
"description": |-
The definition of the bundle deployment
"markdown_description": |-
The definition of the bundle deployment. For supported attributes, see [_](#deployment) and [_](/dev-tools/bundles/deployment-modes.md).
"git":
"description": |-
The Git version control details that are associated with your bundle.
"markdown_description": |-
The Git version control details that are associated with your bundle. For supported attributes, see [_](#git) and [_](/dev-tools/bundles/settings.md#git).
"name":
"description": |-
The name of the bundle.
"uuid":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config.Deployment: github.com/databricks/cli/bundle/config.Deployment:
fail_on_active_runs: "fail_on_active_runs":
description: PLACEHOLDER "description": |-
lock: Whether to fail on active runs. If this is set to true a deployment that is running can be interrupted.
description: PLACEHOLDER "lock":
"description": |-
The deployment lock attributes.
"markdown_description": |-
The deployment lock attributes. See [_](#lock).
github.com/databricks/cli/bundle/config.Experimental: github.com/databricks/cli/bundle/config.Experimental:
pydabs: "pydabs":
description: PLACEHOLDER "description": |-
python_wheel_wrapper: The PyDABs configuration.
description: PLACEHOLDER "python_wheel_wrapper":
scripts: "description": |-
description: PLACEHOLDER Whether to use a Python wheel wrapper
use_legacy_run_as: "scripts":
description: PLACEHOLDER "description": |-
The commands to run
"use_legacy_run_as":
"description": |-
Whether to use the legacy run_as behavior
github.com/databricks/cli/bundle/config.Git: github.com/databricks/cli/bundle/config.Git:
branch: "branch":
description: PLACEHOLDER "description": |-
origin_url: The Git branch name.
description: PLACEHOLDER "markdown_description": |-
The Git branch name. See [_](/dev-tools/bundles/settings.md#git).
"origin_url":
"description": |-
The origin URL of the repository.
"markdown_description": |-
The origin URL of the repository. See [_](/dev-tools/bundles/settings.md#git).
github.com/databricks/cli/bundle/config.Lock: github.com/databricks/cli/bundle/config.Lock:
enabled: "enabled":
description: PLACEHOLDER "description": |-
force: Whether this lock is enabled.
description: PLACEHOLDER "force":
"description": |-
Whether to force this lock if it is enabled.
github.com/databricks/cli/bundle/config.Presets: github.com/databricks/cli/bundle/config.Presets:
jobs_max_concurrent_runs: "jobs_max_concurrent_runs":
description: PLACEHOLDER "description": |-
name_prefix: The maximum concurrent runs for a job.
description: PLACEHOLDER "name_prefix":
pipelines_development: "description": |-
description: PLACEHOLDER The prefix for job runs of the bundle.
source_linked_deployment: "pipelines_development":
description: PLACEHOLDER "description": |-
tags: Whether pipeline deployments should be locked in development mode.
description: PLACEHOLDER "source_linked_deployment":
trigger_pause_status: "description": |-
description: PLACEHOLDER Whether to link the deployment to the bundle source.
"tags":
"description": |-
The tags for the bundle deployment.
"trigger_pause_status":
"description": |-
A pause status to apply to all job triggers and schedules. Valid values are PAUSED or UNPAUSED.
github.com/databricks/cli/bundle/config.PyDABs: github.com/databricks/cli/bundle/config.PyDABs:
enabled: "enabled":
description: PLACEHOLDER "description": |-
import: Whether or not PyDABs (Private Preview) is enabled
description: PLACEHOLDER "import":
venv_path: "description": |-
description: PLACEHOLDER The PyDABs project to import to discover resources, resource generator and mutators
"venv_path":
"description": |-
The Python virtual environment path
github.com/databricks/cli/bundle/config.Resources: github.com/databricks/cli/bundle/config.Resources:
clusters: "clusters":
description: PLACEHOLDER "description": |-
dashboards: The cluster definitions for the bundle.
description: PLACEHOLDER "markdown_description": |-
experiments: The cluster definitions for the bundle. See [_](/dev-tools/bundles/resources.md#cluster)
description: PLACEHOLDER "dashboards":
jobs: "description": |-
description: PLACEHOLDER The dashboard definitions for the bundle.
model_serving_endpoints: "markdown_description": |-
description: PLACEHOLDER The dashboard definitions for the bundle. See [_](/dev-tools/bundles/resources.md#dashboard)
models: "experiments":
description: PLACEHOLDER "description": |-
pipelines: The experiment definitions for the bundle.
description: PLACEHOLDER "markdown_description": |-
quality_monitors: The experiment definitions for the bundle. See [_](/dev-tools/bundles/resources.md#experiment)
description: PLACEHOLDER "jobs":
registered_models: "description": |-
description: PLACEHOLDER The job definitions for the bundle.
schemas: "markdown_description": |-
description: PLACEHOLDER The job definitions for the bundle. See [_](/dev-tools/bundles/resources.md#job)
volumes: "model_serving_endpoints":
description: PLACEHOLDER "description": |-
The model serving endpoint definitions for the bundle.
"markdown_description": |-
The model serving endpoint definitions for the bundle. See [_](/dev-tools/bundles/resources.md#model_serving_endpoint)
"models":
"description": |-
The model definitions for the bundle.
"markdown_description": |-
The model definitions for the bundle. See [_](/dev-tools/bundles/resources.md#model)
"pipelines":
"description": |-
The pipeline definitions for the bundle.
"markdown_description": |-
The pipeline definitions for the bundle. See [_](/dev-tools/bundles/resources.md#pipeline)
"quality_monitors":
"description": |-
The quality monitor definitions for the bundle.
"markdown_description": |-
The quality monitor definitions for the bundle. See [_](/dev-tools/bundles/resources.md#quality_monitor)
"registered_models":
"description": |-
The registered model definitions for the bundle.
"markdown_description": |-
The registered model definitions for the bundle. See [_](/dev-tools/bundles/resources.md#registered_model)
"schemas":
"description": |-
The schema definitions for the bundle.
"markdown_description": |-
The schema definitions for the bundle. See [_](/dev-tools/bundles/resources.md#schema)
"volumes":
"description": |-
PLACEHOLDER
github.com/databricks/cli/bundle/config.Root: github.com/databricks/cli/bundle/config.Root:
artifacts: "artifacts":
description: Defines the attributes to build an artifact "description": |-
bundle: Defines the attributes to build an artifact
description: PLACEHOLDER "bundle":
experimental: "description": |-
description: PLACEHOLDER The attributes of the bundle.
include: "markdown_description": |-
description: PLACEHOLDER The attributes of the bundle. See [_](/dev-tools/bundles/settings.md#bundle)
permissions: "experimental":
description: PLACEHOLDER "description": |-
presets: Defines attributes for experimental features.
description: PLACEHOLDER "include":
resources: "description": |-
description: PLACEHOLDER PLACEHOLDER
run_as: "permissions":
description: PLACEHOLDER "description": |-
sync: Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle
description: PLACEHOLDER "markdown_description": |-
targets: Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle. See [_](/dev-tools/bundles/settings.md#permissions) and [_](/dev-tools/bundles/permissions.md).
description: PLACEHOLDER "presets":
variables: "description": |-
description: PLACEHOLDER Defines bundle deployment presets.
workspace: "markdown_description": |-
description: PLACEHOLDER Defines bundle deployment presets. See [_](/dev-tools/bundles/deployment-modes.md#presets).
"resources":
"description": |-
PLACEHOLDER
"markdown_description": |-
See [_](/dev-tools/bundles/resources.md).
"run_as":
"description": |-
The identity to use to run the bundle.
"sync":
"description": |-
The files and file paths to include or exclude in the bundle.
"markdown_description": |-
The files and file paths to include or exclude in the bundle. See [_](/dev-tools/bundles/)
"targets":
"description": |-
Defines deployment targets for the bundle.
"variables":
"description": |-
A Map that defines the custom variables for the bundle, where each key is the name of the variable, and the value is a Map that defines the variable.
"workspace":
"description": |-
Defines the Databricks workspace for the bundle.
github.com/databricks/cli/bundle/config.Sync: github.com/databricks/cli/bundle/config.Sync:
exclude: "exclude":
description: PLACEHOLDER "description": |-
include: A list of files or folders to exclude from the bundle.
description: PLACEHOLDER "include":
paths: "description": |-
description: PLACEHOLDER A list of files or folders to include in the bundle.
"paths":
"description": |-
The local folder paths, which can be outside the bundle root, to synchronize to the workspace when the bundle is deployed.
github.com/databricks/cli/bundle/config.Target: github.com/databricks/cli/bundle/config.Target:
artifacts: "artifacts":
description: PLACEHOLDER "description": |-
bundle: The artifacts to include in the target deployment.
description: PLACEHOLDER "markdown_description": |-
cluster_id: The artifacts to include in the target deployment. See [_](#artifact)
description: PLACEHOLDER "bundle":
compute_id: "description": |-
description: PLACEHOLDER The name of the bundle when deploying to this target.
default: "cluster_id":
description: PLACEHOLDER "description": |-
git: The ID of the cluster to use for this target.
description: PLACEHOLDER "compute_id":
mode: "description": |-
description: PLACEHOLDER Deprecated. The ID of the compute to use for this target.
permissions: "default":
description: PLACEHOLDER "description": |-
presets: Whether this target is the default target.
description: PLACEHOLDER "git":
resources: "description": |-
description: PLACEHOLDER The Git version control settings for the target.
    "markdown_description": |-
      The Git version control settings for the target. See [_](#git).
  "mode":
    "description": |-
      The deployment mode for the target. Valid values are development or production.
    "markdown_description": |-
      The deployment mode for the target. Valid values are development or production. See [_](/dev-tools/bundles/deployment-modes.md).
  "permissions":
    "description": |-
      The permissions for deploying and running the bundle in the target.
    "markdown_description": |-
      The permissions for deploying and running the bundle in the target. See [_](#permission).
  "presets":
    "description": |-
      The deployment presets for the target.
    "markdown_description": |-
      The deployment presets for the target. See [_](#preset).
  "resources":
    "description": |-
      The resource definitions for the target.
    "markdown_description": |-
      The resource definitions for the target. See [_](#resources).
  "run_as":
    "description": |-
      The identity to use to run the bundle.
    "markdown_description": |-
      The identity to use to run the bundle. See [_](#job_run_as) and [_](/dev-tools/bundles/run_as.md).
  "sync":
    "description": |-
      The local paths to sync to the target workspace when a bundle is run or deployed.
    "markdown_description": |-
      The local paths to sync to the target workspace when a bundle is run or deployed. See [_](#sync).
  "variables":
    "description": |-
      The custom variable definitions for the target.
    "markdown_description": |-
      The custom variable definitions for the target. See [_](/dev-tools/bundles/settings.md#variables) and [_](/dev-tools/bundles/variables.md).
  "workspace":
    "description": |-
      The Databricks workspace for the target. _.
    "markdown_description": |-
      The Databricks workspace for the target. [_](#workspace)
github.com/databricks/cli/bundle/config.Workspace:
  "artifact_path":
    "description": |-
      The artifact path to use within the workspace for both deployments and workflow runs
  "auth_type":
    "description": |-
      The authentication type.
  "azure_client_id":
    "description": |-
      The Azure client ID
  "azure_environment":
    "description": |-
      The Azure environment
  "azure_login_app_id":
    "description": |-
      The Azure login app ID
  "azure_tenant_id":
    "description": |-
      The Azure tenant ID
  "azure_use_msi":
    "description": |-
      Whether to use MSI for Azure
  "azure_workspace_resource_id":
    "description": |-
      The Azure workspace resource ID
  "client_id":
    "description": |-
      The client ID for the workspace
  "file_path":
    "description": |-
      The file path to use within the workspace for both deployments and workflow runs
  "google_service_account":
    "description": |-
      The Google service account name
  "host":
    "description": |-
      The Databricks workspace host URL
  "profile":
    "description": |-
      The Databricks workspace profile name
  "resource_path":
    "description": |-
      The workspace resource path
  "root_path":
    "description": |-
      The Databricks workspace root path
  "state_path":
    "description": |-
      The workspace state path
github.com/databricks/cli/bundle/config/resources.Grant:
  "principal":
    "description": |-
      The name of the principal that will be granted privileges
  "privileges":
    "description": |-
      The privileges to grant to the specified entity
github.com/databricks/cli/bundle/config/resources.Permission:
  "group_name":
    "description": |-
      The name of the group that has the permission set in level.
  "level":
    "description": |-
      The allowed permission for user, group, service principal defined for this permission.
  "service_principal_name":
    "description": |-
      The name of the service principal that has the permission set in level.
  "user_name":
    "description": |-
      The name of the user that has the permission set in level.
github.com/databricks/cli/bundle/config/variable.Lookup:
  "alert":
    "description": |-
      PLACEHOLDER
  "cluster":
    "description": |-
      PLACEHOLDER
  "cluster_policy":
    "description": |-
      PLACEHOLDER
  "dashboard":
    "description": |-
      PLACEHOLDER
  "instance_pool":
    "description": |-
      PLACEHOLDER
  "job":
    "description": |-
      PLACEHOLDER
  "metastore":
    "description": |-
      PLACEHOLDER
  "notification_destination":
    "description": |-
      PLACEHOLDER
  "pipeline":
    "description": |-
      PLACEHOLDER
  "query":
    "description": |-
      PLACEHOLDER
  "service_principal":
    "description": |-
      PLACEHOLDER
  "warehouse":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/variable.TargetVariable:
  "default":
    "description": |-
      PLACEHOLDER
  "description":
    "description": |-
      The description of the variable
  "lookup":
    "description": |-
      The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
  "type":
    "description": |-
      The type of the variable.
    "markdown_description":
      "description": |-
        The type of the variable. Valid values are `complex`.
github.com/databricks/cli/bundle/config/variable.Variable:
  "default":
    "description": |-
      PLACEHOLDER
  "description":
    "description": |-
      The description of the variable
  "lookup":
    "description": |-
      The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
  "type":
    "description": |-
      The type of the variable. Valid values are complex.
github.com/databricks/databricks-sdk-go/service/serving.Ai21LabsConfig:
  "ai21labs_api_key":
    "description": |-
      PLACEHOLDER
  "ai21labs_api_key_plaintext":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.GoogleCloudVertexAiConfig:
  "private_key":
    "description": |-
      PLACEHOLDER
  "private_key_plaintext":
    "description": |-
      PLACEHOLDER
  "project_id":
    "description": |-
      PLACEHOLDER
  "region":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.OpenAiConfig:
  "microsoft_entra_client_id":
    "description": |-
      PLACEHOLDER
  "microsoft_entra_client_secret":
    "description": |-
      PLACEHOLDER
  "microsoft_entra_client_secret_plaintext":
    "description": |-
      PLACEHOLDER
  "microsoft_entra_tenant_id":
    "description": |-
      PLACEHOLDER
  "openai_api_base":
    "description": |-
      PLACEHOLDER
  "openai_api_key":
    "description": |-
      PLACEHOLDER
  "openai_api_key_plaintext":
    "description": |-
      PLACEHOLDER
  "openai_api_type":
    "description": |-
      PLACEHOLDER
  "openai_api_version":
    "description": |-
      PLACEHOLDER
  "openai_deployment_name":
    "description": |-
      PLACEHOLDER
  "openai_organization":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/serving.PaLmConfig:
  "palm_api_key":
    "description": |-
      PLACEHOLDER
  "palm_api_key_plaintext":
    "description": |-
      PLACEHOLDER
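
The entries above pair each configuration key with a plain description and, where cross-links help, a markdown_description; the latter is emitted as the markdownDescription keyword that VS Code's JSON language service renders as rich text. A minimal sketch of how such a pair could be copied onto a schema node — the annotation struct and applyAnnotation helper below are illustrative stand-ins, not the CLI's actual types:

package main

import "fmt"

// Hypothetical shape mirroring one entry of the annotation files above.
type annotation struct {
    Description         string `yaml:"description"`
    MarkdownDescription string `yaml:"markdown_description"`
}

// applyAnnotation copies an annotation pair onto a JSON-schema node,
// skipping descriptions still marked with the PLACEHOLDER sentinel the
// files above use for unfinished entries.
func applyAnnotation(node map[string]any, a annotation) {
    if a.Description != "" && a.Description != "PLACEHOLDER" {
        node["description"] = a.Description
    }
    // markdownDescription is a VS Code extension keyword; editors that
    // understand it render links such as [_](#git) as rich text.
    if a.MarkdownDescription != "" {
        node["markdownDescription"] = a.MarkdownDescription
    }
}

func main() {
    node := map[string]any{"type": "object"}
    applyAnnotation(node, annotation{
        Description:         "The identity to use to run the bundle.",
        MarkdownDescription: "The identity to use to run the bundle. See [_](#job_run_as).",
    })
    fmt.Println(node)
}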

View File

@@ -1,112 +1,155 @@
github.com/databricks/cli/bundle/config/resources.Cluster:
  "data_security_mode":
    "description": |-
      PLACEHOLDER
  "docker_image":
    "description": |-
      PLACEHOLDER
  "permissions":
    "description": |-
      PLACEHOLDER
  "runtime_engine":
    "description": |-
      PLACEHOLDER
  "workload_type":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Dashboard:
  "embed_credentials":
    "description": |-
      PLACEHOLDER
  "file_path":
    "description": |-
      PLACEHOLDER
  "permissions":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Job:
  "health":
    "description": |-
      PLACEHOLDER
  "permissions":
    "description": |-
      PLACEHOLDER
  "run_as":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.MlflowExperiment:
  "permissions":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.MlflowModel:
  "permissions":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint:
  "permissions":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Pipeline:
  "permissions":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.QualityMonitor:
  "table_name":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.RegisteredModel:
  "grants":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Schema:
  "grants":
    "description": |-
      PLACEHOLDER
  "properties":
    "description": |-
      PLACEHOLDER
github.com/databricks/cli/bundle/config/resources.Volume:
  "grants":
    "description": |-
      PLACEHOLDER
  "volume_type":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes:
  "availability":
    "description": |-
      PLACEHOLDER
  "ebs_volume_type":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.AzureAttributes:
  "availability":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
  "data_security_mode":
    "description": |-
      PLACEHOLDER
  "docker_image":
    "description": |-
      PLACEHOLDER
  "runtime_engine":
    "description": |-
      PLACEHOLDER
  "workload_type":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.DockerImage:
  "basic_auth":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.GcpAttributes:
  "availability":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.GitSource:
  "git_snapshot":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobEnvironment:
  "spec":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthRule:
  "metric":
    "description": |-
      PLACEHOLDER
  "op":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthRules:
  "rules":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.RunJobTask:
  "python_named_params":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.Task:
  "health":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.TriggerSettings:
  "table_update":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/jobs.Webhook:
  "id":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/pipelines.CronTrigger:
  "quartz_cron_schedule":
    "description": |-
      PLACEHOLDER
  "timezone_id":
    "description": |-
      PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/pipelines.PipelineTrigger:
  "cron":
    "description": |-
      PLACEHOLDER
  "manual":
    "description": |-
      PLACEHOLDER

View File

@ -139,7 +139,10 @@ func generateSchema(workdir, outputFile string) {
log.Fatal(err) log.Fatal(err)
} }
fmt.Printf("Writing OpenAPI annotations to %s\n", annotationsOpenApiPath) fmt.Printf("Writing OpenAPI annotations to %s\n", annotationsOpenApiPath)
p.extractAnnotations(reflect.TypeOf(config.Root{}), annotationsOpenApiPath, annotationsOpenApiOverridesPath) err = p.extractAnnotations(reflect.TypeOf(config.Root{}), annotationsOpenApiPath, annotationsOpenApiOverridesPath)
if err != nil {
log.Fatal(err)
}
} }
a, err := newAnnotationHandler([]string{annotationsOpenApiPath, annotationsOpenApiOverridesPath, annotationsPath}) a, err := newAnnotationHandler([]string{annotationsOpenApiPath, annotationsOpenApiOverridesPath, annotationsPath})

View File

@ -11,6 +11,7 @@ import (
"github.com/ghodss/yaml" "github.com/ghodss/yaml"
"github.com/databricks/cli/libs/dyn/yamlloader"
"github.com/databricks/cli/libs/jsonschema" "github.com/databricks/cli/libs/jsonschema"
) )
@ -147,11 +148,14 @@ func (p *openapiParser) extractAnnotations(typ reflect.Type, outputPath, overrid
if err != nil { if err != nil {
return err return err
} }
err = os.WriteFile(overridesPath, b, 0644) o, err := yamlloader.LoadYAML("", bytes.NewBuffer(b))
if err != nil {
return err
}
err = saveYamlWithStyle(overridesPath, o)
if err != nil { if err != nil {
return err return err
} }
b, err = yaml.Marshal(annotations) b, err = yaml.Marshal(annotations)
if err != nil { if err != nil {
return err return err
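
The hunk above replaces a direct os.WriteFile with a round-trip: the marshalled bytes are re-parsed through yamlloader.LoadYAML into a dyn.Value and written back via saveYamlWithStyle, so quoting and block-scalar styles can be enforced on the output. The same idea sketched with gopkg.in/yaml.v3 node styles — not the package the CLI uses, and the restyle rules here are assumptions for illustration:

package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// restyle walks a parsed YAML document and forces the styles the overrides
// file above uses: double-quoted mapping keys and literal (|-) block
// scalars for string values.
func restyle(n *yaml.Node) {
    switch n.Kind {
    case yaml.DocumentNode, yaml.SequenceNode:
        for _, c := range n.Content {
            restyle(c)
        }
    case yaml.MappingNode:
        // Mapping content alternates key, value, key, value, ...
        for i := 0; i < len(n.Content); i += 2 {
            key, val := n.Content[i], n.Content[i+1]
            key.Style = yaml.DoubleQuotedStyle
            if val.Kind == yaml.ScalarNode && val.Tag == "!!str" {
                val.Style = yaml.LiteralStyle
            }
            restyle(val)
        }
    }
}

func main() {
    src := []byte("description: PLACEHOLDER\n")
    var doc yaml.Node
    if err := yaml.Unmarshal(src, &doc); err != nil {
        panic(err)
    }
    restyle(&doc)
    out, err := yaml.Marshal(&doc)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
    // Prints:
    // "description": |-
    //     PLACEHOLDER
}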

View File

@ -1,24 +1,27 @@
module github.com/databricks/cli/bundle/internal/tf/codegen module github.com/databricks/cli/bundle/internal/tf/codegen
go 1.21 go 1.23
toolchain go1.23.2
require ( require (
github.com/hashicorp/go-version v1.6.0 github.com/hashicorp/go-version v1.7.0
github.com/hashicorp/hc-install v0.6.3 github.com/hashicorp/hc-install v0.9.0
github.com/hashicorp/terraform-exec v0.20.0 github.com/hashicorp/terraform-exec v0.21.0
github.com/hashicorp/terraform-json v0.21.0 github.com/hashicorp/terraform-json v0.23.0
github.com/iancoleman/strcase v0.3.0 github.com/iancoleman/strcase v0.3.0
github.com/zclconf/go-cty v1.14.2 github.com/zclconf/go-cty v1.15.1
golang.org/x/exp v0.0.0-20240213143201-ec583247a57a golang.org/x/exp v0.0.0-20241204233417-43b7b7cde48d
) )
require ( require (
github.com/ProtonMail/go-crypto v1.1.0-alpha.0 // indirect github.com/ProtonMail/go-crypto v1.1.3 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/cloudflare/circl v1.3.7 // indirect github.com/cloudflare/circl v1.5.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
golang.org/x/crypto v0.19.0 // indirect github.com/hashicorp/go-retryablehttp v0.7.7 // indirect
golang.org/x/mod v0.15.0 // indirect golang.org/x/crypto v0.30.0 // indirect
golang.org/x/sys v0.17.0 // indirect golang.org/x/mod v0.22.0 // indirect
golang.org/x/text v0.14.0 // indirect golang.org/x/sys v0.28.0 // indirect
golang.org/x/text v0.21.0 // indirect
) )

View File

@@ -2,67 +2,79 @@ dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
 dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
 github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
 github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
-github.com/ProtonMail/go-crypto v1.1.0-alpha.0 h1:nHGfwXmFvJrSR9xu8qL7BkO4DqTHXE9N5vPhgY2I+j0=
-github.com/ProtonMail/go-crypto v1.1.0-alpha.0/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
+github.com/ProtonMail/go-crypto v1.1.3 h1:nRBOetoydLeUb4nHajyO2bKqMLfWQ/ZPwkXqXxPxCFk=
+github.com/ProtonMail/go-crypto v1.1.3/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
 github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
 github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
-github.com/cloudflare/circl v1.3.7 h1:qlCDlTPz2n9fu58M0Nh1J/JzcFpfgkFHHX3O35r5vcU=
-github.com/cloudflare/circl v1.3.7/go.mod h1:sRTcRWXGLrKw6yIGJ+l7amYJFfAXbZG0kBSc8r4zxgA=
+github.com/cloudflare/circl v1.5.0 h1:hxIWksrX6XN5a1L2TI/h53AGPhNHoUBo+TD1ms9+pys=
+github.com/cloudflare/circl v1.5.0/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
 github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
 github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
 github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
+github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
+github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
 github.com/go-git/go-billy/v5 v5.5.0 h1:yEY4yhzCDuMGSv83oGxiBotRzhwhNr8VZyphhiu+mTU=
 github.com/go-git/go-billy/v5 v5.5.0/go.mod h1:hmexnoNsr2SJU1Ju67OaNz5ASJY3+sHgFRpCtpDCKow=
-github.com/go-git/go-git/v5 v5.11.0 h1:XIZc1p+8YzypNr34itUfSvYJcv+eYdTnTvOZ2vD3cA4=
-github.com/go-git/go-git/v5 v5.11.0/go.mod h1:6GFcX2P3NM7FPBfpePbpLd21XxsgdAt+lKqXmCUiUCY=
+github.com/go-git/go-git/v5 v5.12.0 h1:7Md+ndsjrzZxbddRDZjF14qK+NN56sy6wkqaVrjZtys=
+github.com/go-git/go-git/v5 v5.12.0/go.mod h1:FTM9VKtnI2m65hNI/TenDDDnUf2Q9FHnXYjuz9i5OEY=
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
 github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
 github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
-github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mOkIeek=
-github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
-github.com/hashicorp/hc-install v0.6.3 h1:yE/r1yJvWbtrJ0STwScgEnCanb0U9v7zp0Gbkmcoxqs=
-github.com/hashicorp/hc-install v0.6.3/go.mod h1:KamGdbodYzlufbWh4r9NRo8y6GLHWZP2GBtdnms1Ln0=
-github.com/hashicorp/terraform-exec v0.20.0 h1:DIZnPsqzPGuUnq6cH8jWcPunBfY+C+M8JyYF3vpnuEo=
-github.com/hashicorp/terraform-exec v0.20.0/go.mod h1:ckKGkJWbsNqFKV1itgMnE0hY9IYf1HoiekpuN0eWoDw=
-github.com/hashicorp/terraform-json v0.21.0 h1:9NQxbLNqPbEMze+S6+YluEdXgJmhQykRyRNd+zTI05U=
-github.com/hashicorp/terraform-json v0.21.0/go.mod h1:qdeBs11ovMzo5puhrRibdD6d2Dq6TyE/28JiU4tIQxk=
+github.com/hashicorp/go-hclog v1.6.3 h1:Qr2kF+eVWjTiYmU7Y31tYlP1h0q/X3Nl3tPGdaB11/k=
+github.com/hashicorp/go-hclog v1.6.3/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
+github.com/hashicorp/go-retryablehttp v0.7.7 h1:C8hUCYzor8PIfXHa4UrZkU4VvK8o9ISHxT2Q8+VepXU=
+github.com/hashicorp/go-retryablehttp v0.7.7/go.mod h1:pkQpWZeYWskR+D1tR2O5OcBFOxfA7DoAO6xtkuQnHTk=
+github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
+github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/hc-install v0.9.0 h1:2dIk8LcvANwtv3QZLckxcjyF5w8KVtiMxu6G6eLhghE=
+github.com/hashicorp/hc-install v0.9.0/go.mod h1:+6vOP+mf3tuGgMApVYtmsnDoKWMDcFXeTxCACYZ8SFg=
+github.com/hashicorp/terraform-exec v0.21.0 h1:uNkLAe95ey5Uux6KJdua6+cv8asgILFVWkd/RG0D2XQ=
+github.com/hashicorp/terraform-exec v0.21.0/go.mod h1:1PPeMYou+KDUSSeRE9szMZ/oHf4fYUmB923Wzbq1ICg=
+github.com/hashicorp/terraform-json v0.23.0 h1:sniCkExU4iKtTADReHzACkk8fnpQXrdD2xoR+lppBkI=
+github.com/hashicorp/terraform-json v0.23.0/go.mod h1:MHdXbBAbSg0GvzuWazEGKAn/cyNfIB7mN6y7KJN6y2c=
 github.com/iancoleman/strcase v0.3.0 h1:nTXanmYxhfFAMjZL34Ov6gkzEsSJZ5DbhxWjvSASxEI=
 github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
 github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
 github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
 github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
 github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/pjbgf/sha1cd v0.3.0 h1:4D5XXmUUBUl/xQ6IjCkEAbqXskkq/4O7LmGn0AqMDs4=
 github.com/pjbgf/sha1cd v0.3.0/go.mod h1:nZ1rrWOcGJ5uZgEEVL1VUM9iRQiZvWdbZjkKyFzPPsI=
-github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
-github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
+github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
+github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
-github.com/skeema/knownhosts v1.2.1 h1:SHWdIUa82uGZz+F+47k8SY4QhhI291cXCpopT1lK2AQ=
-github.com/skeema/knownhosts v1.2.1/go.mod h1:xYbVRSPxqBZFrdmDyMmsOs+uX1UZC3nTN3ThzgDxUwo=
+github.com/skeema/knownhosts v1.2.2 h1:Iug2P4fLmDw9f41PB6thxUkNUkJzB5i+1/exaj40L3A=
+github.com/skeema/knownhosts v1.2.2/go.mod h1:xYbVRSPxqBZFrdmDyMmsOs+uX1UZC3nTN3ThzgDxUwo=
 github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
 github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI1Bc68Uw=
-github.com/zclconf/go-cty v1.14.2 h1:kTG7lqmBou0Zkx35r6HJHUQTvaRPr5bIAf3AoHS0izI=
-github.com/zclconf/go-cty v1.14.2/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
+github.com/zclconf/go-cty v1.15.1 h1:RgQYm4j2EvoBRXOPxhUvxPzRrGDo1eCOhHXuGfrj5S0=
+github.com/zclconf/go-cty v1.15.1/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
-golang.org/x/crypto v0.19.0 h1:ENy+Az/9Y1vSrlrvBSyna3PITt4tiZLf7sgCjZBX7Wo=
-golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
-golang.org/x/exp v0.0.0-20240213143201-ec583247a57a h1:HinSgX1tJRX3KsL//Gxynpw5CTOAIPhgL4W8PNiIpVE=
-golang.org/x/exp v0.0.0-20240213143201-ec583247a57a/go.mod h1:CxmFvTBINI24O/j8iY7H1xHzx2i4OsyguNBmN/uPtqc=
-golang.org/x/mod v0.15.0 h1:SernR4v+D55NyBH2QiEQrlBAnj1ECL6AGrA5+dPaMY8=
-golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
-golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
-golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
-golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
-golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
-golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
-golang.org/x/tools v0.18.0 h1:k8NLag8AGHnn+PHbl7g43CtqZAwG60vZkLqgyZgIHgQ=
-golang.org/x/tools v0.18.0/go.mod h1:GL7B4CwcLLeo59yx/9UWWuNOW1n3VZ4f5axWfML7Lcg=
+golang.org/x/crypto v0.30.0 h1:RwoQn3GkWiMkzlX562cLB7OxWvjH1L8xutO2WoJcRoY=
+golang.org/x/crypto v0.30.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
+golang.org/x/exp v0.0.0-20241204233417-43b7b7cde48d h1:0olWaB5pg3+oychR51GUVCEsGkeCU/2JxjBgIo4f3M0=
+golang.org/x/exp v0.0.0-20241204233417-43b7b7cde48d/go.mod h1:qj5a5QZpwLU2NLQudwIN5koi3beDhSAlJwa67PuM98c=
+golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
+golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
+golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
+golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
+golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
+golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
+golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
+golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
+golang.org/x/tools v0.28.0 h1:WuB6qZ4RPCQo5aP3WdKZS7i595EdWqWR8vqJTlwTVK8=
+golang.org/x/tools v0.28.0/go.mod h1:dcIOrVd3mfQKTgrDVQHqCPMWy6lnhfhtX3hLXYVLfRw=
 gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
 gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=

View File

@ -15,10 +15,10 @@ import (
) )
func (s *Schema) writeTerraformBlock(_ context.Context) error { func (s *Schema) writeTerraformBlock(_ context.Context) error {
var body = map[string]interface{}{ var body = map[string]any{
"terraform": map[string]interface{}{ "terraform": map[string]any{
"required_providers": map[string]interface{}{ "required_providers": map[string]any{
"databricks": map[string]interface{}{ "databricks": map[string]any{
"source": "databricks/databricks", "source": "databricks/databricks",
"version": ProviderVersion, "version": ProviderVersion,
}, },

View File

@ -1,3 +1,3 @@
package schema package schema
const ProviderVersion = "1.58.0" const ProviderVersion = "1.59.0"

View File

@ -3,6 +3,7 @@
package schema package schema
type DataSourceAwsAssumeRolePolicy struct { type DataSourceAwsAssumeRolePolicy struct {
AwsPartition string `json:"aws_partition,omitempty"`
DatabricksAccountId string `json:"databricks_account_id,omitempty"` DatabricksAccountId string `json:"databricks_account_id,omitempty"`
ExternalId string `json:"external_id"` ExternalId string `json:"external_id"`
ForLogDelivery bool `json:"for_log_delivery,omitempty"` ForLogDelivery bool `json:"for_log_delivery,omitempty"`

View File

@ -3,6 +3,7 @@
package schema package schema
type DataSourceAwsBucketPolicy struct { type DataSourceAwsBucketPolicy struct {
AwsPartition string `json:"aws_partition,omitempty"`
Bucket string `json:"bucket"` Bucket string `json:"bucket"`
DatabricksAccountId string `json:"databricks_account_id,omitempty"` DatabricksAccountId string `json:"databricks_account_id,omitempty"`
DatabricksE2AccountId string `json:"databricks_e2_account_id,omitempty"` DatabricksE2AccountId string `json:"databricks_e2_account_id,omitempty"`

View File

@ -4,6 +4,7 @@ package schema
type DataSourceAwsCrossaccountPolicy struct { type DataSourceAwsCrossaccountPolicy struct {
AwsAccountId string `json:"aws_account_id,omitempty"` AwsAccountId string `json:"aws_account_id,omitempty"`
AwsPartition string `json:"aws_partition,omitempty"`
Id string `json:"id,omitempty"` Id string `json:"id,omitempty"`
Json string `json:"json,omitempty"` Json string `json:"json,omitempty"`
PassRoles []string `json:"pass_roles,omitempty"` PassRoles []string `json:"pass_roles,omitempty"`

View File

@ -4,6 +4,7 @@ package schema
type DataSourceAwsUnityCatalogAssumeRolePolicy struct { type DataSourceAwsUnityCatalogAssumeRolePolicy struct {
AwsAccountId string `json:"aws_account_id"` AwsAccountId string `json:"aws_account_id"`
AwsPartition string `json:"aws_partition,omitempty"`
ExternalId string `json:"external_id"` ExternalId string `json:"external_id"`
Id string `json:"id,omitempty"` Id string `json:"id,omitempty"`
Json string `json:"json,omitempty"` Json string `json:"json,omitempty"`

View File

@ -4,6 +4,7 @@ package schema
type DataSourceAwsUnityCatalogPolicy struct { type DataSourceAwsUnityCatalogPolicy struct {
AwsAccountId string `json:"aws_account_id"` AwsAccountId string `json:"aws_account_id"`
AwsPartition string `json:"aws_partition,omitempty"`
BucketName string `json:"bucket_name"` BucketName string `json:"bucket_name"`
Id string `json:"id,omitempty"` Id string `json:"id,omitempty"`
Json string `json:"json,omitempty"` Json string `json:"json,omitempty"`
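
Each hunk above adds the same optional aws_partition field to a generated data-source struct. Because the field is tagged omitempty, it appears in serialized output only when set, so existing configurations are unaffected; a quick check against a trimmed local copy of one of these structs (the values are made up for illustration):

package main

import (
    "encoding/json"
    "fmt"
)

// Trimmed copy of the generated DataSourceAwsCrossaccountPolicy above,
// kept local so the example is self-contained.
type awsCrossaccountPolicy struct {
    AwsAccountId string `json:"aws_account_id,omitempty"`
    AwsPartition string `json:"aws_partition,omitempty"`
    Id           string `json:"id,omitempty"`
}

func main() {
    // Without AwsPartition set, the new key is omitted entirely.
    b, _ := json.Marshal(awsCrossaccountPolicy{AwsAccountId: "123456789012"})
    fmt.Println(string(b)) // {"aws_account_id":"123456789012"}

    // With it set, the key serializes alongside the existing fields.
    b, _ = json.Marshal(awsCrossaccountPolicy{
        AwsAccountId: "123456789012",
        AwsPartition: "aws-us-gov",
    })
    fmt.Println(string(b)) // {"aws_account_id":"123456789012","aws_partition":"aws-us-gov"}
}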

View File

@@ -0,0 +1,51 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema

type DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule struct {
	CidrBlocks []string `json:"cidr_blocks,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule struct {
	Subnets []string `json:"subnets,omitempty"`
	TargetRegion string `json:"target_region,omitempty"`
	TargetServices []string `json:"target_services,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRules struct {
	AwsStableIpRule *DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRulesAwsStableIpRule `json:"aws_stable_ip_rule,omitempty"`
	AzureServiceEndpointRule *DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRulesAzureServiceEndpointRule `json:"azure_service_endpoint_rule,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRules struct {
	ConnectionState string `json:"connection_state,omitempty"`
	CreationTime int `json:"creation_time,omitempty"`
	Deactivated bool `json:"deactivated,omitempty"`
	DeactivatedAt int `json:"deactivated_at,omitempty"`
	EndpointName string `json:"endpoint_name,omitempty"`
	GroupId string `json:"group_id,omitempty"`
	NetworkConnectivityConfigId string `json:"network_connectivity_config_id,omitempty"`
	ResourceId string `json:"resource_id,omitempty"`
	RuleId string `json:"rule_id,omitempty"`
	UpdatedTime int `json:"updated_time,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfigEgressConfigTargetRules struct {
	AzurePrivateEndpointRules []DataSourceMwsNetworkConnectivityConfigEgressConfigTargetRulesAzurePrivateEndpointRules `json:"azure_private_endpoint_rules,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfigEgressConfig struct {
	DefaultRules *DataSourceMwsNetworkConnectivityConfigEgressConfigDefaultRules `json:"default_rules,omitempty"`
	TargetRules *DataSourceMwsNetworkConnectivityConfigEgressConfigTargetRules `json:"target_rules,omitempty"`
}

type DataSourceMwsNetworkConnectivityConfig struct {
	AccountId string `json:"account_id,omitempty"`
	CreationTime int `json:"creation_time,omitempty"`
	Id string `json:"id,omitempty"`
	Name string `json:"name"`
	NetworkConnectivityConfigId string `json:"network_connectivity_config_id,omitempty"`
	Region string `json:"region,omitempty"`
	UpdatedTime int `json:"updated_time,omitempty"`
	EgressConfig *DataSourceMwsNetworkConnectivityConfigEgressConfig `json:"egress_config,omitempty"`
}

View File

@@ -0,0 +1,9 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema

type DataSourceMwsNetworkConnectivityConfigs struct {
	Id string `json:"id,omitempty"`
	Names []string `json:"names,omitempty"`
	Region string `json:"region,omitempty"`
}

View File

@@ -0,0 +1,52 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema

type DataSourceRegisteredModelVersionsModelVersionsAliases struct {
	AliasName string `json:"alias_name,omitempty"`
	VersionNum int `json:"version_num,omitempty"`
}

type DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependenciesFunction struct {
	FunctionFullName string `json:"function_full_name"`
}

type DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependenciesTable struct {
	TableFullName string `json:"table_full_name"`
}

type DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependencies struct {
	Function []DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependenciesFunction `json:"function,omitempty"`
	Table []DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependenciesTable `json:"table,omitempty"`
}

type DataSourceRegisteredModelVersionsModelVersionsModelVersionDependencies struct {
	Dependencies []DataSourceRegisteredModelVersionsModelVersionsModelVersionDependenciesDependencies `json:"dependencies,omitempty"`
}

type DataSourceRegisteredModelVersionsModelVersions struct {
	BrowseOnly bool `json:"browse_only,omitempty"`
	CatalogName string `json:"catalog_name,omitempty"`
	Comment string `json:"comment,omitempty"`
	CreatedAt int `json:"created_at,omitempty"`
	CreatedBy string `json:"created_by,omitempty"`
	Id string `json:"id,omitempty"`
	MetastoreId string `json:"metastore_id,omitempty"`
	ModelName string `json:"model_name,omitempty"`
	RunId string `json:"run_id,omitempty"`
	RunWorkspaceId int `json:"run_workspace_id,omitempty"`
	SchemaName string `json:"schema_name,omitempty"`
	Source string `json:"source,omitempty"`
	Status string `json:"status,omitempty"`
	StorageLocation string `json:"storage_location,omitempty"`
	UpdatedAt int `json:"updated_at,omitempty"`
	UpdatedBy string `json:"updated_by,omitempty"`
	Version int `json:"version,omitempty"`
	Aliases []DataSourceRegisteredModelVersionsModelVersionsAliases `json:"aliases,omitempty"`
	ModelVersionDependencies []DataSourceRegisteredModelVersionsModelVersionsModelVersionDependencies `json:"model_version_dependencies,omitempty"`
}

type DataSourceRegisteredModelVersions struct {
	FullName string `json:"full_name"`
	ModelVersions []DataSourceRegisteredModelVersionsModelVersions `json:"model_versions,omitempty"`
}

View File

@@ -0,0 +1,178 @@
// Generated from Databricks Terraform provider schema. DO NOT EDIT.
package schema

type DataSourceServingEndpointsEndpointsAiGatewayGuardrailsInputPii struct {
	Behavior string `json:"behavior"`
}

type DataSourceServingEndpointsEndpointsAiGatewayGuardrailsInput struct {
	InvalidKeywords []string `json:"invalid_keywords,omitempty"`
	Safety bool `json:"safety,omitempty"`
	ValidTopics []string `json:"valid_topics,omitempty"`
	Pii []DataSourceServingEndpointsEndpointsAiGatewayGuardrailsInputPii `json:"pii,omitempty"`
}

type DataSourceServingEndpointsEndpointsAiGatewayGuardrailsOutputPii struct {
	Behavior string `json:"behavior"`
}

type DataSourceServingEndpointsEndpointsAiGatewayGuardrailsOutput struct {
	InvalidKeywords []string `json:"invalid_keywords,omitempty"`
	Safety bool `json:"safety,omitempty"`
	ValidTopics []string `json:"valid_topics,omitempty"`
	Pii []DataSourceServingEndpointsEndpointsAiGatewayGuardrailsOutputPii `json:"pii,omitempty"`
}

type DataSourceServingEndpointsEndpointsAiGatewayGuardrails struct {
	Input []DataSourceServingEndpointsEndpointsAiGatewayGuardrailsInput `json:"input,omitempty"`
	Output []DataSourceServingEndpointsEndpointsAiGatewayGuardrailsOutput `json:"output,omitempty"`
}

type DataSourceServingEndpointsEndpointsAiGatewayInferenceTableConfig struct {
	CatalogName string `json:"catalog_name,omitempty"`
	Enabled bool `json:"enabled,omitempty"`
	SchemaName string `json:"schema_name,omitempty"`
	TableNamePrefix string `json:"table_name_prefix,omitempty"`
}

type DataSourceServingEndpointsEndpointsAiGatewayRateLimits struct {
	Calls int `json:"calls"`
	Key string `json:"key,omitempty"`
	RenewalPeriod string `json:"renewal_period"`
}

type DataSourceServingEndpointsEndpointsAiGatewayUsageTrackingConfig struct {
	Enabled bool `json:"enabled,omitempty"`
}

type DataSourceServingEndpointsEndpointsAiGateway struct {
	Guardrails []DataSourceServingEndpointsEndpointsAiGatewayGuardrails `json:"guardrails,omitempty"`
	InferenceTableConfig []DataSourceServingEndpointsEndpointsAiGatewayInferenceTableConfig `json:"inference_table_config,omitempty"`
	RateLimits []DataSourceServingEndpointsEndpointsAiGatewayRateLimits `json:"rate_limits,omitempty"`
	UsageTrackingConfig []DataSourceServingEndpointsEndpointsAiGatewayUsageTrackingConfig `json:"usage_tracking_config,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAi21LabsConfig struct {
	Ai21LabsApiKey string `json:"ai21labs_api_key,omitempty"`
	Ai21LabsApiKeyPlaintext string `json:"ai21labs_api_key_plaintext,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAmazonBedrockConfig struct {
	AwsAccessKeyId string `json:"aws_access_key_id,omitempty"`
	AwsAccessKeyIdPlaintext string `json:"aws_access_key_id_plaintext,omitempty"`
	AwsRegion string `json:"aws_region"`
	AwsSecretAccessKey string `json:"aws_secret_access_key,omitempty"`
	AwsSecretAccessKeyPlaintext string `json:"aws_secret_access_key_plaintext,omitempty"`
	BedrockProvider string `json:"bedrock_provider"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAnthropicConfig struct {
	AnthropicApiKey string `json:"anthropic_api_key,omitempty"`
	AnthropicApiKeyPlaintext string `json:"anthropic_api_key_plaintext,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelCohereConfig struct {
	CohereApiBase string `json:"cohere_api_base,omitempty"`
	CohereApiKey string `json:"cohere_api_key,omitempty"`
	CohereApiKeyPlaintext string `json:"cohere_api_key_plaintext,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelDatabricksModelServingConfig struct {
	DatabricksApiToken string `json:"databricks_api_token,omitempty"`
	DatabricksApiTokenPlaintext string `json:"databricks_api_token_plaintext,omitempty"`
	DatabricksWorkspaceUrl string `json:"databricks_workspace_url"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelGoogleCloudVertexAiConfig struct {
	PrivateKey string `json:"private_key,omitempty"`
	PrivateKeyPlaintext string `json:"private_key_plaintext,omitempty"`
	ProjectId string `json:"project_id,omitempty"`
	Region string `json:"region,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelOpenaiConfig struct {
	MicrosoftEntraClientId string `json:"microsoft_entra_client_id,omitempty"`
	MicrosoftEntraClientSecret string `json:"microsoft_entra_client_secret,omitempty"`
	MicrosoftEntraClientSecretPlaintext string `json:"microsoft_entra_client_secret_plaintext,omitempty"`
	MicrosoftEntraTenantId string `json:"microsoft_entra_tenant_id,omitempty"`
	OpenaiApiBase string `json:"openai_api_base,omitempty"`
	OpenaiApiKey string `json:"openai_api_key,omitempty"`
	OpenaiApiKeyPlaintext string `json:"openai_api_key_plaintext,omitempty"`
	OpenaiApiType string `json:"openai_api_type,omitempty"`
	OpenaiApiVersion string `json:"openai_api_version,omitempty"`
	OpenaiDeploymentName string `json:"openai_deployment_name,omitempty"`
	OpenaiOrganization string `json:"openai_organization,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelPalmConfig struct {
	PalmApiKey string `json:"palm_api_key,omitempty"`
	PalmApiKeyPlaintext string `json:"palm_api_key_plaintext,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModel struct {
	Name string `json:"name"`
	Provider string `json:"provider"`
	Task string `json:"task"`
	Ai21LabsConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAi21LabsConfig `json:"ai21labs_config,omitempty"`
	AmazonBedrockConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAmazonBedrockConfig `json:"amazon_bedrock_config,omitempty"`
	AnthropicConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelAnthropicConfig `json:"anthropic_config,omitempty"`
	CohereConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelCohereConfig `json:"cohere_config,omitempty"`
	DatabricksModelServingConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelDatabricksModelServingConfig `json:"databricks_model_serving_config,omitempty"`
	GoogleCloudVertexAiConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelGoogleCloudVertexAiConfig `json:"google_cloud_vertex_ai_config,omitempty"`
	OpenaiConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelOpenaiConfig `json:"openai_config,omitempty"`
	PalmConfig []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModelPalmConfig `json:"palm_config,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntitiesFoundationModel struct {
	Description string `json:"description,omitempty"`
	DisplayName string `json:"display_name,omitempty"`
	Docs string `json:"docs,omitempty"`
	Name string `json:"name,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedEntities struct {
	EntityName string `json:"entity_name,omitempty"`
	EntityVersion string `json:"entity_version,omitempty"`
	Name string `json:"name,omitempty"`
	ExternalModel []DataSourceServingEndpointsEndpointsConfigServedEntitiesExternalModel `json:"external_model,omitempty"`
	FoundationModel []DataSourceServingEndpointsEndpointsConfigServedEntitiesFoundationModel `json:"foundation_model,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfigServedModels struct {
	ModelName string `json:"model_name,omitempty"`
	ModelVersion string `json:"model_version,omitempty"`
	Name string `json:"name,omitempty"`
}

type DataSourceServingEndpointsEndpointsConfig struct {
	ServedEntities []DataSourceServingEndpointsEndpointsConfigServedEntities `json:"served_entities,omitempty"`
	ServedModels []DataSourceServingEndpointsEndpointsConfigServedModels `json:"served_models,omitempty"`
}

type DataSourceServingEndpointsEndpointsState struct {
	ConfigUpdate string `json:"config_update,omitempty"`
	Ready string `json:"ready,omitempty"`
}

type DataSourceServingEndpointsEndpointsTags struct {
	Key string `json:"key"`
	Value string `json:"value,omitempty"`
}

type DataSourceServingEndpointsEndpoints struct {
	CreationTimestamp int `json:"creation_timestamp,omitempty"`
	Creator string `json:"creator,omitempty"`
	Id string `json:"id,omitempty"`
	LastUpdatedTimestamp int `json:"last_updated_timestamp,omitempty"`
	Name string `json:"name,omitempty"`
	Task string `json:"task,omitempty"`
	AiGateway []DataSourceServingEndpointsEndpointsAiGateway `json:"ai_gateway,omitempty"`
	Config []DataSourceServingEndpointsEndpointsConfig `json:"config,omitempty"`
	State []DataSourceServingEndpointsEndpointsState `json:"state,omitempty"`
	Tags []DataSourceServingEndpointsEndpointsTags `json:"tags,omitempty"`
}

type DataSourceServingEndpoints struct {
	Endpoints []DataSourceServingEndpointsEndpoints `json:"endpoints,omitempty"`
}

View File

@@ -33,6 +33,8 @@ type DataSources struct {
 	MlflowModel map[string]any `json:"databricks_mlflow_model,omitempty"`
 	MlflowModels map[string]any `json:"databricks_mlflow_models,omitempty"`
 	MwsCredentials map[string]any `json:"databricks_mws_credentials,omitempty"`
+	MwsNetworkConnectivityConfig map[string]any `json:"databricks_mws_network_connectivity_config,omitempty"`
+	MwsNetworkConnectivityConfigs map[string]any `json:"databricks_mws_network_connectivity_configs,omitempty"`
 	MwsWorkspaces map[string]any `json:"databricks_mws_workspaces,omitempty"`
 	NodeType map[string]any `json:"databricks_node_type,omitempty"`
 	Notebook map[string]any `json:"databricks_notebook,omitempty"`
@@ -40,10 +42,12 @@ type DataSources struct {
 	NotificationDestinations map[string]any `json:"databricks_notification_destinations,omitempty"`
 	Pipelines map[string]any `json:"databricks_pipelines,omitempty"`
 	RegisteredModel map[string]any `json:"databricks_registered_model,omitempty"`
+	RegisteredModelVersions map[string]any `json:"databricks_registered_model_versions,omitempty"`
 	Schema map[string]any `json:"databricks_schema,omitempty"`
 	Schemas map[string]any `json:"databricks_schemas,omitempty"`
 	ServicePrincipal map[string]any `json:"databricks_service_principal,omitempty"`
 	ServicePrincipals map[string]any `json:"databricks_service_principals,omitempty"`
+	ServingEndpoints map[string]any `json:"databricks_serving_endpoints,omitempty"`
 	Share map[string]any `json:"databricks_share,omitempty"`
 	Shares map[string]any `json:"databricks_shares,omitempty"`
 	SparkVersion map[string]any `json:"databricks_spark_version,omitempty"`
@@ -92,6 +96,8 @@ func NewDataSources() *DataSources {
 		MlflowModel: make(map[string]any),
 		MlflowModels: make(map[string]any),
 		MwsCredentials: make(map[string]any),
+		MwsNetworkConnectivityConfig: make(map[string]any),
+		MwsNetworkConnectivityConfigs: make(map[string]any),
 		MwsWorkspaces: make(map[string]any),
 		NodeType: make(map[string]any),
 		Notebook: make(map[string]any),
@@ -99,10 +105,12 @@ func NewDataSources() *DataSources {
 		NotificationDestinations: make(map[string]any),
 		Pipelines: make(map[string]any),
 		RegisteredModel: make(map[string]any),
+		RegisteredModelVersions: make(map[string]any),
 		Schema: make(map[string]any),
 		Schemas: make(map[string]any),
 		ServicePrincipal: make(map[string]any),
 		ServicePrincipals: make(map[string]any),
+		ServingEndpoints: make(map[string]any),
 		Share: make(map[string]any),
 		Shares: make(map[string]any),
 		SparkVersion: make(map[string]any),

View File

@@ -32,6 +32,7 @@ type ResourcePermissions struct {
 	SqlDashboardId string `json:"sql_dashboard_id,omitempty"`
 	SqlEndpointId string `json:"sql_endpoint_id,omitempty"`
 	SqlQueryId string `json:"sql_query_id,omitempty"`
+	VectorSearchEndpointId string `json:"vector_search_endpoint_id,omitempty"`
 	WorkspaceFileId string `json:"workspace_file_id,omitempty"`
 	WorkspaceFilePath string `json:"workspace_file_path,omitempty"`
 	AccessControl []ResourcePermissionsAccessControl `json:"access_control,omitempty"`

View File

@@ -21,13 +21,13 @@ type Root struct {
 const ProviderHost = "registry.terraform.io"
 const ProviderSource = "databricks/databricks"
-const ProviderVersion = "1.58.0"
+const ProviderVersion = "1.59.0"
 
 func NewRoot() *Root {
 	return &Root{
-		Terraform: map[string]interface{}{
-			"required_providers": map[string]interface{}{
-				"databricks": map[string]interface{}{
+		Terraform: map[string]any{
+			"required_providers": map[string]any{
+				"databricks": map[string]any{
 					"source":  ProviderSource,
 					"version": ProviderVersion,
 				},

View File

@@ -7,13 +7,18 @@ import (
 	"strings"
 
 	"github.com/databricks/cli/bundle"
+	"github.com/databricks/cli/bundle/config/resources"
 	"github.com/databricks/cli/libs/diag"
+	"github.com/databricks/cli/libs/dyn"
+	"github.com/databricks/cli/libs/dyn/convert"
 )
 
 const CAN_MANAGE = "CAN_MANAGE"
 const CAN_VIEW = "CAN_VIEW"
 const CAN_RUN = "CAN_RUN"
 
+var unsupportedResources = []string{"clusters", "volumes", "schemas", "quality_monitors", "registered_models"}
+
 var allowedLevels = []string{CAN_MANAGE, CAN_VIEW, CAN_RUN}
 var levelsMap = map[string](map[string]string){
 	"jobs": {
@@ -26,11 +31,11 @@ var levelsMap = map[string](map[string]string){
 		CAN_VIEW: "CAN_VIEW",
 		CAN_RUN: "CAN_RUN",
 	},
-	"mlflow_experiments": {
+	"experiments": {
 		CAN_MANAGE: "CAN_MANAGE",
 		CAN_VIEW: "CAN_READ",
 	},
-	"mlflow_models": {
+	"models": {
 		CAN_MANAGE: "CAN_MANAGE",
 		CAN_VIEW: "CAN_READ",
 	},
@@ -57,11 +62,58 @@ func (m *bundlePermissions) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		return diag.FromErr(err)
 	}
 
-	applyForJobs(ctx, b)
-	applyForPipelines(ctx, b)
-	applyForMlModels(ctx, b)
-	applyForMlExperiments(ctx, b)
-	applyForModelServiceEndpoints(ctx, b)
+	patterns := make(map[string]dyn.Pattern, 0)
+	for key := range levelsMap {
+		patterns[key] = dyn.NewPattern(
+			dyn.Key("resources"),
+			dyn.Key(key),
+			dyn.AnyKey(),
+		)
+	}
+
+	err = b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
+		for key, pattern := range patterns {
+			v, err = dyn.MapByPattern(v, pattern, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
+				var permissions []resources.Permission
+				pv, err := dyn.Get(v, "permissions")
+
+				// If the permissions field is not found, we set to an empty array
+				if err != nil {
+					pv = dyn.V([]dyn.Value{})
+				}
+
+				err = convert.ToTyped(&permissions, pv)
+				if err != nil {
+					return dyn.InvalidValue, fmt.Errorf("failed to convert permissions: %w", err)
+				}
+
+				permissions = append(permissions, convertPermissions(
+					ctx,
+					b.Config.Permissions,
+					permissions,
+					key,
+					levelsMap[key],
+				)...)
+
+				pv, err = convert.FromTyped(permissions, dyn.NilValue)
+				if err != nil {
+					return dyn.InvalidValue, fmt.Errorf("failed to convert permissions: %w", err)
+				}
+
+				return dyn.Set(v, "permissions", pv)
+			})
+			if err != nil {
+				return dyn.InvalidValue, err
+			}
+		}
+
+		return v, nil
+	})
+
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
 	return nil
 }
@@ -76,66 +128,6 @@ func validate(b *bundle.Bundle) error {
 	return nil
 }
 
-func applyForJobs(ctx context.Context, b *bundle.Bundle) {
-	for key, job := range b.Config.Resources.Jobs {
-		job.Permissions = append(job.Permissions, convert(
-			ctx,
-			b.Config.Permissions,
-			job.Permissions,
-			key,
-			levelsMap["jobs"],
-		)...)
-	}
-}
-
-func applyForPipelines(ctx context.Context, b *bundle.Bundle) {
-	for key, pipeline := range b.Config.Resources.Pipelines {
-		pipeline.Permissions = append(pipeline.Permissions, convert(
-			ctx,
-			b.Config.Permissions,
-			pipeline.Permissions,
-			key,
-			levelsMap["pipelines"],
-		)...)
-	}
-}
-
-func applyForMlExperiments(ctx context.Context, b *bundle.Bundle) {
-	for key, experiment := range b.Config.Resources.Experiments {
-		experiment.Permissions = append(experiment.Permissions, convert(
-			ctx,
-			b.Config.Permissions,
-			experiment.Permissions,
-			key,
-			levelsMap["mlflow_experiments"],
-		)...)
-	}
-}
-
-func applyForMlModels(ctx context.Context, b *bundle.Bundle) {
-	for key, model := range b.Config.Resources.Models {
-		model.Permissions = append(model.Permissions, convert(
-			ctx,
-			b.Config.Permissions,
-			model.Permissions,
-			key,
-			levelsMap["mlflow_models"],
-		)...)
-	}
-}
-
-func applyForModelServiceEndpoints(ctx context.Context, b *bundle.Bundle) {
-	for key, model := range b.Config.Resources.ModelServingEndpoints {
-		model.Permissions = append(model.Permissions, convert(
-			ctx,
-			b.Config.Permissions,
-			model.Permissions,
-			key,
-			levelsMap["model_serving_endpoints"],
-		)...)
-	}
-}
-
 func (m *bundlePermissions) Name() string {
 	return "ApplyBundlePermissions"
 }
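
The rewritten Apply above replaces one hand-written applyFor* helper per resource type with a single dyn.MapByPattern walk over every resources.<type>.<name> node. A toy model of that behavior on plain Go maps — deliberately not the dyn.Value machinery itself — showing supported types receiving the bundle-level permissions while unsupported ones are left untouched:

package main

import "fmt"

type permission struct {
    Level    string
    UserName string
}

// applyToAll mimics the pattern walk: for every resource instance whose
// type has a level mapping, append the bundle-level permissions with the
// level translated through that mapping.
func applyToAll(res map[string]map[string]map[string]any, supported map[string]map[string]string, perms []permission) {
    for typ, instances := range res {
        levels, ok := supported[typ]
        if !ok {
            continue // unsupported types are skipped, as with unsupportedResources
        }
        for _, cfg := range instances {
            existing, _ := cfg["permissions"].([]permission)
            for _, p := range perms {
                if mapped, ok := levels[p.Level]; ok {
                    existing = append(existing, permission{Level: mapped, UserName: p.UserName})
                }
            }
            cfg["permissions"] = existing
        }
    }
}

func main() {
    res := map[string]map[string]map[string]any{
        "jobs":     {"job_1": {}},
        "clusters": {"cluster_1": {}}, // no level mapping, left untouched
    }
    supported := map[string]map[string]string{
        "jobs": {"CAN_MANAGE": "CAN_MANAGE", "CAN_VIEW": "CAN_VIEW"},
    }
    applyToAll(res, supported, []permission{{Level: "CAN_MANAGE", UserName: "TestUser"}})
    fmt.Println(res["jobs"]["job_1"]["permissions"])     // [{CAN_MANAGE TestUser}]
    fmt.Println(res["clusters"]["cluster_1"]["permissions"]) // <nil>
}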

View File

@ -2,12 +2,15 @@ package permissions
import ( import (
"context" "context"
"fmt"
"slices"
"testing" "testing"
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources" "github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/databricks-sdk-go/service/jobs" "github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -51,6 +54,10 @@ func TestApplyBundlePermissions(t *testing.T) {
"endpoint_1": {}, "endpoint_1": {},
"endpoint_2": {}, "endpoint_2": {},
}, },
Dashboards: map[string]*resources.Dashboard{
"dashboard_1": {},
"dashboard_2": {},
},
}, },
}, },
} }
@ -103,6 +110,10 @@ func TestApplyBundlePermissions(t *testing.T) {
require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_MANAGE", UserName: "TestUser"}) require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_MANAGE", UserName: "TestUser"})
require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_VIEW", GroupName: "TestGroup"}) require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_VIEW", GroupName: "TestGroup"})
require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_QUERY", ServicePrincipalName: "TestServicePrincipal"}) require.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint_2"].Permissions, resources.Permission{Level: "CAN_QUERY", ServicePrincipalName: "TestServicePrincipal"})
require.Len(t, b.Config.Resources.Dashboards["dashboard_1"].Permissions, 2)
require.Contains(t, b.Config.Resources.Dashboards["dashboard_1"].Permissions, resources.Permission{Level: "CAN_MANAGE", UserName: "TestUser"})
require.Contains(t, b.Config.Resources.Dashboards["dashboard_1"].Permissions, resources.Permission{Level: "CAN_READ", GroupName: "TestGroup"})
} }
func TestWarningOnOverlapPermission(t *testing.T) { func TestWarningOnOverlapPermission(t *testing.T) {
@ -146,5 +157,20 @@ func TestWarningOnOverlapPermission(t *testing.T) {
require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_VIEW", UserName: "TestUser2"}) require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_VIEW", UserName: "TestUser2"})
require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_MANAGE", UserName: "TestUser"}) require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_MANAGE", UserName: "TestUser"})
require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_VIEW", GroupName: "TestGroup"}) require.Contains(t, b.Config.Resources.Jobs["job_2"].Permissions, resources.Permission{Level: "CAN_VIEW", GroupName: "TestGroup"})
}
func TestAllResourcesExplicitlyDefinedForPermissionsSupport(t *testing.T) {
r := config.Resources{}
for _, resource := range unsupportedResources {
_, ok := levelsMap[resource]
assert.False(t, ok, fmt.Sprintf("Resource %s is defined in both levelsMap and unsupportedResources", resource))
}
for _, resource := range r.AllResources() {
_, ok := levelsMap[resource.Description.PluralName]
if !slices.Contains(unsupportedResources, resource.Description.PluralName) && !ok {
assert.Fail(t, fmt.Sprintf("Resource %s is not explicitly defined in levelsMap or unsupportedResources", resource.Description.PluralName))
}
}
} }
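This test is a completeness check in both directions: no resource type may appear in both levelsMap and unsupportedResources, and every type reported by AllResources() must appear in exactly one of them, so a newly added resource type fails the build until its permission handling is decided explicitly. A minimal sketch of the companion list (contents here are illustrative, not the actual values):

// Plural names of resource types that deliberately do not receive
// top-level bundle permissions. Hypothetical contents for illustration.
var unsupportedResources = []string{"clusters", "volumes"}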

View File

@ -7,7 +7,7 @@ import (
"github.com/databricks/cli/libs/diag" "github.com/databricks/cli/libs/diag"
) )
func convert( func convertPermissions(
ctx context.Context, ctx context.Context,
bundlePermissions []resources.Permission, bundlePermissions []resources.Permission,
resourcePermissions []resources.Permission, resourcePermissions []resources.Permission,
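Beyond readability, the rename from convert to convertPermissions avoids an identifier clash: apply.go in the same package now imports the libs/dyn/convert package for convert.ToTyped and convert.FromTyped, and a package-level function named convert would shadow that imported package name.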

View File

@ -23,10 +23,10 @@ var renderFuncMap = template.FuncMap{
"yellow": color.YellowString, "yellow": color.YellowString,
"magenta": color.MagentaString, "magenta": color.MagentaString,
"cyan": color.CyanString, "cyan": color.CyanString,
"bold": func(format string, a ...interface{}) string { "bold": func(format string, a ...any) string {
return color.New(color.Bold).Sprintf(format, a...) return color.New(color.Bold).Sprintf(format, a...)
}, },
"italic": func(format string, a ...interface{}) string { "italic": func(format string, a ...any) string {
return color.New(color.Italic).Sprintf(format, a...) return color.New(color.Italic).Sprintf(format, a...)
}, },
} }
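Swapping interface{} for any is purely cosmetic; any has been an alias for interface{} since Go 1.18. For context, a minimal sketch of how these helpers are consumed through text/template, using only the functions defined above:

import (
	"os"
	"text/template"
)

func renderExample() error {
	// "bold" and "cyan" resolve to the renderFuncMap entries above.
	tmpl := template.Must(template.New("summary").
		Funcs(renderFuncMap).
		Parse(`{{ bold "Bundle: %s" .Name }} {{ cyan "OK" }}`))
	return tmpl.Execute(os.Stdout, struct{ Name string }{Name: "my_bundle"})
}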

View File

@ -489,7 +489,8 @@ func TestRenderSummaryTemplate_nilBundle(t *testing.T) {
err := renderSummaryHeaderTemplate(writer, nil) err := renderSummaryHeaderTemplate(writer, nil)
require.NoError(t, err) require.NoError(t, err)
io.WriteString(writer, buildTrailer(nil)) _, err = io.WriteString(writer, buildTrailer(nil))
require.NoError(t, err)
assert.Equal(t, "Validation OK!\n", writer.String()) assert.Equal(t, "Validation OK!\n", writer.String())
} }
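The same shape recurs throughout the test fixes below: io.WriteString and http.ResponseWriter.Write both return (int, error), so the errcheck-driven idiom is to discard the byte count with the blank identifier and assert the error instead of dropping both. Sketched:

// Before (flagged by errcheck): io.WriteString(writer, s)
// After: keep the error, discard the byte count.
_, err = io.WriteString(writer, s) // s is whatever payload the test writes
require.NoError(t, err)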

View File

@ -42,7 +42,8 @@ func TestConvertPythonParams(t *testing.T) {
opts := &Options{ opts := &Options{
Job: JobOptions{}, Job: JobOptions{},
} }
runner.convertPythonParams(opts) err := runner.convertPythonParams(opts)
require.NoError(t, err)
require.NotContains(t, opts.Job.notebookParams, "__python_params") require.NotContains(t, opts.Job.notebookParams, "__python_params")
opts = &Options{ opts = &Options{
@ -50,7 +51,8 @@ func TestConvertPythonParams(t *testing.T) {
pythonParams: []string{"param1", "param2", "param3"}, pythonParams: []string{"param1", "param2", "param3"},
}, },
} }
runner.convertPythonParams(opts) err = runner.convertPythonParams(opts)
require.NoError(t, err)
require.Contains(t, opts.Job.notebookParams, "__python_params") require.Contains(t, opts.Job.notebookParams, "__python_params")
require.Equal(t, opts.Job.notebookParams["__python_params"], `["param1","param2","param3"]`) require.Equal(t, opts.Job.notebookParams["__python_params"], `["param1","param2","param3"]`)
} }

View File

@ -15,7 +15,7 @@ type LogsOutput struct {
LogsTruncated bool `json:"logs_truncated"` LogsTruncated bool `json:"logs_truncated"`
} }
func structToString(val interface{}) (string, error) { func structToString(val any) (string, error) {
b, err := json.MarshalIndent(val, "", " ") b, err := json.MarshalIndent(val, "", " ")
if err != nil { if err != nil {
return "", err return "", err

View File

@ -249,9 +249,11 @@
"type": "object", "type": "object",
"properties": { "properties": {
"principal": { "principal": {
"description": "The name of the principal that will be granted privileges",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"privileges": { "privileges": {
"description": "The privileges to grant to the specified entity",
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
} }
}, },
@ -503,15 +505,19 @@
"type": "object", "type": "object",
"properties": { "properties": {
"group_name": { "group_name": {
"description": "The name of the group that has the permission set in level.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"level": { "level": {
"description": "The allowed permission for user, group, service principal defined for this permission.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"service_principal_name": { "service_principal_name": {
"description": "The name of the service principal that has the permission set in level.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"user_name": { "user_name": {
"description": "The name of the user that has the permission set in level.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -895,12 +901,15 @@
"$ref": "#/$defs/interface" "$ref": "#/$defs/interface"
}, },
"description": { "description": {
"description": "The description of the variable",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"lookup": { "lookup": {
"description": "The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.Lookup" "$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.Lookup"
}, },
"type": { "type": {
"description": "The type of the variable.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.VariableType" "$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.VariableType"
} }
}, },
@ -916,12 +925,15 @@
"$ref": "#/$defs/interface" "$ref": "#/$defs/interface"
}, },
"description": { "description": {
"description": "The description of the variable",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"lookup": { "lookup": {
"description": "The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.Lookup" "$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.Lookup"
}, },
"type": { "type": {
"description": "The type of the variable. Valid values are complex.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.VariableType" "$ref": "#/$defs/github.com/databricks/cli/bundle/config/variable.VariableType"
} }
}, },
@ -937,19 +949,24 @@
"type": "object", "type": "object",
"properties": { "properties": {
"build": { "build": {
"description": "An optional set of non-default build commands that you want to run locally before deployment. For Python wheel builds, the Databricks CLI assumes that it can find a local install of the Python wheel package to run builds, and it runs the command python setup.py bdist_wheel by default during each bundle deployment. To specify multiple build commands, separate each command with double-ampersand (\u0026\u0026) characters.", "description": "An optional set of non-default build commands that you want to run locally before deployment.\n\nFor Python wheel builds, the Databricks CLI assumes that it can find a local install of the Python wheel package to run builds, and it runs the command python setup.py bdist_wheel by default during each bundle deployment.\n\nTo specify multiple build commands, separate each command with double-ampersand (\u0026\u0026) characters.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"executable": { "executable": {
"description": "The executable type.",
"$ref": "#/$defs/github.com/databricks/cli/libs/exec.ExecutableType" "$ref": "#/$defs/github.com/databricks/cli/libs/exec.ExecutableType"
}, },
"files": { "files": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config.ArtifactFile" "description": "The source files for the artifact.",
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config.ArtifactFile",
"markdownDescription": "The source files for the artifact, defined as an [artifact_file](https://docs.databricks.com/dev-tools/bundles/reference.html#artifact_file)."
}, },
"path": { "path": {
"description": "The location where the built artifact will be saved.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"type": { "type": {
"description": "The type of the artifact. Valid values are wheel or jar.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.ArtifactType" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.ArtifactType"
} }
}, },
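markdownDescription is not a standard JSON Schema keyword; it follows the VS Code JSON language service convention, in which Markdown-aware editors render it (links included) and everything else falls back to the plain description. A minimal consumer-side sketch, with assumed struct shapes:

// schemaProp models only the two annotation fields relevant here.
type schemaProp struct {
	Description         string `json:"description,omitempty"`
	MarkdownDescription string `json:"markdownDescription,omitempty"`
}

// hoverText mimics how a Markdown-aware editor would choose a string.
func hoverText(p schemaProp) string {
	if p.MarkdownDescription != "" {
		return p.MarkdownDescription
	}
	return p.Description
}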
@ -970,6 +987,7 @@
"type": "object", "type": "object",
"properties": { "properties": {
"source": { "source": {
"description": "The path of the files used to build the artifact.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -993,21 +1011,30 @@
"type": "object", "type": "object",
"properties": { "properties": {
"cluster_id": { "cluster_id": {
"$ref": "#/$defs/string" "description": "The ID of a cluster to use to run the bundle.",
"$ref": "#/$defs/string",
"markdownDescription": "The ID of a cluster to use to run the bundle. See [cluster_id](https://docs.databricks.com/dev-tools/bundles/settings.html#cluster_id)."
}, },
"compute_id": { "compute_id": {
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"databricks_cli_version": { "databricks_cli_version": {
"$ref": "#/$defs/string" "description": "The Databricks CLI version to use for the bundle.",
"$ref": "#/$defs/string",
"markdownDescription": "The Databricks CLI version to use for the bundle. See [databricks_cli_version](https://docs.databricks.com/dev-tools/bundles/settings.html#databricks_cli_version)."
}, },
"deployment": { "deployment": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Deployment" "description": "The definition of the bundle deployment",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Deployment",
"markdownDescription": "The definition of the bundle deployment. For supported attributes, see [deployment](https://docs.databricks.com/dev-tools/bundles/reference.html#deployment) and [link](https://docs.databricks.com/dev-tools/bundles/deployment-modes.html)."
}, },
"git": { "git": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Git" "description": "The Git version control details that are associated with your bundle.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Git",
"markdownDescription": "The Git version control details that are associated with your bundle. For supported attributes, see [git](https://docs.databricks.com/dev-tools/bundles/reference.html#git) and [git](https://docs.databricks.com/dev-tools/bundles/settings.html#git)."
}, },
"name": { "name": {
"description": "The name of the bundle.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"uuid": { "uuid": {
@ -1034,10 +1061,13 @@
"type": "object", "type": "object",
"properties": { "properties": {
"fail_on_active_runs": { "fail_on_active_runs": {
"description": "Whether to fail on active runs. If this is set to true a deployment that is running can be interrupted.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"lock": { "lock": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Lock" "description": "The deployment lock attributes.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Lock",
"markdownDescription": "The deployment lock attributes. See [lock](https://docs.databricks.com/dev-tools/bundles/reference.html#lock)."
} }
}, },
"additionalProperties": false "additionalProperties": false
@ -1054,15 +1084,19 @@
"type": "object", "type": "object",
"properties": { "properties": {
"pydabs": { "pydabs": {
"description": "The PyDABs configuration.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.PyDABs" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.PyDABs"
}, },
"python_wheel_wrapper": { "python_wheel_wrapper": {
"description": "Whether to use a Python wheel wrapper",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"scripts": { "scripts": {
"description": "The commands to run",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Command" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Command"
}, },
"use_legacy_run_as": { "use_legacy_run_as": {
"description": "Whether to use the legacy run_as behavior",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
} }
}, },
@ -1080,10 +1114,14 @@
"type": "object", "type": "object",
"properties": { "properties": {
"branch": { "branch": {
"$ref": "#/$defs/string" "description": "The Git branch name.",
"$ref": "#/$defs/string",
"markdownDescription": "The Git branch name. See [git](https://docs.databricks.com/dev-tools/bundles/settings.html#git)."
}, },
"origin_url": { "origin_url": {
"$ref": "#/$defs/string" "description": "The origin URL of the repository.",
"$ref": "#/$defs/string",
"markdownDescription": "The origin URL of the repository. See [git](https://docs.databricks.com/dev-tools/bundles/settings.html#git)."
} }
}, },
"additionalProperties": false "additionalProperties": false
@ -1100,9 +1138,11 @@
"type": "object", "type": "object",
"properties": { "properties": {
"enabled": { "enabled": {
"description": "Whether this lock is enabled.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"force": { "force": {
"description": "Whether to force this lock if it is enabled.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
} }
}, },
@ -1123,21 +1163,27 @@
"type": "object", "type": "object",
"properties": { "properties": {
"jobs_max_concurrent_runs": { "jobs_max_concurrent_runs": {
"description": "The maximum concurrent runs for a job.",
"$ref": "#/$defs/int" "$ref": "#/$defs/int"
}, },
"name_prefix": { "name_prefix": {
"description": "The prefix for job runs of the bundle.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"pipelines_development": { "pipelines_development": {
"description": "Whether pipeline deployments should be locked in development mode.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"source_linked_deployment": { "source_linked_deployment": {
"description": "Whether to link the deployment to the bundle source.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"tags": { "tags": {
"description": "The tags for the bundle deployment.",
"$ref": "#/$defs/map/string" "$ref": "#/$defs/map/string"
}, },
"trigger_pause_status": { "trigger_pause_status": {
"description": "A pause status to apply to all job triggers and schedules. Valid values are PAUSED or UNPAUSED.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -1155,12 +1201,15 @@
"type": "object", "type": "object",
"properties": { "properties": {
"enabled": { "enabled": {
"description": "Whether or not PyDABs (Private Preview) is enabled",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"import": { "import": {
"description": "The PyDABs project to import to discover resources, resource generator and mutators",
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
}, },
"venv_path": { "venv_path": {
"description": "The Python virtual environment path",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -1178,34 +1227,54 @@
"type": "object", "type": "object",
"properties": { "properties": {
"clusters": { "clusters": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster" "description": "The cluster definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster",
"markdownDescription": "The cluster definitions for the bundle. See [cluster](https://docs.databricks.com/dev-tools/bundles/resources.html#cluster)"
}, },
"dashboards": { "dashboards": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Dashboard" "description": "The dashboard definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Dashboard",
"markdownDescription": "The dashboard definitions for the bundle. See [dashboard](https://docs.databricks.com/dev-tools/bundles/resources.html#dashboard)"
}, },
"experiments": { "experiments": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment" "description": "The experiment definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment",
"markdownDescription": "The experiment definitions for the bundle. See [experiment](https://docs.databricks.com/dev-tools/bundles/resources.html#experiment)"
}, },
"jobs": { "jobs": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Job" "description": "The job definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Job",
"markdownDescription": "The job definitions for the bundle. See [job](https://docs.databricks.com/dev-tools/bundles/resources.html#job)"
}, },
"model_serving_endpoints": { "model_serving_endpoints": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint" "description": "The model serving endpoint definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint",
"markdownDescription": "The model serving endpoint definitions for the bundle. See [model_serving_endpoint](https://docs.databricks.com/dev-tools/bundles/resources.html#model_serving_endpoint)"
}, },
"models": { "models": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowModel" "description": "The model definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowModel",
"markdownDescription": "The model definitions for the bundle. See [model](https://docs.databricks.com/dev-tools/bundles/resources.html#model)"
}, },
"pipelines": { "pipelines": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Pipeline" "description": "The pipeline definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Pipeline",
"markdownDescription": "The pipeline definitions for the bundle. See [pipeline](https://docs.databricks.com/dev-tools/bundles/resources.html#pipeline)"
}, },
"quality_monitors": { "quality_monitors": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.QualityMonitor" "description": "The quality monitor definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.QualityMonitor",
"markdownDescription": "The quality monitor definitions for the bundle. See [quality_monitor](https://docs.databricks.com/dev-tools/bundles/resources.html#quality_monitor)"
}, },
"registered_models": { "registered_models": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.RegisteredModel" "description": "The registered model definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.RegisteredModel",
"markdownDescription": "The registered model definitions for the bundle. See [registered_model](https://docs.databricks.com/dev-tools/bundles/resources.html#registered_model)"
}, },
"schemas": { "schemas": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Schema" "description": "The schema definitions for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Schema",
"markdownDescription": "The schema definitions for the bundle. See [schema](https://docs.databricks.com/dev-tools/bundles/resources.html#schema)"
}, },
"volumes": { "volumes": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Volume" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Volume"
@ -1225,12 +1294,15 @@
"type": "object", "type": "object",
"properties": { "properties": {
"exclude": { "exclude": {
"description": "A list of files or folders to exclude from the bundle.",
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
}, },
"include": { "include": {
"description": "A list of files or folders to include in the bundle.",
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
}, },
"paths": { "paths": {
"description": "The local folder paths, which can be outside the bundle root, to synchronize to the workspace when the bundle is deployed.",
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
} }
}, },
@ -1248,46 +1320,70 @@
"type": "object", "type": "object",
"properties": { "properties": {
"artifacts": { "artifacts": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Artifact" "description": "The artifacts to include in the target deployment.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Artifact",
"markdownDescription": "The artifacts to include in the target deployment. See [artifact](https://docs.databricks.com/dev-tools/bundles/reference.html#artifact)"
}, },
"bundle": { "bundle": {
"description": "The name of the bundle when deploying to this target.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle"
}, },
"cluster_id": { "cluster_id": {
"description": "The ID of the cluster to use for this target.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"compute_id": { "compute_id": {
"description": "Deprecated. The ID of the compute to use for this target.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"default": { "default": {
"description": "Whether this target is the default target.",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"git": { "git": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Git" "description": "The Git version control settings for the target.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Git",
"markdownDescription": "The Git version control settings for the target. See [git](https://docs.databricks.com/dev-tools/bundles/reference.html#git)."
}, },
"mode": { "mode": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Mode" "description": "The deployment mode for the target. Valid values are development or production.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Mode",
"markdownDescription": "The deployment mode for the target. Valid values are development or production. See [link](https://docs.databricks.com/dev-tools/bundles/deployment-modes.html)."
}, },
"permissions": { "permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission" "description": "The permissions for deploying and running the bundle in the target.",
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission",
"markdownDescription": "The permissions for deploying and running the bundle in the target. See [permission](https://docs.databricks.com/dev-tools/bundles/reference.html#permission)."
}, },
"presets": { "presets": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Presets" "description": "The deployment presets for the target.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Presets",
"markdownDescription": "The deployment presets for the target. See [preset](https://docs.databricks.com/dev-tools/bundles/reference.html#preset)."
}, },
"resources": { "resources": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Resources" "description": "The resource definitions for the target.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Resources",
"markdownDescription": "The resource definitions for the target. See [resources](https://docs.databricks.com/dev-tools/bundles/reference.html#resources)."
}, },
"run_as": { "run_as": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobRunAs" "description": "The identity to use to run the bundle.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobRunAs",
"markdownDescription": "The identity to use to run the bundle. See [job_run_as](https://docs.databricks.com/dev-tools/bundles/reference.html#job_run_as) and [link](https://docs.databricks.com/dev-tools/bundles/run_as.html)."
}, },
"sync": { "sync": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Sync" "description": "The local paths to sync to the target workspace when a bundle is run or deployed.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Sync",
"markdownDescription": "The local paths to sync to the target workspace when a bundle is run or deployed. See [sync](https://docs.databricks.com/dev-tools/bundles/reference.html#sync)."
}, },
"variables": { "variables": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/variable.TargetVariable" "description": "The custom variable definitions for the target.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/variable.TargetVariable",
"markdownDescription": "The custom variable definitions for the target. See [variables](https://docs.databricks.com/dev-tools/bundles/settings.html#variables) and [link](https://docs.databricks.com/dev-tools/bundles/variables.html)."
}, },
"workspace": { "workspace": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Workspace" "description": "The Databricks workspace for the target. _.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Workspace",
"markdownDescription": "The Databricks workspace for the target. [workspace](https://docs.databricks.com/dev-tools/bundles/reference.html#workspace)"
} }
}, },
"additionalProperties": false "additionalProperties": false
@ -1304,51 +1400,67 @@
"type": "object", "type": "object",
"properties": { "properties": {
"artifact_path": { "artifact_path": {
"description": "The artifact path to use within the workspace for both deployments and workflow runs",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"auth_type": { "auth_type": {
"description": "The authentication type.",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"azure_client_id": { "azure_client_id": {
"description": "The Azure client ID",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"azure_environment": { "azure_environment": {
"description": "The Azure environment",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"azure_login_app_id": { "azure_login_app_id": {
"description": "The Azure login app ID",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"azure_tenant_id": { "azure_tenant_id": {
"description": "The Azure tenant ID",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"azure_use_msi": { "azure_use_msi": {
"description": "Whether to use MSI for Azure",
"$ref": "#/$defs/bool" "$ref": "#/$defs/bool"
}, },
"azure_workspace_resource_id": { "azure_workspace_resource_id": {
"description": "The Azure workspace resource ID",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"client_id": { "client_id": {
"description": "The client ID for the workspace",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"file_path": { "file_path": {
"description": "The file path to use within the workspace for both deployments and workflow runs",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"google_service_account": { "google_service_account": {
"description": "The Google service account name",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"host": { "host": {
"description": "The Databricks workspace host URL",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"profile": { "profile": {
"description": "The Databricks workspace profile name",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"resource_path": { "resource_path": {
"description": "The workspace resource path",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"root_path": { "root_path": {
"description": "The Databricks workspace root path",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
"state_path": { "state_path": {
"description": "The workspace state path",
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
} }
}, },
@ -6161,36 +6273,50 @@
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Artifact" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Artifact"
}, },
"bundle": { "bundle": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle" "description": "The attributes of the bundle.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle",
"markdownDescription": "The attributes of the bundle. See [bundle](https://docs.databricks.com/dev-tools/bundles/settings.html#bundle)"
}, },
"experimental": { "experimental": {
"description": "Defines attributes for experimental features.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Experimental" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.Experimental"
}, },
"include": { "include": {
"$ref": "#/$defs/slice/string" "$ref": "#/$defs/slice/string"
}, },
"permissions": { "permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission" "description": "Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle",
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission",
"markdownDescription": "Defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle. See [permissions](https://docs.databricks.com/dev-tools/bundles/settings.html#permissions) and [link](https://docs.databricks.com/dev-tools/bundles/permissions.html)."
}, },
"presets": { "presets": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Presets" "description": "Defines bundle deployment presets.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Presets",
"markdownDescription": "Defines bundle deployment presets. See [presets](https://docs.databricks.com/dev-tools/bundles/deployment-modes.html#presets)."
}, },
"resources": { "resources": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Resources" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.Resources",
"markdownDescription": "See [link](https://docs.databricks.com/dev-tools/bundles/resources.html)."
}, },
"run_as": { "run_as": {
"description": "The identity to use to run the bundle.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobRunAs" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobRunAs"
}, },
"sync": { "sync": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Sync" "description": "The files and file paths to include or exclude in the bundle.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Sync",
"markdownDescription": "The files and file paths to include or exclude in the bundle. See [link](https://docs.databricks.com/dev-tools/bundles/)"
}, },
"targets": { "targets": {
"description": "Defines deployment targets for the bundle.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Target" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config.Target"
}, },
"variables": { "variables": {
"description": "A Map that defines the custom variables for the bundle, where each key is the name of the variable, and the value is a Map that defines the variable.",
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/variable.Variable" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/variable.Variable"
}, },
"workspace": { "workspace": {
"description": "Defines the Databricks workspace for the bundle.",
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Workspace" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.Workspace"
} }
}, },

View File

@ -104,5 +104,5 @@ func TestComplexVariablesOverrideWithFullSyntax(t *testing.T) {
require.Empty(t, diags) require.Empty(t, diags)
complexvar := b.Config.Variables["complexvar"].Value complexvar := b.Config.Variables["complexvar"].Value
require.Equal(t, map[string]interface{}{"key1": "1", "key2": "2", "key3": "3"}, complexvar) require.Equal(t, map[string]any{"key1": "1", "key2": "2", "key3": "3"}, complexvar)
} }

View File

@ -31,7 +31,8 @@ func TestGetWorkspaceAuthStatus(t *testing.T) {
cmd.Flags().String("host", "", "") cmd.Flags().String("host", "", "")
cmd.Flags().String("profile", "", "") cmd.Flags().String("profile", "", "")
cmd.Flag("profile").Value.Set("my-profile") err := cmd.Flag("profile").Value.Set("my-profile")
require.NoError(t, err)
cmd.Flag("profile").Changed = true cmd.Flag("profile").Changed = true
cfg := &config.Config{ cfg := &config.Config{
@ -39,14 +40,16 @@ func TestGetWorkspaceAuthStatus(t *testing.T) {
} }
m.WorkspaceClient.Config = cfg m.WorkspaceClient.Config = cfg
t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli") t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli")
config.ConfigAttributes.Configure(cfg) err = config.ConfigAttributes.Configure(cfg)
require.NoError(t, err)
status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) { status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) {
config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{ err := config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{
"host": "https://test.com", "host": "https://test.com",
"token": "test-token", "token": "test-token",
"auth_type": "azure-cli", "auth_type": "azure-cli",
}) })
require.NoError(t, err)
return cfg, false, nil return cfg, false, nil
}) })
require.NoError(t, err) require.NoError(t, err)
@ -81,7 +84,8 @@ func TestGetWorkspaceAuthStatusError(t *testing.T) {
cmd.Flags().String("host", "", "") cmd.Flags().String("host", "", "")
cmd.Flags().String("profile", "", "") cmd.Flags().String("profile", "", "")
cmd.Flag("profile").Value.Set("my-profile") err := cmd.Flag("profile").Value.Set("my-profile")
require.NoError(t, err)
cmd.Flag("profile").Changed = true cmd.Flag("profile").Changed = true
cfg := &config.Config{ cfg := &config.Config{
@ -89,10 +93,11 @@ func TestGetWorkspaceAuthStatusError(t *testing.T) {
} }
m.WorkspaceClient.Config = cfg m.WorkspaceClient.Config = cfg
t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli") t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli")
config.ConfigAttributes.Configure(cfg) err = config.ConfigAttributes.Configure(cfg)
require.NoError(t, err)
status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) { status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) {
config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{ err = config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{
"host": "https://test.com", "host": "https://test.com",
"token": "test-token", "token": "test-token",
"auth_type": "azure-cli", "auth_type": "azure-cli",
@ -128,7 +133,8 @@ func TestGetWorkspaceAuthStatusSensitive(t *testing.T) {
cmd.Flags().String("host", "", "") cmd.Flags().String("host", "", "")
cmd.Flags().String("profile", "", "") cmd.Flags().String("profile", "", "")
cmd.Flag("profile").Value.Set("my-profile") err := cmd.Flag("profile").Value.Set("my-profile")
require.NoError(t, err)
cmd.Flag("profile").Changed = true cmd.Flag("profile").Changed = true
cfg := &config.Config{ cfg := &config.Config{
@ -136,10 +142,11 @@ func TestGetWorkspaceAuthStatusSensitive(t *testing.T) {
} }
m.WorkspaceClient.Config = cfg m.WorkspaceClient.Config = cfg
t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli") t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli")
config.ConfigAttributes.Configure(cfg) err = config.ConfigAttributes.Configure(cfg)
require.NoError(t, err)
status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) { status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) {
config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{ err = config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{
"host": "https://test.com", "host": "https://test.com",
"token": "test-token", "token": "test-token",
"auth_type": "azure-cli", "auth_type": "azure-cli",
@ -171,7 +178,8 @@ func TestGetAccountAuthStatus(t *testing.T) {
cmd.Flags().String("host", "", "") cmd.Flags().String("host", "", "")
cmd.Flags().String("profile", "", "") cmd.Flags().String("profile", "", "")
cmd.Flag("profile").Value.Set("my-profile") err := cmd.Flag("profile").Value.Set("my-profile")
require.NoError(t, err)
cmd.Flag("profile").Changed = true cmd.Flag("profile").Changed = true
cfg := &config.Config{ cfg := &config.Config{
@ -179,13 +187,14 @@ func TestGetAccountAuthStatus(t *testing.T) {
} }
m.AccountClient.Config = cfg m.AccountClient.Config = cfg
t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli") t.Setenv("DATABRICKS_AUTH_TYPE", "azure-cli")
config.ConfigAttributes.Configure(cfg) err = config.ConfigAttributes.Configure(cfg)
require.NoError(t, err)
wsApi := m.GetMockWorkspacesAPI() wsApi := m.GetMockWorkspacesAPI()
wsApi.EXPECT().List(mock.Anything).Return(nil, nil) wsApi.EXPECT().List(mock.Anything).Return(nil, nil)
status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) { status, err := getAuthStatus(cmd, []string{}, showSensitive, func(cmd *cobra.Command, args []string) (*config.Config, bool, error) {
config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{ err = config.ConfigAttributes.ResolveFromStringMap(cfg, map[string]string{
"account_id": "test-account-id", "account_id": "test-account-id",
"username": "test-user", "username": "test-user",
"host": "https://test.com", "host": "https://test.com",

View File

@ -67,9 +67,10 @@ func TestDashboard_ExistingID_Nominal(t *testing.T) {
ctx := bundle.Context(context.Background(), b) ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand() cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx) cmd.SetContext(ctx)
cmd.Flag("existing-id").Value.Set("f00dcafe") err := cmd.Flag("existing-id").Value.Set("f00dcafe")
require.NoError(t, err)
err := cmd.RunE(cmd, []string{}) err = cmd.RunE(cmd, []string{})
require.NoError(t, err) require.NoError(t, err)
// Assert the contents of the generated configuration // Assert the contents of the generated configuration
@ -105,9 +106,10 @@ func TestDashboard_ExistingID_NotFound(t *testing.T) {
ctx := bundle.Context(context.Background(), b) ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand() cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx) cmd.SetContext(ctx)
cmd.Flag("existing-id").Value.Set("f00dcafe") err := cmd.Flag("existing-id").Value.Set("f00dcafe")
require.NoError(t, err)
err := cmd.RunE(cmd, []string{}) err = cmd.RunE(cmd, []string{})
require.Error(t, err) require.Error(t, err)
} }
@ -137,9 +139,10 @@ func TestDashboard_ExistingPath_Nominal(t *testing.T) {
ctx := bundle.Context(context.Background(), b) ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand() cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx) cmd.SetContext(ctx)
cmd.Flag("existing-path").Value.Set("/path/to/dashboard") err := cmd.Flag("existing-path").Value.Set("/path/to/dashboard")
require.NoError(t, err)
err := cmd.RunE(cmd, []string{}) err = cmd.RunE(cmd, []string{})
require.NoError(t, err) require.NoError(t, err)
// Assert the contents of the generated configuration // Assert the contents of the generated configuration
@ -175,8 +178,9 @@ func TestDashboard_ExistingPath_NotFound(t *testing.T) {
ctx := bundle.Context(context.Background(), b) ctx := bundle.Context(context.Background(), b)
cmd := NewGenerateDashboardCommand() cmd := NewGenerateDashboardCommand()
cmd.SetContext(ctx) cmd.SetContext(ctx)
cmd.Flag("existing-path").Value.Set("/path/to/dashboard") err := cmd.Flag("existing-path").Value.Set("/path/to/dashboard")
require.NoError(t, err)
err := cmd.RunE(cmd, []string{}) err = cmd.RunE(cmd, []string{})
require.Error(t, err) require.Error(t, err)
} }

View File

@ -78,13 +78,13 @@ func TestGeneratePipelineCommand(t *testing.T) {
workspaceApi.EXPECT().Download(mock.Anything, "/test/file.py", mock.Anything).Return(pyContent, nil) workspaceApi.EXPECT().Download(mock.Anything, "/test/file.py", mock.Anything).Return(pyContent, nil)
cmd.SetContext(bundle.Context(context.Background(), b)) cmd.SetContext(bundle.Context(context.Background(), b))
cmd.Flag("existing-pipeline-id").Value.Set("test-pipeline") require.NoError(t, cmd.Flag("existing-pipeline-id").Value.Set("test-pipeline"))
configDir := filepath.Join(root, "resources") configDir := filepath.Join(root, "resources")
cmd.Flag("config-dir").Value.Set(configDir) require.NoError(t, cmd.Flag("config-dir").Value.Set(configDir))
srcDir := filepath.Join(root, "src") srcDir := filepath.Join(root, "src")
cmd.Flag("source-dir").Value.Set(srcDir) require.NoError(t, cmd.Flag("source-dir").Value.Set(srcDir))
var key string var key string
cmd.Flags().StringVar(&key, "key", "test_pipeline", "") cmd.Flags().StringVar(&key, "key", "test_pipeline", "")
@ -174,13 +174,13 @@ func TestGenerateJobCommand(t *testing.T) {
workspaceApi.EXPECT().Download(mock.Anything, "/test/notebook", mock.Anything).Return(notebookContent, nil) workspaceApi.EXPECT().Download(mock.Anything, "/test/notebook", mock.Anything).Return(notebookContent, nil)
cmd.SetContext(bundle.Context(context.Background(), b)) cmd.SetContext(bundle.Context(context.Background(), b))
cmd.Flag("existing-job-id").Value.Set("1234") require.NoError(t, cmd.Flag("existing-job-id").Value.Set("1234"))
configDir := filepath.Join(root, "resources") configDir := filepath.Join(root, "resources")
cmd.Flag("config-dir").Value.Set(configDir) require.NoError(t, cmd.Flag("config-dir").Value.Set(configDir))
srcDir := filepath.Join(root, "src") srcDir := filepath.Join(root, "src")
cmd.Flag("source-dir").Value.Set(srcDir) require.NoError(t, cmd.Flag("source-dir").Value.Set(srcDir))
var key string var key string
cmd.Flags().StringVar(&key, "key", "test_job", "") cmd.Flags().StringVar(&key, "key", "test_job", "")
@ -279,13 +279,13 @@ func TestGenerateJobCommandOldFileRename(t *testing.T) {
workspaceApi.EXPECT().Download(mock.Anything, "/test/notebook", mock.Anything).Return(notebookContent, nil) workspaceApi.EXPECT().Download(mock.Anything, "/test/notebook", mock.Anything).Return(notebookContent, nil)
cmd.SetContext(bundle.Context(context.Background(), b)) cmd.SetContext(bundle.Context(context.Background(), b))
cmd.Flag("existing-job-id").Value.Set("1234") require.NoError(t, cmd.Flag("existing-job-id").Value.Set("1234"))
configDir := filepath.Join(root, "resources") configDir := filepath.Join(root, "resources")
cmd.Flag("config-dir").Value.Set(configDir) require.NoError(t, cmd.Flag("config-dir").Value.Set(configDir))
srcDir := filepath.Join(root, "src") srcDir := filepath.Join(root, "src")
cmd.Flag("source-dir").Value.Set(srcDir) require.NoError(t, cmd.Flag("source-dir").Value.Set(srcDir))
var key string var key string
cmd.Flags().StringVar(&key, "key", "test_job", "") cmd.Flags().StringVar(&key, "key", "test_job", "")
@ -295,7 +295,7 @@ func TestGenerateJobCommandOldFileRename(t *testing.T) {
touchEmptyFile(t, oldFilename) touchEmptyFile(t, oldFilename)
// Having existing files requires the --force flag to regenerate them // Having existing files requires the --force flag to regenerate them
cmd.Flag("force").Value.Set("true") require.NoError(t, cmd.Flag("force").Value.Set("true"))
err := cmd.RunE(cmd, []string{}) err := cmd.RunE(cmd, []string{})
require.NoError(t, err) require.NoError(t, err)

View File

@ -7,12 +7,14 @@ import (
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func TestFileFromRef(t *testing.T) { func TestFileFromRef(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/databrickslabs/ucx/main/README.md" { if r.URL.Path == "/databrickslabs/ucx/main/README.md" {
w.Write([]byte(`abc`)) _, err := w.Write([]byte(`abc`))
require.NoError(t, err)
return return
} }
t.Logf("Requested: %s", r.URL.Path) t.Logf("Requested: %s", r.URL.Path)
@ -31,7 +33,8 @@ func TestFileFromRef(t *testing.T) {
func TestDownloadZipball(t *testing.T) { func TestDownloadZipball(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/repos/databrickslabs/ucx/zipball/main" { if r.URL.Path == "/repos/databrickslabs/ucx/zipball/main" {
w.Write([]byte(`abc`)) _, err := w.Write([]byte(`abc`))
require.NoError(t, err)
return return
} }
t.Logf("Requested: %s", r.URL.Path) t.Logf("Requested: %s", r.URL.Path)

View File

@ -7,12 +7,14 @@ import (
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func TestLoadsReleasesForCLI(t *testing.T) { func TestLoadsReleasesForCLI(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/repos/databricks/cli/releases" { if r.URL.Path == "/repos/databricks/cli/releases" {
w.Write([]byte(`[{"tag_name": "v1.2.3"}, {"tag_name": "v1.2.2"}]`)) _, err := w.Write([]byte(`[{"tag_name": "v1.2.3"}, {"tag_name": "v1.2.2"}]`))
require.NoError(t, err)
return return
} }
t.Logf("Requested: %s", r.URL.Path) t.Logf("Requested: %s", r.URL.Path)

View File

@ -7,12 +7,14 @@ import (
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func TestRepositories(t *testing.T) { func TestRepositories(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/users/databrickslabs/repos" { if r.URL.Path == "/users/databrickslabs/repos" {
w.Write([]byte(`[{"name": "x"}]`)) _, err := w.Write([]byte(`[{"name": "x"}]`))
require.NoError(t, err)
return return
} }
t.Logf("Requested: %s", r.URL.Path) t.Logf("Requested: %s", r.URL.Path)

View File

@ -117,10 +117,10 @@ func installerContext(t *testing.T, server *httptest.Server) context.Context {
func respondWithJSON(t *testing.T, w http.ResponseWriter, v any) { func respondWithJSON(t *testing.T, w http.ResponseWriter, v any) {
raw, err := json.Marshal(v) raw, err := json.Marshal(v)
if err != nil {
require.NoError(t, err) require.NoError(t, err)
}
w.Write(raw) _, err = w.Write(raw)
require.NoError(t, err)
} }
type fileTree struct { type fileTree struct {
@ -167,19 +167,17 @@ func TestInstallerWorksForReleases(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/databrickslabs/blueprint/v0.3.15/labs.yml" { if r.URL.Path == "/databrickslabs/blueprint/v0.3.15/labs.yml" {
raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml") raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml")
if err != nil { require.NoError(t, err)
panic(err) _, err = w.Write(raw)
} require.NoError(t, err)
w.Write(raw)
return return
} }
if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.3.15" { if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.3.15" {
raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib") raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib")
if err != nil { require.NoError(t, err)
panic(err)
}
w.Header().Add("Content-Type", "application/octet-stream") w.Header().Add("Content-Type", "application/octet-stream")
w.Write(raw) _, err = w.Write(raw)
require.NoError(t, err)
return return
} }
if r.URL.Path == "/api/2.1/clusters/get" { if r.URL.Path == "/api/2.1/clusters/get" {
@ -314,7 +312,10 @@ func TestInstallerWorksForDevelopment(t *testing.T) {
defer server.Close() defer server.Close()
wd, _ := os.Getwd() wd, _ := os.Getwd()
defer os.Chdir(wd) defer func() {
err := os.Chdir(wd)
require.NoError(t, err)
}()
devDir := copyTestdata(t, "testdata/installed-in-home/.databricks/labs/blueprint/lib") devDir := copyTestdata(t, "testdata/installed-in-home/.databricks/labs/blueprint/lib")
err := os.Chdir(devDir) err := os.Chdir(devDir)
@ -373,19 +374,17 @@ func TestUpgraderWorksForReleases(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/databrickslabs/blueprint/v0.4.0/labs.yml" { if r.URL.Path == "/databrickslabs/blueprint/v0.4.0/labs.yml" {
raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml") raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml")
if err != nil { require.NoError(t, err)
panic(err) _, err = w.Write(raw)
} require.NoError(t, err)
w.Write(raw)
return return
} }
if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.4.0" { if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.4.0" {
raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib") raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib")
if err != nil { require.NoError(t, err)
panic(err)
}
w.Header().Add("Content-Type", "application/octet-stream") w.Header().Add("Content-Type", "application/octet-stream")
w.Write(raw) _, err = w.Write(raw)
require.NoError(t, err)
return return
} }
if r.URL.Path == "/api/2.1/clusters/get" { if r.URL.Path == "/api/2.1/clusters/get" {

View File

@ -99,10 +99,11 @@ func TestBundleConfigureWithNonExistentProfileFlag(t *testing.T) {
testutil.CleanupEnvironment(t) testutil.CleanupEnvironment(t)
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("NOEXIST") err := cmd.Flag("profile").Value.Set("NOEXIST")
require.NoError(t, err)
b := setupWithHost(t, cmd, "https://x.com") b := setupWithHost(t, cmd, "https://x.com")
_, err := b.InitializeWorkspaceClient() _, err = b.InitializeWorkspaceClient()
assert.ErrorContains(t, err, "has no NOEXIST profile configured") assert.ErrorContains(t, err, "has no NOEXIST profile configured")
} }
@ -110,10 +111,11 @@ func TestBundleConfigureWithMismatchedProfile(t *testing.T) {
testutil.CleanupEnvironment(t) testutil.CleanupEnvironment(t)
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("PROFILE-1") err := cmd.Flag("profile").Value.Set("PROFILE-1")
require.NoError(t, err)
b := setupWithHost(t, cmd, "https://x.com") b := setupWithHost(t, cmd, "https://x.com")
_, err := b.InitializeWorkspaceClient() _, err = b.InitializeWorkspaceClient()
assert.ErrorContains(t, err, "config host mismatch: profile uses host https://a.com, but CLI configured to use https://x.com") assert.ErrorContains(t, err, "config host mismatch: profile uses host https://a.com, but CLI configured to use https://x.com")
} }
@ -121,7 +123,8 @@ func TestBundleConfigureWithCorrectProfile(t *testing.T) {
testutil.CleanupEnvironment(t) testutil.CleanupEnvironment(t)
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("PROFILE-1") err := cmd.Flag("profile").Value.Set("PROFILE-1")
require.NoError(t, err)
b := setupWithHost(t, cmd, "https://a.com") b := setupWithHost(t, cmd, "https://a.com")
client, err := b.InitializeWorkspaceClient() client, err := b.InitializeWorkspaceClient()
@ -146,7 +149,8 @@ func TestBundleConfigureWithProfileFlagAndEnvVariable(t *testing.T) {
t.Setenv("DATABRICKS_CONFIG_PROFILE", "NOEXIST") t.Setenv("DATABRICKS_CONFIG_PROFILE", "NOEXIST")
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("PROFILE-1") err := cmd.Flag("profile").Value.Set("PROFILE-1")
require.NoError(t, err)
b := setupWithHost(t, cmd, "https://a.com") b := setupWithHost(t, cmd, "https://a.com")
client, err := b.InitializeWorkspaceClient() client, err := b.InitializeWorkspaceClient()
@ -174,7 +178,8 @@ func TestBundleConfigureProfileFlag(t *testing.T) {
// The --profile flag takes precedence over the profile in the databricks.yml file // The --profile flag takes precedence over the profile in the databricks.yml file
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("PROFILE-2") err := cmd.Flag("profile").Value.Set("PROFILE-2")
require.NoError(t, err)
b := setupWithProfile(t, cmd, "PROFILE-1") b := setupWithProfile(t, cmd, "PROFILE-1")
client, err := b.InitializeWorkspaceClient() client, err := b.InitializeWorkspaceClient()
@ -205,7 +210,8 @@ func TestBundleConfigureProfileFlagAndEnvVariable(t *testing.T) {
// The --profile flag takes precedence over the DATABRICKS_CONFIG_PROFILE environment variable // The --profile flag takes precedence over the DATABRICKS_CONFIG_PROFILE environment variable
t.Setenv("DATABRICKS_CONFIG_PROFILE", "NOEXIST") t.Setenv("DATABRICKS_CONFIG_PROFILE", "NOEXIST")
cmd := emptyCommand(t) cmd := emptyCommand(t)
cmd.Flag("profile").Value.Set("PROFILE-2") err := cmd.Flag("profile").Value.Set("PROFILE-2")
require.NoError(t, err)
b := setupWithProfile(t, cmd, "PROFILE-1") b := setupWithProfile(t, cmd, "PROFILE-1")
client, err := b.InitializeWorkspaceClient() client, err := b.InitializeWorkspaceClient()

View File

@@ -33,27 +33,27 @@ func initializeProgressLoggerTest(t *testing.T) (
 func TestInitializeErrorOnIncompatibleConfig(t *testing.T) {
 	plt, logLevel, logFile, progressFormat := initializeProgressLoggerTest(t)
-	logLevel.Set("info")
-	logFile.Set("stderr")
-	progressFormat.Set("inplace")
+	require.NoError(t, logLevel.Set("info"))
+	require.NoError(t, logFile.Set("stderr"))
+	require.NoError(t, progressFormat.Set("inplace"))
 	_, err := plt.progressLoggerFlag.initializeContext(context.Background())
 	assert.ErrorContains(t, err, "inplace progress logging cannot be used when log-file is stderr")
 }
 func TestNoErrorOnDisabledLogLevel(t *testing.T) {
 	plt, logLevel, logFile, progressFormat := initializeProgressLoggerTest(t)
-	logLevel.Set("disabled")
-	logFile.Set("stderr")
-	progressFormat.Set("inplace")
+	require.NoError(t, logLevel.Set("disabled"))
+	require.NoError(t, logFile.Set("stderr"))
+	require.NoError(t, progressFormat.Set("inplace"))
 	_, err := plt.progressLoggerFlag.initializeContext(context.Background())
 	assert.NoError(t, err)
 }
 func TestNoErrorOnNonStderrLogFile(t *testing.T) {
 	plt, logLevel, logFile, progressFormat := initializeProgressLoggerTest(t)
-	logLevel.Set("info")
-	logFile.Set("stdout")
-	progressFormat.Set("inplace")
+	require.NoError(t, logLevel.Set("info"))
+	require.NoError(t, logFile.Set("stdout"))
+	require.NoError(t, progressFormat.Set("inplace"))
 	_, err := plt.progressLoggerFlag.initializeContext(context.Background())
 	assert.NoError(t, err)
 }

View File

@@ -12,6 +12,8 @@ import (
 	"github.com/databricks/cli/bundle/deploy/files"
 	"github.com/databricks/cli/cmd/root"
 	"github.com/databricks/cli/libs/flags"
+	"github.com/databricks/cli/libs/git"
+	"github.com/databricks/cli/libs/log"
 	"github.com/databricks/cli/libs/sync"
 	"github.com/databricks/cli/libs/vfs"
 	"github.com/spf13/cobra"
@@ -37,6 +39,7 @@ func (f *syncFlags) syncOptionsFromBundle(cmd *cobra.Command, args []string, b *
 	opts.Full = f.full
 	opts.PollInterval = f.interval
+	opts.WorktreeRoot = b.WorktreeRoot
 	return opts, nil
 }
@@ -60,8 +63,27 @@ func (f *syncFlags) syncOptionsFromArgs(cmd *cobra.Command, args []string) (*syn
 		}
 	}
+	ctx := cmd.Context()
+	client := root.WorkspaceClient(ctx)
+	localRoot := vfs.MustNew(args[0])
+	info, err := git.FetchRepositoryInfo(ctx, localRoot.Native(), client)
+	if err != nil {
+		log.Warnf(ctx, "Failed to read git info: %s", err)
+	}
+	var worktreeRoot vfs.Path
+	if info.WorktreeRoot == "" {
+		worktreeRoot = localRoot
+	} else {
+		worktreeRoot = vfs.MustNew(info.WorktreeRoot)
+	}
 	opts := sync.SyncOptions{
-		LocalRoot: vfs.MustNew(args[0]),
+		WorktreeRoot: worktreeRoot,
+		LocalRoot:    localRoot,
 		Paths:   []string{"."},
 		Include: nil,
 		Exclude: nil,
@@ -75,7 +97,7 @@ func (f *syncFlags) syncOptionsFromArgs(cmd *cobra.Command, args []string) (*syn
 		// The sync code will automatically create this directory if it doesn't
 		// exist and add it to the `.gitignore` file in the root.
 		SnapshotBasePath: filepath.Join(args[0], ".databricks"),
-		WorkspaceClient:  root.WorkspaceClient(cmd.Context()),
+		WorkspaceClient:  client,
 		OutputHandler:    outputHandler,
 	}
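The fallback above matters when git.FetchRepositoryInfo cannot determine a worktree root (for example, when syncing a directory that is not under version control): sync then treats the local root itself as the worktree root. A minimal standalone sketch of the same decision, using plain strings instead of vfs.Path (both paths below are hypothetical):

package main

import "fmt"

// pickWorktreeRoot mirrors the fallback in syncOptionsFromArgs: an empty
// detected worktree root means "no git worktree found", so the local root
// doubles as the worktree root.
func pickWorktreeRoot(localRoot, detectedWorktreeRoot string) string {
	if detectedWorktreeRoot == "" {
		return localRoot
	}
	return detectedWorktreeRoot
}

func main() {
	fmt.Println(pickWorktreeRoot("/tmp/project/subdir", ""))             // /tmp/project/subdir
	fmt.Println(pickWorktreeRoot("/tmp/project/subdir", "/tmp/project")) // /tmp/project
}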

View File

@@ -99,7 +99,8 @@ func TestAccAbortBind(t *testing.T) {
 	jobId := gt.createTestJob(ctx)
 	t.Cleanup(func() {
 		gt.destroyJob(ctx, jobId)
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	// Bind should fail because prompting is not possible.

View File

@@ -33,7 +33,8 @@ func setupUcSchemaBundle(t *testing.T, ctx context.Context, w *databricks.Worksp
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	// Assert the schema is created
@@ -190,7 +191,8 @@ func TestAccBundlePipelineRecreateWithoutAutoApprove(t *testing.T) {
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	// Assert the pipeline is created
@@ -258,7 +260,8 @@ func TestAccDeployUcVolume(t *testing.T) {
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	// Assert the volume is created successfully

View File

@@ -46,7 +46,7 @@ func TestAccFilesAreSyncedCorrectlyWhenNoSnapshot(t *testing.T) {
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, destroyBundle(t, ctx, bundleRoot))
 	})
 	remoteRoot := getBundleRemoteRootPath(w, t, uniqueId)

View File

@@ -22,7 +22,8 @@ func TestAccPythonWheelTaskWithEnvironmentsDeployAndRun(t *testing.T) {
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	out, err := runResource(t, ctx, bundleRoot, "some_other_job")

View File

@@ -29,7 +29,8 @@ func runPythonWheelTest(t *testing.T, templateName string, sparkVersion string,
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	out, err := runResource(t, ctx, bundleRoot, "some_other_job")

View File

@@ -31,7 +31,8 @@ func runSparkJarTestCommon(t *testing.T, ctx context.Context, sparkVersion strin
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		destroyBundle(t, ctx, bundleRoot)
+		err := destroyBundle(t, ctx, bundleRoot)
+		require.NoError(t, err)
 	})
 	out, err := runResource(t, ctx, bundleRoot, "jar_job")

View File

@@ -5,11 +5,13 @@ import (
 	"regexp"
 	"testing"
+	"github.com/databricks/cli/internal/acc"
+	"github.com/databricks/databricks-sdk-go/listing"
+	"github.com/databricks/databricks-sdk-go/service/compute"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
-var clusterId string
 func TestAccClustersList(t *testing.T) {
 	t.Log(GetEnvOrSkipTest(t, "CLOUD_ENV"))
@@ -21,13 +23,14 @@ func TestAccClustersList(t *testing.T) {
 	assert.Equal(t, "", stderr.String())
 	idRegExp := regexp.MustCompile(`[0-9]{4}\-[0-9]{6}-[a-z0-9]{8}`)
-	clusterId = idRegExp.FindString(outStr)
+	clusterId := idRegExp.FindString(outStr)
 	assert.NotEmpty(t, clusterId)
 }
 func TestAccClustersGet(t *testing.T) {
 	t.Log(GetEnvOrSkipTest(t, "CLOUD_ENV"))
+	clusterId := findValidClusterID(t)
 	stdout, stderr := RequireSuccessfulRun(t, "clusters", "get", clusterId)
 	outStr := stdout.String()
 	assert.Contains(t, outStr, fmt.Sprintf(`"cluster_id":"%s"`, clusterId))
@@ -38,3 +41,22 @@ func TestClusterCreateErrorWhenNoArguments(t *testing.T) {
 	_, _, err := RequireErrorRun(t, "clusters", "create")
 	assert.Contains(t, err.Error(), "accepts 1 arg(s), received 0")
 }
+// findValidClusterID lists clusters in the workspace to find a valid cluster ID.
+func findValidClusterID(t *testing.T) string {
+	ctx, wt := acc.WorkspaceTest(t)
+	it := wt.W.Clusters.List(ctx, compute.ListClustersRequest{
+		FilterBy: &compute.ListClustersFilterBy{
+			ClusterSources: []compute.ClusterSource{
+				compute.ClusterSourceApi,
+				compute.ClusterSourceUi,
+			},
+		},
+	})
+	clusterIDs, err := listing.ToSliceN(ctx, it, 1)
+	require.NoError(t, err)
+	require.Len(t, clusterIDs, 1)
+	return clusterIDs[0].ClusterId
+}
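Dropping the package-level clusterId in favor of findValidClusterID makes each test self-contained: TestAccClustersGet no longer depends on TestAccClustersList having run first. The same listing.ToSliceN pattern works for any SDK paginator; a minimal sketch outside the test harness (assuming ambient workspace auth and a bound on results that suits the caller):

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/listing"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient()) // assumes auth from env/config
	it := w.Clusters.List(ctx, compute.ListClustersRequest{})
	// Stop after the first few results instead of draining the whole iterator.
	clusters, err := listing.ToSliceN(ctx, it, 5)
	if err != nil {
		panic(err)
	}
	for _, c := range clusters {
		fmt.Println(c.ClusterId)
	}
}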

View File

@@ -457,7 +457,7 @@ func TestAccFilerWorkspaceNotebook(t *testing.T) {
 	// Assert uploading a second time fails due to overwrite mode missing
 	err = f.Write(ctx, tc.name, strings.NewReader(tc.content2))
-	assert.ErrorIs(t, err, fs.ErrExist)
+	require.ErrorIs(t, err, fs.ErrExist)
 	assert.Regexp(t, regexp.MustCompile(`file already exists: .*/`+tc.nameWithoutExt+`$`), err.Error())
 	// Try uploading the notebook again with overwrite flag. This time it should succeed.

internal/git_fetch_test.go (new file, 172 lines)
View File

@@ -0,0 +1,172 @@
package internal
import (
"os"
"os/exec"
"path"
"path/filepath"
"testing"
"github.com/databricks/cli/internal/acc"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/git"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const examplesRepoUrl = "https://github.com/databricks/bundle-examples"
const examplesRepoProvider = "gitHub"
func assertFullGitInfo(t *testing.T, expectedRoot string, info git.RepositoryInfo) {
assert.Equal(t, "main", info.CurrentBranch)
assert.NotEmpty(t, info.LatestCommit)
assert.Equal(t, examplesRepoUrl, info.OriginURL)
assert.Equal(t, expectedRoot, info.WorktreeRoot)
}
func assertEmptyGitInfo(t *testing.T, info git.RepositoryInfo) {
assertSparseGitInfo(t, "", info)
}
func assertSparseGitInfo(t *testing.T, expectedRoot string, info git.RepositoryInfo) {
assert.Equal(t, "", info.CurrentBranch)
assert.Equal(t, "", info.LatestCommit)
assert.Equal(t, "", info.OriginURL)
assert.Equal(t, expectedRoot, info.WorktreeRoot)
}
func TestAccFetchRepositoryInfoAPI_FromRepo(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
me, err := wt.W.CurrentUser.Me(ctx)
require.NoError(t, err)
targetPath := acc.RandomName(path.Join("/Workspace/Users", me.UserName, "/testing-clone-bundle-examples-"))
stdout, stderr := RequireSuccessfulRun(t, "repos", "create", examplesRepoUrl, examplesRepoProvider, "--path", targetPath)
t.Cleanup(func() {
RequireSuccessfulRun(t, "repos", "delete", targetPath)
})
assert.Empty(t, stderr.String())
assert.NotEmpty(t, stdout.String())
ctx = dbr.MockRuntime(ctx, true)
for _, inputPath := range []string{
path.Join(targetPath, "knowledge_base/dashboard_nyc_taxi"),
targetPath,
} {
t.Run(inputPath, func(t *testing.T) {
info, err := git.FetchRepositoryInfo(ctx, inputPath, wt.W)
assert.NoError(t, err)
assertFullGitInfo(t, targetPath, info)
})
}
}
func TestAccFetchRepositoryInfoAPI_FromNonRepo(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
me, err := wt.W.CurrentUser.Me(ctx)
require.NoError(t, err)
rootPath := acc.RandomName(path.Join("/Workspace/Users", me.UserName, "testing-nonrepo-"))
_, stderr := RequireSuccessfulRun(t, "workspace", "mkdirs", path.Join(rootPath, "a/b/c"))
t.Cleanup(func() {
RequireSuccessfulRun(t, "workspace", "delete", "--recursive", rootPath)
})
assert.Empty(t, stderr.String())
ctx = dbr.MockRuntime(ctx, true)
tests := []struct {
input string
msg string
}{
{
input: path.Join(rootPath, "a/b/c"),
msg: "",
},
{
input: rootPath,
msg: "",
},
{
input: path.Join(rootPath, "/non-existent"),
msg: "doesn't exist",
},
}
for _, test := range tests {
t.Run(test.input+" <==> "+test.msg, func(t *testing.T) {
info, err := git.FetchRepositoryInfo(ctx, test.input, wt.W)
if test.msg == "" {
assert.NoError(t, err)
} else {
assert.Error(t, err)
assert.Contains(t, err.Error(), test.msg)
}
assertEmptyGitInfo(t, info)
})
}
}
func TestAccFetchRepositoryInfoDotGit_FromGitRepo(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
repo := cloneRepoLocally(t, examplesRepoUrl)
for _, inputPath := range []string{
filepath.Join(repo, "knowledge_base/dashboard_nyc_taxi"),
repo,
} {
t.Run(inputPath, func(t *testing.T) {
info, err := git.FetchRepositoryInfo(ctx, inputPath, wt.W)
assert.NoError(t, err)
assertFullGitInfo(t, repo, info)
})
}
}
func cloneRepoLocally(t *testing.T, repoUrl string) string {
tempDir := t.TempDir()
localRoot := filepath.Join(tempDir, "repo")
cmd := exec.Command("git", "clone", "--depth=1", examplesRepoUrl, localRoot)
err := cmd.Run()
require.NoError(t, err)
return localRoot
}
func TestAccFetchRepositoryInfoDotGit_FromNonGitRepo(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
tempDir := t.TempDir()
root := filepath.Join(tempDir, "repo")
require.NoError(t, os.MkdirAll(filepath.Join(root, "a/b/c"), 0700))
tests := []string{
filepath.Join(root, "a/b/c"),
root,
filepath.Join(root, "/non-existent"),
}
for _, input := range tests {
t.Run(input, func(t *testing.T) {
info, err := git.FetchRepositoryInfo(ctx, input, wt.W)
assert.NoError(t, err)
assertEmptyGitInfo(t, info)
})
}
}
func TestAccFetchRepositoryInfoDotGit_FromBrokenGitRepo(t *testing.T) {
ctx, wt := acc.WorkspaceTest(t)
tempDir := t.TempDir()
root := filepath.Join(tempDir, "repo")
path := filepath.Join(root, "a/b/c")
require.NoError(t, os.MkdirAll(path, 0700))
require.NoError(t, os.WriteFile(filepath.Join(root, ".git"), []byte(""), 0000))
info, err := git.FetchRepositoryInfo(ctx, path, wt.W)
assert.NoError(t, err)
assertSparseGitInfo(t, root, info)
}

View File

@@ -176,7 +176,10 @@ func (t *cobraTestRunner) SendText(text string) {
 	if t.stdinW == nil {
 		panic("no standard input configured")
 	}
-	t.stdinW.Write([]byte(text + "\n"))
+	_, err := t.stdinW.Write([]byte(text + "\n"))
+	if err != nil {
+		panic("Failed to write to t.stdinW")
+	}
 }
 func (t *cobraTestRunner) RunBackground() {
@@ -276,7 +279,7 @@ func (t *cobraTestRunner) Run() (bytes.Buffer, bytes.Buffer, error) {
 }
 // Like [require.Eventually] but errors if the underlying command has failed.
-func (c *cobraTestRunner) Eventually(condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...interface{}) {
+func (c *cobraTestRunner) Eventually(condition func() bool, waitFor time.Duration, tick time.Duration, msgAndArgs ...any) {
 	ch := make(chan bool, 1)
 	timer := time.NewTimer(waitFor)
@@ -496,9 +499,10 @@ func TemporaryUcVolume(t *testing.T, w *databricks.WorkspaceClient) string {
 	})
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		w.Schemas.Delete(ctx, catalog.DeleteSchemaRequest{
+		err := w.Schemas.Delete(ctx, catalog.DeleteSchemaRequest{
 			FullName: schema.FullName,
 		})
+		require.NoError(t, err)
 	})
 	// Create a volume
@@ -510,9 +514,10 @@ func TemporaryUcVolume(t *testing.T, w *databricks.WorkspaceClient) string {
 	})
 	require.NoError(t, err)
 	t.Cleanup(func() {
-		w.Volumes.Delete(ctx, catalog.DeleteVolumeRequest{
+		err := w.Volumes.Delete(ctx, catalog.DeleteVolumeRequest{
 			Name: volume.FullName,
 		})
+		require.NoError(t, err)
 	})
 	return path.Join("/Volumes", "main", schema.Name, volume.Name)

View File

@@ -39,7 +39,6 @@ func TestAccBundleInitErrorOnUnknownFields(t *testing.T) {
 // make changes that can break the MLOps Stacks DAB. In which case we should
 // skip this test until the MLOps Stacks DAB is updated to work again.
 func TestAccBundleInitOnMlopsStacks(t *testing.T) {
-	t.Parallel()
 	env := testutil.GetCloud(t).String()
 	tmpDir1 := t.TempDir()
@@ -59,7 +58,8 @@ func TestAccBundleInitOnMlopsStacks(t *testing.T) {
 	}
 	b, err := json.Marshal(initConfig)
 	require.NoError(t, err)
-	os.WriteFile(filepath.Join(tmpDir1, "config.json"), b, 0644)
+	err = os.WriteFile(filepath.Join(tmpDir1, "config.json"), b, 0644)
+	require.NoError(t, err)
 	// Run bundle init
 	assert.NoFileExists(t, filepath.Join(tmpDir2, "repo_name", projectName, "README.md"))

View File

@@ -133,7 +133,8 @@ func TestAccLock(t *testing.T) {
 	// assert on active locker content
 	var res map[string]string
-	json.Unmarshal(b, &res)
+	err = json.Unmarshal(b, &res)
+	require.NoError(t, err)
 	assert.NoError(t, err)
 	assert.Equal(t, "Khan", res["surname"])
 	assert.Equal(t, "Shah Rukh", res["name"])

View File

@@ -28,11 +28,11 @@ func (_m *MockFiler) EXPECT() *MockFiler_Expecter {
 // Delete provides a mock function with given fields: ctx, path, mode
 func (_m *MockFiler) Delete(ctx context.Context, path string, mode ...filer.DeleteMode) error {
-	_va := make([]interface{}, len(mode))
+	_va := make([]any, len(mode))
 	for _i := range mode {
 		_va[_i] = mode[_i]
 	}
-	var _ca []interface{}
+	var _ca []any
 	_ca = append(_ca, ctx, path)
 	_ca = append(_ca, _va...)
 	ret := _m.Called(_ca...)
@@ -60,9 +60,9 @@ type MockFiler_Delete_Call struct {
 //   - ctx context.Context
 //   - path string
 //   - mode ...filer.DeleteMode
-func (_e *MockFiler_Expecter) Delete(ctx interface{}, path interface{}, mode ...interface{}) *MockFiler_Delete_Call {
+func (_e *MockFiler_Expecter) Delete(ctx any, path any, mode ...any) *MockFiler_Delete_Call {
 	return &MockFiler_Delete_Call{Call: _e.mock.On("Delete",
-		append([]interface{}{ctx, path}, mode...)...)}
+		append([]any{ctx, path}, mode...)...)}
 }
 func (_c *MockFiler_Delete_Call) Run(run func(ctx context.Context, path string, mode ...filer.DeleteMode)) *MockFiler_Delete_Call {
@@ -114,7 +114,7 @@ type MockFiler_Mkdir_Call struct {
 // Mkdir is a helper method to define mock.On call
 //   - ctx context.Context
 //   - path string
-func (_e *MockFiler_Expecter) Mkdir(ctx interface{}, path interface{}) *MockFiler_Mkdir_Call {
+func (_e *MockFiler_Expecter) Mkdir(ctx any, path any) *MockFiler_Mkdir_Call {
 	return &MockFiler_Mkdir_Call{Call: _e.mock.On("Mkdir", ctx, path)}
 }
@@ -173,7 +173,7 @@ type MockFiler_Read_Call struct {
 // Read is a helper method to define mock.On call
 //   - ctx context.Context
 //   - path string
-func (_e *MockFiler_Expecter) Read(ctx interface{}, path interface{}) *MockFiler_Read_Call {
+func (_e *MockFiler_Expecter) Read(ctx any, path any) *MockFiler_Read_Call {
 	return &MockFiler_Read_Call{Call: _e.mock.On("Read", ctx, path)}
 }
@@ -232,7 +232,7 @@ type MockFiler_ReadDir_Call struct {
 // ReadDir is a helper method to define mock.On call
 //   - ctx context.Context
 //   - path string
-func (_e *MockFiler_Expecter) ReadDir(ctx interface{}, path interface{}) *MockFiler_ReadDir_Call {
+func (_e *MockFiler_Expecter) ReadDir(ctx any, path any) *MockFiler_ReadDir_Call {
 	return &MockFiler_ReadDir_Call{Call: _e.mock.On("ReadDir", ctx, path)}
 }
@@ -291,7 +291,7 @@ type MockFiler_Stat_Call struct {
 // Stat is a helper method to define mock.On call
 //   - ctx context.Context
 //   - name string
-func (_e *MockFiler_Expecter) Stat(ctx interface{}, name interface{}) *MockFiler_Stat_Call {
+func (_e *MockFiler_Expecter) Stat(ctx any, name any) *MockFiler_Stat_Call {
 	return &MockFiler_Stat_Call{Call: _e.mock.On("Stat", ctx, name)}
 }
@@ -314,11 +314,11 @@ func (_c *MockFiler_Stat_Call) RunAndReturn(run func(context.Context, string) (f
 // Write provides a mock function with given fields: ctx, path, reader, mode
 func (_m *MockFiler) Write(ctx context.Context, path string, reader io.Reader, mode ...filer.WriteMode) error {
-	_va := make([]interface{}, len(mode))
+	_va := make([]any, len(mode))
 	for _i := range mode {
 		_va[_i] = mode[_i]
 	}
-	var _ca []interface{}
+	var _ca []any
 	_ca = append(_ca, ctx, path, reader)
 	_ca = append(_ca, _va...)
 	ret := _m.Called(_ca...)
@@ -347,9 +347,9 @@ type MockFiler_Write_Call struct {
 //   - path string
 //   - reader io.Reader
 //   - mode ...filer.WriteMode
-func (_e *MockFiler_Expecter) Write(ctx interface{}, path interface{}, reader interface{}, mode ...interface{}) *MockFiler_Write_Call {
+func (_e *MockFiler_Expecter) Write(ctx any, path any, reader any, mode ...any) *MockFiler_Write_Call {
 	return &MockFiler_Write_Call{Call: _e.mock.On("Write",
-		append([]interface{}{ctx, path, reader}, mode...)...)}
+		append([]any{ctx, path, reader}, mode...)...)}
 }
 func (_c *MockFiler_Write_Call) Run(run func(ctx context.Context, path string, reader io.Reader, mode ...filer.WriteMode)) *MockFiler_Write_Call {

View File

@@ -47,7 +47,11 @@ func testTags(t *testing.T, tags map[string]string) error {
 	if resp != nil {
 		t.Cleanup(func() {
-			w.Jobs.DeleteByJobId(ctx, resp.JobId)
+			_ = w.Jobs.DeleteByJobId(ctx, resp.JobId)
+			// Cannot enable errchecking there, tests fail with:
+			// Error: Received unexpected error:
+			// Job 0 does not exist.
+			// require.NoError(t, err)
 		})
 	}

View File

@@ -51,6 +51,10 @@ func GetEnvOrSkipTest(t *testing.T, name string) string {
 // Changes into specified directory for the duration of the test.
 // Returns the current working directory.
 func Chdir(t *testing.T, dir string) string {
+	// Prevent parallel execution when changing the working directory.
+	// t.Setenv automatically fails if t.Parallel is set.
+	t.Setenv("DO_NOT_RUN_IN_PARALLEL", "true")
 	wd, err := os.Getwd()
 	require.NoError(t, err)
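The guard relies on documented testing-package behavior: t.Setenv panics when the test has already called t.Parallel, so setting any variable (the name DO_NOT_RUN_IN_PARALLEL is arbitrary) forces serial execution. A minimal sketch of the same trick:

package demo

import (
	"os"
	"testing"
)

// TestSerialOnly panics at the t.Setenv call if a t.Parallel() is ever added,
// making the "must run serially" requirement self-enforcing.
func TestSerialOnly(t *testing.T) {
	t.Setenv("DO_NOT_RUN_IN_PARALLEL", "true")
	// Safe to change the process-wide working directory: no parallel test shares it.
	if err := os.Chdir(t.TempDir()); err != nil {
		t.Fatal(err)
	}
}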

View File

@@ -15,6 +15,7 @@ import (
 	"github.com/databricks/databricks-sdk-go/httpclient/fixtures"
 	"github.com/databricks/databricks-sdk-go/qa"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 	"golang.org/x/oauth2"
 )
@@ -182,7 +183,8 @@ func TestChallenge(t *testing.T) {
 	state := <-browserOpened
 	resp, err := http.Get(fmt.Sprintf("http://%s?code=__THIS__&state=%s", appRedirectAddr, state))
-	assert.NoError(t, err)
+	require.NoError(t, err)
+	defer resp.Body.Close()
 	assert.Equal(t, 200, resp.StatusCode)
 	err = <-errc
@@ -221,7 +223,8 @@ func TestChallengeFailed(t *testing.T) {
 	resp, err := http.Get(fmt.Sprintf(
 		"http://%s?error=access_denied&error_description=Policy%%20evaluation%%20failed%%20for%%20this%%20request",
 		appRedirectAddr))
-	assert.NoError(t, err)
+	require.NoError(t, err)
+	defer resp.Body.Close()
 	assert.Equal(t, 400, resp.StatusCode)
 	err = <-errc

View File

@@ -100,7 +100,7 @@ func trimRightSpace(s string) string {
 }
 // tmpl executes the given template text on data, writing the result to w.
-func tmpl(w io.Writer, text string, data interface{}) error {
+func tmpl(w io.Writer, text string, data any) error {
 	t := template.New("top")
 	t.Funcs(templateFuncs)
 	template.Must(t.Parse(text))

View File

@@ -42,7 +42,8 @@ func TestCommandFlagGrouping(t *testing.T) {
 	buf := bytes.NewBuffer(nil)
 	cmd.SetOutput(buf)
-	cmd.Usage()
+	err := cmd.Usage()
+	require.NoError(t, err)
 	expected := `Usage:
   parent test [flags]

View File

@@ -297,10 +297,10 @@ var renderFuncMap = template.FuncMap{
 	"yellow":  color.YellowString,
 	"magenta": color.MagentaString,
 	"cyan":    color.CyanString,
-	"bold": func(format string, a ...interface{}) string {
+	"bold": func(format string, a ...any) string {
 		return color.New(color.Bold).Sprintf(format, a...)
 	},
-	"italic": func(format string, a ...interface{}) string {
+	"italic": func(format string, a ...any) string {
 		return color.New(color.Italic).Sprintf(format, a...)
 	},
 	"replace": strings.ReplaceAll,

View File

@@ -53,6 +53,19 @@ func FromErr(err error) Diagnostics {
 	}
 }
+// WarningFromErr returns a new warning diagnostic from the specified error, if any.
+func WarningFromErr(err error) Diagnostics {
+	if err == nil {
+		return nil
+	}
+	return []Diagnostic{
+		{
+			Severity: Warning,
+			Summary:  err.Error(),
+		},
+	}
+}
 // Warningf creates a new warning diagnostic.
 func Warningf(format string, args ...any) Diagnostics {
 	return []Diagnostic{
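WarningFromErr complements FromErr for failures that should be surfaced without aborting processing; a nil error produces no diagnostics, so call sites need no nil check. A minimal usage sketch:

package main

import (
	"errors"
	"fmt"

	"github.com/databricks/cli/libs/diag"
)

func main() {
	var diags diag.Diagnostics
	// A non-nil error becomes a single warning diagnostic.
	diags = append(diags, diag.WarningFromErr(errors.New("failed to read git metadata"))...)
	// A nil error contributes nothing, so this append is a no-op.
	diags = append(diags, diag.WarningFromErr(nil)...)
	fmt.Println(len(diags), diags[0].Severity == diag.Warning) // 1 true
}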

View File

@@ -6,6 +6,7 @@ import (
 	"github.com/databricks/cli/libs/diag"
 	"github.com/databricks/cli/libs/dyn"
 	assert "github.com/databricks/cli/libs/dyn/dynassert"
+	"github.com/stretchr/testify/require"
 )
 func TestNormalizeStruct(t *testing.T) {
@@ -20,8 +21,8 @@ func TestNormalizeStruct(t *testing.T) {
 		"bar": dyn.V("baz"),
 	})
-	vout, err := Normalize(typ, vin)
-	assert.Empty(t, err)
+	vout, diags := Normalize(typ, vin)
+	assert.Empty(t, diags)
 	assert.Equal(t, vin, vout)
 }
@@ -37,14 +38,14 @@ func TestNormalizeStructElementDiagnostic(t *testing.T) {
 		"bar": dyn.V(map[string]dyn.Value{"an": dyn.V("error")}),
 	})
-	vout, err := Normalize(typ, vin)
-	assert.Len(t, err, 1)
+	vout, diags := Normalize(typ, vin)
+	assert.Len(t, diags, 1)
 	assert.Equal(t, diag.Diagnostic{
 		Severity:  diag.Warning,
 		Summary:   `expected string, found map`,
 		Locations: []dyn.Location{{}},
 		Paths:     []dyn.Path{dyn.NewPath(dyn.Key("bar"))},
-	}, err[0])
+	}, diags[0])
 	// Elements that encounter an error during normalization are dropped.
 	assert.Equal(t, map[string]any{
@@ -60,17 +61,20 @@ func TestNormalizeStructUnknownField(t *testing.T) {
 	var typ Tmp
 	m := dyn.NewMapping()
-	m.Set(dyn.V("foo"), dyn.V("val-foo"))
+	err := m.Set(dyn.V("foo"), dyn.V("val-foo"))
+	require.NoError(t, err)
 	// Set the unknown field, with location information.
-	m.Set(dyn.NewValue("bar", []dyn.Location{
+	err = m.Set(dyn.NewValue("bar", []dyn.Location{
 		{File: "hello.yaml", Line: 1, Column: 1},
 		{File: "world.yaml", Line: 2, Column: 2},
 	}), dyn.V("var-bar"))
+	require.NoError(t, err)
 	vin := dyn.V(m)
-	vout, err := Normalize(typ, vin)
-	assert.Len(t, err, 1)
+	vout, diags := Normalize(typ, vin)
+	assert.Len(t, diags, 1)
 	assert.Equal(t, diag.Diagnostic{
 		Severity: diag.Warning,
 		Summary:  `unknown field: bar`,
@@ -80,7 +84,7 @@ func TestNormalizeStructUnknownField(t *testing.T) {
 		{File: "world.yaml", Line: 2, Column: 2},
 	},
 	Paths: []dyn.Path{dyn.EmptyPath},
-	}, err[0])
+	}, diags[0])
 	// The field that can be mapped to the struct field is retained.
 	assert.Equal(t, map[string]any{

View File

@@ -5,7 +5,7 @@ import (
 	"github.com/stretchr/testify/assert"
 )
-func Equal(t assert.TestingT, expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {
+func Equal(t assert.TestingT, expected any, actual any, msgAndArgs ...any) bool {
 	ev, eok := expected.(dyn.Value)
 	av, aok := actual.(dyn.Value)
 	if eok && aok && ev.IsValid() && av.IsValid() {
@@ -32,86 +32,86 @@ func Equal(t assert.TestingT, expected interface{}, actual interface{}, msgAndAr
 	return assert.Equal(t, expected, actual, msgAndArgs...)
 }
-func EqualValues(t assert.TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool {
+func EqualValues(t assert.TestingT, expected, actual any, msgAndArgs ...any) bool {
 	return assert.EqualValues(t, expected, actual, msgAndArgs...)
 }
-func NotEqual(t assert.TestingT, expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool {
+func NotEqual(t assert.TestingT, expected any, actual any, msgAndArgs ...any) bool {
 	return assert.NotEqual(t, expected, actual, msgAndArgs...)
 }
-func Len(t assert.TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool {
+func Len(t assert.TestingT, object any, length int, msgAndArgs ...any) bool {
 	return assert.Len(t, object, length, msgAndArgs...)
 }
-func Empty(t assert.TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+func Empty(t assert.TestingT, object any, msgAndArgs ...any) bool {
 	return assert.Empty(t, object, msgAndArgs...)
 }
-func Nil(t assert.TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+func Nil(t assert.TestingT, object any, msgAndArgs ...any) bool {
 	return assert.Nil(t, object, msgAndArgs...)
 }
-func NotNil(t assert.TestingT, object interface{}, msgAndArgs ...interface{}) bool {
+func NotNil(t assert.TestingT, object any, msgAndArgs ...any) bool {
 	return assert.NotNil(t, object, msgAndArgs...)
 }
-func NoError(t assert.TestingT, err error, msgAndArgs ...interface{}) bool {
+func NoError(t assert.TestingT, err error, msgAndArgs ...any) bool {
 	return assert.NoError(t, err, msgAndArgs...)
 }
-func Error(t assert.TestingT, err error, msgAndArgs ...interface{}) bool {
+func Error(t assert.TestingT, err error, msgAndArgs ...any) bool {
 	return assert.Error(t, err, msgAndArgs...)
 }
-func EqualError(t assert.TestingT, theError error, errString string, msgAndArgs ...interface{}) bool {
+func EqualError(t assert.TestingT, theError error, errString string, msgAndArgs ...any) bool {
 	return assert.EqualError(t, theError, errString, msgAndArgs...)
 }
-func ErrorContains(t assert.TestingT, theError error, contains string, msgAndArgs ...interface{}) bool {
+func ErrorContains(t assert.TestingT, theError error, contains string, msgAndArgs ...any) bool {
 	return assert.ErrorContains(t, theError, contains, msgAndArgs...)
 }
-func ErrorIs(t assert.TestingT, theError, target error, msgAndArgs ...interface{}) bool {
+func ErrorIs(t assert.TestingT, theError, target error, msgAndArgs ...any) bool {
 	return assert.ErrorIs(t, theError, target, msgAndArgs...)
 }
-func True(t assert.TestingT, value bool, msgAndArgs ...interface{}) bool {
+func True(t assert.TestingT, value bool, msgAndArgs ...any) bool {
 	return assert.True(t, value, msgAndArgs...)
 }
-func False(t assert.TestingT, value bool, msgAndArgs ...interface{}) bool {
+func False(t assert.TestingT, value bool, msgAndArgs ...any) bool {
 	return assert.False(t, value, msgAndArgs...)
 }
-func Contains(t assert.TestingT, list interface{}, element interface{}, msgAndArgs ...interface{}) bool {
+func Contains(t assert.TestingT, list any, element any, msgAndArgs ...any) bool {
 	return assert.Contains(t, list, element, msgAndArgs...)
 }
-func NotContains(t assert.TestingT, list interface{}, element interface{}, msgAndArgs ...interface{}) bool {
+func NotContains(t assert.TestingT, list any, element any, msgAndArgs ...any) bool {
 	return assert.NotContains(t, list, element, msgAndArgs...)
 }
-func ElementsMatch(t assert.TestingT, listA, listB interface{}, msgAndArgs ...interface{}) bool {
+func ElementsMatch(t assert.TestingT, listA, listB any, msgAndArgs ...any) bool {
 	return assert.ElementsMatch(t, listA, listB, msgAndArgs...)
 }
-func Panics(t assert.TestingT, f func(), msgAndArgs ...interface{}) bool {
+func Panics(t assert.TestingT, f func(), msgAndArgs ...any) bool {
 	return assert.Panics(t, f, msgAndArgs...)
 }
-func PanicsWithValue(t assert.TestingT, expected interface{}, f func(), msgAndArgs ...interface{}) bool {
+func PanicsWithValue(t assert.TestingT, expected any, f func(), msgAndArgs ...any) bool {
 	return assert.PanicsWithValue(t, expected, f, msgAndArgs...)
 }
-func PanicsWithError(t assert.TestingT, errString string, f func(), msgAndArgs ...interface{}) bool {
+func PanicsWithError(t assert.TestingT, errString string, f func(), msgAndArgs ...any) bool {
 	return assert.PanicsWithError(t, errString, f, msgAndArgs...)
 }
-func NotPanics(t assert.TestingT, f func(), msgAndArgs ...interface{}) bool {
+func NotPanics(t assert.TestingT, f func(), msgAndArgs ...any) bool {
 	return assert.NotPanics(t, f, msgAndArgs...)
 }
-func JSONEq(t assert.TestingT, expected string, actual string, msgAndArgs ...interface{}) bool {
+func JSONEq(t assert.TestingT, expected string, actual string, msgAndArgs ...any) bool {
 	return assert.JSONEq(t, expected, actual, msgAndArgs...)
 }

View File

@@ -5,6 +5,7 @@ import (
 	"github.com/databricks/cli/libs/dyn"
 	assert "github.com/databricks/cli/libs/dyn/dynassert"
+	"github.com/stretchr/testify/require"
 )
 func TestMarshal_String(t *testing.T) {
@@ -44,8 +45,8 @@ func TestMarshal_Time(t *testing.T) {
 func TestMarshal_Map(t *testing.T) {
 	m := dyn.NewMapping()
-	m.Set(dyn.V("key1"), dyn.V("value1"))
-	m.Set(dyn.V("key2"), dyn.V("value2"))
+	require.NoError(t, m.Set(dyn.V("key1"), dyn.V("value1")))
+	require.NoError(t, m.Set(dyn.V("key2"), dyn.V("value2")))
 	b, err := Marshal(dyn.V(m))
 	if assert.NoError(t, err) {
@@ -66,16 +67,16 @@ func TestMarshal_Sequence(t *testing.T) {
 func TestMarshal_Complex(t *testing.T) {
 	map1 := dyn.NewMapping()
-	map1.Set(dyn.V("str1"), dyn.V("value1"))
-	map1.Set(dyn.V("str2"), dyn.V("value2"))
+	require.NoError(t, map1.Set(dyn.V("str1"), dyn.V("value1")))
+	require.NoError(t, map1.Set(dyn.V("str2"), dyn.V("value2")))
 	seq1 := []dyn.Value{}
 	seq1 = append(seq1, dyn.V("value1"))
 	seq1 = append(seq1, dyn.V("value2"))
 	root := dyn.NewMapping()
-	root.Set(dyn.V("map1"), dyn.V(map1))
-	root.Set(dyn.V("seq1"), dyn.V(seq1))
+	require.NoError(t, root.Set(dyn.V("map1"), dyn.V(map1)))
+	require.NoError(t, root.Set(dyn.V("seq1"), dyn.V(seq1)))
 	// Marshal without indent.
 	b, err := Marshal(dyn.V(root))

View File

@@ -432,10 +432,12 @@ func TestOverride_PreserveMappingKeys(t *testing.T) {
 	rightValueLocation := dyn.Location{File: "right.yml", Line: 3, Column: 1}
 	left := dyn.NewMapping()
-	left.Set(dyn.NewValue("a", []dyn.Location{leftKeyLocation}), dyn.NewValue(42, []dyn.Location{leftValueLocation}))
+	err := left.Set(dyn.NewValue("a", []dyn.Location{leftKeyLocation}), dyn.NewValue(42, []dyn.Location{leftValueLocation}))
+	require.NoError(t, err)
 	right := dyn.NewMapping()
-	right.Set(dyn.NewValue("a", []dyn.Location{rightKeyLocation}), dyn.NewValue(7, []dyn.Location{rightValueLocation}))
+	err = right.Set(dyn.NewValue("a", []dyn.Location{rightKeyLocation}), dyn.NewValue(7, []dyn.Location{rightValueLocation}))
+	require.NoError(t, err)
 	state, visitor := createVisitor(visitorOpts{})

View File

@@ -14,7 +14,7 @@ type loader struct {
 	path string
 }
-func errorf(loc dyn.Location, format string, args ...interface{}) error {
+func errorf(loc dyn.Location, format string, args ...any) error {
 	return fmt.Errorf("yaml (%s): %s", loc, fmt.Sprintf(format, args...))
 }

View File

@@ -59,7 +59,7 @@ func (ec errorChain) Unwrap() error {
 	return ec[1:]
 }
-func (ec errorChain) As(target interface{}) bool {
+func (ec errorChain) As(target any) bool {
 	return errors.As(ec[0], target)
 }

View File

@@ -12,6 +12,7 @@ import (
 	"testing"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 func TestExecutorWithSimpleInput(t *testing.T) {
@@ -86,9 +87,11 @@ func testExecutorWithShell(t *testing.T, shell string) {
 	tmpDir := t.TempDir()
 	t.Setenv("PATH", tmpDir)
 	if runtime.GOOS == "windows" {
-		os.Symlink(p, fmt.Sprintf("%s/%s.exe", tmpDir, shell))
+		err = os.Symlink(p, fmt.Sprintf("%s/%s.exe", tmpDir, shell))
+		require.NoError(t, err)
 	} else {
-		os.Symlink(p, fmt.Sprintf("%s/%s", tmpDir, shell))
+		err = os.Symlink(p, fmt.Sprintf("%s/%s", tmpDir, shell))
+		require.NoError(t, err)
 	}
 	executor, err := NewCommandExecutor(".")

View File

@@ -62,7 +62,8 @@ func TestJsonFlagFile(t *testing.T) {
 	{
 		f, err := os.Create(path.Join(t.TempDir(), "file"))
 		require.NoError(t, err)
-		f.Write(payload)
+		_, err = f.Write(payload)
+		require.NoError(t, err)
 		f.Close()
 		fpath = f.Name()
 	}

View File

@@ -13,10 +13,10 @@ type FileSet struct {
 	view *View
 }
-// NewFileSet returns [FileSet] for the Git repository located at `root`.
-func NewFileSet(root vfs.Path, paths ...[]string) (*FileSet, error) {
+// NewFileSet returns [FileSet] for the directory `root` which is contained within Git worktree located at `worktreeRoot`.
+func NewFileSet(worktreeRoot, root vfs.Path, paths ...[]string) (*FileSet, error) {
 	fs := fileset.New(root, paths...)
-	v, err := NewView(root)
+	v, err := NewView(worktreeRoot, root)
 	if err != nil {
 		return nil, err
 	}
@@ -27,6 +27,10 @@ func NewFileSet(root vfs.Path, paths ...[]string) (*FileSet, error) {
 	}, nil
 }
+func NewFileSetAtRoot(root vfs.Path, paths ...[]string) (*FileSet, error) {
+	return NewFileSet(root, root, paths...)
+}
 func (f *FileSet) IgnoreFile(file string) (bool, error) {
 	return f.view.IgnoreFile(file)
 }
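The two constructors encode the common cases: NewFileSetAtRoot lists a whole worktree, while NewFileSet lists a subdirectory but still resolves .gitignore rules from the enclosing worktree. A hedged sketch of both call shapes (the paths are hypothetical):

package main

import (
	"fmt"

	"github.com/databricks/cli/libs/git"
	"github.com/databricks/cli/libs/vfs"
)

func main() {
	// Worktree root and listing root coincide.
	whole, err := git.NewFileSetAtRoot(vfs.MustNew("/repo"))
	if err != nil {
		panic(err)
	}
	// List only /repo/sub, but honor ignore rules defined at /repo.
	sub, err := git.NewFileSet(vfs.MustNew("/repo"), vfs.MustNew("/repo/sub"))
	if err != nil {
		panic(err)
	}
	fmt.Println(whole, sub)
}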

View File

@@ -12,8 +12,8 @@ import (
 	"github.com/stretchr/testify/require"
 )
-func testFileSetAll(t *testing.T, root string) {
-	fileSet, err := NewFileSet(vfs.MustNew(root))
+func testFileSetAll(t *testing.T, worktreeRoot, root string) {
+	fileSet, err := NewFileSet(vfs.MustNew(worktreeRoot), vfs.MustNew(root))
 	require.NoError(t, err)
 	files, err := fileSet.Files()
 	require.NoError(t, err)
@@ -24,18 +24,28 @@ func testFileSetAll(t *testing.T, worktreeRoot, root string) {
 }
 func TestFileSetListAllInRepo(t *testing.T) {
-	testFileSetAll(t, "./testdata")
+	testFileSetAll(t, "./testdata", "./testdata")
+}
+func TestFileSetListAllInRepoDifferentRoot(t *testing.T) {
+	testFileSetAll(t, ".", "./testdata")
 }
 func TestFileSetListAllInTempDir(t *testing.T) {
-	testFileSetAll(t, copyTestdata(t, "./testdata"))
+	dir := copyTestdata(t, "./testdata")
+	testFileSetAll(t, dir, dir)
+}
+func TestFileSetListAllInTempDirDifferentRoot(t *testing.T) {
+	dir := copyTestdata(t, "./testdata")
+	testFileSetAll(t, filepath.Dir(dir), dir)
 }
 func TestFileSetNonCleanRoot(t *testing.T) {
 	// Test what happens if the root directory can be simplified.
 	// Path simplification is done by most filepath functions.
 	// This should yield the same result as above test.
-	fileSet, err := NewFileSet(vfs.MustNew("./testdata/../testdata"))
+	fileSet, err := NewFileSetAtRoot(vfs.MustNew("./testdata/../testdata"))
 	require.NoError(t, err)
 	files, err := fileSet.Files()
 	require.NoError(t, err)
@@ -44,9 +54,10 @@ func TestFileSetNonCleanRoot(t *testing.T) {
 func TestFileSetAddsCacheDirToGitIgnore(t *testing.T) {
 	projectDir := t.TempDir()
-	fileSet, err := NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
-	fileSet.EnsureValidGitIgnoreExists()
+	err = fileSet.EnsureValidGitIgnoreExists()
+	require.NoError(t, err)
 	gitIgnorePath := filepath.Join(projectDir, ".gitignore")
 	assert.FileExists(t, gitIgnorePath)
@@ -59,12 +70,13 @@ func TestFileSetDoesNotCacheDirToGitIgnoreIfAlreadyPresent(t *testing.T) {
 	projectDir := t.TempDir()
 	gitIgnorePath := filepath.Join(projectDir, ".gitignore")
-	fileSet, err := NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	err = os.WriteFile(gitIgnorePath, []byte(".databricks"), 0o644)
 	require.NoError(t, err)
-	fileSet.EnsureValidGitIgnoreExists()
+	err = fileSet.EnsureValidGitIgnoreExists()
+	require.NoError(t, err)
 	b, err := os.ReadFile(gitIgnorePath)
 	require.NoError(t, err)

libs/git/info.go (new file, 161 lines)
View File

@@ -0,0 +1,161 @@
package git
import (
"context"
"errors"
"io/fs"
"net/http"
"os"
"path"
"path/filepath"
"strings"
"github.com/databricks/cli/libs/dbr"
"github.com/databricks/cli/libs/log"
"github.com/databricks/cli/libs/vfs"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/client"
)
type RepositoryInfo struct {
// Various metadata about the repo. Each field may be "" if it could not be read; no error is returned in that case.
OriginURL string
LatestCommit string
CurrentBranch string
// Absolute path to determined worktree root or "" if worktree root could not be determined.
WorktreeRoot string
}
type gitInfo struct {
Branch string `json:"branch"`
HeadCommitID string `json:"head_commit_id"`
Path string `json:"path"`
URL string `json:"url"`
}
type response struct {
GitInfo *gitInfo `json:"git_info,omitempty"`
}
// Fetch repository information either by querying .git or by fetching it from the API (for the dabs-in-workspace case).
// - In case we could not find git repository, all string fields of RepositoryInfo will be "" and err will be nil.
// - If there were any errors when trying to determine git root (e.g. API call returned an error or there were permission issues
// reading the file system), all strings fields of RepositoryInfo will be "" and err will be non-nil.
// - If we could determine git worktree root but there were errors when reading metadata (origin, branch, commit), those errors
// will be logged as warnings, RepositoryInfo is guaranteed to have non-empty WorktreeRoot and other fields on best effort basis.
// - In successful case, all fields are set to proper git repository metadata.
func FetchRepositoryInfo(ctx context.Context, path string, w *databricks.WorkspaceClient) (RepositoryInfo, error) {
if strings.HasPrefix(path, "/Workspace/") && dbr.RunsOnRuntime(ctx) {
return fetchRepositoryInfoAPI(ctx, path, w)
} else {
return fetchRepositoryInfoDotGit(ctx, path)
}
}
func fetchRepositoryInfoAPI(ctx context.Context, path string, w *databricks.WorkspaceClient) (RepositoryInfo, error) {
result := RepositoryInfo{}
apiClient, err := client.New(w.Config)
if err != nil {
return result, err
}
var response response
const apiEndpoint = "/api/2.0/workspace/get-status"
err = apiClient.Do(
ctx,
http.MethodGet,
apiEndpoint,
nil,
map[string]string{
"path": path,
"return_git_info": "true",
},
&response,
)
if err != nil {
return result, err
}
// Check if GitInfo is present and extract relevant fields
gi := response.GitInfo
if gi != nil {
fixedPath := ensureWorkspacePrefix(gi.Path)
result.OriginURL = gi.URL
result.LatestCommit = gi.HeadCommitID
result.CurrentBranch = gi.Branch
result.WorktreeRoot = fixedPath
} else {
log.Warnf(ctx, "Failed to load git info from %s", apiEndpoint)
}
return result, nil
}
func ensureWorkspacePrefix(p string) string {
if !strings.HasPrefix(p, "/Workspace/") {
return path.Join("/Workspace", p)
}
return p
}
func fetchRepositoryInfoDotGit(ctx context.Context, path string) (RepositoryInfo, error) {
result := RepositoryInfo{}
rootDir, err := findLeafInTree(path, GitDirectoryName)
if rootDir == "" {
return result, err
}
result.WorktreeRoot = rootDir
repo, err := NewRepository(vfs.MustNew(rootDir))
if err != nil {
log.Warnf(ctx, "failed to read .git: %s", err)
// return early since operations below won't work
return result, nil
}
result.OriginURL = repo.OriginUrl()
result.CurrentBranch, err = repo.CurrentBranch()
if err != nil {
log.Warnf(ctx, "failed to load current branch: %s", err)
}
result.LatestCommit, err = repo.LatestCommit()
if err != nil {
log.Warnf(ctx, "failed to load latest commit: %s", err)
}
return result, nil
}
func findLeafInTree(p string, leafName string) (string, error) {
var err error
for i := 0; i < 10000; i++ {
_, err = os.Stat(filepath.Join(p, leafName))
if err == nil {
// Found [leafName] in p
return p, nil
}
// ErrNotExist means we continue traversal up the tree.
if errors.Is(err, fs.ErrNotExist) {
parent := filepath.Dir(p)
if parent == p {
return "", nil
}
p = parent
continue
}
break
}
return "", err
}
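A minimal sketch of calling the new entry point; the checkout path is hypothetical, and the workspace client is only consulted when running on DBR against a /Workspace/ path:

package main

import (
	"context"
	"fmt"

	"github.com/databricks/cli/libs/git"
	"github.com/databricks/databricks-sdk-go"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient()) // assumes ambient auth configuration
	// Locally this walks up from the path looking for .git; on DBR under
	// /Workspace/ it asks the workspace API for git info instead.
	info, err := git.FetchRepositoryInfo(ctx, "/path/to/checkout", w)
	if err != nil {
		fmt.Println("could not determine git root:", err)
		return
	}
	fmt.Println(info.WorktreeRoot, info.CurrentBranch, info.LatestCommit, info.OriginURL)
}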

View File

@@ -54,7 +54,8 @@ func TestReferenceLoadingForObjectID(t *testing.T) {
 	f, err := os.Create(filepath.Join(tmp, "HEAD"))
 	require.NoError(t, err)
 	defer f.Close()
-	f.WriteString(strings.Repeat("e", 40) + "\r\n")
+	_, err = f.WriteString(strings.Repeat("e", 40) + "\r\n")
+	require.NoError(t, err)
 	ref, err := LoadReferenceFile(vfs.MustNew(tmp), "HEAD")
 	assert.NoError(t, err)
@@ -67,7 +68,8 @@ func TestReferenceLoadingForReference(t *testing.T) {
 	f, err := os.OpenFile(filepath.Join(tmp, "HEAD"), os.O_CREATE|os.O_WRONLY, os.ModePerm)
 	require.NoError(t, err)
 	defer f.Close()
-	f.WriteString("ref: refs/heads/foo\n")
+	_, err = f.WriteString("ref: refs/heads/foo\n")
+	require.NoError(t, err)
 	ref, err := LoadReferenceFile(vfs.MustNew(tmp), "HEAD")
 	assert.NoError(t, err)
@@ -80,7 +82,8 @@ func TestReferenceLoadingFailsForInvalidContent(t *testing.T) {
 	f, err := os.OpenFile(filepath.Join(tmp, "HEAD"), os.O_CREATE|os.O_WRONLY, os.ModePerm)
 	require.NoError(t, err)
 	defer f.Close()
-	f.WriteString("abc")
+	_, err = f.WriteString("abc")
+	require.NoError(t, err)
 	_, err = LoadReferenceFile(vfs.MustNew(tmp), "HEAD")
 	assert.ErrorContains(t, err, "unknown format for git HEAD")

View File

@@ -1,9 +1,7 @@
 package git
 import (
-	"errors"
 	"fmt"
-	"io/fs"
 	"net/url"
 	"path"
 	"path/filepath"
@@ -204,17 +202,7 @@ func (r *Repository) Ignore(relPath string) (bool, error) {
 	return false, nil
 }
-func NewRepository(path vfs.Path) (*Repository, error) {
-	rootDir, err := vfs.FindLeafInTree(path, GitDirectoryName)
-	if err != nil {
-		if !errors.Is(err, fs.ErrNotExist) {
-			return nil, err
-		}
-		// Cannot find `.git` directory.
-		// Treat the specified path as a potential repository root checkout.
-		rootDir = path
-	}
+func NewRepository(rootDir vfs.Path) (*Repository, error) {
 	// Derive $GIT_DIR and $GIT_COMMON_DIR paths if this is a real repository.
 	// If it isn't a real repository, they'll point to the (non-existent) `.git` directory.
 	gitDir, gitCommonDir, err := resolveGitDirs(rootDir)

View File

@@ -27,7 +27,7 @@ func newTestRepository(t *testing.T) *testRepository {
 	require.NoError(t, err)
 	defer f1.Close()
-	f1.WriteString(
+	_, err = f1.WriteString(
 		`[core]
 	repositoryformatversion = 0
 	filemode = true
@@ -36,6 +36,7 @@ func newTestRepository(t *testing.T) *testRepository {
 	ignorecase = true
 	precomposeunicode = true
 `)
+	require.NoError(t, err)
 	f2, err := os.Create(filepath.Join(tmp, ".git", "HEAD"))
 	require.NoError(t, err)

View File

@@ -72,8 +72,8 @@ func (v *View) IgnoreDirectory(dir string) (bool, error) {
 	return v.Ignore(dir + "/")
 }
-func NewView(root vfs.Path) (*View, error) {
-	repo, err := NewRepository(root)
+func NewView(worktreeRoot, root vfs.Path) (*View, error) {
+	repo, err := NewRepository(worktreeRoot)
 	if err != nil {
 		return nil, err
 	}
@@ -96,6 +96,10 @@ func NewView(worktreeRoot, root vfs.Path) (*View, error) {
 	}, nil
 }
+func NewViewAtRoot(root vfs.Path) (*View, error) {
+	return NewView(root, root)
+}
 func (v *View) EnsureValidGitIgnoreExists() error {
 	ign, err := v.IgnoreDirectory(".databricks")
 	if err != nil {

View File

@ -90,19 +90,19 @@ func testViewAtRoot(t *testing.T, tv testView) {
} }
func TestViewRootInBricksRepo(t *testing.T) { func TestViewRootInBricksRepo(t *testing.T) {
v, err := NewView(vfs.MustNew("./testdata")) v, err := NewViewAtRoot(vfs.MustNew("./testdata"))
require.NoError(t, err) require.NoError(t, err)
testViewAtRoot(t, testView{t, v}) testViewAtRoot(t, testView{t, v})
} }
func TestViewRootInTempRepo(t *testing.T) { func TestViewRootInTempRepo(t *testing.T) {
v, err := NewView(vfs.MustNew(createFakeRepo(t, "testdata"))) v, err := NewViewAtRoot(vfs.MustNew(createFakeRepo(t, "testdata")))
require.NoError(t, err) require.NoError(t, err)
testViewAtRoot(t, testView{t, v}) testViewAtRoot(t, testView{t, v})
} }
func TestViewRootInTempDir(t *testing.T) { func TestViewRootInTempDir(t *testing.T) {
v, err := NewView(vfs.MustNew(copyTestdata(t, "testdata"))) v, err := NewViewAtRoot(vfs.MustNew(copyTestdata(t, "testdata")))
require.NoError(t, err) require.NoError(t, err)
testViewAtRoot(t, testView{t, v}) testViewAtRoot(t, testView{t, v})
} }
@@ -125,20 +125,21 @@ func testViewAtA(t *testing.T, tv testView) {
 }

 func TestViewAInBricksRepo(t *testing.T) {
-	v, err := NewView(vfs.MustNew("./testdata/a"))
+	v, err := NewView(vfs.MustNew("."), vfs.MustNew("./testdata/a"))
 	require.NoError(t, err)
 	testViewAtA(t, testView{t, v})
 }

 func TestViewAInTempRepo(t *testing.T) {
-	v, err := NewView(vfs.MustNew(filepath.Join(createFakeRepo(t, "testdata"), "a")))
+	repo := createFakeRepo(t, "testdata")
+	v, err := NewView(vfs.MustNew(repo), vfs.MustNew(filepath.Join(repo, "a")))
 	require.NoError(t, err)
 	testViewAtA(t, testView{t, v})
 }

 func TestViewAInTempDir(t *testing.T) {
 	// Since this is not a fake repo it should not traverse up the tree.
-	v, err := NewView(vfs.MustNew(filepath.Join(copyTestdata(t, "testdata"), "a")))
+	v, err := NewViewAtRoot(vfs.MustNew(filepath.Join(copyTestdata(t, "testdata"), "a")))
 	require.NoError(t, err)
 	tv := testView{t, v}
@@ -175,20 +176,21 @@ func testViewAtAB(t *testing.T, tv testView) {
 }

 func TestViewABInBricksRepo(t *testing.T) {
-	v, err := NewView(vfs.MustNew("./testdata/a/b"))
+	v, err := NewView(vfs.MustNew("."), vfs.MustNew("./testdata/a/b"))
 	require.NoError(t, err)
 	testViewAtAB(t, testView{t, v})
 }

 func TestViewABInTempRepo(t *testing.T) {
-	v, err := NewView(vfs.MustNew(filepath.Join(createFakeRepo(t, "testdata"), "a", "b")))
+	repo := createFakeRepo(t, "testdata")
+	v, err := NewView(vfs.MustNew(repo), vfs.MustNew(filepath.Join(repo, "a", "b")))
 	require.NoError(t, err)
 	testViewAtAB(t, testView{t, v})
 }

 func TestViewABInTempDir(t *testing.T) {
 	// Since this is not a fake repo it should not traverse up the tree.
-	v, err := NewView(vfs.MustNew(filepath.Join(copyTestdata(t, "testdata"), "a", "b")))
+	v, err := NewViewAtRoot(vfs.MustNew(filepath.Join(copyTestdata(t, "testdata"), "a", "b")))
 	tv := testView{t, v}
 	require.NoError(t, err)
@@ -215,7 +217,7 @@ func TestViewDoesNotChangeGitignoreIfCacheDirAlreadyIgnoredAtRoot(t *testing.T)
 	// Since root .gitignore already has .databricks, there should be no edits
 	// to root .gitignore
-	v, err := NewView(vfs.MustNew(repoPath))
+	v, err := NewViewAtRoot(vfs.MustNew(repoPath))
 	require.NoError(t, err)

 	err = v.EnsureValidGitIgnoreExists()
@@ -235,7 +237,7 @@ func TestViewDoesNotChangeGitignoreIfCacheDirAlreadyIgnoredInSubdir(t *testing.T
 	// Since root .gitignore already has .databricks, there should be no edits
 	// to a/.gitignore
-	v, err := NewView(vfs.MustNew(filepath.Join(repoPath, "a")))
+	v, err := NewView(vfs.MustNew(repoPath), vfs.MustNew(filepath.Join(repoPath, "a")))
 	require.NoError(t, err)

 	err = v.EnsureValidGitIgnoreExists()
@@ -253,7 +255,7 @@ func TestViewAddsGitignoreWithCacheDir(t *testing.T) {
 	assert.NoError(t, err)

 	// Since root .gitignore was deleted, new view adds .databricks to root .gitignore
-	v, err := NewView(vfs.MustNew(repoPath))
+	v, err := NewViewAtRoot(vfs.MustNew(repoPath))
 	require.NoError(t, err)

 	err = v.EnsureValidGitIgnoreExists()
@@ -271,7 +273,7 @@ func TestViewAddsGitignoreWithCacheDirAtSubdir(t *testing.T) {
 	require.NoError(t, err)

 	// Since root .gitignore was deleted, new view adds .databricks to a/.gitignore
-	v, err := NewView(vfs.MustNew(filepath.Join(repoPath, "a")))
+	v, err := NewView(vfs.MustNew(repoPath), vfs.MustNew(filepath.Join(repoPath, "a")))
 	require.NoError(t, err)

 	err = v.EnsureValidGitIgnoreExists()
@@ -288,7 +290,7 @@ func TestViewAddsGitignoreWithCacheDirAtSubdir(t *testing.T) {
 func TestViewAlwaysIgnoresCacheDir(t *testing.T) {
 	repoPath := createFakeRepo(t, "testdata")

-	v, err := NewView(vfs.MustNew(repoPath))
+	v, err := NewViewAtRoot(vfs.MustNew(repoPath))
 	require.NoError(t, err)

 	err = v.EnsureValidGitIgnoreExists()

View File

@@ -13,7 +13,7 @@ func TestFromTypeBasic(t *testing.T) {
 	type myStruct struct {
 		S string `json:"s"`
 		I *int   `json:"i,omitempty"`
-		V interface{} `json:"v,omitempty"`
+		V any `json:"v,omitempty"`
 		TriplePointer ***int `json:"triple_pointer,omitempty"`

 		// These fields should be ignored in the resulting schema.
@@ -403,7 +403,8 @@ func TestFromTypeError(t *testing.T) {
 	// Maps with non-string keys should panic.
 	type mapOfInts map[int]int
 	assert.PanicsWithValue(t, "found map with non-string key: int", func() {
-		FromType(reflect.TypeOf(mapOfInts{}), nil)
+		_, err := FromType(reflect.TypeOf(mapOfInts{}), nil)
+		require.NoError(t, err)
 	})

 	// Unsupported types should return an error.

View File

@@ -43,8 +43,14 @@ func TestStubCallback(t *testing.T) {
 	ctx := context.Background()
 	ctx, stub := process.WithStub(ctx)
 	stub.WithCallback(func(cmd *exec.Cmd) error {
-		cmd.Stderr.Write([]byte("something..."))
-		cmd.Stdout.Write([]byte("else..."))
+		_, err := cmd.Stderr.Write([]byte("something..."))
+		if err != nil {
+			return err
+		}
+		_, err = cmd.Stdout.Write([]byte("else..."))
+		if err != nil {
+			return err
+		}
 		return fmt.Errorf("yep")
 	})

View File

@@ -34,7 +34,8 @@ func TestFilteringInterpreters(t *testing.T) {
 	rogueBin := filepath.Join(t.TempDir(), "rogue-bin")
 	err := os.Mkdir(rogueBin, 0o777)
 	assert.NoError(t, err)
-	os.Chmod(rogueBin, 0o777)
+	err = os.Chmod(rogueBin, 0o777)
+	assert.NoError(t, err)

 	raw, err := os.ReadFile("testdata/world-writeable/python8.4")
 	assert.NoError(t, err)

View File

@@ -30,7 +30,7 @@ func TestDiff(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{
@@ -94,7 +94,7 @@ func TestSymlinkDiff(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{
@@ -125,7 +125,7 @@ func TestFolderDiff(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{
@@ -170,7 +170,7 @@ func TestPythonNotebookDiff(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{
@@ -245,7 +245,7 @@ func TestErrorWhenIdenticalRemoteName(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{
@@ -282,7 +282,7 @@ func TestNoErrorRenameWithIdenticalRemoteName(t *testing.T) {

 	// Create temp project dir
 	projectDir := t.TempDir()
-	fileSet, err := git.NewFileSet(vfs.MustNew(projectDir))
+	fileSet, err := git.NewFileSetAtRoot(vfs.MustNew(projectDir))
 	require.NoError(t, err)
 	state := Snapshot{
 		SnapshotState: &SnapshotState{

View File

@@ -19,6 +19,7 @@ import (
 type OutputHandler func(context.Context, <-chan Event)

 type SyncOptions struct {
+	WorktreeRoot vfs.Path
 	LocalRoot vfs.Path
 	Paths     []string
 	Include   []string
@@ -62,7 +63,7 @@ type Sync struct {

 // New initializes and returns a new [Sync] instance.
 func New(ctx context.Context, opts SyncOptions) (*Sync, error) {
-	fileSet, err := git.NewFileSet(opts.LocalRoot, opts.Paths)
+	fileSet, err := git.NewFileSet(opts.WorktreeRoot, opts.LocalRoot, opts.Paths)
 	if err != nil {
 		return nil, err
 	}
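
With the new field, a bundle nested inside a larger repository can sync only its own directory while still honoring `.gitignore` files at the worktree root. A hedged sketch of constructing the updated options, assuming the fields shown above (the paths, import paths, and the `newSync` wrapper are illustrative):

```go
package main

import (
	"context"

	"github.com/databricks/cli/libs/sync"
	"github.com/databricks/cli/libs/vfs"
)

func newSync(ctx context.Context) (*sync.Sync, error) {
	opts := sync.SyncOptions{
		// Root used to locate .git and traverse gitignore files.
		WorktreeRoot: vfs.MustNew("/repo"),
		// Directory whose contents are actually synced.
		LocalRoot: vfs.MustNew("/repo/my_bundle"),
	}
	return sync.New(ctx, opts)
}
```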

View File

@@ -37,7 +37,7 @@ func TestGetFileSet(t *testing.T) {

 	dir := setupFiles(t)
 	root := vfs.MustNew(dir)
-	fileSet, err := git.NewFileSet(root)
+	fileSet, err := git.NewFileSetAtRoot(root)
 	require.NoError(t, err)

 	err = fileSet.EnsureValidGitIgnoreExists()
@@ -103,7 +103,7 @@ func TestRecursiveExclude(t *testing.T) {

 	dir := setupFiles(t)
 	root := vfs.MustNew(dir)
-	fileSet, err := git.NewFileSet(root)
+	fileSet, err := git.NewFileSetAtRoot(root)
 	require.NoError(t, err)

 	err = fileSet.EnsureValidGitIgnoreExists()
@@ -133,7 +133,7 @@ func TestNegateExclude(t *testing.T) {

 	dir := setupFiles(t)
 	root := vfs.MustNew(dir)
-	fileSet, err := git.NewFileSet(root)
+	fileSet, err := git.NewFileSetAtRoot(root)
 	require.NoError(t, err)

 	err = fileSet.EnsureValidGitIgnoreExists()
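
The same root-vs-worktree split applies to file sets: `NewFileSetAtRoot(root)` is the shorthand used throughout these tests, presumably forwarding to `NewFileSet(root, root)` the way `NewViewAtRoot` does. A sketch contrasting the two constructors, with the paths argument mirroring the call in `sync.New` above (directory paths are illustrative, and the forwarding assumption is not confirmed by this diff):

```go
func buildFileSets() error {
	root := vfs.MustNew("/repo")

	// Worktree root and file-set root are the same directory.
	atRoot, err := git.NewFileSetAtRoot(root)
	if err != nil {
		return err
	}
	_ = atRoot

	// File set scoped to a subdirectory, with ignore rules resolved from /repo.
	scoped, err := git.NewFileSet(root, vfs.MustNew("/repo/a"), []string{"."})
	if err != nil {
		return err
	}
	_ = scoped
	return nil
}
```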