Compare commits


No commits in common. "caa8d64ffd0aeef1f0a973f6c0f6cd424578903b" and "90f360b08b8ddb0d4a7c926faf5d2a6eb7f61b45" have entirely different histories.

42 changed files with 95 additions and 1465 deletions

View File

@@ -1 +1 @@
6f6b1371e640f2dfeba72d365ac566368656f6b6
d05898328669a3f8ab0c2ecee37db2673d3ea3f7

.gitattributes vendored
View File

@@ -6,7 +6,6 @@ cmd/account/cmd.go linguist-generated=true
cmd/account/credentials/credentials.go linguist-generated=true
cmd/account/csp-enablement-account/csp-enablement-account.go linguist-generated=true
cmd/account/custom-app-integration/custom-app-integration.go linguist-generated=true
cmd/account/disable-legacy-features/disable-legacy-features.go linguist-generated=true
cmd/account/encryption-keys/encryption-keys.go linguist-generated=true
cmd/account/esm-enablement-account/esm-enablement-account.go linguist-generated=true
cmd/account/groups/groups.go linguist-generated=true
@@ -53,7 +52,6 @@ cmd/workspace/dashboard-widgets/dashboard-widgets.go linguist-generated=true
cmd/workspace/dashboards/dashboards.go linguist-generated=true
cmd/workspace/data-sources/data-sources.go linguist-generated=true
cmd/workspace/default-namespace/default-namespace.go linguist-generated=true
cmd/workspace/disable-legacy-access/disable-legacy-access.go linguist-generated=true
cmd/workspace/enhanced-security-monitoring/enhanced-security-monitoring.go linguist-generated=true
cmd/workspace/experiments/experiments.go linguist-generated=true
cmd/workspace/external-locations/external-locations.go linguist-generated=true
@@ -110,7 +108,6 @@ cmd/workspace/storage-credentials/storage-credentials.go linguist-generated=true
cmd/workspace/system-schemas/system-schemas.go linguist-generated=true
cmd/workspace/table-constraints/table-constraints.go linguist-generated=true
cmd/workspace/tables/tables.go linguist-generated=true
cmd/workspace/temporary-table-credentials/temporary-table-credentials.go linguist-generated=true
cmd/workspace/token-management/token-management.go linguist-generated=true
cmd/workspace/tokens/tokens.go linguist-generated=true
cmd/workspace/users/users.go linguist-generated=true

View File

@@ -1,45 +1,5 @@
# Version changelog
## [Release] Release v0.229.0
Bundles:
* Added support for creating all-purpose clusters ([#1698](https://github.com/databricks/cli/pull/1698)).
* Reduce time until the prompt is shown for bundle run ([#1727](https://github.com/databricks/cli/pull/1727)).
* Use Unity Catalog for pipelines in the default-python template ([#1766](https://github.com/databricks/cli/pull/1766)).
* Add verbose flag to the "bundle deploy" command ([#1774](https://github.com/databricks/cli/pull/1774)).
* Fixed full variable override detection ([#1787](https://github.com/databricks/cli/pull/1787)).
* Add sub-extension to resource files in built-in templates ([#1777](https://github.com/databricks/cli/pull/1777)).
* Fix panic in `apply_presets.go` ([#1796](https://github.com/databricks/cli/pull/1796)).
Internal:
* Assert tokens are redacted in origin URL when username is not specified ([#1785](https://github.com/databricks/cli/pull/1785)).
* Refactor jobs path translation ([#1782](https://github.com/databricks/cli/pull/1782)).
* Add JobTaskClusterSpec validate mutator ([#1784](https://github.com/databricks/cli/pull/1784)).
* Pin Go toolchain to 1.22.7 ([#1790](https://github.com/databricks/cli/pull/1790)).
* Modify SetLocation test utility to take full locations as argument ([#1788](https://github.com/databricks/cli/pull/1788)).
* Simplified isFullVariableOverrideDef implementation ([#1791](https://github.com/databricks/cli/pull/1791)).
* Sort tasks by `task_key` before generating the Terraform configuration ([#1776](https://github.com/databricks/cli/pull/1776)).
* Trim trailing whitespace ([#1794](https://github.com/databricks/cli/pull/1794)).
* Move trampoline code into trampoline package ([#1793](https://github.com/databricks/cli/pull/1793)).
* Rename `RootPath` -> `BundleRootPath` ([#1792](https://github.com/databricks/cli/pull/1792)).
API Changes:
* Changed `databricks apps delete` command to return .
* Changed `databricks apps deploy` command with new required argument order.
* Changed `databricks apps start` command to return .
* Changed `databricks apps stop` command to return .
* Added `databricks temporary-table-credentials` command group.
* Added `databricks serving-endpoints put-ai-gateway` command.
* Added `databricks disable-legacy-access` command group.
* Added `databricks account disable-legacy-features` command group.
OpenAPI commit 6f6b1371e640f2dfeba72d365ac566368656f6b6 (2024-09-19)
Dependency updates:
* Upgrade to Go SDK 0.47.0 ([#1799](https://github.com/databricks/cli/pull/1799)).
* Upgrade to TF provider 1.52 ([#1781](https://github.com/databricks/cli/pull/1781)).
* Bump golang.org/x/mod from 0.20.0 to 0.21.0 ([#1758](https://github.com/databricks/cli/pull/1758)).
* Bump github.com/hashicorp/hc-install from 0.7.0 to 0.9.0 ([#1772](https://github.com/databricks/cli/pull/1772)).
## [Release] Release v0.228.1
Bundles:

View File

@@ -16,9 +16,7 @@ import (
func validateFileFormat(configRoot dyn.Value, filePath string) diag.Diagnostics {
for _, resourceDescription := range config.SupportedResources() {
singularName := resourceDescription.SingularName
for _, ext := range []string{fmt.Sprintf(".%s.yml", singularName), fmt.Sprintf(".%s.yaml", singularName)} {
for _, yamlExt := range []string{"yml", "yaml"} {
ext := fmt.Sprintf(".%s.%s", singularName, yamlExt)
if strings.HasSuffix(filePath, ext) {
return validateSingleResourceDefined(configRoot, ext, singularName)
}
@@ -44,9 +42,9 @@ func validateSingleResourceDefined(configRoot dyn.Value, ext, typ string) diag.D
configRoot,
dyn.NewPattern(dyn.Key("resources"), dyn.AnyKey(), dyn.AnyKey()),
func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
// The key for the resource, e.g. "my_job" for jobs.my_job.
// The key for the resource. Eg: "my_job" for jobs.my_job.
k := p[2].Key()
// The type of the resource, e.g. "job" for jobs.my_job.
// The type of the resource. Eg: "job" for jobs.my_job.
typ := supportedResources[p[1].Key()].SingularName
resources = append(resources, resource{path: p, value: v, typ: typ, key: k})
@@ -61,9 +59,9 @@ func validateSingleResourceDefined(configRoot dyn.Value, ext, typ string) diag.D
configRoot,
dyn.NewPattern(dyn.Key("targets"), dyn.AnyKey(), dyn.Key("resources"), dyn.AnyKey(), dyn.AnyKey()),
func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
// The key for the resource, e.g. "my_job" for jobs.my_job.
// The key for the resource. Eg: "my_job" for jobs.my_job.
k := p[4].Key()
// The type of the resource, e.g. "job" for jobs.my_job.
// The type of the resource. Eg: "job" for jobs.my_job.
typ := supportedResources[p[3].Key()].SingularName
resources = append(resources, resource{path: p, value: v, typ: typ, key: k})

View File

@@ -35,10 +35,8 @@ func (m *applyPresets) Name() string {
}
func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
var diags diag.Diagnostics
if d := validatePauseStatus(b); d != nil {
diags = diags.Extend(d)
return d
}
r := b.Config.Resources
@@ -47,11 +45,7 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
tags := toTagArray(t.Tags)
// Jobs presets: Prefix, Tags, JobsMaxConcurrentRuns, TriggerPauseStatus
for key, j := range r.Jobs {
for _, j := range r.Jobs {
if j.JobSettings == nil {
diags = diags.Extend(diag.Errorf("job %s is not defined", key))
continue
}
j.Name = prefix + j.Name
if j.Tags == nil {
j.Tags = make(map[string]string)
@@ -83,27 +77,20 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
}
// Pipelines presets: Prefix, PipelinesDevelopment
for key, p := range r.Pipelines {
for i := range r.Pipelines {
if p.PipelineSpec == nil {
r.Pipelines[i].Name = prefix + r.Pipelines[i].Name
diags = diags.Extend(diag.Errorf("pipeline %s is not defined", key))
continue
}
p.Name = prefix + p.Name
if config.IsExplicitlyEnabled(t.PipelinesDevelopment) {
p.Development = true
r.Pipelines[i].Development = true
}
if t.TriggerPauseStatus == config.Paused {
p.Continuous = false
r.Pipelines[i].Continuous = false
}
// As of 2024-06, pipelines don't yet support tags
}
// Models presets: Prefix, Tags
for key, m := range r.Models {
for _, m := range r.Models {
if m.Model == nil {
diags = diags.Extend(diag.Errorf("model %s is not defined", key))
continue
}
m.Name = prefix + m.Name
for _, t := range tags {
exists := slices.ContainsFunc(m.Tags, func(modelTag ml.ModelTag) bool {
@@ -117,11 +104,7 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
}
// Experiments presets: Prefix, Tags
for key, e := range r.Experiments {
for _, e := range r.Experiments {
if e.Experiment == nil {
diags = diags.Extend(diag.Errorf("experiment %s is not defined", key))
continue
}
filepath := e.Name
dir := path.Dir(filepath)
base := path.Base(filepath)
@@ -145,60 +128,40 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
}
// Model serving endpoint presets: Prefix
for key, e := range r.ModelServingEndpoints {
for i := range r.ModelServingEndpoints {
if e.CreateServingEndpoint == nil {
r.ModelServingEndpoints[i].Name = normalizePrefix(prefix) + r.ModelServingEndpoints[i].Name
diags = diags.Extend(diag.Errorf("model serving endpoint %s is not defined", key))
continue
}
e.Name = normalizePrefix(prefix) + e.Name
// As of 2024-06, model serving endpoints don't yet support tags
}
// Registered models presets: Prefix
for key, m := range r.RegisteredModels {
for i := range r.RegisteredModels {
if m.CreateRegisteredModelRequest == nil {
r.RegisteredModels[i].Name = normalizePrefix(prefix) + r.RegisteredModels[i].Name
diags = diags.Extend(diag.Errorf("registered model %s is not defined", key))
continue
}
m.Name = normalizePrefix(prefix) + m.Name
// As of 2024-06, registered models don't yet support tags
}
// Quality monitors presets: Schedule
// Quality monitors presets: Prefix
if t.TriggerPauseStatus == config.Paused {
for key, q := range r.QualityMonitors {
for i := range r.QualityMonitors {
if q.CreateMonitor == nil {
diags = diags.Extend(diag.Errorf("quality monitor %s is not defined", key))
continue
}
// Remove all schedules from monitors, since they don't support pausing/unpausing.
// Quality monitors might support the "pause" property in the future, so at the
// CLI level we do respect that property if it is set to "unpaused."
if q.Schedule != nil && q.Schedule.PauseStatus != catalog.MonitorCronSchedulePauseStatusUnpaused {
if r.QualityMonitors[i].Schedule != nil && r.QualityMonitors[i].Schedule.PauseStatus != catalog.MonitorCronSchedulePauseStatusUnpaused {
q.Schedule = nil
r.QualityMonitors[i].Schedule = nil
}
}
}
// Schemas: Prefix
for key, s := range r.Schemas {
for i := range r.Schemas {
if s.CreateSchema == nil {
r.Schemas[i].Name = normalizePrefix(prefix) + r.Schemas[i].Name
diags = diags.Extend(diag.Errorf("schema %s is not defined", key))
continue
}
s.Name = normalizePrefix(prefix) + s.Name
// HTTP API for schemas doesn't yet support tags. It's only supported in
// the Databricks UI and via the SQL API.
}
// Clusters: Prefix, Tags
for key, c := range r.Clusters {
for _, c := range r.Clusters {
if c.ClusterSpec == nil {
diags = diags.Extend(diag.Errorf("cluster %s is not defined", key))
continue
}
c.ClusterName = prefix + c.ClusterName
if c.CustomTags == nil {
c.CustomTags = make(map[string]string)
@@ -212,7 +175,7 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnos
}
}
return diags
return nil
}
func validatePauseStatus(b *bundle.Bundle) diag.Diagnostics {

View File

@@ -251,116 +251,3 @@ func TestApplyPresetsJobsMaxConcurrentRuns(t *testing.T) {
})
}
}
func TestApplyPresetsPrefixWithoutJobSettings(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {}, // no jobsettings inside
},
},
Presets: config.Presets{
NamePrefix: "prefix-",
},
},
}
ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.ApplyPresets())
require.ErrorContains(t, diags.Error(), "job job1 is not defined")
}
func TestApplyPresetsResourceNotDefined(t *testing.T) {
tests := []struct {
resources config.Resources
error string
}{
{
resources: config.Resources{
Jobs: map[string]*resources.Job{
"job1": {}, // no jobsettings inside
},
},
error: "job job1 is not defined",
},
{
resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"pipeline1": {}, // no pipelinespec inside
},
},
error: "pipeline pipeline1 is not defined",
},
{
resources: config.Resources{
Models: map[string]*resources.MlflowModel{
"model1": {}, // no model inside
},
},
error: "model model1 is not defined",
},
{
resources: config.Resources{
Experiments: map[string]*resources.MlflowExperiment{
"experiment1": {}, // no experiment inside
},
},
error: "experiment experiment1 is not defined",
},
{
resources: config.Resources{
ModelServingEndpoints: map[string]*resources.ModelServingEndpoint{
"endpoint1": {}, // no CreateServingEndpoint inside
},
RegisteredModels: map[string]*resources.RegisteredModel{
"model1": {}, // no CreateRegisteredModelRequest inside
},
},
error: "model serving endpoint endpoint1 is not defined",
},
{
resources: config.Resources{
QualityMonitors: map[string]*resources.QualityMonitor{
"monitor1": {}, // no CreateMonitor inside
},
},
error: "quality monitor monitor1 is not defined",
},
{
resources: config.Resources{
Schemas: map[string]*resources.Schema{
"schema1": {}, // no CreateSchema inside
},
},
error: "schema schema1 is not defined",
},
{
resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"cluster1": {}, // no ClusterSpec inside
},
},
error: "cluster cluster1 is not defined",
},
}
for _, tt := range tests {
t.Run(tt.error, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: tt.resources,
Presets: config.Presets{
TriggerPauseStatus: config.Paused,
},
},
}
ctx := context.Background()
diags := bundle.Apply(ctx, b, mutator.ApplyPresets())
require.ErrorContains(t, diags.Error(), tt.error)
})
}
}

View File

@@ -29,10 +29,6 @@ func (m *defineDefaultWorkspacePaths) Apply(ctx context.Context, b *bundle.Bundl
b.Config.Workspace.FilePath = path.Join(root, "files")
}
if b.Config.Workspace.ResourcePath == "" {
b.Config.Workspace.ResourcePath = path.Join(root, "resources")
}
if b.Config.Workspace.ArtifactPath == "" {
b.Config.Workspace.ArtifactPath = path.Join(root, "artifacts")
}

View File

@@ -22,7 +22,6 @@ func TestDefineDefaultWorkspacePaths(t *testing.T) {
diags := bundle.Apply(context.Background(), b, mutator.DefineDefaultWorkspacePaths())
require.NoError(t, diags.Error())
assert.Equal(t, "/files", b.Config.Workspace.FilePath)
assert.Equal(t, "/resources", b.Config.Workspace.ResourcePath)
assert.Equal(t, "/artifacts", b.Config.Workspace.ArtifactPath) assert.Equal(t, "/artifacts", b.Config.Workspace.ArtifactPath)
assert.Equal(t, "/state", b.Config.Workspace.StatePath) assert.Equal(t, "/state", b.Config.Workspace.StatePath)
} }
@ -33,7 +32,6 @@ func TestDefineDefaultWorkspacePathsAlreadySet(t *testing.T) {
Workspace: config.Workspace{ Workspace: config.Workspace{
RootPath: "/", RootPath: "/",
FilePath: "/foo/bar", FilePath: "/foo/bar",
ResourcePath: "/foo/bar",
ArtifactPath: "/foo/bar",
StatePath: "/foo/bar",
},
@@ -42,7 +40,6 @@ func TestDefineDefaultWorkspacePathsAlreadySet(t *testing.T) {
diags := bundle.Apply(context.Background(), b, mutator.DefineDefaultWorkspacePaths())
require.NoError(t, diags.Error())
assert.Equal(t, "/foo/bar", b.Config.Workspace.FilePath)
assert.Equal(t, "/foo/bar", b.Config.Workspace.ResourcePath)
assert.Equal(t, "/foo/bar", b.Config.Workspace.ArtifactPath) assert.Equal(t, "/foo/bar", b.Config.Workspace.ArtifactPath)
assert.Equal(t, "/foo/bar", b.Config.Workspace.StatePath) assert.Equal(t, "/foo/bar", b.Config.Workspace.StatePath)
} }

View File

@@ -33,7 +33,7 @@ func (m *expandWorkspaceRoot) Apply(ctx context.Context, b *bundle.Bundle) diag.
}
if strings.HasPrefix(root, "~/") {
home := fmt.Sprintf("/Workspace/Users/%s", currentUser.UserName)
home := fmt.Sprintf("/Users/%s", currentUser.UserName)
b.Config.Workspace.RootPath = path.Join(home, root[2:])
}

View File

@@ -27,7 +27,7 @@ func TestExpandWorkspaceRoot(t *testing.T) {
}
diags := bundle.Apply(context.Background(), b, mutator.ExpandWorkspaceRoot())
require.NoError(t, diags.Error())
assert.Equal(t, "/Workspace/Users/jane@doe.com/foo", b.Config.Workspace.RootPath)
assert.Equal(t, "/Users/jane@doe.com/foo", b.Config.Workspace.RootPath)
}
func TestExpandWorkspaceRootDoesNothing(t *testing.T) {

View File

@@ -1,67 +0,0 @@
package mutator
import (
"context"
"fmt"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
)
type prependWorkspacePrefix struct{}
// PrependWorkspacePrefix prepends the workspace root path to all paths in the bundle.
func PrependWorkspacePrefix() bundle.Mutator {
return &prependWorkspacePrefix{}
}
func (m *prependWorkspacePrefix) Name() string {
return "PrependWorkspacePrefix"
}
var skipPrefixes = []string{
"/Workspace/",
"/Volumes/",
}
func (m *prependWorkspacePrefix) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
patterns := []dyn.Pattern{
dyn.NewPattern(dyn.Key("workspace"), dyn.Key("root_path")),
dyn.NewPattern(dyn.Key("workspace"), dyn.Key("file_path")),
dyn.NewPattern(dyn.Key("workspace"), dyn.Key("artifact_path")),
dyn.NewPattern(dyn.Key("workspace"), dyn.Key("state_path")),
}
err := b.Config.Mutate(func(v dyn.Value) (dyn.Value, error) {
var err error
for _, pattern := range patterns {
v, err = dyn.MapByPattern(v, pattern, func(p dyn.Path, pv dyn.Value) (dyn.Value, error) {
path, ok := pv.AsString()
if !ok {
return dyn.InvalidValue, fmt.Errorf("expected string, got %s", v.Kind())
}
for _, prefix := range skipPrefixes {
if strings.HasPrefix(path, prefix) {
return pv, nil
}
}
return dyn.NewValue(fmt.Sprintf("/Workspace%s", path), v.Locations()), nil
})
if err != nil {
return dyn.InvalidValue, err
}
}
return v, nil
})
if err != nil {
return diag.FromErr(err)
}
return nil
}

View File

@@ -1,79 +0,0 @@
package mutator
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/databricks-sdk-go/service/iam"
"github.com/stretchr/testify/require"
)
func TestPrependWorkspacePrefix(t *testing.T) {
testCases := []struct {
path string
expected string
}{
{
path: "/Users/test",
expected: "/Workspace/Users/test",
},
{
path: "/Shared/test",
expected: "/Workspace/Shared/test",
},
{
path: "/Workspace/Users/test",
expected: "/Workspace/Users/test",
},
{
path: "/Volumes/Users/test",
expected: "/Volumes/Users/test",
},
}
for _, tc := range testCases {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: tc.path,
ArtifactPath: tc.path,
FilePath: tc.path,
StatePath: tc.path,
},
},
}
diags := bundle.Apply(context.Background(), b, PrependWorkspacePrefix())
require.Empty(t, diags)
require.Equal(t, tc.expected, b.Config.Workspace.RootPath)
require.Equal(t, tc.expected, b.Config.Workspace.ArtifactPath)
require.Equal(t, tc.expected, b.Config.Workspace.FilePath)
require.Equal(t, tc.expected, b.Config.Workspace.StatePath)
}
}
func TestPrependWorkspaceForDefaultConfig(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Bundle: config.Bundle{
Name: "test",
Target: "dev",
},
Workspace: config.Workspace{
CurrentUser: &config.User{
User: &iam.User{
UserName: "jane@doe.com",
},
},
},
},
}
diags := bundle.Apply(context.Background(), b, bundle.Seq(DefineDefaultWorkspaceRoot(), ExpandWorkspaceRoot(), DefineDefaultWorkspacePaths(), PrependWorkspacePrefix()))
require.Empty(t, diags)
require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev", b.Config.Workspace.RootPath)
require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/artifacts", b.Config.Workspace.ArtifactPath)
require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/files", b.Config.Workspace.FilePath)
require.Equal(t, "/Workspace/Users/jane@doe.com/.bundle/test/dev/state", b.Config.Workspace.StatePath)
}

View File

@@ -118,18 +118,15 @@ func findNonUserPath(b *bundle.Bundle) string {
if b.Config.Workspace.RootPath != "" && !containsName(b.Config.Workspace.RootPath) {
return "root_path"
}
if b.Config.Workspace.StatePath != "" && !containsName(b.Config.Workspace.StatePath) {
return "state_path"
}
if b.Config.Workspace.FilePath != "" && !containsName(b.Config.Workspace.FilePath) {
return "file_path"
}
if b.Config.Workspace.ResourcePath != "" && !containsName(b.Config.Workspace.ResourcePath) {
return "resource_path"
}
if b.Config.Workspace.ArtifactPath != "" && !containsName(b.Config.Workspace.ArtifactPath) {
return "artifact_path"
}
if b.Config.Workspace.StatePath != "" && !containsName(b.Config.Workspace.StatePath) {
return "state_path"
}
return "" return ""
} }

View File

@@ -1,72 +0,0 @@
package mutator
import (
"context"
"fmt"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
)
type rewriteWorkspacePrefix struct{}
// RewriteWorkspacePrefix finds any strings in bundle configration that have
// workspace prefix plus workspace path variable used and removes workspace prefix from it.
func RewriteWorkspacePrefix() bundle.Mutator {
return &rewriteWorkspacePrefix{}
}
func (m *rewriteWorkspacePrefix) Name() string {
return "RewriteWorkspacePrefix"
}
func (m *rewriteWorkspacePrefix) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
diags := diag.Diagnostics{}
paths := map[string]string{
"/Workspace/${workspace.root_path}": "${workspace.root_path}",
"/Workspace${workspace.root_path}": "${workspace.root_path}",
"/Workspace/${workspace.file_path}": "${workspace.file_path}",
"/Workspace${workspace.file_path}": "${workspace.file_path}",
"/Workspace/${workspace.artifact_path}": "${workspace.artifact_path}",
"/Workspace${workspace.artifact_path}": "${workspace.artifact_path}",
"/Workspace/${workspace.state_path}": "${workspace.state_path}",
"/Workspace${workspace.state_path}": "${workspace.state_path}",
}
err := b.Config.Mutate(func(root dyn.Value) (dyn.Value, error) {
// Walk through the bundle configuration, check all the string leafs and
// see if any of the prefixes are used in the remote path.
return dyn.Walk(root, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
vv, ok := v.AsString()
if !ok {
return v, nil
}
for path, replacePath := range paths {
if strings.Contains(vv, path) {
newPath := strings.Replace(vv, path, replacePath, 1)
diags = append(diags, diag.Diagnostic{
Severity: diag.Warning,
Summary: fmt.Sprintf("substring %q found in %q. Please update this to %q.", path, vv, newPath),
Detail: "For more information, please refer to: https://docs.databricks.com/en/release-notes/dev-tools/bundles.html#workspace-paths",
Locations: v.Locations(),
Paths: []dyn.Path{p},
})
// Remove the workspace prefix from the string.
return dyn.NewValue(newPath, v.Locations()), nil
}
}
return v, nil
})
})
if err != nil {
return diag.FromErr(err)
}
return diags
}

View File

@@ -1,85 +0,0 @@
package mutator
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/stretchr/testify/require"
)
func TestNoWorkspacePrefixUsed(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/test",
ArtifactPath: "/Workspace/Users/test/artifacts",
FilePath: "/Workspace/Users/test/files",
StatePath: "/Workspace/Users/test/state",
},
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"test_job": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
SparkPythonTask: &jobs.SparkPythonTask{
PythonFile: "/Workspace/${workspace.root_path}/file1.py",
},
},
{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "/Workspace${workspace.file_path}/notebook1",
},
Libraries: []compute.Library{
{
Jar: "/Workspace/${workspace.artifact_path}/jar1.jar",
},
},
},
{
NotebookTask: &jobs.NotebookTask{
NotebookPath: "${workspace.file_path}/notebook2",
},
Libraries: []compute.Library{
{
Jar: "${workspace.artifact_path}/jar2.jar",
},
},
},
},
},
},
},
},
},
}
diags := bundle.Apply(context.Background(), b, RewriteWorkspacePrefix())
require.Len(t, diags, 3)
expectedErrors := map[string]bool{
`substring "/Workspace/${workspace.root_path}" found in "/Workspace/${workspace.root_path}/file1.py". Please update this to "${workspace.root_path}/file1.py".`: true,
`substring "/Workspace${workspace.file_path}" found in "/Workspace${workspace.file_path}/notebook1". Please update this to "${workspace.file_path}/notebook1".`: true,
`substring "/Workspace/${workspace.artifact_path}" found in "/Workspace/${workspace.artifact_path}/jar1.jar". Please update this to "${workspace.artifact_path}/jar1.jar".`: true,
}
for _, d := range diags {
require.Equal(t, d.Severity, diag.Warning)
require.Contains(t, expectedErrors, d.Summary)
delete(expectedErrors, d.Summary)
}
require.Equal(t, "${workspace.root_path}/file1.py", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[0].SparkPythonTask.PythonFile)
require.Equal(t, "${workspace.file_path}/notebook1", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[1].NotebookTask.NotebookPath)
require.Equal(t, "${workspace.artifact_path}/jar1.jar", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[1].Libraries[0].Jar)
require.Equal(t, "${workspace.file_path}/notebook2", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[2].NotebookTask.NotebookPath)
require.Equal(t, "${workspace.artifact_path}/jar2.jar", b.Config.Resources.Jobs["test_job"].JobSettings.Tasks[2].Libraries[0].Jar)
}

View File

@@ -47,18 +47,13 @@ type Workspace struct {
// Remote workspace base path for deployment state, for artifacts, as synchronization target.
// This defaults to "~/.bundle/${bundle.name}/${bundle.target}" where "~" expands to
// the current user's home directory in the workspace (e.g. `/Workspace/Users/jane@doe.com`).
// the current user's home directory in the workspace (e.g. `/Users/jane@doe.com`).
RootPath string `json:"root_path,omitempty"`
// Remote workspace path to synchronize local files to.
// This defaults to "${workspace.root}/files".
FilePath string `json:"file_path,omitempty"`
// Remote workspace path for resources with a presence in the workspace.
// These are kept outside [FilePath] to avoid potential naming collisions.
// This defaults to "${workspace.root}/resources".
ResourcePath string `json:"resource_path,omitempty"`
// Remote workspace path for build artifacts.
// This defaults to "${workspace.root}/artifacts".
ArtifactPath string `json:"artifact_path,omitempty"`

View File

@@ -10,8 +10,6 @@ import (
"github.com/databricks/cli/libs/log"
)
const MaxStateFileSize = 10 * 1024 * 1024 // 10MB
type statePush struct {
filerFactory FilerFactory
}
@@ -37,17 +35,6 @@ func (s *statePush) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostic
}
defer local.Close()
if !b.Config.Bundle.Force {
state, err := local.Stat()
if err != nil {
return diag.FromErr(err)
}
if state.Size() > MaxStateFileSize {
return diag.Errorf("Deployment state file size exceeds the maximum allowed size of %d bytes. Please reduce the number of resources in your bundle, split your bundle into multiple or re-run the command with --force flag.", MaxStateFileSize)
}
}
log.Infof(ctx, "Writing local deployment state file to remote state directory") log.Infof(ctx, "Writing local deployment state file to remote state directory")
err = f.Write(ctx, DeploymentStateFileName, local, filer.CreateParentDirectories, filer.OverwriteIfExists) err = f.Write(ctx, DeploymentStateFileName, local, filer.CreateParentDirectories, filer.OverwriteIfExists)
if err != nil { if err != nil {

View File

@@ -47,17 +47,6 @@ func (l *statePush) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostic
}
defer local.Close()
if !b.Config.Bundle.Force {
state, err := local.Stat()
if err != nil {
return diag.FromErr(err)
}
if state.Size() > deploy.MaxStateFileSize {
return diag.Errorf("Terraform state file size exceeds the maximum allowed size of %d bytes. Please reduce the number of resources in your bundle, split your bundle into multiple or re-run the command with --force flag", deploy.MaxStateFileSize)
}
}
// Upload state file from local cache directory to filer.
cmdio.LogString(ctx, "Updating deployment state...")
log.Infof(ctx, "Writing local state file to remote state directory")

View File

@@ -3,7 +3,6 @@ package terraform
import (
"context"
"encoding/json"
"fmt"
"io" "io"
"testing" "testing"
@ -60,29 +59,3 @@ func TestStatePush(t *testing.T) {
diags := bundle.Apply(ctx, b, m) diags := bundle.Apply(ctx, b, m)
assert.NoError(t, diags.Error()) assert.NoError(t, diags.Error())
} }
func TestStatePushLargeState(t *testing.T) {
mock := mockfiler.NewMockFiler(t)
m := &statePush{
identityFiler(mock),
}
ctx := context.Background()
b := statePushTestBundle(t)
largeState := map[string]any{}
for i := 0; i < 1000000; i++ {
largeState[fmt.Sprintf("field_%d", i)] = i
}
// Write a stale local state file.
writeLocalState(t, ctx, b, largeState)
diags := bundle.Apply(ctx, b, m)
assert.ErrorContains(t, diags.Error(), "Terraform state file size exceeds the maximum allowed size of 10485760 bytes. Please reduce the number of resources in your bundle, split your bundle into multiple or re-run the command with --force flag")
// Force the write.
b = statePushTestBundle(t)
b.Config.Bundle.Force = true
diags = bundle.Apply(ctx, b, m)
assert.NoError(t, diags.Error())
}

View File

@@ -39,16 +39,9 @@ func Initialize() bundle.Mutator {
mutator.MergePipelineClusters(),
mutator.InitializeWorkspaceClient(),
mutator.PopulateCurrentUser(),
mutator.DefineDefaultWorkspaceRoot(),
mutator.ExpandWorkspaceRoot(),
mutator.DefineDefaultWorkspacePaths(),
mutator.PrependWorkspacePrefix(),
// This mutator needs to be run before variable interpolation because it
// searches for strings with variable references in them.
mutator.RewriteWorkspacePrefix(),
mutator.SetVariables(),
// Intentionally placed before ResolveVariableReferencesInLookup, ResolveResourceReferences,
// ResolveVariableReferencesInComplexVariables and ResolveVariableReferences.

View File

@@ -59,127 +59,6 @@
"cli": {
"bundle": {
"config": {
"resources.Cluster": {
"anyOf": [
{
"type": "object",
"properties": {
"apply_policy_default_values": {
"description": "When set to true, fixed and default values from the policy will be used for fields that are omitted. When set to false, only fixed values from the policy will be applied.",
"$ref": "#/$defs/bool"
},
"autoscale": {
"description": "Parameters needed in order to automatically scale clusters up and down based on load.\nNote: autoscaling works best with DB runtime versions 3.0 or later.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.AutoScale"
},
"autotermination_minutes": {
"description": "Automatically terminates the cluster after it is inactive for this time in minutes. If not set,\nthis cluster will not be automatically terminated. If specified, the threshold must be between\n10 and 10000 minutes.\nUsers can also set this value to 0 to explicitly disable automatic termination.",
"$ref": "#/$defs/int"
},
"aws_attributes": {
"description": "Attributes related to clusters running on Amazon Web Services.\nIf not specified at cluster creation, a set of default values will be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes"
},
"azure_attributes": {
"description": "Attributes related to clusters running on Microsoft Azure.\nIf not specified at cluster creation, a set of default values will be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.AzureAttributes"
},
"cluster_log_conf": {
"description": "The configuration for delivering spark logs to a long-term storage destination.\nTwo kinds of destinations (dbfs and s3) are supported. Only one destination can be specified\nfor one cluster. If the conf is given, the logs will be delivered to the destination every\n`5 mins`. The destination of driver logs is `$destination/$clusterId/driver`, while\nthe destination of executor logs is `$destination/$clusterId/executor`.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.ClusterLogConf"
},
"cluster_name": {
"description": "Cluster name requested by the user. This doesn't have to be unique.\nIf not specified at creation, the cluster name will be an empty string.\n",
"$ref": "#/$defs/string"
},
"custom_tags": {
"description": "Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS\ninstances and EBS volumes) with these tags in addition to `default_tags`. Notes:\n\n- Currently, Databricks allows at most 45 custom tags\n\n- Clusters can only reuse cloud resources if the resources' tags are a subset of the cluster tags",
"$ref": "#/$defs/map/string"
},
"data_security_mode": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.DataSecurityMode"
},
"docker_image": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.DockerImage"
},
"driver_instance_pool_id": {
"description": "The optional ID of the instance pool for the driver of the cluster belongs.\nThe pool cluster uses the instance pool with id (instance_pool_id) if the driver pool is not\nassigned.",
"$ref": "#/$defs/string"
},
"driver_node_type_id": {
"description": "The node type of the Spark driver. Note that this field is optional;\nif unset, the driver node type will be set as the same value\nas `node_type_id` defined above.\n",
"$ref": "#/$defs/string"
},
"enable_elastic_disk": {
"description": "Autoscaling Local Storage: when enabled, this cluster will dynamically acquire additional disk\nspace when its Spark workers are running low on disk space. This feature requires specific AWS\npermissions to function correctly - refer to the User Guide for more details.",
"$ref": "#/$defs/bool"
},
"enable_local_disk_encryption": {
"description": "Whether to enable LUKS on cluster VMs' local disks",
"$ref": "#/$defs/bool"
},
"gcp_attributes": {
"description": "Attributes related to clusters running on Google Cloud Platform.\nIf not specified at cluster creation, a set of default values will be used.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.GcpAttributes"
},
"init_scripts": {
"description": "The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If `cluster_log_conf` is specified, init script logs are sent to `\u003cdestination\u003e/\u003ccluster-ID\u003e/init_scripts`.",
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/compute.InitScriptInfo"
},
"instance_pool_id": {
"description": "The optional ID of the instance pool to which the cluster belongs.",
"$ref": "#/$defs/string"
},
"node_type_id": {
"description": "This field encodes, through a single value, the resources available to each of\nthe Spark nodes in this cluster. For example, the Spark nodes can be provisioned\nand optimized for memory or compute intensive workloads. A list of available node\ntypes can be retrieved by using the :method:clusters/listNodeTypes API call.\n",
"$ref": "#/$defs/string"
},
"num_workers": {
"description": "Number of worker nodes that this cluster should have. A cluster has one Spark Driver\nand `num_workers` Executors for a total of `num_workers` + 1 Spark nodes.\n\nNote: When reading the properties of a cluster, this field reflects the desired number\nof workers rather than the actual current number of workers. For instance, if a cluster\nis resized from 5 to 10 workers, this field will immediately be updated to reflect\nthe target size of 10 workers, whereas the workers listed in `spark_info` will gradually\nincrease from 5 to 10 as the new nodes are provisioned.",
"$ref": "#/$defs/int"
},
"permissions": {
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission"
},
"policy_id": {
"description": "The ID of the cluster policy used to create the cluster if applicable.",
"$ref": "#/$defs/string"
},
"runtime_engine": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.RuntimeEngine"
},
"single_user_name": {
"description": "Single user name if data_security_mode is `SINGLE_USER`",
"$ref": "#/$defs/string"
},
"spark_conf": {
"description": "An object containing a set of optional, user-specified Spark configuration key-value pairs.\nUsers can also pass in a string of extra JVM options to the driver and the executors via\n`spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` respectively.\n",
"$ref": "#/$defs/map/string"
},
"spark_env_vars": {
"description": "An object containing a set of optional, user-specified environment variable key-value pairs.\nPlease note that key-value pair of the form (X,Y) will be exported as is (i.e.,\n`export X='Y'`) while launching the driver and workers.\n\nIn order to specify an additional set of `SPARK_DAEMON_JAVA_OPTS`, we recommend appending\nthem to `$SPARK_DAEMON_JAVA_OPTS` as shown in the example below. This ensures that all\ndefault databricks managed environmental variables are included as well.\n\nExample Spark environment variables:\n`{\"SPARK_WORKER_MEMORY\": \"28000m\", \"SPARK_LOCAL_DIRS\": \"/local_disk0\"}` or\n`{\"SPARK_DAEMON_JAVA_OPTS\": \"$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true\"}`",
"$ref": "#/$defs/map/string"
},
"spark_version": {
"description": "The Spark version of the cluster, e.g. `3.3.x-scala2.11`.\nA list of available Spark versions can be retrieved by using\nthe :method:clusters/sparkVersions API call.\n",
"$ref": "#/$defs/string"
},
"ssh_public_keys": {
"description": "SSH public key contents that will be added to each Spark node in this cluster. The\ncorresponding private keys can be used to login with the user name `ubuntu` on port `2200`.\nUp to 10 keys can be specified.",
"$ref": "#/$defs/slice/string"
},
"workload_type": {
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/compute.WorkloadType"
}
},
"additionalProperties": false
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Grant": { "resources.Grant": {
"anyOf": [ "anyOf": [
{ {
@ -230,7 +109,7 @@
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobEmailNotifications" "$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/jobs.JobEmailNotifications"
}, },
"environments": { "environments": {
"description": "A list of task execution environment specifications that can be referenced by serverless tasks of this job.\nAn environment is required to be present for serverless tasks.\nFor serverless notebook tasks, the environment is accessible in the notebook environment panel.\nFor other serverless tasks, the task environment is required to be specified using environment_key in the task settings.", "description": "A list of task execution environment specifications that can be referenced by tasks of this job.",
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/jobs.JobEnvironment" "$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/jobs.JobEnvironment"
}, },
"format": { "format": {
@ -414,7 +293,7 @@
"$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission" "$ref": "#/$defs/slice/github.com/databricks/cli/bundle/config/resources.Permission"
}, },
"rate_limits": { "rate_limits": {
"description": "Rate limits to be applied to the serving endpoint. NOTE: this field is deprecated, please use AI Gateway to manage rate limits.", "description": "Rate limits to be applied to the serving endpoint. NOTE: only external and foundation model endpoints are supported as of now.",
"$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/serving.RateLimit" "$ref": "#/$defs/slice/github.com/databricks/databricks-sdk-go/service/serving.RateLimit"
}, },
"route_optimized": { "route_optimized": {
@ -868,9 +747,6 @@
{ {
"type": "object", "type": "object",
"properties": { "properties": {
"cluster_id": {
"$ref": "#/$defs/string"
},
"compute_id": { "compute_id": {
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
@ -1047,9 +923,6 @@
{ {
"type": "object", "type": "object",
"properties": { "properties": {
"clusters": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.Cluster"
},
"experiments": { "experiments": {
"$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment" "$ref": "#/$defs/map/github.com/databricks/cli/bundle/config/resources.MlflowExperiment"
}, },
@ -1117,9 +990,6 @@
"bundle": { "bundle": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle" "$ref": "#/$defs/github.com/databricks/cli/bundle/config.Bundle"
}, },
"cluster_id": {
"$ref": "#/$defs/string"
},
"compute_id": { "compute_id": {
"$ref": "#/$defs/string" "$ref": "#/$defs/string"
}, },
@ -2158,7 +2028,7 @@
}, },
"compute.RuntimeEngine": { "compute.RuntimeEngine": {
"type": "string", "type": "string",
"description": "Determines the cluster's runtime engine, either standard or Photon.\n\nThis field is not compatible with legacy `spark_version` values that contain `-photon-`.\nRemove `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`.\n\nIf left unspecified, the runtime engine defaults to standard unless the spark_version\ncontains -photon-, in which case Photon will be used.\n", "description": "Decides which runtime engine to be use, e.g. Standard vs. Photon. If unspecified, the runtime\nengine is inferred from spark_version.",
"enum": [ "enum": [
"NULL", "NULL",
"STANDARD", "STANDARD",
@ -2740,7 +2610,7 @@
"anyOf": [ "anyOf": [
{ {
"type": "object", "type": "object",
"description": "Write-only setting. Specifies the user, service principal or group that the job/pipeline runs as. If not specified, the job/pipeline runs as the user who created the job/pipeline.\n\nExactly one of `user_name`, `service_principal_name`, `group_name` should be specified. If not, an error is thrown.", "description": "Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the job.\n\nOnly `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown.",
"properties": { "properties": {
"service_principal_name": { "service_principal_name": {
"description": "Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role.", "description": "Application ID of an active service principal. Setting this field requires the `servicePrincipal/user` role.",
@ -5034,20 +4904,6 @@
"cli": { "cli": {
"bundle": { "bundle": {
"config": { "config": {
"resources.Cluster": {
"anyOf": [
{
"type": "object",
"additionalProperties": {
"$ref": "#/$defs/github.com/databricks/cli/bundle/config/resources.Cluster"
}
},
{
"type": "string",
"pattern": "\\$\\{(var(\\.[a-zA-Z]+([-_]?[a-zA-Z0-9]+)*(\\[[0-9]+\\])*)+)\\}"
}
]
},
"resources.Job": { "resources.Job": {
"anyOf": [ "anyOf": [
{ {

View File

@@ -11,7 +11,7 @@ func TestExpandPipelineGlobPaths(t *testing.T) {
require.NoError(t, diags.Error())
require.Equal(
t,
"/Workspace/Users/user@domain.com/.bundle/pipeline_glob_paths/default/files/dlt/nyc_taxi_loader",
"/Users/user@domain.com/.bundle/pipeline_glob_paths/default/files/dlt/nyc_taxi_loader",
b.Config.Resources.Pipelines["nyc_taxi_pipeline"].Libraries[0].Notebook.Path,
)
}

View File

@@ -12,9 +12,9 @@ func TestRelativePathTranslationDefault(t *testing.T) {
require.NoError(t, diags.Error())
t0 := b.Config.Resources.Jobs["job"].Tasks[0]
assert.Equal(t, "/Workspace/remote/src/file1.py", t0.SparkPythonTask.PythonFile)
assert.Equal(t, "/remote/src/file1.py", t0.SparkPythonTask.PythonFile)
t1 := b.Config.Resources.Jobs["job"].Tasks[1]
assert.Equal(t, "/Workspace/remote/src/file1.py", t1.SparkPythonTask.PythonFile)
assert.Equal(t, "/remote/src/file1.py", t1.SparkPythonTask.PythonFile)
}
func TestRelativePathTranslationOverride(t *testing.T) {
@@ -22,7 +22,7 @@ func TestRelativePathTranslationOverride(t *testing.T) {
require.NoError(t, diags.Error())
t0 := b.Config.Resources.Jobs["job"].Tasks[0]
assert.Equal(t, "/Workspace/remote/src/file2.py", t0.SparkPythonTask.PythonFile)
assert.Equal(t, "/remote/src/file2.py", t0.SparkPythonTask.PythonFile)
t1 := b.Config.Resources.Jobs["job"].Tasks[1]
assert.Equal(t, "/Workspace/remote/src/file2.py", t1.SparkPythonTask.PythonFile)
assert.Equal(t, "/remote/src/file2.py", t1.SparkPythonTask.PythonFile)
}

View File

@@ -1,215 +0,0 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package disable_legacy_features
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "disable-legacy-features",
Short: `Disable legacy features for new Databricks workspaces.`,
Long: `Disable legacy features for new Databricks workspaces.
For newly created workspaces: 1. Disables the use of DBFS root and mounts. 2.
Hive Metastore will not be provisioned. 3. Disables the use of No-isolation
clusters. 4. Disables Databricks Runtime versions prior to 13.3LTS.`,
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteDisableLegacyFeaturesRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteDisableLegacyFeaturesRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete the disable legacy features setting.`
cmd.Long = `Delete the disable legacy features setting.
Deletes the disable legacy features setting.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
response, err := a.Settings.DisableLegacyFeatures().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*settings.GetDisableLegacyFeaturesRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq settings.GetDisableLegacyFeaturesRequest
// TODO: short flags
cmd.Flags().StringVar(&getReq.Etag, "etag", getReq.Etag, `etag used for versioning.`)
cmd.Use = "get"
cmd.Short = `Get the disable legacy features setting.`
cmd.Long = `Get the disable legacy features setting.
Gets the value of the disable legacy features setting.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
response, err := a.Settings.DisableLegacyFeatures().Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*settings.UpdateDisableLegacyFeaturesRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq settings.UpdateDisableLegacyFeaturesRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Use = "update"
cmd.Short = `Update the disable legacy features setting.`
cmd.Long = `Update the disable legacy features setting.
Updates the value of the disable legacy features setting.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustAccountClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
a := root.AccountClient(ctx)
if cmd.Flags().Changed("json") {
err = updateJson.Unmarshal(&updateReq)
if err != nil {
return err
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := a.Settings.DisableLegacyFeatures().Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service DisableLegacyFeatures
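
Each generated command file above exposes per-command override hooks (cmdOverrides, deleteOverrides, getOverrides, updateOverrides) that, as the comments in the generated code note, are populated from init() in manually curated files in the same directory. A minimal sketch of such an override file follows; the file name override.go and the Example string are illustrative assumptions, not part of this change.

// override.go (hypothetical, manually curated file in the same package): sketch only.
package disable_legacy_features

import (
	"github.com/databricks/databricks-sdk-go/service/settings"
	"github.com/spf13/cobra"
)

func init() {
	// newUpdate() calls every function appended to updateOverrides after the
	// command is constructed, so curated code can adjust help text, flags, or
	// defaults without touching the generated file.
	updateOverrides = append(updateOverrides, func(cmd *cobra.Command, req *settings.UpdateDisableLegacyFeaturesRequest) {
		cmd.Example = "  databricks account settings disable-legacy-features update --json @setting.json"
	})
}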


@@ -6,7 +6,6 @@ import (
	"github.com/spf13/cobra"
	csp_enablement_account "github.com/databricks/cli/cmd/account/csp-enablement-account"
-	disable_legacy_features "github.com/databricks/cli/cmd/account/disable-legacy-features"
	esm_enablement_account "github.com/databricks/cli/cmd/account/esm-enablement-account"
	personal_compute "github.com/databricks/cli/cmd/account/personal-compute"
)
@@ -28,7 +27,6 @@ func New() *cobra.Command {
	// Add subservices
	cmd.AddCommand(csp_enablement_account.New())
-	cmd.AddCommand(disable_legacy_features.New())
	cmd.AddCommand(esm_enablement_account.New())
	cmd.AddCommand(personal_compute.New())


@@ -75,8 +75,8 @@ func newCreate() *cobra.Command {
	var createSkipWait bool
	var createTimeout time.Duration
-	cmd.Flags().BoolVar(&createSkipWait, "no-wait", createSkipWait, `do not wait to reach ACTIVE state`)
-	cmd.Flags().DurationVar(&createTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach ACTIVE state`)
+	cmd.Flags().BoolVar(&createSkipWait, "no-wait", createSkipWait, `do not wait to reach IDLE state`)
+	cmd.Flags().DurationVar(&createTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach IDLE state`)
	// TODO: short flags
	cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
@@ -130,13 +130,13 @@ func newCreate() *cobra.Command {
		}
		spinner := cmdio.Spinner(ctx)
		info, err := wait.OnProgress(func(i *apps.App) {
-			if i.ComputeStatus == nil {
+			if i.Status == nil {
				return
			}
-			status := i.ComputeStatus.State
+			status := i.Status.State
			statusMessage := fmt.Sprintf("current status: %s", status)
-			if i.ComputeStatus != nil {
-				statusMessage = i.ComputeStatus.Message
+			if i.Status != nil {
+				statusMessage = i.Status.Message
			}
			spinner <- statusMessage
		}).GetWithTimeout(createTimeout)
@@ -198,11 +198,11 @@ func newDelete() *cobra.Command {
		deleteReq.Name = args[0]
-		response, err := w.Apps.Delete(ctx, deleteReq)
+		err = w.Apps.Delete(ctx, deleteReq)
		if err != nil {
			return err
		}
-		return cmdio.Render(ctx, response)
+		return nil
	}
	// Disable completions since they are not applicable.
@@ -240,23 +240,35 @@ func newDeploy() *cobra.Command {
	// TODO: short flags
	cmd.Flags().Var(&deployJson, "json", `either inline JSON string or @path/to/file.json with request body`)
-	cmd.Flags().StringVar(&deployReq.DeploymentId, "deployment-id", deployReq.DeploymentId, `The unique id of the deployment.`)
	cmd.Flags().Var(&deployReq.Mode, "mode", `The mode of which the deployment will manage the source code. Supported values: [AUTO_SYNC, SNAPSHOT]`)
-	cmd.Flags().StringVar(&deployReq.SourceCodePath, "source-code-path", deployReq.SourceCodePath, `The workspace file system path of the source code used to create the app deployment.`)
-	cmd.Use = "deploy APP_NAME"
+	cmd.Use = "deploy APP_NAME SOURCE_CODE_PATH"
	cmd.Short = `Create an app deployment.`
	cmd.Long = `Create an app deployment.
  Creates an app deployment for the app with the supplied name.
  Arguments:
-    APP_NAME: The name of the app.`
+    APP_NAME: The name of the app.
+    SOURCE_CODE_PATH: The workspace file system path of the source code used to create the app
+      deployment. This is different from
+      deployment_artifacts.source_code_path, which is the path used by the
+      deployed app. The former refers to the original source code location of
+      the app in the workspace during deployment creation, whereas the latter
+      provides a system generated stable snapshotted source code path used by
+      the deployment.`
	cmd.Annotations = make(map[string]string)
	cmd.Args = func(cmd *cobra.Command, args []string) error {
-		check := root.ExactArgs(1)
+		if cmd.Flags().Changed("json") {
+			err := root.ExactArgs(1)(cmd, args)
+			if err != nil {
+				return fmt.Errorf("when --json flag is specified, provide only APP_NAME as positional arguments. Provide 'source_code_path' in your JSON input")
+			}
+			return nil
+		}
+		check := root.ExactArgs(2)
		return check(cmd, args)
	}
@@ -272,6 +284,9 @@ func newDeploy() *cobra.Command {
			}
		}
		deployReq.AppName = args[0]
+		if !cmd.Flags().Changed("json") {
+			deployReq.SourceCodePath = args[1]
+		}
		wait, err := w.Apps.Deploy(ctx, deployReq)
		if err != nil {
@@ -744,8 +759,8 @@ func newStart() *cobra.Command {
	var startSkipWait bool
	var startTimeout time.Duration
-	cmd.Flags().BoolVar(&startSkipWait, "no-wait", startSkipWait, `do not wait to reach ACTIVE state`)
-	cmd.Flags().DurationVar(&startTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach ACTIVE state`)
+	cmd.Flags().BoolVar(&startSkipWait, "no-wait", startSkipWait, `do not wait to reach SUCCEEDED state`)
+	cmd.Flags().DurationVar(&startTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach SUCCEEDED state`)
	// TODO: short flags
	cmd.Use = "start NAME"
@@ -779,14 +794,14 @@ func newStart() *cobra.Command {
			return cmdio.Render(ctx, wait.Response)
		}
		spinner := cmdio.Spinner(ctx)
-		info, err := wait.OnProgress(func(i *apps.App) {
-			if i.ComputeStatus == nil {
+		info, err := wait.OnProgress(func(i *apps.AppDeployment) {
+			if i.Status == nil {
				return
			}
-			status := i.ComputeStatus.State
+			status := i.Status.State
			statusMessage := fmt.Sprintf("current status: %s", status)
-			if i.ComputeStatus != nil {
-				statusMessage = i.ComputeStatus.Message
+			if i.Status != nil {
+				statusMessage = i.Status.Message
			}
			spinner <- statusMessage
		}).GetWithTimeout(startTimeout)
@@ -823,11 +838,6 @@ func newStop() *cobra.Command {
	var stopReq apps.StopAppRequest
-	var stopSkipWait bool
-	var stopTimeout time.Duration
-	cmd.Flags().BoolVar(&stopSkipWait, "no-wait", stopSkipWait, `do not wait to reach STOPPED state`)
-	cmd.Flags().DurationVar(&stopTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach STOPPED state`)
	// TODO: short flags
	cmd.Use = "stop NAME"
@@ -853,30 +863,11 @@ func newStop() *cobra.Command {
		stopReq.Name = args[0]
-		wait, err := w.Apps.Stop(ctx, stopReq)
+		err = w.Apps.Stop(ctx, stopReq)
		if err != nil {
			return err
		}
-		if stopSkipWait {
-			return cmdio.Render(ctx, wait.Response)
-		}
-		spinner := cmdio.Spinner(ctx)
-		info, err := wait.OnProgress(func(i *apps.App) {
-			if i.ComputeStatus == nil {
-				return
-			}
-			status := i.ComputeStatus.State
-			statusMessage := fmt.Sprintf("current status: %s", status)
-			if i.ComputeStatus != nil {
-				statusMessage = i.ComputeStatus.Message
-			}
-			spinner <- statusMessage
-		}).GetWithTimeout(stopTimeout)
-		close(spinner)
-		if err != nil {
-			return err
-		}
-		return cmdio.Render(ctx, info)
+		return nil
	}
	// Disable completions since they are not applicable.


@@ -217,7 +217,7 @@ func newCreate() *cobra.Command {
	cmd.Flags().StringVar(&createReq.NodeTypeId, "node-type-id", createReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
	cmd.Flags().IntVar(&createReq.NumWorkers, "num-workers", createReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
	cmd.Flags().StringVar(&createReq.PolicyId, "policy-id", createReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
-	cmd.Flags().Var(&createReq.RuntimeEngine, "runtime-engine", `Determines the cluster's runtime engine, either standard or Photon. Supported values: [NULL, PHOTON, STANDARD]`)
+	cmd.Flags().Var(&createReq.RuntimeEngine, "runtime-engine", `Decides which runtime engine to be use, e.g. Supported values: [NULL, PHOTON, STANDARD]`)
	cmd.Flags().StringVar(&createReq.SingleUserName, "single-user-name", createReq.SingleUserName, `Single user name if data_security_mode is SINGLE_USER.`)
	// TODO: map via StringToStringVar: spark_conf
	// TODO: map via StringToStringVar: spark_env_vars
@@ -236,12 +236,6 @@ func newCreate() *cobra.Command {
  If Databricks acquires at least 85% of the requested on-demand nodes, cluster
  creation will succeed. Otherwise the cluster will terminate with an
  informative error message.
-  Rather than authoring the cluster's JSON definition from scratch, Databricks
-  recommends filling out the [create compute UI] and then copying the generated
-  JSON definition from the UI.
-  [create compute UI]: https://docs.databricks.com/compute/configure.html
  Arguments:
    SPARK_VERSION: The Spark version of the cluster, e.g. 3.3.x-scala2.11. A list of
@@ -469,7 +463,7 @@ func newEdit() *cobra.Command {
	cmd.Flags().StringVar(&editReq.NodeTypeId, "node-type-id", editReq.NodeTypeId, `This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster.`)
	cmd.Flags().IntVar(&editReq.NumWorkers, "num-workers", editReq.NumWorkers, `Number of worker nodes that this cluster should have.`)
	cmd.Flags().StringVar(&editReq.PolicyId, "policy-id", editReq.PolicyId, `The ID of the cluster policy used to create the cluster if applicable.`)
-	cmd.Flags().Var(&editReq.RuntimeEngine, "runtime-engine", `Determines the cluster's runtime engine, either standard or Photon. Supported values: [NULL, PHOTON, STANDARD]`)
+	cmd.Flags().Var(&editReq.RuntimeEngine, "runtime-engine", `Decides which runtime engine to be use, e.g. Supported values: [NULL, PHOTON, STANDARD]`)
	cmd.Flags().StringVar(&editReq.SingleUserName, "single-user-name", editReq.SingleUserName, `Single user name if data_security_mode is SINGLE_USER.`)
	// TODO: map via StringToStringVar: spark_conf
	// TODO: map via StringToStringVar: spark_env_vars
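
The create/edit flags above bind directly to fields of the SDK's cluster-creation request, so the same operation can be driven from Go. A minimal sketch under stated assumptions follows: the request and enum names (compute.CreateCluster, compute.RuntimeEnginePhoton), the CreateAndWait helper, and all literal values are assumptions about databricks-sdk-go, not taken from this diff.

// Sketch: create a cluster with the Go SDK using the same fields the flags above bind.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/compute"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Create the cluster and wait for it to become usable.
	info, err := w.Clusters.CreateAndWait(ctx, compute.CreateCluster{
		SparkVersion:  "15.4.x-scala2.12", // illustrative value
		NodeTypeId:    "i3.xlarge",        // illustrative value
		NumWorkers:    2,
		RuntimeEngine: compute.RuntimeEnginePhoton, // assumed constant name
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(info.ClusterId)
}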

cmd/workspace/cmd.go generated

@@ -76,7 +76,6 @@ import (
	system_schemas "github.com/databricks/cli/cmd/workspace/system-schemas"
	table_constraints "github.com/databricks/cli/cmd/workspace/table-constraints"
	tables "github.com/databricks/cli/cmd/workspace/tables"
-	temporary_table_credentials "github.com/databricks/cli/cmd/workspace/temporary-table-credentials"
	token_management "github.com/databricks/cli/cmd/workspace/token-management"
	tokens "github.com/databricks/cli/cmd/workspace/tokens"
	users "github.com/databricks/cli/cmd/workspace/users"
@@ -166,7 +165,6 @@ func All() []*cobra.Command {
	out = append(out, system_schemas.New())
	out = append(out, table_constraints.New())
	out = append(out, tables.New())
-	out = append(out, temporary_table_credentials.New())
	out = append(out, token_management.New())
	out = append(out, tokens.New())
	out = append(out, users.New())


@@ -1,217 +0,0 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package disable_legacy_access
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "disable-legacy-access",
Short: `'Disabling legacy access' has the following impacts: 1.`,
Long: `'Disabling legacy access' has the following impacts:
1. Disables direct access to the Hive Metastore. However, you can still access
Hive Metastore through HMS Federation. 2. Disables Fallback Mode (docs link)
on any External Location access from the workspace. 3. Alters DBFS path access
to use External Location permissions in place of legacy credentials. 4.
Enforces Unity Catalog access on all path based access.`,
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newDelete())
cmd.AddCommand(newGet())
cmd.AddCommand(newUpdate())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start delete command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteOverrides []func(
*cobra.Command,
*settings.DeleteDisableLegacyAccessRequest,
)
func newDelete() *cobra.Command {
cmd := &cobra.Command{}
var deleteReq settings.DeleteDisableLegacyAccessRequest
// TODO: short flags
cmd.Flags().StringVar(&deleteReq.Etag, "etag", deleteReq.Etag, `etag used for versioning.`)
cmd.Use = "delete"
cmd.Short = `Delete Legacy Access Disablement Status.`
cmd.Long = `Delete Legacy Access Disablement Status.
Deletes legacy access disablement status.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyAccess().Delete(ctx, deleteReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range deleteOverrides {
fn(cmd, &deleteReq)
}
return cmd
}
// start get command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getOverrides []func(
*cobra.Command,
*settings.GetDisableLegacyAccessRequest,
)
func newGet() *cobra.Command {
cmd := &cobra.Command{}
var getReq settings.GetDisableLegacyAccessRequest
// TODO: short flags
cmd.Flags().StringVar(&getReq.Etag, "etag", getReq.Etag, `etag used for versioning.`)
cmd.Use = "get"
cmd.Short = `Retrieve Legacy Access Disablement Status.`
cmd.Long = `Retrieve Legacy Access Disablement Status.
Retrieves legacy access disablement Status.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
response, err := w.Settings.DisableLegacyAccess().Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getOverrides {
fn(cmd, &getReq)
}
return cmd
}
// start update command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateOverrides []func(
*cobra.Command,
*settings.UpdateDisableLegacyAccessRequest,
)
func newUpdate() *cobra.Command {
cmd := &cobra.Command{}
var updateReq settings.UpdateDisableLegacyAccessRequest
var updateJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Use = "update"
cmd.Short = `Update Legacy Access Disablement Status.`
cmd.Long = `Update Legacy Access Disablement Status.
Updates legacy access disablement status.`
cmd.Annotations = make(map[string]string)
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
err = updateJson.Unmarshal(&updateReq)
if err != nil {
return err
}
} else {
return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
}
response, err := w.Settings.DisableLegacyAccess().Update(ctx, updateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range updateOverrides {
fn(cmd, &updateReq)
}
return cmd
}
// end service DisableLegacyAccess
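
The file removed above exposed the workspace-level "disable legacy access" setting through the accessor w.Settings.DisableLegacyAccess(). For context, a minimal Go sketch of reading that setting directly with the SDK is shown below; it only uses the Get call and request type that appear in the deleted code, and printing the raw response struct is an illustrative choice.

// Sketch: read the disable-legacy-access setting with the Go SDK.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/settings"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Equivalent of `disable-legacy-access get`: an empty request returns the
	// current value along with an etag used for versioned updates.
	setting, err := w.Settings.DisableLegacyAccess().Get(ctx, settings.GetDisableLegacyAccessRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", setting)
}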


@@ -935,7 +935,6 @@ func newUpdate() *cobra.Command {
	cmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
	cmd.Flags().BoolVar(&updateReq.AllowDuplicateNames, "allow-duplicate-names", updateReq.AllowDuplicateNames, `If false, deployment will fail if name has changed and conflicts the name of another pipeline.`)
-	cmd.Flags().StringVar(&updateReq.BudgetPolicyId, "budget-policy-id", updateReq.BudgetPolicyId, `Budget policy of this pipeline.`)
	cmd.Flags().StringVar(&updateReq.Catalog, "catalog", updateReq.Catalog, `A catalog in Unity Catalog to publish data from this pipeline to.`)
	cmd.Flags().StringVar(&updateReq.Channel, "channel", updateReq.Channel, `DLT Release Channel that specifies which version to use.`)
	// TODO: array: clusters


@@ -53,7 +53,6 @@ func New() *cobra.Command {
	cmd.AddCommand(newLogs())
	cmd.AddCommand(newPatch())
	cmd.AddCommand(newPut())
-	cmd.AddCommand(newPutAiGateway())
	cmd.AddCommand(newQuery())
	cmd.AddCommand(newSetPermissions())
	cmd.AddCommand(newUpdateConfig())
@@ -152,7 +151,6 @@ func newCreate() *cobra.Command {
	// TODO: short flags
	cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
-	// TODO: complex arg: ai_gateway
	// TODO: array: rate_limits
	cmd.Flags().BoolVar(&createReq.RouteOptimized, "route-optimized", createReq.RouteOptimized, `Enable route optimization for the serving endpoint.`)
	// TODO: array: tags
@@ -756,9 +754,8 @@ func newPut() *cobra.Command {
	cmd.Short = `Update rate limits of a serving endpoint.`
	cmd.Long = `Update rate limits of a serving endpoint.
-  Used to update the rate limits of a serving endpoint. NOTE: Only foundation
-  model endpoints are currently supported. For external models, use AI Gateway
-  to manage rate limits.
+  Used to update the rate limits of a serving endpoint. NOTE: only external and
+  foundation model endpoints are supported as of now.
  Arguments:
    NAME: The name of the serving endpoint whose rate limits are being updated. This
@@ -803,79 +800,6 @@ func newPut() *cobra.Command {
	return cmd
}
// start put-ai-gateway command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var putAiGatewayOverrides []func(
*cobra.Command,
*serving.PutAiGatewayRequest,
)
func newPutAiGateway() *cobra.Command {
cmd := &cobra.Command{}
var putAiGatewayReq serving.PutAiGatewayRequest
var putAiGatewayJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&putAiGatewayJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: guardrails
// TODO: complex arg: inference_table_config
// TODO: array: rate_limits
// TODO: complex arg: usage_tracking_config
cmd.Use = "put-ai-gateway NAME"
cmd.Short = `Update AI Gateway of a serving endpoint.`
cmd.Long = `Update AI Gateway of a serving endpoint.
Used to update the AI Gateway of a serving endpoint. NOTE: Only external model
endpoints are currently supported.
Arguments:
NAME: The name of the serving endpoint whose AI Gateway is being updated. This
field is required.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
err = putAiGatewayJson.Unmarshal(&putAiGatewayReq)
if err != nil {
return err
}
}
putAiGatewayReq.Name = args[0]
response, err := w.ServingEndpoints.PutAiGateway(ctx, putAiGatewayReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range putAiGatewayOverrides {
fn(cmd, &putAiGatewayReq)
}
return cmd
}
// start query command
// Slice with functions to override default command behavior.
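
The removed put-ai-gateway command wraps w.ServingEndpoints.PutAiGateway, which updates the AI Gateway configuration (guardrails, inference tables, rate limits, usage tracking) of an external-model endpoint. A hedged sketch of the direct SDK call follows; only PutAiGateway and the Name field are taken from the deleted code, while the rate-limit types and constants are assumptions about databricks-sdk-go v0.47 and the endpoint name is illustrative.

// Sketch: update the AI Gateway rate limits of a serving endpoint via the Go SDK.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/serving"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	resp, err := w.ServingEndpoints.PutAiGateway(ctx, serving.PutAiGatewayRequest{
		Name: "my-external-model-endpoint", // illustrative endpoint name
		// Rate-limit types and constant names below are assumed, not shown in this diff.
		RateLimits: []serving.AiGatewayRateLimit{
			{
				Calls:         10,
				RenewalPeriod: serving.AiGatewayRateLimitRenewalPeriodMinute,
				Key:           serving.AiGatewayRateLimitKeyEndpoint,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", resp)
}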


@@ -8,7 +8,6 @@ import (
	automatic_cluster_update "github.com/databricks/cli/cmd/workspace/automatic-cluster-update"
	compliance_security_profile "github.com/databricks/cli/cmd/workspace/compliance-security-profile"
	default_namespace "github.com/databricks/cli/cmd/workspace/default-namespace"
-	disable_legacy_access "github.com/databricks/cli/cmd/workspace/disable-legacy-access"
	enhanced_security_monitoring "github.com/databricks/cli/cmd/workspace/enhanced-security-monitoring"
	restrict_workspace_admins "github.com/databricks/cli/cmd/workspace/restrict-workspace-admins"
)
@@ -32,7 +31,6 @@ func New() *cobra.Command {
	cmd.AddCommand(automatic_cluster_update.New())
	cmd.AddCommand(compliance_security_profile.New())
	cmd.AddCommand(default_namespace.New())
-	cmd.AddCommand(disable_legacy_access.New())
	cmd.AddCommand(enhanced_security_monitoring.New())
	cmd.AddCommand(restrict_workspace_admins.New())


@@ -220,7 +220,6 @@ func newGet() *cobra.Command {
	cmd.Flags().BoolVar(&getReq.IncludeBrowse, "include-browse", getReq.IncludeBrowse, `Whether to include tables in the response for which the principal can only access selective metadata for.`)
	cmd.Flags().BoolVar(&getReq.IncludeDeltaMetadata, "include-delta-metadata", getReq.IncludeDeltaMetadata, `Whether delta metadata should be included in the response.`)
-	cmd.Flags().BoolVar(&getReq.IncludeManifestCapabilities, "include-manifest-capabilities", getReq.IncludeManifestCapabilities, `Whether to include a manifest containing capabilities the table has.`)
	cmd.Use = "get FULL_NAME"
	cmd.Short = `Get a table.`
@@ -300,7 +299,6 @@ func newList() *cobra.Command {
	cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include tables in the response for which the principal can only access selective metadata for.`)
	cmd.Flags().BoolVar(&listReq.IncludeDeltaMetadata, "include-delta-metadata", listReq.IncludeDeltaMetadata, `Whether delta metadata should be included in the response.`)
-	cmd.Flags().BoolVar(&listReq.IncludeManifestCapabilities, "include-manifest-capabilities", listReq.IncludeManifestCapabilities, `Whether to include a manifest containing capabilities the table has.`)
	cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of tables to return.`)
	cmd.Flags().BoolVar(&listReq.OmitColumns, "omit-columns", listReq.OmitColumns, `Whether to omit the columns of the table from the response or not.`)
	cmd.Flags().BoolVar(&listReq.OmitProperties, "omit-properties", listReq.OmitProperties, `Whether to omit the properties of the table from the response or not.`)
@@ -368,7 +366,6 @@ func newListSummaries() *cobra.Command {
	// TODO: short flags
-	cmd.Flags().BoolVar(&listSummariesReq.IncludeManifestCapabilities, "include-manifest-capabilities", listSummariesReq.IncludeManifestCapabilities, `Whether to include a manifest containing capabilities the table has.`)
	cmd.Flags().IntVar(&listSummariesReq.MaxResults, "max-results", listSummariesReq.MaxResults, `Maximum number of summaries for tables to return.`)
	cmd.Flags().StringVar(&listSummariesReq.PageToken, "page-token", listSummariesReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
	cmd.Flags().StringVar(&listSummariesReq.SchemaNamePattern, "schema-name-pattern", listSummariesReq.SchemaNamePattern, `A sql LIKE pattern (% and _) for schema names.`)


@@ -1,122 +0,0 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package temporary_table_credentials
import (
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "temporary-table-credentials",
Short: `Temporary Table Credentials refer to short-lived, downscoped credentials used to access cloud storage locations where table data is stored in Databricks.`,
Long: `Temporary Table Credentials refer to short-lived, downscoped credentials used
to access cloud storage locations where table data is stored in Databricks.
These credentials are employed to provide secure and time-limited access to
data in cloud environments such as AWS, Azure, and Google Cloud. Each cloud
provider has its own type of credentials: AWS uses temporary session tokens via
AWS Security Token Service (STS), Azure utilizes Shared Access Signatures (SAS)
for its data storage services, and Google Cloud supports temporary
credentials through OAuth 2.0. Temporary table credentials ensure that data
access is limited in scope and duration, reducing the risk of unauthorized
access or misuse. To use the temporary table credentials API, a metastore
admin needs to enable the external_access_enabled flag (off by default) at the
metastore level, and user needs to be granted the EXTERNAL USE SCHEMA
permission at the schema level by catalog admin. Note that EXTERNAL USE SCHEMA
is a schema level permission that can only be granted by catalog admin
explicitly and is not included in schema ownership or ALL PRIVILEGES on the
schema for security reason.`,
GroupID: "catalog",
Annotations: map[string]string{
"package": "catalog",
},
}
// Add methods
cmd.AddCommand(newGenerateTemporaryTableCredentials())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start generate-temporary-table-credentials command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var generateTemporaryTableCredentialsOverrides []func(
*cobra.Command,
*catalog.GenerateTemporaryTableCredentialRequest,
)
func newGenerateTemporaryTableCredentials() *cobra.Command {
cmd := &cobra.Command{}
var generateTemporaryTableCredentialsReq catalog.GenerateTemporaryTableCredentialRequest
var generateTemporaryTableCredentialsJson flags.JsonFlag
// TODO: short flags
cmd.Flags().Var(&generateTemporaryTableCredentialsJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().Var(&generateTemporaryTableCredentialsReq.Operation, "operation", `The operation performed against the table data, either READ or READ_WRITE. Supported values: [READ, READ_WRITE]`)
cmd.Flags().StringVar(&generateTemporaryTableCredentialsReq.TableId, "table-id", generateTemporaryTableCredentialsReq.TableId, `UUID of the table to read or write.`)
cmd.Use = "generate-temporary-table-credentials"
cmd.Short = `Generate a temporary table credential.`
cmd.Long = `Generate a temporary table credential.
Get a short-lived credential for directly accessing the table data on cloud
storage. The metastore must have external_access_enabled flag set to true
(default false). The caller must have EXTERNAL_USE_SCHEMA privilege on the
parent schema and this privilege can only be granted by catalog owners.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(0)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
err = generateTemporaryTableCredentialsJson.Unmarshal(&generateTemporaryTableCredentialsReq)
if err != nil {
return err
}
}
response, err := w.TemporaryTableCredentials.GenerateTemporaryTableCredentials(ctx, generateTemporaryTableCredentialsReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range generateTemporaryTableCredentialsOverrides {
fn(cmd, &generateTemporaryTableCredentialsReq)
}
return cmd
}
// end service TemporaryTableCredentials
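
The removed service above wraps a single SDK call, w.TemporaryTableCredentials.GenerateTemporaryTableCredentials, which exchanges a table ID and an operation (READ or READ_WRITE) for a short-lived cloud credential. A minimal Go sketch is below; the call and request fields come from the deleted code, while the TableOperationRead constant name and the UUID value are assumptions.

// Sketch: request a short-lived table credential with the Go SDK.
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Requires external_access_enabled on the metastore and EXTERNAL USE SCHEMA
	// on the parent schema, as described in the command help above.
	cred, err := w.TemporaryTableCredentials.GenerateTemporaryTableCredentials(ctx,
		catalog.GenerateTemporaryTableCredentialRequest{
			Operation: catalog.TableOperationRead,                // assumed constant name
			TableId:   "00000000-0000-0000-0000-000000000000",    // illustrative UUID
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cred)
}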

go.mod

@@ -7,7 +7,7 @@ toolchain go1.22.7
require (
	github.com/Masterminds/semver/v3 v3.3.0 // MIT
	github.com/briandowns/spinner v1.23.1 // Apache 2.0
-	github.com/databricks/databricks-sdk-go v0.47.0 // Apache 2.0
+	github.com/databricks/databricks-sdk-go v0.46.0 // Apache 2.0
	github.com/fatih/color v1.17.0 // MIT
	github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
	github.com/google/uuid v1.6.0 // BSD-3-Clause

go.sum generated

@@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.47.0 h1:eE7dN9axviL8+s10jnQAayOYDaR+Mfu7E9COGjO4lrQ=
-github.com/databricks/databricks-sdk-go v0.47.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
+github.com/databricks/databricks-sdk-go v0.46.0 h1:D0TxmtSVAOsdnfzH4OGtAmcq+8TyA7Z6fA6JEYhupeY=
+github.com/databricks/databricks-sdk-go v0.46.0/go.mod h1:ds+zbv5mlQG7nFEU5ojLtgN/u0/9YzZmKQES/CfedzU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@@ -236,7 +236,7 @@ func TestAccDeployBasicBundleLogs(t *testing.T) {
	stdout, stderr := blackBoxRun(t, root, "bundle", "deploy")
	assert.Equal(t, strings.Join([]string{
-		fmt.Sprintf("Uploading bundle files to /Workspace/Users/%s/.bundle/%s/files...", currentUser.UserName, uniqueId),
+		fmt.Sprintf("Uploading bundle files to /Users/%s/.bundle/%s/files...", currentUser.UserName, uniqueId),
		"Deploying resources...",
		"Updating deployment state...",
		"Deployment complete!\n",


@@ -114,7 +114,7 @@ func getBundleRemoteRootPath(w *databricks.WorkspaceClient, t *testing.T, unique
	// Compute root path for the bundle deployment
	me, err := w.CurrentUser.Me(context.Background())
	require.NoError(t, err)
-	root := fmt.Sprintf("/Workspace/Users/%s/.bundle/%s", me.UserName, uniqueId)
+	root := fmt.Sprintf("/Users/%s/.bundle/%s", me.UserName, uniqueId)
	return root
}


@@ -24,8 +24,8 @@ targets:
    mode: production
    workspace:
      host: {{workspace_host}}
-      # We explicitly specify /Workspace/Users/{{user_name}} to make sure we only have a single copy.
-      root_path: /Workspace/Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
+      # We explicitly specify /Users/{{user_name}} to make sure we only have a single copy.
+      root_path: /Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
    permissions:
      - {{if is_service_principal}}service_principal{{else}}user{{end}}_name: {{user_name}}
        level: CAN_MANAGE


@@ -21,8 +21,8 @@ targets:
    mode: production
    workspace:
      host: {{workspace_host}}
-      # We explicitly specify /Workspace/Users/{{user_name}} to make sure we only have a single copy.
-      root_path: /Workspace/Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
+      # We explicitly specify /Users/{{user_name}} to make sure we only have a single copy.
+      root_path: /Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
    permissions:
      - {{if is_service_principal}}service_principal{{else}}user{{end}}_name: {{user_name}}
        level: CAN_MANAGE


@@ -15,4 +15,4 @@ resources:
        path: ../src/dlt_pipeline.ipynb
      configuration:
-        bundle.sourcePath: ${workspace.file_path}/src
+        bundle.sourcePath: /Workspace/${workspace.file_path}/src


@@ -41,8 +41,8 @@ targets:
    mode: production
    workspace:
      host: {{workspace_host}}
-      # We explicitly specify /Workspace/Users/{{user_name}} to make sure we only have a single copy.
-      root_path: /Workspace/Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
+      # We explicitly specify /Users/{{user_name}} to make sure we only have a single copy.
+      root_path: /Users/{{user_name}}/.bundle/${bundle.name}/${bundle.target}
    variables:
      warehouse_id: {{index ((regexp "[^/]+$").FindStringSubmatch .http_path) 0}}
      catalog: {{.default_catalog}}