Compare commits


8 Commits

Author SHA1 Message Date
Andrew Nester 735d72225a
fix fmt 2024-11-26 16:16:47 +01:00
Andrew Nester 989af951a7
Merge branch 'main' into feature/apps 2024-11-26 16:14:51 +01:00
Andrew Nester ee0d261b25
move app deployment to bundle run 2024-11-26 16:08:26 +01:00
dependabot[bot] 4b069bb6e1
Bump golang.org/x/term from 0.25.0 to 0.26.0 (#1907)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.25.0 to
0.26.0.
Commits:
- `b725e36` go.mod: update golang.org/x dependencies
- `54df7da` README: don't recommend go get
- See full diff in [compare view](https://github.com/golang/term/compare/v0.25.0...v0.26.0)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=golang.org/x/term&package-manager=go_modules&previous-version=0.25.0&new-version=0.26.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 13:46:20 +00:00
shreyas-goenka b323703c1b
Add validation for single node clusters (#1909)
## Changes
This PR adds a warning that validates the configuration of single node
clusters, covering interactive, job, job-task, and pipeline clusters.

Note: We skip the validation if a cluster policy is configured because
the policy is likely to configure `spark_conf` / `custom_tags` itself.

Note: Terraform originally had validation only for interactive, job, and
job-task clusters. The validation this PR adds for pipeline clusters is
new.

This PR follows the same logic as we used to have in Terraform. The
validation was removed from Terraform because we had no way to demote
the error to a warning:
https://github.com/databricks/terraform-provider-databricks/pull/4222

### Background
Single-node clusters require `spark_conf` and `custom_tags` to be
correctly set in the cluster definition for them to function optimally.
The cluster will be created even if incorrectly configured, but its
performance will not be great.

For example, if neither `spark_conf` nor `custom_tags` is set and
`num_workers` is 0, then only the driver process is launched on the
cluster's compute instance, leading to sub-optimal utilization of
available compute resources and no parallelization across worker
processes when processing a Spark query.
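
For reference, a cluster definition that satisfies this validation could
look like the following sketch (the job name and instance types here are
illustrative, not taken from this PR):

```yaml
resources:
  jobs:
    my_job:
      name: my_job
      job_clusters:
        - job_cluster_key: main
          new_cluster:
            spark_version: 15.4.x-scala2.12  # illustrative
            node_type_id: i3.xlarge          # illustrative
            num_workers: 0
            # Required for a correctly configured single node cluster:
            spark_conf:
              spark.databricks.cluster.profile: singleNode
              spark.master: local[*]
            custom_tags:
              ResourceClass: SingleNode
```

Setting `policy_id` on the cluster instead suppresses the warning, since
the policy may configure `spark_conf` and `custom_tags` itself.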

### Issue

This PR addresses some issues reported in
https://github.com/databricks/cli/issues/1546

## Tests
Unit tests and manually.

Example output of the warning:
```
➜  bundle-playground git:(master) ✗ cli bundle validate
Warning: Single node cluster is not correctly configured
  at resources.pipelines.bar.clusters[0]
  in databricks.yml:29:11

num_workers should be 0 only for single-node clusters. To create a
valid single node cluster please ensure that the following properties
are correctly set in the cluster specification:

  spark_conf:
    spark.databricks.cluster.profile: singleNode
    spark.master: local[*]

  custom_tags:
    ResourceClass: SingleNode
  

Name: foobar
Target: default
Workspace:
  User: shreyas.goenka@databricks.com
  Path: /Workspace/Users/shreyas.goenka@databricks.com/.bundle/foobar/default

Found 1 warning
```
2024-11-22 15:48:09 +00:00
Ilya Kuznetsov 490dd058aa
Extended message for warning when source-linked mode is used outside of the workspace (#1929)
## Changes

Added the path and locations to the warning that is displayed when
source-linked mode is used outside of the workspace.
2024-11-22 14:44:33 +00:00
Pieter Noordhuis abfd1713e0
Skip sync warning if no sync paths are defined (#1926)
## Changes

Users can configure the bundle to not synchronize any files with:
```yaml
sync:
  paths: []
```

If it is explicitly configured as an empty list, the validate command
must not warn about not having any files to synchronize. The warning
exists to alert users who are unintentionally not synchronizing any
files (they might have a `.gitignore` pattern that matches everything).

Closes #1663.

## Tests

* New unit test.
2024-11-21 15:03:13 +00:00
Pieter Noordhuis a3cea07c9e
Support lookup by name of notification destinations (#1922)
## Changes

Add support for notification destinations in variable lookups.

More information:
https://docs.databricks.com/en/admin/workspace-settings/notification-destinations.html

Depends on #1921.
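
As a sketch of how a bundle could use this (the variable name, job, and
destination display name below are hypothetical):

```yaml
variables:
  ops_destination_id:
    description: ID of the "Ops Team" notification destination
    lookup:
      notification_destination: "Ops Team"

resources:
  jobs:
    my_job:
      name: my_job
      webhook_notifications:
        on_failure:
          - id: ${var.ops_destination_id}
```

The lookup resolves the display name to the destination's ID at
deployment time by listing the workspace's notification destinations.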

## Tests

* New unit test
* Manually confirmed that the lookup works
2024-11-21 15:52:14 +01:00
27 changed files with 1578 additions and 298 deletions


@@ -222,14 +222,37 @@ func (m *applyPresets) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		dashboard.DisplayName = prefix + dashboard.DisplayName
 	}
-	// Apps doesn't support tags or prefixes yet.
+	// Apps: Prefix
+	for key, app := range r.Apps {
+		if app == nil || app.App == nil {
+			diags = diags.Extend(diag.Errorf("app %s is not defined", key))
+			continue
+		}
+		app.Name = textutil.NormalizeString(prefix + app.Name)
+		// Normalize the app name to ensure it is a valid identifier.
+		// App supports only alphanumeric characters and hyphens.
+		app.Name = strings.ReplaceAll(app.Name, "_", "-")
+	}
 	if config.IsExplicitlyEnabled((b.Config.Presets.SourceLinkedDeployment)) {
 		isDatabricksWorkspace := dbr.RunsOnRuntime(ctx) && strings.HasPrefix(b.SyncRootPath, "/Workspace/")
 		if !isDatabricksWorkspace {
+			target := b.Config.Bundle.Target
+			path := dyn.NewPath(dyn.Key("targets"), dyn.Key(target), dyn.Key("presets"), dyn.Key("source_linked_deployment"))
+			diags = diags.Append(
+				diag.Diagnostic{
+					Severity: diag.Warning,
+					Summary:  "source-linked deployment is available only in the Databricks Workspace",
+					Paths: []dyn.Path{
+						path,
+					},
+					Locations: b.Config.GetLocations(path[2:].String()),
+				},
+			)
 			disabled := false
 			b.Config.Presets.SourceLinkedDeployment = &disabled
-			diags = diags.Extend(diag.Warningf("source-linked deployment is available only in the Databricks Workspace"))
 		}
 	}


@@ -9,7 +9,10 @@ import (
 	"github.com/databricks/cli/bundle/config"
 	"github.com/databricks/cli/bundle/config/mutator"
 	"github.com/databricks/cli/bundle/config/resources"
+	"github.com/databricks/cli/bundle/internal/bundletest"
 	"github.com/databricks/cli/libs/dbr"
+	"github.com/databricks/cli/libs/dyn"
+	"github.com/databricks/databricks-sdk-go/service/apps"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/stretchr/testify/require"
@@ -435,6 +438,7 @@ func TestApplyPresetsSourceLinkedDeployment(t *testing.T) {
 			},
 		}
+		bundletest.SetLocation(b, "presets.source_linked_deployment", []dyn.Location{{File: "databricks.yml"}})
 		diags := bundle.Apply(tt.ctx, b, mutator.ApplyPresets())
 		if diags.HasError() {
 			t.Fatalf("unexpected error: %v", diags)
@@ -442,6 +446,7 @@ func TestApplyPresetsSourceLinkedDeployment(t *testing.T) {
 		if tt.expectedWarning != "" {
 			require.Equal(t, tt.expectedWarning, diags[0].Summary)
+			require.NotEmpty(t, diags[0].Locations)
 		}
 		require.Equal(t, tt.expectedValue, b.Config.Presets.SourceLinkedDeployment)
@@ -449,3 +454,59 @@ func TestApplyPresetsSourceLinkedDeployment(t *testing.T) {
 	}
 }
+
+func TestApplyPresetsPrefixForApps(t *testing.T) {
+	tests := []struct {
+		name   string
+		prefix string
+		app    *resources.App
+		want   string
+	}{
+		{
+			name:   "add prefix to app",
+			prefix: "[prefix] ",
+			app: &resources.App{
+				App: &apps.App{
+					Name: "app1",
+				},
+			},
+			want: "prefix-app1",
+		},
+		{
+			name:   "add empty prefix to app",
+			prefix: "",
+			app: &resources.App{
+				App: &apps.App{
+					Name: "app1",
+				},
+			},
+			want: "app1",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			b := &bundle.Bundle{
+				Config: config.Root{
+					Resources: config.Resources{
+						Apps: map[string]*resources.App{
+							"app1": tt.app,
+						},
+					},
+					Presets: config.Presets{
+						NamePrefix: tt.prefix,
+					},
+				},
+			}
+
+			ctx := context.Background()
+			diag := bundle.Apply(ctx, b, mutator.ApplyPresets())
+
+			if diag.HasError() {
+				t.Fatalf("unexpected error: %v", diag)
+			}
+
+			require.Equal(t, tt.want, b.Config.Resources.Apps["app1"].Name)
+		})
+	}
+}


@@ -15,6 +15,7 @@ import (
 	"github.com/databricks/cli/libs/tags"
 	"github.com/databricks/cli/libs/vfs"
 	sdkconfig "github.com/databricks/databricks-sdk-go/config"
+	"github.com/databricks/databricks-sdk-go/service/apps"
 	"github.com/databricks/databricks-sdk-go/service/catalog"
 	"github.com/databricks/databricks-sdk-go/service/compute"
 	"github.com/databricks/databricks-sdk-go/service/dashboards"
@@ -141,6 +142,13 @@ func mockBundle(mode config.Mode) *bundle.Bundle {
 						},
 					},
 				},
+				Apps: map[string]*resources.App{
+					"app1": {
+						App: &apps.App{
+							Name: "app1",
+						},
+					},
+				},
 			},
 		},
 		SyncRoot: vfs.MustNew("/Users/lennart.kats@databricks.com"),


@@ -32,6 +32,7 @@ func allResourceTypes(t *testing.T) []string {
 	// the dyn library gives us the correct list of all resources supported. Please
 	// also update this check when adding a new resource
 	require.Equal(t, []string{
+		"apps",
 		"clusters",
 		"dashboards",
 		"experiments",
@@ -141,6 +142,7 @@ func TestRunAsErrorForUnsupportedResources(t *testing.T) {
 		"registered_models",
 		"experiments",
 		"schemas",
+		"apps",
 	}
 	base := config.Root{


@@ -19,7 +19,7 @@ import (
 func TestTranslatePathsApps_FilePathRelativeSubDirectory(t *testing.T) {
 	dir := t.TempDir()
-	touchEmptyFile(t, filepath.Join(dir, "src", "my_app.lvdash.json"))
+	touchEmptyFile(t, filepath.Join(dir, "src", "app", "app.py"))
 	b := &bundle.Bundle{
 		SyncRootPath: dir,
@@ -31,7 +31,7 @@ func TestTranslatePathsApps_FilePathRelativeSubDirectory(t *testing.T) {
 					App: &apps.App{
 						Name: "My App",
 					},
-					SourceCodePath: "../src/",
+					SourceCodePath: "../src/app",
 				},
 			},
 		},
@@ -48,7 +48,7 @@ func TestTranslatePathsApps_FilePathRelativeSubDirectory(t *testing.T) {
 	// Assert that the file path for the app has been converted to its local absolute path.
 	assert.Equal(
 		t,
-		filepath.Join(dir, "src"),
+		filepath.Join("src", "app"),
 		b.Config.Resources.Apps["app"].SourceCodePath,
 	)
 }


@@ -21,6 +21,12 @@ func (v *filesToSync) Name() string {
 }

 func (v *filesToSync) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
+	// The user may be intentional about not synchronizing any files.
+	// In this case, we should not show any warnings.
+	if len(rb.Config().Sync.Paths) == 0 {
+		return nil
+	}
+
 	sync, err := files.GetSync(ctx, rb)
 	if err != nil {
 		return diag.FromErr(err)
@@ -31,6 +37,7 @@ func (v *filesToSync) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
 		return diag.FromErr(err)
 	}

+	// If there are files to sync, we don't need to show any warnings.
 	if len(fl) != 0 {
 		return nil
 	}


@@ -0,0 +1,105 @@
package validate
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/internal/testutil"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/vfs"
sdkconfig "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/iam"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestFilesToSync_NoPaths(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Sync: config.Sync{
Paths: []string{},
},
},
}
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
assert.Empty(t, diags)
}
func setupBundleForFilesToSyncTest(t *testing.T) *bundle.Bundle {
dir := t.TempDir()
testutil.Touch(t, dir, "file1")
testutil.Touch(t, dir, "file2")
b := &bundle.Bundle{
BundleRootPath: dir,
BundleRoot: vfs.MustNew(dir),
SyncRootPath: dir,
SyncRoot: vfs.MustNew(dir),
Config: config.Root{
Bundle: config.Bundle{
Target: "default",
},
Workspace: config.Workspace{
FilePath: "/this/doesnt/matter",
CurrentUser: &config.User{
User: &iam.User{},
},
},
Sync: config.Sync{
// Paths are relative to [SyncRootPath].
Paths: []string{"."},
},
},
}
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdkconfig.Config{
Host: "https://foo.com",
}
// The initialization logic in [sync.New] performs a check on the destination path.
// Removing this check at initialization time is tbd...
m.GetMockWorkspaceAPI().EXPECT().GetStatusByPath(mock.Anything, "/this/doesnt/matter").Return(&workspace.ObjectInfo{
ObjectType: workspace.ObjectTypeDirectory,
}, nil)
b.SetWorkpaceClient(m.WorkspaceClient)
return b
}
func TestFilesToSync_EverythingIgnored(t *testing.T) {
b := setupBundleForFilesToSyncTest(t)
// Ignore all files.
testutil.WriteFile(t, "*\n.*\n", b.BundleRootPath, ".gitignore")
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
require.Equal(t, 1, len(diags))
assert.Equal(t, diag.Warning, diags[0].Severity)
assert.Equal(t, "There are no files to sync, please check your .gitignore", diags[0].Summary)
}
func TestFilesToSync_EverythingExcluded(t *testing.T) {
b := setupBundleForFilesToSyncTest(t)
// Exclude all files.
b.Config.Sync.Exclude = []string{"*"}
ctx := context.Background()
rb := bundle.ReadOnly(b)
diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
require.Equal(t, 1, len(diags))
assert.Equal(t, diag.Warning, diags[0].Severity)
assert.Equal(t, "There are no files to sync, please check your .gitignore and sync.exclude configuration", diags[0].Summary)
}


@@ -0,0 +1,137 @@
package validate
import (
"context"
"strings"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/dyn/convert"
"github.com/databricks/cli/libs/log"
)
// Validates that any single node clusters defined in the bundle are correctly configured.
func SingleNodeCluster() bundle.ReadOnlyMutator {
return &singleNodeCluster{}
}
type singleNodeCluster struct{}
func (m *singleNodeCluster) Name() string {
return "validate:SingleNodeCluster"
}
const singleNodeWarningDetail = `num_workers should be 0 only for single-node clusters. To create a
valid single node cluster please ensure that the following properties
are correctly set in the cluster specification:
spark_conf:
spark.databricks.cluster.profile: singleNode
spark.master: local[*]
custom_tags:
ResourceClass: SingleNode
`
const singleNodeWarningSummary = `Single node cluster is not correctly configured`
func showSingleNodeClusterWarning(ctx context.Context, v dyn.Value) bool {
// Check if the user has explicitly set the num_workers to 0. Skip the warning
// if that's not the case.
numWorkers, ok := v.Get("num_workers").AsInt()
if !ok || numWorkers > 0 {
return false
}
// Convenient type that contains the common fields from compute.ClusterSpec and
// pipelines.PipelineCluster that we are interested in.
type ClusterConf struct {
SparkConf map[string]string `json:"spark_conf"`
CustomTags map[string]string `json:"custom_tags"`
PolicyId string `json:"policy_id"`
}
conf := &ClusterConf{}
err := convert.ToTyped(conf, v)
if err != nil {
return false
}
// If the policy id is set, we don't want to show the warning. This is because
// the user might have configured `spark_conf` and `custom_tags` correctly
// in their cluster policy.
if conf.PolicyId != "" {
return false
}
profile, ok := conf.SparkConf["spark.databricks.cluster.profile"]
if !ok {
log.Debugf(ctx, "spark_conf spark.databricks.cluster.profile not found in single-node cluster spec")
return true
}
if profile != "singleNode" {
log.Debugf(ctx, "spark_conf spark.databricks.cluster.profile is not singleNode in single-node cluster spec: %s", profile)
return true
}
master, ok := conf.SparkConf["spark.master"]
if !ok {
log.Debugf(ctx, "spark_conf spark.master not found in single-node cluster spec")
return true
}
if !strings.HasPrefix(master, "local") {
log.Debugf(ctx, "spark_conf spark.master does not start with local in single-node cluster spec: %s", master)
return true
}
resourceClass, ok := conf.CustomTags["ResourceClass"]
if !ok {
log.Debugf(ctx, "custom_tag ResourceClass not found in single-node cluster spec")
return true
}
if resourceClass != "SingleNode" {
log.Debugf(ctx, "custom_tag ResourceClass is not SingleNode in single-node cluster spec: %s", resourceClass)
return true
}
return false
}
func (m *singleNodeCluster) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
diags := diag.Diagnostics{}
patterns := []dyn.Pattern{
// Interactive clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("clusters"), dyn.AnyKey()),
// Job clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("job_clusters"), dyn.AnyIndex(), dyn.Key("new_cluster")),
// Job task clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("tasks"), dyn.AnyIndex(), dyn.Key("new_cluster")),
// Job for each task clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("jobs"), dyn.AnyKey(), dyn.Key("tasks"), dyn.AnyIndex(), dyn.Key("for_each_task"), dyn.Key("task"), dyn.Key("new_cluster")),
// Pipeline clusters
dyn.NewPattern(dyn.Key("resources"), dyn.Key("pipelines"), dyn.AnyKey(), dyn.Key("clusters"), dyn.AnyIndex()),
}
for _, p := range patterns {
_, err := dyn.MapByPattern(rb.Config().Value(), p, func(p dyn.Path, v dyn.Value) (dyn.Value, error) {
warning := diag.Diagnostic{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: v.Locations(),
Paths: []dyn.Path{p},
}
if showSingleNodeClusterWarning(ctx, v) {
diags = append(diags, warning)
}
return v, nil
})
if err != nil {
log.Debugf(ctx, "Error while applying single node cluster validation: %s", err)
}
}
return diags
}


@@ -0,0 +1,566 @@
package validate
import (
"context"
"testing"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/databricks-sdk-go/service/compute"
"github.com/databricks/databricks-sdk-go/service/jobs"
"github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/stretchr/testify/assert"
)
func failCases() []struct {
name string
sparkConf map[string]string
customTags map[string]string
} {
return []struct {
name string
sparkConf map[string]string
customTags map[string]string
}{
{
name: "no tags or conf",
},
{
name: "no tags",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
},
{
name: "no conf",
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid spark cluster profile",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "invalid",
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid spark.master",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "invalid",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "invalid tags",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "invalid"},
},
{
name: "missing ResourceClass tag",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{"what": "ever"},
},
{
name: "missing spark.master",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
{
name: "missing spark.databricks.cluster.profile",
sparkConf: map[string]string{
"spark.master": "local[*]",
},
customTags: map[string]string{"ResourceClass": "SingleNode"},
},
}
}
func TestValidateSingleNodeClusterFailForInteractiveClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"foo": {
ClusterSpec: &compute.ClusterSpec{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.clusters.foo", []dyn.Location{{File: "a.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.clusters.foo.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "a.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.NewPath(dyn.Key("resources"), dyn.Key("clusters"), dyn.Key("foo"))},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
JobClusters: []jobs.JobCluster{
{
NewCluster: compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.job_clusters[0].new_cluster", []dyn.Location{{File: "b.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.job_clusters[0].new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "b.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.job_clusters[0].new_cluster")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobTaskClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.tasks[0].new_cluster", []dyn.Location{{File: "c.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "c.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.tasks[0].new_cluster")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForPipelineClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"foo": {
PipelineSpec: &pipelines.PipelineSpec{
Clusters: []pipelines.PipelineCluster{
{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.pipelines.foo.clusters[0]", []dyn.Location{{File: "d.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.pipelines.foo.clusters[0].num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "d.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.pipelines.foo.clusters[0]")},
},
}, diags)
})
}
}
func TestValidateSingleNodeClusterFailForJobForEachTaskCluster(t *testing.T) {
ctx := context.Background()
for _, tc := range failCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
},
},
},
},
},
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster", []dyn.Location{{File: "e.yml", Line: 1, Column: 1}})
// We can't set num_workers to 0 explicitly in the typed configuration.
// Do it on the dyn.Value directly.
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster.num_workers", dyn.V(0))
})
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Warning,
Summary: singleNodeWarningSummary,
Detail: singleNodeWarningDetail,
Locations: []dyn.Location{{File: "e.yml", Line: 1, Column: 1}},
Paths: []dyn.Path{dyn.MustPathFromString("resources.jobs.foo.tasks[0].for_each_task.task.new_cluster")},
},
}, diags)
})
}
}
func passCases() []struct {
name string
numWorkers *int
sparkConf map[string]string
customTags map[string]string
policyId string
} {
zero := 0
one := 1
return []struct {
name string
numWorkers *int
sparkConf map[string]string
customTags map[string]string
policyId string
}{
{
name: "single node cluster",
sparkConf: map[string]string{
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
customTags: map[string]string{
"ResourceClass": "SingleNode",
},
numWorkers: &zero,
},
{
name: "num workers is not zero",
numWorkers: &one,
},
{
name: "num workers is not set",
},
{
name: "policy id is not empty",
policyId: "policy-abc",
numWorkers: &zero,
},
}
}
func TestValidateSingleNodeClusterPassInteractiveClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Clusters: map[string]*resources.Cluster{
"foo": {
ClusterSpec: &compute.ClusterSpec{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.clusters.foo.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
JobClusters: []jobs.JobCluster{
{
NewCluster: compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.job_clusters[0].new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobTaskClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassPipelineClusters(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Pipelines: map[string]*resources.Pipeline{
"foo": {
PipelineSpec: &pipelines.PipelineSpec{
Clusters: []pipelines.PipelineCluster{
{
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.pipelines.foo.clusters[0].num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}
func TestValidateSingleNodeClusterPassJobForEachTaskCluster(t *testing.T) {
ctx := context.Background()
for _, tc := range passCases() {
t.Run(tc.name, func(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Jobs: map[string]*resources.Job{
"foo": {
JobSettings: &jobs.JobSettings{
Tasks: []jobs.Task{
{
ForEachTask: &jobs.ForEachTask{
Task: jobs.Task{
NewCluster: &compute.ClusterSpec{
ClusterName: "my_cluster",
SparkConf: tc.sparkConf,
CustomTags: tc.customTags,
PolicyId: tc.policyId,
},
},
},
},
},
},
},
},
},
},
}
if tc.numWorkers != nil {
bundletest.Mutate(t, b, func(v dyn.Value) (dyn.Value, error) {
return dyn.Set(v, "resources.jobs.foo.tasks[0].for_each_task.task.new_cluster.num_workers", dyn.V(*tc.numWorkers))
})
}
diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), SingleNodeCluster())
assert.Empty(t, diags)
})
}
}


@@ -36,6 +36,7 @@ func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		ValidateSyncPatterns(),
 		JobTaskClusterSpec(),
 		ValidateFolderPermissions(),
+		SingleNodeCluster(),
 	))
 }


@@ -22,6 +22,8 @@ type Lookup struct {
 	Metastore string `json:"metastore,omitempty"`

+	NotificationDestination string `json:"notification_destination,omitempty"`
+
 	Pipeline string `json:"pipeline,omitempty"`

 	Query string `json:"query,omitempty"`
@@ -63,6 +65,9 @@ func (l *Lookup) constructResolver() (resolver, error) {
 	if l.Metastore != "" {
 		resolvers = append(resolvers, resolveMetastore{name: l.Metastore})
 	}
+	if l.NotificationDestination != "" {
+		resolvers = append(resolvers, resolveNotificationDestination{name: l.NotificationDestination})
+	}
 	if l.Pipeline != "" {
 		resolvers = append(resolvers, resolvePipeline{name: l.Pipeline})
 	}


@@ -0,0 +1,46 @@
package variable
import (
"context"
"fmt"
"github.com/databricks/databricks-sdk-go"
"github.com/databricks/databricks-sdk-go/service/settings"
)
type resolveNotificationDestination struct {
name string
}
func (l resolveNotificationDestination) Resolve(ctx context.Context, w *databricks.WorkspaceClient) (string, error) {
result, err := w.NotificationDestinations.ListAll(ctx, settings.ListNotificationDestinationsRequest{
// The default page size for this API is 20.
// We use a higher value to make fewer API calls.
PageSize: 200,
})
if err != nil {
return "", err
}
// Collect all notification destinations with the given name.
var entities []settings.ListNotificationDestinationsResult
for _, entity := range result {
if entity.DisplayName == l.name {
entities = append(entities, entity)
}
}
// Return the ID of the first matching notification destination.
switch len(entities) {
case 0:
return "", fmt.Errorf("notification destination named %q does not exist", l.name)
case 1:
return entities[0].Id, nil
default:
return "", fmt.Errorf("there are %d instances of clusters named %q", len(entities), l.name)
}
}
func (l resolveNotificationDestination) String() string {
return fmt.Sprintf("notification-destination: %s", l.name)
}


@@ -0,0 +1,82 @@
package variable
import (
"context"
"fmt"
"testing"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/settings"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestResolveNotificationDestination_ResolveSuccess(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockNotificationDestinationsAPI()
api.EXPECT().
ListAll(mock.Anything, mock.Anything).
Return([]settings.ListNotificationDestinationsResult{
{Id: "1234", DisplayName: "destination"},
}, nil)
ctx := context.Background()
l := resolveNotificationDestination{name: "destination"}
result, err := l.Resolve(ctx, m.WorkspaceClient)
require.NoError(t, err)
assert.Equal(t, "1234", result)
}
func TestResolveNotificationDestination_ResolveError(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockNotificationDestinationsAPI()
api.EXPECT().
ListAll(mock.Anything, mock.Anything).
Return(nil, fmt.Errorf("bad"))
ctx := context.Background()
l := resolveNotificationDestination{name: "destination"}
_, err := l.Resolve(ctx, m.WorkspaceClient)
assert.ErrorContains(t, err, "bad")
}
func TestResolveNotificationDestination_ResolveNotFound(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockNotificationDestinationsAPI()
api.EXPECT().
ListAll(mock.Anything, mock.Anything).
Return([]settings.ListNotificationDestinationsResult{}, nil)
ctx := context.Background()
l := resolveNotificationDestination{name: "destination"}
_, err := l.Resolve(ctx, m.WorkspaceClient)
require.Error(t, err)
assert.ErrorContains(t, err, `notification destination named "destination" does not exist`)
}
func TestResolveNotificationDestination_ResolveMultiple(t *testing.T) {
m := mocks.NewMockWorkspaceClient(t)
api := m.GetMockNotificationDestinationsAPI()
api.EXPECT().
ListAll(mock.Anything, mock.Anything).
Return([]settings.ListNotificationDestinationsResult{
{Id: "1234", DisplayName: "destination"},
{Id: "5678", DisplayName: "destination"},
}, nil)
ctx := context.Background()
l := resolveNotificationDestination{name: "destination"}
_, err := l.Resolve(ctx, m.WorkspaceClient)
require.Error(t, err)
assert.ErrorContains(t, err, `there are 2 notification destinations named "destination"`)
}
func TestResolveNotificationDestination_String(t *testing.T) {
l := resolveNotificationDestination{name: "name"}
assert.Equal(t, "notification-destination: name", l.String())
}


@@ -1,106 +0,0 @@
package apps
import (
"bytes"
"context"
"fmt"
"path"
"path/filepath"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/deploy"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go/service/apps"
"golang.org/x/sync/errgroup"
"gopkg.in/yaml.v3"
)
type appsDeploy struct {
filerFactory deploy.FilerFactory
}
func Deploy() bundle.Mutator {
return appsDeploy{deploy.AppFiler}
}
func (a appsDeploy) Name() string {
return "apps.Deploy"
}
func (a appsDeploy) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
if len(b.Config.Resources.Apps) == 0 {
return nil
}
errGrp, ctx := errgroup.WithContext(ctx)
w := b.WorkspaceClient()
f, err := a.filerFactory(b)
if err != nil {
return diag.FromErr(err)
}
for _, app := range b.Config.Resources.Apps {
cmdio.LogString(ctx, fmt.Sprintf("Deploying app %s...", app.Name))
errGrp.Go(func() error {
// If the app has a config, we need to deploy it first.
// It means we need to write app.yml file with the content of the config field
// to the remote source code path of the app.
if app.Config != nil {
appPath, err := filepath.Rel(b.Config.Workspace.FilePath, app.SourceCodePath)
if err != nil {
return fmt.Errorf("failed to get relative path of app source code path: %w", err)
}
buf, err := configToYaml(app)
if err != nil {
return err
}
err = f.Write(ctx, path.Join(appPath, "app.yml"), buf, filer.OverwriteIfExists)
if err != nil {
return fmt.Errorf("failed to write %s file: %w", path.Join(app.SourceCodePath, "app.yml"), err)
}
}
wait, err := w.Apps.Deploy(ctx, apps.CreateAppDeploymentRequest{
AppName: app.Name,
AppDeployment: &apps.AppDeployment{
Mode: apps.AppDeploymentModeSnapshot,
SourceCodePath: app.SourceCodePath,
},
})
if err != nil {
return err
}
_, err = wait.Get()
return err
})
}
if err := errGrp.Wait(); err != nil {
return diag.FromErr(err)
}
return nil
}
func configToYaml(app *resources.App) (*bytes.Buffer, error) {
buf := bytes.NewBuffer(nil)
enc := yaml.NewEncoder(buf)
enc.SetIndent(2)
err := enc.Encode(app.Config)
defer enc.Close()
if err != nil {
return nil, fmt.Errorf("failed to encode app config to yaml: %w", err)
}
return buf, nil
}


@@ -1,113 +0,0 @@
package apps
import (
"bytes"
"context"
"os"
"path/filepath"
"testing"
"time"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
mockfiler "github.com/databricks/cli/internal/mocks/libs/filer"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/cli/libs/vfs"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/apps"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
func TestAppDeploy(t *testing.T) {
root := t.TempDir()
err := os.MkdirAll(filepath.Join(root, "app1"), 0700)
require.NoError(t, err)
err = os.MkdirAll(filepath.Join(root, "app2"), 0700)
require.NoError(t, err)
b := &bundle.Bundle{
BundleRootPath: root,
SyncRoot: vfs.MustNew(root),
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/foo@bar.com/",
},
Resources: config.Resources{
Apps: map[string]*resources.App{
"app1": {
App: &apps.App{
Name: "app1",
},
SourceCodePath: "./app1",
Config: map[string]interface{}{
"command": []string{"echo", "hello"},
"env": []map[string]string{
{"name": "MY_APP", "value": "my value"},
},
},
},
"app2": {
App: &apps.App{
Name: "app2",
},
SourceCodePath: "./app2",
},
},
},
},
}
mwc := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(mwc.WorkspaceClient)
wait := &apps.WaitGetDeploymentAppSucceeded[apps.AppDeployment]{
Poll: func(_ time.Duration, _ func(*apps.AppDeployment)) (*apps.AppDeployment, error) {
return nil, nil
},
}
appApi := mwc.GetMockAppsAPI()
appApi.EXPECT().Deploy(mock.Anything, apps.CreateAppDeploymentRequest{
AppName: "app1",
AppDeployment: &apps.AppDeployment{
Mode: apps.AppDeploymentModeSnapshot,
SourceCodePath: "/Workspace/Users/foo@bar.com/files/app1",
},
}).Return(wait, nil)
appApi.EXPECT().Deploy(mock.Anything, apps.CreateAppDeploymentRequest{
AppName: "app2",
AppDeployment: &apps.AppDeployment{
Mode: apps.AppDeploymentModeSnapshot,
SourceCodePath: "/Workspace/Users/foo@bar.com/files/app2",
},
}).Return(wait, nil)
mockFiler := mockfiler.NewMockFiler(t)
mockFiler.EXPECT().Write(mock.Anything, "app1/app.yml", bytes.NewBufferString(`command:
- echo
- hello
env:
- name: MY_APP
value: my value
`), filer.OverwriteIfExists).Return(nil)
bundletest.SetLocation(b, "resources.apps.app1", []dyn.Location{{File: "./databricks.yml"}})
bundletest.SetLocation(b, "resources.apps.app2", []dyn.Location{{File: "./databricks.yml"}})
ctx := context.Background()
diags := bundle.Apply(ctx, b, bundle.Seq(
mutator.DefineDefaultWorkspacePaths(),
mutator.TranslatePaths(),
appsDeploy{
func(b *bundle.Bundle) (filer.Filer, error) {
return mockFiler, nil
},
}))
require.Empty(t, diags)
}


@@ -12,8 +12,3 @@ type FilerFactory func(b *bundle.Bundle) (filer.Filer, error)

 func StateFiler(b *bundle.Bundle) (filer.Filer, error) {
 	return filer.NewWorkspaceFilesClient(b.WorkspaceClient(), b.Config.Workspace.StatePath)
 }
-
-// AppFiler returns a filer.Filer that can be used to read/write Databricks apps related files.
-func AppFiler(b *bundle.Bundle) (filer.Filer, error) {
-	return filer.NewWorkspaceFilesClient(b.WorkspaceClient(), b.Config.Workspace.FilePath)
-}


@@ -2,6 +2,7 @@ package tfdyn
 import (
 	"context"
+	"fmt"
 	"github.com/databricks/cli/bundle/internal/tf/schema"
 	"github.com/databricks/cli/libs/dyn"
@@ -42,11 +43,8 @@ func (appConverter) Convert(ctx context.Context, key string, vin dyn.Value, out *schema.Resources) error {
 	// Configure permissions for this resource.
 	if permissions := convertPermissionsResource(ctx, vin); permissions != nil {
-		// TODO: add when permissions are supported in TF
-		/*
-			permissions.AppId = fmt.Sprintf("${databricks_app.%s.id}", key)
-			out.Permissions["app_"+key] = permissions
-		*/
+		permissions.AppName = fmt.Sprintf("${databricks_app.%s.name}", key)
+		out.Permissions["app_"+key] = permissions
 	}

 	return nil


@@ -81,11 +81,9 @@ func TestConvertApp(t *testing.T) {
 		},
 	}, app)

-	// TODO: Add when permissions are supported in TF
-	/*
 	// Assert equality on the permissions
 	assert.Equal(t, &schema.ResourcePermissions{
-		AppId: "${databricks_app.my_app.id}",
+		AppName: "${databricks_app.my_app.name}",
 		AccessControl: []schema.ResourcePermissionsAccessControl{
 			{
 				PermissionLevel: "CAN_RUN",
@@ -97,6 +95,5 @@ func TestConvertApp(t *testing.T) {
 			},
 		},
 	}, out.Permissions["app_my_app"])
-	*/
 }


@@ -89,6 +89,7 @@ type ResourceApp struct {
 	Description              string `json:"description,omitempty"`
 	Id                       string `json:"id,omitempty"`
 	Name                     string `json:"name"`
+	ServicePrincipalClientId string `json:"service_principal_client_id,omitempty"`
 	ServicePrincipalId       int    `json:"service_principal_id,omitempty"`
 	ServicePrincipalName     string `json:"service_principal_name,omitempty"`
 	UpdateTime               string `json:"update_time,omitempty"`


@@ -10,6 +10,7 @@ type ResourcePermissionsAccessControl struct {
 }

 type ResourcePermissions struct {
+	AppName         string `json:"app_name,omitempty"`
 	Authorization   string `json:"authorization,omitempty"`
 	ClusterId       string `json:"cluster_id,omitempty"`
 	ClusterPolicyId string `json:"cluster_policy_id,omitempty"`


@@ -33,8 +33,8 @@ type ResourceQualityMonitorNotificationsOnNewClassificationTagDetected struct {
 }

 type ResourceQualityMonitorNotifications struct {
-	OnFailure                      []ResourceQualityMonitorNotificationsOnFailure                      `json:"on_failure,omitempty"`
-	OnNewClassificationTagDetected []ResourceQualityMonitorNotificationsOnNewClassificationTagDetected `json:"on_new_classification_tag_detected,omitempty"`
+	OnFailure                      *ResourceQualityMonitorNotificationsOnFailure                      `json:"on_failure,omitempty"`
+	OnNewClassificationTagDetected *ResourceQualityMonitorNotificationsOnNewClassificationTagDetected `json:"on_new_classification_tag_detected,omitempty"`
 }

 type ResourceQualityMonitorSchedule struct {
@@ -67,10 +67,10 @@ type ResourceQualityMonitor struct {
 	TableName     string                                `json:"table_name"`
 	WarehouseId   string                                `json:"warehouse_id,omitempty"`
 	CustomMetrics []ResourceQualityMonitorCustomMetrics `json:"custom_metrics,omitempty"`
-	DataClassificationConfig []ResourceQualityMonitorDataClassificationConfig `json:"data_classification_config,omitempty"`
-	InferenceLog             []ResourceQualityMonitorInferenceLog             `json:"inference_log,omitempty"`
-	Notifications            []ResourceQualityMonitorNotifications            `json:"notifications,omitempty"`
-	Schedule                 []ResourceQualityMonitorSchedule                 `json:"schedule,omitempty"`
-	Snapshot                 []ResourceQualityMonitorSnapshot                 `json:"snapshot,omitempty"`
-	TimeSeries               []ResourceQualityMonitorTimeSeries               `json:"time_series,omitempty"`
+	DataClassificationConfig *ResourceQualityMonitorDataClassificationConfig `json:"data_classification_config,omitempty"`
+	InferenceLog             *ResourceQualityMonitorInferenceLog             `json:"inference_log,omitempty"`
+	Notifications            *ResourceQualityMonitorNotifications            `json:"notifications,omitempty"`
+	Schedule                 *ResourceQualityMonitorSchedule                 `json:"schedule,omitempty"`
+	Snapshot                 *ResourceQualityMonitorSnapshot                 `json:"snapshot,omitempty"`
+	TimeSeries               *ResourceQualityMonitorTimeSeries               `json:"time_series,omitempty"`
 }


@@ -9,7 +9,6 @@ import (
 	"github.com/databricks/cli/bundle/config"
 	"github.com/databricks/cli/bundle/config/mutator"
 	"github.com/databricks/cli/bundle/deploy"
-	"github.com/databricks/cli/bundle/deploy/apps"
 	"github.com/databricks/cli/bundle/deploy/files"
 	"github.com/databricks/cli/bundle/deploy/lock"
 	"github.com/databricks/cli/bundle/deploy/metadata"
@@ -137,7 +136,6 @@ func Deploy(outputHandler sync.OutputHandler) bundle.Mutator {
 		bundle.Seq(
 			bundle.LogString("Deploying resources..."),
 			terraform.Apply(),
-			apps.Deploy(),
 		),
 		bundle.Seq(
 			terraform.StatePush(),

bundle/run/app.go (new file, 248 lines)

@@ -0,0 +1,248 @@
package run
import (
"bytes"
"context"
"fmt"
"path"
"path/filepath"
"time"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/deploy"
"github.com/databricks/cli/bundle/run/output"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go/service/apps"
"github.com/spf13/cobra"
"gopkg.in/yaml.v3"
)
func logProgress(ctx context.Context, msg string) {
if msg == "" {
return
}
cmdio.LogString(ctx, fmt.Sprintf("✓ %s", msg))
}
type appRunner struct {
key
bundle *bundle.Bundle
app *resources.App
filerFactory deploy.FilerFactory
}
func (a *appRunner) Name() string {
if a.app == nil {
return ""
}
return a.app.Name
}
func (a *appRunner) Run(ctx context.Context, opts *Options) (output.RunOutput, error) {
app := a.app
b := a.bundle
if app == nil {
return nil, fmt.Errorf("app is not defined")
}
logProgress(ctx, fmt.Sprintf("Getting the status of the app %s", app.Name))
w := b.WorkspaceClient()
// Check the status of the app first.
createdApp, err := w.Apps.Get(ctx, apps.GetAppRequest{Name: app.Name})
if err != nil {
return nil, err
}
if createdApp.AppStatus != nil {
logProgress(ctx, fmt.Sprintf("App is in %s state", createdApp.AppStatus.State))
}
// If the app is not running, start it.
if createdApp.AppStatus == nil || createdApp.AppStatus.State != apps.ApplicationStateRunning {
err := a.start(ctx)
if err != nil {
return nil, err
}
}
// Deploy the app.
err = a.deploy(ctx)
if err != nil {
return nil, err
}
// TODO: We should return the app URL here.
cmdio.LogString(ctx, "You can access the app at <app-url>")
return nil, nil
}
func (a *appRunner) start(ctx context.Context) error {
app := a.app
b := a.bundle
w := b.WorkspaceClient()
logProgress(ctx, fmt.Sprintf("Starting the app %s", app.Name))
wait, err := w.Apps.Start(ctx, apps.StartAppRequest{Name: app.Name})
if err != nil {
return err
}
startedApp, err := wait.OnProgress(func(p *apps.App) {
if p.AppStatus == nil {
return
}
logProgress(ctx, "App is starting...")
}).Get()
if err != nil {
return err
}
// If the app has a pending deployment, wait for it to complete.
if startedApp.PendingDeployment != nil {
_, err := w.Apps.WaitGetDeploymentAppSucceeded(ctx,
startedApp.Name, startedApp.PendingDeployment.DeploymentId,
20*time.Minute, nil)
if err != nil {
return err
}
}
// If the app has an active deployment, wait for it to complete as well
if startedApp.ActiveDeployment != nil {
_, err := w.Apps.WaitGetDeploymentAppSucceeded(ctx,
startedApp.Name, startedApp.ActiveDeployment.DeploymentId,
20*time.Minute, nil)
if err != nil {
return err
}
}
logProgress(ctx, "App is started!")
return nil
}
func (a *appRunner) deploy(ctx context.Context) error {
app := a.app
b := a.bundle
w := b.WorkspaceClient()
// If the app has a config, we need to deploy it first.
// It means we need to write app.yml file with the content of the config field
// to the remote source code path of the app.
if app.Config != nil {
appPath, err := filepath.Rel(b.Config.Workspace.FilePath, app.SourceCodePath)
if err != nil {
return fmt.Errorf("failed to get relative path of app source code path: %w", err)
}
buf, err := configToYaml(app)
if err != nil {
return err
}
// When the app is started, create a new app deployment and wait for it to complete.
f, err := a.filerFactory(b)
if err != nil {
return err
}
err = f.Write(ctx, path.Join(appPath, "app.yml"), buf, filer.OverwriteIfExists)
if err != nil {
return fmt.Errorf("failed to write %s file: %w", path.Join(app.SourceCodePath, "app.yml"), err)
}
}
wait, err := w.Apps.Deploy(ctx, apps.CreateAppDeploymentRequest{
AppName: app.Name,
AppDeployment: &apps.AppDeployment{
Mode: apps.AppDeploymentModeSnapshot,
SourceCodePath: app.SourceCodePath,
},
})
if err != nil {
return err
}
_, err = wait.OnProgress(func(ad *apps.AppDeployment) {
if ad.Status == nil {
return
}
logProgress(ctx, ad.Status.Message)
}).Get()
if err != nil {
return err
}
return nil
}
func (a *appRunner) Cancel(ctx context.Context) error {
// We should cancel the app by stopping it.
app := a.app
b := a.bundle
if app == nil {
return fmt.Errorf("app is not defined")
}
w := b.WorkspaceClient()
logProgress(ctx, fmt.Sprintf("Stopping app %s", app.Name))
wait, err := w.Apps.Stop(ctx, apps.StopAppRequest{Name: app.Name})
if err != nil {
return err
}
_, err = wait.OnProgress(func(p *apps.App) {
if p.AppStatus == nil {
return
}
logProgress(ctx, p.AppStatus.Message)
}).Get()
logProgress(ctx, "App is stopped!")
return err
}
func (a *appRunner) Restart(ctx context.Context, opts *Options) (output.RunOutput, error) {
// We should restart the app by just running it again meaning a new app deployment will be done.
return a.Run(ctx, opts)
}
func (a *appRunner) ParseArgs(args []string, opts *Options) error {
if len(args) == 0 {
return nil
}
return fmt.Errorf("received %d unexpected positional arguments", len(args))
}
func (a *appRunner) CompleteArgs(args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return nil, cobra.ShellCompDirectiveNoFileComp
}
func configToYaml(app *resources.App) (*bytes.Buffer, error) {
buf := bytes.NewBuffer(nil)
enc := yaml.NewEncoder(buf)
enc.SetIndent(2)
err := enc.Encode(app.Config)
defer enc.Close()
if err != nil {
return nil, fmt.Errorf("failed to encode app config to yaml: %w", err)
}
return buf, nil
}

bundle/run/app_test.go (new file, 208 lines)

@@ -0,0 +1,208 @@
package run
import (
"bytes"
"context"
"os"
"path/filepath"
"testing"
"time"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/mutator"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
mockfiler "github.com/databricks/cli/internal/mocks/libs/filer"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/filer"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/cli/libs/vfs"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/apps"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)

type testAppRunner struct {
m *mocks.MockWorkspaceClient
b *bundle.Bundle
mockFiler *mockfiler.MockFiler
ctx context.Context
}

func (ta *testAppRunner) run(t *testing.T) {
r := appRunner{
key: "my_app",
bundle: ta.b,
app: ta.b.Config.Resources.Apps["my_app"],
filerFactory: func(b *bundle.Bundle) (filer.Filer, error) {
return ta.mockFiler, nil
},
}
_, err := r.Run(ta.ctx, &Options{})
require.NoError(t, err)
}

func setupBundle(t *testing.T) (context.Context, *bundle.Bundle, *mocks.MockWorkspaceClient) {
root := t.TempDir()
err := os.MkdirAll(filepath.Join(root, "my_app"), 0700)
require.NoError(t, err)
b := &bundle.Bundle{
BundleRootPath: root,
SyncRoot: vfs.MustNew(root),
Config: config.Root{
Workspace: config.Workspace{
RootPath: "/Workspace/Users/foo@bar.com/",
},
Resources: config.Resources{
Apps: map[string]*resources.App{
"my_app": {
App: &apps.App{
Name: "my_app",
},
SourceCodePath: "./my_app",
Config: map[string]interface{}{
"command": []string{"echo", "hello"},
"env": []map[string]string{
{"name": "MY_APP", "value": "my value"},
},
},
},
},
},
},
}
mwc := mocks.NewMockWorkspaceClient(t)
b.SetWorkpaceClient(mwc.WorkspaceClient)
bundletest.SetLocation(b, "resources.apps.my_app", []dyn.Location{{File: "./databricks.yml"}})
ctx := context.Background()
ctx = cmdio.InContext(ctx, cmdio.NewIO(flags.OutputText, &bytes.Buffer{}, &bytes.Buffer{}, &bytes.Buffer{}, "", "..."))
ctx = cmdio.NewContext(ctx, cmdio.NewLogger(flags.ModeAppend))
diags := bundle.Apply(ctx, b, bundle.Seq(
mutator.DefineDefaultWorkspacePaths(),
mutator.TranslatePaths(),
))
require.Empty(t, diags)
return ctx, b, mwc
}
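
// For orientation, a hypothetical databricks.yml fragment that would define
// the same resource as the in-memory fixture above (paths and values are the
// test's own, not a recommended layout):
//
//	resources:
//	  apps:
//	    my_app:
//	      name: my_app
//	      source_code_path: ./my_app
//	      config:
//	        command: ["echo", "hello"]
//	        env:
//	          - name: MY_APP
//	            value: "my value"
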
func setupTestApp(t *testing.T, initialAppState apps.ApplicationState) *testAppRunner {
ctx, b, mwc := setupBundle(t)
appApi := mwc.GetMockAppsAPI()
appApi.EXPECT().Get(mock.Anything, apps.GetAppRequest{
Name: "my_app",
}).Return(&apps.App{
Name: "my_app",
AppStatus: &apps.ApplicationStatus{
State: initialAppState,
},
}, nil)
wait := &apps.WaitGetDeploymentAppSucceeded[apps.AppDeployment]{
Poll: func(_ time.Duration, _ func(*apps.AppDeployment)) (*apps.AppDeployment, error) {
return nil, nil
},
}
appApi.EXPECT().Deploy(mock.Anything, apps.CreateAppDeploymentRequest{
AppName: "my_app",
AppDeployment: &apps.AppDeployment{
Mode: apps.AppDeploymentModeSnapshot,
SourceCodePath: "/Workspace/Users/foo@bar.com/files/my_app",
},
}).Return(wait, nil)
mockFiler := mockfiler.NewMockFiler(t)
mockFiler.EXPECT().Write(mock.Anything, "my_app/app.yml", bytes.NewBufferString(`command:
  - echo
  - hello
env:
  - name: MY_APP
    value: my value
`), filer.OverwriteIfExists).Return(nil)
return &testAppRunner{
m: mwc,
b: b,
mockFiler: mockFiler,
ctx: ctx,
}
}
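
// Note on the mocks above: the SDK's Wait* helpers are satisfied by stubbing
// their Poll field, so the runner's OnProgress(...).Get() chain resolves
// immediately and tests never block on the real 20-minute deployment waiters.
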
func TestAppRunStartedApp(t *testing.T) {
r := setupTestApp(t, apps.ApplicationStateRunning)
r.run(t)
}

func TestAppRunStoppedApp(t *testing.T) {
r := setupTestApp(t, apps.ApplicationStateCrashed)
appsApi := r.m.GetMockAppsAPI()
appsApi.EXPECT().Start(mock.Anything, apps.StartAppRequest{
Name: "my_app",
}).Return(&apps.WaitGetAppActive[apps.App]{
Poll: func(_ time.Duration, _ func(*apps.App)) (*apps.App, error) {
return &apps.App{
Name: "my_app",
AppStatus: &apps.ApplicationStatus{
State: apps.ApplicationStateRunning,
},
ActiveDeployment: &apps.AppDeployment{
SourceCodePath: "/foo/bar",
DeploymentId: "123",
Status: &apps.AppDeploymentStatus{
State: apps.AppDeploymentStateInProgress,
},
},
PendingDeployment: &apps.AppDeployment{
SourceCodePath: "/foo/bar",
DeploymentId: "456",
Status: &apps.AppDeploymentStatus{
State: apps.AppDeploymentStateInProgress,
},
},
}, nil
},
}, nil)
appsApi.EXPECT().WaitGetDeploymentAppSucceeded(mock.Anything, "my_app", "123", mock.Anything, mock.Anything).Return(nil, nil)
appsApi.EXPECT().WaitGetDeploymentAppSucceeded(mock.Anything, "my_app", "456", mock.Anything, mock.Anything).Return(nil, nil)
r.run(t)
}

func TestStopApp(t *testing.T) {
ctx, b, mwc := setupBundle(t)
appsApi := mwc.GetMockAppsAPI()
appsApi.EXPECT().Stop(mock.Anything, apps.StopAppRequest{
Name: "my_app",
}).Return(&apps.WaitGetAppStopped[apps.App]{
Poll: func(_ time.Duration, _ func(*apps.App)) (*apps.App, error) {
return &apps.App{
Name: "my_app",
AppStatus: &apps.ApplicationStatus{
State: apps.ApplicationStateUnavailable,
},
}, nil
},
}, nil)
r := appRunner{
key: "my_app",
bundle: b,
app: b.Config.Resources.Apps["my_app"],
filerFactory: func(b *bundle.Bundle) (filer.Filer, error) {
return nil, nil
},
}
err := r.Cancel(ctx)
require.NoError(t, err)
}

bundle/run/runner.go

@@ -8,6 +8,7 @@ import (
	"github.com/databricks/cli/bundle/config/resources"
	refs "github.com/databricks/cli/bundle/resources"
	"github.com/databricks/cli/bundle/run/output"
+	"github.com/databricks/cli/libs/filer"
)

type key string

@@ -42,7 +43,7 @@ type Runner interface {
// IsRunnable returns a filter that only allows runnable resources.
func IsRunnable(ref refs.Reference) bool {
	switch ref.Resource.(type) {
-	case *resources.Job, *resources.Pipeline:
+	case *resources.Job, *resources.Pipeline, *resources.App:
		return true
	default:
		return false
@@ -56,6 +57,15 @@ func ToRunner(b *bundle.Bundle, ref refs.Reference) (Runner, error) {
		return &jobRunner{key: key(ref.KeyWithType), bundle: b, job: resource}, nil
	case *resources.Pipeline:
		return &pipelineRunner{key: key(ref.KeyWithType), bundle: b, pipeline: resource}, nil
+	case *resources.App:
+		return &appRunner{
+			key:    key(ref.KeyWithType),
+			bundle: b,
+			app:    resource,
+			filerFactory: func(b *bundle.Bundle) (filer.Filer, error) {
+				return filer.NewWorkspaceFilesClient(b.WorkspaceClient(), b.Config.Workspace.FilePath)
+			},
+		}, nil
	default:
		return nil, fmt.Errorf("unsupported resource type: %T", resource)
	}

4  go.mod

@@ -27,7 +27,7 @@ require (
	golang.org/x/mod v0.22.0
	golang.org/x/oauth2 v0.24.0
	golang.org/x/sync v0.9.0
-	golang.org/x/term v0.25.0
+	golang.org/x/term v0.26.0
	golang.org/x/text v0.20.0
	gopkg.in/ini.v1 v1.67.0 // Apache 2.0
	gopkg.in/yaml.v3 v3.0.1
@@ -64,7 +64,7 @@ require (
	go.opentelemetry.io/otel/trace v1.24.0 // indirect
	golang.org/x/crypto v0.24.0 // indirect
	golang.org/x/net v0.26.0 // indirect
-	golang.org/x/sys v0.26.0 // indirect
+	golang.org/x/sys v0.27.0 // indirect
	golang.org/x/time v0.5.0 // indirect
	google.golang.org/api v0.182.0 // indirect
	google.golang.org/genproto/googleapis/rpc v0.0.0-20240521202816-d264139d666e // indirect

8  go.sum  generated

@@ -212,10 +212,10 @@ golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
-golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s=
+golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
-golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
+golang.org/x/term v0.26.0 h1:WEQa6V3Gja/BhNxg540hBip/kkaYtRg3cxg4oXSw4AU=
+golang.org/x/term v0.26.0/go.mod h1:Si5m1o57C5nBNQo5z1iq+XDijt21BDBDp2bK0QI8e3E=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=