Compare commits

...

12 Commits

Author SHA1 Message Date
Andrew Nester 60782b57bd
Added close stale issues workflow (#634)
## Changes
Added workflows for closing stale issues.

It adds a GitHub Action that warns about and then auto-closes stale issues.
2025-01-02 14:23:00 +01:00
shreyas-goenka 7beb0fb8b5
Add validation mutator for volume `artifact_path` (#2050)
## Changes
This PR:
1. Incrementally improves the error messages shown to the user when the
volume they are referring to in `workspace.artifact_path` does not
exist.
2. Performs this validation in both `bundle validate` and `bundle
deploy`, whereas before it ran only on deployments.
3. Runs "fast" validations on `bundle deploy`, which earlier were
only run on `bundle validate`.


## Tests
Unit tests and manually. Also, existing integration tests provide
coverage (`TestUploadArtifactToVolumeNotYetDeployed`,
`TestUploadArtifactFileToVolumeThatDoesNotExist`)

Examples:
```
.venv➜  bundle-playground git:(master) ✗ cli bundle validate
Error: cannot access volume capital.whatever.my_volume: User does not have READ VOLUME on Volume 'capital.whatever.my_volume'.
  at workspace.artifact_path
  in databricks.yml:7:18
```

and

```
.venv➜  bundle-playground git:(master) ✗ cli bundle validate
Error: volume capital.whatever.foobar does not exist
  at workspace.artifact_path
     resources.volumes.foo
  in databricks.yml:7:18
     databricks.yml:12:7

You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.
```
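
The validation boils down to parsing the volume coordinates out of the artifact path. A minimal, self-contained sketch of that parsing step (`extractVolume` here is an illustrative stand-in, not the CLI's actual helper):

```go
package main

import (
	"fmt"
	"strings"
)

// extractVolume is an illustrative stand-in for the validator's parsing step:
// a UC volume artifact_path must look like /Volumes/<catalog>/<schema>/<volume>/...
func extractVolume(artifactPath string) (catalog, schema, volume string, err error) {
	if !strings.HasPrefix(artifactPath, "/Volumes/") {
		return "", "", "", fmt.Errorf("expected artifact_path to start with /Volumes/, got %s", artifactPath)
	}
	parts := strings.Split(artifactPath, "/")
	if len(parts) < 5 || parts[2] == "" || parts[3] == "" || parts[4] == "" {
		return "", "", "", fmt.Errorf("expected /Volumes/<catalog>/<schema>/<volume>/..., got %s", artifactPath)
	}
	return parts[2], parts[3], parts[4], nil
}

func main() {
	c, s, v, err := extractVolume("/Volumes/main/default/my_volume/artifacts")
	fmt.Println(c, s, v, err)
}
```

Only once the path parses cleanly does the mutator go on to look the volume up via the API, which is where the two errors shown above originate.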
2025-01-02 17:23:15 +05:30
shreyas-goenka 509f5aba6a
Snooze mlops-stacks integration test (#2063)
## Changes
https://github.com/databricks/mlops-stacks/pull/187 broke mlops-stacks
deployments for non-UC projects. Snoozing the test until upstream is
fixed.

## Tests
The test is skipped on my local machine. A CI run will verify that it's also
skipped on GitHub Actions runners.
2025-01-02 11:39:11 +00:00
Denis Bilenko cae21693bb
lint: Raise max issues output (#2067)
By default it stops after 3 issues of a given type, which gives a false
impression and is unhelpful when fixing issues with aider.

1000 is effectively unlimited, but still a cap in case there is a bug
in a linter.
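
The corresponding golangci-lint `issues` section (values from this PR; the defaults it overrides are 50 per linter and 3 per identical issue):

```yaml
# .golangci.yaml
issues:
  max-issues-per-linter: 1000
  max-same-issues: 1000
```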
2025-01-02 12:23:48 +01:00
shreyas-goenka 890c57eabe
Enable debugging integration tests in VS Code (#2053)
## Changes
This PR adds back debugging functionality that was lost during the migration
to `internal.Main` as the entry point for integration tests.

The PR that caused the regression:
https://github.com/databricks/cli/pull/2009, specifically the addition
of `internal.Main` as the entry point for all integration tests.

## Tests
Manually, by trying to debug a test.
2025-01-02 16:52:33 +05:30
Denis Bilenko ea8445af9e
Make "make" output the commands it runs (#2066)
This is useful on CI and locally for debugging and being able to
copy-paste command to tweak the options.

Removed redundant and imprecise messages like "✓ Running tests ...".
2025-01-02 12:18:38 +01:00
Denis Bilenko ef86d2bcae
Speed up best case for "make test" 12x (#2060)
On main branch: ‘make test’ takes about 33s
On this branch: ‘make test’ takes about 2.7s

(all measurements are for hot cache)

What’s done (from highest impact to lowest):
- Remove the -coverprofile= option: it was disabling "go test"'s
built-in cache and also took extra time to calculate the coverage
(extra 21s).
- Exclude the ./integration/ folder: there are no unit tests there, but
having it included adds significant time. "go test"'s caching also does
not work there for me, due to the presence of TestMain() (extra 7.2s).
- Remove the dependency on "make lint": nice to have, but slow to re-check
the whole repo, and it should already be done by the IDE (extra 2.5s).
- Remove the dependency on "make vendor": rarely needed; on CI it is
already executed separately (extra 1.1s).

The coverage option is still available under "make cover". Use "make
showcover" to show it.

I’ve also removed the separate "make testonly" target. If you only want
tests, run "make test". If you want lint+test, run "make lint test", etc.

I've also modified the test command, removing the unnecessary -short, -v,
and --raw-command options.
2025-01-02 12:06:01 +01:00
Denis Bilenko 0b80784df7
Enable testifylint and fix the issues (#2065)
## Changes
- Enable new linter: testifylint.
- Apply fixes with --fix.
- Fix remaining issues (mostly with aider).

There were 2 cases where --fix did the wrong thing - this seems to be a
bug in the linter: https://github.com/Antonboom/testifylint/issues/210

Nonetheless, I kept that check enabled; it seems useful, and such issues
just need to be fixed manually after the autofix.

## Tests
Existing tests
2025-01-02 12:03:41 +01:00
Pieter Noordhuis b053bfb4de
Verify that the bundle schema is up to date in CI (#2061)
## Changes

I noticed a diff in the schema in #2052.

This check should be performed automatically.

## Tests

This PR includes a commit that changes the schema to check that the
workflow actually fails.
2025-01-02 11:41:55 +01:00
Denis Bilenko c262b75c3d
Make lint.sh run golangci-lint only once in the best case (#2062)
Follow up to #2051 and #2056.

Always running golangci-lint twice is measurably slower (1.5s vs 2.5s),
so only run it twice when necessary.
2025-01-02 11:33:06 +01:00
Denis Bilenko d7f69f6a5d
Upgrade golangci-lint to v1.63.1 (#2064)
Upgrade your laptops with: brew install golangci-lint

This has a lot more autofixes, which makes it easier to adopt those
linters.

https://golangci-lint.run/product/changelog/#v1630
2025-01-02 11:31:35 +01:00
Denis Bilenko 3f75240a56
Improve test output to include correct location (#2058)
## Changes
- Add t.Helper() in testcli-related helpers; this ensures that output is
attributed correctly to the test case and not to the helper.
- Modify testcli.Run() to run the process in the foreground. This is needed
for t.Helper to work.
- Extend a few assertions with messages to help attribute them to the proper
helper where needed.

## Tests
Manually reviewed test output.

Before:

```
+ go test --timeout 3h -v -run TestDefaultPython/3.9 ./integration/bundle/
=== RUN   TestDefaultPython
=== RUN   TestDefaultPython/3.9
    workspace.go:26: aws
    golden.go:14: run args: [bundle, init, default-python, --config-file, config.json]
    runner.go:206: [databricks stderr]:
    runner.go:206: [databricks stderr]: Welcome to the default Python template for Databricks Asset Bundles!
...
    testdiff.go:23:
                Error Trace:    /Users/denis.bilenko/work/cli/libs/testdiff/testdiff.go:23
                                                        /Users/denis.bilenko/work/cli/libs/testdiff/golden.go:43
                                                        /Users/denis.bilenko/work/cli/internal/testcli/golden.go:23
                                                        /Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:92
                                                        /Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:45
...
```

After:

```
+ go test --timeout 3h -v -run TestDefaultPython/3.9 ./integration/bundle/
=== RUN   TestDefaultPython
=== RUN   TestDefaultPython/3.9
    init_default_python_test.go:51: CLOUD_ENV=aws
    init_default_python_test.go:92:   args: bundle, init, default-python, --config-file, config.json
    init_default_python_test.go:92: stderr:
    init_default_python_test.go:92: stderr: Welcome to the default Python template for Databricks Asset Bundles!
...
    init_default_python_test.go:92:
                Error Trace:    /Users/denis.bilenko/work/cli/libs/testdiff/testdiff.go:24
                                                        /Users/denis.bilenko/work/cli/libs/testdiff/golden.go:46
                                                        /Users/denis.bilenko/work/cli/internal/testcli/golden.go:23
                                                        /Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:92
                                                        /Users/denis.bilenko/work/cli/integration/bundle/init_default_python_test.go:45
...
```
2025-01-02 10:49:21 +01:00
100 changed files with 882 additions and 726 deletions


@@ -0,0 +1,36 @@
```diff
+name: "Close Stale Issues"
+
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: "0 0 * * *" # Run at midnight every day
+
+jobs:
+  cleanup:
+    permissions:
+      issues: write
+      contents: read
+      pull-requests: write
+
+    runs-on: ubuntu-latest
+    name: Stale issue job
+    steps:
+      - uses: actions/stale@v9
+        with:
+          stale-issue-message: This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.
+          stale-pr-message: This PR has not received an update in a while. If you want to keep this PR open, please leave a comment below or push a new commit and auto-close will be canceled.
+
+          # These labels are required
+          stale-issue-label: Stale
+          stale-pr-label: Stale
+
+          exempt-issue-labels: No Autoclose
+          exempt-pr-labels: No Autoclose
+
+          # Issue timing
+          days-before-stale: 30
+          days-before-close: 7
+
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          loglevel: DEBUG
+          # TODO: Remove dry-run after merge when confirmed it works correctly
+          dry-run: true
```


```diff
@@ -55,7 +55,7 @@ jobs:
           pip3 install wheel
       - name: Run tests
-        run: make testonly
+        run: make test

   golangci:
     name: lint
@@ -75,7 +75,7 @@
       - name: golangci-lint
         uses: golangci/golangci-lint-action@v6
         with:
-          version: v1.62.2
+          version: v1.63.1
           args: --timeout=15m

   validate-bundle-schema:
@@ -90,6 +90,13 @@
         with:
           go-version: 1.23.4

+      - name: Verify that the schema is up to date
+        run: |
+          if ! ( make schema && git diff --exit-code ); then
+            echo "The schema is not up to date. Please run 'make schema' and commit the changes."
+            exit 1
+          fi
+
       # Github repo: https://github.com/ajv-validator/ajv-cli
       - name: Install ajv-cli
         run: npm install -g ajv-cli@5.0.0
```


```diff
@@ -11,6 +11,7 @@ linters:
     - gofmt
     - gofumpt
     - goimports
+    - testifylint

 linters-settings:
   govet:
     enable-all: true
@@ -32,7 +33,12 @@ linters-settings:
   gofumpt:
     module-path: github.com/databricks/cli
     extra-rules: true
-  #goimports:
-  #  local-prefixes: github.com/databricks/cli
+  testifylint:
+    enable-all: true
+    disable:
+      # good check, but we have too many assert.(No)?Errorf? so excluding for now
+      - require-error

 issues:
   exclude-dirs-use-default: false # recommended by docs https://golangci-lint.run/usage/false-positives/
+  max-issues-per-linter: 1000
+  max-same-issues: 1000
```


```diff
@@ -1,38 +1,33 @@
 default: build

-lint: vendor
-	@echo "✓ Linting source code with https://golangci-lint.run/ (with --fix)..."
-	@./lint.sh ./...
+PACKAGES=./libs/... ./internal/... ./cmd/... ./bundle/... .

-lintcheck: vendor
-	@echo "✓ Linting source code with https://golangci-lint.run/ ..."
-	@golangci-lint run ./...
+lint:
+	./lint.sh ./...

-test: lint testonly
+lintcheck:
+	golangci-lint run ./...

-testonly:
-	@echo "✓ Running tests ..."
-	@gotestsum --format pkgname-and-test-fails --no-summary=skipped --raw-command go test -v -json -short -coverprofile=coverage.txt ./...
+test:
+	gotestsum --format pkgname-and-test-fails --no-summary=skipped -- ${PACKAGES}

-coverage: test
-	@echo "✓ Opening coverage for unit tests ..."
-	@go tool cover -html=coverage.txt
+cover:
+	gotestsum --format pkgname-and-test-fails --no-summary=skipped -- -coverprofile=coverage.txt ${PACKAGES}
+
+showcover:
+	go tool cover -html=coverage.txt

 build: vendor
-	@echo "✓ Building source code with go build ..."
-	@go build -mod vendor
+	go build -mod vendor

 snapshot:
-	@echo "✓ Building dev snapshot"
-	@go build -o .databricks/databricks
+	go build -o .databricks/databricks

 vendor:
-	@echo "✓ Filling vendor folder with library code ..."
-	@go mod vendor
+	go mod vendor

 schema:
-	@echo "✓ Generating json-schema ..."
-	@go run ./bundle/internal/schema ./bundle/internal/schema ./bundle/schema/jsonschema.json
+	go run ./bundle/internal/schema ./bundle/internal/schema ./bundle/schema/jsonschema.json

 INTEGRATION = gotestsum --format github-actions --rerun-fails --jsonfile output.json --packages "./integration/..." -- -parallel 4 -timeout=2h
@@ -42,4 +37,4 @@ integration:

 integration-short:
 	$(INTEGRATION) -short

-.PHONY: lint lintcheck test testonly coverage build snapshot vendor schema integration integration-short
+.PHONY: lint lintcheck test cover showcover build snapshot vendor schema integration integration-short
```


```diff
@@ -2,7 +2,6 @@ package bundle

 import (
 	"context"
-	"errors"
 	"io/fs"
 	"os"
 	"path/filepath"
@@ -16,7 +15,7 @@ import (
 func TestLoadNotExists(t *testing.T) {
 	b, err := Load(context.Background(), "/doesntexist")
-	assert.True(t, errors.Is(err, fs.ErrNotExist))
+	assert.ErrorIs(t, err, fs.ErrNotExist)
 	assert.Nil(t, b)
 }
```


```diff
@@ -109,19 +109,19 @@ func TestConfigureDashboardDefaultsEmbedCredentials(t *testing.T) {
 	// Set to true; still true.
 	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d1.embed_credentials")
 	if assert.NoError(t, err) {
-		assert.Equal(t, true, v.MustBool())
+		assert.True(t, v.MustBool())
 	}

 	// Set to false; still false.
 	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d2.embed_credentials")
 	if assert.NoError(t, err) {
-		assert.Equal(t, false, v.MustBool())
+		assert.False(t, v.MustBool())
 	}

 	// Not set; now false.
 	v, err = dyn.Get(b.Config.Value(), "resources.dashboards.d3.embed_credentials")
 	if assert.NoError(t, err) {
-		assert.Equal(t, false, v.MustBool())
+		assert.False(t, v.MustBool())
 	}

 	// No valid dashboard; no change.
```


```diff
@@ -28,8 +28,8 @@ func TestDefaultQueueingApplyNoJobs(t *testing.T) {
 		},
 	}
 	d := bundle.Apply(context.Background(), b, DefaultQueueing())
-	assert.Len(t, d, 0)
-	assert.Len(t, b.Config.Resources.Jobs, 0)
+	assert.Empty(t, d)
+	assert.Empty(t, b.Config.Resources.Jobs)
 }

 func TestDefaultQueueingApplyJobsAlreadyEnabled(t *testing.T) {
@@ -47,7 +47,7 @@ func TestDefaultQueueingApplyJobsAlreadyEnabled(t *testing.T) {
 		},
 	}
 	d := bundle.Apply(context.Background(), b, DefaultQueueing())
-	assert.Len(t, d, 0)
+	assert.Empty(t, d)
 	assert.True(t, b.Config.Resources.Jobs["job"].Queue.Enabled)
 }
@@ -66,7 +66,7 @@ func TestDefaultQueueingApplyEnableQueueing(t *testing.T) {
 		},
 	}
 	d := bundle.Apply(context.Background(), b, DefaultQueueing())
-	assert.Len(t, d, 0)
+	assert.Empty(t, d)
 	assert.NotNil(t, b.Config.Resources.Jobs["job"].Queue)
 	assert.True(t, b.Config.Resources.Jobs["job"].Queue.Enabled)
 }
@@ -96,7 +96,7 @@ func TestDefaultQueueingApplyWithMultipleJobs(t *testing.T) {
 		},
 	}
 	d := bundle.Apply(context.Background(), b, DefaultQueueing())
-	assert.Len(t, d, 0)
+	assert.Empty(t, d)
 	assert.False(t, b.Config.Resources.Jobs["job1"].Queue.Enabled)
 	assert.True(t, b.Config.Resources.Jobs["job2"].Queue.Enabled)
 	assert.True(t, b.Config.Resources.Jobs["job3"].Queue.Enabled)
```


```diff
@@ -44,7 +44,7 @@ func TestEnvironmentsToTargetsWithEnvironmentsDefined(t *testing.T) {
 	diags := bundle.Apply(context.Background(), b, mutator.EnvironmentsToTargets())
 	require.NoError(t, diags.Error())
-	assert.Len(t, b.Config.Environments, 0)
+	assert.Empty(t, b.Config.Environments)
 	assert.Len(t, b.Config.Targets, 1)
 }
@@ -61,6 +61,6 @@ func TestEnvironmentsToTargetsWithTargetsDefined(t *testing.T) {
 	diags := bundle.Apply(context.Background(), b, mutator.EnvironmentsToTargets())
 	require.NoError(t, diags.Error())
-	assert.Len(t, b.Config.Environments, 0)
+	assert.Empty(t, b.Config.Environments)
 	assert.Len(t, b.Config.Targets, 1)
 }
```


```diff
@@ -74,8 +74,8 @@ func TestMergeJobTasks(t *testing.T) {
 	assert.Equal(t, "i3.2xlarge", cluster.NodeTypeId)
 	assert.Equal(t, 4, cluster.NumWorkers)
 	assert.Len(t, task0.Libraries, 2)
-	assert.Equal(t, task0.Libraries[0].Whl, "package1")
-	assert.Equal(t, task0.Libraries[1].Pypi.Package, "package2")
+	assert.Equal(t, "package1", task0.Libraries[0].Whl)
+	assert.Equal(t, "package2", task0.Libraries[1].Pypi.Package)

 	// This task was left untouched.
 	task1 := j.Tasks[1].NewCluster
```

```diff
@@ -163,18 +163,18 @@ func TestProcessTargetModeDevelopment(t *testing.T) {
 	// Job 1
 	assert.Equal(t, "[dev lennart] job1", b.Config.Resources.Jobs["job1"].Name)
-	assert.Equal(t, b.Config.Resources.Jobs["job1"].Tags["existing"], "tag")
-	assert.Equal(t, b.Config.Resources.Jobs["job1"].Tags["dev"], "lennart")
-	assert.Equal(t, b.Config.Resources.Jobs["job1"].Schedule.PauseStatus, jobs.PauseStatusPaused)
+	assert.Equal(t, "tag", b.Config.Resources.Jobs["job1"].Tags["existing"])
+	assert.Equal(t, "lennart", b.Config.Resources.Jobs["job1"].Tags["dev"])
+	assert.Equal(t, jobs.PauseStatusPaused, b.Config.Resources.Jobs["job1"].Schedule.PauseStatus)

 	// Job 2
 	assert.Equal(t, "[dev lennart] job2", b.Config.Resources.Jobs["job2"].Name)
-	assert.Equal(t, b.Config.Resources.Jobs["job2"].Tags["dev"], "lennart")
-	assert.Equal(t, b.Config.Resources.Jobs["job2"].Schedule.PauseStatus, jobs.PauseStatusUnpaused)
+	assert.Equal(t, "lennart", b.Config.Resources.Jobs["job2"].Tags["dev"])
+	assert.Equal(t, jobs.PauseStatusUnpaused, b.Config.Resources.Jobs["job2"].Schedule.PauseStatus)

 	// Pipeline 1
 	assert.Equal(t, "[dev lennart] pipeline1", b.Config.Resources.Pipelines["pipeline1"].Name)
-	assert.Equal(t, false, b.Config.Resources.Pipelines["pipeline1"].Continuous)
+	assert.False(t, b.Config.Resources.Pipelines["pipeline1"].Continuous)
 	assert.True(t, b.Config.Resources.Pipelines["pipeline1"].PipelineSpec.Development)

 	// Experiment 1
```


```diff
@@ -185,11 +185,11 @@ func TestResolveVariableReferencesForPrimitiveNonStringFields(t *testing.T) {
 	// Apply for the variable prefix. This should resolve the variables to their values.
 	diags = bundle.Apply(context.Background(), b, ResolveVariableReferences("variables"))
 	require.NoError(t, diags.Error())
-	assert.Equal(t, true, b.Config.Resources.Jobs["job1"].JobSettings.NotificationSettings.NoAlertForCanceledRuns)
-	assert.Equal(t, true, b.Config.Resources.Jobs["job1"].JobSettings.NotificationSettings.NoAlertForSkippedRuns)
+	assert.True(t, b.Config.Resources.Jobs["job1"].JobSettings.NotificationSettings.NoAlertForCanceledRuns)
+	assert.True(t, b.Config.Resources.Jobs["job1"].JobSettings.NotificationSettings.NoAlertForSkippedRuns)
 	assert.Equal(t, 1, b.Config.Resources.Jobs["job1"].JobSettings.Tasks[0].NewCluster.Autoscale.MinWorkers)
 	assert.Equal(t, 2, b.Config.Resources.Jobs["job1"].JobSettings.Tasks[0].NewCluster.Autoscale.MaxWorkers)
-	assert.Equal(t, 0.5, b.Config.Resources.Jobs["job1"].JobSettings.Tasks[0].NewCluster.AzureAttributes.SpotBidMaxPrice)
+	assert.InDelta(t, 0.5, b.Config.Resources.Jobs["job1"].JobSettings.Tasks[0].NewCluster.AzureAttributes.SpotBidMaxPrice, 0.0001)
 }

 func TestResolveComplexVariable(t *testing.T) {
```


```diff
@@ -71,7 +71,7 @@ func TestNoWorkspacePrefixUsed(t *testing.T) {
 	}

 	for _, d := range diags {
-		require.Equal(t, d.Severity, diag.Warning)
+		require.Equal(t, diag.Warning, d.Severity)
 		require.Contains(t, expectedErrors, d.Summary)
 		delete(expectedErrors, d.Summary)
 	}
```


```diff
@@ -30,7 +30,7 @@ func TestSetVariableFromProcessEnvVar(t *testing.T) {
 	err = convert.ToTyped(&variable, v)
 	require.NoError(t, err)
-	assert.Equal(t, variable.Value, "process-env")
+	assert.Equal(t, "process-env", variable.Value)
 }

 func TestSetVariableUsingDefaultValue(t *testing.T) {
@@ -48,7 +48,7 @@ func TestSetVariableUsingDefaultValue(t *testing.T) {
 	err = convert.ToTyped(&variable, v)
 	require.NoError(t, err)
-	assert.Equal(t, variable.Value, "default")
+	assert.Equal(t, "default", variable.Value)
 }

 func TestSetVariableWhenAlreadyAValueIsAssigned(t *testing.T) {
@@ -70,7 +70,7 @@ func TestSetVariableWhenAlreadyAValueIsAssigned(t *testing.T) {
 	err = convert.ToTyped(&variable, v)
 	require.NoError(t, err)
-	assert.Equal(t, variable.Value, "assigned-value")
+	assert.Equal(t, "assigned-value", variable.Value)
 }

 func TestSetVariableEnvVarValueDoesNotOverridePresetValue(t *testing.T) {
@@ -95,7 +95,7 @@ func TestSetVariableEnvVarValueDoesNotOverridePresetValue(t *testing.T) {
 	err = convert.ToTyped(&variable, v)
 	require.NoError(t, err)
-	assert.Equal(t, variable.Value, "assigned-value")
+	assert.Equal(t, "assigned-value", variable.Value)
 }

 func TestSetVariablesErrorsIfAValueCouldNotBeResolved(t *testing.T) {
```


```diff
@@ -37,11 +37,11 @@ func TestCustomMarshallerIsImplemented(t *testing.T) {
 		field := rt.Field(i)

 		// Fields in Resources are expected be of the form map[string]*resourceStruct
-		assert.Equal(t, field.Type.Kind(), reflect.Map, "Resource %s is not a map", field.Name)
+		assert.Equal(t, reflect.Map, field.Type.Kind(), "Resource %s is not a map", field.Name)
 		kt := field.Type.Key()
-		assert.Equal(t, kt.Kind(), reflect.String, "Resource %s is not a map with string keys", field.Name)
+		assert.Equal(t, reflect.String, kt.Kind(), "Resource %s is not a map with string keys", field.Name)
 		vt := field.Type.Elem()
-		assert.Equal(t, vt.Kind(), reflect.Ptr, "Resource %s is not a map with pointer values", field.Name)
+		assert.Equal(t, reflect.Ptr, vt.Kind(), "Resource %s is not a map with pointer values", field.Name)

 		// Marshalling a resourceStruct will panic if resourceStruct does not have a custom marshaller
 		// This is because resourceStruct embeds a Go SDK struct that implements
```


@ -0,0 +1,51 @@
package validate
import (
"context"
"github.com/databricks/cli/bundle"
"github.com/databricks/cli/libs/diag"
)
// FastValidate runs a subset of fast validation checks. This is a subset of the full
// suite of validation mutators that satisfy ANY ONE of the following criteria:
//
// 1. No file i/o or network requests are made in the mutator.
// 2. The validation is blocking for bundle deployments.
//
// The full suite of validation mutators is available in the [Validate] mutator.
type fastValidateReadonly struct{}
func FastValidateReadonly() bundle.ReadOnlyMutator {
return &fastValidateReadonly{}
}
func (f *fastValidateReadonly) Name() string {
return "fast_validate(readonly)"
}
func (f *fastValidateReadonly) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
return bundle.ApplyReadOnly(ctx, rb, bundle.Parallel(
// Fast mutators with only in-memory checks
JobClusterKeyDefined(),
JobTaskClusterSpec(),
SingleNodeCluster(),
// Blocking mutators. Deployments will fail if these checks fail.
ValidateArtifactPath(),
))
}
type fastValidate struct{}
func FastValidate() bundle.Mutator {
return &fastValidate{}
}
func (f *fastValidate) Name() string {
return "fast_validate"
}
func (f *fastValidate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
return bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), FastValidateReadonly())
}
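
The fan-out above relies on bundle.Parallel collecting diagnostics from independent read-only checks. A stdlib-only sketch of that pattern (names here are illustrative stand-ins, not the CLI's real API):

```go
package main

import (
	"fmt"
	"sync"
)

// check is a stand-in for a read-only validation mutator: it inspects state
// and returns diagnostics without mutating anything.
type check func() []string

// parallel runs all checks concurrently and merges their diagnostics,
// mirroring what bundle.Parallel does for ReadOnlyMutators.
func parallel(checks ...check) []string {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		diags []string
	)
	for _, c := range checks {
		wg.Add(1)
		go func(c check) {
			defer wg.Done()
			d := c()
			mu.Lock()
			diags = append(diags, d...)
			mu.Unlock()
		}(c)
	}
	wg.Wait()
	return diags
}

func main() {
	diags := parallel(
		func() []string { return nil },                          // fast in-memory check passes
		func() []string { return []string{"volume not found"} }, // blocking check reports an error
	)
	fmt.Println(len(diags))
}
```

Because every check is read-only, running them concurrently is safe and the slowest check bounds the total validation time.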


```diff
@@ -87,7 +87,7 @@ func TestFilesToSync_EverythingIgnored(t *testing.T) {
 	ctx := context.Background()
 	rb := bundle.ReadOnly(b)
 	diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
-	require.Equal(t, 1, len(diags))
+	require.Len(t, diags, 1)
 	assert.Equal(t, diag.Warning, diags[0].Severity)
 	assert.Equal(t, "There are no files to sync, please check your .gitignore", diags[0].Summary)
 }
@@ -101,7 +101,7 @@ func TestFilesToSync_EverythingExcluded(t *testing.T) {
 	ctx := context.Background()
 	rb := bundle.ReadOnly(b)
 	diags := bundle.ApplyReadOnly(ctx, rb, FilesToSync())
-	require.Equal(t, 1, len(diags))
+	require.Len(t, diags, 1)
 	assert.Equal(t, diag.Warning, diags[0].Severity)
 	assert.Equal(t, "There are no files to sync, please check your .gitignore and sync.exclude configuration", diags[0].Summary)
 }
```


```diff
@@ -34,7 +34,7 @@ func TestJobClusterKeyDefined(t *testing.T) {
 	}

 	diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), JobClusterKeyDefined())
-	require.Len(t, diags, 0)
+	require.Empty(t, diags)
 	require.NoError(t, diags.Error())
 }
@@ -59,8 +59,8 @@ func TestJobClusterKeyNotDefined(t *testing.T) {
 	diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), JobClusterKeyDefined())
 	require.Len(t, diags, 1)
 	require.NoError(t, diags.Error())
-	require.Equal(t, diags[0].Severity, diag.Warning)
-	require.Equal(t, diags[0].Summary, "job_cluster_key do-not-exist is not defined")
+	require.Equal(t, diag.Warning, diags[0].Severity)
+	require.Equal(t, "job_cluster_key do-not-exist is not defined", diags[0].Summary)
 }

 func TestJobClusterKeyDefinedInDifferentJob(t *testing.T) {
@@ -92,6 +92,6 @@ func TestJobClusterKeyDefinedInDifferentJob(t *testing.T) {
 	diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), JobClusterKeyDefined())
 	require.Len(t, diags, 1)
 	require.NoError(t, diags.Error())
-	require.Equal(t, diags[0].Severity, diag.Warning)
-	require.Equal(t, diags[0].Summary, "job_cluster_key do-not-exist is not defined")
+	require.Equal(t, diag.Warning, diags[0].Severity)
+	require.Equal(t, "job_cluster_key do-not-exist is not defined", diags[0].Summary)
 }
```


```diff
@@ -30,12 +30,13 @@ func (l location) Path() dyn.Path {

 // Apply implements bundle.Mutator.
 func (v *validate) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 	return bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), bundle.Parallel(
-		JobClusterKeyDefined(),
+		FastValidateReadonly(),
+
+		// Slow mutators that require network or file i/o. These are only
+		// run in the `bundle validate` command.
 		FilesToSync(),
-		ValidateSyncPatterns(),
-		JobTaskClusterSpec(),
 		ValidateFolderPermissions(),
-		SingleNodeCluster(),
+		ValidateSyncPatterns(),
 	))
 }
```


@ -0,0 +1,129 @@
package validate

import (
	"context"
	"errors"
	"fmt"
	"slices"
	"strings"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/config"
	"github.com/databricks/cli/bundle/libraries"
	"github.com/databricks/cli/libs/diag"
	"github.com/databricks/cli/libs/dyn"
	"github.com/databricks/cli/libs/dyn/dynvar"
	"github.com/databricks/databricks-sdk-go/apierr"
)

type validateArtifactPath struct{}

func ValidateArtifactPath() bundle.ReadOnlyMutator {
	return &validateArtifactPath{}
}

func (v *validateArtifactPath) Name() string {
	return "validate:artifact_paths"
}

func extractVolumeFromPath(artifactPath string) (string, string, string, error) {
	if !libraries.IsVolumesPath(artifactPath) {
		return "", "", "", fmt.Errorf("expected artifact_path to start with /Volumes/, got %s", artifactPath)
	}

	parts := strings.Split(artifactPath, "/")
	volumeFormatErr := fmt.Errorf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", artifactPath)

	// Incorrect format.
	if len(parts) < 5 {
		return "", "", "", volumeFormatErr
	}

	catalogName := parts[2]
	schemaName := parts[3]
	volumeName := parts[4]

	// Incorrect format.
	if catalogName == "" || schemaName == "" || volumeName == "" {
		return "", "", "", volumeFormatErr
	}

	return catalogName, schemaName, volumeName, nil
}

func findVolumeInBundle(r config.Root, catalogName, schemaName, volumeName string) (dyn.Path, []dyn.Location, bool) {
	volumes := r.Resources.Volumes
	for k, v := range volumes {
		if v.CatalogName != catalogName || v.Name != volumeName {
			continue
		}

		// UC schemas can be defined in the bundle itself, and thus might be interpolated
		// at runtime via the ${resources.schemas.<name>} syntax. Thus we match the volume
		// definition if the schema name is the same as the one in the bundle, or if the
		// schema name is interpolated.
		// We only have to check for ${resources.schemas...} references because any
		// other valid reference (like ${var.foo}) would have been interpolated by this point.
		p, ok := dynvar.PureReferenceToPath(v.SchemaName)
		isSchemaDefinedInBundle := ok && p.HasPrefix(dyn.Path{dyn.Key("resources"), dyn.Key("schemas")})
		if v.SchemaName != schemaName && !isSchemaDefinedInBundle {
			continue
		}

		pathString := fmt.Sprintf("resources.volumes.%s", k)
		return dyn.MustPathFromString(pathString), r.GetLocations(pathString), true
	}
	return nil, nil, false
}

func (v *validateArtifactPath) Apply(ctx context.Context, rb bundle.ReadOnlyBundle) diag.Diagnostics {
	// We only validate UC Volumes paths right now.
	if !libraries.IsVolumesPath(rb.Config().Workspace.ArtifactPath) {
		return nil
	}

	wrapErrorMsg := func(s string) diag.Diagnostics {
		return diag.Diagnostics{
			{
				Summary:   s,
				Severity:  diag.Error,
				Locations: rb.Config().GetLocations("workspace.artifact_path"),
				Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
			},
		}
	}

	catalogName, schemaName, volumeName, err := extractVolumeFromPath(rb.Config().Workspace.ArtifactPath)
	if err != nil {
		return wrapErrorMsg(err.Error())
	}
	volumeFullName := fmt.Sprintf("%s.%s.%s", catalogName, schemaName, volumeName)
	w := rb.WorkspaceClient()
	_, err = w.Volumes.ReadByName(ctx, volumeFullName)

	if errors.Is(err, apierr.ErrPermissionDenied) {
		return wrapErrorMsg(fmt.Sprintf("cannot access volume %s: %s", volumeFullName, err))
	}
	if errors.Is(err, apierr.ErrNotFound) {
		path, locations, ok := findVolumeInBundle(rb.Config(), catalogName, schemaName, volumeName)
		if !ok {
			return wrapErrorMsg(fmt.Sprintf("volume %s does not exist", volumeFullName))
		}

		// If the volume is defined in the bundle, provide a more helpful error diagnostic,
		// with more details and location information.
		return diag.Diagnostics{{
			Summary:  fmt.Sprintf("volume %s does not exist", volumeFullName),
			Severity: diag.Error,
			Detail: `You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.`,
			Locations: slices.Concat(rb.Config().GetLocations("workspace.artifact_path"), locations),
			Paths:     append([]dyn.Path{dyn.MustPathFromString("workspace.artifact_path")}, path),
		}}
	}
	if err != nil {
		return wrapErrorMsg(fmt.Sprintf("cannot read volume %s: %s", volumeFullName, err))
	}
	return nil
}
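The parsing rule in `extractVolumeFromPath` is small enough to exercise standalone. The sketch below mirrors it without the CLI's dependencies, assuming `libraries.IsVolumesPath` is a plain `/Volumes/` prefix check; the helper name `extractVolume` is ours, not the CLI's:

```go
package main

import (
	"fmt"
	"strings"
)

// extractVolume mirrors the parsing rule of extractVolumeFromPath above.
func extractVolume(p string) (catalog, schema, volume string, err error) {
	if !strings.HasPrefix(p, "/Volumes/") {
		return "", "", "", fmt.Errorf("expected artifact_path to start with /Volumes/, got %s", p)
	}
	// Splitting "/Volumes/<catalog>/<schema>/<volume>/..." on "/" yields
	// ["", "Volumes", catalog, schema, volume, ...], so a valid path has at
	// least 5 parts and non-empty components at indices 2, 3, and 4.
	parts := strings.Split(p, "/")
	if len(parts) < 5 || parts[2] == "" || parts[3] == "" || parts[4] == "" {
		return "", "", "", fmt.Errorf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p)
	}
	return parts[2], parts[3], parts[4], nil
}

func main() {
	c, s, v, err := extractVolume("/Volumes/main/my_schema/my_volume/abc")
	fmt.Println(c, s, v, err) // main my_schema my_volume <nil>

	_, _, _, err = extractVolume("/Volumes/main//my_volume")
	fmt.Println(err != nil) // true: empty schema component is rejected
}
```

The empty-component check is what makes paths like `/Volumes/main//my_schema` invalid even though they split into five parts.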

View File

@ -0,0 +1,244 @@
package validate

import (
	"context"
	"fmt"
	"testing"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/config"
	"github.com/databricks/cli/bundle/config/resources"
	"github.com/databricks/cli/bundle/internal/bundletest"
	"github.com/databricks/cli/libs/diag"
	"github.com/databricks/cli/libs/dyn"
	"github.com/databricks/databricks-sdk-go/apierr"
	"github.com/databricks/databricks-sdk-go/experimental/mocks"
	"github.com/databricks/databricks-sdk-go/service/catalog"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

func TestValidateArtifactPathWithVolumeInBundle(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				ArtifactPath: "/Volumes/catalogN/schemaN/volumeN/abc",
			},
			Resources: config.Resources{
				Volumes: map[string]*resources.Volume{
					"foo": {
						CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
							CatalogName: "catalogN",
							Name:        "volumeN",
							SchemaName:  "schemaN",
						},
					},
				},
			},
		},
	}

	bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "file", Line: 1, Column: 1}})
	bundletest.SetLocation(b, "resources.volumes.foo", []dyn.Location{{File: "file", Line: 2, Column: 2}})

	ctx := context.Background()
	m := mocks.NewMockWorkspaceClient(t)
	api := m.GetMockVolumesAPI()
	api.EXPECT().ReadByName(mock.Anything, "catalogN.schemaN.volumeN").Return(nil, &apierr.APIError{
		StatusCode: 404,
	})
	b.SetWorkpaceClient(m.WorkspaceClient)

	diags := bundle.ApplyReadOnly(ctx, bundle.ReadOnly(b), ValidateArtifactPath())
	assert.Equal(t, diag.Diagnostics{{
		Severity: diag.Error,
		Summary:  "volume catalogN.schemaN.volumeN does not exist",
		Locations: []dyn.Location{
			{File: "file", Line: 1, Column: 1},
			{File: "file", Line: 2, Column: 2},
		},
		Paths: []dyn.Path{
			dyn.MustPathFromString("workspace.artifact_path"),
			dyn.MustPathFromString("resources.volumes.foo"),
		},
		Detail: `You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.`,
	}}, diags)
}

func TestValidateArtifactPath(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				ArtifactPath: "/Volumes/catalogN/schemaN/volumeN/abc",
			},
		},
	}

	bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "file", Line: 1, Column: 1}})

	assertDiags := func(t *testing.T, diags diag.Diagnostics, expected string) {
		assert.Len(t, diags, 1)
		assert.Equal(t, diag.Diagnostics{{
			Severity:  diag.Error,
			Summary:   expected,
			Locations: []dyn.Location{{File: "file", Line: 1, Column: 1}},
			Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
		}}, diags)
	}

	rb := bundle.ReadOnly(b)
	ctx := context.Background()

	tcases := []struct {
		err             error
		expectedSummary string
	}{
		{
			err: &apierr.APIError{
				StatusCode: 403,
				Message:    "User does not have USE SCHEMA on Schema 'catalogN.schemaN'",
			},
			expectedSummary: "cannot access volume catalogN.schemaN.volumeN: User does not have USE SCHEMA on Schema 'catalogN.schemaN'",
		},
		{
			err: &apierr.APIError{
				StatusCode: 404,
			},
			expectedSummary: "volume catalogN.schemaN.volumeN does not exist",
		},
		{
			err: &apierr.APIError{
				StatusCode: 500,
				Message:    "Internal Server Error",
			},
			expectedSummary: "cannot read volume catalogN.schemaN.volumeN: Internal Server Error",
		},
	}

	for _, tc := range tcases {
		m := mocks.NewMockWorkspaceClient(t)
		api := m.GetMockVolumesAPI()
		api.EXPECT().ReadByName(mock.Anything, "catalogN.schemaN.volumeN").Return(nil, tc.err)
		b.SetWorkpaceClient(m.WorkspaceClient)

		diags := bundle.ApplyReadOnly(ctx, rb, ValidateArtifactPath())
		assertDiags(t, diags, tc.expectedSummary)
	}
}

func invalidVolumePaths() []string {
	return []string{
		"/Volumes/",
		"/Volumes/main",
		"/Volumes/main/",
		"/Volumes/main//",
		"/Volumes/main//my_schema",
		"/Volumes/main/my_schema",
		"/Volumes/main/my_schema/",
		"/Volumes/main/my_schema//",
		"/Volumes//my_schema/my_volume",
	}
}

func TestExtractVolumeFromPath(t *testing.T) {
	catalogName, schemaName, volumeName, err := extractVolumeFromPath("/Volumes/main/my_schema/my_volume")
	require.NoError(t, err)
	assert.Equal(t, "main", catalogName)
	assert.Equal(t, "my_schema", schemaName)
	assert.Equal(t, "my_volume", volumeName)

	for _, p := range invalidVolumePaths() {
		_, _, _, err := extractVolumeFromPath(p)
		assert.EqualError(t, err, fmt.Sprintf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p))
	}
}

func TestValidateArtifactPathWithInvalidPaths(t *testing.T) {
	for _, p := range invalidVolumePaths() {
		b := &bundle.Bundle{
			Config: config.Root{
				Workspace: config.Workspace{
					ArtifactPath: p,
				},
			},
		}

		bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "config.yml", Line: 1, Column: 2}})

		diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), ValidateArtifactPath())
		require.Equal(t, diag.Diagnostics{{
			Severity:  diag.Error,
			Summary:   fmt.Sprintf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p),
			Locations: []dyn.Location{{File: "config.yml", Line: 1, Column: 2}},
			Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
		}}, diags)
	}
}

func TestFindVolumeInBundle(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Resources: config.Resources{
				Volumes: map[string]*resources.Volume{
					"foo": {
						CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
							CatalogName: "main",
							Name:        "my_volume",
							SchemaName:  "my_schema",
						},
					},
				},
			},
		},
	}

	bundletest.SetLocation(b, "resources.volumes.foo", []dyn.Location{
		{
			File:   "volume.yml",
			Line:   1,
			Column: 2,
		},
	})

	// volume is in DAB.
	path, locations, ok := findVolumeInBundle(b.Config, "main", "my_schema", "my_volume")
	assert.True(t, ok)
	assert.Equal(t, []dyn.Location{{
		File:   "volume.yml",
		Line:   1,
		Column: 2,
	}}, locations)
	assert.Equal(t, dyn.MustPathFromString("resources.volumes.foo"), path)

	// wrong volume name
	_, _, ok = findVolumeInBundle(b.Config, "main", "my_schema", "doesnotexist")
	assert.False(t, ok)

	// wrong schema name
	_, _, ok = findVolumeInBundle(b.Config, "main", "doesnotexist", "my_volume")
	assert.False(t, ok)

	// wrong catalog name
	_, _, ok = findVolumeInBundle(b.Config, "doesnotexist", "my_schema", "my_volume")
	assert.False(t, ok)

	// schema name is interpolated but does not have the right prefix. In this case
	// we should not match the volume.
	b.Config.Resources.Volumes["foo"].SchemaName = "${foo.bar.baz}"
	_, _, ok = findVolumeInBundle(b.Config, "main", "my_schema", "my_volume")
	assert.False(t, ok)

	// schema name is interpolated.
	b.Config.Resources.Volumes["foo"].SchemaName = "${resources.schemas.my_schema.name}"
	path, locations, ok = findVolumeInBundle(b.Config, "main", "valuedoesnotmatter", "my_volume")
	assert.True(t, ok)
	assert.Equal(t, []dyn.Location{{
		File:   "volume.yml",
		Line:   1,
		Column: 2,
	}}, locations)
	assert.Equal(t, dyn.MustPathFromString("resources.volumes.foo"), path)
}
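One subtlety these tests pin down is the schema-name matching rule: a volume also matches when its `schema_name` is a pure `${resources.schemas...}` interpolation reference. A toy version of that rule, with `isSchemaRef` as a crude stand-in for `dynvar.PureReferenceToPath` plus the path-prefix check (both helper names here are ours):

```go
package main

import (
	"fmt"
	"strings"
)

// isSchemaRef approximates "is a pure reference into resources.schemas",
// e.g. ${resources.schemas.my_schema.name}.
func isSchemaRef(s string) bool {
	return strings.HasPrefix(s, "${resources.schemas.") && strings.HasSuffix(s, "}")
}

// matchesSchema mirrors the matching rule in findVolumeInBundle: literal
// equality, or an unresolved reference to a schema defined in the bundle.
func matchesSchema(schemaName, want string) bool {
	return schemaName == want || isSchemaRef(schemaName)
}

func main() {
	fmt.Println(matchesSchema("my_schema", "my_schema"))                          // true: literal match
	fmt.Println(matchesSchema("${resources.schemas.my_schema.name}", "whatever")) // true: interpolated
	fmt.Println(matchesSchema("${foo.bar.baz}", "my_schema"))                     // false: wrong prefix
}
```

The `${foo.bar.baz}` case is why the prefix check matters: any other valid reference would already have been interpolated away by this point, so an unresolved non-schema reference should not match.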

View File

@@ -12,7 +12,7 @@ func TestGetPanics(t *testing.T) {
 	defer func() {
 		r := recover()
 		require.NotNil(t, r, "The function did not panic")
-		assert.Equal(t, r, "context not configured with bundle")
+		assert.Equal(t, "context not configured with bundle", r)
 	}()

 	Get(context.Background())

View File

@@ -4,7 +4,6 @@ import (
 	"bytes"
 	"context"
 	"encoding/json"
-	"errors"
 	"io"
 	"io/fs"
 	"os"
@@ -279,7 +278,7 @@ func TestStatePullNoState(t *testing.T) {
 	require.NoError(t, err)

 	_, err = os.Stat(statePath)
-	require.True(t, errors.Is(err, fs.ErrNotExist))
+	require.ErrorIs(t, err, fs.ErrNotExist)
 }

 func TestStatePullOlderState(t *testing.T) {

View File

@@ -60,7 +60,7 @@ func TestStateUpdate(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, int64(1), state.Seq)
-	require.Equal(t, state.Files, Filelist{
+	require.Equal(t, Filelist{
 		{
 			LocalPath: "test1.py",
 		},
@@ -68,7 +68,7 @@ func TestStateUpdate(t *testing.T) {
 			LocalPath:  "test2.py",
 			IsNotebook: true,
 		},
-	})
+	}, state.Files)
 	require.Equal(t, build.GetInfo().Version, state.CliVersion)

 	diags = bundle.Apply(ctx, b, s)
@@ -79,7 +79,7 @@ func TestStateUpdate(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, int64(2), state.Seq)
-	require.Equal(t, state.Files, Filelist{
+	require.Equal(t, Filelist{
 		{
 			LocalPath: "test1.py",
 		},
@@ -87,7 +87,7 @@ func TestStateUpdate(t *testing.T) {
 			LocalPath:  "test2.py",
 			IsNotebook: true,
 		},
-	})
+	}, state.Files)
 	require.Equal(t, build.GetInfo().Version, state.CliVersion)

 	// Valid non-empty UUID is generated.
@@ -130,7 +130,7 @@ func TestStateUpdateWithExistingState(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, int64(11), state.Seq)
-	require.Equal(t, state.Files, Filelist{
+	require.Equal(t, Filelist{
 		{
 			LocalPath: "test1.py",
 		},
@@ -138,7 +138,7 @@ func TestStateUpdateWithExistingState(t *testing.T) {
 			LocalPath:  "test2.py",
 			IsNotebook: true,
 		},
-	})
+	}, state.Files)
 	require.Equal(t, build.GetInfo().Version, state.CliVersion)

 	// Existing UUID is not overwritten.

View File

@@ -254,10 +254,10 @@ func TestBundleToTerraformPipeline(t *testing.T) {
 	assert.Equal(t, "my pipeline", resource.Name)
 	assert.Len(t, resource.Library, 2)
 	assert.Len(t, resource.Notification, 2)
-	assert.Equal(t, resource.Notification[0].Alerts, []string{"on-update-fatal-failure"})
-	assert.Equal(t, resource.Notification[0].EmailRecipients, []string{"jane@doe.com"})
-	assert.Equal(t, resource.Notification[1].Alerts, []string{"on-update-failure", "on-flow-failure"})
-	assert.Equal(t, resource.Notification[1].EmailRecipients, []string{"jane@doe.com", "john@doe.com"})
+	assert.Equal(t, []string{"on-update-fatal-failure"}, resource.Notification[0].Alerts)
+	assert.Equal(t, []string{"jane@doe.com"}, resource.Notification[0].EmailRecipients)
+	assert.Equal(t, []string{"on-update-failure", "on-flow-failure"}, resource.Notification[1].Alerts)
+	assert.Equal(t, []string{"jane@doe.com", "john@doe.com"}, resource.Notification[1].EmailRecipients)

 	assert.Nil(t, out.Data)
 }
@@ -454,7 +454,7 @@ func TestBundleToTerraformModelServing(t *testing.T) {
 	assert.Equal(t, "name", resource.Name)
 	assert.Equal(t, "model_name", resource.Config.ServedModels[0].ModelName)
 	assert.Equal(t, "1", resource.Config.ServedModels[0].ModelVersion)
-	assert.Equal(t, true, resource.Config.ServedModels[0].ScaleToZeroEnabled)
+	assert.True(t, resource.Config.ServedModels[0].ScaleToZeroEnabled)
 	assert.Equal(t, "Small", resource.Config.ServedModels[0].WorkloadSize)
 	assert.Equal(t, "model_name-1", resource.Config.TrafficConfig.Routes[0].ServedModelName)
 	assert.Equal(t, 100, resource.Config.TrafficConfig.Routes[0].TrafficPercentage)

View File

@@ -225,7 +225,7 @@ func TestSetProxyEnvVars(t *testing.T) {
 	env := make(map[string]string, 0)
 	err := setProxyEnvVars(context.Background(), env, b)
 	require.NoError(t, err)
-	assert.Len(t, env, 0)
+	assert.Empty(t, env)

 	// Lower case set.
 	clearEnv()
@@ -293,7 +293,7 @@ func TestSetUserProfileFromInheritEnvVars(t *testing.T) {
 	require.NoError(t, err)

 	assert.Contains(t, env, "USERPROFILE")
-	assert.Equal(t, env["USERPROFILE"], "c:\\foo\\c")
+	assert.Equal(t, "c:\\foo\\c", env["USERPROFILE"])
 }

 func TestInheritEnvVarsWithAbsentTFConfigFile(t *testing.T) {

View File

@@ -2,7 +2,6 @@ package main
 import (
 	"bytes"
-	"fmt"
 	"io"
 	"os"
 	"path"
@@ -80,7 +79,7 @@ func TestRequiredAnnotationsForNewFields(t *testing.T) {
 		},
 	})
 	assert.NoError(t, err)
-	assert.Empty(t, updatedFieldPaths, fmt.Sprintf("Missing JSON-schema descriptions for new config fields in bundle/internal/schema/annotations.yml:\n%s", strings.Join(updatedFieldPaths, "\n")))
+	assert.Empty(t, updatedFieldPaths, "Missing JSON-schema descriptions for new config fields in bundle/internal/schema/annotations.yml:\n%s", strings.Join(updatedFieldPaths, "\n"))
 }

 // Checks whether types in annotation files are still present in Config type

View File

@@ -24,7 +24,7 @@ func GetFilerForLibraries(ctx context.Context, b *bundle.Bundle) (filer.Filer, s
 	switch {
 	case IsVolumesPath(artifactPath):
-		return filerForVolume(ctx, b)
+		return filerForVolume(b)
 	default:
 		return filerForWorkspace(b)
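The switch above dispatches on the artifact path prefix. A minimal sketch of that dispatch, with the return values reduced to labels (`filerKind` is an illustrative helper, not part of the CLI):

```go
package main

import (
	"fmt"
	"strings"
)

// filerKind reduces the switch in GetFilerForLibraries to labels: UC volume
// paths are served by the Files API filer, everything else by the workspace
// files filer.
func filerKind(artifactPath string) string {
	switch {
	case strings.HasPrefix(artifactPath, "/Volumes/"):
		return "files-api"
	default:
		return "workspace-files"
	}
}

func main() {
	fmt.Println(filerKind("/Volumes/main/my_schema/my_volume"))  // files-api
	fmt.Println(filerKind("/Workspace/Users/someone/artifacts")) // workspace-files
}
```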

View File

@@ -7,10 +7,7 @@ import (
 	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/bundle/config"
 	"github.com/databricks/cli/libs/filer"
-	sdkconfig "github.com/databricks/databricks-sdk-go/config"
-	"github.com/databricks/databricks-sdk-go/experimental/mocks"
 	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/mock"
 	"github.com/stretchr/testify/require"
 )
@@ -39,11 +36,6 @@ func TestGetFilerForLibrariesValidUcVolume(t *testing.T) {
 		},
 	}

-	m := mocks.NewMockWorkspaceClient(t)
-	m.WorkspaceClient.Config = &sdkconfig.Config{}
-	m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/main/my_schema/my_volume").Return(nil)
-	b.SetWorkpaceClient(m.WorkspaceClient)
-
 	client, uploadPath, diags := GetFilerForLibraries(context.Background(), b)
 	require.NoError(t, diags.Error())
 	assert.Equal(t, "/Volumes/main/my_schema/my_volume/.internal", uploadPath)

View File

@@ -1,132 +1,16 @@
 package libraries

 import (
-	"context"
-	"errors"
-	"fmt"
 	"path"
-	"strings"

 	"github.com/databricks/cli/bundle"
 	"github.com/databricks/cli/libs/diag"
-	"github.com/databricks/cli/libs/dyn"
-	"github.com/databricks/cli/libs/dyn/dynvar"
 	"github.com/databricks/cli/libs/filer"
-	"github.com/databricks/databricks-sdk-go/apierr"
 )

-func extractVolumeFromPath(artifactPath string) (string, string, string, error) {
-	if !IsVolumesPath(artifactPath) {
-		return "", "", "", fmt.Errorf("expected artifact_path to start with /Volumes/, got %s", artifactPath)
-	}
-
-	parts := strings.Split(artifactPath, "/")
-	volumeFormatErr := fmt.Errorf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", artifactPath)
-
-	// Incorrect format.
-	if len(parts) < 5 {
-		return "", "", "", volumeFormatErr
-	}
-
-	catalogName := parts[2]
-	schemaName := parts[3]
-	volumeName := parts[4]
-
-	// Incorrect format.
-	if catalogName == "" || schemaName == "" || volumeName == "" {
-		return "", "", "", volumeFormatErr
-	}
-
-	return catalogName, schemaName, volumeName, nil
-}
-
-// This function returns a filer for ".internal" folder inside the directory configured
-// at `workspace.artifact_path`.
-// This function also checks if the UC volume exists in the workspace and then:
-//  1. If the UC volume exists in the workspace:
-//     Returns a filer for the UC volume.
-//  2. If the UC volume does not exist in the workspace but is (with high confidence) defined in
-//     the bundle configuration:
-//     Returns an error and a warning that instructs the user to deploy the
-//     UC volume before using it in the artifact path.
-//  3. If the UC volume does not exist in the workspace and is not defined in the bundle configuration:
-//     Returns an error.
-func filerForVolume(ctx context.Context, b *bundle.Bundle) (filer.Filer, string, diag.Diagnostics) {
-	artifactPath := b.Config.Workspace.ArtifactPath
+func filerForVolume(b *bundle.Bundle) (filer.Filer, string, diag.Diagnostics) {
 	w := b.WorkspaceClient()
-
-	catalogName, schemaName, volumeName, err := extractVolumeFromPath(artifactPath)
-	if err != nil {
-		return nil, "", diag.Diagnostics{
-			{
-				Severity:  diag.Error,
-				Summary:   err.Error(),
-				Locations: b.Config.GetLocations("workspace.artifact_path"),
-				Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
-			},
-		}
-	}
-
-	// Check if the UC volume exists in the workspace.
-	volumePath := fmt.Sprintf("/Volumes/%s/%s/%s", catalogName, schemaName, volumeName)
-	err = w.Files.GetDirectoryMetadataByDirectoryPath(ctx, volumePath)
-
-	// If the volume exists already, directly return the filer for the path to
-	// upload the artifacts to.
-	if err == nil {
-		uploadPath := path.Join(artifactPath, InternalDirName)
-		f, err := filer.NewFilesClient(w, uploadPath)
-		return f, uploadPath, diag.FromErr(err)
-	}
-
-	baseErr := diag.Diagnostic{
-		Severity:  diag.Error,
-		Summary:   fmt.Sprintf("unable to determine if volume at %s exists: %s", volumePath, err),
-		Locations: b.Config.GetLocations("workspace.artifact_path"),
-		Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
-	}
-
-	if errors.Is(err, apierr.ErrNotFound) {
-		// Since the API returned a 404, the volume does not exist.
-		// Modify the error message to provide more context.
-		baseErr.Summary = fmt.Sprintf("volume %s does not exist: %s", volumePath, err)
-
-		// If the volume is defined in the bundle, provide a more helpful error diagnostic,
-		// with more details and location information.
-		path, locations, ok := findVolumeInBundle(b, catalogName, schemaName, volumeName)
-		if !ok {
-			return nil, "", diag.Diagnostics{baseErr}
-		}
-		baseErr.Detail = `You are using a volume in your artifact_path that is managed by
-this bundle but which has not been deployed yet. Please first deploy
-the volume using 'bundle deploy' and then switch over to using it in
-the artifact_path.`
-		baseErr.Paths = append(baseErr.Paths, path)
-		baseErr.Locations = append(baseErr.Locations, locations...)
-	}
-
-	return nil, "", diag.Diagnostics{baseErr}
-}
-
-func findVolumeInBundle(b *bundle.Bundle, catalogName, schemaName, volumeName string) (dyn.Path, []dyn.Location, bool) {
-	volumes := b.Config.Resources.Volumes
-	for k, v := range volumes {
-		if v.CatalogName != catalogName || v.Name != volumeName {
-			continue
-		}
-
-		// UC schemas can be defined in the bundle itself, and thus might be interpolated
-		// at runtime via the ${resources.schemas.<name>} syntax. Thus we match the volume
-		// definition if the schema name is the same as the one in the bundle, or if the
-		// schema name is interpolated.
-		// We only have to check for ${resources.schemas...} references because any
-		// other valid reference (like ${var.foo}) would have been interpolated by this point.
-		p, ok := dynvar.PureReferenceToPath(v.SchemaName)
-		isSchemaDefinedInBundle := ok && p.HasPrefix(dyn.Path{dyn.Key("resources"), dyn.Key("schemas")})
-		if v.SchemaName != schemaName && !isSchemaDefinedInBundle {
-			continue
-		}
-
-		pathString := fmt.Sprintf("resources.volumes.%s", k)
-		return dyn.MustPathFromString(pathString), b.Config.GetLocations(pathString), true
-	}
-	return nil, nil, false
-}
+	uploadPath := path.Join(b.Config.Workspace.ArtifactPath, InternalDirName)
+	f, err := filer.NewFilesClient(w, uploadPath)
+	return f, uploadPath, diag.FromErr(err)
+}

View File

@ -1,277 +1,27 @@
package libraries package libraries
import ( import (
"context"
"fmt"
"path" "path"
"testing" "testing"
"github.com/databricks/cli/bundle" "github.com/databricks/cli/bundle"
"github.com/databricks/cli/bundle/config" "github.com/databricks/cli/bundle/config"
"github.com/databricks/cli/bundle/config/resources"
"github.com/databricks/cli/bundle/internal/bundletest"
"github.com/databricks/cli/libs/diag"
"github.com/databricks/cli/libs/dyn"
"github.com/databricks/cli/libs/filer" "github.com/databricks/cli/libs/filer"
"github.com/databricks/databricks-sdk-go/apierr"
sdkconfig "github.com/databricks/databricks-sdk-go/config"
"github.com/databricks/databricks-sdk-go/experimental/mocks"
"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestFindVolumeInBundle(t *testing.T) { func TestFilerForVolume(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Resources: config.Resources{
Volumes: map[string]*resources.Volume{
"foo": {
CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
CatalogName: "main",
Name: "my_volume",
SchemaName: "my_schema",
},
},
},
},
},
}
bundletest.SetLocation(b, "resources.volumes.foo", []dyn.Location{
{
File: "volume.yml",
Line: 1,
Column: 2,
},
})
// volume is in DAB.
path, locations, ok := findVolumeInBundle(b, "main", "my_schema", "my_volume")
assert.True(t, ok)
assert.Equal(t, []dyn.Location{{
File: "volume.yml",
Line: 1,
Column: 2,
}}, locations)
assert.Equal(t, dyn.MustPathFromString("resources.volumes.foo"), path)
// wrong volume name
_, _, ok = findVolumeInBundle(b, "main", "my_schema", "doesnotexist")
assert.False(t, ok)
// wrong schema name
_, _, ok = findVolumeInBundle(b, "main", "doesnotexist", "my_volume")
assert.False(t, ok)
// wrong catalog name
_, _, ok = findVolumeInBundle(b, "doesnotexist", "my_schema", "my_volume")
assert.False(t, ok)
// schema name is interpolated but does not have the right prefix. In this case
// we should not match the volume.
b.Config.Resources.Volumes["foo"].SchemaName = "${foo.bar.baz}"
_, _, ok = findVolumeInBundle(b, "main", "my_schema", "my_volume")
assert.False(t, ok)
// schema name is interpolated.
b.Config.Resources.Volumes["foo"].SchemaName = "${resources.schemas.my_schema.name}"
path, locations, ok = findVolumeInBundle(b, "main", "valuedoesnotmatter", "my_volume")
assert.True(t, ok)
assert.Equal(t, []dyn.Location{{
File: "volume.yml",
Line: 1,
Column: 2,
}}, locations)
assert.Equal(t, dyn.MustPathFromString("resources.volumes.foo"), path)
}
func TestFilerForVolumeForErrorFromAPI(t *testing.T) {
b := &bundle.Bundle{ b := &bundle.Bundle{
Config: config.Root{ Config: config.Root{
Workspace: config.Workspace{ Workspace: config.Workspace{
ArtifactPath: "/Volumes/main/my_schema/my_volume", ArtifactPath: "/Volumes/main/my_schema/my_volume/abc",
}, },
}, },
} }
bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "config.yml", Line: 1, Column: 2}}) client, uploadPath, diags := filerForVolume(b)
require.NoError(t, diags.Error())
m := mocks.NewMockWorkspaceClient(t) assert.Equal(t, path.Join("/Volumes/main/my_schema/my_volume/abc/.internal"), uploadPath)
m.WorkspaceClient.Config = &sdkconfig.Config{} assert.IsType(t, &filer.FilesClient{}, client)
m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/main/my_schema/my_volume").Return(fmt.Errorf("error from API"))
b.SetWorkpaceClient(m.WorkspaceClient)
_, _, diags := filerForVolume(context.Background(), b)
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Error,
Summary: "unable to determine if volume at /Volumes/main/my_schema/my_volume exists: error from API",
Locations: []dyn.Location{{File: "config.yml", Line: 1, Column: 2}},
Paths: []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
},
}, diags)
}
func TestFilerForVolumeWithVolumeNotFound(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
ArtifactPath: "/Volumes/main/my_schema/doesnotexist",
},
},
}
bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "config.yml", Line: 1, Column: 2}})
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdkconfig.Config{}
m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/main/my_schema/doesnotexist").Return(apierr.NotFound("some error message"))
b.SetWorkpaceClient(m.WorkspaceClient)
_, _, diags := filerForVolume(context.Background(), b)
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Error,
Summary: "volume /Volumes/main/my_schema/doesnotexist does not exist: some error message",
Locations: []dyn.Location{{File: "config.yml", Line: 1, Column: 2}},
Paths: []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
},
}, diags)
}
func TestFilerForVolumeNotFoundAndInBundle(t *testing.T) {
b := &bundle.Bundle{
Config: config.Root{
Workspace: config.Workspace{
ArtifactPath: "/Volumes/main/my_schema/my_volume",
},
Resources: config.Resources{
Volumes: map[string]*resources.Volume{
"foo": {
CreateVolumeRequestContent: &catalog.CreateVolumeRequestContent{
CatalogName: "main",
Name: "my_volume",
VolumeType: "MANAGED",
SchemaName: "my_schema",
},
},
},
},
},
}
bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "config.yml", Line: 1, Column: 2}})
bundletest.SetLocation(b, "resources.volumes.foo", []dyn.Location{{File: "volume.yml", Line: 1, Column: 2}})
m := mocks.NewMockWorkspaceClient(t)
m.WorkspaceClient.Config = &sdkconfig.Config{}
m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/main/my_schema/my_volume").Return(apierr.NotFound("error from API"))
b.SetWorkpaceClient(m.WorkspaceClient)
_, _, diags := GetFilerForLibraries(context.Background(), b)
assert.Equal(t, diag.Diagnostics{
{
Severity: diag.Error,
Summary: "volume /Volumes/main/my_schema/my_volume does not exist: error from API",
Locations: []dyn.Location{{File: "config.yml", Line: 1, Column: 2}, {File: "volume.yml", Line: 1, Column: 2}},
Paths: []dyn.Path{dyn.MustPathFromString("workspace.artifact_path"), dyn.MustPathFromString("resources.volumes.foo")},
Detail: `You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.`,
},
}, diags)
}
func invalidVolumePaths() []string {
	return []string{
		"/Volumes/",
		"/Volumes/main",
		"/Volumes/main/",
		"/Volumes/main//",
		"/Volumes/main//my_schema",
		"/Volumes/main/my_schema",
		"/Volumes/main/my_schema/",
		"/Volumes/main/my_schema//",
		"/Volumes//my_schema/my_volume",
	}
}

func TestFilerForVolumeWithInvalidVolumePaths(t *testing.T) {
	for _, p := range invalidVolumePaths() {
		b := &bundle.Bundle{
			Config: config.Root{
				Workspace: config.Workspace{
					ArtifactPath: p,
				},
			},
		}

		bundletest.SetLocation(b, "workspace.artifact_path", []dyn.Location{{File: "config.yml", Line: 1, Column: 2}})

		_, _, diags := GetFilerForLibraries(context.Background(), b)
		require.Equal(t, diags, diag.Diagnostics{{
			Severity:  diag.Error,
			Summary:   fmt.Sprintf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p),
			Locations: []dyn.Location{{File: "config.yml", Line: 1, Column: 2}},
			Paths:     []dyn.Path{dyn.MustPathFromString("workspace.artifact_path")},
		}})
	}
}

func TestFilerForVolumeWithInvalidPrefix(t *testing.T) {
	b := &bundle.Bundle{
		Config: config.Root{
			Workspace: config.Workspace{
				ArtifactPath: "/Volume/main/my_schema/my_volume",
			},
		},
	}

	_, _, diags := filerForVolume(context.Background(), b)
	require.EqualError(t, diags.Error(), "expected artifact_path to start with /Volumes/, got /Volume/main/my_schema/my_volume")
}

func TestFilerForVolumeWithValidVolumePaths(t *testing.T) {
	validPaths := []string{
		"/Volumes/main/my_schema/my_volume",
		"/Volumes/main/my_schema/my_volume/",
		"/Volumes/main/my_schema/my_volume/a/b/c",
		"/Volumes/main/my_schema/my_volume/a/a/a",
	}

	for _, p := range validPaths {
		b := &bundle.Bundle{
			Config: config.Root{
				Workspace: config.Workspace{
					ArtifactPath: p,
				},
			},
		}

		m := mocks.NewMockWorkspaceClient(t)
		m.WorkspaceClient.Config = &sdkconfig.Config{}
		m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/main/my_schema/my_volume").Return(nil)
		b.SetWorkpaceClient(m.WorkspaceClient)

		client, uploadPath, diags := filerForVolume(context.Background(), b)
		require.NoError(t, diags.Error())
		assert.Equal(t, path.Join(p, ".internal"), uploadPath)
		assert.IsType(t, &filer.FilesClient{}, client)
	}
}

func TestExtractVolumeFromPath(t *testing.T) {
	catalogName, schemaName, volumeName, err := extractVolumeFromPath("/Volumes/main/my_schema/my_volume")
	require.NoError(t, err)
	assert.Equal(t, "main", catalogName)
	assert.Equal(t, "my_schema", schemaName)
	assert.Equal(t, "my_volume", volumeName)

	for _, p := range invalidVolumePaths() {
		_, _, _, err := extractVolumeFromPath(p)
		assert.EqualError(t, err, fmt.Sprintf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p))
	}
}
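The parsing behavior these tests pin down can be sketched in a few lines. This is a hypothetical, stdlib-only reimplementation written to match the error messages asserted above, not the CLI's actual `extractVolumeFromPath`:

```go
package main

import (
	"fmt"
	"strings"
)

// extractVolumeFromPath splits a UC volume path of the form
// /Volumes/<catalog>/<schema>/<volume>/... into its components.
// Sketch only: mirrors the tested behavior, not the real implementation.
func extractVolumeFromPath(p string) (catalog, schema, volume string, err error) {
	if !strings.HasPrefix(p, "/Volumes/") {
		return "", "", "", fmt.Errorf("expected artifact_path to start with /Volumes/, got %s", p)
	}
	parts := strings.Split(strings.TrimPrefix(p, "/"), "/")
	// parts[0] is "Volumes"; the next three segments must all be non-empty.
	if len(parts) < 4 || parts[1] == "" || parts[2] == "" || parts[3] == "" {
		return "", "", "", fmt.Errorf("expected UC volume path to be in the format /Volumes/<catalog>/<schema>/<volume>/..., got %s", p)
	}
	return parts[1], parts[2], parts[3], nil
}

func main() {
	c, s, v, err := extractVolumeFromPath("/Volumes/main/my_schema/my_volume/a/b")
	fmt.Println(c, s, v, err) // main my_schema my_volume <nil>
	_, _, _, err = extractVolumeFromPath("/Volumes/main//my_schema")
	fmt.Println(err != nil) // true
}
```

Note that trailing or doubled slashes (e.g. `/Volumes/main/my_schema/`) yield an empty segment after splitting, which is why they appear in `invalidVolumePaths()`.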

View File

@@ -12,25 +12,25 @@ func TestLibraryPath(t *testing.T) {
 	p, err := libraryPath(&compute.Library{Whl: path})
 	assert.Equal(t, path, p)
-	assert.Nil(t, err)
+	assert.NoError(t, err)

 	p, err = libraryPath(&compute.Library{Jar: path})
 	assert.Equal(t, path, p)
-	assert.Nil(t, err)
+	assert.NoError(t, err)

 	p, err = libraryPath(&compute.Library{Egg: path})
 	assert.Equal(t, path, p)
-	assert.Nil(t, err)
+	assert.NoError(t, err)

 	p, err = libraryPath(&compute.Library{Requirements: path})
 	assert.Equal(t, path, p)
-	assert.Nil(t, err)
+	assert.NoError(t, err)

 	p, err = libraryPath(&compute.Library{})
 	assert.Equal(t, "", p)
-	assert.NotNil(t, err)
+	assert.Error(t, err)

 	p, err = libraryPath(&compute.Library{Pypi: &compute.PythonPyPiLibrary{Package: "pypipackage"}})
 	assert.Equal(t, "", p)
-	assert.NotNil(t, err)
+	assert.Error(t, err)
 }
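The selection logic this test exercises (return whichever file-based field is set, error otherwise) can be sketched with a stand-in struct. This is a hedged sketch: `Library` here is a hypothetical stand-in for the SDK's `compute.Library`, and the error text is illustrative, not the CLI's:

```go
package main

import (
	"errors"
	"fmt"
)

// Library mirrors only the fields the test above exercises;
// it is a stand-in, not the SDK type.
type Library struct {
	Whl, Jar, Egg, Requirements string
}

// libraryPath returns the local path for a file-based library, or an
// error for libraries that carry no path (e.g. PyPI packages).
func libraryPath(l *Library) (string, error) {
	for _, p := range []string{l.Whl, l.Jar, l.Egg, l.Requirements} {
		if p != "" {
			return p, nil
		}
	}
	return "", errors.New("library path is not defined")
}

func main() {
	p, err := libraryPath(&Library{Whl: "./dist/my.whl"})
	fmt.Println(p, err) // ./dist/my.whl <nil>
	_, err = libraryPath(&Library{})
	fmt.Println(err != nil) // true
}
```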

View File

@@ -11,8 +11,6 @@ import (
 	mockfiler "github.com/databricks/cli/internal/mocks/libs/filer"
 	"github.com/databricks/cli/internal/testutil"
 	"github.com/databricks/cli/libs/filer"
-	sdkconfig "github.com/databricks/databricks-sdk-go/config"
-	"github.com/databricks/databricks-sdk-go/experimental/mocks"
 	"github.com/databricks/databricks-sdk-go/service/compute"
 	"github.com/databricks/databricks-sdk-go/service/jobs"
 	"github.com/stretchr/testify/mock"
@@ -183,11 +181,6 @@ func TestArtifactUploadForVolumes(t *testing.T) {
 		filer.CreateParentDirectories,
 	).Return(nil)
-	m := mocks.NewMockWorkspaceClient(t)
-	m.WorkspaceClient.Config = &sdkconfig.Config{}
-	m.GetMockFilesAPI().EXPECT().GetDirectoryMetadataByDirectoryPath(mock.Anything, "/Volumes/foo/bar/artifacts").Return(nil)
-	b.SetWorkpaceClient(m.WorkspaceClient)

 	diags := bundle.Apply(context.Background(), b, bundle.Seq(ExpandGlobReferences(), UploadWithClient(mockFiler)))
 	require.NoError(t, diags.Error())

View File

@@ -99,32 +99,32 @@ func TestFilterCurrentUser(t *testing.T) {
 	assert.NoError(t, diags.Error())

 	// Assert current user is filtered out.
-	assert.Equal(t, 2, len(b.Config.Resources.Jobs["job1"].Permissions))
+	assert.Len(t, b.Config.Resources.Jobs["job1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Jobs["job1"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.Jobs["job1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Jobs["job2"].Permissions))
+	assert.Len(t, b.Config.Resources.Jobs["job2"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Jobs["job2"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.Jobs["job2"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Pipelines["pipeline1"].Permissions))
+	assert.Len(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Experiments["experiment1"].Permissions))
+	assert.Len(t, b.Config.Resources.Experiments["experiment1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Experiments["experiment1"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.Experiments["experiment1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Models["model1"].Permissions))
+	assert.Len(t, b.Config.Resources.Models["model1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Models["model1"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.Models["model1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions))
+	assert.Len(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, robot)
 	assert.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, bob)

 	// Assert there's no change to the grant.
-	assert.Equal(t, 1, len(b.Config.Resources.RegisteredModels["registered_model1"].Grants))
+	assert.Len(t, b.Config.Resources.RegisteredModels["registered_model1"].Grants, 1)
 }

 func TestFilterCurrentServicePrincipal(t *testing.T) {
@@ -134,32 +134,32 @@ func TestFilterCurrentServicePrincipal(t *testing.T) {
 	assert.NoError(t, diags.Error())

 	// Assert current user is filtered out.
-	assert.Equal(t, 2, len(b.Config.Resources.Jobs["job1"].Permissions))
+	assert.Len(t, b.Config.Resources.Jobs["job1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Jobs["job1"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.Jobs["job1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Jobs["job2"].Permissions))
+	assert.Len(t, b.Config.Resources.Jobs["job2"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Jobs["job2"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.Jobs["job2"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Pipelines["pipeline1"].Permissions))
+	assert.Len(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.Pipelines["pipeline1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Experiments["experiment1"].Permissions))
+	assert.Len(t, b.Config.Resources.Experiments["experiment1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Experiments["experiment1"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.Experiments["experiment1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.Models["model1"].Permissions))
+	assert.Len(t, b.Config.Resources.Models["model1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.Models["model1"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.Models["model1"].Permissions, bob)

-	assert.Equal(t, 2, len(b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions))
+	assert.Len(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, 2)
 	assert.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, alice)
 	assert.Contains(t, b.Config.Resources.ModelServingEndpoints["endpoint1"].Permissions, bob)

 	// Assert there's no change to the grant.
-	assert.Equal(t, 1, len(b.Config.Resources.RegisteredModels["registered_model1"].Grants))
+	assert.Len(t, b.Config.Resources.RegisteredModels["registered_model1"].Grants, 1)
 }

 func TestFilterCurrentUserDoesNotErrorWhenNoResources(t *testing.T) {

View File

@@ -164,7 +164,7 @@ func TestAllResourcesExplicitlyDefinedForPermissionsSupport(t *testing.T) {
 	for _, resource := range unsupportedResources {
 		_, ok := levelsMap[resource]
-		assert.False(t, ok, fmt.Sprintf("Resource %s is defined in both levelsMap and unsupportedResources", resource))
+		assert.False(t, ok, "Resource %s is defined in both levelsMap and unsupportedResources", resource)
 	}

 	for _, resource := range r.AllResources() {

View File

@@ -28,7 +28,7 @@ func TestPermissionDiagnosticsApplyFail(t *testing.T) {
 	})

 	diags := permissions.PermissionDiagnostics().Apply(context.Background(), b)
-	require.Equal(t, diags[0].Severity, diag.Warning)
+	require.Equal(t, diag.Warning, diags[0].Severity)
 	require.Contains(t, diags[0].Summary, "permissions section should include testuser@databricks.com or one of their groups with CAN_MANAGE permissions")
 }

View File

@@ -54,7 +54,7 @@ func TestConvertPythonParams(t *testing.T) {
 	err = runner.convertPythonParams(opts)
 	require.NoError(t, err)
 	require.Contains(t, opts.Job.notebookParams, "__python_params")
-	require.Equal(t, opts.Job.notebookParams["__python_params"], `["param1","param2","param3"]`)
+	require.Equal(t, `["param1","param2","param3"]`, opts.Job.notebookParams["__python_params"])
 }

 func TestJobRunnerCancel(t *testing.T) {

View File

@@ -30,7 +30,7 @@ func TestComplexVariables(t *testing.T) {
 	require.Equal(t, "true", b.Config.Resources.Jobs["my_job"].JobClusters[0].NewCluster.SparkConf["spark.speculation"])
 	require.Equal(t, "true", b.Config.Resources.Jobs["my_job"].JobClusters[0].NewCluster.SparkConf["spark.random"])
-	require.Equal(t, 3, len(b.Config.Resources.Jobs["my_job"].Tasks[0].Libraries))
+	require.Len(t, b.Config.Resources.Jobs["my_job"].Tasks[0].Libraries, 3)
 	require.Contains(t, b.Config.Resources.Jobs["my_job"].Tasks[0].Libraries, compute.Library{
 		Jar: "/path/to/jar",
 	})

View File

@@ -2,7 +2,6 @@ package config_tests

 import (
 	"context"
-	"fmt"
 	"strings"
 	"testing"
@@ -16,7 +15,7 @@ func TestGitAutoLoadWithEnvironment(t *testing.T) {
 	bundle.Apply(context.Background(), b, mutator.LoadGitDetails())
 	assert.True(t, b.Config.Bundle.Git.Inferred)
 	validUrl := strings.Contains(b.Config.Bundle.Git.OriginURL, "/cli") || strings.Contains(b.Config.Bundle.Git.OriginURL, "/bricks")
-	assert.True(t, validUrl, fmt.Sprintf("Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL))
+	assert.True(t, validUrl, "Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL)
 }

 func TestGitManuallySetBranchWithEnvironment(t *testing.T) {
@@ -25,5 +24,5 @@ func TestGitManuallySetBranchWithEnvironment(t *testing.T) {
 	assert.False(t, b.Config.Bundle.Git.Inferred)
 	assert.Equal(t, "main", b.Config.Bundle.Git.Branch)
 	validUrl := strings.Contains(b.Config.Bundle.Git.OriginURL, "/cli") || strings.Contains(b.Config.Bundle.Git.OriginURL, "/bricks")
-	assert.True(t, validUrl, fmt.Sprintf("Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL))
+	assert.True(t, validUrl, "Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL)
 }

View File

@@ -21,8 +21,8 @@ func TestEnvironmentOverridesResourcesDev(t *testing.T) {
 	assert.Equal(t, "base job", b.Config.Resources.Jobs["job1"].Name)

 	// Base values are preserved in the development environment.
-	assert.Equal(t, true, b.Config.Resources.Pipelines["boolean1"].Photon)
-	assert.Equal(t, false, b.Config.Resources.Pipelines["boolean2"].Photon)
+	assert.True(t, b.Config.Resources.Pipelines["boolean1"].Photon)
+	assert.False(t, b.Config.Resources.Pipelines["boolean2"].Photon)
 }

 func TestEnvironmentOverridesResourcesStaging(t *testing.T) {
@@ -30,6 +30,6 @@ func TestEnvironmentOverridesResourcesStaging(t *testing.T) {
 	assert.Equal(t, "staging job", b.Config.Resources.Jobs["job1"].Name)

 	// Override values are applied in the staging environment.
-	assert.Equal(t, false, b.Config.Resources.Pipelines["boolean1"].Photon)
-	assert.Equal(t, true, b.Config.Resources.Pipelines["boolean2"].Photon)
+	assert.False(t, b.Config.Resources.Pipelines["boolean1"].Photon)
+	assert.True(t, b.Config.Resources.Pipelines["boolean2"].Photon)
 }

View File

@@ -10,11 +10,11 @@ import (
 func TestJobAndPipelineDevelopmentWithEnvironment(t *testing.T) {
 	b := loadTarget(t, "./environments_job_and_pipeline", "development")
-	assert.Len(t, b.Config.Resources.Jobs, 0)
+	assert.Empty(t, b.Config.Resources.Jobs)
 	assert.Len(t, b.Config.Resources.Pipelines, 1)

 	p := b.Config.Resources.Pipelines["nyc_taxi_pipeline"]
-	assert.Equal(t, b.Config.Bundle.Mode, config.Development)
+	assert.Equal(t, config.Development, b.Config.Bundle.Mode)
 	assert.True(t, p.Development)
 	require.Len(t, p.Libraries, 1)
 	assert.Equal(t, "./dlt/nyc_taxi_loader", p.Libraries[0].Notebook.Path)
@@ -23,7 +23,7 @@ func TestJobAndPipelineDevelopmentWithEnvironment(t *testing.T) {
 func TestJobAndPipelineStagingWithEnvironment(t *testing.T) {
 	b := loadTarget(t, "./environments_job_and_pipeline", "staging")
-	assert.Len(t, b.Config.Resources.Jobs, 0)
+	assert.Empty(t, b.Config.Resources.Jobs)
 	assert.Len(t, b.Config.Resources.Pipelines, 1)

 	p := b.Config.Resources.Pipelines["nyc_taxi_pipeline"]

View File

@@ -2,7 +2,6 @@ package config_tests

 import (
 	"context"
-	"fmt"
 	"strings"
 	"testing"
@@ -17,7 +16,7 @@ func TestGitAutoLoad(t *testing.T) {
 	bundle.Apply(context.Background(), b, mutator.LoadGitDetails())
 	assert.True(t, b.Config.Bundle.Git.Inferred)
 	validUrl := strings.Contains(b.Config.Bundle.Git.OriginURL, "/cli") || strings.Contains(b.Config.Bundle.Git.OriginURL, "/bricks")
-	assert.True(t, validUrl, fmt.Sprintf("Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL))
+	assert.True(t, validUrl, "Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL)
 }

 func TestGitManuallySetBranch(t *testing.T) {
@@ -26,7 +25,7 @@ func TestGitManuallySetBranch(t *testing.T) {
 	assert.False(t, b.Config.Bundle.Git.Inferred)
 	assert.Equal(t, "main", b.Config.Bundle.Git.Branch)
 	validUrl := strings.Contains(b.Config.Bundle.Git.OriginURL, "/cli") || strings.Contains(b.Config.Bundle.Git.OriginURL, "/bricks")
-	assert.True(t, validUrl, fmt.Sprintf("Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL))
+	assert.True(t, validUrl, "Expected URL to contain '/cli' or '/bricks', got %s", b.Config.Bundle.Git.OriginURL)
 }

 func TestGitBundleBranchValidation(t *testing.T) {

View File

@@ -35,7 +35,7 @@ func TestIssue1828(t *testing.T) {
 	}

 	if assert.Contains(t, b.Config.Variables, "float") {
-		assert.Equal(t, 3.14, b.Config.Variables["float"].Default)
+		assert.InDelta(t, 3.14, b.Config.Variables["float"].Default, 0.0001)
 	}

 	if assert.Contains(t, b.Config.Variables, "time") {
@@ -43,6 +43,6 @@ func TestIssue1828(t *testing.T) {
 	}

 	if assert.Contains(t, b.Config.Variables, "nil") {
-		assert.Equal(t, nil, b.Config.Variables["nil"].Default)
+		assert.Nil(t, b.Config.Variables["nil"].Default)
 	}
 }

View File

@@ -10,11 +10,11 @@ import (
 func TestJobAndPipelineDevelopment(t *testing.T) {
 	b := loadTarget(t, "./job_and_pipeline", "development")
-	assert.Len(t, b.Config.Resources.Jobs, 0)
+	assert.Empty(t, b.Config.Resources.Jobs)
 	assert.Len(t, b.Config.Resources.Pipelines, 1)

 	p := b.Config.Resources.Pipelines["nyc_taxi_pipeline"]
-	assert.Equal(t, b.Config.Bundle.Mode, config.Development)
+	assert.Equal(t, config.Development, b.Config.Bundle.Mode)
 	assert.True(t, p.Development)
 	require.Len(t, p.Libraries, 1)
 	assert.Equal(t, "./dlt/nyc_taxi_loader", p.Libraries[0].Notebook.Path)
@@ -23,7 +23,7 @@ func TestJobAndPipelineDevelopment(t *testing.T) {
 func TestJobAndPipelineStaging(t *testing.T) {
 	b := loadTarget(t, "./job_and_pipeline", "staging")
-	assert.Len(t, b.Config.Resources.Jobs, 0)
+	assert.Empty(t, b.Config.Resources.Jobs)
 	assert.Len(t, b.Config.Resources.Pipelines, 1)

 	p := b.Config.Resources.Pipelines["nyc_taxi_pipeline"]

View File

@@ -16,13 +16,13 @@ func TestJobClusterKeyNotDefinedTest(t *testing.T) {
 	diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), validate.JobClusterKeyDefined())
 	require.Len(t, diags, 1)
 	require.NoError(t, diags.Error())
-	require.Equal(t, diags[0].Severity, diag.Warning)
-	require.Equal(t, diags[0].Summary, "job_cluster_key key is not defined")
+	require.Equal(t, diag.Warning, diags[0].Severity)
+	require.Equal(t, "job_cluster_key key is not defined", diags[0].Summary)
 }

 func TestJobClusterKeyDefinedTest(t *testing.T) {
 	b := loadTarget(t, "./job_cluster_key", "development")
 	diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), validate.JobClusterKeyDefined())
-	require.Len(t, diags, 0)
+	require.Empty(t, diags)
 }

View File

@@ -20,7 +20,7 @@ func assertExpected(t *testing.T, p *resources.ModelServingEndpoint) {
 func TestModelServingEndpointDevelopment(t *testing.T) {
 	b := loadTarget(t, "./model_serving_endpoint", "development")
 	assert.Len(t, b.Config.Resources.ModelServingEndpoints, 1)
-	assert.Equal(t, b.Config.Bundle.Mode, config.Development)
+	assert.Equal(t, config.Development, b.Config.Bundle.Mode)

 	p := b.Config.Resources.ModelServingEndpoints["my_model_serving_endpoint"]
 	assert.Equal(t, "my-dev-endpoint", p.Name)

View File

@@ -12,14 +12,14 @@ func TestOverrideTasksDev(t *testing.T) {
 	assert.Len(t, b.Config.Resources.Jobs["foo"].Tasks, 2)

 	tasks := b.Config.Resources.Jobs["foo"].Tasks
-	assert.Equal(t, tasks[0].TaskKey, "key1")
-	assert.Equal(t, tasks[0].NewCluster.NodeTypeId, "i3.xlarge")
-	assert.Equal(t, tasks[0].NewCluster.NumWorkers, 1)
-	assert.Equal(t, tasks[0].SparkPythonTask.PythonFile, "./test1.py")
-	assert.Equal(t, tasks[1].TaskKey, "key2")
-	assert.Equal(t, tasks[1].NewCluster.SparkVersion, "13.3.x-scala2.12")
-	assert.Equal(t, tasks[1].SparkPythonTask.PythonFile, "./test2.py")
+	assert.Equal(t, "key1", tasks[0].TaskKey)
+	assert.Equal(t, "i3.xlarge", tasks[0].NewCluster.NodeTypeId)
+	assert.Equal(t, 1, tasks[0].NewCluster.NumWorkers)
+	assert.Equal(t, "./test1.py", tasks[0].SparkPythonTask.PythonFile)
+	assert.Equal(t, "key2", tasks[1].TaskKey)
+	assert.Equal(t, "13.3.x-scala2.12", tasks[1].NewCluster.SparkVersion)
+	assert.Equal(t, "./test2.py", tasks[1].SparkPythonTask.PythonFile)
 }

 func TestOverrideTasksStaging(t *testing.T) {
@@ -28,12 +28,12 @@ func TestOverrideTasksStaging(t *testing.T) {
 	assert.Len(t, b.Config.Resources.Jobs["foo"].Tasks, 2)

 	tasks := b.Config.Resources.Jobs["foo"].Tasks
-	assert.Equal(t, tasks[0].TaskKey, "key1")
-	assert.Equal(t, tasks[0].NewCluster.SparkVersion, "13.3.x-scala2.12")
-	assert.Equal(t, tasks[0].SparkPythonTask.PythonFile, "./test1.py")
-	assert.Equal(t, tasks[1].TaskKey, "key2")
-	assert.Equal(t, tasks[1].NewCluster.NodeTypeId, "i3.2xlarge")
-	assert.Equal(t, tasks[1].NewCluster.NumWorkers, 4)
-	assert.Equal(t, tasks[1].SparkPythonTask.PythonFile, "./test3.py")
+	assert.Equal(t, "key1", tasks[0].TaskKey)
+	assert.Equal(t, "13.3.x-scala2.12", tasks[0].NewCluster.SparkVersion)
+	assert.Equal(t, "./test1.py", tasks[0].SparkPythonTask.PythonFile)
+	assert.Equal(t, "key2", tasks[1].TaskKey)
+	assert.Equal(t, "i3.2xlarge", tasks[1].NewCluster.NodeTypeId)
+	assert.Equal(t, 4, tasks[1].NewCluster.NumWorkers)
+	assert.Equal(t, "./test3.py", tasks[1].SparkPythonTask.PythonFile)
 }

View File

@@ -13,7 +13,7 @@ func TestPresetsDev(t *testing.T) {
 	assert.Equal(t, "myprefix", b.Config.Presets.NamePrefix)
 	assert.Equal(t, config.Paused, b.Config.Presets.TriggerPauseStatus)
 	assert.Equal(t, 10, b.Config.Presets.JobsMaxConcurrentRuns)
-	assert.Equal(t, true, *b.Config.Presets.PipelinesDevelopment)
+	assert.True(t, *b.Config.Presets.PipelinesDevelopment)
 	assert.Equal(t, "true", b.Config.Presets.Tags["dev"])
 	assert.Equal(t, "finance", b.Config.Presets.Tags["team"])
 	assert.Equal(t, "false", b.Config.Presets.Tags["prod"])
@@ -22,7 +22,7 @@ func TestPresetsDev(t *testing.T) {
 func TestPresetsProd(t *testing.T) {
 	b := loadTarget(t, "./presets", "prod")
-	assert.Equal(t, false, *b.Config.Presets.PipelinesDevelopment)
+	assert.False(t, *b.Config.Presets.PipelinesDevelopment)
 	assert.Equal(t, "finance", b.Config.Presets.Tags["team"])
 	assert.Equal(t, "true", b.Config.Presets.Tags["prod"])
 }

View File

@@ -23,7 +23,7 @@ func TestPythonWheelBuild(t *testing.T) {
 	matches, err := filepath.Glob("./python_wheel/python_wheel/my_test_code/dist/my_test_code-*.whl")
 	require.NoError(t, err)
-	require.Equal(t, 1, len(matches))
+	require.Len(t, matches, 1)

 	match := libraries.ExpandGlobReferences()
 	diags = bundle.Apply(ctx, b, match)
@@ -39,7 +39,7 @@ func TestPythonWheelBuildAutoDetect(t *testing.T) {
 	matches, err := filepath.Glob("./python_wheel/python_wheel_no_artifact/dist/my_test_code-*.whl")
 	require.NoError(t, err)
-	require.Equal(t, 1, len(matches))
+	require.Len(t, matches, 1)

 	match := libraries.ExpandGlobReferences()
 	diags = bundle.Apply(ctx, b, match)
@@ -55,7 +55,7 @@ func TestPythonWheelBuildAutoDetectWithNotebookTask(t *testing.T) {
 	matches, err := filepath.Glob("./python_wheel/python_wheel_no_artifact_notebook/dist/my_test_code-*.whl")
 	require.NoError(t, err)
-	require.Equal(t, 1, len(matches))
+	require.Len(t, matches, 1)

 	match := libraries.ExpandGlobReferences()
 	diags = bundle.Apply(ctx, b, match)
@@ -108,7 +108,7 @@ func TestPythonWheelBuildWithEnvironmentKey(t *testing.T) {
 	matches, err := filepath.Glob("./python_wheel/environment_key/my_test_code/dist/my_test_code-*.whl")
 	require.NoError(t, err)
-	require.Equal(t, 1, len(matches))
+	require.Len(t, matches, 1)

 	match := libraries.ExpandGlobReferences()
 	diags = bundle.Apply(ctx, b, match)
@@ -124,7 +124,7 @@ func TestPythonWheelBuildMultiple(t *testing.T) {
 	matches, err := filepath.Glob("./python_wheel/python_wheel_multiple/my_test_code/dist/my_test_code*.whl")
 	require.NoError(t, err)
-	require.Equal(t, 2, len(matches))
+	require.Len(t, matches, 2)

 	match := libraries.ExpandGlobReferences()
 	diags = bundle.Apply(ctx, b, match)

View File

@@ -19,7 +19,7 @@ func assertExpectedMonitor(t *testing.T, p *resources.QualityMonitor) {
 func TestMonitorTableNames(t *testing.T) {
 	b := loadTarget(t, "./quality_monitor", "development")
 	assert.Len(t, b.Config.Resources.QualityMonitors, 1)
-	assert.Equal(t, b.Config.Bundle.Mode, config.Development)
+	assert.Equal(t, config.Development, b.Config.Bundle.Mode)

 	p := b.Config.Resources.QualityMonitors["my_monitor"]
 	assert.Equal(t, "main.test.dev", p.TableName)

View File

@@ -19,7 +19,7 @@ func assertExpectedModel(t *testing.T, p *resources.RegisteredModel) {
 func TestRegisteredModelDevelopment(t *testing.T) {
 	b := loadTarget(t, "./registered_model", "development")
 	assert.Len(t, b.Config.Resources.RegisteredModels, 1)
-	assert.Equal(t, b.Config.Bundle.Mode, config.Development)
+	assert.Equal(t, config.Development, b.Config.Bundle.Mode)

 	p := b.Config.Resources.RegisteredModels["my_registered_model"]
 	assert.Equal(t, "my-dev-model", p.Name)

View File

```diff
@@ -20,26 +20,26 @@ func TestSyncIncludeExcludeNoMatchesTest(t *testing.T) {
     require.Len(t, diags, 3)
     require.NoError(t, diags.Error())
-    require.Equal(t, diags[0].Severity, diag.Warning)
-    require.Equal(t, diags[0].Summary, "Pattern dist does not match any files")
+    require.Equal(t, diag.Warning, diags[0].Severity)
+    require.Equal(t, "Pattern dist does not match any files", diags[0].Summary)
     require.Len(t, diags[0].Paths, 1)
-    require.Equal(t, diags[0].Paths[0].String(), "sync.exclude[0]")
+    require.Equal(t, "sync.exclude[0]", diags[0].Paths[0].String())
     assert.Len(t, diags[0].Locations, 1)
     require.Equal(t, diags[0].Locations[0].File, filepath.Join("sync", "override", "databricks.yml"))
-    require.Equal(t, diags[0].Locations[0].Line, 17)
-    require.Equal(t, diags[0].Locations[0].Column, 11)
+    require.Equal(t, 17, diags[0].Locations[0].Line)
+    require.Equal(t, 11, diags[0].Locations[0].Column)
     summaries := []string{
         fmt.Sprintf("Pattern %s does not match any files", filepath.Join("src", "*")),
         fmt.Sprintf("Pattern %s does not match any files", filepath.Join("tests", "*")),
     }
-    require.Equal(t, diags[1].Severity, diag.Warning)
+    require.Equal(t, diag.Warning, diags[1].Severity)
     require.Contains(t, summaries, diags[1].Summary)
-    require.Equal(t, diags[2].Severity, diag.Warning)
+    require.Equal(t, diag.Warning, diags[2].Severity)
     require.Contains(t, summaries, diags[2].Summary)
 }
@@ -47,7 +47,7 @@ func TestSyncIncludeWithNegate(t *testing.T) {
     b := loadTarget(t, "./sync/negate", "default")
     diags := bundle.ApplyReadOnly(context.Background(), bundle.ReadOnly(b), validate.ValidateSyncPatterns())
-    require.Len(t, diags, 0)
+    require.Empty(t, diags)
     require.NoError(t, diags.Error())
 }
@@ -58,6 +58,6 @@ func TestSyncIncludeWithNegateNoMatches(t *testing.T) {
     require.Len(t, diags, 1)
     require.NoError(t, diags.Error())
-    require.Equal(t, diags[0].Severity, diag.Warning)
-    require.Equal(t, diags[0].Summary, "Pattern !*.txt2 does not match any files")
+    require.Equal(t, diag.Warning, diags[0].Severity)
+    require.Equal(t, "Pattern !*.txt2 does not match any files", diags[0].Summary)
 }
```


```diff
@@ -115,12 +115,12 @@ func TestSyncPathsNoRoot(t *testing.T) {
     // If set to nil, it won't sync anything.
     b = loadTarget(t, "./sync/paths_no_root", "nil")
     assert.Equal(t, filepath.FromSlash("sync/paths_no_root"), b.SyncRootPath)
-    assert.Len(t, b.Config.Sync.Paths, 0)
+    assert.Empty(t, b.Config.Sync.Paths)

     // If set to an empty sequence, it won't sync anything.
     b = loadTarget(t, "./sync/paths_no_root", "empty")
     assert.Equal(t, filepath.FromSlash("sync/paths_no_root"), b.SyncRootPath)
-    assert.Len(t, b.Config.Sync.Paths, 0)
+    assert.Empty(t, b.Config.Sync.Paths)
 }

 func TestSyncSharedCode(t *testing.T) {
```


```diff
@@ -5,6 +5,7 @@ import (
     "fmt"

     "github.com/databricks/cli/bundle"
+    "github.com/databricks/cli/bundle/config/validate"
     "github.com/databricks/cli/bundle/phases"
     "github.com/databricks/cli/bundle/render"
     "github.com/databricks/cli/cmd/bundle/utils"
@@ -71,6 +72,7 @@ func newDeployCommand() *cobra.Command {
     diags = diags.Extend(
         bundle.Apply(ctx, b, bundle.Seq(
             phases.Initialize(),
+            validate.FastValidate(),
             phases.Build(),
             phases.Deploy(outputHandler),
         )),
```


```diff
@@ -44,7 +44,7 @@ func TestDashboard_ErrorOnLegacyDashboard(t *testing.T) {
     _, diags := d.resolveID(ctx, b)
     require.Len(t, diags, 1)
-    assert.Equal(t, diags[0].Summary, "dashboard \"legacy dashboard\" is a legacy dashboard")
+    assert.Equal(t, "dashboard \"legacy dashboard\" is a legacy dashboard", diags[0].Summary)
 }

 func TestDashboard_ExistingID_Nominal(t *testing.T) {
```


```diff
@@ -3,7 +3,6 @@ package generate
 import (
     "bytes"
     "context"
-    "errors"
     "fmt"
     "io"
     "io/fs"
@@ -302,7 +301,7 @@ func TestGenerateJobCommandOldFileRename(t *testing.T) {
     // Make sure file do not exists after the run
     _, err = os.Stat(oldFilename)
-    require.True(t, errors.Is(err, fs.ErrNotExist))
+    require.ErrorIs(t, err, fs.ErrNotExist)
     data, err := os.ReadFile(filepath.Join(configDir, "test_job.job.yml"))
     require.NoError(t, err)
```


```diff
@@ -148,9 +148,9 @@ func TestEnvVarsConfigureNoInteractive(t *testing.T) {
     // We should only save host and token for a profile, other env variables should not be saved
     _, err = defaultSection.GetKey("auth_type")
-    assert.NotNil(t, err)
+    assert.Error(t, err)
     _, err = defaultSection.GetKey("metadata_service_url")
-    assert.NotNil(t, err)
+    assert.Error(t, err)
 }

 func TestEnvVarsConfigureNoArgsNoInteractive(t *testing.T) {
```


```diff
@@ -7,14 +7,15 @@ import (
     "testing"

     "github.com/stretchr/testify/assert"
-    "github.com/stretchr/testify/require"
 )

 func TestFileFromRef(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/databrickslabs/ucx/main/README.md" {
             _, err := w.Write([]byte(`abc`))
-            require.NoError(t, err)
+            if !assert.NoError(t, err) {
+                return
+            }
             return
         }
         t.Logf("Requested: %s", r.URL.Path)
@@ -34,7 +35,9 @@ func TestDownloadZipball(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/repos/databrickslabs/ucx/zipball/main" {
             _, err := w.Write([]byte(`abc`))
-            require.NoError(t, err)
+            if !assert.NoError(t, err) {
+                return
+            }
             return
         }
         t.Logf("Requested: %s", r.URL.Path)
```


```diff
@@ -7,14 +7,15 @@ import (
     "testing"

     "github.com/stretchr/testify/assert"
-    "github.com/stretchr/testify/require"
 )

 func TestLoadsReleasesForCLI(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/repos/databricks/cli/releases" {
             _, err := w.Write([]byte(`[{"tag_name": "v1.2.3"}, {"tag_name": "v1.2.2"}]`))
-            require.NoError(t, err)
+            if !assert.NoError(t, err) {
+                return
+            }
             return
         }
         t.Logf("Requested: %s", r.URL.Path)
```


```diff
@@ -7,14 +7,13 @@ import (
     "testing"

     "github.com/stretchr/testify/assert"
-    "github.com/stretchr/testify/require"
 )

 func TestRepositories(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/users/databrickslabs/repos" {
             _, err := w.Write([]byte(`[{"name": "x"}]`))
-            require.NoError(t, err)
+            assert.NoError(t, err)
             return
         }
         t.Logf("Requested: %s", r.URL.Path)
@@ -28,5 +27,5 @@ func TestRepositories(t *testing.T) {
     r := NewRepositoryCache("databrickslabs", t.TempDir())
     all, err := r.Load(ctx)
     assert.NoError(t, err)
-    assert.True(t, len(all) > 0)
+    assert.NotEmpty(t, all)
 }
```


```diff
@@ -22,7 +22,7 @@ func TestCreatesDirectoryIfNeeded(t *testing.T) {
     }
     first, err := c.Load(ctx, tick)
     assert.NoError(t, err)
-    assert.Equal(t, first, int64(1))
+    assert.Equal(t, int64(1), first)
 }

 func TestImpossibleToCreateDir(t *testing.T) {
```


```diff
@@ -26,6 +26,7 @@ import (
     "github.com/databricks/databricks-sdk-go/service/compute"
     "github.com/databricks/databricks-sdk-go/service/iam"
     "github.com/databricks/databricks-sdk-go/service/sql"
+    "github.com/stretchr/testify/assert"
     "github.com/stretchr/testify/require"
 )
@@ -169,17 +170,17 @@ func TestInstallerWorksForReleases(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/databrickslabs/blueprint/v0.3.15/labs.yml" {
             raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml")
-            require.NoError(t, err)
+            assert.NoError(t, err)
             _, err = w.Write(raw)
-            require.NoError(t, err)
+            assert.NoError(t, err)
             return
         }
         if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.3.15" {
             raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib")
-            require.NoError(t, err)
+            assert.NoError(t, err)
             w.Header().Add("Content-Type", "application/octet-stream")
             _, err = w.Write(raw)
-            require.NoError(t, err)
+            assert.NoError(t, err)
             return
         }
         if r.URL.Path == "/api/2.1/clusters/get" {
@@ -376,17 +377,17 @@ func TestUpgraderWorksForReleases(t *testing.T) {
     server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
         if r.URL.Path == "/databrickslabs/blueprint/v0.4.0/labs.yml" {
             raw, err := os.ReadFile("testdata/installed-in-home/.databricks/labs/blueprint/lib/labs.yml")
-            require.NoError(t, err)
+            assert.NoError(t, err)
             _, err = w.Write(raw)
-            require.NoError(t, err)
+            assert.NoError(t, err)
             return
         }
         if r.URL.Path == "/repos/databrickslabs/blueprint/zipball/v0.4.0" {
             raw, err := zipballFromFolder("testdata/installed-in-home/.databricks/labs/blueprint/lib")
-            require.NoError(t, err)
+            assert.NoError(t, err)
             w.Header().Add("Content-Type", "application/octet-stream")
             _, err = w.Write(raw)
-            require.NoError(t, err)
+            assert.NoError(t, err)
             return
         }
         if r.URL.Path == "/api/2.1/clusters/get" {
```


```diff
@@ -85,13 +85,13 @@ func TestUploadArtifactFileToCorrectRemotePath(t *testing.T) {
     // The remote path attribute on the artifact file should have been set.
     require.Regexp(t,
-        regexp.MustCompile(path.Join(regexp.QuoteMeta(wsDir), `.internal/test\.whl`)),
+        path.Join(regexp.QuoteMeta(wsDir), `.internal/test\.whl`),
         b.Config.Artifacts["test"].Files[0].RemotePath,
     )

     // The task library path should have been updated to the remote path.
     require.Regexp(t,
-        regexp.MustCompile(path.Join("/Workspace", regexp.QuoteMeta(wsDir), `.internal/test\.whl`)),
+        path.Join("/Workspace", regexp.QuoteMeta(wsDir), `.internal/test\.whl`),
         b.Config.Resources.Jobs["test"].JobSettings.Tasks[0].Libraries[0].Whl,
     )
 }
@@ -149,13 +149,13 @@ func TestUploadArtifactFileToCorrectRemotePathWithEnvironments(t *testing.T) {
     // The remote path attribute on the artifact file should have been set.
     require.Regexp(t,
-        regexp.MustCompile(path.Join(regexp.QuoteMeta(wsDir), `.internal/test\.whl`)),
+        path.Join(regexp.QuoteMeta(wsDir), `.internal/test\.whl`),
         b.Config.Artifacts["test"].Files[0].RemotePath,
     )

     // The job environment deps path should have been updated to the remote path.
     require.Regexp(t,
-        regexp.MustCompile(path.Join("/Workspace", regexp.QuoteMeta(wsDir), `.internal/test\.whl`)),
+        path.Join("/Workspace", regexp.QuoteMeta(wsDir), `.internal/test\.whl`),
         b.Config.Resources.Jobs["test"].JobSettings.Environments[0].Spec.Dependencies[0],
     )
 }
@@ -218,13 +218,13 @@ func TestUploadArtifactFileToCorrectRemotePathForVolumes(t *testing.T) {
     // The remote path attribute on the artifact file should have been set.
     require.Regexp(t,
-        regexp.MustCompile(path.Join(regexp.QuoteMeta(volumePath), `.internal/test\.whl`)),
+        path.Join(regexp.QuoteMeta(volumePath), `.internal/test\.whl`),
         b.Config.Artifacts["test"].Files[0].RemotePath,
     )

     // The task library path should have been updated to the remote path.
     require.Regexp(t,
-        regexp.MustCompile(path.Join(regexp.QuoteMeta(volumePath), `.internal/test\.whl`)),
+        path.Join(regexp.QuoteMeta(volumePath), `.internal/test\.whl`),
         b.Config.Resources.Jobs["test"].JobSettings.Tasks[0].Libraries[0].Whl,
     )
 }
@@ -257,7 +257,7 @@ func TestUploadArtifactFileToVolumeThatDoesNotExist(t *testing.T) {
     stdout, stderr, err := testcli.RequireErrorRun(t, ctx, "bundle", "deploy")
     assert.Error(t, err)
-    assert.Equal(t, fmt.Sprintf(`Error: volume /Volumes/main/%s/doesnotexist does not exist: Not Found
+    assert.Equal(t, fmt.Sprintf(`Error: volume main.%s.doesnotexist does not exist
   at workspace.artifact_path
   in databricks.yml:6:18
@@ -293,7 +293,7 @@ func TestUploadArtifactToVolumeNotYetDeployed(t *testing.T) {
     stdout, stderr, err := testcli.RequireErrorRun(t, ctx, "bundle", "deploy")
     assert.Error(t, err)
-    assert.Equal(t, fmt.Sprintf(`Error: volume /Volumes/main/%s/my_volume does not exist: Not Found
+    assert.Equal(t, fmt.Sprintf(`Error: volume main.%s.my_volume does not exist
   at workspace.artifact_path
      resources.volumes.foo
   in databricks.yml:6:18
```


```diff
@@ -2,7 +2,6 @@ package bundle_test
 import (
     "context"
-    "errors"
     "fmt"
     "io"
     "os"
@@ -99,7 +98,7 @@ func TestBundleDeployUcSchema(t *testing.T) {
     // Assert the schema is deleted
     _, err = w.Schemas.GetByFullName(ctx, strings.Join([]string{catalogName, schemaName}, "."))
     apiErr := &apierr.APIError{}
-    assert.True(t, errors.As(err, &apiErr))
+    assert.ErrorAs(t, err, &apiErr)
     assert.Equal(t, "SCHEMA_DOES_NOT_EXIST", apiErr.ErrorCode)
 }
```


```diff
@@ -1,7 +1,6 @@
 package bundle_test

 import (
-    "errors"
     "os"
     "path/filepath"
     "testing"
@@ -71,11 +70,11 @@ func TestBundleDestroy(t *testing.T) {
     // Assert snapshot file is deleted
     entries, err = os.ReadDir(snapshotsDir)
     require.NoError(t, err)
-    assert.Len(t, entries, 0)
+    assert.Empty(t, entries)

     // Assert bundle deployment path is deleted
     _, err = w.Workspace.GetStatusByPath(ctx, remoteRoot)
     apiErr := &apierr.APIError{}
-    assert.True(t, errors.As(err, &apiErr))
+    assert.ErrorAs(t, err, &apiErr)
     assert.Equal(t, "RESOURCE_DOES_NOT_EXIST", apiErr.ErrorCode)
 }
```


```diff
@@ -39,6 +39,8 @@ func TestBundleInitErrorOnUnknownFields(t *testing.T) {
 // make changes that can break the MLOps Stacks DAB. In which case we should
 // skip this test until the MLOps Stacks DAB is updated to work again.
 func TestBundleInitOnMlopsStacks(t *testing.T) {
+    testutil.SkipUntil(t, "2025-01-09")
+
     ctx, wt := acc.WorkspaceTest(t)
     w := wt.W
```


```diff
@@ -5,7 +5,6 @@ import (
     "encoding/json"
     "io/fs"
     "path"
-    "regexp"
     "strings"
     "testing"
@@ -65,11 +64,11 @@ func TestFsLs(t *testing.T) {
         assert.Equal(t, "a", parsedStdout[0]["name"])
         assert.Equal(t, true, parsedStdout[0]["is_directory"])
-        assert.Equal(t, float64(0), parsedStdout[0]["size"])
+        assert.InDelta(t, float64(0), parsedStdout[0]["size"], 0.0001)
         assert.Equal(t, "bye.txt", parsedStdout[1]["name"])
         assert.Equal(t, false, parsedStdout[1]["is_directory"])
-        assert.Equal(t, float64(3), parsedStdout[1]["size"])
+        assert.InDelta(t, float64(3), parsedStdout[1]["size"], 0.0001)
     })
 }
 }
@@ -99,11 +98,11 @@ func TestFsLsWithAbsolutePaths(t *testing.T) {
         assert.Equal(t, path.Join(tmpDir, "a"), parsedStdout[0]["name"])
         assert.Equal(t, true, parsedStdout[0]["is_directory"])
-        assert.Equal(t, float64(0), parsedStdout[0]["size"])
+        assert.InDelta(t, float64(0), parsedStdout[0]["size"], 0.0001)
         assert.Equal(t, path.Join(tmpDir, "bye.txt"), parsedStdout[1]["name"])
         assert.Equal(t, false, parsedStdout[1]["is_directory"])
-        assert.Equal(t, float64(3), parsedStdout[1]["size"])
+        assert.InDelta(t, float64(3), parsedStdout[1]["size"].(float64), 0.0001)
     })
 }
 }
@@ -122,7 +121,7 @@ func TestFsLsOnFile(t *testing.T) {
         setupLsFiles(t, f)
         _, _, err := testcli.RequireErrorRun(t, ctx, "fs", "ls", path.Join(tmpDir, "a", "hello.txt"), "--output=json")
-        assert.Regexp(t, regexp.MustCompile("not a directory: .*/a/hello.txt"), err.Error())
+        assert.Regexp(t, "not a directory: .*/a/hello.txt", err.Error())
         assert.ErrorAs(t, err, &filer.NotADirectory{})
     })
 }
@@ -147,7 +146,7 @@ func TestFsLsOnEmptyDir(t *testing.T) {
         require.NoError(t, err)

         // assert on ls output
-        assert.Equal(t, 0, len(parsedStdout))
+        assert.Empty(t, parsedStdout)
     })
 }
 }
@@ -166,7 +165,7 @@ func TestFsLsForNonexistingDir(t *testing.T) {
         _, _, err := testcli.RequireErrorRun(t, ctx, "fs", "ls", path.Join(tmpDir, "nonexistent"), "--output=json")
         assert.ErrorIs(t, err, fs.ErrNotExist)
-        assert.Regexp(t, regexp.MustCompile("no such directory: .*/nonexistent"), err.Error())
+        assert.Regexp(t, "no such directory: .*/nonexistent", err.Error())
     })
 }
 }
```


```diff
@@ -34,7 +34,7 @@ func TestFsMkdir(t *testing.T) {
         info, err := f.Stat(context.Background(), "a")
         require.NoError(t, err)
         assert.Equal(t, "a", info.Name())
-        assert.Equal(t, true, info.IsDir())
+        assert.True(t, info.IsDir())
     })
 }
 }
@@ -60,19 +60,19 @@ func TestFsMkdirCreatesIntermediateDirectories(t *testing.T) {
         infoA, err := f.Stat(context.Background(), "a")
         require.NoError(t, err)
         assert.Equal(t, "a", infoA.Name())
-        assert.Equal(t, true, infoA.IsDir())
+        assert.True(t, infoA.IsDir())

         // assert directory "b" is created
         infoB, err := f.Stat(context.Background(), "a/b")
         require.NoError(t, err)
         assert.Equal(t, "b", infoB.Name())
-        assert.Equal(t, true, infoB.IsDir())
+        assert.True(t, infoB.IsDir())

         // assert directory "c" is created
         infoC, err := f.Stat(context.Background(), "a/b/c")
         require.NoError(t, err)
         assert.Equal(t, "c", infoC.Name())
-        assert.Equal(t, true, infoC.IsDir())
+        assert.True(t, infoC.IsDir())
     })
 }
 }
```


```diff
@@ -225,7 +225,7 @@ func (a *syncTest) snapshotContains(files []string) {
     assert.Equal(a.t, s.RemotePath, a.remoteRoot)
     for _, filePath := range files {
         _, ok := s.LastModifiedTimes[filePath]
-        assert.True(a.t, ok, fmt.Sprintf("%s not in snapshot file: %v", filePath, s.LastModifiedTimes))
+        assert.True(a.t, ok, "%s not in snapshot file: %v", filePath, s.LastModifiedTimes)
     }
     assert.Equal(a.t, len(files), len(s.LastModifiedTimes))
 }
```


```diff
@@ -11,14 +11,14 @@ import (
 )

 // Detects if test is run from "debug test" feature in VS Code.
-func isInDebug() bool {
+func IsInDebug() bool {
     ex, _ := os.Executable()
     return strings.HasPrefix(path.Base(ex), "__debug_bin")
 }

 // Loads debug environment from ~/.databricks/debug-env.json.
 func loadDebugEnvIfRunFromIDE(t testutil.TestingT, key string) {
-    if !isInDebug() {
+    if !IsInDebug() {
         return
     }

     home, err := os.UserHomeDir()
```


```diff
@@ -21,9 +21,10 @@ type WorkspaceT struct {
 }

 func WorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
+    t.Helper()
     loadDebugEnvIfRunFromIDE(t, "workspace")
-    t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
+    t.Logf("CLOUD_ENV=%s", testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))

     w, err := databricks.NewWorkspaceClient()
     require.NoError(t, err)
@@ -41,9 +42,10 @@ func WorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
 // Run the workspace test only on UC workspaces.
 func UcWorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
+    t.Helper()
     loadDebugEnvIfRunFromIDE(t, "workspace")
-    t.Log(testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))
+    t.Logf("CLOUD_ENV=%s", testutil.GetEnvOrSkipTest(t, "CLOUD_ENV"))

     if os.Getenv("TEST_METASTORE_ID") == "" {
         t.Skipf("Skipping on non-UC workspaces")
@@ -67,19 +69,21 @@ func UcWorkspaceTest(t testutil.TestingT) (context.Context, *WorkspaceT) {
 }

 func (t *WorkspaceT) TestClusterID() string {
+    t.Helper()
     clusterID := testutil.GetEnvOrSkipTest(t, "TEST_BRICKS_CLUSTER_ID")
     err := t.W.Clusters.EnsureClusterIsRunning(t.ctx, clusterID)
-    require.NoError(t, err)
+    require.NoError(t, err, "Unexpected error from EnsureClusterIsRunning for clusterID=%s", clusterID)
     return clusterID
 }

 func (t *WorkspaceT) RunPython(code string) (string, error) {
+    t.Helper()
     var err error

     // Create command executor only once per test.
     if t.exec == nil {
         t.exec, err = t.W.CommandExecution.Start(t.ctx, t.TestClusterID(), compute.LanguagePython)
-        require.NoError(t, err)
+        require.NoError(t, err, "Unexpected error from CommandExecution.Start(clusterID=%v)", t.TestClusterID())
         t.Cleanup(func() {
             err := t.exec.Destroy(t.ctx)
@@ -88,7 +92,7 @@ func (t *WorkspaceT) RunPython(code string) (string, error) {
     }

     results, err := t.exec.Execute(t.ctx, code)
-    require.NoError(t, err)
+    require.NoError(t, err, "Unexpected error from Execute(%v)", code)
     require.NotEqual(t, compute.ResultTypeError, results.ResultType, results.Cause)
     output, ok := results.Data.(string)
     require.True(t, ok, "unexpected type %T", results.Data)
```


```diff
@@ -4,6 +4,8 @@ import (
     "fmt"
     "os"
     "testing"
+
+    "github.com/databricks/cli/integration/internal/acc"
 )

 // Main is the entry point for integration tests.
@@ -11,7 +13,7 @@ import (
 // they are not inadvertently executed when calling `go test ./...`.
 func Main(m *testing.M) {
     value := os.Getenv("CLOUD_ENV")
-    if value == "" {
+    if value == "" && !acc.IsInDebug() {
         fmt.Println("CLOUD_ENV is not set, skipping integration tests")
         return
     }
```


```diff
@@ -4,11 +4,9 @@ import (
     "bytes"
     "context"
     "encoding/json"
-    "errors"
     "io"
     "io/fs"
     "path"
-    "regexp"
     "strings"
     "testing"
@@ -106,7 +104,7 @@ func commonFilerRecursiveDeleteTest(t *testing.T, ctx context.Context, f filer.F
     for _, e := range entriesBeforeDelete {
         names = append(names, e.Name())
     }
-    assert.Equal(t, names, []string{"file1", "file2", "subdir1", "subdir2"})
+    assert.Equal(t, []string{"file1", "file2", "subdir1", "subdir2"}, names)

     err = f.Delete(ctx, "dir")
     assert.ErrorAs(t, err, &filer.DirectoryNotEmptyError{})
@@ -149,13 +147,13 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
     // Write should fail because the intermediate directory doesn't exist.
     err = f.Write(ctx, "/foo/bar", strings.NewReader(`hello world`))
-    assert.True(t, errors.As(err, &filer.NoSuchDirectoryError{}))
-    assert.True(t, errors.Is(err, fs.ErrNotExist))
+    assert.ErrorAs(t, err, &filer.NoSuchDirectoryError{})
+    assert.ErrorIs(t, err, fs.ErrNotExist)

     // Read should fail because the intermediate directory doesn't yet exist.
     _, err = f.Read(ctx, "/foo/bar")
-    assert.True(t, errors.As(err, &filer.FileDoesNotExistError{}))
-    assert.True(t, errors.Is(err, fs.ErrNotExist))
+    assert.ErrorAs(t, err, &filer.FileDoesNotExistError{})
+    assert.ErrorIs(t, err, fs.ErrNotExist)

     // Read should fail because the path points to a directory
     err = f.Mkdir(ctx, "/dir")
@@ -170,8 +168,8 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
     // Write should fail because there is an existing file at the specified path.
     err = f.Write(ctx, "/foo/bar", strings.NewReader(`hello universe`))
-    assert.True(t, errors.As(err, &filer.FileAlreadyExistsError{}))
-    assert.True(t, errors.Is(err, fs.ErrExist))
+    assert.ErrorAs(t, err, &filer.FileAlreadyExistsError{})
+    assert.ErrorIs(t, err, fs.ErrExist)

     // Write with OverwriteIfExists should succeed.
     err = f.Write(ctx, "/foo/bar", strings.NewReader(`hello universe`), filer.OverwriteIfExists)
@@ -188,7 +186,7 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
     require.NoError(t, err)
     assert.Equal(t, "foo", info.Name())
     assert.True(t, info.Mode().IsDir())
-    assert.Equal(t, true, info.IsDir())
+    assert.True(t, info.IsDir())

     // Stat on a file should succeed.
     // Note: size and modification time behave differently between backends.
@@ -196,17 +194,17 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
     require.NoError(t, err)
     assert.Equal(t, "bar", info.Name())
     assert.True(t, info.Mode().IsRegular())
-    assert.Equal(t, false, info.IsDir())
+    assert.False(t, info.IsDir())

     // Delete should fail if the file doesn't exist.
     err = f.Delete(ctx, "/doesnt_exist")
     assert.ErrorAs(t, err, &filer.FileDoesNotExistError{})
-    assert.True(t, errors.Is(err, fs.ErrNotExist))
+    assert.ErrorIs(t, err, fs.ErrNotExist)

     // Stat should fail if the file doesn't exist.
     _, err = f.Stat(ctx, "/doesnt_exist")
     assert.ErrorAs(t, err, &filer.FileDoesNotExistError{})
-    assert.True(t, errors.Is(err, fs.ErrNotExist))
+    assert.ErrorIs(t, err, fs.ErrNotExist)

     // Delete should succeed for file that does exist.
     err = f.Delete(ctx, "/foo/bar")
@@ -215,7 +213,7 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
     // Delete should fail for a non-empty directory.
     err = f.Delete(ctx, "/foo")
     assert.ErrorAs(t, err, &filer.DirectoryNotEmptyError{})
-    assert.True(t, errors.Is(err, fs.ErrInvalid))
+    assert.ErrorIs(t, err, fs.ErrInvalid)

     // Delete should succeed for a non-empty directory if the DeleteRecursively flag is set.
```
err = f.Delete(ctx, "/foo", filer.DeleteRecursively) err = f.Delete(ctx, "/foo", filer.DeleteRecursively)
@ -224,8 +222,8 @@ func commonFilerReadWriteTests(t *testing.T, ctx context.Context, f filer.Filer)
// Delete of the filer root should ALWAYS fail, otherwise subsequent writes would fail. // Delete of the filer root should ALWAYS fail, otherwise subsequent writes would fail.
// It is not in the filer's purview to delete its root directory. // It is not in the filer's purview to delete its root directory.
err = f.Delete(ctx, "/") err = f.Delete(ctx, "/")
assert.True(t, errors.As(err, &filer.CannotDeleteRootError{})) assert.ErrorAs(t, err, &filer.CannotDeleteRootError{})
assert.True(t, errors.Is(err, fs.ErrInvalid)) assert.ErrorIs(t, err, fs.ErrInvalid)
} }
func TestFilerReadWrite(t *testing.T) { func TestFilerReadWrite(t *testing.T) {
@ -262,7 +260,7 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
// We start with an empty directory. // We start with an empty directory.
entries, err := f.ReadDir(ctx, ".") entries, err := f.ReadDir(ctx, ".")
require.NoError(t, err) require.NoError(t, err)
assert.Len(t, entries, 0) assert.Empty(t, entries)
// Write a file. // Write a file.
err = f.Write(ctx, "/hello.txt", strings.NewReader(`hello world`)) err = f.Write(ctx, "/hello.txt", strings.NewReader(`hello world`))
@ -282,8 +280,8 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
// Expect an error if the path doesn't exist. // Expect an error if the path doesn't exist.
_, err = f.ReadDir(ctx, "/dir/a/b/c/d/e") _, err = f.ReadDir(ctx, "/dir/a/b/c/d/e")
assert.True(t, errors.As(err, &filer.NoSuchDirectoryError{}), err) assert.ErrorAs(t, err, &filer.NoSuchDirectoryError{}, err)
assert.True(t, errors.Is(err, fs.ErrNotExist)) assert.ErrorIs(t, err, fs.ErrNotExist)
// Expect two entries in the root. // Expect two entries in the root.
entries, err = f.ReadDir(ctx, ".") entries, err = f.ReadDir(ctx, ".")
@ -295,7 +293,7 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
assert.False(t, entries[1].IsDir()) assert.False(t, entries[1].IsDir())
info, err = entries[1].Info() info, err = entries[1].Info()
require.NoError(t, err) require.NoError(t, err)
assert.Greater(t, info.ModTime().Unix(), int64(0)) assert.Positive(t, info.ModTime().Unix())
// Expect two entries in the directory. // Expect two entries in the directory.
entries, err = f.ReadDir(ctx, "/dir") entries, err = f.ReadDir(ctx, "/dir")
@ -307,7 +305,7 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
assert.False(t, entries[1].IsDir()) assert.False(t, entries[1].IsDir())
info, err = entries[1].Info() info, err = entries[1].Info()
require.NoError(t, err) require.NoError(t, err)
assert.Greater(t, info.ModTime().Unix(), int64(0)) assert.Positive(t, info.ModTime().Unix())
// Expect a single entry in the nested path. // Expect a single entry in the nested path.
entries, err = f.ReadDir(ctx, "/dir/a/b") entries, err = f.ReadDir(ctx, "/dir/a/b")
@ -325,7 +323,7 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
require.NoError(t, err) require.NoError(t, err)
entries, err = f.ReadDir(ctx, "empty-dir") entries, err = f.ReadDir(ctx, "empty-dir")
assert.NoError(t, err) assert.NoError(t, err)
assert.Len(t, entries, 0) assert.Empty(t, entries)
// Expect one entry for a directory with a file in it // Expect one entry for a directory with a file in it
err = f.Write(ctx, "dir-with-one-file/my-file.txt", strings.NewReader("abc"), filer.CreateParentDirectories) err = f.Write(ctx, "dir-with-one-file/my-file.txt", strings.NewReader("abc"), filer.CreateParentDirectories)
@ -333,7 +331,7 @@ func commonFilerReadDirTest(t *testing.T, ctx context.Context, f filer.Filer) {
entries, err = f.ReadDir(ctx, "dir-with-one-file") entries, err = f.ReadDir(ctx, "dir-with-one-file")
assert.NoError(t, err) assert.NoError(t, err)
assert.Len(t, entries, 1) assert.Len(t, entries, 1)
assert.Equal(t, entries[0].Name(), "my-file.txt") assert.Equal(t, "my-file.txt", entries[0].Name())
assert.False(t, entries[0].IsDir()) assert.False(t, entries[0].IsDir())
} }
@ -459,7 +457,7 @@ func TestFilerWorkspaceNotebook(t *testing.T) {
// Assert uploading a second time fails due to overwrite mode missing // Assert uploading a second time fails due to overwrite mode missing
err = f.Write(ctx, tc.name, strings.NewReader(tc.content2)) err = f.Write(ctx, tc.name, strings.NewReader(tc.content2))
require.ErrorIs(t, err, fs.ErrExist) require.ErrorIs(t, err, fs.ErrExist)
assert.Regexp(t, regexp.MustCompile(`file already exists: .*/`+tc.nameWithoutExt+`$`), err.Error()) assert.Regexp(t, `file already exists: .*/`+tc.nameWithoutExt+`$`, err.Error())
// Try uploading the notebook again with overwrite flag. This time it should succeed. // Try uploading the notebook again with overwrite flag. This time it should succeed.
err = f.Write(ctx, tc.name, strings.NewReader(tc.content2), filer.OverwriteIfExists) err = f.Write(ctx, tc.name, strings.NewReader(tc.content2), filer.OverwriteIfExists)


@@ -3,7 +3,6 @@ package testcli
 import (
 	"context"
 	"fmt"
-	"strings"

 	"github.com/databricks/cli/internal/testutil"
 	"github.com/databricks/cli/libs/testdiff"
@@ -11,7 +10,7 @@ import (
 )

 func captureOutput(t testutil.TestingT, ctx context.Context, args []string) string {
-	t.Logf("run args: [%s]", strings.Join(args, ", "))
+	t.Helper()
 	r := NewRunner(t, ctx, args...)
 	stdout, stderr, err := r.Run()
 	assert.NoError(t, err)
@@ -19,11 +18,13 @@ func captureOutput(t testutil.TestingT, ctx context.Context, args []string) string {
 }

 func AssertOutput(t testutil.TestingT, ctx context.Context, args []string, expectedPath string) {
+	t.Helper()
 	out := captureOutput(t, ctx, args)
 	testdiff.AssertOutput(t, ctx, out, fmt.Sprintf("Output from %v", args), expectedPath)
 }

 func AssertOutputJQ(t testutil.TestingT, ctx context.Context, args []string, expectedPath string, ignorePaths []string) {
+	t.Helper()
 	out := captureOutput(t, ctx, args)
 	testdiff.AssertOutputJQ(t, ctx, out, fmt.Sprintf("Output from %v", args), expectedPath, ignorePaths)
 }


@@ -69,6 +69,7 @@ func consumeLines(ctx context.Context, wg *sync.WaitGroup, r io.Reader) <-chan s
 }

 func (r *Runner) registerFlagCleanup(c *cobra.Command) {
+	r.Helper()
 	// Find target command that will be run. Example: if the command run is `databricks fs cp`,
 	// target command corresponds to `cp`
 	targetCmd, _, err := c.Find(r.args)
@@ -230,13 +231,48 @@ func (r *Runner) RunBackground() {
 }

 func (r *Runner) Run() (bytes.Buffer, bytes.Buffer, error) {
-	r.RunBackground()
-	err := <-r.errch
-	return r.stdout, r.stderr, err
+	r.Helper()
+	var stdout, stderr bytes.Buffer
+	ctx := cmdio.NewContext(r.ctx, &cmdio.Logger{
+		Mode:   flags.ModeAppend,
+		Reader: bufio.Reader{},
+		Writer: &stderr,
+	})
+
+	cli := cmd.New(ctx)
+	cli.SetOut(&stdout)
+	cli.SetErr(&stderr)
+	cli.SetArgs(r.args)
+
+	r.Logf(" args: %s", strings.Join(r.args, ", "))
+
+	err := root.Execute(ctx, cli)
+	if err != nil {
+		r.Logf(" error: %s", err)
+	}
+
+	if stdout.Len() > 0 {
+		// Make a copy of the buffer such that it remains "unread".
+		scanner := bufio.NewScanner(bytes.NewBuffer(stdout.Bytes()))
+		for scanner.Scan() {
+			r.Logf("stdout: %s", scanner.Text())
+		}
+	}
+
+	if stderr.Len() > 0 {
+		// Make a copy of the buffer such that it remains "unread".
+		scanner := bufio.NewScanner(bytes.NewBuffer(stderr.Bytes()))
+		for scanner.Scan() {
+			r.Logf("stderr: %s", scanner.Text())
+		}
+	}
+
+	return stdout, stderr, err
 }

 // Like [require.Eventually] but errors if the underlying command has failed.
 func (r *Runner) Eventually(condition func() bool, waitFor, tick time.Duration, msgAndArgs ...any) {
+	r.Helper()
 	ch := make(chan bool, 1)
 	timer := time.NewTimer(waitFor)
@@ -269,12 +305,14 @@ func (r *Runner) Eventually(condition func() bool, waitFor, tick time.Duration,
 }

 func (r *Runner) RunAndExpectOutput(heredoc string) {
+	r.Helper()
 	stdout, _, err := r.Run()
 	require.NoError(r, err)
 	require.Equal(r, cmdio.Heredoc(heredoc), strings.TrimSpace(stdout.String()))
 }

 func (r *Runner) RunAndParseJSON(v any) {
+	r.Helper()
 	stdout, _, err := r.Run()
 	require.NoError(r, err)
 	err = json.Unmarshal(stdout.Bytes(), &v)
@@ -291,7 +329,7 @@ func NewRunner(t testutil.TestingT, ctx context.Context, args ...string) *Runner
 }

 func RequireSuccessfulRun(t testutil.TestingT, ctx context.Context, args ...string) (bytes.Buffer, bytes.Buffer) {
-	t.Logf("run args: [%s]", strings.Join(args, ", "))
+	t.Helper()
 	r := NewRunner(t, ctx, args...)
 	stdout, stderr, err := r.Run()
 	require.NoError(t, err)
@@ -299,6 +337,7 @@ func RequireSuccessfulRun(t testutil.TestingT, ctx context.Context, args ...string) (
 }

 func RequireErrorRun(t testutil.TestingT, ctx context.Context, args ...string) (bytes.Buffer, bytes.Buffer, error) {
+	t.Helper()
 	r := NewRunner(t, ctx, args...)
 	stdout, stderr, err := r.Run()
 	require.Error(t, err)


@@ -5,6 +5,9 @@ import (
 	"math/rand"
 	"os"
 	"strings"
+	"time"
+
+	"github.com/stretchr/testify/require"
 )

 // GetEnvOrSkipTest proceeds with test only with that env variable.
@@ -30,3 +33,12 @@ func RandomName(prefix ...string) string {
 	}
 	return string(b)
 }
+
+func SkipUntil(t TestingT, date string) {
+	deadline, err := time.Parse(time.DateOnly, date)
+	require.NoError(t, err)
+
+	if time.Now().Before(deadline) {
+		t.Skipf("Skipping test until %s. Time right now: %s", deadline.Format(time.DateOnly), time.Now())
+	}
+}


@@ -24,4 +24,6 @@ type TestingT interface {
 	Setenv(key, value string)

 	TempDir() string
+
+	Helper()
 }


@@ -42,7 +42,7 @@ func TestStoreAndLookup(t *testing.T) {
 	tok, err := l.Lookup("x")
 	require.NoError(t, err)
 	assert.Equal(t, "abc", tok.AccessToken)
-	assert.Equal(t, 2, len(l.Tokens))
+	assert.Len(t, l.Tokens, 2)

 	_, err = l.Lookup("z")
 	assert.Equal(t, ErrNotConfigured, err)


@@ -216,7 +216,7 @@ func TestSaveToProfile_ClearingPreviousProfile(t *testing.T) {
 	dlft, err := file.GetSection("DEFAULT")
 	assert.NoError(t, err)
-	assert.Len(t, dlft.KeysHash(), 0)
+	assert.Empty(t, dlft.KeysHash())

 	abc, err := file.GetSection("abc")
 	assert.NoError(t, err)


@@ -11,10 +11,10 @@ import (
 )

 func TestProfileCloud(t *testing.T) {
-	assert.Equal(t, Profile{Host: "https://dbc-XXXXXXXX-YYYY.cloud.databricks.com"}.Cloud(), "AWS")
+	assert.Equal(t, "AWS", Profile{Host: "https://dbc-XXXXXXXX-YYYY.cloud.databricks.com"}.Cloud())
-	assert.Equal(t, Profile{Host: "https://adb-xxx.y.azuredatabricks.net/"}.Cloud(), "Azure")
+	assert.Equal(t, "Azure", Profile{Host: "https://adb-xxx.y.azuredatabricks.net/"}.Cloud())
-	assert.Equal(t, Profile{Host: "https://workspace.gcp.databricks.com/"}.Cloud(), "GCP")
+	assert.Equal(t, "GCP", Profile{Host: "https://workspace.gcp.databricks.com/"}.Cloud())
-	assert.Equal(t, Profile{Host: "https://some.invalid.host.com/"}.Cloud(), "AWS")
+	assert.Equal(t, "AWS", Profile{Host: "https://some.invalid.host.com/"}.Cloud())
 }

 func TestProfilesSearchCaseInsensitive(t *testing.T) {


@ -1,7 +1,6 @@
package errs package errs
import ( import (
"errors"
"fmt" "fmt"
"testing" "testing"
@ -14,13 +13,13 @@ func TestFromManyErrors(t *testing.T) {
e3 := fmt.Errorf("Error 3") e3 := fmt.Errorf("Error 3")
err := FromMany(e1, e2, e3) err := FromMany(e1, e2, e3)
assert.True(t, errors.Is(err, e1)) assert.ErrorIs(t, err, e1)
assert.True(t, errors.Is(err, e2)) assert.ErrorIs(t, err, e2)
assert.True(t, errors.Is(err, e3)) assert.ErrorIs(t, err, e3)
assert.Equal(t, err.Error(), `Error 1 assert.Equal(t, `Error 1
Error 2 Error 2
Error 3`) Error 3`, err.Error())
} }
func TestFromManyErrorsWihtNil(t *testing.T) { func TestFromManyErrorsWihtNil(t *testing.T) {
@ -29,9 +28,9 @@ func TestFromManyErrorsWihtNil(t *testing.T) {
e3 := fmt.Errorf("Error 3") e3 := fmt.Errorf("Error 3")
err := FromMany(e1, e2, e3) err := FromMany(e1, e2, e3)
assert.True(t, errors.Is(err, e1)) assert.ErrorIs(t, err, e1)
assert.True(t, errors.Is(err, e3)) assert.ErrorIs(t, err, e3)
assert.Equal(t, err.Error(), `Error 1 assert.Equal(t, `Error 1
Error 3`) Error 3`, err.Error())
} }


@@ -37,7 +37,7 @@ func TestFilerCompleterSetsPrefix(t *testing.T) {
 	assert.Equal(t, []string{"dbfs:/dir/dirA/", "dbfs:/dir/dirB/"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterReturnsNestedDirs(t *testing.T) {
@@ -46,7 +46,7 @@ func TestFilerCompleterReturnsNestedDirs(t *testing.T) {
 	assert.Equal(t, []string{"dir/dirA/", "dir/dirB/"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterReturnsAdjacentDirs(t *testing.T) {
@@ -55,7 +55,7 @@ func TestFilerCompleterReturnsAdjacentDirs(t *testing.T) {
 	assert.Equal(t, []string{"dir/dirA/", "dir/dirB/"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterReturnsNestedDirsAndFiles(t *testing.T) {
@@ -64,7 +64,7 @@ func TestFilerCompleterReturnsNestedDirsAndFiles(t *testing.T) {
 	assert.Equal(t, []string{"dir/dirA/", "dir/dirB/", "dir/fileA"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterAddsDbfsPath(t *testing.T) {
@@ -78,7 +78,7 @@ func TestFilerCompleterAddsDbfsPath(t *testing.T) {
 	assert.Equal(t, []string{"dir/dirA/", "dir/dirB/", "dbfs:/"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterWindowsSeparator(t *testing.T) {
@@ -92,7 +92,7 @@ func TestFilerCompleterWindowsSeparator(t *testing.T) {
 	assert.Equal(t, []string{"dir\\dirA\\", "dir\\dirB\\", "dbfs:/"}, completions)
 	assert.Equal(t, cobra.ShellCompDirectiveNoSpace, directive)
-	assert.Nil(t, err)
+	assert.NoError(t, err)
 }

 func TestFilerCompleterNoCompletions(t *testing.T) {


@@ -63,7 +63,7 @@ func TestFsOpenFile(t *testing.T) {
 	assert.Equal(t, "fileA", info.Name())
 	assert.Equal(t, int64(3), info.Size())
 	assert.Equal(t, fs.FileMode(0), info.Mode())
-	assert.Equal(t, false, info.IsDir())
+	assert.False(t, info.IsDir())

 	// Read until closed.
 	b := make([]byte, 3)
@@ -91,7 +91,7 @@ func TestFsOpenDir(t *testing.T) {
 	info, err := fakeFile.Stat()
 	require.NoError(t, err)
 	assert.Equal(t, "root", info.Name())
-	assert.Equal(t, true, info.IsDir())
+	assert.True(t, info.IsDir())

 	de, ok := fakeFile.(fs.ReadDirFile)
 	require.True(t, ok)


@@ -52,7 +52,7 @@ func TestGlobFileset(t *testing.T) {
 	files, err = g.Files()
 	require.NoError(t, err)
-	require.Equal(t, len(files), 0)
+	require.Empty(t, files)
 }

 func TestGlobFilesetWithRelativeRoot(t *testing.T) {


@@ -123,12 +123,12 @@ func TestJsonUnmarshalForRequest(t *testing.T) {
 	assert.Equal(t, "new job", r.NewSettings.Name)
 	assert.Equal(t, 0, r.NewSettings.TimeoutSeconds)
 	assert.Equal(t, 1, r.NewSettings.MaxConcurrentRuns)
-	assert.Equal(t, 1, len(r.NewSettings.Tasks))
+	assert.Len(t, r.NewSettings.Tasks, 1)
 	assert.Equal(t, "new task", r.NewSettings.Tasks[0].TaskKey)
 	assert.Equal(t, 0, r.NewSettings.Tasks[0].TimeoutSeconds)
 	assert.Equal(t, 0, r.NewSettings.Tasks[0].MaxRetries)
 	assert.Equal(t, 0, r.NewSettings.Tasks[0].MinRetryIntervalMillis)
-	assert.Equal(t, true, r.NewSettings.Tasks[0].RetryOnTimeout)
+	assert.True(t, r.NewSettings.Tasks[0].RetryOnTimeout)
 }

 const incorrectJsonData = `{
@@ -280,8 +280,8 @@ func TestJsonUnmarshalForRequestWithForceSendFields(t *testing.T) {
 	assert.NoError(t, diags.Error())
 	assert.Empty(t, diags)

-	assert.Equal(t, false, r.NewSettings.NotificationSettings.NoAlertForSkippedRuns)
+	assert.False(t, r.NewSettings.NotificationSettings.NoAlertForSkippedRuns)
-	assert.Equal(t, false, r.NewSettings.NotificationSettings.NoAlertForCanceledRuns)
+	assert.False(t, r.NewSettings.NotificationSettings.NoAlertForCanceledRuns)
 	assert.NotContains(t, r.NewSettings.NotificationSettings.ForceSendFields, "NoAlertForSkippedRuns")
 	assert.Contains(t, r.NewSettings.NotificationSettings.ForceSendFields, "NoAlertForCanceledRuns")
 }


@@ -33,6 +33,6 @@ func TestFindDirWithLeaf(t *testing.T) {
 	{
 		out, err := FindDirWithLeaf(root, "this-leaf-doesnt-exist-anywhere")
 		assert.ErrorIs(t, err, os.ErrNotExist)
-		assert.Equal(t, out, "")
+		assert.Equal(t, "", out)
 	}
 }


@@ -96,7 +96,7 @@ func TestTemplateFromString(t *testing.T) {
 	v, err = fromString("1.1", NumberType)
 	assert.NoError(t, err)
 	// Floating point conversions are not perfect
-	assert.True(t, (v.(float64)-1.1) < 0.000001)
+	assert.Less(t, (v.(float64) - 1.1), 0.000001)

 	v, err = fromString("12345", IntegerType)
 	assert.NoError(t, err)
@@ -104,7 +104,7 @@ func TestTemplateFromString(t *testing.T) {
 	v, err = fromString("123", NumberType)
 	assert.NoError(t, err)
-	assert.Equal(t, float64(123), v)
+	assert.InDelta(t, float64(123), v.(float64), 0.0001)

 	_, err = fromString("qrt", ArrayType)
 	assert.EqualError(t, err, "cannot parse string as object of type array. Value of string: \"qrt\"")


@ -1,7 +1,6 @@
package notebook package notebook
import ( import (
"errors"
"io/fs" "io/fs"
"os" "os"
"path/filepath" "path/filepath"
@ -53,7 +52,7 @@ func TestDetectCallsDetectJupyter(t *testing.T) {
func TestDetectUnknownExtension(t *testing.T) { func TestDetectUnknownExtension(t *testing.T) {
_, _, err := Detect("./testdata/doesntexist.foobar") _, _, err := Detect("./testdata/doesntexist.foobar")
assert.True(t, errors.Is(err, fs.ErrNotExist)) assert.ErrorIs(t, err, fs.ErrNotExist)
nb, _, err := Detect("./testdata/unknown_extension.foobar") nb, _, err := Detect("./testdata/unknown_extension.foobar")
require.NoError(t, err) require.NoError(t, err)
@ -62,7 +61,7 @@ func TestDetectUnknownExtension(t *testing.T) {
func TestDetectNoExtension(t *testing.T) { func TestDetectNoExtension(t *testing.T) {
_, _, err := Detect("./testdata/doesntexist") _, _, err := Detect("./testdata/doesntexist")
assert.True(t, errors.Is(err, fs.ErrNotExist)) assert.ErrorIs(t, err, fs.ErrNotExist)
nb, _, err := Detect("./testdata/no_extension") nb, _, err := Detect("./testdata/no_extension")
require.NoError(t, err) require.NoError(t, err)


@@ -95,7 +95,7 @@ func TestBackgroundNoStdin(t *testing.T) {

 func TestBackgroundFails(t *testing.T) {
 	ctx := context.Background()
 	_, err := Background(ctx, []string{"ls", "/dev/null/x"})
-	assert.NotNil(t, err)
+	assert.Error(t, err)
 }

 func TestBackgroundFailsOnOption(t *testing.T) {


@@ -27,7 +27,7 @@ func TestForwardedFails(t *testing.T) {
 	err := Forwarded(ctx, []string{
 		"_non_existent_",
 	}, strings.NewReader("abc\n"), &buf, &buf)
-	assert.NotNil(t, err)
+	assert.Error(t, err)
 }

 func TestForwardedFailsOnStdinPipe(t *testing.T) {
@@ -39,5 +39,5 @@ func TestForwardedFailsOnStdinPipe(t *testing.T) {
 		c.Stdin = strings.NewReader("x")
 		return nil
 	})
-	assert.NotNil(t, err)
+	assert.Error(t, err)
 }


@@ -41,7 +41,7 @@ func TestWorksWithLibsEnv(t *testing.T) {
 	vars := cmd.Environ()
 	sort.Strings(vars)
-	assert.True(t, len(vars) >= 2)
+	assert.GreaterOrEqual(t, len(vars), 2)
 	assert.Equal(t, "CCC=DDD", vars[0])
 	assert.Equal(t, "EEE=FFF", vars[1])
 }


@@ -18,7 +18,7 @@ func TestAtLeastOnePythonInstalled(t *testing.T) {
 	assert.NoError(t, err)
 	a := all.Latest()
 	t.Logf("latest is: %s", a)
-	assert.True(t, len(all) > 0)
+	assert.NotEmpty(t, all)
 }

 func TestNoInterpretersFound(t *testing.T) {


@@ -51,7 +51,7 @@ func TestDiff(t *testing.T) {
 	assert.NoError(t, err)
 	change, err := state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
 	assert.Len(t, change.put, 2)
 	assert.Contains(t, change.put, "hello.txt")
 	assert.Contains(t, change.put, "world.txt")
@@ -67,7 +67,7 @@ func TestDiff(t *testing.T) {
 	change, err = state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
 	assert.Len(t, change.put, 1)
 	assert.Contains(t, change.put, "world.txt")
 	assertKeysOfMap(t, state.LastModifiedTimes, []string{"hello.txt", "world.txt"})
@@ -82,7 +82,7 @@ func TestDiff(t *testing.T) {
 	change, err = state.diff(ctx, files)
 	assert.NoError(t, err)
 	assert.Len(t, change.delete, 1)
-	assert.Len(t, change.put, 0)
+	assert.Empty(t, change.put)
 	assert.Contains(t, change.delete, "hello.txt")
 	assertKeysOfMap(t, state.LastModifiedTimes, []string{"world.txt"})
 	assert.Equal(t, map[string]string{"world.txt": "world.txt"}, state.LocalToRemoteNames)
@@ -145,8 +145,8 @@ func TestFolderDiff(t *testing.T) {
 	assert.NoError(t, err)
 	change, err := state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
-	assert.Len(t, change.rmdir, 0)
+	assert.Empty(t, change.rmdir)
 	assert.Len(t, change.mkdir, 1)
 	assert.Len(t, change.put, 1)
 	assert.Contains(t, change.mkdir, "foo")
@@ -159,8 +159,8 @@ func TestFolderDiff(t *testing.T) {
 	assert.NoError(t, err)
 	assert.Len(t, change.delete, 1)
 	assert.Len(t, change.rmdir, 1)
-	assert.Len(t, change.mkdir, 0)
+	assert.Empty(t, change.mkdir)
-	assert.Len(t, change.put, 0)
+	assert.Empty(t, change.put)
 	assert.Contains(t, change.delete, "foo/bar")
 	assert.Contains(t, change.rmdir, "foo")
 }
@@ -189,7 +189,7 @@ func TestPythonNotebookDiff(t *testing.T) {
 	foo.Overwrite(t, "# Databricks notebook source\nprint(\"abc\")")
 	change, err := state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
 	assert.Len(t, change.put, 1)
 	assert.Contains(t, change.put, "foo.py")
 	assertKeysOfMap(t, state.LastModifiedTimes, []string{"foo.py"})
@@ -233,9 +233,9 @@ func TestPythonNotebookDiff(t *testing.T) {
 	change, err = state.diff(ctx, files)
 	assert.NoError(t, err)
 	assert.Len(t, change.delete, 1)
-	assert.Len(t, change.put, 0)
+	assert.Empty(t, change.put)
 	assert.Contains(t, change.delete, "foo")
-	assert.Len(t, state.LastModifiedTimes, 0)
+	assert.Empty(t, state.LastModifiedTimes)
 	assert.Equal(t, map[string]string{}, state.LocalToRemoteNames)
 	assert.Equal(t, map[string]string{}, state.RemoteToLocalNames)
 }
@@ -264,7 +264,7 @@ func TestErrorWhenIdenticalRemoteName(t *testing.T) {
 	assert.NoError(t, err)
 	change, err := state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
 	assert.Len(t, change.put, 2)
 	assert.Contains(t, change.put, "foo.py")
 	assert.Contains(t, change.put, "foo")
@@ -300,7 +300,7 @@ func TestNoErrorRenameWithIdenticalRemoteName(t *testing.T) {
 	assert.NoError(t, err)
 	change, err := state.diff(ctx, files)
 	assert.NoError(t, err)
-	assert.Len(t, change.delete, 0)
+	assert.Empty(t, change.delete)
 	assert.Len(t, change.put, 1)
 	assert.Contains(t, change.put, "foo.py")


@@ -59,7 +59,7 @@ func TestGetFileSet(t *testing.T) {
 	fileList, err := s.GetFileList(ctx)
 	require.NoError(t, err)
-	require.Equal(t, len(fileList), 10)
+	require.Len(t, fileList, 10)
 	inc, err = fileset.NewGlobSet(root, []string{})
 	require.NoError(t, err)
@@ -77,7 +77,7 @@ func TestGetFileSet(t *testing.T) {
 	fileList, err = s.GetFileList(ctx)
 	require.NoError(t, err)
-	require.Equal(t, len(fileList), 2)
+	require.Len(t, fileList, 2)
 	inc, err = fileset.NewGlobSet(root, []string{"./.databricks/*.go"})
 	require.NoError(t, err)
@@ -95,7 +95,7 @@ func TestGetFileSet(t *testing.T) {
 	fileList, err = s.GetFileList(ctx)
 	require.NoError(t, err)
-	require.Equal(t, len(fileList), 11)
+	require.Len(t, fileList, 11)
 }

 func TestRecursiveExclude(t *testing.T) {
@@ -125,7 +125,7 @@ func TestRecursiveExclude(t *testing.T) {
 	fileList, err := s.GetFileList(ctx)
 	require.NoError(t, err)
-	require.Equal(t, len(fileList), 7)
+	require.Len(t, fileList, 7)
 }

 func TestNegateExclude(t *testing.T) {
@@ -155,6 +155,6 @@ func TestNegateExclude(t *testing.T) {
 	fileList, err := s.GetFileList(ctx)
 	require.NoError(t, err)
-	require.Equal(t, len(fileList), 1)
-	require.Equal(t, fileList[0].Relative, "test/sub1/sub2/h.txt")
+	require.Len(t, fileList, 1)
+	require.Equal(t, "test/sub1/sub2/h.txt", fileList[0].Relative)
 }


@@ -24,7 +24,7 @@ func TestTemplateConfigAssignValuesFromFile(t *testing.T) {
 	err = c.assignValuesFromFile(filepath.Join(testDir, "config.json"))
 	if assert.NoError(t, err) {
 		assert.Equal(t, int64(1), c.values["int_val"])
-		assert.Equal(t, float64(2), c.values["float_val"])
+		assert.InDelta(t, float64(2), c.values["float_val"].(float64), 0.0001)
 		assert.Equal(t, true, c.values["bool_val"])
 		assert.Equal(t, "hello", c.values["string_val"])
 	}
@@ -44,7 +44,7 @@ func TestTemplateConfigAssignValuesFromFileDoesNotOverwriteExistingConfigs(t *testing.T) {
 	err = c.assignValuesFromFile(filepath.Join(testDir, "config.json"))
 	if assert.NoError(t, err) {
 		assert.Equal(t, int64(1), c.values["int_val"])
-		assert.Equal(t, float64(2), c.values["float_val"])
+		assert.InDelta(t, float64(2), c.values["float_val"].(float64), 0.0001)
 		assert.Equal(t, true, c.values["bool_val"])
 		assert.Equal(t, "this-is-not-overwritten", c.values["string_val"])
 	}
@@ -89,7 +89,7 @@ func TestTemplateConfigAssignValuesFromDefaultValues(t *testing.T) {
 	err = c.assignDefaultValues(r)
 	if assert.NoError(t, err) {
 		assert.Equal(t, int64(123), c.values["int_val"])
-		assert.Equal(t, float64(123), c.values["float_val"])
+		assert.InDelta(t, float64(123), c.values["float_val"].(float64), 0.0001)
 		assert.Equal(t, true, c.values["bool_val"])
 		assert.Equal(t, "hello", c.values["string_val"])
 	}
@@ -110,7 +110,7 @@ func TestTemplateConfigAssignValuesFromTemplatedDefaultValues(t *testing.T) {
 	err = c.assignDefaultValues(r)
 	if assert.NoError(t, err) {
 		assert.Equal(t, int64(123), c.values["int_val"])
-		assert.Equal(t, float64(123), c.values["float_val"])
+		assert.InDelta(t, float64(123), c.values["float_val"].(float64), 0.0001)
 		assert.Equal(t, true, c.values["bool_val"])
 		assert.Equal(t, "world", c.values["string_val"])
 	}
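
The switch from `assert.Equal` to `assert.InDelta` above reflects how Go decodes JSON numbers: with an `interface{}` target they come back as `float64`, and exact float equality is brittle. A minimal stdlib sketch of the same tolerance check (the `inDelta` helper is illustrative, not code from this repo):

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

// inDelta mirrors the semantics of testify's assert.InDelta:
// it passes when |got - want| is within delta.
func inDelta(want, got, delta float64) bool {
	return math.Abs(got-want) <= delta
}

func main() {
	// JSON numbers unmarshal into float64 when the target is interface{},
	// which is why the test casts c.values["float_val"] to float64.
	var values map[string]any
	if err := json.Unmarshal([]byte(`{"float_val": 2}`), &values); err != nil {
		panic(err)
	}
	got := values["float_val"].(float64)
	fmt.Println(inDelta(2, got, 0.0001)) // true
}
```
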


@@ -86,7 +86,7 @@ func TestTemplateRandIntFunction(t *testing.T) {
 	assert.Len(t, r.files, 1)
 	randInt, err := strconv.Atoi(strings.TrimSpace(string(r.files[0].(*inMemoryFile).content)))
 	assert.Less(t, randInt, 10)
-	assert.Empty(t, err)
+	assert.NoError(t, err)
 }

 func TestTemplateUuidFunction(t *testing.T) {


@@ -434,7 +434,7 @@ func TestRendererSkipAllFilesInCurrentDirectory(t *testing.T) {
 	entries, err := os.ReadDir(tmpDir)
 	require.NoError(t, err)
 	// Assert none of the files are persisted to disk, because of {{skip "*"}}
-	assert.Len(t, entries, 0)
+	assert.Empty(t, entries)
 }

 func TestRendererSkipPatternsAreRelativeToFileDirectory(t *testing.T) {
@@ -588,8 +588,8 @@ func TestRendererNonTemplatesAreCreatedAsCopyFiles(t *testing.T) {
 	assert.NoError(t, err)
 	assert.Len(t, r.files, 1)
-	assert.Equal(t, r.files[0].(*copyFile).srcPath, "not-a-template")
-	assert.Equal(t, r.files[0].RelPath(), "not-a-template")
+	assert.Equal(t, "not-a-template", r.files[0].(*copyFile).srcPath)
+	assert.Equal(t, "not-a-template", r.files[0].RelPath())
 }

 func TestRendererFileTreeRendering(t *testing.T) {
@@ -609,7 +609,7 @@ func TestRendererFileTreeRendering(t *testing.T) {
 	// Assert in memory representation is created.
 	assert.Len(t, r.files, 1)
-	assert.Equal(t, r.files[0].RelPath(), "my_directory/my_file")
+	assert.Equal(t, "my_directory/my_file", r.files[0].RelPath())
 	out, err := filer.NewLocalClient(tmpDir)
 	require.NoError(t, err)


@@ -19,22 +19,25 @@ import (
 var OverwriteMode = os.Getenv("TESTS_OUTPUT") == "OVERWRITE"

 func ReadFile(t testutil.TestingT, ctx context.Context, filename string) string {
+	t.Helper()
 	data, err := os.ReadFile(filename)
 	if os.IsNotExist(err) {
 		return ""
 	}
-	assert.NoError(t, err)
+	assert.NoError(t, err, "Failed to read %s", filename)
 	// On CI, on Windows \n in the file somehow end up as \r\n
 	return NormalizeNewlines(string(data))
 }

 func WriteFile(t testutil.TestingT, filename, data string) {
+	t.Helper()
 	t.Logf("Overwriting %s", filename)
 	err := os.WriteFile(filename, []byte(data), 0o644)
-	assert.NoError(t, err)
+	assert.NoError(t, err, "Failed to write %s", filename)
 }

 func AssertOutput(t testutil.TestingT, ctx context.Context, out, outTitle, expectedPath string) {
+	t.Helper()
 	expected := ReadFile(t, ctx, expectedPath)
 	out = ReplaceOutput(t, ctx, out)
@@ -49,6 +52,7 @@ func AssertOutput(t testutil.TestingT, ctx context.Context, out, outTitle, expectedPath string) {
 }

 func AssertOutputJQ(t testutil.TestingT, ctx context.Context, out, outTitle, expectedPath string, ignorePaths []string) {
+	t.Helper()
 	expected := ReadFile(t, ctx, expectedPath)
 	out = ReplaceOutput(t, ctx, out)
@@ -69,6 +73,7 @@ var (
 )

 func ReplaceOutput(t testutil.TestingT, ctx context.Context, out string) string {
+	t.Helper()
 	out = NormalizeNewlines(out)
 	replacements := GetReplacementsMap(ctx)
 	if replacements == nil {
@@ -136,6 +141,7 @@ func GetReplacementsMap(ctx context.Context) *ReplacementsContext {
 }

 func PrepareReplacements(t testutil.TestingT, r *ReplacementsContext, w *databricks.WorkspaceClient) {
+	t.Helper()
 	// in some clouds (gcp) w.Config.Host includes "https://" prefix in others it's really just a host (azure)
 	host := strings.TrimPrefix(strings.TrimPrefix(w.Config.Host, "http://"), "https://")
 	r.Set(host, "$DATABRICKS_HOST")
@@ -167,6 +173,7 @@ func PrepareReplacements(t testutil.TestingT, r *ReplacementsContext, w *databricks.WorkspaceClient) {
 }

 func PrepareReplacementsUser(t testutil.TestingT, r *ReplacementsContext, u iam.User) {
+	t.Helper()
 	// There could be exact matches or overlap between different name fields, so sort them by length
 	// to ensure we match the largest one first and map them all to the same token
 	names := []string{


@@ -18,9 +18,10 @@ func UnifiedDiff(filename1, filename2, s1, s2 string) string {
 }

 func AssertEqualTexts(t testutil.TestingT, filename1, filename2, expected, out string) {
+	t.Helper()
 	if len(out) < 1000 && len(expected) < 1000 {
 		// This shows full strings + diff which could be useful when debugging newlines
-		assert.Equal(t, expected, out)
+		assert.Equal(t, expected, out, "%s vs %s", filename1, filename2)
 	} else {
 		// only show diff for large texts
 		diff := UnifiedDiff(filename1, filename2, expected, out)
@@ -29,6 +30,7 @@ func AssertEqualTexts(t testutil.TestingT, filename1, filename2, expected, out string) {
 }

 func AssertEqualJQ(t testutil.TestingT, expectedName, outName, expected, out string, ignorePaths []string) {
+	t.Helper()
 	patch, err := jsondiff.CompareJSON([]byte(expected), []byte(out))
 	if err != nil {
 		t.Logf("CompareJSON error for %s vs %s: %s (fallback to textual comparison)", outName, expectedName, err)


@@ -2,7 +2,6 @@ package vfs

 import (
 	"context"
-	"errors"
 	"io/fs"
 	"os"
 	"path/filepath"
@@ -42,7 +41,7 @@ func TestFilerPath(t *testing.T) {
 	// Open non-existent file.
 	_, err = p.Open("doesntexist_test.go")
-	assert.True(t, errors.Is(err, fs.ErrNotExist))
+	assert.ErrorIs(t, err, fs.ErrNotExist)

 	// Stat self.
 	s, err = p.Stat("filer_test.go")
@@ -52,7 +51,7 @@ func TestFilerPath(t *testing.T) {
 	// Stat non-existent file.
 	_, err = p.Stat("doesntexist_test.go")
-	assert.True(t, errors.Is(err, fs.ErrNotExist))
+	assert.ErrorIs(t, err, fs.ErrNotExist)

 	// ReadDir self.
 	entries, err := p.ReadDir(".")
@@ -61,7 +60,7 @@ func TestFilerPath(t *testing.T) {
 	// ReadDir non-existent directory.
 	_, err = p.ReadDir("doesntexist")
-	assert.True(t, errors.Is(err, fs.ErrNotExist))
+	assert.ErrorIs(t, err, fs.ErrNotExist)

 	// ReadFile self.
 	buf, err = p.ReadFile("filer_test.go")
@@ -70,7 +69,7 @@ func TestFilerPath(t *testing.T) {
 	// ReadFile non-existent file.
 	_, err = p.ReadFile("doesntexist_test.go")
-	assert.True(t, errors.Is(err, fs.ErrNotExist))
+	assert.ErrorIs(t, err, fs.ErrNotExist)

 	// Parent self.
 	pp := p.Parent()

lint.sh

@@ -1,9 +1,14 @@
 #!/bin/bash
-set -euo pipefail
+set -uo pipefail
 # With golangci-lint, if there are any compilation issues, then formatters' autofix won't be applied.
 # https://github.com/golangci/golangci-lint/issues/5257
-# However, running goimports first alone will actually fix some of the compilation issues.
-# Fixing formatting is also a reasonable thing to do.
-# For this reason, this script runs golangci-lint in two stages:
-golangci-lint run --enable-only="gofmt,gofumpt,goimports" --fix $@
-exec golangci-lint run --fix $@
+golangci-lint run --fix "$@"
+lint_exit_code=$?
+
+if [ $lint_exit_code -ne 0 ]; then
+    # These linters work in presence of compilation issues when run alone, so let's get these fixes at least.
+    golangci-lint run --enable-only="gofmt,gofumpt,goimports" --fix "$@"
+fi
+
+exit $lint_exit_code