Commit Graph

169 Commits

Pieter Noordhuis 6dadf667b5
Merge remote-tracking branch 'origin/main' into dashboards 2024-09-12 16:03:58 +02:00
Pieter Noordhuis fb077a85d2
Fix artifact upload integration tests (#1767)
## Changes

I didn't run integration tests on #1756.

## Tests

Manually confirmed integration tests pass.
2024-09-11 12:16:18 +00:00
Pieter Noordhuis 7403101d59
Merge remote-tracking branch 'origin/main' into dashboards 2024-09-09 16:48:23 +02:00
shreyas-goenka a27c24a397
Add prompt when a pipeline recreation happens (#1672)
## Changes
DLT pipeline recreations are destructive. They can lead to lost history
of previous updates, temporary table outages, and are potentially
computationally expensive. Thus we make a breaking change where a prompt
is shown to the user if their configuration changes will lead to a DLT
recreation.

Users can skip the prompt by specifying the `--auto-approve` flag.

This PR also fixes an issue with our test runner where logs from the
cmdio.Logger would not get propagated to the reader returned by our
cobra test runner.

## Tests
Manually, and new unit and integration tests.

```
➜  bundle-playground-3 cli bundle deploy
Uploading bundle files to /Users/63ec021d-b0c6-49c0-93a0-5123953a1cb2/.bundle/test/development/files...
The following DLT pipelines will be recreated. Underlying tables will be unavailable for a transient period until the newly recreated pipelines are run once successfully. History of previous pipeline update runs will be lost because of recreation:
  recreate pipeline foo

Would you like to proceed? [y/n]: n
Deployment cancelled!
```
2024-09-04 11:11:47 +00:00
Pieter Noordhuis 3461c66dc9
Add DABs support for AI/BI dashboards 2024-09-03 10:35:41 +02:00
shreyas-goenka 096123674a
Fix streaming of stdout, stdin, stderr in cobra test runner (#1742)
## Changes
We were not using the readers and writers set in the test fixtures in
the progress logger. This PR fixes that. It also modifies
`TestAccAbortBind`, which was implicitly relying on the bug.

I encountered this bug while working on
https://github.com/databricks/cli/pull/1672.

## Tests
Manually. 

From non-tty:
```
Error: failed to bind the resource, err: This bind operation requires user confirmation, but the current console does not support prompting. Please specify --auto-approve if you would like to skip prompts and proceed.
```

From tty, bind works as expected.
```
Confirm import changes? Changes will be remotely applied only after running 'bundle deploy'. [y/n]: y
Updating deployment state...
Successfully bound databricks_pipeline with an id '9d2dedbb-f522-4503-96ba-4bc4d5bfa77d'. Run 'bundle deploy' to deploy changes to your workspace
```
2024-09-02 13:43:17 +00:00
shreyas-goenka 7fe08c2386
Revert hc-install version to 0.7.0 (#1711)
## Changes
With hc-install version `0.8.0` there was a regression where debug logs
would be leaked into stderr. Reported upstream in
https://github.com/hashicorp/hc-install/issues/239.

Meanwhile we need to revert and pin to version `0.7.0`. This PR also
includes a regression test.

## Tests
Regression test.
2024-08-22 15:04:26 +00:00
shreyas-goenka a4c1ba3e28
Use API mocks for duplicate path errors in workspace files extensions client (#1690)
## Changes
`TestAccFilerWorkspaceFilesExtensionsErrorsOnDupName` recently started
failing in our nightlies because the upstream `import` API was changed
to [prohibit conflicting file
paths](https://docs.databricks.com/en/release-notes/product/2024/august.html#files-can-no-longer-have-identical-names-in-workspace-folders).
Because existing conflicting file paths can still be grandfathered in,
we need to retain coverage for the test. To do this, this PR:
1. Removes the failing
`TestAccFilerWorkspaceFilesExtensionsErrorsOnDupName` test.
2. Adds an equivalent unit test with the `list` and `get-status` API
calls mocked.
2024-08-21 07:45:25 +00:00
Andrew Nester f99335e871
Increased chan size for clusters test to pass (#1691)
## Changes
Increased the channel size so the clusters test passes.
2024-08-19 10:00:21 +00:00
Andrew Nester 7c5b650111
Fix integration tests after Go SDK bump (#1686)
## Changes
These 2 tests failed:

`TestAccAlertsCreateErrWhenNoArguments` -> switched to the legacy
command for now; the new one does not have a required request body
(might be an OpenAPI spec issue,
https://github.com/databricks/databricks-sdk-go/blob/main/service/sql/model.go#L595),
will follow up later

`TestAccClustersList` -> increased channel size because new clusters API
returns more clusters

## Tests
Tests are green now
2024-08-16 08:32:38 +00:00
shreyas-goenka a6eb673d55
Print text logs in `import-dir` and `export-dir` commands (#1682)
## Changes
In https://github.com/databricks/cli/pull/1202 the semantics of
`cmdio.RenderJson` were changed to always render the JSON object. Before,
we would only render it if `--output json` was specified.

This PR fixes the logs to print human-readable log lines instead of a
JSON object.
This PR also removes the now unused `cmdio.Render` method.

## Tests
Manually:
```
➜  bundle-playground git:(master) ✗ cli workspace import-dir ./tmp /Users/shreyas.goenka@databricks.com/test-import-1 -p aws-prod-ucws
Importing files from ./tmp
a -> /Users/shreyas.goenka@databricks.com/test-import-1/a
Import complete. The files are available at /Users/shreyas.goenka@databricks.com/test-import-1
```
```
➜  bundle-playground git:(master) ✗ cli workspace export-dir  /Users/shreyas.goenka@databricks.com/test-export-1 ./tmp-2 -p aws-prod-ucws
Exporting files from /Users/shreyas.goenka@databricks.com/test-export-1
/Users/shreyas.goenka@databricks.com/test-export-1/b -> tmp-2/b
Exported complete. The files are available at ./tmp-2
```
2024-08-15 12:53:02 +00:00
Andrew Nester 48ff18e5fc
Upload local libraries even if they don't have artifact defined (#1664)
## Changes
Previously, DABs made sure that every library referenced in the
configuration had a corresponding artifact section. But this is neither
necessary nor flexible, because local libraries might be built outside
of the DABs context. It also created difficult-to-follow logic in code
that back-referenced libraries to artifacts.


This PR does 3 things:
1. Allows all local libraries referenced in DABs config to be uploaded
to remote
2. Simplifies upload and glob-reference expansion logic by doing it in a
single place
3. Speeds things up by uploading each library only once and doing so in
parallel (see the sketch below)
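
A minimal sketch of the dedupe-and-parallel upload idea in point 3
(`uploadLibraries` and `upload` are hypothetical names, not the actual
CLI code), using `errgroup`:

```
package libraries

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// uploadLibraries uploads each distinct local library exactly once,
// running the uploads in parallel.
func uploadLibraries(ctx context.Context, paths []string, upload func(context.Context, string) error) error {
	seen := make(map[string]bool)
	g, ctx := errgroup.WithContext(ctx)
	for _, p := range paths {
		if seen[p] {
			continue // already scheduled; upload only once
		}
		seen[p] = true
		p := p // capture loop variable for the goroutine
		g.Go(func() error { return upload(ctx, p) })
	}
	// Wait returns the first error from any upload, if any.
	return g.Wait()
}
```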

## Tests
Added unit + integration tests + made sure that change is backward
compatible (no changes in existing tests)

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-08-14 09:03:44 +00:00
Pieter Noordhuis ad8e61c739
Fix ability to import the CLI repository as module (#1671)
## Changes

While investigating #1629, I found that Go doesn't allow characters
outside the set documented at
https://pkg.go.dev/golang.org/x/mod/module#CheckFilePath.

To fix this, I changed the relevant test case to create the fixtures it
needs instead of loading them from the `testdata` directory (in
`renderer_test.go`).

Some test cases in `config_test.go` depended on templated paths without
needing to do so. In the process of fixing this, I refactored these
tests slightly to reduce dependencies between them.

This change also adds a test case to ensure that all files in the
repository are allowed to be part of a module (per the earlier
`CheckFilePath` function).

Fixes #1629.

## Tests

I manually confirmed I could import the repository as a Go module.
2024-08-12 14:20:04 +00:00
Pieter Noordhuis 0a4c0fb588
Add trailing slash to directory to produce completions for (#1666)
## Changes

The integration test for `fs ls` tried to produce completions for the
temporary directory without a trailing slash. The command will list the
parent directory to see if there are more directories with the same
prefix that are also valid completions. The test intends to test
completions for files inside the temporary directory, so we need that
trailing slash.

This test was hanging because the parent directory of the temporary DBFS
directory has more than 1k entries (on our integration testing
workspaces) and the line buffer in the test runner has a capacity of 1k
lines. To avoid the same hang in the future, this change modifies the
test runner to panic if the line buffer is full.

## Tests

Confirmed the integration test no longer hangs.
2024-08-12 07:07:50 +00:00
Pieter Noordhuis a240be0b5a
Run Spark JAR task test on multiple DBR versions (#1665)
## Changes

This explores error messages on older DBRs and UC vs non-UC.

## Tests

Integration tests pass.
2024-08-09 15:13:31 +00:00
andersrexdb 65f4aad87c
Add command line autocomplete to the fs commands (#1622)
## Changes
This PR adds autocomplete for cat, cp, ls, mkdir and rm.

The new completer can do completion for any `Filer`. The command
completion for the `sync` command can be moved to use this general
completer as a follow-up.
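
A rough sketch of how a filer-backed completer can plug into cobra,
assuming `filer.Filer` exposes a `ReadDir(ctx, path)` method; names here
are illustrative, not the actual CLI code:

```
package completion

import (
	"path"
	"strings"

	"github.com/databricks/cli/libs/filer"
	"github.com/spf13/cobra"
)

// completeRemotePath returns a cobra completion function backed by a
// filer. It lists the directory portion of what the user typed so far
// and offers matching entries.
func completeRemotePath(f filer.Filer) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) {
	return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
		dir := path.Dir(toComplete)
		entries, err := f.ReadDir(cmd.Context(), dir)
		if err != nil {
			return nil, cobra.ShellCompDirectiveError
		}
		var completions []string
		for _, entry := range entries {
			full := path.Join(dir, entry.Name())
			if strings.HasPrefix(full, toComplete) {
				completions = append(completions, full)
			}
		}
		// Avoid appending a space so directory completion can continue.
		return completions, cobra.ShellCompDirectiveNoSpace
	}
}
```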

## Tests
- Tested manually against a workspace
- Unit tests
2024-08-09 09:40:25 +00:00
Andrew Nester 9d1fbbb39c
Enable Spark JAR task test (#1658)
## Changes
Enable Spark JAR task test

## Tests
```

Updating deployment state...
Deleting files...
Destroy complete!
--- PASS: TestAccSparkJarTaskDeployAndRunOnVolumes (194.13s)
PASS
coverage: 51.9% of statements in ./...
ok      github.com/databricks/cli/internal/bundle       194.586s        coverage: 51.9% of statements in ./...
```
2024-08-06 18:58:34 +00:00
shreyas-goenka a13d77f8eb
Fix python wheel task integration tests (#1648)
## Changes
A new Service Control Policy has removed the `ec2.RunInstances`
permission from our service principal for our AWS integration tests.
This PR switches over to using the instance pool which does not require
creating new clusters.

## Tests
The integration tests pass now.
2024-08-01 13:55:22 +00:00
shreyas-goenka 89c0af5bdc
Add resource for UC schemas to DABs (#1413)
## Changes
This PR adds support for UC Schemas to DABs. This allows users to define
schemas for tables and other assets their pipelines/workflows create as
part of the DAB, thus managing the life-cycle in the DAB.

The first version has a couple of intentional limitations:
1. The owner of the schema will be the deployment user. Changing the
owner of the schema is not allowed (yet). `run_as` will not be
restricted for DABs containing UC schemas. Let's limit the scope of
run_as to the compute identity used instead of ownership of data assets
like UC schemas.
2. API fields that are present in the update API but not the create API
are not supported. For example, enabling predictive optimization is not
supported in the create schema API and thus is not available in DABs at
the moment.

## Tests
Manually and integration test. Manually verified the following work:
1. Development mode adds a "dev_" prefix.
2. Modified status is correctly computed in the `bundle summary`
command.
3. Grants work as expected, for assigning privileges.
4. Variable interpolation works for the schema ID.
2024-07-31 12:16:28 +00:00
Andrew Nester 434bcbb018
Allow artifacts (JARs, wheels) to be uploaded to UC Volumes (#1591)
## Changes
This change allows specifying a UC Volumes path as the artifact path so
that all artifacts (JARs, wheels) are uploaded to UC Volumes.

Example configuration is here:
```
bundle:
  name: jar-bundle

workspace:
  host: https://foo.com
  artifact_path: /Volumes/main/default/foobar

artifacts:
  my_java_code:
    path: ./sample-java
    build: "javac PrintArgs.java && jar cvfm PrintArgs.jar META-INF/MANIFEST.MF PrintArgs.class"
    files:
      - source: ./sample-java/PrintArgs.jar

resources:
  jobs:
    jar_job:
      name: "Test Spark Jar Job"
      tasks:
        - task_key: TestSparkJarTask
          new_cluster:
            num_workers: 1
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
          spark_jar_task:
            main_class_name: PrintArgs
          libraries:
            - jar: ./sample-java/PrintArgs.jar
```
## Tests
Manually + added E2E test for Java jobs

E2E test is temporarily skipped until auth-related issues for UC tests
are resolved
2024-07-16 08:57:04 +00:00
Gleb Kanterov 25737bbb5d
Add regression tests for CLI error output (#1566)
## Changes
Add regression tests for https://github.com/databricks/cli/issues/1563

We test 2 code paths:
- if there is an error, we can print to stderr
- if there is a valid output, we can print to stdout

We should also consider adding black-box tests that will run the CLI
binary as a black box and inspect its output to stderr/stdout.
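
A minimal sketch of such a regression test with cobra (the command and
assertions are illustrative, not the actual tests from this PR):

```
package main

import (
	"bytes"
	"errors"
	"testing"

	"github.com/spf13/cobra"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestErrorsGoToStderr(t *testing.T) {
	var stdout, stderr bytes.Buffer
	cmd := &cobra.Command{
		Use: "example",
		RunE: func(cmd *cobra.Command, args []string) error {
			return errors.New("boom")
		},
	}
	cmd.SetOut(&stdout)
	cmd.SetErr(&stderr)

	// cobra prints the returned error to the error output.
	err := cmd.Execute()
	require.Error(t, err)
	assert.Contains(t, stderr.String(), "boom")
	assert.Empty(t, stdout.String())
}
```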

## Tests
Unit tests
2024-07-10 06:38:06 +00:00
shreyas-goenka 553fdd1e81
Serialize dynamic value for `bundle validate` output (#1499)
## Changes
Using dynamic values allows us to retain references like
`${resources.jobs...}` even when they do not match the field's type,
e.g. in `run_job_task`, or in general for values that do not map to the
Go types for a field.

## Tests
Integration test
2024-06-18 15:04:20 +00:00
Pieter Noordhuis 448d41027d
Fix listing notebooks in a subdirectory (#1468)
## Changes

This worked fine if the notebooks were located in the filer's root but
didn't if they were nested in a directory.

This change adds test coverage and fixes the underlying issue.

## Tests

Ran integration test manually.
2024-06-04 09:53:14 +00:00
shreyas-goenka ec33a7c059
Add `filer.Filer` to read notebooks from WSFS without omitting their extension (#1457)
## Changes
This PR adds a filer that'll allow us to read notebooks from the WSFS
using their full paths (with the extension included). The filer relies
on the existing workspace filer (and consequently the workspace
import/export/list APIs).

Using this filer along with a virtual filesystem layer
(https://github.com/databricks/cli/pull/1452/files) will allow us to use
our custom implementation (which preserves the notebook extensions)
rather than the default mount available via DBR when the CLI is run from
DBR.

## Tests
Integration tests.

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-05-30 11:59:27 +00:00
Pieter Noordhuis 424499ec1d
Abstract over filesystem interaction with libs/vfs (#1452)
## Changes

Introduce `libs/vfs` for an implementation of `fs.FS` and friends that
_includes_ the absolute path it is anchored to.

This is needed for:
1. Intercepting file operations to inject custom logic (e.g., logging,
access control).
2. Traversing directories to find specific leaf directories (e.g.,
`.git`).
3. Converting virtual paths to OS-native paths.

Options 2 and 3 are not possible with the standard `fs.FS` interface.
They are needed such that we can provide an instance to the sync package
and still detect the containing `.git` directory and convert paths to
native paths.
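
A sketch of the shape of such an interface (illustrative; the actual
`vfs.Path` definition in `libs/vfs` may differ):

```
package vfs

import "io/fs"

// Path is an fs.FS that also knows the absolute path it is anchored to.
type Path interface {
	fs.FS

	// Parent returns the path of the parent directory, or nil if this
	// path is the root. This enables walking up to find `.git`.
	Parent() Path

	// Native returns the OS-native absolute path this FS is anchored to.
	Native() string
}
```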

This change focuses on making the following packages use `vfs.Path`:
* libs/fileset
* libs/git
* libs/sync

All entries returned by `fileset.All` are now slash-separated. This has
2 consequences:
* The sync snapshot now always uses slash-separated paths
* We don't need to call `filepath.FromSlash` as much as we did

## Tests

* All unit tests pass
* All integration tests pass
* Manually confirmed that a deployment made on Windows by a previous
version of the CLI can be deployed by a new version of the CLI while
retaining the validity of the local sync snapshot as well as the remote
deployment state.
2024-05-30 07:41:50 +00:00
Ilia Babanov 157877a152
Fix bundle destroy integration test (#1435)
I updated the `deploy_then_remove_resources` test template in the
previous PR, but didn't notice that it was also used in the destroy test.
Now the destroy test also checks deletion of jobs
2024-05-16 09:32:55 +00:00
Ilia Babanov 2035516fde
Don't merge-in remote resources during deployments (#1432)
## Changes
`check_running_resources` now pulls the remote state without modifying
the bundle state, similar to how it did before. This avoids a problem
where we fail to compute deployment metadata for a deleted job (which we
shouldn't do in the first place)

`deploy_then_remove_resources_test` now also deploys and deletes a job
(in addition to a pipeline), which catches the error that this PR fixes.

## Tests
Unit and integration tests
2024-05-15 12:41:44 +00:00
Andrew Nester 1872aa12b3
Added support for job environments (#1379)
## Changes
The main changes are:
1. Don't link artifacts to libraries anymore; instead, iterate over all
jobs and tasks when uploading artifacts and update local paths to remote
ones
2. Iterating over `jobs.environments` to check if there are any local
libraries and checking that they exist locally
3. Added tests to check environments are handled correctly

An end-to-end test will follow

## Tests
Added regression test, existing tests (including integration one) pass
2024-04-22 11:44:34 +00:00
Pieter Noordhuis cd675ded9a
Update `testutil` helpers to return path (#1383)
## Changes

I spotted a few call sites where the path of a test file was synthesized
multiple times. It is easier to capture the path as a variable and reuse
it.
2024-04-19 15:05:36 +00:00
shreyas-goenka e008c2bd8c
Cleanup remote file path on bundle destroy (#1374)
## Changes
The sync struct initialization would recreate the deleted `file_path`.
This PR stops initializing the sync object when deleting the snapshot,
thus fixing the lingering `file_path` after `bundle destroy`.

## Tests
Manually, and an integration test to prevent regression.
2024-04-19 11:48:04 +00:00
Pieter Noordhuis 6e59b13452
Update Spark version in integration tests to 13.3 (#1375)
## Tests

Integration test run pending.
2024-04-19 11:31:54 +00:00
Andrew Nester 60a4a347f9
Fixed typo in error template for auth describe (#1341)
## Changes
Fixed typo in error template for auth describe

## Tests
Manually + added integration test
2024-04-08 11:19:13 +00:00
Andrew Nester 56e393c743
Allow specifying CLI version constraints required to run the bundle (#1320)
## Changes
Allow specifying CLI version constraints required to run the bundle

Example of configuration:

#### only allow specific version
```
bundle:
  name: my-bundle
  databricks_cli_version: "0.210.0"
```

#### allow all patch releases
```
bundle:
  name: my-bundle
  databricks_cli_version: "0.210.*"
```

#### constrain minimum version
```
bundle:
  name: my-bundle
  databricks_cli_version: ">= 0.210.0"
```

#### constrain range
```
bundle:
  name: my-bundle
  databricks_cli_version: ">= 0.210.0, <= 1.0.0"
```

For other examples see:
https://github.com/Masterminds/semver?tab=readme-ov-file#checking-version-constraints
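
A check like this can be implemented with the Masterminds/semver library
referenced above; a minimal sketch (not the exact CLI code):

```
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

func main() {
	// A comma separates AND-ed constraints.
	c, err := semver.NewConstraint(">= 0.210.0, <= 1.0.0")
	if err != nil {
		panic(err)
	}
	v, err := semver.NewVersion("0.216.0")
	if err != nil {
		panic(err)
	}
	// Check reports whether the version satisfies the constraint.
	fmt.Println(c.Check(v)) // true
}
```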

Example error
```
sh-3.2$ databricks bundle validate
Error: Databricks CLI version constraint not satisfied. Required: >= 1.0.0, current: 0.216.0
```
## Tests
Added unit tests covering all possible configuration permutations

---------

Co-authored-by: Lennart Kats (databricks) <lennart.kats@databricks.com>
2024-04-02 12:55:21 +00:00
Pieter Noordhuis 00d76d5afa
Move path field to bundle type (#1316)
## Changes

The bundle path was previously stored on the `config.Root` type under
the assumption that the first configuration file being loaded would set
it. This is slightly counterintuitive and we know what the path is upon
construction of the bundle. The new location for this property reflects
this.

## Tests

Unit tests pass.
2024-03-27 09:03:24 +00:00
Pieter Noordhuis ed194668db
Return `diag.Diagnostics` from mutators (#1305)
## Changes

This diagnostics type allows us to capture multiple warnings as well as
errors in the return value. This is a preparation for returning
additional warnings from mutators in case we detect non-fatal problems.

* All return statements that previously returned an error now return
`diag.FromErr`
* All return statements that previously returned `fmt.Errorf` now return
`diag.Errorf`
* All `err != nil` checks now use `diags.HasError()` or `diags.Error()`
(see the sketch below)
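
A simplified sketch of a mutator return site after this change
(`exampleMutator` and `doWork` are hypothetical; actual mutators live
under `bundle/config/mutator`):

```
package mutator

import (
	"context"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/libs/diag"
)

type exampleMutator struct{}

// doWork stands in for the real logic of a mutator.
func doWork(ctx context.Context, b *bundle.Bundle) error { return nil }

// Apply shows the shape of a mutator after the change: diagnostics
// instead of a plain error.
func (m *exampleMutator) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
	if b.Config.Bundle.Name == "" {
		// Previously: return fmt.Errorf("bundle name is not set")
		return diag.Errorf("bundle name is not set")
	}
	if err := doWork(ctx, b); err != nil {
		// Previously: return err
		return diag.FromErr(err)
	}
	return nil
}
```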

## Tests

* Existing tests pass.
* I confirmed no call site under `./bundle` or `./cmd/bundle` uses
`errors.Is` on the return value from mutators. This is relevant because
we cannot wrap errors with `%w` when calling `diag.Errorf` (like
`fmt.Errorf`; context in https://github.com/golang/go/issues/47641).
2024-03-25 14:18:47 +00:00
Andrew Nester 9cf3dbe686
Use UserName field to identify if service principal is used (#1310)
## Changes
Use UserName field to identify if service principal is used
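
The usual heuristic behind this (stated here as an assumption, not
quoted from the PR) is that service principal usernames are their
application IDs, i.e. UUIDs, while human usernames are email addresses:

```
package auth

import "github.com/google/uuid"

// isServicePrincipal reports whether a UserName looks like a service
// principal application ID (a UUID) rather than a human's email address.
func isServicePrincipal(userName string) bool {
	_, err := uuid.Parse(userName)
	return err == nil
}
```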

## Tests
Integration test passed
2024-03-25 11:32:45 +00:00
Andrew Nester d216404f27
Do CheckRunningResource only after terraform.Write (#1292)
## Changes
CheckRunningResource does `terraform.Show`, which (I believe) expects a
valid `bundle.tf.json` that is only written as part of
`terraform.Write` later.

With this PR, the order is changed.

Fixes #1286 

## Tests
Added regression E2E test
2024-03-18 15:39:18 +00:00
Andrew Nester 1b0ac61093
Added deployment state for bundles (#1267)
## Changes
This PR introduces a new structure (and a file) that is used locally and
synced remotely to the Databricks workspace to track bundle
deployment-related metadata.

The state is pulled from remote, updated and pushed back remotely as
part of `bundle deploy` command.

This state can be used for deployment sequencing, as its `Version` field
increases monotonically on each deployment.
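
A hypothetical sketch of the shape of this state (field names are
illustrative, not the exact schema):

```
package deploy

// DeploymentState is an illustrative model of the metadata file synced
// to the workspace on each deployment.
type DeploymentState struct {
	// Version increases monotonically on each deployment, which is what
	// makes the state usable for deployment sequencing.
	Version int64 `json:"version"`

	// Files lists the files synced as part of the deployment.
	Files []string `json:"files"`
}
```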

Currently, it only tracks files being synced as part of the deployment.

This helps fix the issue with files not being removed during deployments
on CI/CD, as the sync snapshot was never present there.

Fixes #943 

## Tests
Added E2E (regression) test for files removal on CI/CD

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-03-18 14:41:58 +00:00
Andrew Nester e22dd8af7d
Enabled incorrectly skipped tests (#1280)
## Changes
Some integration tests were missing the `TestAcc` prefix and were
therefore skipped.

## Tests
Running integration tests
2024-03-14 12:56:21 +00:00
shreyas-goenka 5f29b5ecd9
Fix TestAccBundleInitOnMlopsStacks to work on aws-prod-ucws (#1283)
## Changes
aws-prod-ucws has CLOUD_ENV set to "ucws" which was failing the
validation checks in the template itself. This PR fixes the test.

## Tests
The tests pass now
2024-03-13 12:59:49 +00:00
shreyas-goenka d4329f470f
Add integration test for mlops-stacks initialization (#1155)
## Changes
This PR:
1. Adds an integration test for mlops-stacks that checks the
initialization and deployment of the project was successful.
2. Fixes a bug in the initialization of templates from a non-tty. We
need to process the input parameters in order, since their descriptions
can refer to input parameters that came earlier in the interactive UX.

## Tests
The integration test passes in CI.
2024-03-12 14:15:54 +00:00
Andrew Nester c7818560ca
Add usage string when command fails with incorrect arguments (#1276)
## Changes
Add usage string when command fails with incorrect arguments

Fixes #1119

## Tests
Example output

```
> databricks libraries cluster-status 
Error: accepts 1 arg(s), received 0

Usage:
  databricks libraries cluster-status CLUSTER_ID [flags]

Flags:
  -h, --help   help for cluster-status

Global Flags:
      --debug            enable debug logging
  -o, --output type      output type: text or json (default text)
  -p, --profile string   ~/.databrickscfg profile
  -t, --target string    bundle target to use (if applicable)
```
2024-03-12 14:12:34 +00:00
Miles Yucht b65ce75c1f
Use Go SDK Iterators when listing resources with the CLI (#1202)
## Changes
Currently, when the CLI runs a list API call (like list jobs), it uses
the `List*All` methods from the SDK, which list all resources in the
collection. This is very slow for large collections: if you need to list
all jobs from a workspace that has 10,000+ jobs, you'll be waiting for
at least 100 RPCs to complete before seeing any output.

Instead of using the `List*All()` methods, we can use the iterator data
structure the SDK recently added, which allows traversing the collection
without needing to completely list it first. New pages are fetched
lazily if the next
requested item belongs to the next page. Using the List() methods that
return these iterators, the CLI can proactively print out some of the
response before the complete collection has been fetched.
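
Consuming one of these iterators from the Go SDK looks roughly like this
(a sketch using the jobs service as an example):

```
package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/jobs"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// List returns a lazy iterator; new pages are fetched on demand.
	it := w.Jobs.List(ctx, jobs.ListJobsRequest{})
	for it.HasNext(ctx) {
		job, err := it.Next(ctx)
		if err != nil {
			panic(err)
		}
		// Items can be printed before the full collection is fetched.
		fmt.Println(job.JobId)
	}
}
```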

This involves a pretty major rewrite of the rendering logic in `cmdio`.
The idea there is to define custom rendering logic based on the type of
the provided resource. There are three renderer interfaces:

1. textRenderer: supports printing something in a textual format (i.e.
not JSON, and not templated).
2. jsonRenderer: supports printing something in a pretty-printed JSON
format.
3. templateRenderer: supports printing something using a text template.

There are also three renderer implementations:

1. readerRenderer: supports printing a reader. This only implements the
textRenderer interface.
2. iteratorRenderer: supports printing a `listing.Iterator` from the Go
SDK. This implements jsonRenderer and templateRenderer, buffering 20
resources at a time before writing them to the output.
3. defaultRenderer: supports printing arbitrary resources (the previous
implementation).

Callers will either use `cmdio.Render()` for rendering individual
resources or an `io.Reader`, or `cmdio.RenderIterator()` for rendering an
iterator. This separate method is needed to safely be able to match on
the type of the iterator, since Go does not allow runtime type matches
on generic types with an existential type parameter.

One other change that needs to happen is to split the templates used for
text representation of list resources into a header template and a row
template. The template is now executed multiple times for List API
calls, but the header should only be printed once. To support this, I
have added `headerTemplate` to `cmdIO`, and I have also changed
`RenderWithTemplate` to include a `headerTemplate` parameter everywhere.

## Tests
- [x] Unit tests for text rendering logic
- [x] Unit test for reflection-based iterator construction.

---------

Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-02-21 14:16:36 +00:00
shreyas-goenka 4ac1c1655b
Fix CLI nightlies on our UC workspaces (#1225)
This PR fixes some test helper functions so that they work properly on
our test environments for AWS and Azure UC workspaces.
2024-02-21 13:06:03 +00:00
shreyas-goenka 5ba0aaa5c5
Add support for UC Volumes to the `databricks fs` commands (#1209)
## Changes
```
shreyas.goenka@THW32HFW6T cli % databricks fs -h
Commands to do file system operations on DBFS and UC Volumes.

Usage:
  databricks fs [command]

Available Commands:
  cat         Show file content.
  cp          Copy files and directories.
  ls          Lists files.
  mkdir       Make directories.
  rm          Remove files and directories.
```

This PR adds support for UC Volumes to the fs commands. The fs commands
for UC volumes work the same as they currently do for DBFS. This is
ensured by running the same test matrix across both the DBFS and UC
Volumes versions of the fs commands.

## Tests
Support for UC volumes is tested by running the same tests as we did
originally for DBFS commands. The tests require a `main` catalog to
exist in the workspace, which it does in our test workspace environments
that have the `TEST_METASTORE_ID` environment variable set.

For the Files API filer, we do the same by running mostly common tests
to ensure the filers for "local", "wsfs", "dbfs" and "files API" are
consistent.

The tests are also made to all run in parallel to reduce the time taken.
To ensure the separation of the tests, each test creates its own UC
schema (for UC volumes tests) or DBFS directories (for DBFS tests).
2024-02-20 16:14:37 +00:00
Pieter Noordhuis 87dd46a3f8
Use dynamic configuration model in bundles (#1098)
## Changes

This is a fundamental change to how we load and process bundle
configuration. We now depend on the configuration being represented as a
`dyn.Value`. This representation is functionally equivalent to Go's
`any` (it is a variant type) and allows us to capture metadata associated with
a value, such as where it was defined (e.g. file, line, and column). It
also allows us to represent Go's zero values properly (e.g. empty
string, integer equal to 0, or boolean false).
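
An illustrative mini-version of such a value type (not the actual
`dyn.Value` API), showing how a value and its source location travel
together:

```
package dynsketch

// Location records where a value was defined in the configuration.
type Location struct {
	File   string
	Line   int
	Column int
}

// Value pairs arbitrary configuration data (the equivalent of Go's
// `any`) with the location it was defined at. Zero values (empty
// string, 0, false) stay representable because presence is a property
// of the wrapper, not of the underlying Go value.
type Value struct {
	v   any
	loc Location
}

func NewValue(v any, loc Location) Value { return Value{v: v, loc: loc} }

// Location returns where this value was defined.
func (v Value) Location() Location { return v.loc }
```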

Using this representation allows us to let the configuration model
deviate from the typed structure we have been relying on so far
(`config.Root`). We need to deviate from these types when using
variables for fields that are not a string themselves. For example,
using `${var.num_workers}` for an integer `workers` field was impossible
until now (though not implemented in this change).

The loader for a `dyn.Value` includes functionality to capture any and
all type mismatches between the user-defined configuration and the
expected types. These mismatches can be surfaced as validation errors in
future PRs.

Given that many mutators expect the typed struct to be the source of
truth, this change converts between the dynamic representation and the
typed representation on mutator entry and exit. Existing mutators can
continue to modify the typed representation and these modifications are
reflected in the dynamic representation (see `MarkMutatorEntry` and
`MarkMutatorExit` in `bundle/config/root.go`).

Required changes included in this change:
* The existing interpolation package is removed in favor of
`libs/dyn/dynvar`.
* Functionality to merge job clusters, job tasks, and pipeline clusters
are now all broken out into their own mutators.

To be implemented later:
* Allow variable references for non-string types.
* Surface diagnostics about the configuration provided by the user in
the validation output.
* Some mutators use a resource's configuration file path to resolve
related relative paths. These depend on `bundle/config/paths.Path` being
set and populated through `ConfigureConfigFilePath`. Instead, they
should interact with the dynamically typed configuration directly. Doing
this also unlocks being able to differentiate different base paths used
within a job (e.g. a task override with a relative path defined in a
directory other than the base job).

## Tests

* Existing unit tests pass (some have been modified to accommodate)
* Integration tests pass
2024-02-16 19:41:58 +00:00
Andrew Nester e474948a4b
Generate correct YAML if custom_tags or spark_conf is used for pipeline or job cluster configuration (#1210)
These fields (keys and values) need to be double-quoted in order for the
YAML loader to read, parse, and unmarshal them into the Go struct
correctly, because these fields are of type `map[string]string`.

## Tests
Added regression unit and E2E tests
2024-02-15 15:03:19 +00:00
Andrew Nester 80670eceed
Added `bundle deployment bind` and `unbind` command (#1131)
## Changes
Added `bundle deployment bind` and `unbind` command.

This command allows binding bundle-defined resources to existing
resources in the Databricks workspace so they become DABs-managed.

## Tests
Manually + added E2E test
2024-02-14 18:04:45 +00:00
Pieter Noordhuis 4073e45d4b
Use mockery to generate mocks compatible with testify/mock (#1190)
## Changes

This is the same approach we use in the Go SDK.
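
A minimal example of the testify/mock pattern that mockery generates
code for (`Greeter` and its mock are illustrative):

```
package mocks

import (
	"testing"

	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

// Greeter is an example interface to generate a mock for.
type Greeter interface {
	Greet(name string) (string, error)
}

// MockGreeter is the kind of type mockery generates: it embeds
// mock.Mock and records/replays calls.
type MockGreeter struct {
	mock.Mock
}

func (m *MockGreeter) Greet(name string) (string, error) {
	args := m.Called(name)
	return args.String(0), args.Error(1)
}

func TestGreet(t *testing.T) {
	m := &MockGreeter{}
	m.On("Greet", "world").Return("hello, world", nil)

	out, err := m.Greet("world")
	require.NoError(t, err)
	require.Equal(t, "hello, world", out)
	m.AssertExpectations(t)
}
```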

## Tests

Tests pass.
2024-02-08 15:18:53 +00:00
Pieter Noordhuis f8b0f783ea
Use `acc.WorkspaceTest` helper from bundle integration tests (#1181)
## Changes

This helper:
* Constructs a context
* Constructs a `*databricks.WorkspaceClient`
* Ensures required environment variables are present to run an
integration test
* Enables debugging integration tests from VS Code

Debugging integration tests (from VS Code) is made possible by a prelude
in the helper that checks if the calling process is a debug binary and,
if so, sources environment variables from
`~/.databricks/debug-env.json` (if present).
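
A simplified sketch of what such a helper does (the real one lives in
`internal/acc`; details here are illustrative):

```
package acc

import (
	"context"
	"os"
	"testing"

	"github.com/databricks/databricks-sdk-go"
)

// WorkspaceTest skips the test unless the integration-test environment
// is configured, then returns a context and a workspace client.
func WorkspaceTest(t *testing.T) (context.Context, *databricks.WorkspaceClient) {
	if os.Getenv("CLOUD_ENV") == "" {
		t.Skip("CLOUD_ENV is not set; skipping integration test")
	}
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		t.Fatal(err)
	}
	return context.Background(), w
}
```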

## Tests

Integration tests still pass.

---------

Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-02-07 11:18:56 +00:00