## Changes
We perform a check during path translation that the path being
referenced is contained in the bundle's sync root. If it isn't, it's not
a valid remote reference. However, this check doesn't apply to paths that
are _always_ local, such as the artifact path. An artifact's build command
is executed in its path, and the files the build produces (e.g.
wheels or JARs) don't need to be in the sync root because they have a
dedicated and different upload path into `${workspace.artifact_path}`.
This change modifies the structure of path translation to allow opting
out of the sync root check for such paths.
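For illustration, a bundle can now build an artifact whose `path` sits
outside the sync root; a minimal sketch (the artifact name, path, and
build command are hypothetical):
```
artifacts:
  my_wheel:
    type: whl
    # lives outside the bundle's sync root; the built wheel is
    # uploaded into ${workspace.artifact_path} rather than synced
    path: ../shared-lib
    build: python -m build --wheel
```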
Fixes #1927.
## Tests
* Existing and new tests pass.
* Manually confirmed that building and using a wheel built outside the
sync root path works as expected.
* No acceptance tests because we don't run build as part of validate.
## Changes
Now it's possible to configure a new `app` resource in a bundle and point
it to a custom `source_code_path` location where the Databricks App code
is defined.

On `databricks bundle deploy` DABs will create the app. All subsequent
`databricks bundle deploy` executions will update the existing app if
there are any updates.

On `databricks bundle run <my_app>` DABs will execute the app deployment.
If the app is not started yet, it will start the app first.
### Bundle configuration
```
bundle:
  name: apps

variables:
  my_job_id:
    description: "ID of job to run app"
    lookup:
      job: "My Job"
  databricks_name:
    description: "Name for app user"
  additional_flags:
    description: "Additional flags to run command app"
    default: ""
  my_app_config:
    type: complex
    description: "Configuration for my Databricks App"
    default:
      command:
        - flask
        - --app
        - hello
        - run
        - ${var.additional_flags}
      env:
        - name: DATABRICKS_NAME
          value: ${var.databricks_name}

resources:
  apps:
    my_app:
      name: "anester-app" # required and has to be unique
      description: "My App"
      source_code_path: ./app # required and points to location of app code
      config: ${var.my_app_config}
      resources:
        - name: "my-job"
          description: "A job for app to be able to run"
          job:
            id: ${var.my_job_id}
            permission: "CAN_MANAGE_RUN"
      permissions:
        - user_name: "foo@bar.com"
          level: "CAN_VIEW"
        - service_principal_name: "my_sp"
          level: "CAN_MANAGE"

targets:
  dev:
    variables:
      databricks_name: "Andrew (from dev)"
      additional_flags: --debug
  prod:
    variables:
      databricks_name: "Andrew (from prod)"
```
### Execution
1. `databricks bundle deploy -t dev`
2. `databricks bundle run my_app -t dev`
**If the app is started**
```
✓ Getting the status of the app my-app
✓ App is in RUNNING state
✓ Preparing source code for new app deployment.
✓ Deployment is pending
✓ Starting app with command: flask --app hello run --debug
✓ App started successfully
You can access the app at <app-url>
```
**If the app is not started**
```
✓ Getting the status of the app my-app
✓ App is in UNAVAILABLE state
✓ Starting the app my-app
✓ App is starting...
....
✓ App is starting...
✓ App is started!
✓ Preparing source code for new app deployment.
✓ Downloading source code from /Workspace/Users/...
✓ Starting app with command: flask --app hello run --debug
✓ App started successfully
You can access the app at <app-url>
```
## Tests
Added unit and config tests + manual test.
```
--- PASS: TestAccDeployBundleWithApp (404.59s)
PASS
coverage: 36.8% of statements in ./...
ok github.com/databricks/cli/internal/bundle 405.035s coverage: 36.8% of statements in ./...
```
## Changes
Move mutator.Merge{JobClusters,JobParameters,JobTasks,PipelineClusters}
after variable resolution. This helps with the case where a key contains
a variable.

@pietern mentioned in
https://github.com/databricks/cli/pull/2101#pullrequestreview-2539168762
that it should be safe.
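A hypothetical illustration of the case this enables: two `job_clusters`
entries whose keys only match once the variable is resolved can now be
merged correctly (all names below are made up for the example):
```
variables:
  cluster_key:
    default: default_cluster

resources:
  jobs:
    my_job:
      job_clusters:
        # these two entries merge into one cluster definition only
        # after ${var.cluster_key} resolves to "default_cluster"
        - job_cluster_key: ${var.cluster_key}
          new_cluster:
            spark_version: 15.4.x-scala2.12
        - job_cluster_key: default_cluster
          new_cluster:
            num_workers: 2
```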
## Tests
The existing acceptance test that captured the bug is updated with the
corrected output.
## Changes
This updates `mode: production` to allow `root_path` to indicate
uniqueness of the deployment. Historically, we required `run_as` for
this, which isn't actually very effective for that purpose. `run_as` also
has the problem that it doesn't work for pipelines.
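A minimal sketch of a target that satisfies the production-mode check
through its `root_path` (the path shown is an assumption, not a required
value):
```
targets:
  prod:
    mode: production
    workspace:
      # a root_path unique to this deployment is now sufficient;
      # setting run_as is no longer required for uniqueness
      root_path: /Workspace/Shared/.bundle/prod/${bundle.name}
```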
This is a cherry-pick from https://github.com/databricks/cli/pull/1387
---------
Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
## Changes
- A new kind of test is added: acceptance tests. See acceptance/README.md
for an explanation.
- A few tests are converted to acceptance tests by moving databricks.yml
to acceptance/ and adding corresponding script files.
As these tests run against the compiled binary and can capture the full
output of the command, they can be useful for supporting major changes
such as refactoring internal logging / diagnostics or complex variable
interpolation.
These are currently run as part of 'make test', but the intention is to
run them as part of integration tests as well.
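A sketch of what one of these tests looks like on disk (layout per
acceptance/README.md; the file names shown here are illustrative):
```
acceptance/bundle/my_test/
├── databricks.yml   # bundle under test
├── script           # shell commands run against the compiled binary
└── output.txt       # expected output, can be regenerated automatically
```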
### Benefits
- Full binary is tested, exactly as users get it.
- We're not testing a custom set of mutators like many existing tests do.
- Not mocking anything, real SDK is used (although the HTTP endpoint is
not a real Databricks env).
- Easy to maintain: output can be updated automatically.
- Can easily set up the external environment, such as env vars, CLI args,
.databrickscfg location, etc.
### Gaps
The tests currently share the test server, and there is a global place to
define handlers. We should have a way for tests to override / add new
handlers.
## Tests
I manually checked that the output of the new acceptance tests matches
the previous assertions.
## Changes
This PR:
1. Incrementally improves the error messages shown to the user when the
volume they are referring to in `workspace.artifact_path` does not
exist.
2. Performs this validation in both `bundle validate` and `bundle
deploy`, whereas before it ran only on deployment.
3. Runs "fast" validations on `bundle deploy`, which earlier were
only run on `bundle validate`.
## Tests
Unit tests and manually. Also, existing integration tests provide
coverage (`TestUploadArtifactToVolumeNotYetDeployed`,
`TestUploadArtifactFileToVolumeThatDoesNotExist`)
Examples:
```
.venv➜ bundle-playground git:(master) ✗ cli bundle validate
Error: cannot access volume capital.whatever.my_volume: User does not have READ VOLUME on Volume 'capital.whatever.my_volume'.
at workspace.artifact_path
in databricks.yml:7:18
```
and
```
.venv➜ bundle-playground git:(master) ✗ cli bundle validate
Error: volume capital.whatever.foobar does not exist
at workspace.artifact_path
resources.volumes.foo
in databricks.yml:7:18
databricks.yml:12:7
You are using a volume in your artifact_path that is managed by
this bundle but which has not been deployed yet. Please first deploy
the volume using 'bundle deploy' and then switch over to using it in
the artifact_path.
```
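For context, a configuration that triggers the second error pairs an
`artifact_path` with a volume that the bundle itself defines; a minimal
sketch (the catalog/schema/volume names match the example output above):
```
workspace:
  artifact_path: /Volumes/capital/whatever/foobar/artifacts

resources:
  volumes:
    foo:
      catalog_name: capital
      schema_name: whatever
      name: foobar
```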
## Changes
- Enable new linter: testifylint.
- Apply fixes with --fix.
- Fix remaining issues (mostly with aider).
There were 2 cases where --fix did the wrong thing; this seems to be a
bug in the linter: https://github.com/Antonboom/testifylint/issues/210

Nonetheless, I kept that check enabled; it seems useful, those cases just
need to be fixed manually after autofix.
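For reference, enabling the linter is a small config change; a sketch of
the relevant `.golangci.yml` fragment (exact schema per the golangci-lint
docs):
```
linters:
  enable:
    - testifylint
```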
## Tests
Existing tests
## Changes
Fix cases where accumulated diagnostics are lost instead of being
propagated further. In the cases where this isn't possible, a comment is
added there.
## Tests
Existing tests