Commit Graph

181 Commits

Author SHA1 Message Date
Andrew Nester f5781e2707
Merge branch 'main' into feature/apps 2024-12-10 16:11:20 +01:00
Lennart Kats (databricks) f3c628e537
Allow overriding compute for non-development mode targets (#1899)
## Changes
Allow overriding compute for non-development targets. We previously had
a restriction in place where `--cluster-id` was only allowed for targets
that use `mode: development`. The intention was to prevent mistakes, but
this was overly restrictive.

## Tests
Updated unit tests.
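For illustration, a minimal sketch of a non-development target that can
now take the override (host and names are placeholders), with the
corresponding command noted in a comment:

```yaml
targets:
  staging:
    # no "mode: development" required for --cluster-id anymore;
    # deploy with: databricks bundle deploy -t staging --cluster-id <cluster-id>
    workspace:
      host: https://my-workspace.cloud.databricks.com  # placeholder
```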
2024-12-10 10:02:44 +00:00
Denis Bilenko 1b2be1b2cb
Add error checking in tests and enable errcheck there (#1980)
## Changes
Fix all errcheck-found issues in tests and test helpers. Mostly this is
done by adding require.NoError(t, err), and sometimes panic() where the
t object is not available.

Initial change is obtained with aider+claude, then manually reviewed and
cleaned up.

## Tests
Existing tests.
2024-12-09 13:56:41 +01:00
Andrew Nester d0d875b0db
Merge branch 'main' into feature/apps 2024-12-05 15:43:50 +01:00
Andrew Nester 97a1456038
addressed feedback 2024-12-05 15:40:34 +01:00
Denis Bilenko 0ad790e468
Properly read Git metadata when running inside workspace (#1945)
## Changes

Since there is no .git directory in the Workspace file system, we need to
make an API call to api/2.0/workspace/get-status?return_git_info=true to
fetch the root of the Git repo, the current branch, commit, and origin.

Added a new function, FetchRepositoryInfo, that either locates and parses
.git or calls the remote API, depending on the environment.

Refactor Repository/View/FileSet to accept the repository root rather
than calculate it. This helps because:
- Repository is currently created in multiple places and finding the
repository root is becoming relatively expensive (an API call is needed).
- Repository/FileSet/View do not have access to the current Bundle, which
is where the WorkspaceClient is stored.

## Tests

- Tested manually by running "bundle validate --json" inside web
terminal within Databricks env.
- Added integration tests for the new API.

---------

Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-05 10:13:13 +00:00
shreyas-goenka 0da17f6ec6
Add default value for `volume_type` for DABs (#1952)
## Changes

The Unity Catalog volumes API requires a `volume_type` argument when
creating volumes. In the context of DABs, it's unnecessary to require
users to specify the volume type every time. We can default to "MANAGED"
instead.

This PR is similar to https://github.com/databricks/cli/pull/1743 which
does the same for dashboards.

## Tests
Unit test
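For example, a volume definition can now omit `volume_type` entirely
(catalog and schema names below are illustrative):

```yaml
resources:
  volumes:
    my_volume:
      catalog_name: main       # illustrative
      schema_name: my_schema   # illustrative
      name: my_volume
      # volume_type omitted: defaults to "MANAGED"
```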
2024-12-04 11:05:54 +00:00
Denis Bilenko 0e088eb9f8
Simplify load_git_details.go; remove unnecessary Abs() call (#1950)
Suggested here
https://github.com/databricks/cli/pull/1945#discussion_r1866088579
2024-12-02 22:41:38 +00:00
shreyas-goenka 2847533e1e
Add DABs support for Unity Catalog volumes (#1762)
## Changes

This PR adds support for UC volumes to DABs.

### Can I use a UC volume managed by DABs in `artifact_path`?

Yes, but we require the volume to exist before being referenced in
`artifact_path`. Otherwise you'll see an error that the volume does not
exist. For this case, this PR also adds a warning if we detect that the
UC volume is defined in the DAB itself, which informs the user to deploy
the UC volume in a separate deployment first before using it in
`artifact_path`.

We cannot create the UC volume and then upload the artifacts to it in
the same `bundle deploy` because `bundle deploy` always uploads the
artifacts to `artifact_path` before materializing any resources defined
in the bundle. Supporting this in a single deployment requires us to
migrate away from our dependency on the Databricks Terraform provider to
manage the CRUD lifecycle of DABs resources.

### Why do we not support `preset.name_prefix` for UC volumes?

UC volumes will not have a `dev_shreyas_goenka` prefix added in `mode:
development`. Configuring `presets.name_prefix` will be a no-op for UC
volumes. We have decided not to support prefixing for UC resources. This
is because:
1. UC provides its own namespace hierarchy that is independent of DABs.
2. Users can always use `${workspace.current_user.short_name}` to
configure prefixes manually.

Customers often manually set up a UC hierarchy for dev and prod,
including a schema or catalog per developer. Thus, it's often
unnecessary for us to add prefixing in `mode: development` by default
for UC resources.

In retrospect, supporting prefixing for UC schemas and registered models
was a mistake and will be removed in a future release of DABs.

## Tests
Unit tests, integration tests, and manual testing.

### Manual Testing cases:
 1. UC volume does not exist:
```
➜  bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/my_volume that is configured in the artifact_path: Not Found
```

2. UC Volume does not exist, but is defined in the DAB
```
➜  bundle-playground git:(master) ✗ cli bundle deploy
Error: failed to fetch metadata for the UC volume /Volumes/main/caps/managed_by_dab that is configured in the artifact_path: Not Found

Warning: You might be using a UC volume in your artifact_path that is managed by this bundle but which has not been deployed yet. Please deploy the UC volume in a separate bundle deploy before using it in the artifact_path.
  at resources.volumes.bar
  in databricks.yml:24:7

```
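For reference, a sketch of the two-step flow described above, reusing the
illustrative names from the test output: deploy the volume first, then
reference it from `artifact_path` in a later deploy:

```yaml
# Deploy 1: define the UC volume.
resources:
  volumes:
    bar:
      catalog_name: main   # illustrative
      schema_name: caps    # illustrative
      name: managed_by_dab

# Deploy 2 (after the volume exists): reference it.
workspace:
  artifact_path: /Volumes/main/caps/managed_by_dab
```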

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-12-02 21:18:07 +00:00
Denis Bilenko 00bd98f898
Move loadGitDetails mutator to Initialize phase (#1944)
This requires an API call when run inside a workspace, which in turn
requires a workspace client (one is not available at that point in the
Load phase). We want to keep the Load phase quick, since it's common
across all commands.
2024-12-02 09:49:32 +00:00
Andrew Nester d72b03eea6
Added support for Databricks Apps in DABs 2024-11-29 12:51:12 +01:00
Ilya Kuznetsov 490dd058aa
Extended message for warning when source-linked mode is used outside of the workspace (#1929)
## Changes

Added the path and locations to the warning that is displayed when
source-linked mode is used outside of the workspace.
2024-11-22 14:44:33 +00:00
shreyas-goenka c2e2abcc35
Extend "notebook not found" error to warn about missing extension (#1920)
## Changes
The full workspace path for a notebook does not contain the notebook's
extension. If a user converts that file path to a relative path (like
`/Workspace/bundle_root/bar/nb` -> `./bar/nb`), they can be confused as
to why the new file path does not work.

The changes in this PR nudge them to add the appropriate file extension
(e.g., `./bar/nb.py` or `./bar/nb.ipynb`).

One common way users can end up in this scenario is by using the "view
job as YAML" functionality in the Databricks UI.

## Tests
Unit test and manually.

```
(.venv) ➜  bundle-playground git:(master) ✗ cli bundle validate 
Error: notebook ./foo not found. Local notebook references are expected
to contain one of the following file extensions: [.py, .r, .scala, .sql, .ipynb]
```
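On the user side, the fix is to keep the extension when converting to a
relative path (a sketch; paths are illustrative):

```yaml
tasks:
  - task_key: example            # illustrative
    notebook_task:
      # ./bar/nb would trigger the error above; keep the extension:
      notebook_path: ./bar/nb.py
```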
2024-11-21 16:21:21 +05:30
Ilya Kuznetsov 756e55fabc
Source-linked deployments for bundles in the workspace (#1884)
## Changes

This change adds a preset for source-linked deployments. It is enabled
by default for targets in `development` mode **if** the Databricks CLI
is running from the `/Workspace` directory on DBR. It does not have an
effect when running the CLI anywhere else.

Key highlights:
1. Files in this mode won't be uploaded to the workspace
2. Created resources will use references to source files instead of
their workspace copies (see the sketch below)
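A sketch of enabling the preset explicitly, assuming the preset key is
`source_linked_deployment`:

```yaml
targets:
  dev:
    mode: development
    presets:
      # assumed key name; defaults to true in this mode when the CLI
      # runs from /Workspace on DBR
      source_linked_deployment: true
```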

## Tests
1. Apply preset unit test covering conditional logic
2. High-level process target mode unit test for testing integration
between mutators

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-11-20 13:22:27 +01:00
Andrew Nester 7f3fb10c4a
Do not prepend paths starting with ~ or variable reference (#1905)
## Changes
Fixes #1904 
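A sketch of paths that are now left untouched (values are illustrative):

```yaml
tasks:
  - task_key: example                  # illustrative
    notebook_task:
      notebook_path: ~/notebooks/nb    # starts with ~: not prepended
    libraries:
      - whl: ${var.wheel_path}         # variable reference: not prepended
```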

## Tests
Added regression test
2024-11-15 15:03:59 +00:00
Pieter Noordhuis 1db384018c
Make `TableName` field part of quality monitor schema (#1903)
## Changes

This field was special-cased in #1307 because it's not part of the JSON
payload in the SDK struct.

This approach, while pragmatic, meant it didn't show up in the JSON
schema. While debugging an issue with quality monitors in #1900, I
couldn't figure out why I was getting schema errors on this field, or
how it was passed through to the TF representation. This commit removes
the special case and makes it behave like everything else.
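For context, a sketch of a quality monitor definition where `table_name`
now validates like any other field (names are illustrative, and the exact
set of accompanying fields is an assumption):

```yaml
resources:
  quality_monitors:
    my_monitor:
      table_name: main.my_schema.my_table   # now covered by the JSON schema
      assets_dir: /Workspace/Users/me@example.com/monitors  # illustrative
      output_schema_name: main.my_schema    # illustrative
      snapshot: {}
```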

## Tests

* Unit tests pass.
* Confirmed that the updated schema failed validation before this
change.
2024-11-14 17:39:38 +00:00
Pieter Noordhuis 1508d65c4c
Extract functionality to detect if the CLI is running on DBR (#1889)
## Changes

Whether or not the CLI is running on DBR can be detected once and stored
in the command's context.

By storing it in the context, it can easily be mocked for testing.

This builds on the simpler approach and conversation in #1744. It
unblocks testing of the DBR-specific paths while not compromising on the
checks we can perform to test if the CLI is running on DBR.

## Tests

* Unit tests for the new `dbr` package
* New unit test for the `ConfigureWSFS` mutator
2024-11-14 16:10:45 +00:00
dependabot[bot] 25838ee0af
Bump github.com/databricks/databricks-sdk-go from 0.49.0 to 0.51.0 (#1878)
Known issues:

- [ ] _(non-blocking with a command override)_ `apps.Update` requires 2
`name` params (one from path, one from request body)
- [ ] _(non-blocking)_ `lakeview.Create` does not require positional
argument `display_name` anymore because it's not marked as required in
request body

Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.49.0 to 0.51.0.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
2024-11-13 13:40:53 +00:00
Pieter Noordhuis 26afab2ccb
Fix relative path resolution for dashboards on Windows (#1881)
## Changes

The file presence check for dashboard files was missing a
`filepath.ToSlash`.

This means it didn't work on Windows unless the dashboard was located at
a path without slashes (i.e. the bundle root).

Closes #1875.

## Tests

* Added a unit test to cover this case (failed before the fix).
* Manually ran a dashboard deployment on Windows.
2024-11-05 09:53:53 +00:00
Andrew Nester ac71d2e5ce
Fixed adding /Workspace prefix for resource paths (#1866)
## Changes
`/Workspace` prefix needs to be added to `resource_path` as well.

Fixes the issue mentioned here:
https://github.com/databricks/cli/pull/1822#issuecomment-2447697498

Fixes #1867 

## Tests
Added regression test
2024-10-30 17:34:11 +00:00
Pieter Noordhuis 11f75fd320
Add support for AI/BI dashboards (#1743)
## Changes

This change adds support for modeling [AI/BI dashboards][docs] in DABs.


[Example bundle configuration][example] is located in the
`bundle-examples` repository.

[docs]: https://docs.databricks.com/en/dashboards/index.html#dashboards
[example]:
https://github.com/databricks/bundle-examples/tree/main/knowledge_base/dashboard_nyc_taxi
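A minimal sketch of a dashboard resource based on the linked example
(names and values are illustrative):

```yaml
resources:
  dashboards:
    nyc_taxi_trip_analysis:
      display_name: "NYC Taxi Trip Analysis"   # illustrative
      file_path: ./src/nyc_taxi.lvdash.json    # serialized dashboard JSON
      warehouse_id: ${var.warehouse_id}        # illustrative
```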

## Tests

* Added unit tests for self-contained parts
* Integration test for e2e dashboard deployment and remote change
modification
2024-10-29 09:11:08 +00:00
Andrew Nester ffdbec87cc
Added support for pip options in environment dependencies (#1842)
## Changes
Added support for specifying pip options, such as `--extra-index-url`, in
environment dependencies.

```
environments:
  - environment_key: Default
    spec:
      client: "1"
      dependencies:
        - --extra-index-url https://foo@bar.com/packages/smth somepackage
        - json==1.0.0
```

## Tests
Added regression test

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-21 11:45:39 +00:00
Lennart Kats (databricks) c5043c3d9d
Add `bundle summary` to display URLs for deployed resources (#1731)
## Changes

Adds a textual output to the `databricks bundle summary` command, which
includes URLs of deployed resources.

Example usage:

```
$ databricks bundle summary
Name: my_pipeline
Target: dev
Workspace:
  Host: https://domain.databricks.com
  User: user@databricks.com
  Path: /Users/user@databricks.com/.bundle/my_pipeline/dev
Resources:
  Jobs:
    my_project_job:
      Name: [dev lennart] my_project_job
      URL:  https://domain.databricks.com/jobs/206899209187287?o=6051921418418893
  Pipelines:
    my_project_pipeline:
      Name: [dev lennart] my_project_pipeline
      URL:  https://domain.databricks.com/pipelines/3f849fd5-ba7d-47fa-a34c-c6bf034b4f58?o=6051921418418893
```

Notes:
* The top headers of the output are the same as those from the existing
`bundle validate` command
* URLs are colored light blue in the output
* For resources that haven't been deployed yet, we show `(not deployed)`
in place of the URL

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
2024-10-18 06:45:47 +00:00
Pieter Noordhuis 3270afaff4
Move utility functions dealing with IAM to libs/iamutil (#1820)
## Changes

The two functions `GetShortUserName` and `IsServicePrincipal` are
unrelated to auth or the purpose of the auth package. This change moves
them into their own package and updates `IsServicePrincipal` to take an
`*iam.User` argument instead of a string username.

## Tests

Tests pass.
2024-10-10 13:02:25 +00:00
Lennart Kats (databricks) e885794722
Show actionable errors for collaborative deployment scenarios (#1386)
## Changes

This adds diagnostics for collaborative (production) deployment
scenarios, including:

- Bob deploys a bundle that is normally deployed by Alice, but this
fails because Bob can't write to `/Users/Alice/.bundle`.
- Charlie deploys a bundle that is normally deployed by Alice, but this
fails because he can't create a new pipeline where Alice would be the
owner.
- Alice deploys a bundle where she didn't list herself as one of the
CAN_MANAGE users in `permissions` (see the sketch below). That can work,
but is probably a mistake.
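For that last scenario, a sketch of a top-level `permissions` block that
includes the usual deployer (names are illustrative):

```yaml
permissions:
  - level: CAN_MANAGE
    user_name: alice@example.com   # the usual deployer
  - level: CAN_MANAGE
    user_name: bob@example.com
```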

## Tests

Unit tests, manual testing.
2024-10-10 11:18:23 +00:00
Andrew Nester a8cff48c0b
Always prepend bundle remote paths with /Workspace (#1724)
## Changes
Due to platform changes, all library, notebook, and similar paths used in
Databricks must start with either the /Workspace or /Volumes prefix.

This PR makes sure that all bundle paths are correctly prefixed.

Note: this change is a breaking change if a user previously configured
and used a `/Workspace/Workspace` folder in their workspace file system,
or has the `/Workspace/${workspace.root_path}...` pattern configured
anywhere in their bundle config (see the sketch below).
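A sketch of the pattern in question (paths are illustrative):

```yaml
tasks:
  - task_key: example   # illustrative
    notebook_task:
      # breaking pattern: /Workspace is now prepended automatically, so
      # "/Workspace/${workspace.root_path}/src/nb" would resolve to
      # /Workspace/Workspace/...; use the bare root path instead:
      notebook_path: ${workspace.root_path}/src/nb
```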

Fixes: #1751

AI:
- [x] Scan DABs config and error out on
`/Workspace/${workspace.root_path}...` pattern usage

## Tests
Added unit tests

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-10-02 15:34:00 +00:00
Pieter Noordhuis 80d55f4540
Add resource path field to bundle workspace configuration (#1800)
## Changes

Default workspace path for resources with a presence in the workspace
tree.

Note: this path is **not** created automatically (yet). We need this only
for dashboards (so far), so we can take care of creation if one or more
dashboards are part of a deployment. This saves an API call for
deployments where this is not necessary.
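A sketch of setting the new field explicitly (the default noted in the
comment is an assumption):

```yaml
workspace:
  # assumed to default to ${workspace.root_path}/resources when unset
  resource_path: ${workspace.root_path}/resources
```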

## Tests

Expanded existing tests.
2024-10-02 13:55:40 +00:00
Lennart Kats (databricks) da3b4f7c72
Fix panic in `apply_presets.go` (#1796)
## Changes

This fixes the user-reported panic in `apply_presets.go`. I'm still
unsure how to reproduce this, since the CLI just reports `ob broken_job
is not defined` when I try to use `bundle deploy` with an empty job.
That said, we may as well be defensive here: we have lots of checks for
empty job/cluster/etc. settings scattered throughout our code base, so
at least we're somewhat consistent.
2024-09-29 14:08:10 +00:00
Pieter Noordhuis 1d1aa0a416
Rename `RootPath` -> `BundleRootPath` (#1792)
## Changes

After introducing the `SyncRootPath` field on the bundle (#1694), the
previous `RootPath` became ambiguous: does it mean the bundle root path
or the sync root path? This PR renames the field to `BundleRootPath` to
remove the ambiguity.

## Tests

n/a

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2024-09-27 10:03:05 +00:00
Pieter Noordhuis 56cd96cb93
Move trampoline code into trampoline package (#1793)
## Changes

Doing this to make room for PyDABs under `bundle/python`.

## Tests

n/a
2024-09-27 09:32:54 +00:00
Pieter Noordhuis a1dca56abf
Trim trailing whitespace (#1794)
## Changes

Trailing whitespace is trimmed per the VS Code settings for this
repository.

## Tests

n/a
2024-09-27 09:30:39 +00:00
shreyas-goenka 495040e4cd
Modify SetLocation test utility to take full locations as argument (#1788)
I plan to use this in https://github.com/databricks/cli/pull/1780, to
set the line and column numbers as well for the locations.

gopatch file used:
```
@@
var x expression
var y expression
var z expression
@@
-bundletest.SetLocation(x, y, z)
+bundletest.SetLocation(x, y, []dyn.Location{{File: z}})
```
2024-09-25 16:13:48 +00:00
Gleb Kanterov 490259a14a
Refactor jobs path translation (#1782)
## Changes
Extract a package that other modules can use to transform different
kinds of paths in job resources.

## Tests
Unit tests
2024-09-24 13:51:54 +00:00
Andrew Nester 56ed9bebf3
Added support for creating all-purpose clusters (#1698)
## Changes
Added support for creating all-purpose clusters

Example of configuration

```
bundle:
  name: clusters

resources:
  clusters:
    test_cluster:
      cluster_name: "Test Cluster"
      num_workers: 2
      node_type_id: "i3.xlarge"
      autoscale:
        min_workers: 2
        max_workers: 7
      spark_version: "13.3.x-scala2.12"
      spark_conf:
        "spark.executor.memory": "2g"

  jobs:
    test_job:
      name: "Test Job"
      tasks:
        - task_key: test_task
          existing_cluster_id: ${resources.clusters.test_cluster.id}
          notebook_task:
            notebook_path: "./src/test.py"

targets:
    development:
      mode: development
      compute_id: ${resources.clusters.test_cluster.id}

```

## Tests
Added unit, config and E2E tests
2024-09-23 10:42:34 +00:00
Lennart Kats (databricks) e220f9ddd6
Use the friendly name of service principals when shortening their name (#1770)
## Summary

Use the friendly name of service principals when shortening their name.

This change is helpful for the prefix in development mode. Instead of
adding a prefix like `[dev 1706906c-c0a2-4c25-9f57-3a7aa3cb8123]`, we'll
prefix like `[dev my_principal]`.
2024-09-16 18:35:07 +00:00
Andrew Nester 02e83877f4
Added listing cluster filtering for cluster lookups (#1754)
## Changes
We added a custom resolver for clusters that filters on cluster source
when listing all clusters.

Without the filtering, listing could take a very long time (5-10
minutes), which leads to lookup timeouts.

## Tests
Existing unit tests passing
2024-09-06 11:34:57 +00:00
Gleb Kanterov ed448815b4
PythonMutator: explain missing package error (#1736)
## Changes
Explain the error when the `databricks-pydabs` package is not installed
or the Python environment isn't correctly activated.

Example output:

```
Error: python mutator process failed: ".venv/bin/python3 -m databricks.bundles.build --phase load --input .../input.json --output .../output.json --diagnostics .../diagnostics.json: exit status 1", use --debug to enable logging

.../.venv/bin/python3: Error while finding module specification for 'databricks.bundles.build' (ModuleNotFoundError: No module named 'databricks')

Explanation: 'databricks-pydabs' library is not installed in the Python environment.

If using Python wheels, ensure that 'databricks-pydabs' is included in the dependencies, 
and that the wheel is installed in the Python environment:

  $ .venv/bin/pip install -e .

If using a virtual environment, ensure it is specified as the venv_path property in databricks.yml, 
or activate the environment before running CLI commands:

  experimental:
    pydabs:
      venv_path: .venv
```

## Tests
Unit tests
2024-09-02 09:49:30 +00:00
Andrew Nester 582558cac2
Do not suppress normalisation diagnostics for resolving variables (#1740)
## Changes

Tested on the following bundle configuration

```
bundle:
  name: clusters
  mode: development

variables:
  webhook_notifications:
    description: Webhook URL for notifications
    type: complex
    default:
      on_failure:
        id: 6a6c04c1-389c-4534-95af-b68b62a9dbe6

resources:
  jobs:
    test_job:
      name: "Andrew Nester Test Job"
      tasks:
        - task_key: test_task
          notebook_task:
            notebook_path: "./src/test.py"
          new_cluster:
            num_workers: 2
            node_type_id: "i3.xlarge"
            autoscale:
              min_workers: 2
              max_workers: 7
            spark_version: "12.2.x-scala2.12"
            spark_conf:
              "spark.executor.memory": "2g"
      webhook_notifications: ${var.webhook_notifications}

```

bundle validate output is below

```
andrew.nester@HFW9Y94129 wheel % databricks bundle validate
Warning: expected sequence, found map
  at resources.jobs.test_job.webhook_notifications.on_failure
  in bundle.yml:11:9

Name: clusters
Target: default
Workspace:
  User: andrew.nester@databricks.com
  Path: /Users/andrew.nester@databricks.com/.bundle/clusters/default
```

**Note** that the warning correctly points to the variable
2024-09-02 09:17:18 +00:00
Gleb Kanterov 70ce802518
PythonMutator: preserve normalize diagnostics (#1735)
## Changes
Preserve diagnostics if there are any errors or warnings when
PythonMutator normalizes output. If anything goes wrong during
conversion, diagnostics contain the relevant location and path.

## Tests
Unit tests
2024-08-30 13:29:00 +00:00
Lennart Kats (databricks) 85459c1963
Improve error handling for /Volumes paths in mode: development (#1716)
## Changes
* Provide a more helpful error when using an artifact_path based on
/Volumes
* Allow the use of short_names in /Volumes paths

## Example cases

Example of a valid /Volumes artifact_path:
* `artifact_path:
/Volumes/catalog/schema/${workspace.current_user.short_name}/libs`

Example of an invalid /Volumes path (when using `mode: development`):
* `artifact_path: /Volumes/catalog/schema/libs`
* Resulting error: `artifact_path should contain the current username or
${workspace.current_user.short_name} to ensure uniqueness when using
'mode: development'`
2024-08-28 12:14:19 +00:00
Lennart Kats (databricks) 84b47745e4
Ignore CLI version check on development builds of the CLI (#1714)
## Changes

This change makes sure we ignore the CLI version check on development
builds of the CLI.

Before:

```
$ cat databricks.yml | grep cli_version
  databricks_cli_version: ">= 0.223.1"
$ cli bundle deploy
Error: Databricks CLI version constraint not satisfied. Required: >= 0.223.1, current: 0.0.0-dev+06b169284737
```

After:

```
...
$ cli bundle deploy
...
Warning: Ignoring Databricks CLI version constraint for development build. Required: >= 0.223.1, current: 0.0.0-dev+d52d6f08fcd5
```


2024-08-23 10:13:21 +00:00
Pieter Noordhuis 6e8cd835a3
Add paths field to bundle sync configuration (#1694)
## Changes

This field allows a user to configure paths to synchronize to the
workspace.

Allowed values are relative paths to files and directories anchored at
the directory where the field is set. If one or more values traverse up
the directory tree (to an ancestor of the bundle root directory), the
CLI will dynamically determine the root path to use to ensure that the
file tree structure remains intact.

For example, given a `databricks.yml` in `my_bundle` that includes:

```yaml
sync:
  paths:
    - ../common
    - .
```

Then upon synchronization, the workspace will look like:
```
.
├── common
│   └── lib.py
└── my_bundle
    ├── databricks.yml
    └── notebook.py
```

If not set, behavior remains identical.

## Tests

* Newly added unit tests for the mutators and under `bundle/tests`.
* Manually confirmed a bundle without this configuration works the same.
* Manually confirmed a bundle with this configuration works.
2024-08-21 15:33:25 +00:00
shreyas-goenka f5df211320
Fix prefix preset used for UC schemas (#1704)
## Changes
In https://github.com/databricks/cli/pull/1490 we regressed and started
using the development mode prefix for UC schemas regardless of the mode
of the bundle target.

This PR fixes the regression and adds a regression test.

## Tests
Failing integration tests pass now.
2024-08-21 12:53:54 +00:00
Witold Czaplewski 192f33bb13
[DAB] Add support for requirements libraries in Job Tasks (#1543)
## Changes
While experimenting with DABs, I discovered that requirements libraries
were being ignored.

One thing worth mentioning is that `bundle validate` runs successfully,
but `bundle deploy` fails. This PR only covers the second part.
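A sketch of the task-level configuration this covers (paths are
illustrative):

```yaml
resources:
  jobs:
    my_job:
      tasks:
        - task_key: main                 # illustrative
          notebook_task:
            notebook_path: ./src/nb.py   # illustrative
          libraries:
            # previously ignored on deploy; now deployed correctly
            - requirements: ./requirements.txt
```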


## Tests
Added a unit test
2024-08-21 10:03:56 +00:00
Gleb Kanterov 44902fa350
Make `pydabs/venv_path` optional (#1687)
## Changes
Make `pydabs/venv_path` optional. When not specified, the CLI detects the
Python interpreter using `python.DetectExecutable`, the same way as for
`artifacts`. `python.DetectExecutable` works correctly if a virtual
environment is activated or `python3` is available on PATH through other
means.

Extract the venv detection code from PyDABs into `libs/python/detect`.
This code will be used when we implement the `python/venv_path` section
in `databricks.yml`.

## Tests
Unit tests and manually

---------

Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
2024-08-20 13:26:57 +00:00
Lennart Kats (databricks) 78d0ac5c6a
Add configurable presets for name prefixes, tags, etc. (#1490)
## Changes

This adds configurable transformations based on the transformations
currently seen in `mode: development`.

Example databricks.yml showcasing some of these transformations:

```
bundle:
  name: my_bundle

targets:
  dev:
    presets:
      prefix: "myprefix_"          # prefix all resource names with myprefix_
      pipelines_development: true  # set development to true by default for pipelines
      trigger_pause_status: PAUSED # set pause_status to PAUSED by default for all triggers and schedules
      jobs_max_concurrent_runs: 10 # set max_concurrent runs to 10 by default for all jobs
      tags:
        dev: true
```

## Tests

* Existing process_target_mode tests that were adapted to use this new
code
* Unit tests specific for the new mutator
* Unit tests for config loading and merging
* Manual e2e testing
2024-08-19 18:18:50 +00:00
Lennart Kats (databricks) 07627023f5
Pause continuous pipelines when 'mode: development' is used (#1590)
## Changes

This makes it so that the pipelines `continuous` property is set to
false by default when using `mode: development`.
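In bundle terms, a sketch of the behavior (names are illustrative; an
explicit value is assumed to take precedence over the default):

```yaml
targets:
  dev:
    mode: development   # pipelines here now default to continuous: false

resources:
  pipelines:
    my_pipeline:
      # leave "continuous" unset to get the development-mode default
      libraries:
        - notebook:
            path: ./src/pipeline_nb.py   # illustrative
```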
2024-08-19 16:27:57 +00:00
Andrew Nester 48ff18e5fc
Upload local libraries even if they don't have artifact defined (#1664)
## Changes
Previously, for all libraries referenced in the configuration, DABs made
sure there was a corresponding artifact section. But this is not really
necessary or flexible, because local libraries might be built outside of
the DABs context. It also created difficult-to-follow logic in the code,
where we back-referenced libraries to artifacts.


This PR does 3 things:
1. Allows all local libraries referenced in the DABs config to be
uploaded to the remote workspace (see the sketch below)
2. Simplifies the upload and glob-reference expansion logic by doing it
in a single place
3. Speeds things up by uploading each library only once, in parallel
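A sketch of what now works without a matching `artifacts` entry (the
wheel is illustrative, e.g. built by an external CI step):

```yaml
resources:
  jobs:
    my_job:
      tasks:
        - task_key: main
          python_wheel_task:
            package_name: my_lib   # illustrative
            entry_point: run       # illustrative
          libraries:
            # local wheel built outside the bundle; uploaded automatically,
            # no corresponding "artifacts" section required
            - whl: ./dist/my_lib-0.1.0-py3-none-any.whl
```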

## Tests
Added unit and integration tests, and made sure the change is backward
compatible (no changes in existing tests).

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2024-08-14 09:03:44 +00:00
shreyas-goenka 7ae80de351
Stop tracking file path locations in bundle resources (#1673)
## Changes
Since locations are already tracked in the dynamic value tree, we no
longer need to track it at the resource/artifact level. This PR:
1. Removes use of `paths.Paths`. Uses dyn.Location instead.
2. Refactors the validation of resources not being empty valued to be
generic across all resource types.
  
## Tests
Existing unit tests.
2024-08-13 12:50:15 +00:00
Pieter Noordhuis f3ffded3bf
Merge job parameters based on their name (#1659)
## Changes

This change enables overriding the default value of job parameters in
target overrides.

This is the same approach we already take for job clusters and job
tasks.

Closes #1620.
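A sketch of the override this enables, with parameters merged by `name`
(values are illustrative):

```yaml
resources:
  jobs:
    my_job:
      parameters:
        - name: environment
          default: dev

targets:
  prod:
    resources:
      jobs:
        my_job:
          parameters:
            # merged with the base definition by name;
            # only the default changes
            - name: environment
              default: prod
```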

## Tests

Mutator unit tests and lightweight end-to-end tests.
2024-08-06 16:12:18 +00:00