Commit Graph

3 Commits

Andrew Nester 8053e9c4e4
Fix segfault in bundle summary command (#1937)
## Changes
This PR introduces a new `isNil` method, which lets us filter out all
improperly defined resources in the `bundle summary` command. This
includes deleted resources and resources with incorrect configuration,
such as those that define only the resource key and nothing else.
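
A minimal Go sketch of the filtering idea; the types and method
signatures here are illustrative only, not the CLI's actual API:

```
package main

import "fmt"

// Hypothetical types; the CLI's actual bundle config structs differ.
type jobConfig struct{ Name string }

type resource struct {
	config *jobConfig // nil for deleted or key-only resources
}

// isNil reports whether the resource carries no usable configuration.
func (r *resource) isNil() bool {
	return r == nil || r.config == nil
}

func summarize(resources map[string]*resource) {
	for key, r := range resources {
		if r.isNil() {
			continue // skip instead of dereferencing nil and crashing
		}
		fmt.Printf("%s: %s\n", key, r.config.Name)
	}
}

func main() {
	summarize(map[string]*resource{
		"valid_job":   {config: &jobConfig{Name: "My Job"}},
		"key_only":    {},  // defined with only a key
		"deleted_job": nil, // deleted resource
	})
}
```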

Fixes #1919, #1913

## Tests
Added a regression unit test case
2024-11-28 12:27:24 +00:00
Lennart Kats (databricks) c5043c3d9d
Add `bundle summary` to display URLs for deployed resources (#1731)
## Changes

Adds a textual output to the `databricks bundle summary` command, which
includes URLs of deployed resources.

Example usage:

```
$ databricks bundle summary
Name: my_pipeline
Target: dev
Workspace:
  Host: https://domain.databricks.com
  User: user@databricks.com
  Path: /Users/user@databricks.com/.bundle/my_pipeline/dev
Resources:
  Jobs:
    my_project_job:
      Name: [dev lennart] my_project_job
      URL:  https://domain.databricks.com/jobs/206899209187287?o=6051921418418893
  Pipelines:
    my_project_pipeline:
      Name: [dev lennart] my_project_pipeline
      URL:  https://domain.databricks.com/pipelines/3f849fd5-ba7d-47fa-a34c-c6bf034b4f58?o=6051921418418893
```

Notes:
* The top headers of the output are the same as those from the existing
`bundle validate` command
* URLs are colored light blue in the output
* For resources that haven't been deployed yet, we show `(not deployed)`
in place of the URL
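
A minimal Go sketch of the `(not deployed)` fallback described in the
last note; the names here are hypothetical, not the CLI's actual
implementation:

```
package main

import "fmt"

// Hypothetical summary type; the CLI's internal structs differ.
type resourceSummary struct {
	Name string
	URL  string // empty until the resource is deployed
}

// renderURL returns the deployed resource URL, or a placeholder
// when the resource has not been deployed yet.
func renderURL(r resourceSummary) string {
	if r.URL == "" {
		return "(not deployed)"
	}
	return r.URL
}

func main() {
	job := resourceSummary{Name: "[dev lennart] my_project_job"}
	fmt.Printf("Name: %s\nURL:  %s\n", job.Name, renderURL(job))
}
```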

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Pieter Noordhuis <pcnoordhuis@gmail.com>
2024-10-18 06:45:47 +00:00
Andrew Nester 56ed9bebf3
Added support for creating all-purpose clusters (#1698)
## Changes
Added support for creating all-purpose clusters

Example configuration:

```
bundle:
  name: clusters

resources:
  clusters:
    test_cluster:
      cluster_name: "Test Cluster"
      num_workers: 2
      node_type_id: "i3.xlarge"
      autoscale:
        min_workers: 2
        max_workers: 7
      spark_version: "13.3.x-scala2.12"
      spark_conf:
        "spark.executor.memory": "2g"

  jobs:
    test_job:
      name: "Test Job"
      tasks:
        - task_key: test_task
          existing_cluster_id: ${resources.clusters.test_cluster.id}
          notebook_task:
            notebook_path: "./src/test.py"

targets:
  development:
    mode: development
    compute_id: ${resources.clusters.test_cluster.id}

```

## Tests
Added unit, config and E2E tests
2024-09-23 10:42:34 +00:00