Commit Graph

20 Commits

Andrew Nester bc30c9ed4a
Added `--restart` flag for `bundle run` command (#1191)
## Changes
Added `--restart` flag for `bundle run` command

When running with this flag, `bundle run` cancels all existing runs
before starting a new one.
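
For example, a hypothetical invocation (the resource key `my_job` is illustrative):
```
$ databricks bundle run --restart my_job
```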

## Tests
Manually
2024-02-09 14:33:14 +00:00
Andrew Nester 2bbb644749
Group bundle run flags by job and pipeline types (#1174)
## Changes
Group bundle run flags by job and pipeline types

## Tests
```
Run a resource (e.g. a job or a pipeline)

Usage:
  databricks bundle run [flags] KEY

Job Flags:
      --dbt-commands strings                 A list of commands to execute for jobs with DBT tasks.
      --jar-params strings                   A list of parameters for jobs with Spark JAR tasks.
      --notebook-params stringToString       A map from keys to values for jobs with notebook tasks. (default [])
      --params stringToString                comma separated k=v pairs for job parameters (default [])
      --pipeline-params stringToString       A map from keys to values for jobs with pipeline tasks. (default [])
      --python-named-params stringToString   A map from keys to values for jobs with Python wheel tasks. (default [])
      --python-params strings                A list of parameters for jobs with Python tasks.
      --spark-submit-params strings          A list of parameters for jobs with Spark submit tasks.
      --sql-params stringToString            A map from keys to values for jobs with SQL tasks. (default [])

Pipeline Flags:
      --full-refresh strings   List of tables to reset and recompute.
      --full-refresh-all       Perform a full graph reset and recompute.
      --refresh strings        List of tables to update.
      --refresh-all            Perform a full graph update.

Flags:
  -h, --help      help for run
      --no-wait   Don't wait for the run to complete.

Global Flags:
      --debug            enable debug logging
  -o, --output type      output type: text or json (default text)
  -p, --profile string   ~/.databrickscfg profile
  -t, --target string    bundle target to use (if applicable)
      --var strings      set values for variables defined in bundle config. Example: --var="foo=bar"
```
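
For example, a hypothetical invocation combining a job flag with a global flag (the key `my_job` and the parameter value are illustrative):
```
$ databricks bundle run my_job --notebook-params data_path=/mnt/input -t dev
```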
2024-02-06 14:51:02 +00:00
Andrew Nester 6dd6899b52
Do not allow input prompts in Git Bash terminal (#1069)
## Changes

Likely due to the fact that Git Bash does not correctly support ANSI escape
sequences, we cannot use the `promptui` package there. See known issues:

- https://github.com/manifoldco/promptui/issues/208
- https://github.com/chzyer/readline/issues/191
2023-12-18 15:01:59 +00:00
shreyas-goenka b6aa4631f1
Fix metadata computation for empty bundle (#939)
## Changes
This PR fixes metadata computation for an empty bundle. Previously we would
error because the `terraform.Load()` mutator fails on an empty or missing
state file.

## Tests
Failing integration tests now pass.
2023-11-02 11:00:30 +00:00
Pieter Noordhuis a2775f836f
Use interactive prompt to select resource to run if not specified (#762)
## Changes

Display an interactive prompt with a list of resources to run if one
isn't specified and the command is run interactively.

## Tests

Manually confirmed:
* The new prompt works
* Shell completion still works
* Specifying a key argument still works
2023-09-11 18:03:12 +00:00
Pieter Noordhuis bee7a16cb0
Remove dependency on global state for remaining commands (#613)
## Changes

This removes the remaining dependency on global state and unblocks work
to parallelize integration tests. As is, we can already uncomment an
integration test that had to be skipped because of other tests tainting
global state. This is no longer an issue.

Also see #595 and #606.

## Tests

* Unit and integration tests pass.
* Manually confirmed the help output is the same.
2023-07-27 10:03:08 +00:00
Pieter Noordhuis ec892aa11c
Remove dependency on global state for the root command (#606)
## Changes

This change is another step towards a CLI without globals. Also see #595.

The flags for the root command are now encapsulated in struct types.

## Tests

Unit tests pass.
2023-07-26 11:17:09 +00:00
Lennart Kats (databricks) 57e75d3e22
Add development runs (#522)
This implements the "development run" functionality that we desire for DABs in the workspace / IDE.

## bundle.yml changes

In bundle.yml, there should be a "dev" environment that is marked as
`mode: development`:
```
environments:
  dev:
    default: true
    mode: development # future accepted values might include pull_request, production
```

Setting `mode` to `development` indicates that this environment is used
just for running things for development. This results in several changes
to deployed assets:
* All assets will get '[dev]' in their name and will get a 'dev' tag
* All assets will be hidden from the list of assets (future work; e.g.
for jobs we would have a special job_type that hides it from the list)
* All deployed assets will be ephemeral (future work, we need some form
of garbage collection)
* Pipelines will be marked as 'development: true'
* Jobs can run on development compute through the `--compute` parameter
in the CLI
* Jobs get their schedule / triggers paused
* Jobs get concurrent runs (it's really annoying if your runs get
skipped because the last run was still in progress)

Other accepted values for `mode` are `default` (which does nothing) and
`pull-request` (which is reserved for future use).

## CLI changes

To run a single job called "shark_sightings" on existing compute, use the
following commands:
```
$ databricks bundle deploy --compute 0617-201942-9yd9g8ix
$ databricks bundle run shark_sightings
```

which would deploy and run a job called "[dev] shark_sightings" on the
compute provided. Note that `--compute` is not accepted in production
environments, so we show an error if `mode: development` is not used.

The `run --deploy` command offers a convenient shorthand for the common
combination of deploying & running:
```
$ export DATABRICKS_COMPUTE=0617-201942-9yd9g8ix
$ bundle run --deploy shark_sightings
```
The `--deploy` addition isn't really essential and I welcome feedback 🤔
I played with the idea of a "debug" or "dev" command but that seemed to
only make the option space even broader for users. The above could work
well with an IDE or workspace that automatically sets the target
compute.

One more thing I added: `run --no-wait` can now be used to run
something without waiting for it to complete (useful for IDE-like
environments that can display progress themselves).
```
$ bundle run --deploy shark_sightings --no-wait
```
2023-07-12 08:51:54 +02:00
Andrew Nester 6141476ca2
Added support for bundle.Seq, simplified Mutator.Apply interface (#403)
## Changes
Added support for `bundle.Seq` and simplified the `Mutator.Apply` interface by
removing the list of mutators from its return values.

## Tests
1. Ran `cli bundle deploy` and interrupted it with Cmd + C mid-execution so
the lock is not released
2. Ran `cli bundle deploy` again to make sure the CLI does not try to
release the lock when it fails to acquire it
```
andrew.nester@HFW9Y94129 multiples-tasks % cli bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!

^C
andrew.nester@HFW9Y94129 multiples-tasks % cli bundle deploy
Error: deploy lock acquired by andrew.nester@databricks.com at 2023-05-24 12:10:23.050343 +0200 CEST. Use --force to override
```
2023-05-24 14:45:19 +02:00
Pieter Noordhuis 98ebb78c9b
Rename bricks -> databricks (#389)
## Changes

Rename all instances of "bricks" to "databricks".

## Tests

* Confirmed the goreleaser build works, uses the correct new binary
name, and produces the right archives.
* Help output is confirmed to be correct.
* Output of `git grep -w bricks` is minimal with a couple changes
remaining for after the repository rename.
2023-05-16 18:35:39 +02:00
shreyas-goenka c5e940f664
Add support for variables in bundle config (#359)
## Changes
This PR allows you to define variables in the bundle config and set
them in three ways (see the sketch after this list):
1. command line args
2. process environment variables
3. in the bundle config itself
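
A minimal sketch of the three mechanisms (the variable name `foo` and the `BUNDLE_VAR_` environment prefix are illustrative assumptions, not taken verbatim from this PR):
```
# 1. Command line argument
$ bricks bundle deploy --var="foo=bar"

# 2. Process environment variable (assumed BUNDLE_VAR_ prefix)
$ BUNDLE_VAR_foo=bar bricks bundle deploy

# 3. Default value in the bundle config itself (YAML shown as a comment):
#    variables:
#      foo:
#        default: bar
```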

## Tests
manually, unit, and black box tests

---------

Co-authored-by: Miles Yucht <miles@databricks.com>
2023-05-15 11:34:05 +02:00
Serge Smertin 4c4a293015
Added OpenAPI command coverage (#357)
This PR adds the following command groups:

## Workspace-level command groups

 * `bricks alerts` - The alerts API can be used to perform CRUD operations on alerts.
 * `bricks catalogs` - A catalog is the first layer of Unity Catalog’s three-level namespace.
 * `bricks cluster-policies` - Cluster policy limits the ability to configure clusters based on a set of rules.
 * `bricks clusters` - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.
 * `bricks current-user` - This API allows retrieving information about currently authenticated user or service principal.
 * `bricks dashboards` - In general, there is little need to modify dashboards using the API.
 * `bricks data-sources` - This API is provided to assist you in making new query objects.
 * `bricks experiments` - MLflow Experiment tracking.
 * `bricks external-locations` - An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
 * `bricks functions` - Functions implement User-Defined Functions (UDFs) in Unity Catalog.
 * `bricks git-credentials` - Registers personal access token for Databricks to do operations on behalf of the user.
 * `bricks global-init-scripts` - The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
 * `bricks grants` - In Unity Catalog, data is secure by default.
 * `bricks groups` - Groups simplify identity management, making it easier to assign access to Databricks Workspace, data, and other securable objects.
 * `bricks instance-pools` - Instance Pools API are used to create, edit, delete and list instance pools using ready-to-use cloud instances, which reduces cluster start and auto-scaling times.
 * `bricks instance-profiles` - The Instance Profiles API allows admins to add, list, and remove instance profiles that users can launch clusters with.
 * `bricks ip-access-lists` - IP Access List enables admins to configure IP access lists.
 * `bricks jobs` - The Jobs API allows you to create, edit, and delete jobs.
 * `bricks libraries` - The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
 * `bricks metastores` - A metastore is the top-level container of objects in Unity Catalog.
 * `bricks model-registry` - MLflow Model Registry commands.
 * `bricks permissions` - Permissions API are used to create read, write, edit, update and manage access for various users on different objects and endpoints.
 * `bricks pipelines` - The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
 * `bricks policy-families` - View available policy families.
 * `bricks providers` - Databricks Providers REST API.
 * `bricks queries` - These endpoints are used for CRUD operations on query definitions.
 * `bricks query-history` - Access the history of queries through SQL warehouses.
 * `bricks recipient-activation` - Databricks Recipient Activation REST API.
 * `bricks recipients` - Databricks Recipients REST API.
 * `bricks repos` - The Repos API allows users to manage their git repos.
 * `bricks schemas` - A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.
 * `bricks secrets` - The Secrets API allows you to manage secrets, secret scopes, and access permissions.
 * `bricks service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks serving-endpoints` - The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
 * `bricks shares` - Databricks Shares REST API.
 * `bricks storage-credentials` - A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant.
 * `bricks table-constraints` - Primary key and foreign key constraints encode relationships between fields in tables.
 * `bricks tables` - A table resides in the third layer of Unity Catalog’s three-level namespace.
 * `bricks token-management` - Enables administrators to get all tokens and delete tokens for other users.
 * `bricks tokens` - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
 * `bricks users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks volumes` - Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files.
 * `bricks warehouses` - A SQL warehouse is a compute resource that lets you run SQL commands on data objects within Databricks SQL.
 * `bricks workspace` - The Workspace API allows you to list, import, export, and delete notebooks and folders.
 * `bricks workspace-conf` - This API allows updating known workspace settings for advanced users.

## Account-level command groups

 * `bricks account billable-usage` - This API allows you to download billable usage logs for the specified account and date range.
 * `bricks account budgets` - These APIs manage budget configuration including notifications for exceeding a budget for a period.
 * `bricks account credentials` - These APIs manage credential configurations for this workspace.
 * `bricks account custom-app-integration` - These APIs enable administrators to manage custom oauth app integrations, which is required for adding/using Custom OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account encryption-keys` - These APIs manage encryption key configurations for this workspace (optional).
 * `bricks account groups` - Groups simplify identity management, making it easier to assign access to Databricks Account, data, and other securable objects.
 * `bricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
 * `bricks account log-delivery` - These APIs manage log delivery configurations for this account.
 * `bricks account metastore-assignments` - These APIs manage metastore assignments to a workspace.
 * `bricks account metastores` - These APIs manage Unity Catalog metastores for an account.
 * `bricks account networks` - These APIs manage network configurations for customer-managed VPCs (optional).
 * `bricks account o-auth-enrollment` - These APIs enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
 * `bricks account private-access` - These APIs manage private access settings for this account.
 * `bricks account published-app-integration` - These APIs enable administrators to manage published oauth app integrations, which is required for adding/using Published OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks account storage` - These APIs manage storage configurations for this workspace.
 * `bricks account storage-credentials` - These APIs manage storage credentials for a particular metastore.
 * `bricks account users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks account vpc-endpoints` - These APIs manage VPC endpoint configurations for this account.
 * `bricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
 * `bricks account workspaces` - These APIs manage workspaces for this account.
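
Each group is exposed as a subcommand tree under the binary; for example (illustrative invocations):
```
$ bricks clusters list
$ bricks account workspaces list
```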
2023-04-26 13:06:16 +02:00
Pieter Noordhuis 04e77102c9
Add mutators to pull and push Terraform state (#288)
## Changes

Pull state before deploying and push state after deploying.

Note: the run command was missing mutators to initialize Terraform. This
is necessary if the cache directory is removed between running "deploy"
and "run" (which is valid now that we synchronize state).

## Tests

Manually.
2023-03-30 12:01:09 +02:00
shreyas-goenka 047a189c1e
Add job run output logging (#260)
This PR adds output logging for job runs

Tested using unit tests and manually
2023-03-21 16:25:18 +01:00
shreyas-goenka c4c8f944f3
Remove redundant terraform initialize mutator (#238)
Tested manually that `bricks bundle run` runs a pipeline.

`phases.Initialize` already includes the `terraform.Initialize` mutator.
2023-03-09 15:05:02 +01:00
Pieter Noordhuis dd95668474
Complete positional argument to bundle run (#220)
Command completion can be configured through `bricks completion`.
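
For example, a sketch for bash (cobra-style completion commands take the shell name as an argument; the output path is illustrative):
```
$ bricks completion bash > ~/.local/share/bash-completion/completions/bricks
```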
2023-02-20 21:55:06 +01:00
Pieter Noordhuis 6737af4b06
Move bundle loading functions to top level (#181)
We intend to let non-bundle commands use bundle configuration for their
operating context (workspace, auth, default cluster, etc).

As such, all commands must first try to load a bundle configuration.
If there is no bundle, they can fall back on taking their operating
context from command-line flags and the environment.

This is on top of #180.
2023-01-27 17:05:57 +01:00
Pieter Noordhuis 8f4461904b
Define flags for running jobs and pipelines (#146) 2022-12-23 15:17:16 +01:00
Pieter Noordhuis 49aa858b89
Run command must always take a single argument (#156) 2022-12-22 16:19:38 +01:00
Pieter Noordhuis b111416fe5
Add `bricks bundle run` command (#134) 2022-12-15 15:12:47 +01:00