Commit Graph

83 Commits

Author SHA1 Message Date
Shreyas Goenka 74627160cb
- 2023-05-22 12:31:18 +02:00
Shreyas Goenka 601e16e9e8
add version of schema 2023-05-22 11:06:10 +02:00
Shreyas Goenka 89d02cda6b
nit 2023-05-22 10:48:32 +02:00
Shreyas Goenka 8a9e59df3e
add tests for individual validators 2023-05-22 10:43:43 +02:00
Shreyas Goenka 2125f256e5
lint 2023-05-22 10:30:27 +02:00
Shreyas Goenka 2511c18352
add test for dir gen skip 2023-05-22 10:16:40 +02:00
Shreyas Goenka f6fc63a09e
lazy generate dirs 2023-05-22 09:46:26 +02:00
Shreyas Goenka 479e2b4e32
Add config-file flag 2023-05-22 05:58:05 +02:00
Shreyas Goenka 350053487a
added flag for config file location 2023-05-21 22:48:38 +02:00
Shreyas Goenka a21fbdd383
refactor logic out of the cmd 2023-05-21 22:37:38 +02:00
Shreyas Goenka f5384fc5ed
added a map for validators, and moved init under bundle 2023-05-19 15:39:55 +02:00
Shreyas Goenka 9a8205b3df
address comments plus close file 2023-05-17 15:33:04 +02:00
Shreyas Goenka ecc07e893f
use errors is 2023-05-17 15:27:03 +02:00
Shreyas Goenka 9a70861e4b
address some comments 2023-05-17 15:00:27 +02:00
Shreyas Goenka fb2da25f28
Add tests for validate config 2023-05-17 14:42:24 +02:00
Shreyas Goenka 1d5e09bd39
correct validateType to use reflection and added test for it 2023-05-17 14:26:35 +02:00
Shreyas Goenka 4b37b94630
merge after rename 2023-05-17 14:06:31 +02:00
Shreyas Goenka 742edb5215
added unknown type test 2023-05-17 14:05:06 +02:00
Shreyas Goenka 05b3b6ca2e
added tests for castfloat 2023-05-17 14:02:41 +02:00
Shreyas Goenka 603a68bade
Added e2e tests 2023-05-17 13:41:15 +02:00
Shreyas Goenka 4b7a606d72
move walker to a separate library 2023-05-17 11:53:13 +02:00
Pieter Noordhuis 8979ed1394
Fix tests for new repository name (#390) 2023-05-16 19:02:07 +02:00
Pieter Noordhuis 98ebb78c9b
Rename bricks -> databricks (#389)
## Changes

Rename all instances of "bricks" to "databricks".

## Tests

* Confirmed the goreleaser build works, uses the correct new binary
name, and produces the right archives.
* Help output is confirmed to be correct.
* Output of `git grep -w bricks` is minimal with a couple changes
remaining for after the repository rename.
2023-05-16 18:35:39 +02:00
Andrew Nester 180dfc9a40
Added ability for deferred mutator execution (#380)
## Changes
Added `DeferredMutator` and the `bundle.Defer` function, which allow certain
mutators to always be executed, either at the end of the execution chain or
after an error occurs partway through it.

Usage as follows:

```
deferredMutator := bundle.Defer([]bundle.Mutator{
    lock.Acquire(),
    transform.DoSomething(),
    //...
}, []bundle.Mutator{
    lock.Release(),
})
```
In this case `lock.Release()` will always be executed: either after all
operations above succeed, or after any of them fails.
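
A minimal sketch of how such a deferred mutator could work; the `Mutator` interface here is a simplified stand-in, not the CLI's actual API:

```
package bundle

import "errors"

// Mutator is a simplified stand-in for the real mutator interface.
type Mutator interface {
	Apply() error
}

type deferredMutator struct {
	mutators []Mutator
	finally  []Mutator
}

// Defer returns a mutator that runs mutators, then always runs
// finally, joining errors from both phases.
func Defer(mutators, finally []Mutator) Mutator {
	return &deferredMutator{mutators: mutators, finally: finally}
}

func (d *deferredMutator) Apply() error {
	mainErr := applyAll(d.mutators)
	cleanupErr := applyAll(d.finally) // runs even when mainErr != nil
	return errors.Join(mainErr, cleanupErr)
}

func applyAll(ms []Mutator) error {
	for _, m := range ms {
		if err := m.Apply(); err != nil {
			return err
		}
	}
	return nil
}
```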

## Tests
Before the change

```
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!

Error: terraform not initialized
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Error: deploy lock acquired by andrew.nester@databricks.com at 2023-05-10 16:41:22.902659 +0200 CEST. Use --force to override

```

After the change
```
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy 
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!

Error: terraform not initialized
andrew.nester@HFW9Y94129 multiples-tasks % bricks bundle deploy
Starting upload of bundle files
Uploaded bundle files at /Users/andrew.nester@databricks.com/.bundle/simple-task/development/files!

Error: terraform not initialized
```
2023-05-16 18:01:50 +02:00
shreyas-goenka 9e16140b6e
Add git config block to bundle config (#356)
## Changes
This config block contains `commit`, `branch`, and `remote_url`, which are
loaded automatically from the repository when available and can also be
specified explicitly by the user.
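
For illustration, the block could map onto a struct along these lines (field names taken from the description above; the actual definition in the CLI may differ):

```
package config

// Git holds the bundle's git metadata, as it might appear in config:
//
//   git:
//     commit: <sha>
//     branch: main
//     remote_url: https://github.com/org/repo.git
type Git struct {
	Commit    string `json:"commit,omitempty"`
	Branch    string `json:"branch,omitempty"`
	RemoteURL string `json:"remote_url,omitempty"`
}
```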

## Tests
Unit and black-box tests
2023-04-26 16:54:36 +02:00
Serge Smertin 4c4a293015
Added OpenAPI command coverage (#357)
This PR adds the following command groups:

## Workspace-level command groups

 * `bricks alerts` - The alerts API can be used to perform CRUD operations on alerts.
 * `bricks catalogs` - A catalog is the first layer of Unity Catalog’s three-level namespace.
 * `bricks cluster-policies` - Cluster policy limits the ability to configure clusters based on a set of rules.
 * `bricks clusters` - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.
 * `bricks current-user` - This API allows retrieving information about the currently authenticated user or service principal.
 * `bricks dashboards` - In general, there is little need to modify dashboards using the API.
 * `bricks data-sources` - This API is provided to assist you in making new query objects.
 * `bricks experiments` - MLflow Experiment tracking.
 * `bricks external-locations` - An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
 * `bricks functions` - Functions implement User-Defined Functions (UDFs) in Unity Catalog.
 * `bricks git-credentials` - Registers a personal access token for Databricks to perform operations on behalf of the user.
 * `bricks global-init-scripts` - The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
 * `bricks grants` - In Unity Catalog, data is secure by default.
 * `bricks groups` - Groups simplify identity management, making it easier to assign access to Databricks Workspace, data, and other securable objects.
 * `bricks instance-pools` - The Instance Pools API is used to create, edit, delete, and list instance pools using ready-to-use cloud instances, which reduces cluster start and auto-scaling times.
 * `bricks instance-profiles` - The Instance Profiles API allows admins to add, list, and remove instance profiles that users can launch clusters with.
 * `bricks ip-access-lists` - The IP Access List API enables admins to configure IP access lists.
 * `bricks jobs` - The Jobs API allows you to create, edit, and delete jobs.
 * `bricks libraries` - The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
 * `bricks metastores` - A metastore is the top-level container of objects in Unity Catalog.
 * `bricks model-registry` - MLflow Model Registry commands.
 * `bricks permissions` - The Permissions API is used to create read, write, edit, update, and manage access for various users on different objects and endpoints.
 * `bricks pipelines` - The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
 * `bricks policy-families` - View available policy families.
 * `bricks providers` - Databricks Providers REST API.
 * `bricks queries` - These endpoints are used for CRUD operations on query definitions.
 * `bricks query-history` - Access the history of queries through SQL warehouses.
 * `bricks recipient-activation` - Databricks Recipient Activation REST API.
 * `bricks recipients` - Databricks Recipients REST API.
 * `bricks repos` - The Repos API allows users to manage their git repos.
 * `bricks schemas` - A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.
 * `bricks secrets` - The Secrets API allows you to manage secrets, secret scopes, and access permissions.
 * `bricks service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks serving-endpoints` - The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
 * `bricks shares` - Databricks Shares REST API.
 * `bricks storage-credentials` - A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant.
 * `bricks table-constraints` - Primary key and foreign key constraints encode relationships between fields in tables.
 * `bricks tables` - A table resides in the third layer of Unity Catalog’s three-level namespace.
 * `bricks token-management` - Enables administrators to get all tokens and delete tokens for other users.
 * `bricks tokens` - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
 * `bricks users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks volumes` - Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files.
 * `bricks warehouses` - A SQL warehouse is a compute resource that lets you run SQL commands on data objects within Databricks SQL.
 * `bricks workspace` - The Workspace API allows you to list, import, export, and delete notebooks and folders.
 * `bricks workspace-conf` - This API allows updating known workspace settings for advanced users.

## Account-level command groups

 * `bricks account billable-usage` - This API allows you to download billable usage logs for the specified account and date range.
 * `bricks account budgets` - These APIs manage budget configuration including notifications for exceeding a budget for a period.
 * `bricks account credentials` - These APIs manage credential configurations for this workspace.
 * `bricks account custom-app-integration` - These APIs enable administrators to manage custom oauth app integrations, which is required for adding/using Custom OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account encryption-keys` - These APIs manage encryption key configurations for this workspace (optional).
 * `bricks account groups` - Groups simplify identity management, making it easier to assign access to Databricks Account, data, and other securable objects.
 * `bricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
 * `bricks account log-delivery` - These APIs manage log delivery configurations for this account.
 * `bricks account metastore-assignments` - These APIs manage metastore assignments to a workspace.
 * `bricks account metastores` - These APIs manage Unity Catalog metastores for an account.
 * `bricks account networks` - These APIs manage network configurations for customer-managed VPCs (optional).
 * `bricks account o-auth-enrollment` - These APIs enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
 * `bricks account private-access` - These APIs manage private access settings for this account.
 * `bricks account published-app-integration` - These APIs enable administrators to manage published oauth app integrations, which is required for adding/using Published OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks account storage` - These APIs manage storage configurations for this workspace.
 * `bricks account storage-credentials` - These APIs manage storage credentials for a particular metastore.
 * `bricks account users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks account vpc-endpoints` - These APIs manage VPC endpoint configurations for this account.
 * `bricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
 * `bricks account workspaces` - These APIs manage workspaces for this account.
2023-04-26 13:06:16 +02:00
shreyas-goenka 43bc9a0d9d
Use cmdio logger to log bricks cmd execution errors (#348)
## Changes
Uses the cmdio logger to log the execution error

## Tests
Manually by making the root command return fake errors. Here is the
output:
```
shreyas.goenka@THW32HFW6T bricks % bricks bundle validate
Error: my foo error
```

```
shreyas.goenka@THW32HFW6T bricks % bricks bundle validate --progress-format=json
{
  "error": "my foo error"
}
```
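
The idea reduces to rendering the error according to the selected progress format; a sketch with illustrative names (not the actual cmdio API):

```
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
)

// logError renders a command error either as plain text or as a JSON
// object, depending on the progress format in effect.
func logError(format string, err error) {
	if format == "json" {
		out, _ := json.MarshalIndent(map[string]string{"error": err.Error()}, "", "  ")
		fmt.Fprintln(os.Stderr, string(out))
		return
	}
	fmt.Fprintf(os.Stderr, "Error: %s\n", err)
}

func main() {
	logError("inplace", errors.New("my foo error"))
	logError("json", errors.New("my foo error"))
}
```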

---------

Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
2023-04-24 12:11:52 +02:00
Serge Smertin 9581187c9e
Update to Go SDK v0.8.0 (#351)
## Changes

- Update to Go SDK v0.8.0
- Fix all breaking changes

## Tests

- make test
2023-04-21 10:30:20 +02:00
shreyas-goenka ddc0237468
Error out if question prompts are used in json mode (#340)
## Changes
This PR disallows question prompts in json mode

## Tests
Manually and with a unit test
```
shreyas.goenka@THW32HFW6T job-output % bricks bundle destroy --progress-format=json
The following resources will be removed:
{
  "resource_type": "databricks_job",
  "action": "delete",
  "resource_name": "foo"
}
Error: question prompts are not supported in json mode
```
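
The guard itself can be as small as the sketch below (illustrative names; not the actual cmdio implementation):

```
package cmdio

import (
	"bufio"
	"errors"
	"fmt"
	"os"
)

// Ask prompts the user for input. In JSON mode there is no interactive
// consumer to answer, so prompting is an error.
func Ask(format, question string) (string, error) {
	if format == "json" {
		return "", errors.New("question prompts are not supported in json mode")
	}
	fmt.Print(question + " ")
	return bufio.NewReader(os.Stdin).ReadString('\n')
}
```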
2023-04-18 17:13:49 +02:00
shreyas-goenka 598ad62688
Log mutator messages using progress logger (#312)
This PR uses the progress logger to log messages inside mutators
2023-04-18 16:55:06 +02:00
shreyas-goenka 85889dffb1
Move state to event for whether they support inplace progress logging (#339)
## Changes
Adds an IsInplaceSupported() function to the event interface. Any event
that uses the progress logger now has to declare whether it supports
in-place logging
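
Sketched, the interface change looks roughly like this (simplified):

```
package progress

import "fmt"

// Event is anything the progress logger can render.
type Event interface {
	fmt.Stringer
	// IsInplaceSupported reports whether the event may be rendered by
	// overwriting the previous line instead of appending a new one.
	IsInplaceSupported() bool
}

// uploadEvent is a hypothetical event that opts in to in-place logging.
type uploadEvent struct{ file string }

func (e *uploadEvent) String() string           { return "uploading " + e.file }
func (e *uploadEvent) IsInplaceSupported() bool { return true }
```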

## Tests
Manually
2023-04-18 14:20:35 +02:00
shreyas-goenka b9c68b4bd5
Fix wrap around issues with inplace logging (#334)
## Changes
We previously handled wraparound of long lines of text poorly. This PR
fixes that by saving the cursor position
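
The underlying technique: save the cursor position before the first write, then restore it and clear downwards before each rewrite, so a line that wrapped across several terminal rows is redrawn from its true start. A sketch, assuming a VT100-compatible terminal:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Print("\0337") // ESC 7: save cursor position
	for i := 0; i <= 100; i += 25 {
		// ESC 8: restore saved position; ESC [J: clear to end of screen,
		// wiping previous output even if it wrapped onto multiple rows.
		fmt.Printf("\0338\033[Jprogress: %d%%", i)
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println()
}
```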

## Tests
Manually
2023-04-14 13:06:04 +02:00
shreyas-goenka bd11da88eb
Do not fail snapshot destroy if snapshot does not exist (#328)
## Changes
`bricks bundle destroy` would fail if the sync snapshot did not exist
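
The fix is the standard Go pattern of treating a missing file as success when deleting; a sketch (function name illustrative):

```
package files

import (
	"errors"
	"io/fs"
	"os"
)

// removeSnapshot deletes the sync snapshot file, treating "already
// gone" as success so that destroy stays idempotent.
func removeSnapshot(path string) error {
	err := os.Remove(path)
	if err != nil && !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	return nil
}
```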

## Tests
Manually

After:
```
shreyas.goenka@THW32HFW6T bundle-destroy % bricks bundle destroy --auto-approve
No resources to destroy!

Remote directory /Users/shreyas.goenka@databricks.com/.bundle/destroy/default will be deleted
Successfully deleted files!
```

Before:
```
shreyas.goenka@THW32HFW6T bundle-destroy % bricks bundle destroy --auto-approve
No resources to destroy!

Remote directory /Users/shreyas.goenka@databricks.com/.bundle/destroy/default will be deleted
Error: failed to destroy sync snapshot file: remove /Users/shreyas.goenka/projects/bundle-destroy/.databricks/bundle/default/sync-snapshots/a5bd1966cb8980a9.json: no such file or directory
```
2023-04-12 21:37:01 +02:00
shreyas-goenka 42cd405eba
Add tests for fileSet adding `databricks` to .gitignore (#325)
## Changes

These flows were previously only tested in the `project` package. Since
that package was deleted in
https://github.com/databricks/bricks/pull/321, this PR adds the missing
coverage.

## Tests
2023-04-12 12:04:10 +02:00
Miles Yucht 946906221d
Delete sync snapshots file when destroying a bundle (#323)
## Changes
This PR changes the files.Delete() mutator to delete the sync snapshots
file on destroy. This ensures that all files will be uploaded again the
next time the bundle is deployed.

## Tests
- [x] Manual test: Ran `bricks bundle destroy`, observed that the sync
snapshots file was deleted.
2023-04-11 16:57:01 +02:00
shreyas-goenka 4871f7bc8a
Add bundle destroy command (#300)
Adds bundle destroy capability to bricks
2023-04-06 12:54:58 +02:00
Pieter Noordhuis 33645ae6ef
Revert "Configure log level to info by default (#267)" (#307)
## Changes

This reverts commit e7a7e5b95a.

Job and pipeline runs print progress information now. No need to
continue to rely on logging for this.

## Tests
2023-04-05 15:37:09 +02:00
Serge Smertin 02d9f877b5
Make `bricks auth` use `all-apis` scope (#304)
## Changes
Use the `all-apis` scope so that the issued token can be used for SCIM APIs.
The production environment has to be tuned to enable the `all-apis`
scope for a specific account.

## Tests
Manual
2023-04-05 10:18:13 +02:00
shreyas-goenka 902813a490
Hardcode `.databricks` ignore pattern to ensure we never sync the cache directory (#295)
## Changes
1. Add a pattern to always ignore `.databricks`.
2. Create `.gitignore` with `.databricks` on a best-effort basis when it's needed (see the sketch below).
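
A sketch of the best-effort part (names illustrative; errors deliberately swallowed):

```
package git

import (
	"os"
	"strings"
)

// ensureGitIgnore appends pattern to the .gitignore at path unless an
// equal line already exists. Errors are swallowed: this is best effort.
func ensureGitIgnore(path, pattern string) {
	data, err := os.ReadFile(path)
	if err == nil {
		for _, line := range strings.Split(string(data), "\n") {
			if strings.TrimSpace(line) == pattern {
				return // already ignored
			}
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return
	}
	defer f.Close()
	f.WriteString(pattern + "\n")
}
```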

## Tests
2023-04-04 15:44:57 +02:00
dependabot[bot] 57cf66d3a8
Bump github.com/databricks/databricks-sdk-go from 0.5.0 to 0.6.0 (#299) 2023-04-03 21:33:21 +02:00
Pieter Noordhuis cfd32c9602
Try to resolve a profile if only the host is specified (#287)
## Changes

This improves out-of-the-box usability: a user who has already
configured a `.databrickscfg` file can reference just the workspace host
in their `bundle.yml`, and the right profile is picked up automatically.
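
Conceptually: when `bundle.yml` specifies only a host, scan the profiles in `.databrickscfg` and select the unique one whose host matches. A sketch using `gopkg.in/ini.v1` (an assumption; the CLI may parse the file differently):

```
package config

import (
	"fmt"
	"strings"

	"gopkg.in/ini.v1"
)

// resolveProfile returns the name of the unique profile in the given
// .databrickscfg file whose host matches the requested host.
func resolveProfile(cfgPath, host string) (string, error) {
	file, err := ini.Load(cfgPath)
	if err != nil {
		return "", err
	}
	var matches []string
	for _, section := range file.Sections() {
		if strings.TrimSuffix(section.Key("host").String(), "/") == strings.TrimSuffix(host, "/") {
			matches = append(matches, section.Name())
		}
	}
	switch len(matches) {
	case 0:
		return "", fmt.Errorf("no profile matches host %s", host)
	case 1:
		return matches[0], nil
	default:
		return "", fmt.Errorf("multiple profiles match host %s: %s", host, strings.Join(matches, ", "))
	}
}
```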

## Tests

* Newly added tests pass.
* Manual testing confirms intended behavior.

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2023-03-29 20:44:19 +02:00
Pieter Noordhuis 8af934bbbb
Function to find the Git repository containing a bundle (#289)
## Changes

Useful functions from #277.
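
The core of such a function is a walk up the directory tree until a `.git` entry is found; a minimal sketch:

```
package git

import (
	"errors"
	"os"
	"path/filepath"
)

// FindRepositoryRoot walks up from dir and returns the first ancestor
// (including dir itself) that contains a .git entry.
func FindRepositoryRoot(dir string) (string, error) {
	dir, err := filepath.Abs(dir)
	if err != nil {
		return "", err
	}
	for {
		if _, err := os.Stat(filepath.Join(dir, ".git")); err == nil {
			return dir, nil
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return "", errors.New("not inside a git repository")
		}
		dir = parent
	}
}
```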

## Tests

Tests pass.
2023-03-29 16:36:35 +02:00
shreyas-goenka 8fd3dccca9
Add progress logs for job runs (#276) 2023-03-29 14:58:09 +02:00
Pieter Noordhuis 1b47dd3af7
Trim log source field to basename of file (#273)
This makes logs more readable and avoids leaking paths.

Before:
```
time=2023-03-22T16:38:30.238+01:00 level=INFO source=/Users/pieter.noordhuis/dev/bricks/bundle/phases/phase.go:30 msg="Phase: initialize"
time=2023-03-22T16:38:31.303+01:00 level=INFO source=/Users/pieter.noordhuis/dev/bricks/bundle/phases/phase.go:30 msg="Phase: build"
time=2023-03-22T16:38:31.303+01:00 level=INFO source=/Users/pieter.noordhuis/dev/bricks/bundle/phases/phase.go:30 msg="Phase: deploy"
```

After:
```
time=2023-03-22T17:02:47.290+01:00 level=INFO source=phase.go:30 msg="Phase: initialize"
time=2023-03-22T17:02:48.171+01:00 level=INFO source=phase.go:30 msg="Phase: build"
time=2023-03-22T17:02:48.171+01:00 level=INFO source=phase.go:30 msg="Phase: deploy"
```
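
With today's log/slog, the same trimming can be expressed as a ReplaceAttr hook; a sketch of the technique, not the CLI's exact code (which used the experimental predecessor of this package at the time):

```
package main

import (
	"log/slog"
	"os"
	"path/filepath"
)

func main() {
	handler := slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{
		AddSource: true,
		ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
			// Trim the source file to its basename to keep logs short
			// and avoid leaking local paths.
			if a.Key == slog.SourceKey {
				if src, ok := a.Value.Any().(*slog.Source); ok {
					src.File = filepath.Base(src.File)
				}
			}
			return a
		},
	})
	slog.SetDefault(slog.New(handler))
	slog.Info("Phase: initialize")
}
```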
2023-03-23 08:56:39 +01:00
Pieter Noordhuis 123a5e15e9
Acquire lock prior to deploy (#270)
Add configuration:

```
bundle:
  lock:
    enabled: true
    force: false
```

The force field can be set by passing the `--force` argument to `bricks
bundle deploy`. Doing so means the deployment lock is acquired even if
it is currently held. This should only be used in exceptional cases
(e.g. a previous deployment has failed to release the lock).
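
In effect, force acquisition skips the already-held check; a simplified sketch of the semantics (illustrative, not the actual locker):

```
package lock

import (
	"errors"
	"fmt"
)

var errLockHeld = errors.New("deploy lock already held")

// Acquire takes the deployment lock. With force set, an existing
// holder is overwritten instead of causing an error.
func Acquire(state *string, holder string, force bool) error {
	if *state != "" && !force {
		return fmt.Errorf("%w by %s; use --force to override", errLockHeld, *state)
	}
	*state = holder
	return nil
}
```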
2023-03-22 16:37:26 +01:00
shreyas-goenka 75d516939b
Error out if notebook file does not exist locally (#261)
Adds a check for whether the notebook file exists locally
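
Only paths relative to the bundle root can be verified locally; workspace-absolute paths (case 2 below) can only fail at run time. A sketch of the local check (illustrative names):

```
package validate

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// checkNotebookExists validates a notebook_task path that is relative
// to the bundle root. Workspace-absolute paths are not checked here.
func checkNotebookExists(bundleRoot, notebookPath string) error {
	if strings.HasPrefix(notebookPath, "/") {
		return nil // remote path; existence is checked by the backend at run time
	}
	abs := filepath.Join(bundleRoot, notebookPath)
	if _, err := os.Stat(abs); err != nil {
		return fmt.Errorf("notebook %s not found. Error: %w", notebookPath, err)
	}
	return nil
}
```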

case 1: local (relative) file does not exist
```
    foo:
      name: "[job-output] test-job by shreyas"

      tasks:
        - task_key: my_notebook_task
          existing_cluster_id: ***
          notebook_task:
            notebook_path: "./doesnotexist"
```
output:
```
shreyas.goenka@THW32HFW6T job-output % bricks bundle deploy
Error: notebook ./doesnotexist not found. Error: open /Users/shreyas.goenka/projects/job-output/doesnotexist: no such file or directory
```


case 2: remote (absolute) file does not exist
```
    foo:
      name: "[job-output] test-job by shreyas"

      tasks:
        - task_key: my_notebook_task
          existing_cluster_id: ***
          notebook_task:
            notebook_path: "/Users/shreyas.goenka@databricks.com/doesnotexist"
```

output:
```
shreyas.goenka@THW32HFW6T job-output % bricks bundle deploy
shreyas.goenka@THW32HFW6T job-output % bricks bundle run foo
Error: failed to reach TERMINATED or SKIPPED, got INTERNAL_ERROR: Task my_notebook_task failed with message: Notebook not found: /Users/shreyas.goenka@databricks.com/doesnotexist. This caused all downstream tasks to get skipped.
```

case 3: remote exists
Successful deploy and run
2023-03-21 18:13:16 +01:00
Pieter Noordhuis 7dcc0d4b41
Fix test (#268)
Follow up to #267.
2023-03-21 16:34:16 +01:00
Pieter Noordhuis e7a7e5b95a
Configure log level to info by default (#267)
Note: we log at INFO level by default until
we implement progress reporting to stdout/stderr.
2023-03-21 16:14:20 +01:00
shreyas-goenka ae09eb02d5
Path escape file path in filer interface (#254) 2023-03-17 17:42:35 +01:00
Pieter Noordhuis ad666ff796
Use new logger throughout codebase (#256) 2023-03-17 15:17:31 +01:00