Commit Graph

11 Commits

Author SHA1 Message Date
Taiga Matsumoto e408b701ac
Add override to support YAML inputs for apps (#921)
## Changes

Takes @andrefurlan-db's original
[commit](https://github.com/databricks/cli/compare/databricks:6e21ced...andrefurlan-db:12ed10c)
to add `apps` support to the CLI, and adds YAML file support as an
override (the apps routes are already a part of the Go SDK and are
available for use in the CLI).

**NOTE: this feature is still private preview. CLI usage will be
internal only**

## Tests
2023-10-27 18:57:26 +00:00
Andrew Nester 6e708da6fc
Upgraded Go version to 1.21 (#664)
## Changes
Upgraded Go version to 1.21

Upgraded to use `slices` and `slog` from core instead of experimental.

Still use `exp/maps`, as our code relies on `maps.Keys`, which is not part
of the core package; migrating off it would require refactoring.
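
As a minimal sketch (not from the PR) of the resulting import split: `slices` and `log/slog` now come from the standard library, while `maps.Keys` still comes from `golang.org/x/exp/maps`.

```go
package main

import (
	"log/slog"
	"slices"

	"golang.org/x/exp/maps" // still needed for maps.Keys
)

func main() {
	m := map[string]int{"b": 2, "a": 1}

	// exp/maps.Keys returns a []string; the core maps package (as of
	// Go 1.21) has no slice-returning Keys, hence the remaining exp
	// dependency.
	keys := maps.Keys(m)

	// slices.Sort is part of the standard library as of Go 1.21.
	slices.Sort(keys)

	// slog is also core as of Go 1.21 (previously x/exp/slog).
	slog.Info("sorted keys", "keys", keys)
}
```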

### Tests

Integration tests passed

```
[DEBUG] Test execution command:  /opt/homebrew/opt/go@1.21/bin/go test ./... -json -timeout 1h -run ^TestAcc
[DEBUG] Test execution directory:  /Users/andrew.nester/cli
2023/08/15 13:20:51 [INFO]  TestAccAlertsCreateErrWhenNoArguments (2.150s)
2023/08/15 13:20:52 [INFO]  TestAccApiGet (0.580s)
2023/08/15 13:20:53 [INFO]  TestAccClustersList (0.900s)
2023/08/15 13:20:54 [INFO]  TestAccClustersGet (0.870s)
2023/08/15 13:21:06 [INFO]  TestAccFilerWorkspaceFilesReadWrite (11.980s)
2023/08/15 13:21:13 [INFO]  TestAccFilerWorkspaceFilesReadDir (7.060s)
2023/08/15 13:21:25 [INFO]  TestAccFilerDbfsReadWrite (12.810s)
2023/08/15 13:21:33 [INFO]  TestAccFilerDbfsReadDir (7.380s)
2023/08/15 13:21:41 [INFO]  TestAccFilerWorkspaceNotebookConflict (7.760s)
2023/08/15 13:21:49 [INFO]  TestAccFilerWorkspaceNotebookWithOverwriteFlag (8.660s)
2023/08/15 13:21:49 [INFO]  TestAccFilerLocalReadWrite (0.020s)
2023/08/15 13:21:49 [INFO]  TestAccFilerLocalReadDir (0.010s)
2023/08/15 13:21:52 [INFO]  TestAccFsCatForDbfs (3.190s)
2023/08/15 13:21:53 [INFO]  TestAccFsCatForDbfsOnNonExistentFile (0.890s)
2023/08/15 13:21:54 [INFO]  TestAccFsCatForDbfsInvalidScheme (0.600s)
2023/08/15 13:21:57 [INFO]  TestAccFsCatDoesNotSupportOutputModeJson (2.960s)
2023/08/15 13:22:28 [INFO]  TestAccFsCpDir (31.480s)
2023/08/15 13:22:43 [INFO]  TestAccFsCpFileToFile (14.530s)
2023/08/15 13:22:58 [INFO]  TestAccFsCpFileToDir (14.610s)
2023/08/15 13:23:29 [INFO]  TestAccFsCpDirToDirFileNotOverwritten (31.810s)
2023/08/15 13:23:47 [INFO]  TestAccFsCpFileToDirFileNotOverwritten (17.500s)
2023/08/15 13:24:04 [INFO]  TestAccFsCpFileToFileFileNotOverwritten (17.260s)
2023/08/15 13:24:37 [INFO]  TestAccFsCpDirToDirWithOverwriteFlag (32.690s)
2023/08/15 13:24:56 [INFO]  TestAccFsCpFileToFileWithOverwriteFlag (19.290s)
2023/08/15 13:25:15 [INFO]  TestAccFsCpFileToDirWithOverwriteFlag (19.230s)
2023/08/15 13:25:17 [INFO]  TestAccFsCpErrorsWhenSourceIsDirWithoutRecursiveFlag (2.010s)
2023/08/15 13:25:18 [INFO]  TestAccFsCpErrorsOnInvalidScheme (0.610s)
2023/08/15 13:25:33 [INFO]  TestAccFsCpSourceIsDirectoryButTargetIsFile (14.900s)
2023/08/15 13:25:37 [INFO]  TestAccFsLsForDbfs (3.770s)
2023/08/15 13:25:41 [INFO]  TestAccFsLsForDbfsWithAbsolutePaths (4.160s)
2023/08/15 13:25:44 [INFO]  TestAccFsLsForDbfsOnFile (2.990s)
2023/08/15 13:25:46 [INFO]  TestAccFsLsForDbfsOnEmptyDir (1.870s)
2023/08/15 13:25:46 [INFO]  TestAccFsLsForDbfsForNonexistingDir (0.850s)
2023/08/15 13:25:47 [INFO]  TestAccFsLsWithoutScheme (0.560s)
2023/08/15 13:25:49 [INFO]  TestAccFsMkdirCreatesDirectory (2.310s)
2023/08/15 13:25:52 [INFO]  TestAccFsMkdirCreatesMultipleDirectories (2.920s)
2023/08/15 13:25:55 [INFO]  TestAccFsMkdirWhenDirectoryAlreadyExists (2.320s)
2023/08/15 13:25:57 [INFO]  TestAccFsMkdirWhenFileExistsAtPath (2.820s)
2023/08/15 13:26:01 [INFO]  TestAccFsRmForFile (4.030s)
2023/08/15 13:26:05 [INFO]  TestAccFsRmForEmptyDirectory (3.530s)
2023/08/15 13:26:08 [INFO]  TestAccFsRmForNonEmptyDirectory (3.190s)
2023/08/15 13:26:09 [INFO]  TestAccFsRmForNonExistentFile (0.830s)
2023/08/15 13:26:13 [INFO]  TestAccFsRmForNonEmptyDirectoryWithRecursiveFlag (3.580s)
2023/08/15 13:26:13 [INFO]  TestAccGitClone (0.800s)
2023/08/15 13:26:14 [INFO]  TestAccGitCloneWithOnlyRepoNameOnAlternateBranch (0.790s)
2023/08/15 13:26:15 [INFO]  TestAccGitCloneErrorsWhenRepositoryDoesNotExist (0.540s)
2023/08/15 13:26:23 [INFO]  TestAccLock (8.630s)
2023/08/15 13:26:27 [INFO]  TestAccLockUnlockWithoutAllowsLockFileNotExist (3.490s)
2023/08/15 13:26:30 [INFO]  TestAccLockUnlockWithAllowsLockFileNotExist (3.130s)
2023/08/15 13:26:39 [INFO]  TestAccSyncFullFileSync (9.370s)
2023/08/15 13:26:50 [INFO]  TestAccSyncIncrementalFileSync (10.390s)
2023/08/15 13:27:00 [INFO]  TestAccSyncNestedFolderSync (10.680s)
2023/08/15 13:27:11 [INFO]  TestAccSyncNestedFolderDoesntFailOnNonEmptyDirectory (10.970s)
2023/08/15 13:27:22 [INFO]  TestAccSyncNestedSpacePlusAndHashAreEscapedSync (10.930s)
2023/08/15 13:27:29 [INFO]  TestAccSyncIncrementalFileOverwritesFolder (7.020s)
2023/08/15 13:27:37 [INFO]  TestAccSyncIncrementalSyncPythonNotebookToFile (7.380s)
2023/08/15 13:27:43 [INFO]  TestAccSyncIncrementalSyncFileToPythonNotebook (6.050s)
2023/08/15 13:27:48 [INFO]  TestAccSyncIncrementalSyncPythonNotebookDelete (5.390s)
2023/08/15 13:27:51 [INFO]  TestAccSyncEnsureRemotePathIsUsableIfRepoDoesntExist (2.570s)
2023/08/15 13:27:56 [INFO]  TestAccSyncEnsureRemotePathIsUsableIfRepoExists (5.540s)
2023/08/15 13:27:58 [INFO]  TestAccSyncEnsureRemotePathIsUsableInWorkspace (1.840s)
2023/08/15 13:27:59 [INFO]  TestAccWorkspaceList (0.790s)
2023/08/15 13:28:08 [INFO]  TestAccExportDir (8.860s)
2023/08/15 13:28:11 [INFO]  TestAccExportDirDoesNotOverwrite (3.090s)
2023/08/15 13:28:14 [INFO]  TestAccExportDirWithOverwriteFlag (3.500s)
2023/08/15 13:28:23 [INFO]  TestAccImportDir (8.330s)
2023/08/15 13:28:34 [INFO]  TestAccImportDirDoesNotOverwrite (10.970s)
2023/08/15 13:28:44 [INFO]  TestAccImportDirWithOverwriteFlag (10.130s)
2023/08/15 13:28:44 [INFO]  68/68 passed, 0 failed, 3 skipped
```
2023-08-15 13:50:40 +00:00
shreyas-goenka 3d03df8844
Add mode default as a valid set value for progress-format flag (#472)
Manually tested that `progress-format` works fine for all three modes.
2023-06-14 13:26:56 +02:00
Pieter Noordhuis 98ebb78c9b
Rename bricks -> databricks (#389)
## Changes

Rename all instances of "bricks" to "databricks".

## Tests

* Confirmed the goreleaser build works, uses the correct new binary
name, and produces the right archives.
* Help output is confirmed to be correct.
* Output of `git grep -w bricks` is minimal with a couple changes
remaining for after the repository rename.
2023-05-16 18:35:39 +02:00
Serge Smertin 4c4a293015
Added OpenAPI command coverage (#357)
This PR adds the following command groups (a brief usage sketch follows the lists):

## Workspace-level command groups

 * `bricks alerts` - The alerts API can be used to perform CRUD operations on alerts.
 * `bricks catalogs` - A catalog is the first layer of Unity Catalog’s three-level namespace.
 * `bricks cluster-policies` - Cluster policy limits the ability to configure clusters based on a set of rules.
 * `bricks clusters` - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.
 * `bricks current-user` - This API allows retrieving information about the currently authenticated user or service principal.
 * `bricks dashboards` - In general, there is little need to modify dashboards using the API.
 * `bricks data-sources` - This API is provided to assist you in making new query objects.
 * `bricks experiments` - MLflow Experiment tracking.
 * `bricks external-locations` - An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
 * `bricks functions` - Functions implement User-Defined Functions (UDFs) in Unity Catalog.
 * `bricks git-credentials` - Registers personal access token for Databricks to do operations on behalf of the user.
 * `bricks global-init-scripts` - The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
 * `bricks grants` - In Unity Catalog, data is secure by default.
 * `bricks groups` - Groups simplify identity management, making it easier to assign access to Databricks Workspace, data, and other securable objects.
 * `bricks instance-pools` - The Instance Pools API is used to create, edit, delete, and list instance pools, which use ready-to-use cloud instances to reduce cluster start and auto-scaling times.
 * `bricks instance-profiles` - The Instance Profiles API allows admins to add, list, and remove instance profiles that users can launch clusters with.
 * `bricks ip-access-lists` - IP Access List enables admins to configure IP access lists.
 * `bricks jobs` - The Jobs API allows you to create, edit, and delete jobs.
 * `bricks libraries` - The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
 * `bricks metastores` - A metastore is the top-level container of objects in Unity Catalog.
 * `bricks model-registry` - MLflow Model Registry commands.
 * `bricks permissions` - The Permissions API is used to create, read, write, edit, update, and manage access for various users on different objects and endpoints.
 * `bricks pipelines` - The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
 * `bricks policy-families` - View available policy families.
 * `bricks providers` - Databricks Providers REST API.
 * `bricks queries` - These endpoints are used for CRUD operations on query definitions.
 * `bricks query-history` - Access the history of queries through SQL warehouses.
 * `bricks recipient-activation` - Databricks Recipient Activation REST API.
 * `bricks recipients` - Databricks Recipients REST API.
 * `bricks repos` - The Repos API allows users to manage their git repos.
 * `bricks schemas` - A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.
 * `bricks secrets` - The Secrets API allows you to manage secrets, secret scopes, and access permissions.
 * `bricks service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks serving-endpoints` - The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
 * `bricks shares` - Databricks Shares REST API.
 * `bricks storage-credentials` - A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant.
 * `bricks table-constraints` - Primary key and foreign key constraints encode relationships between fields in tables.
 * `bricks tables` - A table resides in the third layer of Unity Catalog’s three-level namespace.
 * `bricks token-management` - Enables administrators to get all tokens and delete tokens for other users.
 * `bricks tokens` - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
 * `bricks users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks volumes` - Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files.
 * `bricks warehouses` - A SQL warehouse is a compute resource that lets you run SQL commands on data objects within Databricks SQL.
 * `bricks workspace` - The Workspace API allows you to list, import, export, and delete notebooks and folders.
 * `bricks workspace-conf` - This API allows updating known workspace settings for advanced users.

## Account-level command groups

 * `bricks account billable-usage` - This API allows you to download billable usage logs for the specified account and date range.
 * `bricks account budgets` - These APIs manage budget configuration including notifications for exceeding a budget for a period.
 * `bricks account credentials` - These APIs manage credential configurations for this workspace.
 * `bricks account custom-app-integration` - These APIs enable administrators to manage custom oauth app integrations, which is required for adding/using Custom OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account encryption-keys` - These APIs manage encryption key configurations for this workspace (optional).
 * `bricks account groups` - Groups simplify identity management, making it easier to assign access to Databricks Account, data, and other securable objects.
 * `bricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
 * `bricks account log-delivery` - These APIs manage log delivery configurations for this account.
 * `bricks account metastore-assignments` - These APIs manage metastore assignments to a workspace.
 * `bricks account metastores` - These APIs manage Unity Catalog metastores for an account.
 * `bricks account networks` - These APIs manage network configurations for customer-managed VPCs (optional).
 * `bricks account o-auth-enrollment` - These APIs enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
 * `bricks account private-access` - These APIs manage private access settings for this account.
 * `bricks account published-app-integration` - These APIs enable administrators to manage published oauth app integrations, which is required for adding/using Published OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
 * `bricks account service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
 * `bricks account storage` - These APIs manage storage configurations for this workspace.
 * `bricks account storage-credentials` - These APIs manage storage credentials for a particular metastore.
 * `bricks account users` - User identities recognized by Databricks and represented by email addresses.
 * `bricks account vpc-endpoints` - These APIs manage VPC endpoint configurations for this account.
 * `bricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
 * `bricks account workspaces` - These APIs manage workspaces for this account.
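
As a usage sketch (hypothetical invocations; the generated subcommands mirror the corresponding Go SDK service methods and may differ in detail):

```
bricks clusters list
bricks account workspaces list
```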
2023-04-26 13:06:16 +02:00
Pieter Noordhuis 33645ae6ef
Revert "Configure log level to info by default (#267)" (#307)
## Changes

This reverts commit e7a7e5b95a.

Job and pipeline runs print progress information now. No need to
continue to rely on logging for this.

## Tests
2023-04-05 15:37:09 +02:00
shreyas-goenka 8fd3dccca9
Add progress logs for job runs (#276) 2023-03-29 14:58:09 +02:00
Pieter Noordhuis 7dcc0d4b41
Fix test (#268)
Follow up to #267.
2023-03-21 16:34:16 +01:00
Pieter Noordhuis e7a7e5b95a
Configure log level to info by default (#267)
Note: we log at INFO level by default until
we implement progress reporting to stdout/stderr.
2023-03-21 16:14:20 +01:00
Pieter Noordhuis 32a29c6af4
Add structured logging infrastructure (#246)
New global flags (usage sketch below):
* `--log-file FILE`: can be literal `stdout`, `stderr`, or a file name (default `stderr`)
* `--log-level LEVEL`: can be `error`, `warn`, `info`, `debug`, `trace`, or `disabled` (default `disabled`)
* `--log-format TYPE`: can be `text` or `json` (default `text`)
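
For illustration, these flags combine with any command like so (hedged example; `bricks` was the binary name at the time, and `<command>` is a placeholder):

```
bricks <command> --log-level debug --log-format json --log-file stderr
```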

New functions in the `log` package take a `context.Context` and retrieve
the logger from said context.

Because we carry the logger in a context, adding
[attributes](https://pkg.go.dev/golang.org/x/exp/slog#hdr-Attrs_and_Values)
to the logger can be done as follows:

```go
ctx = log.NewContext(ctx, log.GetLogger(ctx).With("foo", "bar"))
```
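
A hedged sketch of how this composes down a call stack, in the same fragment style as above (it assumes `GetLogger` returns an `slog.Logger`-style value, as the `With` call implies; the function names are illustrative):

```go
// Illustrative fragment; "log" is the package described above.
func handle(ctx context.Context, path string) {
	// Attach a request-scoped attribute to the context logger.
	ctx = log.NewContext(ctx, log.GetLogger(ctx).With("path", path))
	doWork(ctx)
}

func doWork(ctx context.Context) {
	// The "path" attribute added by the caller is included automatically.
	log.GetLogger(ctx).Info("processing")
}
```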
2023-03-16 14:46:53 +01:00
Pieter Noordhuis e872b587cc
Add optional JSON output for sync command (#230)
JSON output makes it easy to process synchronization progress
information in downstream tools (e.g. the VS Code extension).
This change introduces a `sync.Event` interface type for progress events, as
well as a `sync.EventNotifier` that lets the sync code pass
progress events along to calling code (a consumer sketch follows the examples below).

Example output in text mode (the default; this uses the existing logger calls):
```text
2023/03/03 14:07:17 [INFO] Remote file sync location: /Repos/pieter.noordhuis@databricks.com/...
2023/03/03 14:07:18 [INFO] Initial Sync Complete
2023/03/03 14:07:22 [INFO] Action: PUT: foo
2023/03/03 14:07:23 [INFO] Uploaded foo
2023/03/03 14:07:23 [INFO] Complete
2023/03/03 14:07:25 [INFO] Action: DELETE: foo
2023/03/03 14:07:25 [INFO] Deleted foo
2023/03/03 14:07:25 [INFO] Complete
```

Example output in JSON mode:
```json
{"timestamp":"2023-03-03T14:08:15.459439+01:00","seq":0,"type":"start"}
{"timestamp":"2023-03-03T14:08:15.459461+01:00","seq":0,"type":"complete"}
{"timestamp":"2023-03-03T14:08:18.459821+01:00","seq":1,"type":"start","put":["foo"]}
{"timestamp":"2023-03-03T14:08:18.459867+01:00","seq":1,"type":"progress","action":"put","path":"foo","progress":0}
{"timestamp":"2023-03-03T14:08:19.418696+01:00","seq":1,"type":"progress","action":"put","path":"foo","progress":1}
{"timestamp":"2023-03-03T14:08:19.421397+01:00","seq":1,"type":"complete","put":["foo"]}
{"timestamp":"2023-03-03T14:08:22.459238+01:00","seq":2,"type":"start","delete":["foo"]}
{"timestamp":"2023-03-03T14:08:22.459268+01:00","seq":2,"type":"progress","action":"delete","path":"foo","progress":0}
{"timestamp":"2023-03-03T14:08:22.686413+01:00","seq":2,"type":"progress","action":"delete","path":"foo","progress":1}
{"timestamp":"2023-03-03T14:08:22.688989+01:00","seq":2,"type":"complete","delete":["foo"]}
```
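
As a hedged sketch of a downstream consumer, the program below mirrors only the fields visible in the JSON example above; the struct is not the actual `sync.Event` definition from this change:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// syncEvent mirrors the fields visible in the JSON example above.
type syncEvent struct {
	Timestamp time.Time `json:"timestamp"`
	Seq       int       `json:"seq"`
	Type      string    `json:"type"` // "start", "progress", or "complete"
	Action    string    `json:"action,omitempty"`
	Path      string    `json:"path,omitempty"`
	Progress  float64   `json:"progress,omitempty"`
	Put       []string  `json:"put,omitempty"`
	Delete    []string  `json:"delete,omitempty"`
}

func main() {
	// Read events line by line, e.g. piped from the sync command.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		var ev syncEvent
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		if ev.Type == "complete" {
			fmt.Printf("seq %d complete (put=%v, delete=%v)\n", ev.Seq, ev.Put, ev.Delete)
		}
	}
}
```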

---------

Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
2023-03-08 10:27:19 +01:00