Added OpenAPI command coverage (#357)
This PR adds the following command groups:
## Workspace-level command groups
* `bricks alerts` - The alerts API can be used to perform CRUD operations on alerts.
* `bricks catalogs` - A catalog is the first layer of Unity Catalog’s three-level namespace.
* `bricks cluster-policies` - Cluster policy limits the ability to configure clusters based on a set of rules.
* `bricks clusters` - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.
* `bricks current-user` - This API allows retrieving information about the currently authenticated user or service principal.
* `bricks dashboards` - In general, there is little need to modify dashboards using the API.
* `bricks data-sources` - This API is provided to assist you in making new query objects.
* `bricks experiments` - MLflow Experiment tracking.
* `bricks external-locations` - An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
* `bricks functions` - Functions implement User-Defined Functions (UDFs) in Unity Catalog.
* `bricks git-credentials` - Registers personal access token for Databricks to do operations on behalf of the user.
* `bricks global-init-scripts` - The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
* `bricks grants` - In Unity Catalog, data is secure by default.
* `bricks groups` - Groups simplify identity management, making it easier to assign access to Databricks Workspace, data, and other securable objects.
* `bricks instance-pools` - The Instance Pools API is used to create, edit, delete, and list instance pools, which use ready-to-use cloud instances to reduce cluster start and auto-scaling times.
* `bricks instance-profiles` - The Instance Profiles API allows admins to add, list, and remove instance profiles that users can launch clusters with.
* `bricks ip-access-lists` - IP Access List enables admins to configure IP access lists.
* `bricks jobs` - The Jobs API allows you to create, edit, and delete jobs.
* `bricks libraries` - The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
* `bricks metastores` - A metastore is the top-level container of objects in Unity Catalog.
* `bricks model-registry` - MLflow Model Registry commands.
* `bricks permissions` - The Permissions API is used to create, read, write, edit, update, and manage access for various users on different objects and endpoints.
* `bricks pipelines` - The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
* `bricks policy-families` - View available policy families.
* `bricks providers` - Databricks Providers REST API.
* `bricks queries` - These endpoints are used for CRUD operations on query definitions.
* `bricks query-history` - Access the history of queries through SQL warehouses.
* `bricks recipient-activation` - Databricks Recipient Activation REST API.
* `bricks recipients` - Databricks Recipients REST API.
* `bricks repos` - The Repos API allows users to manage their git repos.
* `bricks schemas` - A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.
* `bricks secrets` - The Secrets API allows you to manage secrets, secret scopes, and access permissions.
* `bricks service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
* `bricks serving-endpoints` - The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
* `bricks shares` - Databricks Shares REST API.
* `bricks storage-credentials` - A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant.
* `bricks table-constraints` - Primary key and foreign key constraints encode relationships between fields in tables.
* `bricks tables` - A table resides in the third layer of Unity Catalog’s three-level namespace.
* `bricks token-management` - Enables administrators to get all tokens and delete tokens for other users.
* `bricks tokens` - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
* `bricks users` - User identities recognized by Databricks and represented by email addresses.
* `bricks volumes` - Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files.
* `bricks warehouses` - A SQL warehouse is a compute resource that lets you run SQL commands on data objects within Databricks SQL.
* `bricks workspace` - The Workspace API allows you to list, import, export, and delete notebooks and folders.
* `bricks workspace-conf` - This API allows updating known workspace settings for advanced users.
## Account-level command groups
* `bricks account billable-usage` - This API allows you to download billable usage logs for the specified account and date range.
* `bricks account budgets` - These APIs manage budget configuration including notifications for exceeding a budget for a period.
* `bricks account credentials` - These APIs manage credential configurations for this workspace.
* `bricks account custom-app-integration` - These APIs enable administrators to manage custom oauth app integrations, which is required for adding/using Custom OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
* `bricks account encryption-keys` - These APIs manage encryption key configurations for this workspace (optional).
* `bricks account groups` - Groups simplify identity management, making it easier to assign access to Databricks Account, data, and other securable objects.
* `bricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
* `bricks account log-delivery` - These APIs manage log delivery configurations for this account (see the Go sketch after this list).
* `bricks account metastore-assignments` - These APIs manage metastore assignments to a workspace.
* `bricks account metastores` - These APIs manage Unity Catalog metastores for an account.
* `bricks account networks` - These APIs manage network configurations for customer-managed VPCs (optional).
* `bricks account o-auth-enrollment` - These APIs enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
* `bricks account private-access` - These APIs manage private access settings for this account.
* `bricks account published-app-integration` - These APIs enable administrators to manage published oauth app integrations, which is required for adding/using Published OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
* `bricks account service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
* `bricks account storage` - These APIs manage storage configurations for this workspace.
* `bricks account storage-credentials` - These APIs manage storage credentials for a particular metastore.
* `bricks account users` - User identities recognized by Databricks and represented by email addresses.
* `bricks account vpc-endpoints` - These APIs manage VPC endpoint configurations for this account.
* `bricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
* `bricks account workspaces` - These APIs manage workspaces for this account.
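
Each of these command groups is a thin wrapper over the matching service client in the Databricks Go SDK. As a rough, minimal sketch of what `bricks account log-delivery list` does under the hood, the snippet below calls the same `billing` service directly. It assumes the SDK's `NewAccountClient` constructor resolves the host, account ID, and credentials from the environment, and the printed field names are taken from the REST API shape rather than verified against the generated structs.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/billing"
)

func main() {
	ctx := context.Background()

	// Assumption: DATABRICKS_HOST, DATABRICKS_ACCOUNT_ID and credentials are
	// provided via the environment and picked up by the default config chain.
	a, err := databricks.NewAccountClient()
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `bricks account log-delivery list`.
	configs, err := a.LogDelivery.ListAll(ctx, billing.ListLogDeliveryRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range configs {
		// Field names assumed from the REST API (config_id, config_name, status).
		fmt.Println(c.ConfigId, c.ConfigName, c.Status)
	}
}
```

The generated commands layer argument parsing, optional `--json` request bodies, and `cmdio`-based output rendering on top of calls like this.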
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package log_delivery

import (
	"fmt"

"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/billing"
"github.com/spf13/cobra"
)

var Cmd = &cobra.Command{
	Use:   "log-delivery",
	Short: `These APIs manage log delivery configurations for this account.`,
	Long: `These APIs manage log delivery configurations for this account. The two
  supported log types for this API are _billable usage logs_ and _audit logs_.
  This feature is in Public Preview. This feature works with all account ID
  types.

  Log delivery works with all account types. However, if your account is on the
  E2 version of the platform or on a select custom plan that allows multiple
  workspaces per account, you can optionally configure different storage
  destinations for each workspace. Log delivery status is also provided to know
  the latest status of log delivery attempts. The high-level flow of billable
  usage delivery:

  1. **Create storage**: In AWS, [create a new AWS S3 bucket] with a specific
  bucket policy. Using Databricks APIs, call the Account API to create a
  [storage configuration object](#operation/create-storage-config) that uses the
  bucket name. 2. **Create credentials**: In AWS, create the appropriate AWS IAM
  role. For full details, including the required IAM role policies and trust
  relationship, see [Billable usage log delivery]. Using Databricks APIs, call
  the Account API to create a [credential configuration
  object](#operation/create-credential-config) that uses the IAM role's ARN. 3.
  **Create log delivery configuration**: Using Databricks APIs, call the Account
  API to [create a log delivery
  configuration](#operation/create-log-delivery-config) that uses the credential
  and storage configuration objects from previous steps. You can specify if the
  logs should include all events of that log type in your account (_Account
  level_ delivery) or only events for a specific set of workspaces (_workspace
  level_ delivery). Account level log delivery applies to all current and future
  workspaces plus account level logs, while workspace level log delivery solely
  delivers logs related to the specified workspaces. You can create multiple
  types of delivery configurations per account.

  For billable usage delivery: * For more information about billable usage logs,
  see [Billable usage log delivery]. For the CSV schema, see the [Usage page]. *
  The delivery location is <bucket-name>/<prefix>/billable-usage/csv/, where
  <prefix> is the name of the optional delivery path prefix you set up during
  log delivery configuration. Files are named
  workspaceId=<workspace-id>-usageMonth=<month>.csv. * All billable usage logs
  apply to specific workspaces (_workspace level_ logs). You can aggregate usage
  for your entire account by creating an _account level_ delivery configuration
  that delivers logs for all current and future workspaces in your account. *
  The files are delivered daily by overwriting the month's CSV file for each
  workspace.

  For audit log delivery: * For more information about audit log delivery, see
  [Audit log delivery], which includes information about the used JSON schema. *
  The delivery location is
  <bucket-name>/<delivery-path-prefix>/workspaceId=<workspaceId>/date=<yyyy-mm-dd>/auditlogs_<internal-id>.json.
  Files may get overwritten with the same content multiple times to achieve
  exactly-once delivery. * If the audit log delivery configuration included
  specific workspace IDs, only _workspace-level_ audit logs for those workspaces
  are delivered. If the log delivery configuration applies to the entire account
  (_account level_ delivery configuration), the audit log delivery includes
  workspace-level audit logs for all workspaces in the account as well as
  account-level audit logs. See [Audit log delivery] for details. * Auditable
  events are typically available in logs within 15 minutes.

  [Audit log delivery]: https://docs.databricks.com/administration-guide/account-settings/audit-logs.html
  [Billable usage log delivery]: https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html
  [Usage page]: https://docs.databricks.com/administration-guide/account-settings/usage.html
  [create a new AWS S3 bucket]: https://docs.databricks.com/administration-guide/account-api/aws-storage.html`,
}
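
// Note: this command group is surfaced as `bricks account log-delivery`. The
// storage and credential configuration objects referenced in the flow above
// are managed by the sibling `bricks account storage` and `bricks account
// credentials` command groups; the create subcommand below wires the resulting
// IDs into a log delivery configuration.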

// start create command

var createReq billing.WrappedCreateLogDeliveryConfiguration
var createJson flags.JsonFlag

func init() {
	Cmd.AddCommand(createCmd)
	// TODO: short flags
	createCmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
	// TODO: complex arg: log_delivery_configuration
}

var createCmd = &cobra.Command{
	Use:   "create",
	Short: `Create a new log delivery configuration.`,
	Long: `Create a new log delivery configuration.

  Creates a new Databricks log delivery configuration to enable delivery of the
  specified type of logs to your storage location. This requires that you
  already created a [credential object](#operation/create-credential-config)
  (which encapsulates a cross-account service IAM role) and a [storage
  configuration object](#operation/create-storage-config) (which encapsulates an
  S3 bucket).

  For full details, including the required IAM role policies and bucket
  policies, see [Deliver and access billable usage logs] or [Configure audit
  logging].

  **Note**: There is a limit on the number of log delivery configurations
  available per account (each limit applies separately to each log type
  including billable usage and audit logs). You can create a maximum of two
  enabled account-level delivery configurations (configurations without a
  workspace filter) per type. Additionally, you can create two enabled
  workspace-level delivery configurations per workspace for each log type, which
  means that the same workspace ID can occur in the workspace filter for no more
  than two delivery configurations per log type.

  You cannot delete a log delivery configuration, but you can disable it (see
  [Enable or disable log delivery
  configuration](#operation/patch-log-delivery-config-status)).

  [Configure audit logging]: https://docs.databricks.com/administration-guide/account-settings/audit-logs.html
  [Deliver and access billable usage logs]: https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html`,
	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		err = createJson.Unmarshal(&createReq)
		if err != nil {
			return err
		}
		response, err := a.LogDelivery.Create(ctx, createReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
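
// Example (sketch): a request body accepted by `bricks account log-delivery
// create --json @config.json` (or as an inline JSON string). The top-level
// log_delivery_configuration key mirrors the TODO above; the nested field
// names follow the public REST API and are assumptions, not values verified
// against the generated SDK structs:
//
//	{
//	  "log_delivery_configuration": {
//	    "config_name": "billable-usage-logs",
//	    "log_type": "BILLABLE_USAGE",
//	    "output_format": "CSV",
//	    "credentials_id": "<credential-configuration-id>",
//	    "storage_configuration_id": "<storage-configuration-id>"
//	  }
//	}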

// start get command

var getReq billing.GetLogDeliveryRequest

func init() {
	Cmd.AddCommand(getCmd)
	// TODO: short flags
}

var getCmd = &cobra.Command{
	Use:   "get LOG_DELIVERY_CONFIGURATION_ID",
	Short: `Get log delivery configuration.`,
	Long: `Get log delivery configuration.

  Gets a Databricks log delivery configuration object for an account, both
  specified by ID.`,
	Annotations: map[string]string{},
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		if len(args) == 0 {
			names, err := a.LogDelivery.LogDeliveryConfigurationConfigNameToConfigIdMap(ctx, billing.ListLogDeliveryRequest{})
			if err != nil {
				return err
			}
			id, err := cmdio.Select(ctx, names, "Databricks log delivery configuration ID")
			if err != nil {
				return err
			}
			args = append(args, id)
		}
		if len(args) != 1 {
			return fmt.Errorf("expected to have databricks log delivery configuration id")
		}
		getReq.LogDeliveryConfigurationId = args[0]

		response, err := a.LogDelivery.Get(ctx, getReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
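
// Note: when LOG_DELIVERY_CONFIGURATION_ID is omitted, the command falls back
// to listing all configurations and prompting for one interactively via
// cmdio.Select, so `bricks account log-delivery get` works without knowing the
// configuration ID up front.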

// start list command

var listReq billing.ListLogDeliveryRequest

func init() {
	Cmd.AddCommand(listCmd)
	// TODO: short flags
	listCmd.Flags().StringVar(&listReq.CredentialsId, "credentials-id", listReq.CredentialsId, `Filter by credential configuration ID.`)
	listCmd.Flags().Var(&listReq.Status, "status", `Filter by status ENABLED or DISABLED.`)
	listCmd.Flags().StringVar(&listReq.StorageConfigurationId, "storage-configuration-id", listReq.StorageConfigurationId, `Filter by storage configuration ID.`)
}

var listCmd = &cobra.Command{
	Use:   "list",
	Short: `Get all log delivery configurations.`,
	Long: `Get all log delivery configurations.

  Gets all Databricks log delivery configurations associated with an account
  specified by ID.`,
	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(0),
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		response, err := a.LogDelivery.ListAll(ctx, listReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
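
// Example (sketch): narrow the listing with the flags defined above, e.g.
//
//	bricks account log-delivery list --status ENABLED
//	bricks account log-delivery list --credentials-id <credential-configuration-id>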

// start patch-status command

var patchStatusReq billing.UpdateLogDeliveryConfigurationStatusRequest

func init() {
	Cmd.AddCommand(patchStatusCmd)
	// TODO: short flags
}

var patchStatusCmd = &cobra.Command{
	Use:   "patch-status STATUS LOG_DELIVERY_CONFIGURATION_ID",
	Short: `Enable or disable log delivery configuration.`,
	Long: `Enable or disable log delivery configuration.

  Enables or disables a log delivery configuration. Deletion of delivery
  configurations is not supported, so disable log delivery configurations that
  are no longer needed. Note that you can't re-enable a delivery configuration
  if this would violate the delivery configuration limits described under
  [Create log delivery](#operation/create-log-delivery-config).`,
	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(2),
	PreRunE:     root.MustAccountClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		a := root.AccountClient(ctx)
		_, err = fmt.Sscan(args[0], &patchStatusReq.Status)
		if err != nil {
			return fmt.Errorf("invalid STATUS: %s", args[0])
		}
		patchStatusReq.LogDeliveryConfigurationId = args[1]

		err = a.LogDelivery.PatchStatus(ctx, patchStatusReq)
		if err != nil {
			return err
		}
		return nil
	},
}
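
// Example (sketch): delivery configurations cannot be deleted, so disable one
// that is no longer needed instead:
//
//	bricks account log-delivery patch-status DISABLED <log-delivery-configuration-id>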
// end service LogDelivery