Added OpenAPI command coverage (#357)
This PR adds the following command groups:
## Workspace-level command groups
* `bricks alerts` - The alerts API can be used to perform CRUD operations on alerts.
* `bricks catalogs` - A catalog is the first layer of Unity Catalog’s three-level namespace.
* `bricks cluster-policies` - Cluster policy limits the ability to configure clusters based on a set of rules.
* `bricks clusters` - The Clusters API allows you to create, start, edit, list, terminate, and delete clusters.
* `bricks current-user` - This API allows retrieving information about the currently authenticated user or service principal.
* `bricks dashboards` - In general, there is little need to modify dashboards using the API.
* `bricks data-sources` - This API is provided to assist you in making new query objects.
* `bricks experiments` - MLflow Experiment tracking.
* `bricks external-locations` - An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path.
* `bricks functions` - Functions implement User-Defined Functions (UDFs) in Unity Catalog.
* `bricks git-credentials` - Registers a personal access token for Databricks to perform operations on behalf of the user.
* `bricks global-init-scripts` - The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
* `bricks grants` - In Unity Catalog, data is secure by default.
* `bricks groups` - Groups simplify identity management, making it easier to assign access to Databricks Workspace, data, and other securable objects.
* `bricks instance-pools` - The Instance Pools API is used to create, edit, delete, and list instance pools, which use ready-to-use cloud instances to reduce cluster start and auto-scaling times.
* `bricks instance-profiles` - The Instance Profiles API allows admins to add, list, and remove instance profiles that users can launch clusters with.
* `bricks ip-access-lists` - The IP Access Lists API enables admins to configure IP access lists.
* `bricks jobs` - The Jobs API allows you to create, edit, and delete jobs.
* `bricks libraries` - The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
* `bricks metastores` - A metastore is the top-level container of objects in Unity Catalog.
* `bricks model-registry` - MLflow Model Registry commands.
* `bricks permissions` - The Permissions API is used to create read, write, edit, update, and manage access for various users on different objects and endpoints.
* `bricks pipelines` - The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
* `bricks policy-families` - View available policy families.
* `bricks providers` - Databricks Providers REST API.
* `bricks queries` - These endpoints are used for CRUD operations on query definitions.
* `bricks query-history` - Access the history of queries through SQL warehouses.
* `bricks recipient-activation` - Databricks Recipient Activation REST API.
* `bricks recipients` - Databricks Recipients REST API.
* `bricks repos` - The Repos API allows users to manage their git repos.
* `bricks schemas` - A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace.
* `bricks secrets` - The Secrets API allows you to manage secrets, secret scopes, and access permissions.
* `bricks service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
* `bricks serving-endpoints` - The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
* `bricks shares` - Databricks Shares REST API.
* `bricks storage-credentials` - A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant.
* `bricks table-constraints` - Primary key and foreign key constraints encode relationships between fields in tables.
* `bricks tables` - A table resides in the third layer of Unity Catalog’s three-level namespace.
* `bricks token-management` - Enables administrators to get all tokens and delete tokens for other users.
* `bricks tokens` - The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
* `bricks users` - User identities recognized by Databricks and represented by email addresses.
* `bricks volumes` - Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files.
* `bricks warehouses` - A SQL warehouse is a compute resource that lets you run SQL commands on data objects within Databricks SQL.
* `bricks workspace` - The Workspace API allows you to list, import, export, and delete notebooks and folders.
* `bricks workspace-conf` - This API allows updating known workspace settings for advanced users.
## Account-level command groups
* `bricks account billable-usage` - This API allows you to download billable usage logs for the specified account and date range.
* `bricks account budgets` - These APIs manage budget configuration including notifications for exceeding a budget for a period.
* `bricks account credentials` - These APIs manage credential configurations for this workspace.
* `bricks account custom-app-integration` - These APIs enable administrators to manage custom OAuth app integrations, which is required for adding/using Custom OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
* `bricks account encryption-keys` - These APIs manage encryption key configurations for this workspace (optional).
* `bricks account groups` - Groups simplify identity management, making it easier to assign access to Databricks Account, data, and other securable objects.
* `bricks account ip-access-lists` - The Accounts IP Access List API enables account admins to configure IP access lists for access to the account console.
* `bricks account log-delivery` - These APIs manage log delivery configurations for this account.
* `bricks account metastore-assignments` - These APIs manage metastore assignments to a workspace.
* `bricks account metastores` - These APIs manage Unity Catalog metastores for an account.
* `bricks account networks` - These APIs manage network configurations for customer-managed VPCs (optional).
* `bricks account o-auth-enrollment` - These APIs enable administrators to enroll OAuth for their accounts, which is required for adding/using any OAuth published/custom application integration.
* `bricks account private-access` - These APIs manage private access settings for this account.
* `bricks account published-app-integration` - These APIs enable administrators to manage published OAuth app integrations, which is required for adding/using Published OAuth App Integration like Tableau Cloud for Databricks in AWS cloud.
* `bricks account service-principals` - Identities for use with jobs, automated tools, and systems such as scripts, apps, and CI/CD platforms.
* `bricks account storage` - These APIs manage storage configurations for this workspace.
* `bricks account storage-credentials` - These APIs manage storage credentials for a particular metastore.
* `bricks account users` - User identities recognized by Databricks and represented by email addresses.
* `bricks account vpc-endpoints` - These APIs manage VPC endpoint configurations for this account.
* `bricks account workspace-assignment` - The Workspace Permission Assignment API allows you to manage workspace permissions for principals in your account.
* `bricks account workspaces` - These APIs manage workspaces for this account.
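Each command group is a thin, generated wrapper over the corresponding service client in the Databricks Go SDK. As a rough sketch only (the scope, key, principal, and secret values below are placeholders and not part of this PR), the `bricks secrets` commands added here dispatch to SDK calls along these lines:

```go
package main

import (
	"context"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/workspace"
)

func main() {
	ctx := context.Background()

	// Picks up authentication the same way the CLI does (environment variables or config profiles).
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// Roughly what `bricks secrets create-scope my-scope --initial-manage-principal users` dispatches to.
	err = w.Secrets.CreateScope(ctx, workspace.CreateScope{
		Scope:                  "my-scope",
		InitialManagePrincipal: "users",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Roughly what `bricks secrets put-secret my-scope db-password --string-value ...` dispatches to.
	err = w.Secrets.PutSecret(ctx, workspace.PutSecret{
		Scope:       "my-scope",
		Key:         "db-password",
		StringValue: "s3cr3t", // placeholder value
	})
	if err != nil {
		log.Fatal(err)
	}
}
```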
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package secrets
import (
"fmt"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/workspace"
"github.com/spf13/cobra"
)
var Cmd = & cobra . Command {
Use : "secrets" ,
Short : ` The Secrets API allows you to manage secrets, secret scopes, and access permissions. ` ,
Long : ` The Secrets API allows you to manage secrets , secret scopes , and access
permissions .
Sometimes accessing data requires that you authenticate to external data
sources through JDBC . Instead of directly entering your credentials into a
notebook , use Databricks secrets to store your credentials and reference them
in notebooks and jobs .
Administrators , secret creators , and users granted permission can read
Databricks secrets . While Databricks makes an effort to redact secret values
that might be displayed in notebooks , it is not possible to prevent such users
from reading secrets . ` ,
}
// start create-scope command

var createScopeReq workspace.CreateScope
var createScopeJson flags.JsonFlag

func init() {
	Cmd.AddCommand(createScopeCmd)
	// TODO: short flags
	createScopeCmd.Flags().Var(&createScopeJson, "json", `either inline JSON string or @path/to/file.json with request body`)

	createScopeCmd.Flags().StringVar(&createScopeReq.InitialManagePrincipal, "initial-manage-principal", createScopeReq.InitialManagePrincipal, `The principal that is initially granted MANAGE permission to the created scope.`)
	// TODO: complex arg: keyvault_metadata
	createScopeCmd.Flags().Var(&createScopeReq.ScopeBackendType, "scope-backend-type", `The backend type the scope will be created with.`)
}

var createScopeCmd = &cobra.Command{
	Use:   "create-scope",
	Short: `Create a new secret scope.`,
	Long: `Create a new secret scope.

  The scope name must consist of alphanumeric characters, dashes, underscores,
  and periods, and may not exceed 128 characters. The maximum number of scopes
  in a workspace is 100.`,

	Annotations: map[string]string{},
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		err = createScopeJson.Unmarshal(&createScopeReq)
		if err != nil {
			return err
		}
		createScopeReq.Scope = args[0]

		err = w.Secrets.CreateScope(ctx, createScopeReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// start delete-acl command

var deleteAclReq workspace.DeleteAcl

func init() {
	Cmd.AddCommand(deleteAclCmd)
	// TODO: short flags
}

var deleteAclCmd = &cobra.Command{
	Use:   "delete-acl SCOPE PRINCIPAL",
	Short: `Delete an ACL.`,
	Long: `Delete an ACL.

  Deletes the given ACL on the given scope.

  Users must have the MANAGE permission to invoke this API. Throws
  RESOURCE_DOES_NOT_EXIST if no such secret scope, principal, or ACL exists.
  Throws PERMISSION_DENIED if the user does not have permission to make this
  API call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(2),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		deleteAclReq.Scope = args[0]
		deleteAclReq.Principal = args[1]

		err = w.Secrets.DeleteAcl(ctx, deleteAclReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// start delete-scope command

var deleteScopeReq workspace.DeleteScope

func init() {
	Cmd.AddCommand(deleteScopeCmd)
	// TODO: short flags
}

var deleteScopeCmd = &cobra.Command{
	Use:   "delete-scope SCOPE",
	Short: `Delete a secret scope.`,
	Long: `Delete a secret scope.

  Deletes a secret scope.

  Throws RESOURCE_DOES_NOT_EXIST if the scope does not exist. Throws
  PERMISSION_DENIED if the user does not have permission to make this API
  call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(1),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		deleteScopeReq.Scope = args[0]

		err = w.Secrets.DeleteScope(ctx, deleteScopeReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// start delete-secret command

var deleteSecretReq workspace.DeleteSecret

func init() {
	Cmd.AddCommand(deleteSecretCmd)
	// TODO: short flags
}

var deleteSecretCmd = &cobra.Command{
	Use:   "delete-secret SCOPE KEY",
	Short: `Delete a secret.`,
	Long: `Delete a secret.

  Deletes the secret stored in this secret scope. You must have WRITE or
  MANAGE permission on the secret scope.

  Throws RESOURCE_DOES_NOT_EXIST if no such secret scope or secret exists.
  Throws PERMISSION_DENIED if the user does not have permission to make this
  API call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(2),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		deleteSecretReq.Scope = args[0]
		deleteSecretReq.Key = args[1]

		err = w.Secrets.DeleteSecret(ctx, deleteSecretReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// start get-acl command

var getAclReq workspace.GetAclRequest

func init() {
	Cmd.AddCommand(getAclCmd)
	// TODO: short flags
}

var getAclCmd = &cobra.Command{
	Use:   "get-acl SCOPE PRINCIPAL",
	Short: `Get secret ACL details.`,
	Long: `Get secret ACL details.

  Gets the details about the given ACL, such as the group and permission. Users
  must have the MANAGE permission to invoke this API.

  Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws
  PERMISSION_DENIED if the user does not have permission to make this API
  call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(2),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		getAclReq.Scope = args[0]
		getAclReq.Principal = args[1]

		response, err := w.Secrets.GetAcl(ctx, getAclReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
// start list-acls command

var listAclsReq workspace.ListAclsRequest

func init() {
	Cmd.AddCommand(listAclsCmd)
	// TODO: short flags
}

var listAclsCmd = &cobra.Command{
	Use:   "list-acls SCOPE",
	Short: `Lists ACLs.`,
	Long: `Lists ACLs.

  List the ACLs for a given secret scope. Users must have the MANAGE
  permission to invoke this API.

  Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws
  PERMISSION_DENIED if the user does not have permission to make this API
  call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(1),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		listAclsReq.Scope = args[0]

		response, err := w.Secrets.ListAclsAll(ctx, listAclsReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
// start list-scopes command

func init() {
	Cmd.AddCommand(listScopesCmd)
}

var listScopesCmd = &cobra.Command{
	Use:   "list-scopes",
	Short: `List all scopes.`,
	Long: `List all scopes.

  Lists all secret scopes available in the workspace.

  Throws PERMISSION_DENIED if the user does not have permission to make this
  API call.`,

	Annotations: map[string]string{},
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		response, err := w.Secrets.ListScopesAll(ctx)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
// start list-secrets command

var listSecretsReq workspace.ListSecretsRequest

func init() {
	Cmd.AddCommand(listSecretsCmd)
	// TODO: short flags
}

var listSecretsCmd = &cobra.Command{
	Use:   "list-secrets SCOPE",
	Short: `List secret keys.`,
	Long: `List secret keys.

  Lists the secret keys that are stored at this scope. This is a metadata-only
  operation; secret data cannot be retrieved using this API. Users need the READ
  permission to make this call.

  The lastUpdatedTimestamp returned is in milliseconds since epoch. Throws
  RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws
  PERMISSION_DENIED if the user does not have permission to make this API
  call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(1),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		listSecretsReq.Scope = args[0]

		response, err := w.Secrets.ListSecretsAll(ctx, listSecretsReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	},
}
// start put-acl command

var putAclReq workspace.PutAcl

func init() {
	Cmd.AddCommand(putAclCmd)
	// TODO: short flags
}

var putAclCmd = &cobra.Command{
	Use:   "put-acl SCOPE PRINCIPAL PERMISSION",
	Short: `Create/update an ACL.`,
	Long: `Create/update an ACL.

  Creates or overwrites the Access Control List (ACL) associated with the given
  principal (user or group) on the specified scope point.

  In general, a user or group will use the most powerful permission available to
  them, and permissions are ordered as follows:

  * MANAGE - Allowed to change ACLs, and read and write to this secret scope.
  * WRITE - Allowed to read and write to this secret scope.
  * READ - Allowed to read this secret scope and list what secrets are available.

  Note that in general, secret values can only be read from within a command on
  a cluster (for example, through a notebook). There is no API to read the
  actual secret value material outside of a cluster. However, the user's
  permission will be applied based on who is executing the command, and they
  must have at least READ permission.

  Users must have the MANAGE permission to invoke this API.

  The principal is a user or group name corresponding to an existing Databricks
  principal to be granted or revoked access.

  Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws
  RESOURCE_ALREADY_EXISTS if a permission for the principal already exists.
  Throws INVALID_PARAMETER_VALUE if the permission is invalid. Throws
  PERMISSION_DENIED if the user does not have permission to make this API
  call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(3),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		putAclReq.Scope = args[0]
		putAclReq.Principal = args[1]
		_, err = fmt.Sscan(args[2], &putAclReq.Permission)
		if err != nil {
			return fmt.Errorf("invalid PERMISSION: %s", args[2])
		}

		err = w.Secrets.PutAcl(ctx, putAclReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// start put-secret command

var putSecretReq workspace.PutSecret

func init() {
	Cmd.AddCommand(putSecretCmd)
	// TODO: short flags

	putSecretCmd.Flags().StringVar(&putSecretReq.BytesValue, "bytes-value", putSecretReq.BytesValue, `If specified, value will be stored as bytes.`)
	putSecretCmd.Flags().StringVar(&putSecretReq.StringValue, "string-value", putSecretReq.StringValue, `If specified, note that the value will be stored in UTF-8 (MB4) form.`)
}

var putSecretCmd = &cobra.Command{
	Use:   "put-secret SCOPE KEY",
	Short: `Add a secret.`,
	Long: `Add a secret.

  Inserts a secret under the provided scope with the given name. If a secret
  already exists with the same name, this command overwrites the existing
  secret's value. The server encrypts the secret using the secret scope's
  encryption settings before storing it.

  You must have WRITE or MANAGE permission on the secret scope. The secret
  key must consist of alphanumeric characters, dashes, underscores, and periods,
  and cannot exceed 128 characters. The maximum allowed secret value size is 128
  KB. The maximum number of secrets in a given scope is 1000.

  The input fields "string_value" or "bytes_value" specify the type of the
  secret, which will determine the value returned when the secret value is
  requested. Exactly one must be specified.

  Throws RESOURCE_DOES_NOT_EXIST if no such secret scope exists. Throws
  RESOURCE_LIMIT_EXCEEDED if maximum number of secrets in scope is exceeded.
  Throws INVALID_PARAMETER_VALUE if the key name or value length is invalid.
  Throws PERMISSION_DENIED if the user does not have permission to make this
  API call.`,

	Annotations: map[string]string{},
	Args:        cobra.ExactArgs(2),
	PreRunE:     root.MustWorkspaceClient,
	RunE: func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)
		putSecretReq.Scope = args[0]
		putSecretReq.Key = args[1]

		err = w.Secrets.PutSecret(ctx, putSecretReq)
		if err != nil {
			return err
		}
		return nil
	},
}
// end service Secrets
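Outside the generated file above, here is a minimal sketch of the permission model described in the `put-acl` help text, written directly against the Go SDK. The scope and principal names are placeholders, and the `workspace.AclPermission*` constants are assumed to be the SDK counterparts of the MANAGE/WRITE/READ levels:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/workspace"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// Grant READ on an existing scope to a principal; MANAGE and WRITE work the same way.
	// Roughly what `put-acl SCOPE PRINCIPAL PERMISSION` dispatches to.
	err = w.Secrets.PutAcl(ctx, workspace.PutAcl{
		Scope:      "my-scope",      // placeholder scope
		Principal:  "data-readers",  // placeholder group
		Permission: workspace.AclPermissionRead,
	})
	if err != nil {
		log.Fatal(err)
	}

	// List the resulting ACLs, mirroring `list-acls SCOPE`.
	acls, err := w.Secrets.ListAclsAll(ctx, workspace.ListAclsRequest{Scope: "my-scope"})
	if err != nil {
		log.Fatal(err)
	}
	for _, acl := range acls {
		fmt.Printf("%s: %s\n", acl.Principal, acl.Permission)
	}
}
```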