databricks-cli/cmd/workspace/pipelines/pipelines.go

// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package pipelines
import (
"fmt"
"time"
"github.com/databricks/bricks/cmd/root"
"github.com/databricks/bricks/libs/cmdio"
"github.com/databricks/bricks/libs/flags"
"github.com/databricks/databricks-sdk-go/retries"
"github.com/databricks/databricks-sdk-go/service/pipelines"
"github.com/spf13/cobra"
)
var Cmd = &cobra.Command{
Use: "pipelines",
Short: `The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.`,
Long: `The Delta Live Tables API allows you to create, edit, delete, start, and view
details about pipelines.
Delta Live Tables is a framework for building reliable, maintainable, and
testable data processing pipelines. You define the transformations to perform
on your data, and Delta Live Tables manages task orchestration, cluster
management, monitoring, data quality, and error handling.
Instead of defining your data pipelines using a series of separate Apache
Spark tasks, Delta Live Tables manages how your data is transformed based on a
target schema you define for each processing step. You can also enforce data
quality with Delta Live Tables expectations. Expectations allow you to define
expected data quality and specify how to handle records that fail those
expectations.`,
}
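// Each subcommand below registers itself on Cmd in its init function and is
// invoked as `bricks pipelines <subcommand>`, for example:
//
//	bricks pipelines list-pipelines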
// start create command
var createReq pipelines.CreatePipeline
var createJson flags.JsonFlag
func init() {
Cmd.AddCommand(createCmd)
// TODO: short flags
createCmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
createCmd.Flags().BoolVar(&createReq.AllowDuplicateNames, "allow-duplicate-names", createReq.AllowDuplicateNames, `If false, deployment will fail if name conflicts with that of another pipeline.`)
createCmd.Flags().StringVar(&createReq.Catalog, "catalog", createReq.Catalog, `A catalog in Unity Catalog to publish data from this pipeline to.`)
createCmd.Flags().StringVar(&createReq.Channel, "channel", createReq.Channel, `DLT Release Channel that specifies which version to use.`)
// TODO: array: clusters
// TODO: map via StringToStringVar: configuration
createCmd.Flags().BoolVar(&createReq.Continuous, "continuous", createReq.Continuous, `Whether the pipeline is continuous or triggered.`)
createCmd.Flags().BoolVar(&createReq.Development, "development", createReq.Development, `Whether the pipeline is in Development mode.`)
createCmd.Flags().BoolVar(&createReq.DryRun, "dry-run", createReq.DryRun, ``)
createCmd.Flags().StringVar(&createReq.Edition, "edition", createReq.Edition, `Pipeline product edition.`)
// TODO: complex arg: filters
createCmd.Flags().StringVar(&createReq.Id, "id", createReq.Id, `Unique identifier for this pipeline.`)
// TODO: array: libraries
createCmd.Flags().StringVar(&createReq.Name, "name", createReq.Name, `Friendly identifier for this pipeline.`)
createCmd.Flags().BoolVar(&createReq.Photon, "photon", createReq.Photon, `Whether Photon is enabled for this pipeline.`)
createCmd.Flags().StringVar(&createReq.Storage, "storage", createReq.Storage, `DBFS root directory for storing checkpoints and tables.`)
createCmd.Flags().StringVar(&createReq.Target, "target", createReq.Target, `Target schema (database) to add tables in this pipeline to.`)
// TODO: complex arg: trigger
}
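// createCmd accepts the pipeline spec as individual flags, a --json request
// body, or both. Illustrative usage; the name and file path are placeholders:
//
//	bricks pipelines create --json @pipeline-spec.json
//	bricks pipelines create --name my-pipeline --continuous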
var createCmd = &cobra.Command{
Use: "create",
Short: `Create a pipeline.`,
Long: `Create a pipeline.
Creates a new data processing pipeline based on the requested configuration.
If successful, this method returns the ID of the new pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
err = createJson.Unmarshal(&createReq)
if err != nil {
return err
}
response, err := w.Pipelines.Create(ctx, createReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start delete command
var deleteReq pipelines.DeletePipelineRequest
func init() {
Cmd.AddCommand(deleteCmd)
// TODO: short flags
}
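// deleteCmd takes the pipeline ID as its only argument and prompts with a
// list of pipelines when the argument is omitted. Placeholder ID:
//
//	bricks pipelines delete 1234-567890-abcde123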
var deleteCmd = &cobra.Command{
Use: "delete PIPELINE_ID",
Short: `Delete a pipeline.`,
Long: `Delete a pipeline.
Deletes a pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if len(args) == 0 {
names, err := w.Pipelines.PipelineStateInfoNameToPipelineIdMap(ctx, pipelines.ListPipelinesRequest{})
if err != nil {
return err
}
id, err := cmdio.Select(ctx, names, "The pipeline to delete")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the pipeline to delete")
}
deleteReq.PipelineId = args[0]
err = w.Pipelines.Delete(ctx, deleteReq)
if err != nil {
return err
}
return nil
},
}
// start get command
var getReq pipelines.GetPipelineRequest
var getSkipWait bool
var getTimeout time.Duration
func init() {
Cmd.AddCommand(getCmd)
getCmd.Flags().BoolVar(&getSkipWait, "no-wait", getSkipWait, `do not wait to reach RUNNING state`)
getCmd.Flags().DurationVar(&getTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach RUNNING state`)
// TODO: short flags
}
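// getCmd takes the pipeline ID as its only argument, prompting for it when
// omitted. Placeholder ID:
//
//	bricks pipelines get 1234-567890-abcde123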
var getCmd = &cobra.Command{
Use: "get PIPELINE_ID",
Short: `Get a pipeline.`,
Long: `Get a pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if len(args) == 0 {
names, err := w.Pipelines.PipelineStateInfoNameToPipelineIdMap(ctx, pipelines.ListPipelinesRequest{})
if err != nil {
return err
}
id, err := cmdio.Select(ctx, names, "The pipeline to get")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the pipeline to get")
}
getReq.PipelineId = args[0]
response, err := w.Pipelines.Get(ctx, getReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start get-update command
var getUpdateReq pipelines.GetUpdateRequest
func init() {
Cmd.AddCommand(getUpdateCmd)
// TODO: short flags
}
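// getUpdateCmd requires both the pipeline ID and the update ID as positional
// arguments. Placeholder IDs:
//
//	bricks pipelines get-update 1234-567890-abcde123 8a0b99c2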
var getUpdateCmd = &cobra.Command{
Use: "get-update PIPELINE_ID UPDATE_ID",
Short: `Get a pipeline update.`,
Long: `Get a pipeline update.
Gets an update from an active pipeline.`,
Annotations: map[string]string{},
Args: cobra.ExactArgs(2),
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
getUpdateReq.PipelineId = args[0]
getUpdateReq.UpdateId = args[1]
response, err := w.Pipelines.GetUpdate(ctx, getUpdateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start list-pipeline-events command
var listPipelineEventsReq pipelines.ListPipelineEventsRequest
var listPipelineEventsJson flags.JsonFlag
func init() {
Cmd.AddCommand(listPipelineEventsCmd)
// TODO: short flags
listPipelineEventsCmd.Flags().Var(&listPipelineEventsJson, "json", `either inline JSON string or @path/to/file.json with request body`)
listPipelineEventsCmd.Flags().StringVar(&listPipelineEventsReq.Filter, "filter", listPipelineEventsReq.Filter, `Criteria to select a subset of results, expressed using a SQL-like syntax.`)
listPipelineEventsCmd.Flags().IntVar(&listPipelineEventsReq.MaxResults, "max-results", listPipelineEventsReq.MaxResults, `Max number of entries to return in a single page.`)
// TODO: array: order_by
listPipelineEventsCmd.Flags().StringVar(&listPipelineEventsReq.PageToken, "page-token", listPipelineEventsReq.PageToken, `Page token returned by previous call.`)
}
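// listPipelineEventsCmd takes the pipeline ID as its only argument. The
// filter value below is an illustrative placeholder:
//
//	bricks pipelines list-pipeline-events 1234-567890-abcde123 --filter "level='ERROR'"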
var listPipelineEventsCmd = &cobra.Command{
Use: "list-pipeline-events PIPELINE_ID",
Short: `List pipeline events.`,
Long: `List pipeline events.
Retrieves events for a pipeline.`,
Annotations: map[string]string{},
Args: cobra.ExactArgs(1),
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
err = listPipelineEventsJson.Unmarshal(&listPipelineEventsReq)
if err != nil {
return err
}
listPipelineEventsReq.PipelineId = args[0]
response, err := w.Pipelines.ListPipelineEventsAll(ctx, listPipelineEventsReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start list-pipelines command
var listPipelinesReq pipelines.ListPipelinesRequest
var listPipelinesJson flags.JsonFlag
func init() {
Cmd.AddCommand(listPipelinesCmd)
// TODO: short flags
listPipelinesCmd.Flags().Var(&listPipelinesJson, "json", `either inline JSON string or @path/to/file.json with request body`)
listPipelinesCmd.Flags().StringVar(&listPipelinesReq.Filter, "filter", listPipelinesReq.Filter, `Select a subset of results based on the specified criteria.`)
listPipelinesCmd.Flags().IntVar(&listPipelinesReq.MaxResults, "max-results", listPipelinesReq.MaxResults, `The maximum number of entries to return in a single page.`)
// TODO: array: order_by
listPipelinesCmd.Flags().StringVar(&listPipelinesReq.PageToken, "page-token", listPipelinesReq.PageToken, `Page token returned by previous call.`)
}
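// listPipelinesCmd supports paging and filtering; the filter value below is
// an illustrative placeholder:
//
//	bricks pipelines list-pipelines --max-results 10 --filter "name like 'my-%'"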
var listPipelinesCmd = &cobra.Command{
Use: "list-pipelines",
Short: `List pipelines.`,
Long: `List pipelines.
Lists pipelines defined in the Delta Live Tables system.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
err = listPipelinesJson.Unmarshal(&listPipelinesReq)
if err != nil {
return err
}
response, err := w.Pipelines.ListPipelinesAll(ctx, listPipelinesReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start list-updates command
var listUpdatesReq pipelines.ListUpdatesRequest
func init() {
Cmd.AddCommand(listUpdatesCmd)
// TODO: short flags
listUpdatesCmd.Flags().IntVar(&listUpdatesReq.MaxResults, "max-results", listUpdatesReq.MaxResults, `Max number of entries to return in a single page.`)
listUpdatesCmd.Flags().StringVar(&listUpdatesReq.PageToken, "page-token", listUpdatesReq.PageToken, `Page token returned by previous call.`)
listUpdatesCmd.Flags().StringVar(&listUpdatesReq.UntilUpdateId, "until-update-id", listUpdatesReq.UntilUpdateId, `If present, returns updates until and including this update_id.`)
}
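// listUpdatesCmd takes the pipeline ID as its only argument, prompting for it
// when omitted. Placeholder ID:
//
//	bricks pipelines list-updates 1234-567890-abcde123 --max-results 5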
var listUpdatesCmd = &cobra.Command{
Use: "list-updates PIPELINE_ID",
Short: `List pipeline updates.`,
Long: `List pipeline updates.
List updates for an active pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if len(args) == 0 {
names, err := w.Pipelines.PipelineStateInfoNameToPipelineIdMap(ctx, pipelines.ListPipelinesRequest{})
if err != nil {
return err
}
id, err := cmdio.Select(ctx, names, "The pipeline to return updates for")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the pipeline to return updates for")
}
listUpdatesReq.PipelineId = args[0]
response, err := w.Pipelines.ListUpdates(ctx, listUpdatesReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start reset command
var resetReq pipelines.ResetRequest
var resetSkipWait bool
var resetTimeout time.Duration
func init() {
Cmd.AddCommand(resetCmd)
resetCmd.Flags().BoolVar(&resetSkipWait, "no-wait", resetSkipWait, `do not wait to reach RUNNING state`)
resetCmd.Flags().DurationVar(&resetTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach RUNNING state`)
// TODO: short flags
}
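// resetCmd blocks until the pipeline reaches the RUNNING state unless
// --no-wait is given. Placeholder ID:
//
//	bricks pipelines reset 1234-567890-abcde123 --timeout 30m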
var resetCmd = &cobra.Command{
Use: "reset PIPELINE_ID",
Short: `Reset a pipeline.`,
Long: `Reset a pipeline.
Resets a pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if len(args) == 0 {
names, err := w.Pipelines.PipelineStateInfoNameToPipelineIdMap(ctx, pipelines.ListPipelinesRequest{})
if err != nil {
return err
}
id, err := cmdio.Select(ctx, names, "The pipeline to reset")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the pipeline to reset")
}
resetReq.PipelineId = args[0]
if resetSkipWait {
err = w.Pipelines.Reset(ctx, resetReq)
if err != nil {
return err
}
return nil
}
spinner := cmdio.Spinner(ctx)
info, err := w.Pipelines.ResetAndWait(ctx, resetReq,
retries.Timeout[pipelines.GetPipelineResponse](resetTimeout),
func(i *retries.Info[pipelines.GetPipelineResponse]) {
if i.Info == nil {
return
}
statusMessage := i.Info.Cause
spinner <- statusMessage
})
close(spinner)
if err != nil {
return err
}
return cmdio.Render(ctx, info)
},
}
// start start-update command
var startUpdateReq pipelines.StartUpdate
var startUpdateJson flags.JsonFlag
func init() {
Cmd.AddCommand(startUpdateCmd)
// TODO: short flags
startUpdateCmd.Flags().Var(&startUpdateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
startUpdateCmd.Flags().Var(&startUpdateReq.Cause, "cause", ``)
startUpdateCmd.Flags().BoolVar(&startUpdateReq.FullRefresh, "full-refresh", startUpdateReq.FullRefresh, `If true, this update will reset all tables before running.`)
// TODO: array: full_refresh_selection
// TODO: array: refresh_selection
}
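// startUpdateCmd takes the pipeline ID as its only argument. Placeholder ID:
//
//	bricks pipelines start-update 1234-567890-abcde123 --full-refresh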
var startUpdateCmd = &cobra.Command{
Use: "start-update PIPELINE_ID",
Short: `Queue a pipeline update.`,
Long: `Queue a pipeline update.
Starts or queues a pipeline update.`,
Annotations: map[string]string{},
Args: cobra.ExactArgs(1),
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
err = startUpdateJson.Unmarshal(&startUpdateReq)
if err != nil {
return err
}
startUpdateReq.PipelineId = args[0]
response, err := w.Pipelines.StartUpdate(ctx, startUpdateReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
},
}
// start stop command
var stopReq pipelines.StopRequest
var stopSkipWait bool
var stopTimeout time.Duration
func init() {
Cmd.AddCommand(stopCmd)
stopCmd.Flags().BoolVar(&stopSkipWait, "no-wait", stopSkipWait, `do not wait to reach IDLE state`)
stopCmd.Flags().DurationVar(&stopTimeout, "timeout", 20*time.Minute, `maximum amount of time to reach IDLE state`)
// TODO: short flags
}
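// stopCmd blocks until the pipeline reaches the IDLE state unless --no-wait
// is given. Placeholder ID:
//
//	bricks pipelines stop 1234-567890-abcde123 --no-wait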
var stopCmd = &cobra.Command{
Use: "stop PIPELINE_ID",
Short: `Stop a pipeline.`,
Long: `Stop a pipeline.
Stops a pipeline.`,
Annotations: map[string]string{},
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
if len(args) == 0 {
names, err := w.Pipelines.PipelineStateInfoNameToPipelineIdMap(ctx, pipelines.ListPipelinesRequest{})
if err != nil {
return err
}
id, err := cmdio.Select(ctx, names, "The pipeline to stop")
if err != nil {
return err
}
args = append(args, id)
}
if len(args) != 1 {
return fmt.Errorf("expected to have the pipeline to stop")
}
stopReq.PipelineId = args[0]
if stopSkipWait {
err = w.Pipelines.Stop(ctx, stopReq)
if err != nil {
return err
}
return nil
}
spinner := cmdio.Spinner(ctx)
info, err := w.Pipelines.StopAndWait(ctx, stopReq,
retries.Timeout[pipelines.GetPipelineResponse](stopTimeout),
func(i *retries.Info[pipelines.GetPipelineResponse]) {
if i.Info == nil {
return
}
statusMessage := i.Info.Cause
spinner <- statusMessage
})
close(spinner)
if err != nil {
return err
}
return cmdio.Render(ctx, info)
},
}
// start update command
var updateReq pipelines.EditPipeline
var updateJson flags.JsonFlag
func init() {
Cmd.AddCommand(updateCmd)
// TODO: short flags
updateCmd.Flags().Var(&updateJson, "json", `either inline JSON string or @path/to/file.json with request body`)
updateCmd.Flags().BoolVar(&updateReq.AllowDuplicateNames, "allow-duplicate-names", updateReq.AllowDuplicateNames, `If false, deployment will fail if name has changed and conflicts with the name of another pipeline.`)
updateCmd.Flags().StringVar(&updateReq.Catalog, "catalog", updateReq.Catalog, `A catalog in Unity Catalog to publish data from this pipeline to.`)
updateCmd.Flags().StringVar(&updateReq.Channel, "channel", updateReq.Channel, `DLT Release Channel that specifies which version to use.`)
// TODO: array: clusters
// TODO: map via StringToStringVar: configuration
updateCmd.Flags().BoolVar(&updateReq.Continuous, "continuous", updateReq.Continuous, `Whether the pipeline is continuous or triggered.`)
updateCmd.Flags().BoolVar(&updateReq.Development, "development", updateReq.Development, `Whether the pipeline is in Development mode.`)
updateCmd.Flags().StringVar(&updateReq.Edition, "edition", updateReq.Edition, `Pipeline product edition.`)
updateCmd.Flags().Int64Var(&updateReq.ExpectedLastModified, "expected-last-modified", updateReq.ExpectedLastModified, `If present, the last-modified time of the pipeline settings before the edit.`)
// TODO: complex arg: filters
updateCmd.Flags().StringVar(&updateReq.Id, "id", updateReq.Id, `Unique identifier for this pipeline.`)
// TODO: array: libraries
updateCmd.Flags().StringVar(&updateReq.Name, "name", updateReq.Name, `Friendly identifier for this pipeline.`)
updateCmd.Flags().BoolVar(&updateReq.Photon, "photon", updateReq.Photon, `Whether Photon is enabled for this pipeline.`)
updateCmd.Flags().StringVar(&updateReq.PipelineId, "pipeline-id", updateReq.PipelineId, `Unique identifier for this pipeline.`)
updateCmd.Flags().StringVar(&updateReq.Storage, "storage", updateReq.Storage, `DBFS root directory for storing checkpoints and tables.`)
updateCmd.Flags().StringVar(&updateReq.Target, "target", updateReq.Target, `Target schema (database) to add tables in this pipeline to.`)
// TODO: complex arg: trigger
}
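// updateCmd takes the pipeline ID as its only argument and applies the
// supplied flags and/or --json body to the pipeline settings. Placeholder ID:
//
//	bricks pipelines update 1234-567890-abcde123 --development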
var updateCmd = &cobra.Command{
Use: "update PIPELINE_ID",
Short: `Edit a pipeline.`,
Long: `Edit a pipeline.
Updates a pipeline with the supplied configuration.`,
Annotations: map[string]string{},
Args: cobra.ExactArgs(1),
PreRunE: root.MustWorkspaceClient,
RunE: func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := root.WorkspaceClient(ctx)
err = updateJson.Unmarshal(&updateReq)
if err != nil {
return err
}
updateReq.PipelineId = args[0]
err = w.Pipelines.Update(ctx, updateReq)
if err != nil {
return err
}
return nil
},
}
// end service Pipelines