mirror of https://github.com/databricks/cli.git
Bump github.com/databricks/databricks-sdk-go from 0.42.0 to 0.43.0 (#1522)
Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.42.0 to 0.43.0. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's releases</a>.</em></p> <blockquote> <h2>v0.43.0</h2> <p>Major Changes and Improvements:</p> <ul> <li>Support partners in user agent for SDK (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/925">#925</a>).</li> <li>Add <code>serverless_compute_id</code> field to the config (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/952">#952</a>).</li> </ul> <p>Other Changes:</p> <ul> <li>Generate from latest spec (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/944">#944</a>) and (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/947">#947</a>).</li> </ul> <p>API Changes:</p> <ul> <li>Changed <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogInfo">catalog.CatalogInfo</a> to <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ExternalLocationInfo">catalog.ExternalLocationInfo</a>.</li> <li>Added <code>MaxResults</code> and <code>PageToken</code> fields for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ListCatalogsRequest">catalog.ListCatalogsRequest</a>.</li> <li>Added <code>NextPageToken</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ListCatalogsResponse">catalog.ListCatalogsResponse</a>.</li> <li>Added <code>TableServingUrl</code> field for <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#StorageCredentialInfo">catalog.StorageCredentialInfo</a>.</li> <li>Changed <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateCatalog">catalog.UpdateCatalog</a> to <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateExternalLocation">catalog.UpdateExternalLocation</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateStorageCredential">catalog.UpdateStorageCredential</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>CreateSchedule</code>, <code>CreateSubscription</code>, <code>DeleteSchedule</code>, <code>DeleteSubscription</code>, <code>GetSchedule</code>, <code>GetSubscription</code>, <code>List</code>, <code>ListSchedules</code>, <code>ListSubscriptions</code> and <code>UpdateSchedule</code> methods for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#LakeviewAPI">w.Lakeview</a> workspace-level service.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CreateScheduleRequest">dashboards.CreateScheduleRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CreateSubscriptionRequest">dashboards.CreateSubscriptionRequest</a>, <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CronSchedule">dashboards.CronSchedule</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DashboardView">dashboards.DashboardView</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DeleteScheduleRequest">dashboards.DeleteScheduleRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DeleteSubscriptionRequest">dashboards.DeleteSubscriptionRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GetScheduleRequest">dashboards.GetScheduleRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GetSubscriptionRequest">dashboards.GetSubscriptionRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListDashboardsRequest">dashboards.ListDashboardsRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListDashboardsResponse">dashboards.ListDashboardsResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSchedulesRequest">dashboards.ListSchedulesRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSchedulesResponse">dashboards.ListSchedulesResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSubscriptionsRequest">dashboards.ListSubscriptionsRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSubscriptionsResponse">dashboards.ListSubscriptionsResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Schedule">dashboards.Schedule</a>, <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SchedulePauseStatus">dashboards.SchedulePauseStatus</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Subscriber">dashboards.Subscriber</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Subscription">dashboards.Subscription</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SubscriptionSubscriberDestination">dashboards.SubscriptionSubscriberDestination</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SubscriptionSubscriberUser">dashboards.SubscriptionSubscriberUser</a> and <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#UpdateScheduleRequest">dashboards.UpdateScheduleRequest</a> structs.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobEmailNotifications">jobs.JobEmailNotifications</a>.</li> <li>Added <code>EnvironmentKey</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#RunTask">jobs.RunTask</a>.</li> <li>Removed <code>ConditionTask</code>, <code>DbtTask</code>, <code>NotebookTask</code>, <code>PipelineTask</code>, <code>PythonWheelTask</code>, <code>RunJobTask</code>, <code>SparkJarTask</code>, <code>SparkPythonTask</code>, <code>SparkSubmitTask</code>, <code>SqlTask</code> and <code>Environments</code> fields for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li> <li>Added <code>DbtTask</code> and <code>EnvironmentKey</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitTask">jobs.SubmitTask</a>.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#TaskEmailNotifications">jobs.TaskEmailNotifications</a>.</li> <li>Added <code>Periodic</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#TriggerSettings">jobs.TriggerSettings</a>.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#WebhookNotifications">jobs.WebhookNotifications</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#PeriodicTriggerConfiguration">jobs.PeriodicTriggerConfiguration</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#PeriodicTriggerConfigurationTimeUnit">jobs.PeriodicTriggerConfigurationTimeUnit</a>.</li> <li>Added <code>ProviderSummary</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#Listing">marketplace.Listing</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderIconFile">marketplace.ProviderIconFile</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderIconType">marketplace.ProviderIconType</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderListingSummaryInfo">marketplace.ProviderListingSummaryInfo</a>.</li> <li>Added <code>Start</code> method for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#AppsAPI">w.Apps</a> workspace-level service.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsDataPlaneAPI">w.ServingEndpointsDataPlane</a> workspace-level service.</li> <li>Added <code>ServicePrincipalId</code> field for <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#App">serving.App</a>.</li> <li>Added <code>ServicePrincipalName</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#App">serving.App</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#StartAppRequest">serving.StartAppRequest</a>.</li> <li>Added <code>QueryNextPage</code> method for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#VectorSearchIndexesAPI">w.VectorSearchIndexes</a> workspace-level service.</li> <li>Added <code>QueryType</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexRequest">vectorsearch.QueryVectorIndexRequest</a>.</li> <li>Added <code>NextPageToken</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexResponse">vectorsearch.QueryVectorIndexResponse</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexNextPageRequest">vectorsearch.QueryVectorIndexNextPageRequest</a>.</li> </ul> <p>OpenAPI SHA: 7437dabb9dadee402c1fc060df4c1ce8cc5369f0, Date: 2024-06-25</p> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md">github.com/databricks/databricks-sdk-go's changelog</a>.</em></p> <blockquote> <h2>0.43.0</h2> <p>Major Changes and Improvements:</p> <ul> <li>Support partners in user agent for SDK (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/925">#925</a>).</li> <li>Add <code>serverless_compute_id</code> field to the config (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/952">#952</a>).</li> </ul> <p>Other Changes:</p> <ul> <li>Generate from latest spec (<a 
href="https://redirect.github.com/databricks/databricks-sdk-go/pull/944">#944</a>) and (<a href="https://redirect.github.com/databricks/databricks-sdk-go/pull/947">#947</a>).</li> </ul> <p>API Changes:</p> <ul> <li>Changed <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogInfo">catalog.CatalogInfo</a> to <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ExternalLocationInfo">catalog.ExternalLocationInfo</a>.</li> <li>Added <code>MaxResults</code> and <code>PageToken</code> fields for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ListCatalogsRequest">catalog.ListCatalogsRequest</a>.</li> <li>Added <code>NextPageToken</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#ListCatalogsResponse">catalog.ListCatalogsResponse</a>.</li> <li>Added <code>TableServingUrl</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#OnlineTable">catalog.OnlineTable</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#StorageCredentialInfo">catalog.StorageCredentialInfo</a>.</li> <li>Changed <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateCatalog">catalog.UpdateCatalog</a> to <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>IsolationMode</code> field for <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateExternalLocation">catalog.UpdateExternalLocation</a>.</li> <li>Added <code>IsolationMode</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#UpdateStorageCredential">catalog.UpdateStorageCredential</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/catalog#CatalogIsolationMode">catalog.CatalogIsolationMode</a>.</li> <li>Added <code>CreateSchedule</code>, <code>CreateSubscription</code>, <code>DeleteSchedule</code>, <code>DeleteSubscription</code>, <code>GetSchedule</code>, <code>GetSubscription</code>, <code>List</code>, <code>ListSchedules</code>, <code>ListSubscriptions</code> and <code>UpdateSchedule</code> methods for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#LakeviewAPI">w.Lakeview</a> workspace-level service.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CreateScheduleRequest">dashboards.CreateScheduleRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CreateSubscriptionRequest">dashboards.CreateSubscriptionRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#CronSchedule">dashboards.CronSchedule</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DashboardView">dashboards.DashboardView</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DeleteScheduleRequest">dashboards.DeleteScheduleRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#DeleteSubscriptionRequest">dashboards.DeleteSubscriptionRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GetScheduleRequest">dashboards.GetScheduleRequest</a>, <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GetSubscriptionRequest">dashboards.GetSubscriptionRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListDashboardsRequest">dashboards.ListDashboardsRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListDashboardsResponse">dashboards.ListDashboardsResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSchedulesRequest">dashboards.ListSchedulesRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSchedulesResponse">dashboards.ListSchedulesResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSubscriptionsRequest">dashboards.ListSubscriptionsRequest</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#ListSubscriptionsResponse">dashboards.ListSubscriptionsResponse</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Schedule">dashboards.Schedule</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SchedulePauseStatus">dashboards.SchedulePauseStatus</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Subscriber">dashboards.Subscriber</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#Subscription">dashboards.Subscription</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SubscriptionSubscriberDestination">dashboards.SubscriptionSubscriberDestination</a>, <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#SubscriptionSubscriberUser">dashboards.SubscriptionSubscriberUser</a> and <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#UpdateScheduleRequest">dashboards.UpdateScheduleRequest</a> structs.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#JobEmailNotifications">jobs.JobEmailNotifications</a>.</li> <li>Added <code>EnvironmentKey</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#RunTask">jobs.RunTask</a>.</li> <li>Removed <code>ConditionTask</code>, <code>DbtTask</code>, <code>NotebookTask</code>, <code>PipelineTask</code>, <code>PythonWheelTask</code>, <code>RunJobTask</code>, <code>SparkJarTask</code>, <code>SparkPythonTask</code>, <code>SparkSubmitTask</code>, <code>SqlTask</code> and <code>Environments</code> fields for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitRun">jobs.SubmitRun</a>.</li> <li>Added <code>DbtTask</code> and <code>EnvironmentKey</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#SubmitTask">jobs.SubmitTask</a>.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#TaskEmailNotifications">jobs.TaskEmailNotifications</a>.</li> <li>Added <code>Periodic</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#TriggerSettings">jobs.TriggerSettings</a>.</li> <li>Added <code>OnStreamingBacklogExceeded</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#WebhookNotifications">jobs.WebhookNotifications</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#PeriodicTriggerConfiguration">jobs.PeriodicTriggerConfiguration</a>.</li> <li>Added <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/jobs#PeriodicTriggerConfigurationTimeUnit">jobs.PeriodicTriggerConfigurationTimeUnit</a>.</li> <li>Added <code>ProviderSummary</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#Listing">marketplace.Listing</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderIconFile">marketplace.ProviderIconFile</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderIconType">marketplace.ProviderIconType</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/marketplace#ProviderListingSummaryInfo">marketplace.ProviderListingSummaryInfo</a>.</li> <li>Added <code>Start</code> method for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#AppsAPI">w.Apps</a> workspace-level service.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointsDataPlaneAPI">w.ServingEndpointsDataPlane</a> workspace-level service.</li> <li>Added <code>ServicePrincipalId</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#App">serving.App</a>.</li> <li>Added <code>ServicePrincipalName</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#App">serving.App</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#StartAppRequest">serving.StartAppRequest</a>.</li> <li>Added <code>QueryNextPage</code> method for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#VectorSearchIndexesAPI">w.VectorSearchIndexes</a> workspace-level service.</li> <li>Added <code>QueryType</code> field for <a 
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexRequest">vectorsearch.QueryVectorIndexRequest</a>.</li> <li>Added <code>NextPageToken</code> field for <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexResponse">vectorsearch.QueryVectorIndexResponse</a>.</li> <li>Added <a href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/vectorsearch#QueryVectorIndexNextPageRequest">vectorsearch.QueryVectorIndexNextPageRequest</a>.</li> </ul> <p>OpenAPI SHA: 7437dabb9dadee402c1fc060df4c1ce8cc5369f0, Date: 2024-06-25</p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="3e419132ea
"><code>3e41913</code></a> Release v0.43.0 (<a href="https://redirect.github.com/databricks/databricks-sdk-go/issues/955">#955</a>)</li> <li><a href="ce3dc984f7
"><code>ce3dc98</code></a> Add <code>serverless_compute_id</code> field to the config (<a href="https://redirect.github.com/databricks/databricks-sdk-go/issues/952">#952</a>)</li> <li><a href="00b1d09b24
"><code>00b1d09</code></a> Update OpenAPI spec (<a href="https://redirect.github.com/databricks/databricks-sdk-go/issues/947">#947</a>)</li> <li><a href="d098b1a3e7
"><code>d098b1a</code></a> Support partners in SDK (<a href="https://redirect.github.com/databricks/databricks-sdk-go/issues/925">#925</a>)</li> <li><a href="490bc13c0e
"><code>490bc13</code></a> Generate from latest spec (<a href="https://redirect.github.com/databricks/databricks-sdk-go/issues/944">#944</a>)</li> <li>See full diff in <a href="https://github.com/databricks/databricks-sdk-go/compare/v0.42.0...v0.43.0">compare view</a></li> </ul> </details> <br /> <details> <summary>Most Recent Ignore Conditions Applied to This Pull Request</summary> | Dependency Name | Ignore Conditions | | --- | --- | | github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] | </details> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.42.0&new-version=0.43.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
parent: 100a0516d4
commit: 8468878eed
@@ -1 +1 @@
-37b925eba37dfb3d7e05b6ba2d458454ce62d3a0
+7437dabb9dadee402c1fc060df4c1ce8cc5369f0
@@ -7,7 +7,7 @@ package account
 import (
 	"github.com/databricks/cli/cmd/root"
 	"github.com/spf13/cobra"
-	{{range .Services}}{{if and .IsAccounts (not .HasParent)}}{{if not (in $excludes .KebabName) }}
+	{{range .Services}}{{if and .IsAccounts (not .HasParent) (not .IsDataPlane)}}{{if not (in $excludes .KebabName) }}
 	{{.SnakeName}} "github.com/databricks/cli/cmd/account/{{(.TrimPrefix "account").KebabName}}"{{end}}{{end}}{{end}}
 )

@@ -17,7 +17,7 @@ func New() *cobra.Command {
 		Short: `Databricks Account Commands`,
 	}

-	{{range .Services}}{{if and .IsAccounts (not .HasParent)}}{{if not (in $excludes .KebabName) -}}
+	{{range .Services}}{{if and .IsAccounts (not .HasParent) (not .IsDataPlane)}}{{if not (in $excludes .KebabName) -}}
 		cmd.AddCommand({{.SnakeName}}.New())
 	{{end}}{{end}}{{end}}
@@ -14,14 +14,14 @@ package workspace

 import (
 	"github.com/databricks/cli/cmd/root"
-	{{range .Services}}{{if and (not .IsAccounts) (not .HasParent)}}{{if not (in $excludes .KebabName) }}
+	{{range .Services}}{{if and (not .IsAccounts) (not .HasParent) (not .IsDataPlane)}}{{if not (in $excludes .KebabName) }}
 	{{.SnakeName}} "github.com/databricks/cli/cmd/workspace/{{.KebabName}}"{{end}}{{end}}{{end}}
 )

 func All() []*cobra.Command {
 	var out []*cobra.Command

-	{{range .Services}}{{if and (not .IsAccounts) (not .HasParent)}}{{if not (in $excludes .KebabName) -}}
+	{{range .Services}}{{if and (not .IsAccounts) (not .HasParent) (not .IsDataPlane)}}{{if not (in $excludes .KebabName) -}}
 	out = append(out, {{.SnakeName}}.New())
 	{{end}}{{end}}{{end}}
@@ -22,6 +22,7 @@ import (
 	"dbsql-permissions"
 	"account-access-control-proxy"
 	"files"
+	"serving-endpoints-data-plane"
 }}

 {{if not (in $excludes .KebabName) }}
@@ -79,6 +79,17 @@
     "experimental": {
       "description": "",
       "properties": {
+        "pydabs": {
+          "description": "",
+          "properties": {
+            "enabled": {
+              "description": ""
+            },
+            "venv_path": {
+              "description": ""
+            }
+          }
+        },
         "python_wheel_wrapper": {
           "description": ""
         },
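The `experimental.pydabs` block added to the schema above corresponds to a bundle configuration fragment along these lines (the values are illustrative assumptions, not taken from this commit):

```yaml
experimental:
  pydabs:
    enabled: true     # illustrative value
    venv_path: .venv  # illustrative path to a Python virtual environment
```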
@@ -236,6 +247,12 @@
           "description": ""
         }
       },
+      "on_streaming_backlog_exceeded": {
+        "description": "A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.",
+        "items": {
+          "description": ""
+        }
+      },
       "on_success": {
         "description": "A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.",
         "items": {
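Per the schema description above, the new notification type would appear in a job definition roughly as follows (the address is illustrative):

```yaml
email_notifications:
  on_streaming_backlog_exceeded:
    - data-oncall@example.com  # illustrative recipient
```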
@@ -853,6 +870,12 @@
           "description": ""
         }
       },
+      "on_streaming_backlog_exceeded": {
+        "description": "A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.",
+        "items": {
+          "description": ""
+        }
+      },
       "on_success": {
         "description": "A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.",
         "items": {
@@ -1595,6 +1618,17 @@
           }
         }
       },
+      "on_streaming_backlog_exceeded": {
+        "description": "An optional list of system notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.\nA maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.",
+        "items": {
+          "description": "",
+          "properties": {
+            "id": {
+              "description": ""
+            }
+          }
+        }
+      },
       "on_success": {
         "description": "An optional list of system notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property.",
         "items": {
@@ -1634,6 +1668,17 @@
 "pause_status": {
   "description": "Whether this trigger is paused or not."
 },
+"periodic": {
+  "description": "Periodic trigger settings.",
+  "properties": {
+    "interval": {
+      "description": "The interval at which the trigger should run."
+    },
+    "unit": {
+      "description": "The unit of time for the interval."
+    }
+  }
+},
 "table": {
   "description": "Old table trigger settings name. Deprecated in favor of `table_update`.",
   "properties": {
@@ -1712,6 +1757,17 @@
     }
   }
 },
+"on_streaming_backlog_exceeded": {
+  "description": "An optional list of system notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.\nA maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.",
+  "items": {
+    "description": "",
+    "properties": {
+      "id": {
+        "description": ""
+      }
+    }
+  }
+},
 "on_success": {
   "description": "An optional list of system notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property.",
   "items": {
@@ -1740,16 +1796,16 @@
 "description": "Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.",
 "properties": {
   "catalog_name": {
-    "description": "The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if it was already set."
+    "description": "The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if the inference table is already enabled."
   },
   "enabled": {
-    "description": "If inference tables are enabled or not. NOTE: If you have already disabled payload logging once, you cannot enable again."
+    "description": "Indicates whether the inference table is enabled."
   },
   "schema_name": {
-    "description": "The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if it was already set."
+    "description": "The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if the inference table is already enabled."
   },
   "table_name_prefix": {
-    "description": "The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if it was already set."
+    "description": "The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if the inference table is already enabled."
   }
 }
 },
@@ -2623,7 +2679,7 @@
   }
 },
 "notebook": {
-  "description": "The path to a notebook that defines a pipeline and is stored in the \u003cDatabricks\u003e workspace.\n",
+  "description": "The path to a notebook that defines a pipeline and is stored in the Databricks workspace.\n",
   "properties": {
     "path": {
       "description": "The absolute path of the notebook."
@@ -3167,6 +3223,12 @@
     "description": ""
   }
 },
+"on_streaming_backlog_exceeded": {
+  "description": "A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.",
+  "items": {
+    "description": ""
+  }
+},
 "on_success": {
   "description": "A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.",
   "items": {
@@ -3784,6 +3846,12 @@
     "description": ""
   }
 },
+"on_streaming_backlog_exceeded": {
+  "description": "A list of email addresses to notify when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.",
+  "items": {
+    "description": ""
+  }
+},
 "on_success": {
   "description": "A list of email addresses to be notified when a run successfully completes. A run is considered to have completed successfully if it ends with a `TERMINATED` `life_cycle_state` and a `SUCCESS` result_state. If not specified on job creation, reset, or update, the list is empty, and notifications are not sent.",
   "items": {
@@ -4526,6 +4594,17 @@
     }
   }
 },
+"on_streaming_backlog_exceeded": {
+  "description": "An optional list of system notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.\nA maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.",
+  "items": {
+    "description": "",
+    "properties": {
+      "id": {
+        "description": ""
+      }
+    }
+  }
+},
 "on_success": {
   "description": "An optional list of system notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property.",
   "items": {
@@ -4565,6 +4644,17 @@
 "pause_status": {
   "description": "Whether this trigger is paused or not."
 },
+"periodic": {
+  "description": "Periodic trigger settings.",
+  "properties": {
+    "interval": {
+      "description": "The interval at which the trigger should run."
+    },
+    "unit": {
+      "description": "The unit of time for the interval."
+    }
+  }
+},
 "table": {
   "description": "Old table trigger settings name. Deprecated in favor of `table_update`.",
   "properties": {
@@ -4643,6 +4733,17 @@
     }
   }
 },
+"on_streaming_backlog_exceeded": {
+  "description": "An optional list of system notification IDs to call when any streaming backlog thresholds are exceeded for any stream.\nStreaming backlog thresholds can be set in the `health` field using the following metrics: `STREAMING_BACKLOG_BYTES`, `STREAMING_BACKLOG_RECORDS`, `STREAMING_BACKLOG_SECONDS`, or `STREAMING_BACKLOG_FILES`.\nAlerting is based on the 10-minute average of these metrics. If the issue persists, notifications are resent every 30 minutes.\nA maximum of 3 destinations can be specified for the `on_streaming_backlog_exceeded` property.",
+  "items": {
+    "description": "",
+    "properties": {
+      "id": {
+        "description": ""
+      }
+    }
+  }
+},
 "on_success": {
   "description": "An optional list of system notification IDs to call when the run completes successfully. A maximum of 3 destinations can be specified for the `on_success` property.",
   "items": {
@@ -4671,16 +4772,16 @@
 "description": "Configuration for Inference Tables which automatically logs requests and responses to Unity Catalog.",
 "properties": {
   "catalog_name": {
-    "description": "The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if it was already set."
+    "description": "The name of the catalog in Unity Catalog. NOTE: On update, you cannot change the catalog name if the inference table is already enabled."
   },
   "enabled": {
-    "description": "If inference tables are enabled or not. NOTE: If you have already disabled payload logging once, you cannot enable again."
+    "description": "Indicates whether the inference table is enabled."
   },
   "schema_name": {
-    "description": "The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if it was already set."
+    "description": "The name of the schema in Unity Catalog. NOTE: On update, you cannot change the schema name if the inference table is already enabled."
   },
   "table_name_prefix": {
-    "description": "The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if it was already set."
+    "description": "The prefix of the table in Unity Catalog. NOTE: On update, you cannot change the prefix name if the inference table is already enabled."
   }
 }
 },
@@ -5554,7 +5655,7 @@
   }
 },
 "notebook": {
-  "description": "The path to a notebook that defines a pipeline and is stored in the \u003cDatabricks\u003e workspace.\n",
+  "description": "The path to a notebook that defines a pipeline and is stored in the Databricks workspace.\n",
   "properties": {
     "path": {
       "description": "The absolute path of the notebook."

@@ -24,7 +24,12 @@ func New() *cobra.Command {
   Databricks SQL object that periodically runs a query, evaluates a condition of
   its result, and notifies one or more users and/or notification destinations if
   the condition was met. Alerts can be scheduled using the sql_task type of
-  the Jobs API, e.g. :method:jobs/create.`,
+  the Jobs API, e.g. :method:jobs/create.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`,
 		GroupID: "sql",
 		Annotations: map[string]string{
 			"package": "sql",
@@ -73,7 +78,12 @@ func newCreate() *cobra.Command {
 
   Creates an alert. An alert is a Databricks SQL object that periodically runs a
   query, evaluates a condition of its result, and notifies users or notification
-  destinations if the condition was met.`
+  destinations if the condition was met.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -131,8 +141,13 @@ func newDelete() *cobra.Command {
 	cmd.Long = `Delete an alert.
 
   Deletes an alert. Deleted alerts are no longer accessible and cannot be
-  restored. **Note:** Unlike queries and dashboards, alerts cannot be moved to
-  the trash.`
+  restored. **Note**: Unlike queries and dashboards, alerts cannot be moved to
+  the trash.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -199,7 +214,12 @@ func newGet() *cobra.Command {
 	cmd.Short = `Get an alert.`
 	cmd.Long = `Get an alert.
 
-  Gets an alert.`
+  Gets an alert.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -261,7 +281,12 @@ func newList() *cobra.Command {
 	cmd.Short = `Get alerts.`
 	cmd.Long = `Get alerts.
 
-  Gets a list of alerts.`
+  Gets a list of alerts.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)
 
@@ -312,7 +337,12 @@ func newUpdate() *cobra.Command {
 	cmd.Short = `Update an alert.`
 	cmd.Long = `Update an alert.
 
-  Updates an alert.`
+  Updates an alert.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)

@@ -42,6 +42,7 @@ func New() *cobra.Command {
 	cmd.AddCommand(newGetEnvironment())
 	cmd.AddCommand(newList())
 	cmd.AddCommand(newListDeployments())
+	cmd.AddCommand(newStart())
 	cmd.AddCommand(newStop())
 	cmd.AddCommand(newUpdate())
 
@@ -615,6 +616,64 @@ func newListDeployments() *cobra.Command {
 	return cmd
 }
 
+// start start command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var startOverrides []func(
+	*cobra.Command,
+	*serving.StartAppRequest,
+)
+
+func newStart() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var startReq serving.StartAppRequest
+
+	// TODO: short flags
+
+	cmd.Use = "start NAME"
+	cmd.Short = `Start an app.`
+	cmd.Long = `Start an app.
+
+  Start the last active deployment of the app in the workspace.
+
+  Arguments:
+    NAME: The name of the app.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := root.ExactArgs(1)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		startReq.Name = args[0]
+
+		response, err := w.Apps.Start(ctx, startReq)
+		if err != nil {
+			return err
+		}
+		return cmdio.Render(ctx, response)
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range startOverrides {
+		fn(cmd, &startReq)
+	}
+
+	return cmd
+}
+
 // start stop command
 
 // Slice with functions to override default command behavior.

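The generated `newStart` above follows the codebase's override-slice convention: a package-level slice of functions that manually curated files can append to from `init()`, applied just before the command is returned. A minimal, stdlib-only sketch of that pattern (the `command` and `startRequest` types here are simplified stand-ins, not cobra or SDK types):

```go
package main

import "fmt"

// command stands in for *cobra.Command; startRequest for the SDK request type.
// Both are simplified so the pattern is runnable without external dependencies.
type command struct{ Use, Short string }
type startRequest struct{ Name string }

// Overrides registered from init() in manually curated files can adjust the
// generated command before it is returned to the caller.
var startOverrides []func(*command, *startRequest)

func newStart() *command {
	cmd := &command{Use: "start NAME", Short: "Start an app."}
	req := &startRequest{}
	for _, fn := range startOverrides {
		fn(cmd, req) // each override may tweak help text, flags, defaults, ...
	}
	return cmd
}

func main() {
	startOverrides = append(startOverrides, func(c *command, r *startRequest) {
		c.Short = "Start an app (overridden)."
	})
	fmt.Println(newStart().Short)
}
```

This keeps the generated file untouched by codegen while still letting hand-written code customize behavior.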
@@ -273,6 +273,8 @@ func newList() *cobra.Command {
 	// TODO: short flags
 
 	cmd.Flags().BoolVar(&listReq.IncludeBrowse, "include-browse", listReq.IncludeBrowse, `Whether to include catalogs in the response for which the principal can only access selective metadata for.`)
+	cmd.Flags().IntVar(&listReq.MaxResults, "max-results", listReq.MaxResults, `Maximum number of catalogs to return.`)
+	cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `Opaque pagination token to go to next page based on previous query.`)
 
 	cmd.Use = "list"
 	cmd.Short = `List catalogs.`

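The new `max-results`/`page-token` flags expose the token-based pagination added to the catalogs list API in SDK 0.43.0 (`MaxResults`/`PageToken` on the request, `NextPageToken` on the response). A stdlib-only sketch of draining such an API, with made-up page data standing in for real responses:

```go
package main

import "fmt"

// listPage mimics one page of a list response: a batch of names plus an opaque
// NextPageToken ("" when there are no further pages). The data is illustrative.
type listPage struct {
	Names         []string
	NextPageToken string
}

func fetchPage(pages map[string]listPage, token string) listPage {
	return pages[token] // stand-in for one API call with PageToken=token
}

// listAll drains every page by feeding each NextPageToken back as PageToken.
func listAll(pages map[string]listPage) []string {
	var all []string
	token := ""
	for {
		page := fetchPage(pages, token)
		all = append(all, page.Names...)
		if page.NextPageToken == "" {
			return all
		}
		token = page.NextPageToken
	}
}

func main() {
	pages := map[string]listPage{
		"":   {Names: []string{"main", "samples"}, NextPageToken: "t1"},
		"t1": {Names: []string{"sandbox"}},
	}
	fmt.Println(listAll(pages))
}
```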
@@ -268,8 +268,8 @@ func newList() *cobra.Command {
 
   Fetch a paginated list of dashboard objects.
 
-  ### **Warning: Calling this API concurrently 10 or more times could result in
-  throttling, service degradation, or a temporary ban.**`
+  **Warning**: Calling this API concurrently 10 or more times could result in
+  throttling, service degradation, or a temporary ban.`
 
 	cmd.Annotations = make(map[string]string)

@@ -25,7 +25,12 @@ func New() *cobra.Command {
   This API does not support searches. It returns the full list of SQL warehouses
   in your workspace. We advise you to use any text editor, REST client, or
   grep to search the response from this API for the name of your SQL warehouse
-  as it appears in Databricks SQL.`,
+  as it appears in Databricks SQL.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`,
 		GroupID: "sql",
 		Annotations: map[string]string{
 			"package": "sql",
@@ -60,7 +65,12 @@ func newList() *cobra.Command {
 
   Retrieves a full list of SQL warehouses available in this workspace. All
   fields that appear in this API response are enumerated for clarity. However,
-  you need only a SQL warehouse's id to create new queries against it.`
+  you need only a SQL warehouse's id to create new queries against it.
+
+  **Note**: A new version of the Databricks SQL API will soon be available.
+  [Learn more]
+
+  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`
 
 	cmd.Annotations = make(map[string]string)

@@ -348,6 +348,7 @@ func newUpdate() *cobra.Command {
 	cmd.Flags().StringVar(&updateReq.CredentialName, "credential-name", updateReq.CredentialName, `Name of the storage credential used with this location.`)
 	// TODO: complex arg: encryption_details
 	cmd.Flags().BoolVar(&updateReq.Force, "force", updateReq.Force, `Force update even if changing url invalidates dependent external tables or mounts.`)
+	cmd.Flags().Var(&updateReq.IsolationMode, "isolation-mode", `Whether the current securable is accessible from all workspaces or a specific set of workspaces. Supported values: [ISOLATION_MODE_ISOLATED, ISOLATION_MODE_OPEN]`)
 	cmd.Flags().StringVar(&updateReq.NewName, "new-name", updateReq.NewName, `New name for the external location.`)
 	cmd.Flags().StringVar(&updateReq.Owner, "owner", updateReq.Owner, `The owner of the external location.`)
 	cmd.Flags().BoolVar(&updateReq.ReadOnly, "read-only", updateReq.ReadOnly, `Indicates whether the external location is read-only.`)

@@ -69,6 +69,8 @@ func newCreate() *cobra.Command {
 	cmd.Short = `Create a function.`
 	cmd.Long = `Create a function.
 
+  **WARNING: This API is experimental and will change in future versions**
+
   Creates a new function
 
   The user must have the following permissions in order for the function to be
@@ -1502,24 +1502,15 @@ func newSubmit() *cobra.Command {
 	cmd.Flags().Var(&submitJson, "json", `either inline JSON string or @path/to/file.json with request body`)
 
 	// TODO: array: access_control_list
-	// TODO: complex arg: condition_task
-	// TODO: complex arg: dbt_task
 	// TODO: complex arg: email_notifications
+	// TODO: array: environments
 	// TODO: complex arg: git_source
 	// TODO: complex arg: health
 	cmd.Flags().StringVar(&submitReq.IdempotencyToken, "idempotency-token", submitReq.IdempotencyToken, `An optional token that can be used to guarantee the idempotency of job run requests.`)
-	// TODO: complex arg: notebook_task
 	// TODO: complex arg: notification_settings
-	// TODO: complex arg: pipeline_task
-	// TODO: complex arg: python_wheel_task
 	// TODO: complex arg: queue
 	// TODO: complex arg: run_as
-	// TODO: complex arg: run_job_task
 	cmd.Flags().StringVar(&submitReq.RunName, "run-name", submitReq.RunName, `An optional name for the run.`)
-	// TODO: complex arg: spark_jar_task
-	// TODO: complex arg: spark_python_task
-	// TODO: complex arg: spark_submit_task
-	// TODO: complex arg: sql_task
 	// TODO: array: tasks
 	cmd.Flags().IntVar(&submitReq.TimeoutSeconds, "timeout-seconds", submitReq.TimeoutSeconds, `An optional timeout applied to each run of this job.`)
 	// TODO: complex arg: webhook_notifications

@@ -31,13 +31,23 @@ func New() *cobra.Command {
 
 	// Add methods
 	cmd.AddCommand(newCreate())
+	cmd.AddCommand(newCreateSchedule())
+	cmd.AddCommand(newCreateSubscription())
+	cmd.AddCommand(newDeleteSchedule())
+	cmd.AddCommand(newDeleteSubscription())
 	cmd.AddCommand(newGet())
 	cmd.AddCommand(newGetPublished())
+	cmd.AddCommand(newGetSchedule())
+	cmd.AddCommand(newGetSubscription())
+	cmd.AddCommand(newList())
+	cmd.AddCommand(newListSchedules())
+	cmd.AddCommand(newListSubscriptions())
 	cmd.AddCommand(newMigrate())
 	cmd.AddCommand(newPublish())
 	cmd.AddCommand(newTrash())
 	cmd.AddCommand(newUnpublish())
 	cmd.AddCommand(newUpdate())
+	cmd.AddCommand(newUpdateSchedule())
 
 	// Apply optional overrides to this command.
 	for _, fn := range cmdOverrides {
@@ -126,6 +136,277 @@ func newCreate() *cobra.Command {
	return cmd
}

// start create-schedule command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createScheduleOverrides []func(
	*cobra.Command,
	*dashboards.CreateScheduleRequest,
)

func newCreateSchedule() *cobra.Command {
	cmd := &cobra.Command{}

	var createScheduleReq dashboards.CreateScheduleRequest
	var createScheduleJson flags.JsonFlag

	// TODO: short flags
	cmd.Flags().Var(&createScheduleJson, "json", `either inline JSON string or @path/to/file.json with request body`)

	cmd.Flags().StringVar(&createScheduleReq.DisplayName, "display-name", createScheduleReq.DisplayName, `The display name for schedule.`)
	cmd.Flags().Var(&createScheduleReq.PauseStatus, "pause-status", `The status indicates whether this schedule is paused or not. Supported values: [PAUSED, UNPAUSED]`)

	cmd.Use = "create-schedule DASHBOARD_ID"
	cmd.Short = `Create dashboard schedule.`
	cmd.Long = `Create dashboard schedule.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(1)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		if cmd.Flags().Changed("json") {
			err = createScheduleJson.Unmarshal(&createScheduleReq)
			if err != nil {
				return err
			}
		} else {
			return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
		}
		createScheduleReq.DashboardId = args[0]

		response, err := w.Lakeview.CreateSchedule(ctx, createScheduleReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range createScheduleOverrides {
		fn(cmd, &createScheduleReq)
	}

	return cmd
}

// start create-subscription command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createSubscriptionOverrides []func(
	*cobra.Command,
	*dashboards.CreateSubscriptionRequest,
)

func newCreateSubscription() *cobra.Command {
	cmd := &cobra.Command{}

	var createSubscriptionReq dashboards.CreateSubscriptionRequest
	var createSubscriptionJson flags.JsonFlag

	// TODO: short flags
	cmd.Flags().Var(&createSubscriptionJson, "json", `either inline JSON string or @path/to/file.json with request body`)

	cmd.Use = "create-subscription DASHBOARD_ID SCHEDULE_ID"
	cmd.Short = `Create schedule subscription.`
	cmd.Long = `Create schedule subscription.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the subscription belongs.
    SCHEDULE_ID: UUID identifying the schedule to which the subscription belongs.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(2)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		if cmd.Flags().Changed("json") {
			err = createSubscriptionJson.Unmarshal(&createSubscriptionReq)
			if err != nil {
				return err
			}
		} else {
			return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
		}
		createSubscriptionReq.DashboardId = args[0]
		createSubscriptionReq.ScheduleId = args[1]

		response, err := w.Lakeview.CreateSubscription(ctx, createSubscriptionReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range createSubscriptionOverrides {
		fn(cmd, &createSubscriptionReq)
	}

	return cmd
}

// start delete-schedule command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteScheduleOverrides []func(
	*cobra.Command,
	*dashboards.DeleteScheduleRequest,
)

func newDeleteSchedule() *cobra.Command {
	cmd := &cobra.Command{}

	var deleteScheduleReq dashboards.DeleteScheduleRequest

	// TODO: short flags

	cmd.Flags().StringVar(&deleteScheduleReq.Etag, "etag", deleteScheduleReq.Etag, `The etag for the schedule.`)

	cmd.Use = "delete-schedule DASHBOARD_ID SCHEDULE_ID"
	cmd.Short = `Delete dashboard schedule.`
	cmd.Long = `Delete dashboard schedule.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
    SCHEDULE_ID: UUID identifying the schedule.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(2)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		deleteScheduleReq.DashboardId = args[0]
		deleteScheduleReq.ScheduleId = args[1]

		err = w.Lakeview.DeleteSchedule(ctx, deleteScheduleReq)
		if err != nil {
			return err
		}
		return nil
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range deleteScheduleOverrides {
		fn(cmd, &deleteScheduleReq)
	}

	return cmd
}
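The delete commands expose an `--etag` flag, which suggests optimistic concurrency: a caller passes the etag it last read so the service can refuse the delete if the schedule changed in between. That interpretation is an assumption here; this toy in-memory store only illustrates the shape of such a check:

```go
package main

import "fmt"

// Toy record carrying a version tag (assumed semantics of the --etag flag).
type schedule struct{ etag string }

var store = map[string]schedule{"s1": {etag: "v2"}}

// deleteSchedule removes the record, but rejects the call when a non-empty
// etag does not match the stored one (i.e. the record changed since the
// caller read it). An empty etag skips the check, like omitting --etag.
func deleteSchedule(id, etag string) error {
	s, ok := store[id]
	if !ok {
		return fmt.Errorf("schedule %s not found", id)
	}
	if etag != "" && etag != s.etag {
		return fmt.Errorf("etag mismatch: have %s, got %s", s.etag, etag)
	}
	delete(store, id)
	return nil
}

func main() {
	fmt.Println(deleteSchedule("s1", "v1")) // stale etag is rejected
	fmt.Println(deleteSchedule("s1", "v2")) // matching etag deletes
}
```

Whether the Lakeview service applies exactly this rule is not stated in this file; the sketch is only a plausible reading of why the flag exists.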

// start delete-subscription command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var deleteSubscriptionOverrides []func(
	*cobra.Command,
	*dashboards.DeleteSubscriptionRequest,
)

func newDeleteSubscription() *cobra.Command {
	cmd := &cobra.Command{}

	var deleteSubscriptionReq dashboards.DeleteSubscriptionRequest

	// TODO: short flags

	cmd.Flags().StringVar(&deleteSubscriptionReq.Etag, "etag", deleteSubscriptionReq.Etag, `The etag for the subscription.`)

	cmd.Use = "delete-subscription DASHBOARD_ID SCHEDULE_ID SUBSCRIPTION_ID"
	cmd.Short = `Delete schedule subscription.`
	cmd.Long = `Delete schedule subscription.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the subscription belongs.
    SCHEDULE_ID: UUID identifying the schedule to which the subscription belongs.
    SUBSCRIPTION_ID: UUID identifying the subscription.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(3)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		deleteSubscriptionReq.DashboardId = args[0]
		deleteSubscriptionReq.ScheduleId = args[1]
		deleteSubscriptionReq.SubscriptionId = args[2]

		err = w.Lakeview.DeleteSubscription(ctx, deleteSubscriptionReq)
		if err != nil {
			return err
		}
		return nil
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range deleteSubscriptionOverrides {
		fn(cmd, &deleteSubscriptionReq)
	}

	return cmd
}

// start get command

// Slice with functions to override default command behavior.

@@ -242,6 +523,303 @@ func newGetPublished() *cobra.Command {
	return cmd
}

// start get-schedule command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getScheduleOverrides []func(
	*cobra.Command,
	*dashboards.GetScheduleRequest,
)

func newGetSchedule() *cobra.Command {
	cmd := &cobra.Command{}

	var getScheduleReq dashboards.GetScheduleRequest

	// TODO: short flags

	cmd.Use = "get-schedule DASHBOARD_ID SCHEDULE_ID"
	cmd.Short = `Get dashboard schedule.`
	cmd.Long = `Get dashboard schedule.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
    SCHEDULE_ID: UUID identifying the schedule.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(2)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		getScheduleReq.DashboardId = args[0]
		getScheduleReq.ScheduleId = args[1]

		response, err := w.Lakeview.GetSchedule(ctx, getScheduleReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range getScheduleOverrides {
		fn(cmd, &getScheduleReq)
	}

	return cmd
}

// start get-subscription command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getSubscriptionOverrides []func(
	*cobra.Command,
	*dashboards.GetSubscriptionRequest,
)

func newGetSubscription() *cobra.Command {
	cmd := &cobra.Command{}

	var getSubscriptionReq dashboards.GetSubscriptionRequest

	// TODO: short flags

	cmd.Use = "get-subscription DASHBOARD_ID SCHEDULE_ID SUBSCRIPTION_ID"
	cmd.Short = `Get schedule subscription.`
	cmd.Long = `Get schedule subscription.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the subscription belongs.
    SCHEDULE_ID: UUID identifying the schedule to which the subscription belongs.
    SUBSCRIPTION_ID: UUID identifying the subscription.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(3)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		getSubscriptionReq.DashboardId = args[0]
		getSubscriptionReq.ScheduleId = args[1]
		getSubscriptionReq.SubscriptionId = args[2]

		response, err := w.Lakeview.GetSubscription(ctx, getSubscriptionReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range getSubscriptionOverrides {
		fn(cmd, &getSubscriptionReq)
	}

	return cmd
}

// start list command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listOverrides []func(
	*cobra.Command,
	*dashboards.ListDashboardsRequest,
)

func newList() *cobra.Command {
	cmd := &cobra.Command{}

	var listReq dashboards.ListDashboardsRequest

	// TODO: short flags

	cmd.Flags().IntVar(&listReq.PageSize, "page-size", listReq.PageSize, `The number of dashboards to return per page.`)
	cmd.Flags().StringVar(&listReq.PageToken, "page-token", listReq.PageToken, `A page token, received from a previous ListDashboards call.`)
	cmd.Flags().BoolVar(&listReq.ShowTrashed, "show-trashed", listReq.ShowTrashed, `The flag to include dashboards located in the trash.`)
	cmd.Flags().Var(&listReq.View, "view", `Indicates whether to include all metadata from the dashboard in the response. Supported values: [DASHBOARD_VIEW_BASIC, DASHBOARD_VIEW_FULL]`)

	cmd.Use = "list"
	cmd.Short = `List dashboards.`
	cmd.Long = `List dashboards.`

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(0)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		response := w.Lakeview.List(ctx, listReq)
		return cmdio.RenderIterator(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range listOverrides {
		fn(cmd, &listReq)
	}

	return cmd
}
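The list commands wire `--page-size` and `--page-token` straight into the request and hand the result to `cmdio.RenderIterator`, which drains a paginated listing. A self-contained sketch of the token-driven paging loop such an iterator performs under the hood; the `page` shape and `fetch` function are local stand-ins, not the SDK's listing types:

```go
package main

import "fmt"

// One page of results plus the token for the next page (empty = last page).
type page struct {
	Items         []string
	NextPageToken string
}

// fetch simulates one ListDashboards-style call per page token.
func fetch(token string) page {
	if token == "" {
		return page{Items: []string{"dash-a", "dash-b"}, NextPageToken: "t1"}
	}
	return page{Items: []string{"dash-c"}}
}

func main() {
	var all []string
	token := ""
	for {
		p := fetch(token)
		all = append(all, p.Items...)
		if p.NextPageToken == "" {
			break // no more pages
		}
		token = p.NextPageToken
	}
	fmt.Println(all)
}
```

Passing `--page-token` manually resumes from a specific page; leaving it empty starts the loop from the beginning, as above.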

// start list-schedules command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listSchedulesOverrides []func(
	*cobra.Command,
	*dashboards.ListSchedulesRequest,
)

func newListSchedules() *cobra.Command {
	cmd := &cobra.Command{}

	var listSchedulesReq dashboards.ListSchedulesRequest

	// TODO: short flags

	cmd.Flags().IntVar(&listSchedulesReq.PageSize, "page-size", listSchedulesReq.PageSize, `The number of schedules to return per page.`)
	cmd.Flags().StringVar(&listSchedulesReq.PageToken, "page-token", listSchedulesReq.PageToken, `A page token, received from a previous ListSchedules call.`)

	cmd.Use = "list-schedules DASHBOARD_ID"
	cmd.Short = `List dashboard schedules.`
	cmd.Long = `List dashboard schedules.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(1)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		listSchedulesReq.DashboardId = args[0]

		response := w.Lakeview.ListSchedules(ctx, listSchedulesReq)
		return cmdio.RenderIterator(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range listSchedulesOverrides {
		fn(cmd, &listSchedulesReq)
	}

	return cmd
}

// start list-subscriptions command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var listSubscriptionsOverrides []func(
	*cobra.Command,
	*dashboards.ListSubscriptionsRequest,
)

func newListSubscriptions() *cobra.Command {
	cmd := &cobra.Command{}

	var listSubscriptionsReq dashboards.ListSubscriptionsRequest

	// TODO: short flags

	cmd.Flags().IntVar(&listSubscriptionsReq.PageSize, "page-size", listSubscriptionsReq.PageSize, `The number of subscriptions to return per page.`)
	cmd.Flags().StringVar(&listSubscriptionsReq.PageToken, "page-token", listSubscriptionsReq.PageToken, `A page token, received from a previous ListSubscriptions call.`)

	cmd.Use = "list-subscriptions DASHBOARD_ID SCHEDULE_ID"
	cmd.Short = `List schedule subscriptions.`
	cmd.Long = `List schedule subscriptions.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the subscription belongs.
    SCHEDULE_ID: UUID identifying the schedule to which the subscription belongs.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(2)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		listSubscriptionsReq.DashboardId = args[0]
		listSubscriptionsReq.ScheduleId = args[1]

		response := w.Lakeview.ListSubscriptions(ctx, listSubscriptionsReq)
		return cmdio.RenderIterator(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range listSubscriptionsOverrides {
		fn(cmd, &listSubscriptionsReq)
	}

	return cmd
}

// start migrate command

// Slice with functions to override default command behavior.

@@ -576,4 +1154,79 @@ func newUpdate() *cobra.Command {
	return cmd
}

// start update-schedule command

// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var updateScheduleOverrides []func(
	*cobra.Command,
	*dashboards.UpdateScheduleRequest,
)

func newUpdateSchedule() *cobra.Command {
	cmd := &cobra.Command{}

	var updateScheduleReq dashboards.UpdateScheduleRequest
	var updateScheduleJson flags.JsonFlag

	// TODO: short flags
	cmd.Flags().Var(&updateScheduleJson, "json", `either inline JSON string or @path/to/file.json with request body`)

	cmd.Flags().StringVar(&updateScheduleReq.DisplayName, "display-name", updateScheduleReq.DisplayName, `The display name for schedule.`)
	cmd.Flags().StringVar(&updateScheduleReq.Etag, "etag", updateScheduleReq.Etag, `The etag for the schedule.`)
	cmd.Flags().Var(&updateScheduleReq.PauseStatus, "pause-status", `The status indicates whether this schedule is paused or not. Supported values: [PAUSED, UNPAUSED]`)

	cmd.Use = "update-schedule DASHBOARD_ID SCHEDULE_ID"
	cmd.Short = `Update dashboard schedule.`
	cmd.Long = `Update dashboard schedule.

  Arguments:
    DASHBOARD_ID: UUID identifying the dashboard to which the schedule belongs.
    SCHEDULE_ID: UUID identifying the schedule.`

	// This command is being previewed; hide from help output.
	cmd.Hidden = true

	cmd.Annotations = make(map[string]string)

	cmd.Args = func(cmd *cobra.Command, args []string) error {
		check := root.ExactArgs(2)
		return check(cmd, args)
	}

	cmd.PreRunE = root.MustWorkspaceClient
	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
		ctx := cmd.Context()
		w := root.WorkspaceClient(ctx)

		if cmd.Flags().Changed("json") {
			err = updateScheduleJson.Unmarshal(&updateScheduleReq)
			if err != nil {
				return err
			}
		} else {
			return fmt.Errorf("please provide command input in JSON format by specifying the --json flag")
		}
		updateScheduleReq.DashboardId = args[0]
		updateScheduleReq.ScheduleId = args[1]

		response, err := w.Lakeview.UpdateSchedule(ctx, updateScheduleReq)
		if err != nil {
			return err
		}
		return cmdio.Render(ctx, response)
	}

	// Disable completions since they are not applicable.
	// Can be overridden by manual implementation in `override.go`.
	cmd.ValidArgsFunction = cobra.NoFileCompletions

	// Apply optional overrides to this command.
	for _, fn := range updateScheduleOverrides {
		fn(cmd, &updateScheduleReq)
	}

	return cmd
}

// end service Lakeview
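Every command constructor in the Lakeview file ends the same way: it runs a package-level slice of override hooks that manually curated files populate from `init()`. A runnable sketch of that pattern using local stand-in types rather than the real cobra and dashboards types:

```go
package main

import "fmt"

// Stand-ins for *cobra.Command and the SDK request type.
type command struct{ Short string }

type createScheduleRequest struct{ DisplayName string }

// Generated code declares the slice; curated files append to it.
var createScheduleOverrides []func(*command, *createScheduleRequest)

func init() {
	// An override.go-style file would register tweaks like this.
	createScheduleOverrides = append(createScheduleOverrides,
		func(cmd *command, req *createScheduleRequest) {
			cmd.Short = "Create dashboard schedule (curated help text)."
		})
}

// The constructor applies every registered hook before returning,
// mirroring the trailing loop in each generated new* function.
func newCreateSchedule() *command {
	cmd := &command{Short: "Create dashboard schedule."}
	req := &createScheduleRequest{}
	for _, fn := range createScheduleOverrides {
		fn(cmd, req)
	}
	return cmd
}

func main() {
	fmt.Println(newCreateSchedule().Short)
}
```

This keeps the generated file free of hand edits: regenerating it never clobbers the curated hooks, because they live in separate files and attach themselves at init time.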
@@ -23,7 +23,12 @@ func New() *cobra.Command {
		Long: `These endpoints are used for CRUD operations on query definitions. Query
  definitions include the target SQL warehouse, query text, name, description,
  tags, parameters, and visualizations. Queries can be scheduled using the
  sql_task type of the Jobs API, e.g. :method:jobs/create.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`,
		GroupID: "sql",
		Annotations: map[string]string{
			"package": "sql",
@@ -76,7 +81,12 @@ func newCreate() *cobra.Command {
  available SQL warehouses. Or you can copy the data_source_id from an
  existing query.

  **Note**: You cannot add a visualization until you create the query.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -135,7 +145,12 @@ func newDelete() *cobra.Command {

  Moves a query to the trash. Trashed queries immediately disappear from
  searches and list views, and they cannot be used for alerts. The trash is
  deleted after 30 days.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -203,7 +218,12 @@ func newGet() *cobra.Command {
	cmd.Long = `Get a query definition.

  Retrieve a query object definition along with contextual permissions
  information about the currently authenticated user.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -278,8 +298,13 @@ func newList() *cobra.Command {
  Gets a list of queries. Optionally, this list can be filtered by a search
  term.

  **Warning**: Calling this API concurrently 10 or more times could result in
  throttling, service degradation, or a temporary ban.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -330,7 +355,12 @@ func newRestore() *cobra.Command {
	cmd.Long = `Restore a query.

  Restore a query that has been moved to the trash. A restored query appears in
  list views and searches. You can use restored queries for alerts.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -409,7 +439,12 @@ func newUpdate() *cobra.Command {

  Modify this query definition.

  **Note**: You cannot undo this operation.

  **Note**: A new version of the Databricks SQL API will soon be available.
  [Learn more]

  [Learn more]: https://docs.databricks.com/en/whats-coming.html#updates-to-the-databricks-sql-api-for-managing-queries-alerts-and-data-sources`

	cmd.Annotations = make(map[string]string)
@@ -366,6 +366,7 @@ func newUpdate() *cobra.Command {
 	cmd.Flags().StringVar(&updateReq.Comment, "comment", updateReq.Comment, `Comment associated with the credential.`)
 	// TODO: complex arg: databricks_gcp_service_account
 	cmd.Flags().BoolVar(&updateReq.Force, "force", updateReq.Force, `Force update even if there are dependent external locations or external tables.`)
+	cmd.Flags().Var(&updateReq.IsolationMode, "isolation-mode", `Whether the current securable is accessible from all workspaces or a specific set of workspaces. Supported values: [ISOLATION_MODE_ISOLATED, ISOLATION_MODE_OPEN]`)
 	cmd.Flags().StringVar(&updateReq.NewName, "new-name", updateReq.NewName, `New name for the storage credential.`)
 	cmd.Flags().StringVar(&updateReq.Owner, "owner", updateReq.Owner, `Username of current owner of credential.`)
 	cmd.Flags().BoolVar(&updateReq.ReadOnly, "read-only", updateReq.ReadOnly, `Whether the storage credential is only usable for read operations.`)
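The new `--isolation-mode` flag is registered with `cmd.Flags().Var`, which accepts any type implementing pflag's `Value` interface; the SDK enum validates input in its `Set` method, which is where the "Supported values: [...]" contract is enforced. Below is a minimal self-contained sketch of that pattern; the `isolationMode` type is a hypothetical stand-in, not the SDK's `catalog.CatalogIsolationMode`:

```go
package main

import "fmt"

// isolationMode is a hypothetical stand-in for the SDK enum passed to
// cmd.Flags().Var. Set rejects anything outside the supported values.
type isolationMode string

func (m *isolationMode) String() string { return string(*m) }

func (m *isolationMode) Set(v string) error {
	switch v {
	case "ISOLATION_MODE_ISOLATED", "ISOLATION_MODE_OPEN":
		*m = isolationMode(v)
		return nil
	}
	return fmt.Errorf("unsupported value %q, expected ISOLATION_MODE_ISOLATED or ISOLATION_MODE_OPEN", v)
}

// Type is the extra method pflag.Value requires beyond flag.Value.
func (m *isolationMode) Type() string { return "IsolationMode" }

func main() {
	var m isolationMode
	fmt.Println(m.Set("ISOLATION_MODE_ISOLATED") == nil) // valid value accepted
	fmt.Println(m.Set("BOGUS") == nil)                   // invalid value rejected
}
```

Because validation happens in `Set`, a bad value fails at flag-parse time with a usage error rather than reaching the API.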
@@ -42,6 +42,7 @@ func New() *cobra.Command {
 	cmd.AddCommand(newGetIndex())
 	cmd.AddCommand(newListIndexes())
 	cmd.AddCommand(newQueryIndex())
+	cmd.AddCommand(newQueryNextPage())
 	cmd.AddCommand(newScanIndex())
 	cmd.AddCommand(newSyncIndex())
 	cmd.AddCommand(newUpsertDataVectorIndex())
@@ -416,6 +417,7 @@ func newQueryIndex() *cobra.Command {
 	cmd.Flags().StringVar(&queryIndexReq.FiltersJson, "filters-json", queryIndexReq.FiltersJson, `JSON string representing query filters.`)
 	cmd.Flags().IntVar(&queryIndexReq.NumResults, "num-results", queryIndexReq.NumResults, `Number of results to return.`)
 	cmd.Flags().StringVar(&queryIndexReq.QueryText, "query-text", queryIndexReq.QueryText, `Query text.`)
+	cmd.Flags().StringVar(&queryIndexReq.QueryType, "query-type", queryIndexReq.QueryType, `The query type to use.`)
 	// TODO: array: query_vector
 	cmd.Flags().Float64Var(&queryIndexReq.ScoreThreshold, "score-threshold", queryIndexReq.ScoreThreshold, `Threshold for the approximate nearest neighbor search.`)
@@ -469,6 +471,76 @@ func newQueryIndex() *cobra.Command {
 	return cmd
 }
 
+// start query-next-page command
+
+// Slice with functions to override default command behavior.
+// Functions can be added from the `init()` function in manually curated files in this directory.
+var queryNextPageOverrides []func(
+	*cobra.Command,
+	*vectorsearch.QueryVectorIndexNextPageRequest,
+)
+
+func newQueryNextPage() *cobra.Command {
+	cmd := &cobra.Command{}
+
+	var queryNextPageReq vectorsearch.QueryVectorIndexNextPageRequest
+	var queryNextPageJson flags.JsonFlag
+
+	// TODO: short flags
+	cmd.Flags().Var(&queryNextPageJson, "json", `either inline JSON string or @path/to/file.json with request body`)
+
+	cmd.Flags().StringVar(&queryNextPageReq.EndpointName, "endpoint-name", queryNextPageReq.EndpointName, `Name of the endpoint.`)
+	cmd.Flags().StringVar(&queryNextPageReq.PageToken, "page-token", queryNextPageReq.PageToken, `Page token returned from previous QueryVectorIndex or QueryVectorIndexNextPage API.`)
+
+	cmd.Use = "query-next-page INDEX_NAME"
+	cmd.Short = `Query next page.`
+	cmd.Long = `Query next page.
+
+  Use next_page_token returned from previous QueryVectorIndex or
+  QueryVectorIndexNextPage request to fetch next page of results.
+
+  Arguments:
+    INDEX_NAME: Name of the vector index to query.`
+
+	cmd.Annotations = make(map[string]string)
+
+	cmd.Args = func(cmd *cobra.Command, args []string) error {
+		check := root.ExactArgs(1)
+		return check(cmd, args)
+	}
+
+	cmd.PreRunE = root.MustWorkspaceClient
+	cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
+		ctx := cmd.Context()
+		w := root.WorkspaceClient(ctx)
+
+		if cmd.Flags().Changed("json") {
+			err = queryNextPageJson.Unmarshal(&queryNextPageReq)
+			if err != nil {
+				return err
+			}
+		}
+		queryNextPageReq.IndexName = args[0]
+
+		response, err := w.VectorSearchIndexes.QueryNextPage(ctx, queryNextPageReq)
+		if err != nil {
+			return err
+		}
+		return cmdio.Render(ctx, response)
+	}
+
+	// Disable completions since they are not applicable.
+	// Can be overridden by manual implementation in `override.go`.
+	cmd.ValidArgsFunction = cobra.NoFileCompletions
+
+	// Apply optional overrides to this command.
+	for _, fn := range queryNextPageOverrides {
+		fn(cmd, &queryNextPageReq)
+	}
+
+	return cmd
+}
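The command added above drives token-based pagination: `QueryVectorIndex` returns a `next_page_token`, and `query-next-page` feeds it back via `--page-token` until the server returns an empty token. Here is a self-contained sketch of that loop, with a simulated `fetch` standing in for the SDK calls; the `page` type and the example data are illustrative, not SDK types:

```go
package main

import "fmt"

// page mirrors the shape of a paginated query response: one batch of
// results plus a token for the next page (empty when there are no more).
type page struct {
	Results       []string
	NextPageToken string
}

// fetch simulates QueryVectorIndex (empty token) and
// QueryVectorIndexNextPage (non-empty token). A real client would call
// w.VectorSearchIndexes instead.
func fetch(token string) page {
	switch token {
	case "":
		return page{Results: []string{"doc1", "doc2"}, NextPageToken: "t1"}
	case "t1":
		return page{Results: []string{"doc3"}, NextPageToken: ""}
	default:
		return page{}
	}
}

// collectAll follows next-page tokens until the server signals the last page.
func collectAll() []string {
	var all []string
	token := ""
	for {
		p := fetch(token)
		all = append(all, p.Results...)
		if p.NextPageToken == "" {
			return all
		}
		token = p.NextPageToken
	}
}

func main() {
	fmt.Println(collectAll()) // [doc1 doc2 doc3]
}
```

The termination condition is the empty token, not an empty result batch, so a page with zero results but a non-empty token still advances the loop.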
 
 // start scan-index command
 
 // Slice with functions to override default command behavior.
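The `queryNextPageOverrides` slice in the generated code is the codegen's extension hook: each constructor applies whatever hooks manually curated files registered, so regeneration never clobbers customizations. A minimal sketch of the pattern, using hypothetical stand-in types rather than `*cobra.Command` and the real request struct:

```go
package main

import "fmt"

// command and request are hypothetical stand-ins for *cobra.Command and the
// generated request type.
type command struct{ Short string }
type request struct{ PageToken string }

// Generated code declares the hook slice; curated files append to it from
// init() in the same package.
var queryNextPageOverrides []func(*command, *request)

func newQueryNextPage() *command {
	cmd := &command{Short: "Query next page."}
	req := &request{}
	// Apply optional overrides, mirroring the loop in the generated file.
	for _, fn := range queryNextPageOverrides {
		fn(cmd, req)
	}
	return cmd
}

func main() {
	// What an override.go-style init() would do: adjust the generated command.
	queryNextPageOverrides = append(queryNextPageOverrides,
		func(c *command, r *request) { c.Short = "Query next page (customized)." })
	fmt.Println(newQueryNextPage().Short)
}
```

Because hooks receive pointers to both the command and its request struct, an override can change flags, help text, or default request values without editing generated files.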
go.mod

@@ -5,7 +5,7 @@ go 1.21
 require (
 	github.com/Masterminds/semver/v3 v3.2.1 // MIT
 	github.com/briandowns/spinner v1.23.1 // Apache 2.0
-	github.com/databricks/databricks-sdk-go v0.42.0 // Apache 2.0
+	github.com/databricks/databricks-sdk-go v0.43.0 // Apache 2.0
 	github.com/fatih/color v1.17.0 // MIT
 	github.com/ghodss/yaml v1.0.0 // MIT + NOTICE
 	github.com/google/uuid v1.6.0 // BSD-3-Clause
@@ -32,8 +32,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
 github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
 github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
 github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
-github.com/databricks/databricks-sdk-go v0.42.0 h1:WKdoqnvb+jvsR9+IYkC3P4BH5eJHRzVOr59y3mCoY+s=
-github.com/databricks/databricks-sdk-go v0.42.0/go.mod h1:a9rr0FOHLL26kOjQjZZVFjIYmRABCbrAWVeundDEVG8=
+github.com/databricks/databricks-sdk-go v0.43.0 h1:x4laolWhYlsQg2t8yWEGyRPZy4/Wv3pKnLEoJfVin7I=
+github.com/databricks/databricks-sdk-go v0.43.0/go.mod h1:a9rr0FOHLL26kOjQjZZVFjIYmRABCbrAWVeundDEVG8=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=