build(deps): bump github.com/databricks/databricks-sdk-go from 0.59.0 to 0.60.0 (#2504)

Bumps
[github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go)
from 0.59.0 to 0.60.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/releases">github.com/databricks/databricks-sdk-go's
releases</a>.</em></p>
<blockquote>
<h2>v0.60.0</h2>
<h2>Release v0.60.0</h2>
<h3>API Changes</h3>
<p>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/ml#ForecastingAPI">w.Forecasting</a>
workspace-level service.
Added ExecuteMessageAttachmentQuery and GetMessageAttachmentQueryResult
methods for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GenieAPI">w.Genie</a>
workspace-level service.
Added StatementId field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GenieQueryAttachment">dashboards.GenieQueryAttachment</a>.
Added BudgetPolicyId field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#CreateServingEndpoint">serving.CreateServingEndpoint</a>.
Added BudgetPolicyId field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpoint">serving.ServingEndpoint</a>.
Added BudgetPolicyId field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointDetailed">serving.ServingEndpointDetailed</a>.
Added CouldNotGetModelDeploymentsException enum value for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MessageErrorType">dashboards.MessageErrorType</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md">github.com/databricks/databricks-sdk-go's
changelog</a>.</em></p>
<blockquote>
<h2>Release v0.60.0</h2>
<h3>API Changes</h3>
<ul>
<li>Added <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/ml#ForecastingAPI">w.Forecasting</a>
workspace-level service.</li>
<li>Added <code>ExecuteMessageAttachmentQuery</code> and
<code>GetMessageAttachmentQueryResult</code> methods for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GenieAPI">w.Genie</a>
workspace-level service.</li>
<li>Added <code>StatementId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#GenieQueryAttachment">dashboards.GenieQueryAttachment</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#CreateServingEndpoint">serving.CreateServingEndpoint</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpoint">serving.ServingEndpoint</a>.</li>
<li>Added <code>BudgetPolicyId</code> field for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/serving#ServingEndpointDetailed">serving.ServingEndpointDetailed</a>.</li>
<li>Added <code>CouldNotGetModelDeploymentsException</code> enum value
for <a
href="https://pkg.go.dev/github.com/databricks/databricks-sdk-go/service/dashboards#MessageErrorType">dashboards.MessageErrorType</a>.</li>
</ul>
</blockquote>
</details>
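For orientation, here is a minimal Go sketch of the new v0.60.0 surface described above. The workspace client setup is standard; the table name, endpoint name, and budget policy ID are placeholder values, not anything taken from this PR, and settings unrelated to the new fields (for example the serving endpoint's compute config) are omitted.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/ml"
	"github.com/databricks/databricks-sdk-go/service/serving"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// New in v0.60.0: the w.Forecasting workspace-level service.
	// CreateExperiment starts a long-running operation and returns a waiter.
	wait, err := w.Forecasting.CreateExperiment(ctx, ml.CreateForecastingExperimentRequest{
		TrainDataPath:       "main.demo.sales_train", // placeholder Unity Catalog table
		TargetColumn:        "sales",
		TimeColumn:          "date",
		DataGranularityUnit: "D",
		ForecastHorizon:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("forecasting experiment created: %+v\n", wait.Response)

	// New in v0.60.0: the BudgetPolicyId field on serving endpoint creation.
	// The endpoint's compute config is left out to keep the sketch short.
	_, err = w.ServingEndpoints.Create(ctx, serving.CreateServingEndpoint{
		Name:           "demo-endpoint",
		BudgetPolicyId: "placeholder-budget-policy-id",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```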
<details>
<summary>Commits</summary>
<ul>
<li><a
href="424a24bf81"><code>424a24b</code></a>
[Release] Release v0.60.0</li>
<li><a
href="19e0348fb0"><code>19e0348</code></a>
Update OpenAPI spec and Update mockery version (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1168">#1168</a>)</li>
<li><a
href="c33f416057"><code>c33f416</code></a>
Remove unnecessary config files and GitHub workflows (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1165">#1165</a>)</li>
<li><a
href="2f2d945bb2"><code>2f2d945</code></a>
[Fix] Properly parse the <code>RetryInfo</code> error detail (<a
href="https://redirect.github.com/databricks/databricks-sdk-go/issues/1162">#1162</a>)</li>
<li>See full diff in <a
href="https://github.com/databricks/databricks-sdk-go/compare/v0.59.0...v0.60.0">compare
view</a></li>
</ul>
</details>
<br />

<details>
<summary>Most Recent Ignore Conditions Applied to This Pull
Request</summary>

| Dependency Name | Ignore Conditions |
| --- | --- |
| github.com/databricks/databricks-sdk-go | [>= 0.28.a, < 0.29] |
</details>


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=github.com/databricks/databricks-sdk-go&package-manager=go_modules&previous-version=0.59.0&new-version=0.60.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andrew Nester <andrew.nester@databricks.com>
dependabot[bot] 2025-03-17 15:20:36 +00:00 committed by GitHub
parent e5f39b5916
commit c08a061f1a
22 changed files with 623 additions and 82 deletions


@ -14,7 +14,6 @@
"go"
],
"post_generate": [
"[ ! -f tagging.py ] || mv tagging.py internal/genkit/tagging.py",
"go test -timeout 240s -run TestConsistentDatabricksSdkVersion github.com/databricks/cli/internal/build",
"make schema",
"echo 'bundle/internal/tf/schema/\\*.go linguist-generated=true' >> ./.gitattributes",


@ -1 +1 @@
e5c870006a536121442cfd2441bdc8a5fb76ae1e
cd641c9dd4febe334b339dd7878d099dcf0eeab5

.gitattributes (vendored, 1 change)

@ -67,6 +67,7 @@ cmd/workspace/disable-legacy-dbfs/disable-legacy-dbfs.go linguist-generated=true
cmd/workspace/enhanced-security-monitoring/enhanced-security-monitoring.go linguist-generated=true
cmd/workspace/experiments/experiments.go linguist-generated=true
cmd/workspace/external-locations/external-locations.go linguist-generated=true
cmd/workspace/forecasting/forecasting.go linguist-generated=true
cmd/workspace/functions/functions.go linguist-generated=true
cmd/workspace/genie/genie.go linguist-generated=true
cmd/workspace/git-credentials/git-credentials.go linguist-generated=true


@ -63,4 +63,9 @@ integration: vendor
integration-short: vendor
VERBOSE_TEST=1 $(INTEGRATION) -short
generate:
genkit update-sdk
[ ! -f tagging.py ] || mv tagging.py internal/genkit/tagging.py
[ ! -f .github/workflows/next-changelog.yml ] || rm .github/workflows/next-changelog.yml
.PHONY: lint tidy lintcheck fmt test cover showcover build snapshot vendor schema integration integration-short acc-cover acc-showcover docs


@ -10,3 +10,7 @@
### Internal
### API Changes
* Added `databricks genie execute-message-attachment-query` command.
* Added `databricks genie get-message-attachment-query-result` command.
* `databricks genie execute-message-query` marked as Deprecated.
* `databricks genie get-message-query-result-by-attachment` marked as Deprecated.
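
These commands wrap the Genie SDK methods added in this release (see the `genie.go` changes later in this diff). A rough Go sketch of the underlying calls, with all IDs as placeholders, could look like this:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		log.Fatal(err)
	}

	// Re-execute the SQL behind an expired message query attachment.
	// This is the flow that replaces the now-deprecated ExecuteMessageQuery.
	_, err = w.Genie.ExecuteMessageAttachmentQuery(ctx, dashboards.GenieExecuteMessageAttachmentQueryRequest{
		SpaceId:        "space-id",
		ConversationId: "conversation-id",
		MessageId:      "message-id",
		AttachmentId:   "attachment-id",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the attachment's query result once the message status is
	// EXECUTING_QUERY or COMPLETED (replaces GetMessageQueryResultByAttachment).
	result, err := w.Genie.GetMessageAttachmentQueryResult(ctx, dashboards.GenieGetMessageAttachmentQueryResultRequest{
		SpaceId:        "space-id",
		ConversationId: "conversation-id",
		MessageId:      "message-id",
		AttachmentId:   "attachment-id",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", result)
}
```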


@ -1,7 +1,7 @@
Cloud = false # This test needs to run against stubbed Databricks API
[[Server]]
Pattern = "GET /api/2.1/jobs/get"
Pattern = "GET /api/2.2/jobs/get"
Response.Body = '''
{
"job_id": 11223344,


@ -97,6 +97,7 @@ Vector Search
vector-search-indexes **Index**: An efficient representation of your embedding vectors that supports real-time and efficient approximate nearest neighbor (ANN) search queries.
Dashboards
genie Genie provides a no-code experience for business users, powered by AI/BI.
lakeview These APIs provide specific management operations for Lakeview dashboards.
Marketplace


@ -155,6 +155,11 @@ func AddHandlers(server *testserver.Server) {
return req.Workspace.JobsGet(jobId)
})
server.Handle("GET", "/api/2.2/jobs/get", func(req testserver.Request) any {
jobId := req.URL.Query().Get("job_id")
return req.Workspace.JobsGet(jobId)
})
server.Handle("GET", "/api/2.1/jobs/list", func(req testserver.Request) any {
return req.Workspace.JobsList()
})


@ -1,6 +1,6 @@
{
"method": "POST",
"path": "/api/2.1/jobs/create",
"path": "/api/2.2/jobs/create",
"body": {
"name": "abc"
}


@ -1,7 +1,7 @@
RecordRequests = true
[[Server]]
Pattern = "POST /api/2.1/jobs/create"
Pattern = "POST /api/2.2/jobs/create"
Response.Body = '''
{
"error_code": "PERMISSION_DENIED",


@ -8,7 +8,7 @@
]
},
"method": "POST",
"path": "/api/2.1/jobs/create",
"path": "/api/2.2/jobs/create",
"body": {
"name": "abc"
}


@ -2,7 +2,7 @@ RecordRequests = true
IncludeRequestHeaders = ["Authorization", "User-Agent"]
[[Server]]
Pattern = "POST /api/2.1/jobs/create"
Pattern = "POST /api/2.2/jobs/create"
Response.Body = '''
{
"job_id": 1111


@ -88,7 +88,28 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
- Currently, Databricks allows at most 45 custom tags
- Clusters can only reuse cloud resources if the resources' tags are a subset of the cluster tags
"data_security_mode": {}
"data_security_mode":
"description": |
Data security mode decides what data governance model to use when accessing data
from a cluster.
The following modes can only be used when `kind = CLASSIC_PREVIEW`.
* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
The following modes can be used regardless of `kind`.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
The following modes are deprecated starting with Databricks Runtime 15.0 and
will be removed for future Databricks Runtime versions:
* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.
* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesnt have UC nor passthrough enabled.
"docker_image": {}
"driver_instance_pool_id":
"description": |-
@ -123,7 +144,18 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
This field can only be used when `kind = CLASSIC_PREVIEW`.
When set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`
"kind": {}
"kind":
"description": |
The kind of compute described by this compute specification.
Depending on `kind`, different validations and default values will be applied.
Clusters with `kind = CLASSIC_PREVIEW` support the following fields, whereas clusters with no specified `kind` do not.
* [is_single_node](/api/workspace/clusters/create#is_single_node)
* [use_ml_runtime](/api/workspace/clusters/create#use_ml_runtime)
* [data_security_mode](/api/workspace/clusters/create#data_security_mode) set to `DATA_SECURITY_MODE_AUTO`, `DATA_SECURITY_MODE_DEDICATED`, or `DATA_SECURITY_MODE_STANDARD`
By using the [simple form](https://docs.databricks.com/compute/simple-form.html), your clusters are automatically using `kind = CLASSIC_PREVIEW`.
"node_type_id":
"description": |
This field encodes, through a single value, the resources available to each of
@ -143,7 +175,15 @@ github.com/databricks/cli/bundle/config/resources.Cluster:
"policy_id":
"description": |-
The ID of the cluster policy used to create the cluster if applicable.
"runtime_engine": {}
"runtime_engine":
"description": |
Determines the cluster's runtime engine, either standard or Photon.
This field is not compatible with legacy `spark_version` values that contain `-photon-`.
Remove `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`.
If left unspecified, the runtime engine defaults to standard unless the spark_version
contains -photon-, in which case Photon will be used.
"single_user_name":
"description": |-
Single user name if data_security_mode is `SINGLE_USER`
@ -264,7 +304,9 @@ github.com/databricks/cli/bundle/config/resources.Job:
If `git_source` is set, these tasks retrieve the file from the remote repository by default. However, this behavior can be overridden by setting `source` to `WORKSPACE` on the task.
Note: dbt and SQL File tasks support only version-controlled sources. If dbt or SQL File tasks are used, `git_source` must be defined on the job.
"health": {}
"health":
"description": |-
An optional set of health rules that can be defined for this job.
"job_clusters":
"description": |-
A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings.
@ -292,7 +334,11 @@ github.com/databricks/cli/bundle/config/resources.Job:
"queue":
"description": |-
The queue settings of the job.
"run_as": {}
"run_as":
"description": |-
Write-only setting. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the job.
Either `user_name` or `service_principal_name` should be specified. If not, an error is thrown.
"schedule":
"description": |-
An optional periodic schedule for this job. The default behavior is that the job only runs when triggered by clicking “Run Now” in the Jobs UI or sending an API request to `runNow`.
@ -365,6 +411,9 @@ github.com/databricks/cli/bundle/config/resources.ModelServingEndpoint:
"ai_gateway":
"description": |-
The AI Gateway configuration for the serving endpoint. NOTE: Only external model and provisioned throughput endpoints are currently supported.
"budget_policy_id":
"description": |-
The budget policy to be applied to the serving endpoint.
"config":
"description": |-
The core config of the serving endpoint.
@ -440,7 +489,11 @@ github.com/databricks/cli/bundle/config/resources.Pipeline:
"restart_window":
"description": |-
Restart window of this pipeline.
"run_as": {}
"run_as":
"description": |-
Write-only setting, available only in Create/Update calls. Specifies the user or service principal that the pipeline runs as. If not specified, the pipeline runs as the user who created the pipeline.
Only `user_name` or `service_principal_name` can be specified. If both are specified, an error is thrown.
"schema":
"description": |-
The default schema (database) where tables are read from or published to. The presence of this field implies that the pipeline is in direct publishing mode.
@ -529,7 +582,9 @@ github.com/databricks/cli/bundle/config/resources.Schema:
"name":
"description": |-
Name of schema, relative to parent catalog.
"properties": {}
"properties":
"description": |-
A map of key-value properties attached to the securable.
"storage_root":
"description": |-
Storage root URL for managed tables within schema.
@ -878,7 +933,11 @@ github.com/databricks/databricks-sdk-go/service/compute.AutoScale:
The minimum number of workers to which the cluster can scale down when underutilized.
It is also the initial number of workers the cluster will have after creation.
github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes:
"availability": {}
"availability":
"description": |
Availability type used for all subsequent nodes past the `first_on_demand` ones.
Note: If `first_on_demand` is zero, this availability type will be used for the entire cluster.
"ebs_volume_count":
"description": |-
The number of volumes launched for each instance. Users can choose up to 10 volumes.
@ -908,7 +967,9 @@ github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes:
"ebs_volume_throughput":
"description": |-
If using gp3 volumes, what throughput to use for the disk. If this is not set, the maximum performance of a gp2 volume with the same volume size will be used.
"ebs_volume_type": {}
"ebs_volume_type":
"description": |-
The type of EBS volumes that will be launched with this cluster.
"first_on_demand":
"description": |-
The first `first_on_demand` nodes of the cluster will be placed on on-demand instances.
@ -967,7 +1028,11 @@ github.com/databricks/databricks-sdk-go/service/compute.AwsAvailability:
- |-
SPOT_WITH_FALLBACK
github.com/databricks/databricks-sdk-go/service/compute.AzureAttributes:
"availability": {}
"availability":
"description": |-
Availability type used for all subsequent nodes past the `first_on_demand` ones.
Note: If `first_on_demand` is zero (which only happens on pool clusters), this availability
type will be used for the entire cluster.
"first_on_demand":
"description": |-
The first `first_on_demand` nodes of the cluster will be placed on on-demand instances.
@ -1062,7 +1127,28 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
- Currently, Databricks allows at most 45 custom tags
- Clusters can only reuse cloud resources if the resources' tags are a subset of the cluster tags
"data_security_mode": {}
"data_security_mode":
"description": |
Data security mode decides what data governance model to use when accessing data
from a cluster.
The following modes can only be used when `kind = CLASSIC_PREVIEW`.
* `DATA_SECURITY_MODE_AUTO`: Databricks will choose the most appropriate access mode depending on your compute configuration.
* `DATA_SECURITY_MODE_STANDARD`: Alias for `USER_ISOLATION`.
* `DATA_SECURITY_MODE_DEDICATED`: Alias for `SINGLE_USER`.
The following modes can be used regardless of `kind`.
* `NONE`: No security isolation for multiple users sharing the cluster. Data governance features are not available in this mode.
* `SINGLE_USER`: A secure cluster that can only be exclusively used by a single user specified in `single_user_name`. Most programming languages, cluster features and data governance features are available in this mode.
* `USER_ISOLATION`: A secure cluster that can be shared by multiple users. Cluster users are fully isolated so that they cannot see each other's data and credentials. Most data governance features are supported in this mode. But programming languages and cluster features might be limited.
The following modes are deprecated starting with Databricks Runtime 15.0 and
will be removed for future Databricks Runtime versions:
* `LEGACY_TABLE_ACL`: This mode is for users migrating from legacy Table ACL clusters.
* `LEGACY_PASSTHROUGH`: This mode is for users migrating from legacy Passthrough on high concurrency clusters.
* `LEGACY_SINGLE_USER`: This mode is for users migrating from legacy Passthrough on standard clusters.
* `LEGACY_SINGLE_USER_STANDARD`: This mode provides a way that doesnt have UC nor passthrough enabled.
"docker_image": {}
"driver_instance_pool_id":
"description": |-
@ -1097,7 +1183,18 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
This field can only be used when `kind = CLASSIC_PREVIEW`.
When set to true, Databricks will automatically set single node related `custom_tags`, `spark_conf`, and `num_workers`
"kind": {}
"kind":
"description": |
The kind of compute described by this compute specification.
Depending on `kind`, different validations and default values will be applied.
Clusters with `kind = CLASSIC_PREVIEW` support the following fields, whereas clusters with no specified `kind` do not.
* [is_single_node](/api/workspace/clusters/create#is_single_node)
* [use_ml_runtime](/api/workspace/clusters/create#use_ml_runtime)
* [data_security_mode](/api/workspace/clusters/create#data_security_mode) set to `DATA_SECURITY_MODE_AUTO`, `DATA_SECURITY_MODE_DEDICATED`, or `DATA_SECURITY_MODE_STANDARD`
By using the [simple form](https://docs.databricks.com/compute/simple-form.html), your clusters are automatically using `kind = CLASSIC_PREVIEW`.
"node_type_id":
"description": |
This field encodes, through a single value, the resources available to each of
@ -1117,7 +1214,15 @@ github.com/databricks/databricks-sdk-go/service/compute.ClusterSpec:
"policy_id":
"description": |-
The ID of the cluster policy used to create the cluster if applicable.
"runtime_engine": {}
"runtime_engine":
"description": |
Determines the cluster's runtime engine, either standard or Photon.
This field is not compatible with legacy `spark_version` values that contain `-photon-`.
Remove `-photon-` from the `spark_version` and set `runtime_engine` to `PHOTON`.
If left unspecified, the runtime engine defaults to standard unless the spark_version
contains -photon-, in which case Photon will be used.
"single_user_name":
"description": |-
Single user name if data_security_mode is `SINGLE_USER`
@ -1211,7 +1316,9 @@ github.com/databricks/databricks-sdk-go/service/compute.DockerBasicAuth:
"description": |-
Name of the user
github.com/databricks/databricks-sdk-go/service/compute.DockerImage:
"basic_auth": {}
"basic_auth":
"description": |-
Basic auth with username and password
"url":
"description": |-
URL of the docker image.
@ -1242,7 +1349,10 @@ github.com/databricks/databricks-sdk-go/service/compute.Environment:
Allowed dependency could be <requirement specifier>, <archive url/path>, <local project path>(WSFS or Volumes in Databricks), <vcs project url>
E.g. dependencies: ["foo==0.0.1", "-r /Workspace/test/requirements.txt"]
github.com/databricks/databricks-sdk-go/service/compute.GcpAttributes:
"availability": {}
"availability":
"description": |-
This field determines whether the instance pool will contain preemptible
VMs, on-demand VMs, or preemptible VMs with a fallback to on-demand VMs if the former is unavailable.
"boot_disk_size":
"description": |-
boot disk size in GB
@ -1604,7 +1714,9 @@ github.com/databricks/databricks-sdk-go/service/jobs.GenAiComputeTask:
"command":
"description": |-
Command launcher to run the actual script, e.g. bash, python etc.
"compute": {}
"compute":
"description": |-
Next field: 4
"dl_runtime_image":
"description": |-
Runtime image
@ -1671,7 +1783,9 @@ github.com/databricks/databricks-sdk-go/service/jobs.GitSource:
"git_provider":
"description": |-
Unique identifier of the service used to host the Git repository. The value is case insensitive.
"git_snapshot": {}
"git_snapshot":
"description": |-
Read-only state of the remote repository at the time the job was run. This field is only included on job runs.
"git_tag":
"description": |-
Name of the tag to be checked out and used by this job. This field cannot be specified in conjunction with git_branch or git_commit.
@ -1743,7 +1857,10 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobEnvironment:
"environment_key":
"description": |-
The key of an environment. It has to be unique within a job.
"spec": {}
"spec":
"description": |-
The environment entity used to preserve serverless environment side panel and jobs' environment for non-notebook task.
In this minimal environment spec, only pip dependencies are supported.
github.com/databricks/databricks-sdk-go/service/jobs.JobNotificationSettings:
"no_alert_for_canceled_runs":
"description": |-
@ -1830,8 +1947,18 @@ github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthOperator:
- |-
GREATER_THAN
github.com/databricks/databricks-sdk-go/service/jobs.JobsHealthRule:
"metric": {}
"op": {}
"metric":
"description": |-
Specifies the health metric that is being evaluated for a particular health rule.
* `RUN_DURATION_SECONDS`: Expected total time for a run in seconds.
* `STREAMING_BACKLOG_BYTES`: An estimate of the maximum bytes of data waiting to be consumed across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_RECORDS`: An estimate of the maximum offset lag across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_SECONDS`: An estimate of the maximum consumer delay across all streams. This metric is in Public Preview.
* `STREAMING_BACKLOG_FILES`: An estimate of the maximum number of outstanding files across all streams. This metric is in Public Preview.
"op":
"description": |-
Specifies the operator used to compare the health metric value with the specified threshold.
"value":
"description": |-
Specifies the threshold value that the health metric should obey to satisfy the health rule.
@ -2193,8 +2320,12 @@ github.com/databricks/databricks-sdk-go/service/jobs.Task:
"for_each_task":
"description": |-
The task executes a nested task for every input provided when the `for_each_task` field is present.
"gen_ai_compute_task": {}
"health": {}
"gen_ai_compute_task":
"description": |-
Next field: 9
"health":
"description": |-
An optional set of health rules that can be defined for this job.
"job_cluster_key":
"description": |-
If job_cluster_key, this task is executed reusing the cluster specified in `job.settings.job_clusters`.


@ -477,6 +477,14 @@ github.com/databricks/databricks-sdk-go/service/apps.ComputeStatus:
"description": |-
PLACEHOLDER
"state": {}
github.com/databricks/databricks-sdk-go/service/catalog.MonitorInferenceLog:
"granularities":
"description": |-
Granularities for aggregating data into time windows based on their timestamp. Valid values are 5 minutes, 30 minutes, 1 hour, 1 day, n weeks, 1 month, or 1 year.
github.com/databricks/databricks-sdk-go/service/catalog.MonitorTimeSeries:
"granularities":
"description": |-
Granularities for aggregating data into time windows based on their timestamp. Valid values are 5 minutes, 30 minutes, 1 hour, 1 day, n weeks, 1 month, or 1 year.
github.com/databricks/databricks-sdk-go/service/compute.AwsAttributes:
"availability":
"description": |-
@ -508,10 +516,25 @@ github.com/databricks/databricks-sdk-go/service/compute.DockerImage:
"basic_auth":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.Environment:
"dependencies":
"description": |-
List of pip dependencies, as supported by the version of pip in this environment.
github.com/databricks/databricks-sdk-go/service/compute.GcpAttributes:
"availability":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.InitScriptInfo:
"abfss":
"description": |-
Contains the Azure Data Lake Storage destination path
github.com/databricks/databricks-sdk-go/service/compute.LogAnalyticsInfo:
"log_analytics_primary_key":
"description": |-
The primary key for the Azure Log Analytics agent configuration
"log_analytics_workspace_id":
"description": |-
The workspace ID for the Azure Log Analytics agent configuration
github.com/databricks/databricks-sdk-go/service/jobs.GenAiComputeTask:
"compute":
"description": |-
@ -579,26 +602,3 @@ github.com/databricks/databricks-sdk-go/service/serving.ServedModelInput:
"model_version":
"description": |-
PLACEHOLDER
github.com/databricks/databricks-sdk-go/service/compute.InitScriptInfo:
"abfss":
"description": |-
Contains the Azure Data Lake Storage destination path
github.com/databricks/databricks-sdk-go/service/compute.Environment:
"dependencies":
"description": |-
List of pip dependencies, as supported by the version of pip in this environment.
github.com/databricks/databricks-sdk-go/service/catalog.MonitorInferenceLog:
"granularities":
"description": |-
Granularities for aggregating data into time windows based on their timestamp. Valid values are 5 minutes, 30 minutes, 1 hour, 1 day, n weeks, 1 month, or 1 year.
github.com/databricks/databricks-sdk-go/service/catalog.MonitorTimeSeries:
"granularities":
"description": |-
Granularities for aggregating data into time windows based on their timestamp. Valid values are 5 minutes, 30 minutes, 1 hour, 1 day, n weeks, 1 month, or 1 year.
github.com/databricks/databricks-sdk-go/service/compute.LogAnalyticsInfo:
"log_analytics_primary_key":
"description": |-
The primary key for the Azure Log Analytics agent configuration
"log_analytics_workspace_id":
"description": |-
The workspace ID for the Azure Log Analytics agent configuration


@ -569,6 +569,10 @@
"description": "The AI Gateway configuration for the serving endpoint. NOTE: Only external model and provisioned throughput endpoints are currently supported.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.AiGatewayConfig"
},
"budget_policy_id": {
"description": "The budget policy to be applied to the serving endpoint.",
"$ref": "#/$defs/string"
},
"config": {
"description": "The core config of the serving endpoint.",
"$ref": "#/$defs/github.com/databricks/databricks-sdk-go/service/serving.EndpointCoreConfigInput"

cmd/workspace/cmd.go (generated, 2 changes)

@ -28,6 +28,7 @@ import (
data_sources "github.com/databricks/cli/cmd/workspace/data-sources"
experiments "github.com/databricks/cli/cmd/workspace/experiments"
external_locations "github.com/databricks/cli/cmd/workspace/external-locations"
forecasting "github.com/databricks/cli/cmd/workspace/forecasting"
functions "github.com/databricks/cli/cmd/workspace/functions"
genie "github.com/databricks/cli/cmd/workspace/genie"
git_credentials "github.com/databricks/cli/cmd/workspace/git-credentials"
@ -191,6 +192,7 @@ func All() []*cobra.Command {
out = append(out, workspace.New())
out = append(out, workspace_bindings.New())
out = append(out, workspace_conf.New())
out = append(out, forecasting.New())
return out
}

cmd/workspace/forecasting/forecasting.go (new generated executable file, 248 lines)

@ -0,0 +1,248 @@
// Code generated from OpenAPI specs by Databricks SDK Generator. DO NOT EDIT.
package forecasting
import (
"fmt"
"time"
"github.com/databricks/cli/cmd/root"
"github.com/databricks/cli/libs/cmdio"
"github.com/databricks/cli/libs/command"
"github.com/databricks/cli/libs/flags"
"github.com/databricks/databricks-sdk-go/service/ml"
"github.com/spf13/cobra"
)
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var cmdOverrides []func(*cobra.Command)
func New() *cobra.Command {
cmd := &cobra.Command{
Use: "forecasting",
Short: `The Forecasting API allows you to create and get serverless forecasting experiments.`,
Long: `The Forecasting API allows you to create and get serverless forecasting
experiments`,
GroupID: "ml",
Annotations: map[string]string{
"package": "ml",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreateExperiment())
cmd.AddCommand(newGetExperiment())
// Apply optional overrides to this command.
for _, fn := range cmdOverrides {
fn(cmd)
}
return cmd
}
// start create-experiment command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var createExperimentOverrides []func(
*cobra.Command,
*ml.CreateForecastingExperimentRequest,
)
func newCreateExperiment() *cobra.Command {
cmd := &cobra.Command{}
var createExperimentReq ml.CreateForecastingExperimentRequest
var createExperimentJson flags.JsonFlag
var createExperimentSkipWait bool
var createExperimentTimeout time.Duration
cmd.Flags().BoolVar(&createExperimentSkipWait, "no-wait", createExperimentSkipWait, `do not wait to reach SUCCEEDED state`)
cmd.Flags().DurationVar(&createExperimentTimeout, "timeout", 120*time.Minute, `maximum amount of time to reach SUCCEEDED state`)
// TODO: short flags
cmd.Flags().Var(&createExperimentJson, "json", `either inline JSON string or @path/to/file.json with request body`)
cmd.Flags().StringVar(&createExperimentReq.CustomWeightsColumn, "custom-weights-column", createExperimentReq.CustomWeightsColumn, `Name of the column in the input training table used to customize the weight for each time series to calculate weighted metrics.`)
cmd.Flags().Int64Var(&createExperimentReq.DataGranularityQuantity, "data-granularity-quantity", createExperimentReq.DataGranularityQuantity, `The quantity of the input data granularity.`)
cmd.Flags().StringVar(&createExperimentReq.ExperimentPath, "experiment-path", createExperimentReq.ExperimentPath, `The path to the created experiment.`)
// TODO: array: holiday_regions
cmd.Flags().Int64Var(&createExperimentReq.MaxRuntime, "max-runtime", createExperimentReq.MaxRuntime, `The maximum duration in minutes for which the experiment is allowed to run.`)
cmd.Flags().StringVar(&createExperimentReq.PredictionDataPath, "prediction-data-path", createExperimentReq.PredictionDataPath, `The three-level (fully qualified) path to a unity catalog table.`)
cmd.Flags().StringVar(&createExperimentReq.PrimaryMetric, "primary-metric", createExperimentReq.PrimaryMetric, `The evaluation metric used to optimize the forecasting model.`)
cmd.Flags().StringVar(&createExperimentReq.RegisterTo, "register-to", createExperimentReq.RegisterTo, `The three-level (fully qualified) path to a unity catalog model.`)
cmd.Flags().StringVar(&createExperimentReq.SplitColumn, "split-column", createExperimentReq.SplitColumn, `Name of the column in the input training table used for custom data splits.`)
// TODO: array: timeseries_identifier_columns
// TODO: array: training_frameworks
cmd.Use = "create-experiment TRAIN_DATA_PATH TARGET_COLUMN TIME_COLUMN DATA_GRANULARITY_UNIT FORECAST_HORIZON"
cmd.Short = `Create a forecasting experiment.`
cmd.Long = `Create a forecasting experiment.
Creates a serverless forecasting experiment. Returns the experiment ID.
Arguments:
TRAIN_DATA_PATH: The three-level (fully qualified) name of a unity catalog table. This
table serves as the training data for the forecasting model.
TARGET_COLUMN: Name of the column in the input training table that serves as the
prediction target. The values in this column will be used as the ground
truth for model training.
TIME_COLUMN: Name of the column in the input training table that represents the
timestamp of each row.
DATA_GRANULARITY_UNIT: The time unit of the input data granularity. Together with
data_granularity_quantity field, this defines the time interval between
consecutive rows in the time series data. Possible values: * 'W' (weeks) *
'D' / 'days' / 'day' * 'hours' / 'hour' / 'hr' / 'h' * 'm' / 'minute' /
'min' / 'minutes' / 'T' * 'S' / 'seconds' / 'sec' / 'second' * 'M' /
'month' / 'months' * 'Q' / 'quarter' / 'quarters' * 'Y' / 'year' / 'years'
FORECAST_HORIZON: The number of time steps into the future for which predictions should be
made. This value represents a multiple of data_granularity_unit and
data_granularity_quantity determining how far ahead the model will
forecast.`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
if cmd.Flags().Changed("json") {
err := root.ExactArgs(0)(cmd, args)
if err != nil {
return fmt.Errorf("when --json flag is specified, no positional arguments are required. Provide 'train_data_path', 'target_column', 'time_column', 'data_granularity_unit', 'forecast_horizon' in your JSON input")
}
return nil
}
check := root.ExactArgs(5)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := command.WorkspaceClient(ctx)
if cmd.Flags().Changed("json") {
diags := createExperimentJson.Unmarshal(&createExperimentReq)
if diags.HasError() {
return diags.Error()
}
if len(diags) > 0 {
err := cmdio.RenderDiagnosticsToErrorOut(ctx, diags)
if err != nil {
return err
}
}
}
if !cmd.Flags().Changed("json") {
createExperimentReq.TrainDataPath = args[0]
}
if !cmd.Flags().Changed("json") {
createExperimentReq.TargetColumn = args[1]
}
if !cmd.Flags().Changed("json") {
createExperimentReq.TimeColumn = args[2]
}
if !cmd.Flags().Changed("json") {
createExperimentReq.DataGranularityUnit = args[3]
}
if !cmd.Flags().Changed("json") {
_, err = fmt.Sscan(args[4], &createExperimentReq.ForecastHorizon)
if err != nil {
return fmt.Errorf("invalid FORECAST_HORIZON: %s", args[4])
}
}
wait, err := w.Forecasting.CreateExperiment(ctx, createExperimentReq)
if err != nil {
return err
}
if createExperimentSkipWait {
return cmdio.Render(ctx, wait.Response)
}
spinner := cmdio.Spinner(ctx)
info, err := wait.OnProgress(func(i *ml.ForecastingExperiment) {
status := i.State
statusMessage := fmt.Sprintf("current status: %s", status)
spinner <- statusMessage
}).GetWithTimeout(createExperimentTimeout)
close(spinner)
if err != nil {
return err
}
return cmdio.Render(ctx, info)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range createExperimentOverrides {
fn(cmd, &createExperimentReq)
}
return cmd
}
// start get-experiment command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getExperimentOverrides []func(
*cobra.Command,
*ml.GetForecastingExperimentRequest,
)
func newGetExperiment() *cobra.Command {
cmd := &cobra.Command{}
var getExperimentReq ml.GetForecastingExperimentRequest
// TODO: short flags
cmd.Use = "get-experiment EXPERIMENT_ID"
cmd.Short = `Get a forecasting experiment.`
cmd.Long = `Get a forecasting experiment.
Public RPC to get forecasting experiment
Arguments:
EXPERIMENT_ID: The unique ID of a forecasting experiment`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(1)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := command.WorkspaceClient(ctx)
getExperimentReq.ExperimentId = args[0]
response, err := w.Forecasting.GetExperiment(ctx, getExperimentReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getExperimentOverrides {
fn(cmd, &getExperimentReq)
}
return cmd
}
// end service forecasting


@ -31,15 +31,14 @@ func New() *cobra.Command {
Annotations: map[string]string{
"package": "dashboards",
},
// This service is being previewed; hide from help output.
Hidden: true,
}
// Add methods
cmd.AddCommand(newCreateMessage())
cmd.AddCommand(newExecuteMessageAttachmentQuery())
cmd.AddCommand(newExecuteMessageQuery())
cmd.AddCommand(newGetMessage())
cmd.AddCommand(newGetMessageAttachmentQueryResult())
cmd.AddCommand(newGetMessageQueryResult())
cmd.AddCommand(newGetMessageQueryResultByAttachment())
cmd.AddCommand(newGetSpace())
@ -158,6 +157,71 @@ func newCreateMessage() *cobra.Command {
return cmd
}
// start execute-message-attachment-query command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var executeMessageAttachmentQueryOverrides []func(
*cobra.Command,
*dashboards.GenieExecuteMessageAttachmentQueryRequest,
)
func newExecuteMessageAttachmentQuery() *cobra.Command {
cmd := &cobra.Command{}
var executeMessageAttachmentQueryReq dashboards.GenieExecuteMessageAttachmentQueryRequest
// TODO: short flags
cmd.Use = "execute-message-attachment-query SPACE_ID CONVERSATION_ID MESSAGE_ID ATTACHMENT_ID"
cmd.Short = `Execute message attachment SQL query.`
cmd.Long = `Execute message attachment SQL query.
Execute the SQL for a message query attachment. Use this API when the query
attachment has expired and needs to be re-executed.
Arguments:
SPACE_ID: Genie space ID
CONVERSATION_ID: Conversation ID
MESSAGE_ID: Message ID
ATTACHMENT_ID: Attachment ID`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(4)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := command.WorkspaceClient(ctx)
executeMessageAttachmentQueryReq.SpaceId = args[0]
executeMessageAttachmentQueryReq.ConversationId = args[1]
executeMessageAttachmentQueryReq.MessageId = args[2]
executeMessageAttachmentQueryReq.AttachmentId = args[3]
response, err := w.Genie.ExecuteMessageAttachmentQuery(ctx, executeMessageAttachmentQueryReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range executeMessageAttachmentQueryOverrides {
fn(cmd, &executeMessageAttachmentQueryReq)
}
return cmd
}
// start execute-message-query command
// Slice with functions to override default command behavior.
@ -175,8 +239,8 @@ func newExecuteMessageQuery() *cobra.Command {
// TODO: short flags
cmd.Use = "execute-message-query SPACE_ID CONVERSATION_ID MESSAGE_ID"
cmd.Short = `Execute SQL query in a conversation message.`
cmd.Long = `Execute SQL query in a conversation message.
cmd.Short = `[Deprecated] Execute SQL query in a conversation message.`
cmd.Long = `[Deprecated] Execute SQL query in a conversation message.
Execute the SQL query in the message.
@ -185,6 +249,9 @@ func newExecuteMessageQuery() *cobra.Command {
CONVERSATION_ID: Conversation ID
MESSAGE_ID: Message ID`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -284,6 +351,72 @@ func newGetMessage() *cobra.Command {
return cmd
}
// start get-message-attachment-query-result command
// Slice with functions to override default command behavior.
// Functions can be added from the `init()` function in manually curated files in this directory.
var getMessageAttachmentQueryResultOverrides []func(
*cobra.Command,
*dashboards.GenieGetMessageAttachmentQueryResultRequest,
)
func newGetMessageAttachmentQueryResult() *cobra.Command {
cmd := &cobra.Command{}
var getMessageAttachmentQueryResultReq dashboards.GenieGetMessageAttachmentQueryResultRequest
// TODO: short flags
cmd.Use = "get-message-attachment-query-result SPACE_ID CONVERSATION_ID MESSAGE_ID ATTACHMENT_ID"
cmd.Short = `Get message attachment SQL query result.`
cmd.Long = `Get message attachment SQL query result.
Get the result of SQL query if the message has a query attachment. This is
only available if a message has a query attachment and the message status is
EXECUTING_QUERY OR COMPLETED.
Arguments:
SPACE_ID: Genie space ID
CONVERSATION_ID: Conversation ID
MESSAGE_ID: Message ID
ATTACHMENT_ID: Attachment ID`
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
check := root.ExactArgs(4)
return check(cmd, args)
}
cmd.PreRunE = root.MustWorkspaceClient
cmd.RunE = func(cmd *cobra.Command, args []string) (err error) {
ctx := cmd.Context()
w := command.WorkspaceClient(ctx)
getMessageAttachmentQueryResultReq.SpaceId = args[0]
getMessageAttachmentQueryResultReq.ConversationId = args[1]
getMessageAttachmentQueryResultReq.MessageId = args[2]
getMessageAttachmentQueryResultReq.AttachmentId = args[3]
response, err := w.Genie.GetMessageAttachmentQueryResult(ctx, getMessageAttachmentQueryResultReq)
if err != nil {
return err
}
return cmdio.Render(ctx, response)
}
// Disable completions since they are not applicable.
// Can be overridden by manual implementation in `override.go`.
cmd.ValidArgsFunction = cobra.NoFileCompletions
// Apply optional overrides to this command.
for _, fn := range getMessageAttachmentQueryResultOverrides {
fn(cmd, &getMessageAttachmentQueryResultReq)
}
return cmd
}
// start get-message-query-result command
// Slice with functions to override default command behavior.
@ -313,6 +446,9 @@ func newGetMessageQueryResult() *cobra.Command {
CONVERSATION_ID: Conversation ID
MESSAGE_ID: Message ID`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -365,8 +501,8 @@ func newGetMessageQueryResultByAttachment() *cobra.Command {
// TODO: short flags
cmd.Use = "get-message-query-result-by-attachment SPACE_ID CONVERSATION_ID MESSAGE_ID ATTACHMENT_ID"
cmd.Short = `Get conversation message SQL query result.`
cmd.Long = `Get conversation message SQL query result.
cmd.Short = `[Deprecated] Get conversation message SQL query result.`
cmd.Long = `[Deprecated] Get conversation message SQL query result.
Get the result of SQL query if the message has a query attachment. This is
only available if a message has a query attachment and the message status is
@ -378,6 +514,9 @@ func newGetMessageQueryResultByAttachment() *cobra.Command {
MESSAGE_ID: Message ID
ATTACHMENT_ID: Attachment ID`
// This command is being previewed; hide from help output.
cmd.Hidden = true
cmd.Annotations = make(map[string]string)
cmd.Args = func(cmd *cobra.Command, args []string) error {
@ -431,10 +570,10 @@ func newGetSpace() *cobra.Command {
// TODO: short flags
cmd.Use = "get-space SPACE_ID"
cmd.Short = `Get details of a Genie Space.`
cmd.Long = `Get details of a Genie Space.
cmd.Short = `Get Genie Space.`
cmd.Long = `Get Genie Space.
Get a Genie Space.
Get details of a Genie Space.
Arguments:
SPACE_ID: The ID associated with the Genie space`


@ -24,7 +24,7 @@ func New() *cobra.Command {
Short: `The Serving Endpoints API allows you to create, update, and delete model serving endpoints.`,
Long: `The Serving Endpoints API allows you to create, update, and delete model
serving endpoints.
You can use a serving endpoint to serve models from the Databricks Model
Registry or from Unity Catalog. Endpoints expose the underlying models as
scalable REST API endpoints using serverless compute. This means the endpoints
@ -88,7 +88,7 @@ func newBuildLogs() *cobra.Command {
cmd.Use = "build-logs NAME SERVED_MODEL_NAME"
cmd.Short = `Get build logs for a served model.`
cmd.Long = `Get build logs for a served model.
Retrieves the build logs associated with the provided served model.
Arguments:
@ -155,6 +155,7 @@ func newCreate() *cobra.Command {
cmd.Flags().Var(&createJson, "json", `either inline JSON string or @path/to/file.json with request body`)
// TODO: complex arg: ai_gateway
cmd.Flags().StringVar(&createReq.BudgetPolicyId, "budget-policy-id", createReq.BudgetPolicyId, `The budget policy to be applied to the serving endpoint.`)
// TODO: complex arg: config
// TODO: array: rate_limits
cmd.Flags().BoolVar(&createReq.RouteOptimized, "route-optimized", createReq.RouteOptimized, `Enable route optimization for the serving endpoint.`)
@ -308,7 +309,7 @@ func newExportMetrics() *cobra.Command {
cmd.Use = "export-metrics NAME"
cmd.Short = `Get metrics of a serving endpoint.`
cmd.Long = `Get metrics of a serving endpoint.
Retrieves the metrics associated with the provided serving endpoint in either
Prometheus or OpenMetrics exposition format.
@ -369,7 +370,7 @@ func newGet() *cobra.Command {
cmd.Use = "get NAME"
cmd.Short = `Get a single serving endpoint.`
cmd.Long = `Get a single serving endpoint.
Retrieves the details for a single serving endpoint.
Arguments:
@ -427,7 +428,7 @@ func newGetOpenApi() *cobra.Command {
cmd.Use = "get-open-api NAME"
cmd.Short = `Get the schema for a serving endpoint.`
cmd.Long = `Get the schema for a serving endpoint.
Get the query schema of the serving endpoint in OpenAPI format. The schema
contains information for the supported paths, input and output format and
datatypes.
@ -489,7 +490,7 @@ func newGetPermissionLevels() *cobra.Command {
cmd.Use = "get-permission-levels SERVING_ENDPOINT_ID"
cmd.Short = `Get serving endpoint permission levels.`
cmd.Long = `Get serving endpoint permission levels.
Gets the permission levels that a user can have on an object.
Arguments:
@ -547,7 +548,7 @@ func newGetPermissions() *cobra.Command {
cmd.Use = "get-permissions SERVING_ENDPOINT_ID"
cmd.Short = `Get serving endpoint permissions.`
cmd.Long = `Get serving endpoint permissions.
Gets the permissions of a serving endpoint. Serving endpoints can inherit
permissions from their root object.
@ -715,7 +716,7 @@ func newLogs() *cobra.Command {
cmd.Use = "logs NAME SERVED_MODEL_NAME"
cmd.Short = `Get the latest logs for a served model.`
cmd.Long = `Get the latest logs for a served model.
Retrieves the service logs associated with the provided served model.
Arguments:
@ -782,7 +783,7 @@ func newPatch() *cobra.Command {
cmd.Use = "patch NAME"
cmd.Short = `Update tags of a serving endpoint.`
cmd.Long = `Update tags of a serving endpoint.
Used to batch add and delete tags from a serving endpoint with a single API
call.
@ -858,7 +859,7 @@ func newPut() *cobra.Command {
cmd.Use = "put NAME"
cmd.Short = `Update rate limits of a serving endpoint.`
cmd.Long = `Update rate limits of a serving endpoint.
Used to update the rate limits of a serving endpoint. NOTE: Only foundation
model endpoints are currently supported. For external models, use AI Gateway
to manage rate limits.
@ -938,7 +939,7 @@ func newPutAiGateway() *cobra.Command {
cmd.Use = "put-ai-gateway NAME"
cmd.Short = `Update AI Gateway of a serving endpoint.`
cmd.Long = `Update AI Gateway of a serving endpoint.
Used to update the AI Gateway of a serving endpoint. NOTE: Only external model
and provisioned throughput endpoints are currently supported.
@ -1098,7 +1099,7 @@ func newSetPermissions() *cobra.Command {
cmd.Use = "set-permissions SERVING_ENDPOINT_ID"
cmd.Short = `Set serving endpoint permissions.`
cmd.Long = `Set serving endpoint permissions.
Sets permissions on an object, replacing existing permissions if they exist.
Deletes all direct permissions if none are specified. Objects can inherit
permissions from their root object.
@ -1182,7 +1183,7 @@ func newUpdateConfig() *cobra.Command {
cmd.Use = "update-config NAME"
cmd.Short = `Update config of a serving endpoint.`
cmd.Long = `Update config of a serving endpoint.
Updates any combination of the serving endpoint's served entities, the compute
configuration of those served entities, and the endpoint's traffic config. An
endpoint that already has an update in progress can not be updated until the
@ -1272,7 +1273,7 @@ func newUpdatePermissions() *cobra.Command {
cmd.Use = "update-permissions SERVING_ENDPOINT_ID"
cmd.Short = `Update serving endpoint permissions.`
cmd.Long = `Update serving endpoint permissions.
Updates the permissions on a serving endpoint. Serving endpoints can inherit
permissions from their root object.

go.mod (2 changes)

@ -9,7 +9,7 @@ require (
github.com/BurntSushi/toml v1.4.0 // MIT
github.com/Masterminds/semver/v3 v3.3.1 // MIT
github.com/briandowns/spinner v1.23.1 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.59.0 // Apache 2.0
github.com/databricks/databricks-sdk-go v0.60.0 // Apache 2.0
github.com/fatih/color v1.18.0 // MIT
github.com/google/uuid v1.6.0 // BSD-3-Clause
github.com/gorilla/mux v1.8.1 // BSD 3-Clause

go.sum (generated, 4 changes)

@ -34,8 +34,8 @@ github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGX
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/cyphar/filepath-securejoin v0.2.5 h1:6iR5tXJ/e6tJZzzdMc1km3Sa7RRIVBKAK32O2s7AYfo=
github.com/cyphar/filepath-securejoin v0.2.5/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
github.com/databricks/databricks-sdk-go v0.59.0 h1:m87rbnoeO7A6+QKo4QzwyPE5AzEeGvopEaavn3F5y/o=
github.com/databricks/databricks-sdk-go v0.59.0/go.mod h1:JpLizplEs+up9/Z4Xf2x++o3sM9eTTWFGzIXAptKJzI=
github.com/databricks/databricks-sdk-go v0.60.0 h1:mCnPsK7gLxF6ps9WihQkh3OwOTTLq/JEzsBzDq1yYbc=
github.com/databricks/databricks-sdk-go v0.60.0/go.mod h1:JpLizplEs+up9/Z4Xf2x++o3sM9eTTWFGzIXAptKJzI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=


@ -14,6 +14,7 @@ from datetime import datetime, timezone
NEXT_CHANGELOG_FILE_NAME = "NEXT_CHANGELOG.md"
CHANGELOG_FILE_NAME = "CHANGELOG.md"
PACKAGE_FILE_NAME = ".package.json"
CODEGEN_FILE_NAME = ".codegen.json"
"""
This script tags the release of the SDKs using a combination of the GitHub API and Git commands.
It reads the local repository to determine necessary changes, updates changelogs, and creates tags.
@ -153,14 +154,14 @@ def update_version_references(tag_info: TagInfo) -> None:
Code references are defined in .package.json files.
"""
# Load version patterns from '.package.json' file
package_file_path = os.path.join(os.getcwd(), tag_info.package.path, PACKAGE_FILE_NAME)
# Load version patterns from '.codegen.json' file at the top level of the repository
package_file_path = os.path.join(os.getcwd(), CODEGEN_FILE_NAME)
with open(package_file_path, 'r') as file:
package_file = json.load(file)
version = package_file.get('version')
if not version:
print(f"Version not found in .package.json. Nothing to update.")
print(f"`version` not found in .codegen.json. Nothing to update.")
return
# Update the versions