## Changes
It wasn't working because it deferred to the regular `slog.TextHandler`
for the `WithAttrs` and `WithGroup` functions. Neither of these functions
mutates the handler; they return a new one. When the top-level logger
called one of these, log records in that context used the standard
handler instead of ours.
To implement tracking of attributes and groups, I followed the guide at
https://github.com/golang/example/blob/master/slog-handler-guide/README.md
for writing custom handlers.
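The core of that pattern, as a minimal sketch (the type and field names are illustrative, not the actual implementation in this repo):
```
package log

import (
	"context"
	"log/slog"
)

// trackingHandler records attributes and groups itself instead of deferring
// WithAttrs/WithGroup to the wrapped slog.TextHandler.
type trackingHandler struct {
	inner  slog.Handler
	attrs  []slog.Attr
	groups []string
}

func (h *trackingHandler) Enabled(ctx context.Context, level slog.Level) bool {
	return h.inner.Enabled(ctx, level)
}

func (h *trackingHandler) Handle(ctx context.Context, r slog.Record) error {
	// Formatting of the tracked attrs and groups is elided in this sketch.
	return h.inner.Handle(ctx, r)
}

// WithAttrs must return a handler of our own type. Returning
// h.inner.WithAttrs(attrs) would hand all subsequent records in that context
// to the standard handler, which is exactly the bug described above.
func (h *trackingHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	clone := *h
	clone.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...)
	return &clone
}

// WithGroup follows the same copy-on-write pattern for group names.
func (h *trackingHandler) WithGroup(name string) slog.Handler {
	clone := *h
	clone.groups = append(append([]string{}, h.groups...), name)
	return &clone
}
```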
## Tests
The new tests demonstrate formatting through `t.Log` and look good.
## Changes
This makes the behaviour consistent whether `python_wheel_wrapper` is on
or off when the job is run with the `--python-params` flag.
In `python_wheel_wrapper` mode, the dynamic `python_params` are converted
into a dynamic, specially named `notebook_param`, which the wrapper reads
with `dbutils` and passes to `sys.argv`.
Fixes #1000
## Tests
Added an integration test.
Integration tests pass.
## Changes
This PR adds documentation for positional arguments in commands that are
generated from the openapi spec.
Note: the changes to `.gitattributes` will be reverted / properly fixed in
https://github.com/databricks/cli/pull/1012
## Changes
This PR introduces the `skip_prompt_if` extension to the jsonschema
library. If the inputs provided by the user match the JSON schema then
the prompt for that property is skipped.
Right now only constant checks are supported, but if more complicated
conditionals are required in the future, this can be extended to support
`allOf`, `oneOf`, `anyOf`, etc., allowing template authors to specify
conditionals of arbitrary complexity.
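Conceptually, the constant check works like the sketch below; the function and the way the schema is represented here are simplified assumptions, not the repository's actual types:
```
// skipPrompt reports whether the prompt for a property can be skipped because
// the values the user already provided satisfy the skip_prompt_if schema.
// Only constant checks ("const") are supported for now.
func skipPrompt(skipPromptIf map[string]any, provided map[string]any) bool {
	props, ok := skipPromptIf["properties"].(map[string]any)
	if !ok {
		return false
	}
	for name, raw := range props {
		sub, ok := raw.(map[string]any)
		if !ok {
			return false
		}
		want, ok := sub["const"]
		if !ok {
			// Anything other than a constant check is unsupported: prompt.
			return false
		}
		got, ok := provided[name]
		if !ok || got != want {
			return false
		}
	}
	return true
}
```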
## Tests
Unit tests and manually.
## Changes
This PR adds versioning for bundle templates. Right now there is only
logic for the maximum template version the CLI supports. If we make a
breaking template change in the future, we can also add a minimum
template version supported by the CLI.
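The check amounts to something like this sketch (the constant and function names are hypothetical):
```
package template

import "fmt"

// maxSupportedVersion is the highest template schema version this build of
// the CLI understands (hypothetical name).
const maxSupportedVersion = 1

func validateVersion(v int) error {
	if v > maxSupportedVersion {
		return fmt.Errorf("template requires version %d, but this CLI only supports up to version %d; please upgrade the CLI", v, maxSupportedVersion)
	}
	return nil
}
```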
## Tests
Unit tests.
## Changes
Only clusters with their source attribute equal to `UI` or `API` should
be presented in the dropdown.
## Tests
Unit test and manual confirmation.
## Changes
The code included the to-be-created profile in the configuration and
that triggered the SDK to try and load it. Instead, we must use the
specified host and token directly.
## Tests
Manually. More integration test coverage tbd.
## Changes
The manual unshallow step is superfluous and can be done as part of the
`actions/checkout` step.
Companion to #1022.
## Tests
Manual trigger of the snapshot build workflow.
## Changes
Build failure for 32-bit Windows binary due to integer overflow in the
SDK.
We don't test 32-bit anywhere. I propose we stop publishing these builds
until we receive evidence they are still useful.
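For reference, this is the class of failure (a hypothetical snippet, not the SDK code): an untyped constant wider than 32 bits assigned to a plain `int` builds on 64-bit targets but fails with `GOOS=windows GOARCH=386`.
```
package main

import "fmt"

func main() {
	// Builds on amd64/arm64 where int is 64 bits wide; with GOARCH=386 the
	// compiler reports "constant 2147483648 overflows int".
	const tooBig = 1 << 31
	var n int = tooBig
	fmt.Println(n)
}
```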
## Tests
n/a
## Changes
We didn't return the error upon creating a workspace or account client.
If there is an error, it must always propagate up the stack. The result
of this bug was that we were setting a `nil` account or workspace
client, which in turn caused SIGSEGVs.
Fixes #913.
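The fix boils down to the usual pattern, shown here as a simplified sketch using the SDK's workspace client constructor (the surrounding function is hypothetical):
```
package root

import (
	"github.com/databricks/databricks-sdk-go"
)

// newWorkspaceClient propagates the construction error instead of dropping
// it. Previously the error was ignored, a nil client was stored, and later
// dereferences caused SIGSEGVs.
func newWorkspaceClient(cfg *databricks.Config) (*databricks.WorkspaceClient, error) {
	w, err := databricks.NewWorkspaceClient(cfg)
	if err != nil {
		return nil, err
	}
	return w, nil
}
```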
## Tests
Manually confirmed this fixes the linked issue. The CLI now correctly
returns an error when the client cannot be constructed.
The issue was reproducible using a `.databrickscfg` with a single,
incorrectly configured profile.
## Changes
If the glob call for a defined pipeline library yields no matches, leave
the entry as is. The next mutators in the chain will detect that the file
is missing, and the resulting error will be more user friendly.
Before the change:
```
Starting resource deployment
Error: terraform apply: exit status 1
Error: cannot create pipeline: libraries must contain at least one element
```
After:
```
Error: notebook ./non-existent not found
```
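Roughly, the mutator now behaves like this sketch (simplified; the function name is illustrative):
```
package libraries

import "path/filepath"

// expandGlob keeps the original entry when the pattern matches nothing, so a
// later mutator can report "notebook ./non-existent not found" instead of
// terraform failing with "libraries must contain at least one element".
func expandGlob(pattern string) []string {
	matches, err := filepath.Glob(pattern)
	if err != nil || len(matches) == 0 {
		return []string{pattern} // leave the entry as is
	}
	return matches
}
```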
## Tests
Added regression unit tests
## Changes
Removed the hash from the upload path since it's not useful anyway.
The main reason for that change was to make it work on all-purpose
clusters. But in order to make it work, the wheel version needs to be
increased anyway, so having only a hash in the path is useless.
Note: using the `--build-number` (build tag) flag does not help with
re-installing libraries on all-purpose clusters. The reason is that
`pip` ignores the build tag when upgrading a library and only looks at
the wheel version.
The build tag is only used for sorting versions: the one with the higher
build tag takes priority at install time, and even then only if no
library is installed yet.
See pip's version sorting logic in
`src/pip/_internal/index/package_finder.py` (commit `a15dd75d98`,
L522-L556) and https://github.com/pypa/pip/issues/4781.
Thus, the only way to reinstall the library on an all-purpose cluster is
to increase the wheel version manually or use automatic version
generation, e.g.
```
setup(
    version=datetime.datetime.utcnow().strftime("%Y%m%d.%H%M%S"),
    ...
)
```
## Tests
Integration tests passed.
## Changes
This makes mlops-stacks more discoverable and improves the UX of
initialising the mlops-stacks template.
## Tests
Manually
Dropdown UI:
```
shreyas.goenka@THW32HFW6T projects % cli bundle init
Template to use:
▸ default-python
mlops-stacks
```
Help message:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init -h
Initialize using a bundle template.
TEMPLATE_PATH optionally specifies which template to use. It can be one of the following:
- default-python: The default Python template
- mlops-stacks: The Databricks MLOps Stacks template. More information can be found at: https://github.com/databricks/mlops-stacks
```
## Changes
This PR makes changes required to automatically update the bundle docs
during the CLI release process. We rely on `post_generate` scripts that
are executed after code generation with CWD as the CLI repo root.
The new `output-file` flag is introduced because a stdout redirect does
not work here and would otherwise require changes to our release
automation CLI (the deco CLI).
## Tests
Manually. Regenerated the CLI and the descriptions were indeed generated
for the CLI from the provided openapi spec.
## Changes
If a struct has a field of type `config.Value`, then we set it to the
source value while converting a `config.Value` instance to a struct as
part of a call to `convert.ToTyped`.
This is convenient when dealing with deeply nested structs where
functions on inner structs need access to the metadata provided by their
corresponding `config.Value` (e.g. where they were defined).
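A rough sketch of what this enables; the struct and field names are made up, and the import paths are as of this change:
```
import (
	"github.com/databricks/cli/libs/config"
	"github.com/databricks/cli/libs/config/convert"
)

// Task is an illustrative struct, not one from the bundle configuration.
type Task struct {
	Name string `json:"name"`

	// After convert.ToTyped runs, this field holds the config.Value the
	// struct was converted from, so inner code can inspect metadata such as
	// where the value was defined.
	ConfigValue config.Value `json:"-"`
}

func example(v config.Value) (Task, error) {
	var t Task
	err := convert.ToTyped(&t, v) // t.ConfigValue now mirrors v
	return t, err
}
```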
## Tests
Newly added unit tests pass.
## Changes
A bug in the code that pulls the remote state could cause the local
state to be empty instead of a copy of the remote state. This happened
only if the local state was present and stale when compared to the
remote version.
We correctly checked for the state serial to see if the local state had
to be replaced but didn't seek back on the remote state before writing
it out. Because the staleness check would read the remote state in full,
copying from the same reader would immediately yield an EOF.
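The essence of the fix, as a simplified sketch:
```
package deploy

import "io"

// writeRemoteState copies the remote state to the local state file. The
// staleness check has already read the remote state in full, so we must seek
// back to the start first; without the Seek the copy hits EOF immediately and
// writes an empty file.
func writeRemoteState(remote io.ReadSeeker, local io.Writer) error {
	if _, err := remote.Seek(0, io.SeekStart); err != nil {
		return err
	}
	_, err := io.Copy(local, remote)
	return err
}
```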
## Tests
* Unit tests for state pull and push mutators that rely on a mocked
filer.
* An integration test that deploys the same bundle from multiple paths,
triggering the staleness logic.
Both failed prior to the fix and now pass.
## Changes
This breaks out the flags into a separate struct to make it easier to
pass around.
If specified, the flag calls into the `cfgpicker` to prompt for a
cluster to associate with the profile.
## Tests
Existing tests pass; added one for host validation.
---------
Co-authored-by: Miles Yucht <miles@databricks.com>
## Changes
`databricks configure` creates a new .databrickscfg if one doesn't
already exist, but `databricks auth login` fails in this case. Because
`databricks auth login` writes out the config file anyway, we gracefully
handle this error and continue.
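The graceful handling amounts to this sketch (the helper is illustrative, not the actual code):
```
package auth

import (
	"errors"
	"io/fs"
	"os"
)

// readConfigFile tolerates a missing ~/.databrickscfg: `auth login` writes
// the file out anyway, so "not exist" is treated as an empty config rather
// than an error.
func readConfigFile(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return nil, nil
	}
	return data, err
}
```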
## Tests
Unit test.
```
$ ls ~/.databrickscfg*
/Users/miles/.databrickscfg.bak
$ ./cli auth login
Databricks Profile Name: test
Databricks Host: https://<HOST>
Profile test was successfully saved
$ ls ~/.databrickscfg*
/Users/miles/.databrickscfg /Users/miles/.databrickscfg.bak
$ cat ~/.databrickscfg
; The profile defined in the DEFAULT section is to be used as a fallback when no profile is explicitly specified.
[DEFAULT]
[test]
host = https://<HOST>
auth_type = databricks-cli
```
Adds a better error message when the input path is not a bundle template.
before:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: open /Users/shreyas.goenka/bricks/databricks_template_schema.json: no such file or directory
```
after:
```
shreyas.goenka@THW32HFW6T bricks % cli bundle init ~/bricks
Error: expected to find a template schema file at /Users/shreyas.goenka/bricks/databricks_template_schema.json
```
## Changes
It appears that `USERPROFILE` env variable indicates where Azure CLI
stores configuration data (aka `.azure` folder).
https://learn.microsoft.com/en-us/cli/azure/azure-cli-configuration#cli-configuration-file
Passing it to terraform executable allows it to correctly authenticate
using Azure CLI.
Fixes#983
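In terms of code, the change is roughly the following sketch (the function is illustrative; the CLI passes an explicit environment to the terraform child process):
```
package terraform

import "os"

// environ builds the environment for the terraform executable. Forwarding
// USERPROFILE lets the Azure provider find the Azure CLI's .azure directory
// on Windows and authenticate correctly.
func environ() []string {
	env := []string{
		"PATH=" + os.Getenv("PATH"),
		// ... other variables the CLI already forwards ...
	}
	if v, ok := os.LookupEnv("USERPROFILE"); ok {
		env = append(env, "USERPROFILE="+v)
	}
	return env
}
```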
## Tests
Ran deployment on Window VM before and after the fix.
## Changes
Previously local JAR paths were transformed to remote path during
initialisation and thus artifact building logic did not recognise such
libraries as local to be handled and uploaded.
Now it's possible to use spark_jar_tasks with local JAR libraries on
14.1+ DBR clusters
Example configuration
```
bundle:
  name: spark-jar

workspace:
  host: ***

artifacts:
  my_java_code:
    path: ./sample-java
    build: "javac PrintArgs.java && jar cvfm PrintArgs.jar META-INF/MANIFEST.MF PrintArgs.class"
    files:
      - source: "/Users/andrew.nester/dabs/wheel/sample-java/PrintArgs.jar"

resources:
  jobs:
    print_args:
      name: "Print Args"
      tasks:
        - task_key: Print
          new_cluster:
            num_workers: 0
            spark_version: 14.2.x-scala2.12
            node_type_id: i3.xlarge
            spark_conf:
              "spark.databricks.cluster.profile": "singleNode"
              "spark.master": "local[*]"
            custom_tags:
              ResourceClass: "SingleNode"
          spark_jar_task:
            main_class_name: PrintArgs
          libraries:
            - jar: ./sample-java/PrintArgs.jar
```
## Tests
Manually ran `bundle deploy` and `bundle run`.
## Changes
DLT currently doesn't always set `$PYTHONPATH` correctly (ES-947370).
This restores the original workaround to make new pipelines work while
that issue is being addressed. The workaround was removed in #832.
Manually tested.
## Changes
Some test call sites called directly into the mutator's `Apply` function
instead of `bundle.Apply`. Calling into `bundle.Apply` is preferred
because that's where we can run pre/post logic common across all
mutators.
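In test code this looks roughly like the following sketch; the mutator name is hypothetical, and `bundle.Apply` takes the context, the bundle, and the mutator:
```
package mutator_test

import (
	"context"
	"testing"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/bundle/config/mutator"
	"github.com/stretchr/testify/require"
)

func TestSomeMutator(t *testing.T) {
	b := &bundle.Bundle{
		// ... minimal configuration for the test ...
	}

	// Prefer bundle.Apply over calling the mutator's Apply method directly,
	// so the pre/post logic shared by all mutators runs too.
	err := bundle.Apply(context.Background(), b, mutator.SomeMutator())
	require.NoError(t, err)
}
```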
## Tests
Pass.
## Changes
All calls to apply a mutator must go through `bundle.Apply`. This
conflicts with the existing use of the variable `bundle`. This change
un-aliases the variable from the package name by renaming all variables
to `b`.
## Tests
Pass.
## Changes
This PR:
1. Renames `FilesPath` -> `FilePath` and `ArtifactsPath` ->
`ArtifactPath` in the bundle and metadata configuration to make them
consistent with the JSON tags.
2. Fixes development / production mode error messages to point to
`file_path` and `artifact_path`
## Tests
Existing unit tests. This is a straightforward renaming of the fields.
## Changes
This PR is the counterpart to #904. With this change, we are able to
convert a `config.Value` into a Go struct, make modifications to the Go
struct, and reflect those changes in a new `config.Value`.
This functionality allows us to incrementally introduce this
configuration representation to existing bundle mutators. Bundle
mutators expect a `*bundle.Bundle` argument and mutate its configuration
directly. These mutations are not reflected in the corresponding
`config.Value` (once introduced), which means we cannot use the
`config.Value` as source of truth until we update _all_ mutators. To
address this, we can run `convert.ToTyped` and `convert.FromTyped` at
the mutator boundary (from `bundle.Apply`) and capture changes made to
the Go struct. Then we can incrementally make mutators aware of the
`config.Value` configuration and have them mutate that structure
directly.
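Schematically, the boundary logic looks like this sketch; it is simplified, and the field holding the dynamic value (`ConfigValue`) is a hypothetical name:
```
import (
	"context"

	"github.com/databricks/cli/bundle"
	"github.com/databricks/cli/libs/config/convert"
)

// applyWithSync converts the dynamic value to the typed configuration, runs
// the mutator against the typed struct, and then captures the struct's
// changes back into a new dynamic value.
func applyWithSync(ctx context.Context, b *bundle.Bundle, m bundle.Mutator) error {
	if err := convert.ToTyped(&b.Config, b.ConfigValue); err != nil {
		return err
	}
	if err := m.Apply(ctx, b); err != nil {
		return err
	}
	nv, err := convert.FromTyped(b.Config, b.ConfigValue)
	if err != nil {
		return err
	}
	b.ConfigValue = nv
	return nil
}
```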
## Tests
New unit tests pass.
Manual spot checks against the bundle configuration type.
## Changes
If `args[0] == "."` was provided to the bundle init command, it would try
to resolve it as a built-in template and error out.
## Tests
Manually
before:
```
shreyas.goenka@THW32HFW6T mlops-stack % cli bundle init .
Error: open /var/folders/lg/njll3hjx7pjcgxs6n7b290bw0000gp/T/templates3934264356/templates/databricks_template_schema.json: no such file or directory
```
after:
```
shreyas.goenka@THW32HFW6T mlops-stack % cli bundle init .
Welcome to MLOps Stacks. For detailed information on project generation, see the README at https://github.com/databricks/mlops-stacks/blob/main/README.md.
Project Name [my-mlops-project]: ^C
```
## Changes
The Jobs service expects these fields to always be present in the
metadata in their validation logic, which is reasonable. This PR removes
the `omitempty` tags so these fields are always uploaded to the workspace
`metadata.json` file.
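Illustratively (the field name is an example, not necessarily the actual metadata struct):
```
type jobMetadata struct {
	// Before: dropped from metadata.json when empty.
	//   RelativePath string `json:"relative_path,omitempty"`

	// After: always serialized, even for the zero value, so the Jobs service
	// validation always finds it.
	RelativePath string `json:"relative_path"`
}
```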
Partly mitigates #859. It's still not clear to me if there is an actual
use case or if users are trying to use "development" mode jobs for
production, but making this overridable is reasonable.
Beyond this fix, I think we could do something in the Jobs schedule UI,
but it would help to better understand the use case (or the actual reason
for confusion). I expect we should hint customers to move away from dev
mode rather than unpause.
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from
0.13.0 to 0.14.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e067960af8"><code>e067960</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="4c91c17b32"><code>4c91c17</code></a>
google: adds header to security considerations section</li>
<li>See full diff in <a
href="https://github.com/golang/oauth2/compare/v0.13.0...v0.14.0">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
## Changes
Now it's possible to define a top-level `permissions` section in the
bundle configuration; permissions defined there are applied to all
resources defined in the bundle.
Supported top-level permission levels: CAN_MANAGE, CAN_VIEW, CAN_RUN.
Permissions are applied to: Jobs, DLT Pipelines, ML Models, ML
Experiments, and Model Serving Endpoints.
```
bundle:
  name: permissions

workspace:
  host: ***

permissions:
  - level: CAN_VIEW
    group_name: test-group
  - level: CAN_MANAGE
    user_name: user@company.com
  - level: CAN_RUN
    service_principal_name: 123456-abcdef
```
## Tests
Added corresponding unit tests + ran `bundle validate` and `bundle
deploy` manually