mirror of https://github.com/databricks/cli.git
3108883a8f
## Changes

With this change, both job parameters and task parameters can be specified as positional arguments to `bundle run`. How the positional arguments are interpreted depends on the configuration of the job.

### Examples:

For a job that has job parameters configured, a user can specify:
```
databricks bundle run my_job -- --param1=value1 --param2=value2
```
And the run is kicked off with job parameters set to:
```json
{
  "param1": "value1",
  "param2": "value2"
}
```

Similarly, for a job that doesn't use job parameters and only has `notebook_task` tasks, a user can specify:
```
databricks bundle run my_notebook_job -- --param1=value1 --param2=value2
```
And the run is kicked off with task-level `notebook_params` configured as:
```json
{
  "param1": "value1",
  "param2": "value2"
}
```

For a job that doesn't use job parameters and only has either `spark_python_task` or `python_wheel_task` tasks, a user can specify:
```
databricks bundle run my_python_file_job -- --flag=value other arguments
```
And the run is kicked off with task-level `python_params` configured as:
```json
[
  "--flag=value",
  "other",
  "arguments"
]
```

The same applies to jobs with only `spark_jar_task` or `spark_submit_task` tasks.

## Tests

Unit tests. Tested the completions manually.
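The examples above imply a simple dispatch rule: arguments after `--` become job parameters when the job declares them, a key/value map for notebook-only jobs, and a verbatim string list for Python- and JAR-style jobs. The following Go sketch illustrates that rule in isolation; the `Job`, `TaskType`, `parseKeyValueArgs`, and `buildRunPayload` names are hypothetical and do not correspond to the actual types or helpers in this repository.

```go
package main

import (
	"fmt"
	"strings"
)

// TaskType is a simplified stand-in for the task variants a job can contain.
type TaskType int

const (
	NotebookTask TaskType = iota
	PythonTask // spark_python_task or python_wheel_task
	JarTask    // spark_jar_task or spark_submit_task
)

// Job is a hypothetical, stripped-down view of a job's configuration.
type Job struct {
	HasJobParameters bool
	TaskTypes        []TaskType
}

// parseKeyValueArgs turns ["--a=1", "--b=2"] into {"a": "1", "b": "2"}.
func parseKeyValueArgs(args []string) (map[string]string, error) {
	out := make(map[string]string)
	for _, arg := range args {
		trimmed := strings.TrimPrefix(arg, "--")
		key, value, ok := strings.Cut(trimmed, "=")
		if !ok || !strings.HasPrefix(arg, "--") {
			return nil, fmt.Errorf("expected --key=value, got %q", arg)
		}
		out[key] = value
	}
	return out, nil
}

// buildRunPayload sketches the dispatch described above: positional
// arguments become job parameters, notebook_params, or a verbatim
// argument list, depending on the job's configuration.
func buildRunPayload(job Job, args []string) (any, error) {
	if job.HasJobParameters {
		return parseKeyValueArgs(args) // job-level parameters
	}
	allOf := func(t TaskType) bool {
		for _, tt := range job.TaskTypes {
			if tt != t {
				return false
			}
		}
		return len(job.TaskTypes) > 0
	}
	switch {
	case allOf(NotebookTask):
		return parseKeyValueArgs(args) // becomes notebook_params
	case allOf(PythonTask), allOf(JarTask):
		return args, nil // passed through as python_params / jar_params
	default:
		return nil, fmt.Errorf("cannot map positional arguments to this job's tasks")
	}
}

func main() {
	payload, err := buildRunPayload(
		Job{TaskTypes: []TaskType{PythonTask}},
		[]string{"--flag=value", "other", "arguments"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(payload) // [--flag=value other arguments]
}
```

Note how the two shapes fall out of the job's configuration: key/value flags are parsed into a map only where the Jobs API expects named parameters, while Python and JAR tasks receive the raw argument list untouched, which is why `other arguments` survives as two separate list entries in the example output.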