This is a follow-up to #2471, where an incorrect config was used in the
interactive_cluster test.
## Changes
- Fixed interactive_cluster to use the proper config; it was accidentally
referring to the config from ../base.
- Added a $NODE_TYPE_ID env var and corresponding replacement to acceptance
tests; this is necessary for the interactive_cluster test.
- Disabled acceptance/bundle/override on cloud. It started failing because
it contains a real node type that gets replaced with NODE_TYPE_ID, but
only in the AWS env. Since the test is focused on config merging, there is
no need to run it against real workspaces.
- Modified all tests in integration_whl to print the rendered
databricks.yml, to make this kind of error visible.
Converted integration/bundle/python_wheel_test.go to acceptance tests. I
plan to expand these tests to check patchwheel functionality.
Each test previously contained two runs, with params and without. I've
split each run into a separate test to reduce total time, since the runs
can execute in parallel.
Also added a new env var, DEFAULT_SPARK_VERSION, matching the one used in
integration tests.
The tests are currently enabled on every PR (`CloudLong=true` is
commented out); this can be changed after landing.