
[Fix] Fix drift in databricks_cluster when is_single_node is set to true #4416

Open · ashenm wants to merge 1 commit into main
Conversation

@ashenm (Contributor) commented Jan 17, 2025

Changes

Fixes #4360

Tests

  • make test run locally
  • relevant change in docs/ folder
  • covered with integration tests in internal/acceptance
  • using Go SDK
  • using TF Plugin Framework

In addition to the unit tests, I deployed the provider locally with the following resource definition and verified the diff:

resource "databricks_cluster" "single_node" {
  spark_version      = "15.4.x-scala2.12"
  single_user_name   = "[email protected]"
  runtime_engine     = "STANDARD"
  node_type_id       = "r5.large"
  kind               = "CLASSIC_PREVIEW"
  is_single_node     = true
  data_security_mode = "SINGLE_USER"
  cluster_name       = upper("demo-single-node")

  custom_tags = {
    "owner" = "ashenm"
  }
  autotermination_minutes = 10
}

@ashenm ashenm requested review from a team as code owners January 17, 2025 16:20
@ashenm ashenm requested review from mgyucht and removed request for a team January 17, 2025 16:20
@ashenm (Contributor, Author) commented Jan 17, 2025

@alexott please take a stab at it when you get a slot

@alexott (Contributor) commented Jan 17, 2025

Please add tests for it; there is an example of a test for CustomizeDiff in resource_sql_table.

@ashenm ashenm temporarily deployed to test-trigger-is January 17, 2025 16:30 — with GitHub Actions Inactive
@alexott (Contributor) commented Jan 20, 2025

@mgyucht given the work on porting clusters to the plugin framework - do we need this PR?

@mgyucht (Contributor) commented Jan 20, 2025

That work may take a bit longer than we thought to finish, so I'm OK merging incremental improvements like this.

@mgyucht (Contributor) left a comment

Implementation seems OK. Several comments can be addressed.

Please include any testing you did in the PR description.

Can you add a unit test to verify that the diff is as expected? Create a new test, and use a qa.ResourceFixture where you supply the HCL, InstanceState, and ExpectedDiff fields, in addition to the Resource field. Take a look at catalog/resource_sql_table_test.go TestResourceSqlTable_Diff_ExistingResource for an example of how to do this. Add your test to clusters/resource_cluster_test.go.

new, okNew := n.(map[string]interface{})

if !okNew || !okOld {
return fmt.Errorf("internal type casting error")
mgyucht (Contributor):

Can you include the actual type of o/n here with %T?
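A minimal, self-contained sketch of what the reviewer is asking for. The helper name `typeCastError` is hypothetical; in the actual PR the values would be the `o`/`n` results of `d.GetChange`:

```go
package main

import "fmt"

// typeCastError builds the error the reviewer suggests: %T prints the dynamic
// type of each value, so a failed map cast reports what was actually received.
// (Illustrative helper, not part of the provider's code.)
func typeCastError(o, n interface{}) error {
	return fmt.Errorf("internal type casting error: old value is %T, new value is %T", o, n)
}

func main() {
	// A string and an int stand in for unexpectedly-typed diff values.
	fmt.Println(typeCastError("abc", 42))
	// prints: internal type casting error: old value is string, new value is int
}
```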

@@ -48,6 +54,43 @@ func ResourceCluster() common.Resource {
}
}

func singleNodeClusterChangesCustomizeDiff(d *schema.ResourceDiff) error {
mgyucht (Contributor):

In general, let's include documentation comments for any methods or complex logic. Here's a suggestion:

Suggested change
func singleNodeClusterChangesCustomizeDiff(d *schema.ResourceDiff) error {
// singleNodeClusterChangesCustomizeDiff adjusts the plan when `is_single_node` is `true`. When this happens,
// the API automatically sets the `ResourceClass` key in `custom_tags` and two other keys in the `spark_conf`.
// If the user hasn't set these explicitly in their config, the plan marks these keys for removal.
// This method copies the values for these keys from state to the plan.
// This needs to be done in addition to setting these attributes as computed; otherwise, this customization
// won't take effect for users who have set additional `spark_conf` or `custom_tags`.
func singleNodeClusterChangesCustomizeDiff(d *schema.ResourceDiff) error {
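The behavior described in that doc comment can be sketched with plain maps, independent of the Terraform SDK. This is an illustration of the copy-from-state-to-plan idea only; `copyServerManagedKeys` is a hypothetical helper, and only the `ResourceClass` custom tag named in the comment is used (the `spark_conf` keys are left out since the comment doesn't name them):

```go
package main

import "fmt"

// copyServerManagedKeys mimics the CustomizeDiff behavior described above:
// for keys the API manages, if the user's plan omits them but state has them,
// copy the state value into the plan so no spurious removal diff appears.
func copyServerManagedKeys(state, plan map[string]string, managed []string) map[string]string {
	out := make(map[string]string, len(plan))
	for k, v := range plan {
		out[k] = v
	}
	for _, k := range managed {
		if _, inPlan := plan[k]; !inPlan {
			if v, inState := state[k]; inState {
				out[k] = v
			}
		}
	}
	return out
}

func main() {
	// State includes the API-set ResourceClass tag; the user's config doesn't.
	state := map[string]string{"owner": "ashenm", "ResourceClass": "SingleNode"}
	plan := map[string]string{"owner": "ashenm"}
	merged := copyServerManagedKeys(state, plan, []string{"ResourceClass"})
	fmt.Println(merged["ResourceClass"]) // SingleNode: no drift reported
}
```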

ashenm (Contributor, Author):

Done 🙌

Delete: resourceClusterDelete,
Schema: clusterSchema,
CustomizeDiff: func(ctx context.Context, d *schema.ResourceDiff) error {
if isSingleNode, _ := d.GetOk("is_single_node"); isSingleNode.(bool) {
mgyucht (Contributor):

This could panic on interface conversion if `is_single_node` is unset; check the `ok` result before asserting.

Suggested change
if isSingleNode, _ := d.GetOk("is_single_node"); isSingleNode.(bool) {
if isSingleNode, ok := d.GetOk("is_single_node"); ok && isSingleNode.(bool) {
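The panic the suggestion avoids can be shown in isolation. `getOk` below only mimics the shape of `schema.ResourceDiff.GetOk` (the real one also treats zero values as "not ok"); the point is that the `ok` result must be checked before type-asserting:

```go
package main

import "fmt"

// getOk mimics the shape of GetOk for illustration only: it returns the
// stored value and whether the key exists.
func getOk(m map[string]interface{}, key string) (interface{}, bool) {
	v, ok := m[key]
	return v, ok
}

func main() {
	attrs := map[string]interface{}{"is_single_node": true}

	// If the key were absent, the first return value would be an untyped nil
	// and a bare v.(bool) assertion would panic. Checking ok first, as the
	// review suggests, short-circuits before the assertion runs.
	if v, ok := getOk(attrs, "is_single_node"); ok && v.(bool) {
		fmt.Println("single-node customizations apply")
	}
}
```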

ashenm (Contributor, Author):

Nice catch! Fixed 🙌

Comment on lines +1698 to +1717
expectedDiff["default_tags.%"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["driver_instance_pool_id"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["driver_node_type_id"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["enable_elastic_disk"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["enable_local_disk_encryption"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["state"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}
expectedDiff["url"] = &terraform.ResourceAttrDiff{Old: "", New: "", NewComputed: true, NewRemoved: false, RequiresNew: false, Sensitive: false}

instanceState := testCase.instanceState
instanceState["cluster_id"] = "abc"
instanceState["autotermination_minutes"] = "60"
instanceState["cluster_name"] = "Single Node Cluster"
instanceState["data_security_mode"] = "SINGLE_USER"
instanceState["spark_version"] = "15.4.x-scala2.12"
instanceState["single_user_name"] = "testuser"
instanceState["runtime_engine"] = "STANDARD"
instanceState["num_workers"] = "0"
instanceState["node_type_id"] = "Standard_F4s"
instanceState["kind"] = "CLASSIC_PREVIEW"
instanceState["is_single_node"] = "true"
ashenm (Contributor, Author):

Not a fan of this; if there's a way to simplify it, please let me know.


If integration tests don't run automatically, an authorized user can run them manually by following the instructions below:

Trigger:
go/deco-tests-run/terraform

Inputs:

  • PR number: 4416
  • Commit SHA: a063bc5d54621c6f1039cfa60fcec033af3820ee

Checks will be approved automatically on success.

Development

Successfully merging this pull request may close these issues.

[ISSUE] Configuration drift in databricks_cluster when is_single_node is set to true
3 participants