Non-empty plans when migrating existing optional+computed lists to Terraform Plugin Framework #883
I think we might have the same issue here. Did you try to assign the block value to the state during Read? We found that the Framework doesn't support optional + computed blocks, so the block can't be placed into state during Read without producing a plan difference. We couldn't find a workaround or a solution here either, and I hope we can get the Terraform team to take a look at this.
Hi @maastha 👋 Thank you for submitting this, and sorry you are running into trouble here. Terraform in general is very particular about data consistency and proper value planning once a resource is migrated to the framework, so the migration may involve teaching the provider code much more about the API behaviors associated with each attribute. Let's see if we cannot troubleshoot this a little further.

For starters, can you run Terraform with warning logs enabled (e.g. by setting `TF_LOG=WARN`) using the SDK version of the resource first? Next, we could use some additional information from you about the API responses. Are the left-hand values noted in the "no configuration blocks" plans always returned from the API with these values?
If so, this likely means we need to take a look at resource default attribute value implementations for each. That might help with the "all blocks configured" case. There is the additional caveat here that defaults may need to be added at the list or object level as well, to cover the "no configuration block" case, as defaults of nested object attributes are only consulted if their wrapping object already exists.

Otherwise, as @zliang-akamai alludes to above, the answer may actually be to omit saving certain information to the state when it is not present in the configuration. It is likely that Terraform would have given practitioners errors if they did happen to directly reference those block attributes elsewhere in their configuration, since the value would have changed between plan and apply. (In fact, I'm a little surprised that Terraform isn't already raising an error, as I would have expected a block without configuration to error if it was saved with data.)

Finally, it is worth mentioning that when migrating from the SDK to the framework, Terraform may hide very particular plan differences (e.g. empty string to null) because not doing so would cause very noisy plans for practitioners with the older SDK. If you ever see a planned resource change with all the attributes "hidden", then hashicorp/terraform#31887 is an upstream bug report relating to this. Saving the plan file and inspecting it in JSON format, or implementing a special plan check in terraform-plugin-testing that automatically compares all resource change before/after values, is handy for debugging in those cases. I thought we had a testing feature request to consider implementing this debugging plan check somewhere, but I'm having trouble finding it at the moment -- I can recreate it if needed.

To add a little bit more context about this confusing migration space, I'll also mention the blocks with computed fields migration documentation. What I want to call out there is that the terraform-plugin-sdk did some gymnastics in certain cases to make the single block use case work.
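For illustration, here is a minimal sketch of attribute-level defaults inside a nested block, assuming hypothetical attribute names and default values (they are not taken from the provider discussed in this issue); the `booldefault`/`stringdefault` helpers come from terraform-plugin-framework's `resource/schema` packages, and defaults require the attribute to also be `Computed`:

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
	"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringdefault"
)

// exampleConfigurationBlock sketches attribute-level defaults inside a nested
// block object. Per the caveat above, these defaults are only consulted when
// the wrapping block exists in the configuration, so the "no configuration
// block" case still needs separate handling.
func exampleConfigurationBlock() schema.ListNestedBlock {
	return schema.ListNestedBlock{
		NestedObject: schema.NestedBlockObject{
			Attributes: map[string]schema.Attribute{
				"javascript_enabled": schema.BoolAttribute{
					Optional: true,
					Computed: true,
					Default:  booldefault.StaticBool(true), // hypothetical API default
				},
				"minimum_tls_protocol": schema.StringAttribute{
					Optional: true,
					Computed: true,
					Default:  stringdefault.StaticString("TLS1_2"), // hypothetical API default
				},
			},
		},
	}
}
```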
Hi @bflad, thanks for the detailed answer! I am still hoping to migrate these Computed and Optional blocks. Something like:

```go
map[string]schema.Attribute{
	"example_nested_block": schema.ListAttribute{
		ElementType: types.ObjectType{
			AttrTypes: map[string]attr.Type{
				"example_block_attribute": types.StringType,
			},
		},
		Computed: true,
		Optional: true,
	},
}
```

But the problem is that we can't set a full schema on the items of a `ListAttribute`. Since SDKv2 already did some gymnastics, I think many providers, including ours, will encounter issues similar to use case 1 in this issue when they migrate to the framework. It would be very nice if there were some way Terraform could assist providers in making such a breaking change, if direct support for computed and optional blocks is not possible. Maybe something like a state upgrade, but for user config? I don't know..

Reference:
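For comparison, a rough sketch (reusing the illustrative names from the snippet above) of the protocol version 6 nested attribute approach that comes up later in the thread, which does allow a full schema per nested attribute, at the cost of requiring Terraform 1.0+ and attribute (equals sign) syntax in configurations:

```go
// This entry would live in the resource schema's Attributes map; names are
// illustrative only.
"example_nested_block": schema.ListNestedAttribute{
	Optional: true,
	Computed: true,
	NestedObject: schema.NestedAttributeObject{
		Attributes: map[string]schema.Attribute{
			"example_block_attribute": schema.StringAttribute{
				Optional: true,
				Computed: true,
			},
		},
	},
},
```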
Thanks so much, @zliang-akamai. Can you confirm whether or not Terraform is raising warning logs in your case for the SDK-based resource, and whether it was also using the quirky single-block handling mentioned above? I would think the list of objects solution would only work for certain cases.
Thanks a lot for the response @bflad
WARN logs if
@bflad
Reference:
Thanks so much, @maastha. I really appreciate you working through this with me. I apologize that I also neglected to mention a newer SDK feature which should promote those warning logs to errors in Terraform, to make them a little easier to see during acceptance testing:

```go
schema.Resource{
	// ... other fields as necessary ...
	EnableLegacyTypeSystemApplyErrors: true,
	EnableLegacyTypeSystemPlanErrors:  true,
}
```

Just please be careful not to release the SDK based resource with those enabled, since there are known errors. We had added that to the top of the overall framework migration steps, but I'm not sure it's linked deeper in the migration documentation, so I'll take a look at seeing where it might make sense elsewhere if it's not already.
What I meant to allude to was that defaults may be applicable for making the planned values match the API behavior. This should work as expected for underlying attributes, but the framework intentionally prevents developers from implementing the "native" default value handling for blocks. Let's take a step back though with the new knowledge below --
Indeed, which is both a good and a bad thing. It is good that we are learning that the prior resource implementation was not meeting Terraform's data consistency rules, but of course bad because these issues are unavoidable and need to be resolved in some way as part of this migration. Given that the existing resource is violating those rules, I anticipate that the deeper we look into this, the more likely it is that one of the following will need to happen:
There are quite a few things you are asking about, so maybe let's try to start at a high level with the intent of this configuration/data in the resource. Particularly since you mention each of them, let's talk about
I'm going to try to summarize my understanding so far, but please correct me where necessary:
So in the prior SDK implementation, you had to use what was available:
Now if this was a brand new resource implementation, I would recommend that all these be implemented by leveraging the protocol version 6 (Terraform 1.0+) nested attribute feature, which essentially gives the best solution everywhere since all the data can be optionally configured or computed by the API, and therefore always safely referenced in configurations. All the data would be refreshable as well for drift detection. In particular:
However, that is quite different from what exists today from a practitioner standpoint. Practitioners would be burdened to upgrade to Terraform 1.0+ if they haven't already, and to put an equals sign anywhere a block was being configured, since it's now a (nested) attribute. As a goal of this migration appears to be to minimize practitioner burden, the (more common 😄) path of trying to re-implement the existing schema was taken.

Since I do not know all the intricacies of this API and its behaviors, nor every permutation of how the prior SDK might save a zero-value, I'm having a hard time determining if you got further with this.

So, to offer a tangible potential approach:
This does mean that Terraform can no longer perform drift detection if the block is unconfigured, but that is a requirement for using blocks. If you don't want that restriction, the implementation will need to change in some way. We can discuss options down that path if you would like. Figuring out what to do with existing state values, if they were caused by the prior SDK saving the zero-value, is something we can discuss once the general attribute/block handling is stable for new resources.
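To make the "omit saving unconfigured block data" idea above concrete, here is a minimal sketch under assumed names (the helper and the idea of keying off the prior state value are illustrative, not the provider's actual code): the value derived from the API only replaces the prior state value when the block was already known, so an unconfigured block stays null.

```go
package provider

import "github.com/hashicorp/terraform-plugin-framework/types"

// refreshOptionalComputedBlock returns the API-derived value only when the
// block was already present (non-null) in prior state; otherwise it keeps the
// block out of state so Terraform does not plan a difference for an
// unconfigured block.
func refreshOptionalComputedBlock(prior types.List, fromAPI types.List) types.List {
	if prior.IsNull() {
		return prior
	}
	return fromAPI
}
```

During Create and Update the same decision would typically be made from the configuration or plan value rather than prior state, since prior state alone is not meaningful there.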
Thank you so much @bflad for such a detailed response; we really appreciate it, and thanks for working with us on this. We plan to evaluate all your suggestions. We have some follow-up questions:
In the above point, to make sure we understand correctly, the end result would effectively remove computed state that users may be referencing. Right?
Update:
Please let us know if there are any other options we could look into. We do think a StateUpgrader-like functionality, or simply exposing the Config object in the state upgrade RPC, could help here.
It's unfortunately difficult for me to assess this without seeing what you're doing/seeing (e.g. prior state, config, and plan) and understanding the exact behaviors of the API. What are "some of these lists" -- do you mean the blocks shown in the issue?

Beyond using the "human-readable" plan output, in the plan JSON output you should be able to compare the before and after data to determine what Terraform thinks is an actual difference. If those updates are differences such as empty strings (caused by the prior SDK) to null, then potentially adding an empty string default value on the attribute can preserve the prior SDK behavior and hide the difference for practitioners who did not configure a value. You'll just need to account for empty string handling in your resource logic in addition to null handling.
You can try submitting a feature request upstream in Terraform to consider exposing this across the protocol, since that is not something that could be implemented in the framework without it existing there. Please note, though, that I'm not sure how well that data fits in with the intended design for the RPC (mainly for upgrading existing state to match breaking schema changes), and even if it was implemented today, there would be some expectation on your practitioners to be on the Terraform version (1.8+ as of this writing, since 1.7 is considered feature complete as it's entering release candidate very shortly) that passes the additional configuration information across the protocol.
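As a rough sketch of the debugging plan check idea mentioned earlier in the thread (assuming terraform-plugin-testing's plancheck package; the type and constructor names are made up), the following fails on the first planned resource change whose before and after values differ and prints both, which makes the offending attribute easier to spot in acceptance tests:

```go
package planchecks

import (
	"context"
	"fmt"
	"reflect"

	"github.com/hashicorp/terraform-plugin-testing/plancheck"
)

// debugBeforeAfter reports the first planned resource change whose before and
// after values differ, for debugging unexpected differences after a migration.
type debugBeforeAfter struct{}

// DebugBeforeAfter returns the debugging plan check.
func DebugBeforeAfter() plancheck.PlanCheck {
	return debugBeforeAfter{}
}

func (debugBeforeAfter) CheckPlan(ctx context.Context, req plancheck.CheckPlanRequest, resp *plancheck.CheckPlanResponse) {
	for _, rc := range req.Plan.ResourceChanges {
		if rc.Change == nil || reflect.DeepEqual(rc.Change.Before, rc.Change.After) {
			continue
		}
		resp.Error = fmt.Errorf("%s: before/after values differ\nbefore: %#v\nafter:  %#v",
			rc.Address, rc.Change.Before, rc.Change.After)
		return
	}
}
```

Such a check could then be wired into a test step's plan checks (e.g. via ConfigPlanChecks) while investigating the migration.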
Module version
We are currently migrating existing Plugin SDK based resources to the Plugin Framework (plugin protocol v6). In the resource below, we have some optional+computed list block attributes where, even if certain nested attributes are not configured (or only partially configured) by the user, the API/provider still returns values for them. That response is currently persisted in the state.

In order to migrate these blocks to the Plugin Framework, we have tried using schema.ListNestedBlock, but the plan returned after upgrading to the Framework-migrated resource is always non-empty. This would be a breaking change for our users.
What is the recommended non-breaking way to migrate list attributes such as these?
Relevant provider source code
Terraform Plugin SDK based schema for "advanced_configuration" block:
Terraform Plugin Framework migrated schema:
Terraform Configuration Files
Use-case 1 (no blocks configured):
Use-case 2 (all blocks configured):
Debug Output
Expected Behavior
After the user upgrades to the new provider version with the framework-migrated resource, the user should not see any planned changes for optional+computed list/set blocks when running `terraform plan`, and should not receive errors when running `terraform apply`.

Actual Behavior
On running `terraform plan`, the plan below was produced by Terraform:

Use-case 1 (no blocks configured):

Use-case 2 (all blocks configured):
Steps to Reproduce

1. `terraform init` for provider v.X (contains the Plugin-SDK-based resource implementation) for the above resource
2. `terraform plan`
3. `terraform apply`
4. `terraform plan` again -> plan returns `No changes.`
5. `terraform init --upgrade`
6. `terraform plan` -> plan is not empty

References
The advanced_configuration schema is included in this issue. Other block schemas mentioned in the example are: