Releases: kubernetes-sigs/cluster-api-provider-aws
Cloud provider integration and general hardening
Breaking Changes
- Labels for Cluster API managed infrastructure and cloud-provider managed infrastructure overlapped. The breaking change introduces a new label for Cluster API to use as well as a tool to convert labels on existing clusters to the new format.
Action to migrate an existing v0.2 CAPA management cluster to v0.3.0
Please see the migration document for a detailed list of steps.
Notable changes
This release allows the out-of-tree kubernetes cloud-provider to manage resources it normally manages while keeping Cluster API managed assets tagged as such.
- Cluster API owned resources are tagged with sigs.k8s.io/cluster-api-provider-aws/managed: true
- cloud-provider owned resources are tagged with kubernetes.io/cluster/<cluster-name>: owned
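For example, on a cluster named my-cluster (the name is hypothetical), the two tag sets would look roughly like this:

```yaml
# Tags on a Cluster API owned resource (e.g. a security group CAPA created)
sigs.k8s.io/cluster-api-provider-aws/managed: "true"

# Tags on a cloud-provider owned resource (e.g. an ELB the cloud-provider created)
kubernetes.io/cluster/my-cluster: "owned"
```

Because the two label namespaces no longer overlap, the out-of-tree cloud-provider can safely manage only the resources carrying its own tag.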
The bigger bug fixes
- Now works with existing subnets and security groups.
- Removes a race condition around multiple control planes joining a cluster simultaneously.
- Fixes an issue where calls to AWS could hang indefinitely.
- Taints are now copied from MachineDeployments into Machines via kubeadm.
Thanks to everyone who contributed to this release!
Container Image
The manager can be found as this image: gcr.io/cluster-api-provider-aws/cluster-api-aws-controller:v0.3.0
Multi control plane clusters and more reliable creation
Multi-node control plane clusters
It is now easy to get a multi-node control plane cluster. This release ships with an example that creates a 3-node control plane cluster with 1 worker node in a single availability zone. This is a small step towards cluster-api-provider-aws supporting highly available clusters with minimal configuration. It is possible to create a highly available cluster today, but much of the legwork is manual. Please see the documentation for more information on how to create highly available clusters.
Reliability
Previous releases had some issues with clusters getting stuck while being created. That was due to over-eagerness in reconciling the infrastructure. The code now reconciles only when something has changed.
A good number of nil dereference errors have been cleaned up, so you should see fewer stack traces in your logs.
Supported Kubernetes versions
This release adds support for a few more Kubernetes versions including v1.14.0 and v1.14.1. You can find a complete list here.
Custom AMIs
Support has been added for custom AWS organizational IDs, which allows you to publish your own AMIs within your own organization and launch instances from them.
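As a sketch, the AMI lookup might be pointed at your own organization in the machine's provider spec; the field name imageLookupOrg below is an assumption for this release, so check the generated API docs for the exact name:

```yaml
apiVersion: awsprovider.k8s.io/v1alpha1
kind: AWSMachineProviderSpec
instanceType: m5.large
# imageLookupOrg is assumed; it would hold the AWS account/organization ID
# that owns the custom AMIs to launch instances from
imageLookupOrg: "123456789012"
```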
Root device size customization
A configuration option for an AWS image's root device size has been added.
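A minimal sketch of setting this option in the machine's provider spec; the field name rootDeviceSize and its unit (GiB) are assumptions, so verify them against the API reference for your version:

```yaml
apiVersion: awsprovider.k8s.io/v1alpha1
kind: AWSMachineProviderSpec
instanceType: m5.large
# rootDeviceSize is assumed to be the size of the root EBS volume in GiB
rootDeviceSize: 100
```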
v0.2.0
cluster-api-provider-aws v0.2.0 Release notes
This release is based off Cluster API v0.1.0 release and v1alpha1 APIs.
Breaking changes
- Logic to determine a public subnet has changed and follows specific rules #560
Features
- Kubeadm configuration is now exposed through machine.spec.providerSpec.kubeadmConfiguration
- MachineClass is now fully supported #571
- Machine NodeRef is now properly set #579
- ProviderID is now properly set #637
- Fixed a bug where additional security groups or tags weren't properly applied #635
- Security groups are now applied using ENIs #684
- Resync interval is now 10m instead of 10h #671
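The kubeadm configuration mentioned above is exposed at machine.spec.providerSpec.kubeadmConfiguration; a sketch of a worker join configuration might look like the following (the exact nesting and all values are illustrative, not taken from this release's docs):

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: example-worker            # hypothetical name
spec:
  providerSpec:
    value:
      apiVersion: awsprovider.k8s.io/v1alpha1
      kind: AWSMachineProviderSpec
      kubeadmConfiguration:
        join:
          nodeRegistration:
            kubeletExtraArgs:
              node-labels: "example.com/role=worker"   # illustrative kubelet setting
```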
Provided tools
clusterctl
- Clusterctl now supports additional phases: TODO
- Clusterctl delete command now properly deletes a cluster with all associated resources (forward cascade deletion)
clusterawsadm
- New permissions have been added to CloudFormation templates, run clusterawsadm [....] to update your AWS environment
Image
- gcr.io/cluster-api-provider-aws/cluster-api-aws-controller:v0.2.0
Limitations
- All cluster components and machines default to a single availability zone
Known issues
v0.1.1
cluster-api-provider-aws v0.1.1 Release notes
Breaking changes
- None
Features
- Adds support for MachineClasses
- Embedded kubeadm configuration types
- Multiple Clusters can now be created within a single namespace
- Cascading deletion of Clusters, MachineDeployments, MachineSets, and Machines is now supported
- cluster-api-provider-aws image is now built on top of the distroless base image
- A new flag --namespace has been added to the cluster-api-provider-aws controller manager to watch a single namespace instead of all namespaces
- Machines' ProviderID is now properly populated
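For instance, the new flag could be passed to the controller manager in its Deployment; this manifest excerpt is illustrative (the container name, image tag, and namespace are assumptions):

```yaml
containers:
- name: manager                   # container name assumed
  image: gcr.io/cluster-api-provider-aws/cluster-api-aws-controller:v0.1.1   # tag assumed
  args:
  - --namespace=my-namespace      # watch only this namespace instead of all namespaces
```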
Provided tools
clusterctl
- Delete cluster enhancements
clusterawsadm
- New permissions have been added to CloudFormation templates, run clusterawsadm [....] to update your AWS environment
Image
Limitations
- All cluster components and machines default to a single availability zone
Known issues
- Deleting the deployed machine after clusterctl pivots the cluster-api resources results in the instance being deleted and all cluster resources being orphaned: #214
v0.1.0
cluster-api-provider-aws v0.1.0 Release notes
Pre-alpha release of the AWS provider for cluster-api aka Clean Slate
Breaking changes
- Previous releases were versioned v1.0.0-alpha.x. In preparation for a real v1alpha1 release of the API, we've reset the binary versioning to better indicate the state of the project. Since previous versions were released with a higher version number, we've deleted those releases to avoid future confusion.
Features
- Supports deploying clusters with Kubernetes version 1.13.x.
- Deploys to EC2 VPC private subnets with a bastion instance and public ELB.
- Supports deploying multi-control plane clusters. TODO (@ashish-amarnath): Get some docs for this
- Supports highly available clusters through minimal manual configuration.
- AWSClusterProviderSpec now supports custom CA certs for cluster-ca, etcd-ca and front-proxy-ca.
- GoDocs for the cluster-api-provider-aws is now available.
Provided tools
clusterctl
clusterctl is a tool for bootstrapping a cluster for hosting the cluster-api components.
See the Getting Started Guide.
clusterawsadm
clusterawsadm is a helper utility for creating the prerequisite IAM roles and profiles needed for deploying a cluster, as well as for generating a secret containing AWS credentials for use by the controllers provided by cluster-api-provider-aws.
See the Getting Started Guide.
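A typical bootstrap flow might look like the following; the subcommand name is an assumption for this era of the tool, so run clusterawsadm --help to confirm what your release provides:

```shell
# Subcommand below is assumed for this release; confirm with `clusterawsadm --help`.

# Create the prerequisite IAM roles and instance profiles via CloudFormation
clusterawsadm alpha bootstrap create-stack
```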
Limitations
- All cluster components and machines default to a single availability zone.
Known issues
- Deleting the deployed machine after clusterctl pivots the cluster-api resources results in the instance being deleted and all cluster resources being orphaned: #214.