- For a user with the required privileges in AWS:
  - Create a user for installation.
  - Create a policy according to stratio-eks-policy.json.
  - Create a policy according to stratio-aws-temp-policy.json (for provisioning only).
  - Attach the policies to the user.
  - Create an access key.
- Private and public DNS zones created in AWS (optional).
- Customized infrastructure created in AWS (optional).
- Compose the cluster descriptor file.
  - User credentials (access_key and secret_key) and account data (region and account_id), which will be encrypted on first run.
  - GitHub token for downloading templates (optional).
  - Account data (region and account_id).
  - Data of the infrastructure already created (optional).
  - Management of the DNS zones created (optional).
  - ECR URL.
  - External domain of the cluster.
  - Enable logging in EKS per component (optional).
  - Node groups.
  - Information required for the Stratio KEOS installation.
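As a purely illustrative sketch of how the credentials and account data might appear in a descriptor (the exact key names and nesting are defined by the cloud-provisioner schema; everything below is an assumption, with placeholder values):

```yaml
# Hypothetical sketch: key names and nesting are assumptions; values are placeholders.
credentials:
  access_key: <access_key>
  secret_key: <secret_key>
region: eu-west-1
account_id: "<account_id>"
```

Remember that these values are encrypted on the first run, so they only need to appear in plain text in the initial descriptor.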
Regarding the control-plane, in the cluster descriptor you can indicate that it is a managed control-plane and which of its logs you want to activate (API Server, audit, authenticator, controller_manager and/or scheduler).
Likewise, worker node groups can be indicated with the following options:
- name: group name, cannot be repeated.
- size: instance type.
- quantity: number of workers in the group.
- min_size: minimum number of nodes for autoscaling (optional).
- max_size: maximum number of nodes for autoscaling (optional).
- labels: node labels in Kubernetes (optional).
- root_volume: disk specifics (optional).
  - size: size in GB (default: 30GB).
  - type: disk type (default: gp2).
  - encrypted: disk encryption (default: false).
- ssh_key: SSH key for node access (optional). Must exist in the provider.
- spot: indicates if the instance is of spot type (optional).
- node_image: the image of the worker nodes (optional). The indicated image must exist and be compatible with EKS.
- zone_distribution: indicates whether the number of nodes must be balanced across the zones (default: balanced).
- az: zone of the worker group (optional). If specified, only this zone will be used for the whole group. This parameter overrides what is specified in zone_distribution.
Note: by default, nodes are distributed across zones a, b, and c of the indicated region in a balanced way, so the remainder of dividing the number of nodes by three is discarded. Example: if "quantity=7" is specified, only 2 nodes will be deployed in each of the zones.
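As an illustrative sketch of a worker node group using the options above (instance type, label, and SSH key name are placeholder assumptions; the nesting follows the option names listed in this document):

```yaml
# Sketch only: values are placeholders; nesting assumed from the option names above.
worker_nodes:
  - name: workers-default        # must be unique among groups
    size: t3.large               # instance type (placeholder)
    quantity: 6                  # 2 nodes per zone with balanced distribution
    min_size: 3                  # autoscaling lower bound (optional)
    max_size: 9                  # autoscaling upper bound (optional)
    labels:
      workload: backend
    root_volume:
      size: 50                   # GB (default: 30GB)
      type: gp2                  # default: gp2
      encrypted: true            # default: false
    ssh_key: <ssh_key_name>      # must exist in AWS
    spot: false
    zone_distribution: balanced
```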
To facilitate the installation of Stratio KEOS, the provisioning process generates a functional keos.yaml file, ready to launch the installation. For this purpose, the version and flavour (production, development, or minimal) can be indicated in the cluster descriptor.
keos:
  version: 1.0.2
  flavour: development
For any extra customization, the file must be modified before running the keos-installer.
Run the provisioning and Kubernetes installation phase from a Linux machine with internet access and Docker installed.
Once you have downloaded the cloud-provisioner .tgz file, unzip it and run it with the creation parameters:
$ tar xvzf cloud-provisioner-*tar.gz
$ sudo ./bin/cloud-provisioner create cluster --name <cluster_id> --descriptor cluster.yaml
Vault Password:
Creating temporary cluster "example-eks" ...
✓ Ensuring node image (kindest/node:v1.27.0) 🖼
✓ Building Stratio image (stratio-capi-image:v1.27.0) 📸
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Installing CAPx 🎖️
✓ Generating secrets file 📝🗝️
✓ Installing keos cluster operator 💻
✓ Creating the workload cluster 💥
✓ Saving the workload cluster kubeconfig 📝
✓ Preparing nodes in workload cluster 📦
✓ Installing AWS LB controller in workload cluster ⚖️
✓ Installing StorageClass in workload cluster 💾
✓ Enabling workload clusters self-healing 🏥
✓ Installing CAPx in workload cluster 🎖️
✓ Configuring Network Policy Engine in workload cluster 🚧
✓ Installing cluster-autoscaler in workload cluster 🗚
✓ Installing keos cluster operator in workload cluster 💻
✓ Creating cloud-provisioner Objects backup 🗄️
✓ Moving the management role 🗝️
✓ Executing post-install steps 🎖️
✓ Generating the KEOS descriptor 📝
✓ Rotating and generating override_vars structure ⚒️
The cluster has been installed successfully. Please refer to the documents below on how to proceed:
1. Post-installation Stratio cloud-provisioner documentation
2. Stratio KEOS documentation
At this point, you will have a Kubernetes cluster with the features indicated in the descriptor and you will be able to access the EKS API Server with the AWS CLI as indicated in the official documentation.
aws eks update-kubeconfig --region <region> --name <cluster_id> --kubeconfig ./<cluster_id>.kubeconfig
kubectl --kubeconfig ./<cluster_id>.kubeconfig get nodes
Here, the permissions of clusterawsadm.json can be removed.
Next, proceed to deploy Stratio KEOS using keos-installer.
- Enable the Kubernetes Engine API in GCP.
- A user with the necessary privileges in GCP:
  - Create an IAM Service Account with the permissions defined in:
  - Create a private key for the IAM Service Account of type JSON and download it in a <project_name>-<id>.json file. This data will be used for the credentials requested in the cluster descriptor.
- Private and public DNS zones created in GCP (optional).
- Custom infrastructure created in GCP (optional).
- Compose the cluster descriptor file.
  - User credentials (private_key_id, private_key, and client_email) and account data (region and project_id), which will be encrypted on first run.
  - GitHub token for template download (optional).
  - Data of the infrastructure already created (optional).
  - Management of the DNS zones created (optional).
  - Docker registry data (URL, credentials).
  - External domain of the cluster.
  - Control-plane.
  - Node groups.
  - Information required for the Stratio KEOS installation.
Note: the installation does not require a custom image.
Tip: it is recommended to create a bastion host to proceed with the installation.
- Have Docker installed (version 27.0.3 or higher).
- Have the local image stratio-capi-image:v1.27.0 available.
As for the control-plane, in the cluster descriptor you can indicate that it is a managed control-plane, and the following specifications must be included:
- cluster_network (mandatory): defines the cluster network.
  - private_cluster (mandatory): defines the spec of the private cluster.
    - enable_private_endpoint (mandatory/immutable; default: "true"): indicates whether the internal IP address of the master is used as the endpoint of the cluster.
    - control_plane_cidr_block (master-ipv4-cidr) (optional): the IP range, in CIDR notation, to be used for the master network. This range must not overlap with any other range in use within the cluster network. It applies when enable_private_nodes is "true" (the default value) and must be a /28 subnet.
- ip_allocation_policy (optional/immutable): represents the configuration options for the GKE cluster's IP allocation (if not specified, the GKE defaults will be used).
  - cluster_ipv4_cidr_block: the range of IP addresses for the pod IPs of the GKE cluster (if not specified, a range with the default size will be chosen).
  - services_ipv4_cidr_block: the range of IP addresses for the GKE cluster service IPs (if not specified, a range with the default size will be chosen).
  - cluster_secondary_range_name: the name of the secondary range to be used for the GKE cluster CIDR block. The range will be used for the pod IP addresses and must be an existing secondary range associated with the cluster subnet.
  - services_secondary_range_name: the name of the secondary range to be used for the services CIDR block. The range will be used for the service IPs and must be an existing secondary range associated with the cluster subnet.
Note: if the IP ranges are already created, the specified names (services_secondary_range_name and cluster_secondary_range_name) must be used. If they do not exist, CIDR notation (services_ipv4_cidr_block and cluster_ipv4_cidr_block) must be used to create them. Both methods cannot be used simultaneously.
- master_authorized_networks_config (optional/immutable): represents the cluster's authorized networks configuration.
  - cidr_blocks (optional, since gcp_public_cidrs_access_enabled is always "true"): list of CIDR blocks that are allowed to access the master.
    - cidr_block (mandatory if cidr_blocks is present): IP range, in CIDR notation, that will be allowed to access the master.
    - display_name (optional): name of the authorized network.
  - gcp_public_cidrs_access_enabled (default: "false" if enable_private_endpoint is "true"): indicates whether access from Google Compute Engine public IP addresses is allowed.
Note: enabling the authorized networks configuration will prevent all external traffic from accessing the Kubernetes master over HTTPS, except traffic from the specified CIDR blocks, Google Compute Engine public IPs, and Google Cloud services IPs.
- monitoring_config (optional/immutable): defines the monitoring of the cluster.
  - enable_managed_prometheus (default: "false"): enables managed monitoring of the cluster with Prometheus.
- logging_config (optional/immutable): defines the logging configuration of the cluster.
  - system_components (default: "false"): enables the system components part of logging.
  - workloads (default: "false"): enables the workloads part of logging.
Note: any modification of the above parameters will have no effect; they are only applied at cluster creation time.
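Putting the networking, monitoring, and logging specifications above together, a managed control-plane section might look like the following sketch (the CIDR ranges and the nesting under control_plane are assumptions; adapt them to your own network plan):

```yaml
# Sketch only: CIDRs are placeholders; nesting assumed from the field names above.
control_plane:
  managed: true
  cluster_network:
    private_cluster:
      enable_private_endpoint: true
      control_plane_cidr_block: "172.16.0.0/28"   # must be a /28, non-overlapping
  ip_allocation_policy:
    cluster_ipv4_cidr_block: "10.100.0.0/16"      # pod IPs
    services_ipv4_cidr_block: "10.101.0.0/20"     # service IPs
  master_authorized_networks_config:
    cidr_blocks:
      - cidr_block: "10.0.0.0/16"
        display_name: corporate-network
  monitoring_config:
    enable_managed_prometheus: false
  logging_config:
    system_components: true
    workloads: false
```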
In the cluster descriptor, worker node groups can be indicated with the following options:
- name: group name, cannot be repeated.
- size: instance type.
- quantity: number of workers in the group.
- min_size: minimum number of nodes for autoscaling (optional).
- max_size: maximum number of nodes for autoscaling (optional).
- labels: node labels in Kubernetes (optional).
- taints: taints of the nodes in Kubernetes (optional).
- root_volume: disk specifics (optional).
  - size: size in GB (default: 30GB).
  - type: disk type (default: Managed).
- zone_distribution: indicates whether the number of nodes should be balanced across the zones (default: balanced).
- az: zone of the worker group (optional). If specified, only this zone will be used for the whole group. This parameter overrides what is specified in zone_distribution.
Note: by default, nodes are distributed across zones a, b, and c of the indicated region in a balanced way, so the remainder of dividing the number of nodes by three is discarded. Example: if "quantity=7" is specified, only 2 nodes will be deployed in each of the zones.
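A GKE worker node group could then be sketched as follows (instance type, label, and taint values are placeholders; the taint structure mirrors the Kubernetes key/value/effect convention and is an assumption):

```yaml
# Sketch only: values are placeholders; taint structure assumed.
worker_nodes:
  - name: analytics-workers
    size: e2-standard-4          # instance type (placeholder)
    quantity: 3                  # 1 node per zone with balanced distribution
    labels:
      workload: analytics
    taints:
      - key: dedicated
        value: analytics
        effect: NoSchedule
    root_volume:
      size: 50                   # GB (default: 30GB)
    zone_distribution: balanced
```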
To facilitate the installation of Stratio KEOS, the provisioning process generates a functional keos.yaml file, ready to launch the installation. For this purpose, the version and flavour (production, development, or minimal) can be indicated in the cluster descriptor.
keos:
  version: 1.1.2
  flavour: development
For any extra customization, the file must be modified before running the keos-installer.
- In case of using a custom infrastructure, the VPC and subnet of the region must be specified:
  networks:
    vpc_id: "vpc-name"
    subnets:
      - subnet_id: "subnet-name"
- The Kubernetes version must be 1.28 and supported by GKE.
- worker_nodes group names cannot be repeated.
Tip: for more details, see the installation guide.
This phase (provisioning and installation of Kubernetes) should be run from the bastion machine.
Once the cloud-provisioner .tgz file is downloaded, unzip it and run it with the creation parameters:
$ tar xvzf cloud-provisioner-*tar.gz
$ sudo ./bin/cloud-provisioner create cluster --name <cluster_id> --use-local-stratio-image --descriptor cluster.yaml
Vault Password:
Creating temporary cluster "example-gke" ...
✓ Using local Stratio image (stratio-capi-image:v1.27.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Installing Private CNI 🎖️
✓ Deleting local storage plugin 🎖️
✓ Installing CAPx 🎖️
✓ Generating secrets file 📝🗝️
✓ Installing keos cluster operator 💻
✓ Creating the workload cluster 💥
✓ Saving the workload cluster kubeconfig 📝
✓ Preparing nodes in workload cluster 📦
✓ Enabling CoreDNS as DNS server 📡
✓ Installing CAPx in workload cluster 🎖️
✓ Installing StorageClass in workload cluster 💾
✓ Enabling workload cluster's self-healing 🏥
✓ Configuring Network Policy Engine in workload cluster 🚧
✓ Installing keos cluster operator in workload cluster 💻
✓ Creating cloud-provisioner Objects backup 🗄️
✓ Moving the management role 🗝️
✓ Executing post-install steps 🎖️
✓ Generating the KEOS descriptor 📝
✓ Rotating and generating override_vars structure ⚒️
The cluster has been installed successfully. Please refer to the documents below on how to proceed:
1. Post-installation Stratio cloud-provisioner documentation.
2. Stratio KEOS documentation.
At this point, there will be a Kubernetes cluster with the features indicated in the descriptor, and the API Server can be accessed with the kubeconfig generated in the current directory (.kube/config):
kubectl --kubeconfig .kube/config get nodes
Next, proceed to deploy Stratio KEOS using keos-installer.
- A user with the necessary privileges in Azure:
  - Create a Managed Identity with the roles Contributor, AcrPull (on the cluster's ACR, optional), and Managed Identity Operator. The reference of this identity (its Resource ID) will be used in the cluster descriptor (format: /subscriptions/<subscription_id>/resourcegroups/<resource_group_name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity_name>).
  - Create an App registration (this will create an Enterprise application) and generate a client secret. The client secret value and its Secret ID will be used for the credentials requested in the cluster descriptor.
- Private and public DNS zones created in Azure (optional).
- Customized infrastructure created in Azure (optional).
- Compose the cluster descriptor file.
  - User credentials (client_id and client_secret) and account data (subscription_id and tenant_id), which will be encrypted on first run.
  - GitHub token for template download (optional).
  - Data of the infrastructure already created (optional).
  - Management of the DNS zones created (optional).
  - Docker registry data (URL, credentials).
  - External domain of the cluster.
  - Control-plane.
  - Node groups.
  - Information required for the Stratio KEOS installation.
Note: the installation requires a custom image with parameters needed for Elasticsearch.
For this provider, the control-plane will be deployed in VMs; therefore, the following options can be configured:
- highly_available: defines whether the control-plane will have high availability (default: true).
- managed: indicates that it is a control-plane in VMs.
- size: instance type.
- node_image: image of the control-plane nodes. The indicated image must exist in the account.
- root_volume: disk specifics (optional).
  - size: size in GB (default: 30GB).
  - type: disk type (default: Standard_LRS).
In the cluster descriptor, groups of worker nodes can be indicated with the following options:
- name: group name, cannot be repeated.
- size: instance type.
- quantity: number of workers in the group.
- min_size: minimum number of nodes for autoscaling (optional).
- max_size: maximum number of nodes for autoscaling (optional).
- labels: node labels in Kubernetes (optional).
- root_volume: disk specifics (optional).
  - size: size in GB (default: 30GB).
  - type: disk type (default: Standard_LRS).
- ssh_key: SSH key for node access (optional). Must exist in the provider.
- spot: indicates if the instance is of spot type (optional).
- node_image: the image of the worker nodes. The indicated image must exist in the account.
- zone_distribution: indicates whether the number of nodes must be balanced across the zones (default: balanced).
- az: zone of the worker group (optional). If specified, only this zone will be used for the whole group. This parameter overrides what is specified in zone_distribution.
Note: by default, nodes are distributed across zones a, b, and c of the indicated region in a balanced way, so the remainder of dividing the number of nodes by three is discarded. Example: if "quantity=7" is specified, only 2 nodes will be deployed in each of the zones.
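Combining the control-plane and worker node options above, an Azure descriptor fragment might be sketched as follows (instance types and image references are placeholder assumptions, and the value of managed for a VM-based control-plane is itself an assumption):

```yaml
# Sketch only: values are placeholders; "managed: false" for a VM control-plane is an assumption.
control_plane:
  managed: false
  highly_available: true
  size: Standard_D8s_v3               # instance type (placeholder)
  node_image: <control_plane_image>   # must exist in the account
  root_volume:
    size: 50                          # GB (default: 30GB)
    type: Standard_LRS
worker_nodes:
  - name: workers-default
    size: Standard_D4s_v3             # instance type (placeholder)
    quantity: 3
    node_image: <worker_image>        # must exist in the account
    zone_distribution: balanced
```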
To facilitate the installation of Stratio KEOS, the provisioning process generates a functional keos.yaml file, ready to launch the installation. For this purpose, the version and flavour (production, development, or minimal) can be indicated in the cluster descriptor.
keos:
  version: 1.0.2
  flavour: development
For any extra customization, the file must be modified before running the keos-installer.
- If you use custom infrastructure, you must indicate the VPC and 3 subnets, one per region zone (a, b, and c).
- The configured Kubernetes version must be one supported by the indicated images (optional).
- The names of the worker_nodes groups cannot be repeated.
Tip: for more details, see the installation guide.
Run the provisioning and Kubernetes installation phase from a Linux machine with internet access and Docker installed.
Once you have downloaded the cloud-provisioner .tgz file, unzip it and run it with the creation parameters:
$ tar xvzf cloud-provisioner-*tar.gz
$ sudo ./bin/cloud-provisioner create cluster --name <cluster_id> --descriptor cluster.yaml
Vault Password:
Creating temporary cluster "example-azure" ...
✓ Ensuring node image (kindest/node:v1.27.0) 🖼
✓ Building Stratio image (stratio-capi-image:v1.27.0) 📸
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Installing CAPx 🎖️
✓ Generating secrets file 📝🗝️
✓ Installing keos cluster operator 💻
✓ Creating the workload cluster 💥
✓ Saving the workload cluster kubeconfig 📝
✓ Installing cloud-provider in workload cluster ☁️
✓ Installing Calico in workload cluster 🔌
✓ Installing CSI in workload cluster 💾
✓ Preparing nodes in workload cluster 📦
✓ Installing StorageClass in workload cluster 💾
✓ Enabling workload clusters self-healing 🏥
✓ Installing CAPx in workload cluster 🎖️
✓ Installing cluster-autoscaler in workload cluster 🗚
✓ Installing keos cluster operator in workload cluster 💻
✓ Creating cloud-provisioner Objects backup 🗄️
✓ Moving the management role 🗝️
✓ Executing post-install steps 🎖️
✓ Generating the KEOS descriptor 📝
The cluster has been installed successfully. Please refer to the documents below on how to proceed:
1. Post-installation Stratio cloud-provisioner documentation
2. Stratio KEOS documentation
At this point, you will have a Kubernetes cluster with the features indicated in the descriptor and you will be able to access the API Server with the kubeconfig generated in the current directory (.kube/config):
kubectl --kubeconfig .kube/config get nodes
Next, proceed to deploy Stratio KEOS using keos-installer.