From 27357af1e8c11279b0b8f0fb1ee0b595b81e0f3d Mon Sep 17 00:00:00 2001 From: Resmi A C Date: Fri, 20 Mar 2020 09:44:20 +0000 Subject: [PATCH 01/12] Azure WVM --- user_documentation/.DS_Store | Bin 0 -> 6148 bytes .../rst/application_description.rst | 136 ++++++++++++++++-- user_documentation/rst/deployment.rst | 3 +- 3 files changed, 128 insertions(+), 11 deletions(-) create mode 100644 user_documentation/.DS_Store diff --git a/user_documentation/.DS_Store b/user_documentation/.DS_Store new file mode 100644 index 0000000000000000000000000000000000000000..e17b1d66ed8bf3801f0c9d814f781ab2d1e8fabf GIT binary patch literal 6148 zcmeHK%}&BV5Z(pU1!Lr3BFDXW;{eFd$)u@x@Mcn@2Q|>9M4DhrXpv~e^fmO2d;(v` zncc-m=- zGx~WHjXrhPy9freUU~003KKU72NRVX_yY*Jz6?S?n%U7b^kX%Tqa8vNMX^_|q-nd^ zsLDpGHLuFF)2UTu^RPXi7sdWT{phrR_wblL&7WU1zZ~dSvTJYwub`On?wv_Eio#nM zSwt_Q2#En=fEXYK)`|gh42bQus+x)=28e;5Fo64mgobDt%rvU413J7uqrZTN0y@4W z5T!xOV5Si~AY7*c>Qru?7+j~rxHNH=!Azr0XI!le^O%(vj~A|1hjFRG8Mic2PYe(P z%M9eTS;q7K1b&&7kNo8nvWNj<;GZ$T3tgvcLs8~z{Z<~HwGvt%8Vbf`sDOaJbO`_h h_mPTnYQIDs;w*!iMw|u5RXQMD1Qa3E5d*)#zz6(RPyGM@ literal 0 HcmV?d00001 diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst index a24aeeb..b3fe5ce 100644 --- a/user_documentation/rst/application_description.rst +++ b/user_documentation/rst/application_description.rst @@ -681,8 +681,8 @@ previous section is orchestrated by Kubernetes. This section introduces how the parameters of the virtual machine can be configured which will host the Kubernetes worker node. During operation MiCADO will instantiate as many virtual machines with the parameters defined here as required during scaling. -MiCADO currently supports four different cloud interfaces: CloudSigma, -CloudBroker, EC2, Nova. MiCADO supports multiple virtual machine "sets" +MiCADO currently supports six different cloud interfaces: CloudSigma, +CloudBroker, EC2, Nova, Azure and GCE. MiCADO supports multiple virtual machine "sets" which can be restricted and host only specific containers (defined in the requirements section of the container specification). At the moment multi-cloud support is in alpha stage, so only certain combinations of different cloud @@ -728,14 +728,15 @@ support human readability.: The **interfaces** section of all virtual machine definitions that follow are **REQUIRED**, and allow you to provide orchestrator specific inputs, in -the examples below we use **Occopus**. +the examples below we use either **Occopus** or **Terraform** based on suitability. -* **create**: *this key tells MiCADO to create the VM using Occopus* +* **create**: *this key tells MiCADO to create the VM using Occopus/Terraform* - * **inputs**: Specific settings for Occopus follow here + * **inputs**: Specific settings for Occopus/Terraform follow here * **interface_cloud:** tells Occopus which cloud type to interface with * **endpoint_cloud:** tells Occopus the endpoint API of the cloud + * **provider:** tells Terraform which cloud provider to interface with @@ -902,7 +903,7 @@ Nova ~~~~ To instantiate MiCADO workers on a cloud through Nova interface, please use the -template below. MiCADO **requires** image_id flavor_name, project_id and +template below. MiCADO **requires** image_id, flavor_name, project_id and network_id to instantiate a VM through *Nova*. :: @@ -943,6 +944,12 @@ inputs are available.: * **image_id** is the image id on your Nova cloud. Select an image containing a base os installation with cloud-init support! 
* **flavor_name** is the name of the flavor to be instantiated on your Nova
  cloud.
+* **flavor_id** is the id of the desired flavor for the VM.
+* **tenant_name** is the name of the Tenant or Project to login with.
+* **auth_url** is the Identity authentication URL.
+* **network_name** is the human-readable name of the network.
+* **user_domain_name** is the domain name where the user is located.
+* **availability_zone** is the availability zone in which to create the VM.
* **server_name** optionally defines the hostname of the VM (e.g. "helloworld").
* **key_name** optionally sets the name of the keypair to be associated to the
  instance. Keypair name must be defined on the target nova cloud before
  launching the VM.
* **security_groups** optionally specify security settings (you can define
  multiple security groups in the form of a list) for your VM.
* **network_id** is the id of the network you would like to use on your
  target Nova cloud.

 Azure
 ~~~~~

 To instantiate MiCADO workers on a cloud through the Azure interface, please
 use the template below. MiCADO **requires** resource_group, virtual_network
 and subnet to instantiate a VM through *Azure*.

 ::

    YOUR-VIRTUAL-MACHINE:
      type: tosca.nodes.MiCADO.Azure.Compute
      properties:
        resource_group: ADD_YOUR_ID_HERE (e.g. TRG)
        virtual_network: ADD_YOUR_ID_HERE (e.g. TVN)
        subnet: ADD_YOUR_ID_HERE (e.g. TS)
        network_security_group: ADD_YOUR_ID_HERE (e.g. TNSG)
        vm_size: ADD_YOUR_ID_HERE (e.g. Standard_DS1_v2)
        image: ADD_YOUR_ID_HERE (e.g. 16.04.0-LTS)
        key_data: ADD_YOUR_KEY_HERE

      capabilities:
        # OPTIONAL METADATA
        host:
          properties:
            num_cpus: 2GHz
            mem_size: 2GB
        os:
          properties:
            type: linux
            distribution: ubuntu
            version: 16.04
      interfaces:
        Terraform:
          create:
            inputs:
              provider: azure

 Under the **properties** section of an Azure virtual machine definition these
 inputs are available:

 * **resource_group** specifies the name of the resource group in which the VM should exist.
 * **virtual_network** specifies the virtual network associated with the VM.
 * **subnet** specifies the subnet associated with the VM.
 * **network_security_group** specifies the security settings for the VM.
 * **vm_size** specifies the size of the VM.
 * **image** specifies the name of the image.
 * **key_data** sets the public SSH key to be associated with the instance.
 * **public_ip** sets the public IP to be associated with the Windows VM.

 GCE
 ~~~

 To instantiate MiCADO workers on a cloud through the Google interface, please
 use the template below. MiCADO **requires** region, zone, project,
 machine_type and network to instantiate a VM through *GCE*.

 ::

    YOUR-VIRTUAL-MACHINE:
      type: tosca.nodes.MiCADO.GCE.Compute
      properties:
        region: ADD_YOUR_ID_HERE (e.g. us-west1)
        project: ADD_YOUR_ID_HERE (e.g. PGCE)
        machine_type: ADD_YOUR_ID_HERE (e.g. n1-standard-2)
        zone: ADD_YOUR_ID_HERE (e.g. us-west1-a)
        image: ADD_YOUR_ID_HERE (e.g. ubuntu-os-cloud/ubuntu-1604-lts)
        network: ADD_YOUR_ID_HERE (e.g. default)
        ssh-keys: ADD_YOUR_ID_HERE

      capabilities:
        # OPTIONAL METADATA
        host:
          properties:
            num_cpus: 2GHz
            mem_size: 2GB
        os:
          properties:
            type: linux
            distribution: ubuntu
            version: 16.04
      interfaces:
        Terraform:
          create:
            inputs:
              provider: gce

 Under the **properties** section of a GCE virtual machine definition these
 inputs are available:

 * **project** is the project to manage the resources in.
 * **image** specifies the image from which to initialize the VM disk.
 * **region** is the region that the resources should be created in.
+ * **machine_type** specifies the type of machine to create. + * **zone** is the zone that the machine should be created in. + * **network** is the network to attach to the instance. + * **ssh-keys** sets the public SSH key to be associated with the instance. + + The authentication in GCE is done using a service account key file in JSON + format. You can manage the key files using the Cloud Console. The steps to + retrieve the key file is as follows : + + * Open the **IAM & Admin** page in the Cloud Console. + * Click **Select a project**, choose a project, and click **Open**. + * In the left nav, click **Service accounts**. + * Find the row of the service account that you want to create a key for. + In that row, click the **More** button, and then click **Create key**. + * Select a **Key type** and click **Create**. + + Types ~~~~~ @@ -1113,7 +1229,7 @@ The subsections have the following roles: - In a scaling rule belonging to the virtual machine, the name of the variable to be updated is ``m_node_count``; as an effect the number stored in this variable will be set as target instance number for the virtual machines. - In a scaling rule belonging to the virtual machine, the name of the variable to be updated is ``m_nodes_todrop``;the variable must be filled with list of ids or ip addresses and as an effect the valid nodes will be dropped. The variable ``m_node_count`` should not be modified in case of node dropping, MiCADO will update it automatically. - In a scaling rule belonging to a kubernetes deployment, the name of the variable to be set is ``m_container_count``; as an effect the number stored in this variable will be set as target instance number for the kubernetes service. - + For debugging purposes, the following support is provided: * ``m_dryrun`` can be specified in the **constant** as list of components towards which the communication is disabled. It has the following syntax: m_dryrun: ["prometheus","occopus","k8s","optimizer"] Use this feature with caution! @@ -1129,10 +1245,10 @@ For implementing more advanced scaling policies, it is possible to utilize the b Current limitations - only web based applications are supported - - only one of the node sets can be supported + - only one of the node sets can be supported - no container scaling is supported -Optimiser can be utilised based on the following principles +Optimiser can be utilised based on the following principles - User specifies a so-called target metric with its associated minimum and maximum thresholds. The target metric is a monitored Prometheus expression for which the value is tried to be kept between the two thresholds by the Optimiser with scaling advices. - User specifies several so-called input metrics which represent the state of the system correlating to the target variable - User specifies several initial settings (see later) for the Optimiser @@ -1166,7 +1282,7 @@ Definition of the target metric for the Optimizer - **m_opt_target_maxth_MYTARGET** specifies the value below which the target metric must be kept. Requesting scaling advice from the Optimizer - In order to receive a scaling advice from the Optimiser, the method **m_opt_advice()** must be invoked in the scaling_rule section of the node. + In order to receive a scaling advice from the Optimiser, the method **m_opt_advice()** must be invoked in the scaling_rule section of the node. **IMPORTANT! 
Minimum and maximum one node must contain this method invocation in its scaling_rule section for proper operation!** diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst index 84569dc..ca27332 100644 --- a/user_documentation/rst/deployment.rst +++ b/user_documentation/rst/deployment.rst @@ -17,6 +17,8 @@ For cloud interfaces supported by MiCADO: * EC2 (tested on Amazon and OpenNebula) * Nova (tested on OpenStack) +* Azure (tested on Microsoft Azure) +* GCE (tested on Google Cloud) * CloudSigma * CloudBroker @@ -280,4 +282,3 @@ In case your application contains a container exposing a service, you will have * First set **nodePort: xxxxx** (where xxxxx is a port in range 30000-32767) in the **properties: ports:** TOSCA description of your docker container. More information on this in the :ref:`applicationdescription` * The container will be accessible at *:* . Both, the IP and the port values can be extracted from the Kubernetes Dashboard (in case you forget it). The **IP** can be found under *Nodes > my_micado_vm > Addresses* menu, while the **port** can be found under *Discovery and load balancing > Services > my_app > Internal endpoints* menu. - From 26382a468afe5f5200d78de173e935b21ebab6b4 Mon Sep 17 00:00:00 2001 From: jaydesl <35102795+jaydesl@users.noreply.github.com> Date: Sat, 28 Mar 2020 10:38:51 +0000 Subject: [PATCH 02/12] Fix headings --- user_documentation/rst/application_description.rst | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst index b3fe5ce..f256eeb 100644 --- a/user_documentation/rst/application_description.rst +++ b/user_documentation/rst/application_description.rst @@ -959,8 +959,8 @@ inputs are available.: * **network_id** is the id of the network you would like to use on your target Nova cloud. - Azure - ~~~~~ +Azure +~~~~~ To instantiate MiCADO workers on a cloud through Azure interface, please use the template below. MiCADO **requires** resource_group, virtual_network and subnet to @@ -1008,8 +1008,8 @@ inputs are available.: * **key_data** sets the public SSH key to be associated with the instance. * **public_ip** sets the public ip to be associated with the Windows VM. - GCE - ~~~ +GCE +~~~ To instantiate MiCADO workers on a cloud through Google interface, please use the template below. 
MiCADO **requires** region, zone, project, machine_type and From 3c2e1c7fa98d358e89d1c387ce14694273706ddd Mon Sep 17 00:00:00 2001 From: jaydesl Date: Sat, 28 Mar 2020 13:02:26 +0000 Subject: [PATCH 03/12] Update docs for v0.8.1 --- user_documentation/.DS_Store | Bin 6148 -> 0 bytes .../rst/application_description.rst | 377 ++++++++++-------- user_documentation/rst/deployment.rst | 33 +- user_documentation/rst/index.rst | 2 +- 4 files changed, 233 insertions(+), 179 deletions(-) delete mode 100644 user_documentation/.DS_Store diff --git a/user_documentation/.DS_Store b/user_documentation/.DS_Store deleted file mode 100644 index e17b1d66ed8bf3801f0c9d814f781ab2d1e8fabf..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 6148 zcmeHK%}&BV5Z(pU1!Lr3BFDXW;{eFd$)u@x@Mcn@2Q|>9M4DhrXpv~e^fmO2d;(v` zncc-m=- zGx~WHjXrhPy9freUU~003KKU72NRVX_yY*Jz6?S?n%U7b^kX%Tqa8vNMX^_|q-nd^ zsLDpGHLuFF)2UTu^RPXi7sdWT{phrR_wblL&7WU1zZ~dSvTJYwub`On?wv_Eio#nM zSwt_Q2#En=fEXYK)`|gh42bQus+x)=28e;5Fo64mgobDt%rvU413J7uqrZTN0y@4W z5T!xOV5Si~AY7*c>Qru?7+j~rxHNH=!Azr0XI!le^O%(vj~A|1hjFRG8Mic2PYe(P z%M9eTS;q7K1b&&7kNo8nvWNj<;GZ$T3tgvcLs8~z{Z<~HwGvt%8Vbf`sDOaJbO`_h h_mPTnYQIDs;w*!iMw|u5RXQMD1Qa3E5d*)#zz6(RPyGM@ diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst index f256eeb..22dd67d 100644 --- a/user_documentation/rst/application_description.rst +++ b/user_documentation/rst/application_description.rst @@ -47,7 +47,7 @@ Example of the overall structure of an ADT tosca_definitions_version: tosca_simple_yaml_1_0 imports: - - https://raw.githubusercontent.com/micado-scale/tosca/v0.8.0/micado_types.yaml + - https://raw.githubusercontent.com/micado-scale/tosca/v0.8.1/micado_types.yaml repositories: docker_hub: https://hub.docker.com/ @@ -435,7 +435,7 @@ options, here we again use the key **Kubernetes:** Types ~~~~~ -**NEW in v0.8.0** Through abstraction, it is possible to reference a +Through abstraction, it is possible to reference a pre-defined parent type and simplify the description of a volume. These parent types can hide or reduce the complexity of more complex TOSCA constructs such as **interfaces** by enforcing defaults or moving them @@ -554,7 +554,7 @@ Examples of the definition of a basic volume Configuration Data ------------------ -**NEW in v0.8.0** + Configuration data (a Kubernetes **ConfigMap**) are to be defined at the same level as virtual machines, containers and volumes and then loaded into environment variables, or mounted as volumes in the definition of containers @@ -682,8 +682,8 @@ parameters of the virtual machine can be configured which will host the Kubernetes worker node. During operation MiCADO will instantiate as many virtual machines with the parameters defined here as required during scaling. MiCADO currently supports six different cloud interfaces: CloudSigma, -CloudBroker, EC2, Nova, Azure and GCE. MiCADO supports multiple virtual machine "sets" -which can be restricted and host only specific containers (defined in the +CloudBroker, EC2, Nova, Azure and GCE. MiCADO supports multiple virtual machine +"sets" which can be restricted to host only specific containers (defined in the requirements section of the container specification). At the moment multi-cloud support is in alpha stage, so only certain combinations of different cloud service providers will work. @@ -711,14 +711,45 @@ The following subsections details how to configure them. 
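To make the idea of virtual machine "sets" concrete before the subsections
below, here is a minimal, hypothetical sketch of how a container is tied to
one specific set via the **requirements** section of its definition. The node
names ``my-app`` and ``worker-vm`` are invented for this illustration, and
the cloud-specific properties are abbreviated:

::

    topology_template:
      node_templates:

        my-app:
          type: tosca.nodes.MiCADO.Container.Application.Docker
          properties:
            image: nginx
          requirements:
          # restrict this container to the virtual machine "set" below
          - host:
              node: worker-vm

        worker-vm:
          type: tosca.nodes.MiCADO.CloudSigma.Compute
          properties:
            # cloud-specific properties, detailed in the subsections below
            ...

With a **host** requirement like this in place, instances of ``my-app``
should only be scheduled to virtual machines created from the ``worker-vm``
definition.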
General
~~~~~~~

**Here is the basic look of a Virtual Machine node inside an ADT:**

::

    SAMPLE-VIRTUAL-MACHINE:
      type: tosca.nodes.MiCADO.<CLOUD>.Compute
      properties:
        <CLOUD-SPECIFIC PROPERTIES>

      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
        os:
          properties:
            type: linux
            distribution: ubuntu
            version: 18.04

      interfaces:
        Occopus:
          create:
            inputs:
              endpoint: https://mycloud/api/v1

The **properties** section is **REQUIRED** and contains the necessary
properties to provision the virtual machine; these vary from cloud to cloud.
Properties for each cloud are detailed further below.

The **capabilities** sections for all virtual machine definitions that follow
are identical and are **ENTIRELY OPTIONAL**. They are omitted in the
cloud-specific examples below. They are filled with the following metadata to
support human readability:

* **num_cpus** under *host* is an integer specifying the number of CPUs for
  the instance type
* **mem_size** under *host* is a readable string with unit specifying the RAM
  of the instance type
* **type** under *os* is a readable string specifying the operating system
  type of the image
* **distribution** under *os* is a readable string specifying the OS distro
  of the image
* **version** under *os* is a readable string specifying the OS version
  of the image

The **interfaces** section of all virtual machine definitions that follow
is **REQUIRED**, and allows you to provide orchestrator specific inputs. In
the examples we use either **Occopus** or **Terraform**, based on suitability.

* **create**: *this key tells MiCADO to create the VM using Occopus/Terraform*

  * **inputs**: Extra settings to pass to Occopus or Terraform

    * **endpoint:** the endpoint API of the cloud (always required for
      Occopus, sometimes required for Terraform)


CloudSigma
~~~~~~~~~~

To instantiate MiCADO workers on CloudSigma, please use the template below.
MiCADO **requires** num_cpus, mem_size, vnc_password, libdrive_id,
public_key_id and firewall_policy to instantiate a VM on *CloudSigma*.

Currently, only **Occopus** has support for CloudSigma, so Occopus must be
enabled as in :ref:`customize`, and the interface must be set to Occopus as
in the example below.

::

    YOUR-VIRTUAL-MACHINE:
      type: tosca.nodes.MiCADO.CloudSigma.Compute
      properties:
        num_cpus: ADD_NUM_CPUS_FREQ (e.g. 4096)
        mem_size: ADD_MEM_SIZE (e.g. 4294967296)
        vnc_password: ADD_YOUR_PW (e.g. secret)
        libdrive_id: ADD_YOUR_ID_HERE (e.g. 87ce928e-e0bc-4cab-9502-514e523783e3)
        public_key_id: ADD_YOUR_ID_HERE (e.g. d7c0f1ee-40df-4029-8d95-ec35b34dae1e)
        nics:
        - firewall_policy: ADD_YOUR_FIREWALL_POLICY_ID_HERE (e.g.
fd97e326-83c8-44d8-90f7-0a19110f3c9d) ip_v4_conf: conf: dhcp - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 + interfaces: Occopus: create: inputs: - interface_cloud: cloudsigma - endpoint_cloud: ADD_YOUR_ENDPOINT (e.g for cloudsigma https://zrh.cloudsigma.com/api/2.0 ) + endpoint: ADD_YOUR_ENDPOINT (e.g for cloudsigma https://zrh.cloudsigma.com/api/2.0 ) Under the **properties** section of a CloudSigma virtual machine definition these inputs are available.: @@ -804,6 +826,10 @@ To instantiate MiCADO workers on CloudBroker, please use the template below. MiCADO **requires** deployment_id and instance_type_id to instantiate a VM on *CloudBroker*. +Currently, only **Occopus** has support for CloudBroker, so Occopus must be +enabled as in :ref:`customize` and the interface must be set to Occopus as +in the example below. + :: YOUR-VIRTUAL-MACHINE: @@ -813,23 +839,12 @@ MiCADO **requires** deployment_id and instance_type_id to instantiate a VM on instance_type_id: ADD_YOUR_ID_HERE (e.g. 9b2028be-9287-4bf6-bbfe-bcbc92f065c0) key_pair_id: ADD_YOUR_ID_HERE (e.g. d865f75f-d32b-4444-9fbb-3332bcedeb75) opened_port: ADD_YOUR_PORTS_HERE (e.g. '22,2377,7946,8300,8301,8302,8500,8600,9100,9200,4789') - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 + interfaces: Occopus: create: inputs: - interface_cloud: cloudbroker - endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://cola-prototype.cloudbroker.com ) + endpoint: ADD_YOUR_ENDPOINT (e.g https://cola-prototype.cloudbroker.com ) Under the **properties** section of a CloudBroker virtual machine definition these inputs are available.: @@ -857,6 +872,10 @@ To instantiate MiCADO workers on a cloud through EC2 interface, please use the template below. MiCADO **requires** region_name, image_id and instance_type to instantiate a VM through *EC2*. +Both **Occopus and Terraform** support EC2 provisioning. To use Terraform, +enable it as described in :ref:`customize` and adjust the interfaces section +accordingly. + :: YOUR-VIRTUAL-MACHINE: @@ -865,23 +884,12 @@ instantiate a VM through *EC2*. region_name: ADD_YOUR_REGION_NAME_HERE (e.g. eu-west-1) image_id: ADD_YOUR_ID_HERE (e.g. ami-12345678) instance_type: ADD_YOUR_INSTANCE_TYPE_HERE (e.g. t1.small) - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 + interfaces: Occopus: create: inputs: - interface_cloud: ec2 - endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://ec2.eu-west-1.amazonaws.com) + endpoint: ADD_YOUR_ENDPOINT (e.g https://ec2.eu-west-1.amazonaws.com) Under the **properties** section of an EC2 virtual machine definition these inputs are available.: @@ -899,6 +907,22 @@ inputs are available.: * **subnet_id** optionally specifies subnet identifier (e.g. subnet-644e1e13) to be attached to the VM. +Under the **interfaces** section of an EC2 virtual machine definition, the +**endpoint** input is required by Occopus as seen in the example above. + +For Terraform the endpoint is discovered automatically based on region. +To customise the endpoint (e.g. for OpenNebula) pass the **endpoint** input +in interfaces. + +:: + + ... 
+ interfaces: + Terraform: + create: + inputs: + endpoint: ADD_YOUR_ENDPOINT (e.g https://my-custom-endpoint/api) + Nova ~~~~ @@ -906,6 +930,10 @@ To instantiate MiCADO workers on a cloud through Nova interface, please use the template below. MiCADO **requires** image_id, flavor_name, project_id and network_id to instantiate a VM through *Nova*. +Both **Occopus and Terraform** support Nova provisioning. To use Terraform, +enable it as described in :ref:`customize` and adjust the interfaces section +accordingly. + :: YOUR-VIRTUAL-MACHINE: @@ -918,23 +946,12 @@ network_id to instantiate a VM through *Nova*. key_name: ADD_YOUR_KEY_HERE (e.g. keyname) security_groups: - ADD_YOUR_ID_HERE (e.g. d509348f-21f1-4723-9475-0cf749e05c33) - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 + interfaces: Occopus: create: inputs: - interface_cloud: nova - endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3) + endpoint: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3) Under the **properties** section of a Nova virtual machine definition these inputs are available.: @@ -943,11 +960,8 @@ inputs are available.: Nova cloud. * **image_id** is the image id on your Nova cloud. Select an image containing a base os installation with cloud-init support! -* **flavor_name** is the name of flavor to be instantiated on your Nova cloud. -* **flavor_id** is the id of the desired flavor for the VM. +* **flavor_name** is the id of the desired flavor for the VM. * **tenant_name** is the name of the Tenant or Project to login with. -* **auth_url** is the Identity authentication URL. -* **network_name** is the human-readable name of the network. * **user_domain_name** is the domain name where the user is located. * **availability_zone** is the availability zone in which to create the VM. * **server_name** optionally defines the hostname of VM (e.g.:”helloworld”). @@ -955,126 +969,149 @@ inputs are available.: instance. Keypair name must be defined on the target nova cloud before launching the VM. * **security_groups** optionally specify security settings (you can define - multiple security groups in the form of a list) for your VM. + multiple security groups in the form of a **list**) for your VM. * **network_id** is the id of the network you would like to use on your target Nova cloud. +Under the **interfaces** section of a Nova virtual machine definition, the +**endpoint** input (v3 Identity service) is required as seen in the +example above. + +For Terraform the endpoint should also be passed as **endpoint** in inputs. +Depending on the configuration of the OpenStack cluster, it may be necessary +to provide **network_name** in addition to the ID. + +:: + + ... + interfaces: + Terraform: + create: + inputs: + endpoint: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3) + network_name: ADD_YOUR_NETWORK_NAME (e.g mynet-default) + Azure ~~~~~ - To instantiate MiCADO workers on a cloud through Azure interface, please use the - template below. MiCADO **requires** resource_group, virtual_network and subnet to - instantiate a VM through *Azure*. +To instantiate MiCADO workers on a cloud through Azure interface, please +use the template below. Currently, only **Terraform** has support for Azure, +so Terraform must be enabled as in :ref:`customize`, and the interface must +be set to Terraform as in the example below. - :: +MiCADO supports Windows VM provisioning in Azure. 
To force a Windows VM, +simply **DO NOT** pass the **public_key** property and **set the image** to +a desired WindowsServer Sku (2016-Datacenter). `Refer to this Sku list `__ - YOUR-VIRTUAL-MACHINE: - type: tosca.nodes.MiCADO.Azure.Compute - properties: - resource_group: ADD_YOUR_ID_HERE (e.g. TRG) - virtual_network: ADD_YOUR_ID_HERE (e.g. TVN) - subnet: ADD_YOUR_ID_HERE (e.g. TS) - network_security_group: ADD_YOUR_ID_HERE (e.g. TNSG) - vm_size: ADD_YOUR_ID_HERE (e.g. Standard_DS1_v2) - image: ADD_YOUR_ID_HERE (e.g. 16.04.0-LTS) - key_data: ADD_YOUR_KEY_HERE +:: - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 - interfaces: - Terraform: - create: - inputs: - provider: azure + YOUR-VIRTUAL-MACHINE: + type: tosca.nodes.MiCADO.Azure.Compute + properties: + resource_group: ADD_YOUR_RG_HERE (e.g. my-test) + virtual_network: ADD_YOUR_VNET_HERE (e.g. my-test-vnet) + subnet: ADD_YOUR_SUBNET_HERE (e.g. default) + network_security_group: ADD_YOUR_NSG_HERE (e.g. my-test-nsg) + size: ADD_YOUR_ID_HERE (e.g. Standard_B1ms) + image: ADD_YOUR_IMAGE_HERE (e.g. 18.04.0-LTS or 2016-Datacenter) + public_key: ADD_YOUR_MINIMUM_2048_KEY_HERE (e.g. ssh-rsa ASHFF...) + public_ip: [OPTIONAL] BOOLEAN_ENABLE_PUBLIC_IP (e.g. true) - Under the **properties** section of a Azure virtual machine definition these - inputs are available.: + interfaces: + Terraform: + create: + +Under the **properties** section of a Azure virtual machine definition these +inputs are available.: - * **resource_group** specifies the name of the resource group in which the VM should exist. - * **virtual_network** specifies the virtual network associated with the VM. - * **subnet** specifies the subnet associated with the VM. - * **network_security_group** specifies the security settings for the VM. - * **vm_size** specifies the size of the VM. - * **image** specifies the name of the image. - * **key_data** sets the public SSH key to be associated with the instance. - * **public_ip** sets the public ip to be associated with the Windows VM. +* **resource_group** specifies the name of the resource group in which + the VM should exist. +* **virtual_network** specifies the virtual network associated with the VM. +* **subnet** specifies the subnet associated with the VM. +* **network_security_group** specifies the security settings for the VM. +* **vm_size** specifies the size of the VM. +* **image** specifies the name of the image. +* **public_ip [OPTIONAL]** Associate a public IP with the VM. +* **key_data** The public SSH key (minimum 2048-bit) to be associated with + the instance. + **Defining this property forces creation of a Linux VM. If it is not** + **defined, a Windows VM will be created** + +Under the **interfaces** section of a Azure virtual machine definition no +specific inputs are required, but **Terraform: create:** should be present + +**Authentication** in Azure is supported by MiCADO in two ways: + + The first is by setting up a `Service Principal `__ + and providing the required fields in *credentials-cloud-api.yml* during + :ref:`cloud-credentials` + + The other option is by enabling a `System-Assigned Managed Identity `__ + on the **MiCADO Master VM** and then `modify access control `__ + of the **current subscription** to assign the role of **Contributor** to + the **MiCADO Master VM** GCE ~~~ - To instantiate MiCADO workers on a cloud through Google interface, please use the - template below. 
MiCADO **requires** region, zone, project, machine_type and - network to instantiate a VM through *GCE*. +To instantiate MiCADO workers on a cloud through Google interface, please use +the template below. Currently, only **Terraform** has support for Azure, +so Terraform must be enabled as in :ref:`customize`, and the interface must +be set to Terraform as in the example below. - :: +:: - YOUR-VIRTUAL-MACHINE: - type: tosca.nodes.MiCADO.GCE.Compute - properties: - region: ADD_YOUR_ID_HERE (e.g. us-west1) - project: ADD_YOUR_ID_HERE (e.g. PGCE) - machine_type: ADD_YOUR_ID_HERE (e.g. n1-standard-2) - zone: ADD_YOUR_ID_HERE (e.g. us-west1-a) - image: ADD_YOUR_ID_HERE (e.g. ubuntu-os-cloud/ubuntu-1604-lts) - network: ADD_YOUR_ID_HERE (e.g. default) - ssh-keys:ADD_YOUR_ID_HERE - - capabilities: - # OPTIONAL METADATA - host: - properties: - num_cpus: 2GHz - mem_size: 2GB - os: - properties: - type: linux - distribution: ubuntu - version: 16.04 - interfaces: - Terraform: - create: - inputs: - provider: gce - - Under the **properties** section of a GCE virtual machine definition these - inputs are available.: - - * **project** is the project to manage the resources in. - * **image** specifies the image from which to initialize the VM disk. - * **region** is the region that the resources should be created in. - * **machine_type** specifies the type of machine to create. - * **zone** is the zone that the machine should be created in. - * **network** is the network to attach to the instance. - * **ssh-keys** sets the public SSH key to be associated with the instance. - - The authentication in GCE is done using a service account key file in JSON - format. You can manage the key files using the Cloud Console. The steps to - retrieve the key file is as follows : - - * Open the **IAM & Admin** page in the Cloud Console. - * Click **Select a project**, choose a project, and click **Open**. - * In the left nav, click **Service accounts**. - * Find the row of the service account that you want to create a key for. - In that row, click the **More** button, and then click **Create key**. - * Select a **Key type** and click **Create**. + YOUR-VIRTUAL-MACHINE: + type: tosca.nodes.MiCADO.GCE.Compute + properties: + region: ADD_YOUR_ID_HERE (e.g. us-west1) + zone: ADD_YOUR_ID_HERE (e.g. us-west1-a) + project: ADD_YOUR_ID_HERE (e.g. PGCE) + machine_type: ADD_YOUR_ID_HERE (e.g. n1-standard-2) + image: ADD_YOUR_ID_HERE (e.g. ubuntu-os-cloud/ubuntu-1804-lts) + network: ADD_YOUR_ID_HERE (e.g. default) + ssh-keys: ADD_YOUR_ID_HERE (e.g. ssh-rsa AAAB3N...) + + interfaces: + Terraform: + create: + +Under the **properties** section of a GCE virtual machine definition these +inputs are available.: + +* **project** is the project to manage the resources in. +* **image** specifies the image from which to initialize the VM disk. +* **region** is the region that the resources should be created in. +* **machine_type** specifies the type of machine to create. +* **zone** is the zone that the machine should be created in. +* **network** is the network to attach to the instance. +* **ssh-keys** sets the public SSH key to be associated with the instance. + +Under the **interfaces** section of a GCE virtual machine definition no +specific inputs are required, but **Terraform: create:** should be present + +**Authentication** in GCE is done using a service account key file in JSON +format. You can manage the key files using the Cloud Console. 
The steps to +retrieve the key file is as follows : + + * Open the **IAM & Admin** page in the Cloud Console. + * Click **Select a project**, choose a project, and click **Open**. + * In the left nav, click **Service accounts**. + * Find the row of the service account that you want to create a key for. + In that row, click the **More** button, and then click **Create key**. + * Select a **Key type** and click **Create**. Types ~~~~~ -**NEW in v0.8.0** Through abstraction, it is possible to reference a +Through abstraction, it is possible to reference a pre-defined type and simplify the description of a virtual machine. Currently MiCADO supports these additional types for CloudSigma, but more can be written: +* **tosca.nodes.MiCADO.EC2.Compute.Terra** - + Orchestrates with Terraform on eu-west-2, overwrite region_name + under **properties** to change region * **tosca.nodes.MiCADO.CloudSigma.Compute.Occo** - Automatically orchestrates on Zurich with Occopus. There is no need to define further fields under **interfaces:** but Zurich can be changed @@ -1108,7 +1145,7 @@ Example definition of a VM using abstraction Monitoring Policy ----------------- -**NEW in v0.8.0** Metric collection is now disabled by default. The basic +Metric collection is now disabled by default. The basic exporters from previous MiCADO versions can be enabled through the monitoring policy below. If the policy is omitted, or if one property is left undefined, then the relevant metric collection will be disabled. diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst index ca27332..817cc4c 100644 --- a/user_documentation/rst/deployment.rst +++ b/user_documentation/rst/deployment.rst @@ -24,7 +24,7 @@ For cloud interfaces supported by MiCADO: For the MiCADO master: -* Ubuntu 16.04, 18.04 (the worker image **must be** the same) +* Ubuntu 16.04 or 18.04 * (Minimum) 2GHz CPU & 3GB RAM & 15GB DISK * (Recommended) 2GHz CPU & 4GB RAM & 20GB DISK @@ -77,13 +77,13 @@ To install jq on other operating systems follow the `official installation guide wrk ---- -To install wrk on Ubuntu, use this command: +To install wrk on Ubuntu 16.04, use this command: :: sudo apt-get install wrk -To install wrk on other operating systems check the sidebar on the `github wiki `__. +To install wrk on other operating versions/systems check the sidebar on the `github wiki `__. Installation ============ @@ -95,9 +95,11 @@ Step 1: Download the ansible playbook. :: - curl --output ansible-micado-0.8.0.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.8.0/ansible-micado-0.8.0.tar.gz - tar -zxvf ansible-micado-0.8.0.tar.gz - cd ansible-micado-0.8.0/ + curl --output ansible-micado-0.8.1.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.8.1/ansible-micado-0.8.1.tar.gz + tar -zxvf ansible-micado-0.8.1.tar.gz + cd ansible-micado-0.8.1/ + +.. _cloud-credentials: Step 2: Specify cloud credential for instantiating MiCADO workers. ------------------------------------------------------------------ @@ -109,6 +111,13 @@ MiCADO master will use this credential against the cloud API to start/stop VM in cp sample-credentials-cloud-api.yml credentials-cloud-api.yml edit credentials-cloud-api.yml +**NOTE** If you are using Google Cloud, you must replace or fill the credentials-gce.json with your downloaded service account key file. + +:: + + cp sample-credentials-gce.json credentials-gce.json + edit credentials-gce.json + Edit credentials-cloud-api.yml to add cloud credentials. 
You will find predefined sections in the template for each cloud interface type MiCADO supports. Fill only the section belonging to your target cloud. Optionally you can use the `Ansible Vault `_ mechanism to keep the credential data in an encrypted format. To achieve this, create the above file using Vault with the command @@ -205,6 +214,8 @@ Edit the ``hosts.yml`` file to set the variables. The following parameters under Please, revise all the parameters, however in most cases the default values are correct. +.. _customize: + Step 6: Customize the deployment -------------------------------- @@ -220,6 +231,12 @@ A few parameters can be fine tuned before deployment. They are as follows: - **web_session_timeout**: Timeout value in seconds for the Dashboard. Default is 600. +- **enable_occopus**: Install and enable Occopus for cloud orchestration. Default is True. + +- **enable_terraform**: Install and enable Terraform for cloud orchestration. Default is False. + +*Note. MiCADO supports running both Occopus & Terraform on the same Master, if desired* + Step 7: Start the installation of MiCADO master. ------------------------------------------------ @@ -240,7 +257,7 @@ Optionally, you can split the deployment of your MiCADO Master in two. The ``bui You can clone the drive of a **"built"** MiCADO Master (or otherwise make an image from it) to be reused again and again. This will greatly speed up the deployment of future instances of MiCADO. -Running the following command will ``build`` a MiCADO Master node on an empty Ubuntu 16.04 VM. +Running the following command will ``build`` a MiCADO Master node on an empty Ubuntu VM. :: @@ -252,7 +269,7 @@ You can then run the following command to ``start`` any **"built"** MiCADO Maste ansible-playbook -i hosts.yml micado-master.yml --tags 'start' -As a last measure of increasing efficiency, you can also ``build`` a MiCADO Worker node. You can then clone/snapshot/image the drive of this VM and point to it in your ADT descriptions. Before running this operation, Make sure the *hosts.yml* points to the empty VM where you intend to build the worker image. Adjust the values under the key **micado-target** as needed. The following command will ``build`` a MiCADO Worker node on an empty Ubuntu 16.04 VM. +As a last measure of increasing efficiency, you can also ``build`` a MiCADO Worker node. You can then clone/snapshot/image the drive of this VM and point to it in your ADT descriptions. Before running this operation, Make sure the *hosts.yml* points to the empty VM where you intend to build the worker image. Adjust the values under the key **micado-target** as needed. The following command will ``build`` a MiCADO Worker node on an empty Ubuntu VM. :: diff --git a/user_documentation/rst/index.rst b/user_documentation/rst/index.rst index c99cc84..ce4ac21 100644 --- a/user_documentation/rst/index.rst +++ b/user_documentation/rst/index.rst @@ -16,7 +16,7 @@ MiCADO requires a TOSCA-based Application Description Template to be submitted c The format of the Application Description Template for MiCADO is detailed later. -To use MiCADO, first the MiCADO core services must be deployed on a virtual machine (called MiCADO Master) by an Ansible playbook. MiCADO Master is configured as the Kubernetes Master Node and has installed the Docker Engine, Occopus (to scale VMs), Prometheus (for monitoring), Policy Keeper (to perform decision on scaling) and Submitter (to provide submission endpoint) microservices to realize the autoscaling control loops. 
During operation, MiCADO workers (realised on new VMs) are instantiated on demand; these deploy Prometheus Node Exporter and CAdvisor as Kubernetes DaemonSets, and the Docker engine, through contextualisation. The newly instantiated MiCADO workers join the Kubernetes cluster managed by the MiCADO Master.

In the current release, the status of the system can be inspected in the following ways: the REST API provides an interface for submission, update and list functionalities over applications. The Dashboard provides three graphical views to inspect the VMs and Kubernetes Deployments: the Kubernetes Dashboard, Grafana and Prometheus. Finally, advanced users may find the logs of the MiCADO core services useful in the Kubernetes Dashboard under the ``micado-system`` and ``micado-worker`` namespaces, or directly on the MiCADO master.

From bdfad4edbb1e411ae4c67ee3075a5b03131689bd Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Wed, 1 Apr 2020 11:31:26 +0100
Subject: [PATCH 04/12] Add OS app-creds, cred updates to docs

---
 .../rst/application_description.rst           | 13 +++
 user_documentation/rst/deployment.rst         | 87 ++++++++++++++-----
 2 files changed, 77 insertions(+), 23 deletions(-)

diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index 22dd67d..0b1bab4 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -991,6 +991,19 @@ to provide **network_name** in addition to the ID.
        endpoint: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3)
        network_name: ADD_YOUR_NETWORK_NAME (e.g mynet-default)

**Authentication** in OpenStack is supported by MiCADO in two ways:

  The default method is authenticating with the same credentials used to
  access the OpenStack WebUI, by providing the **username** and **password**
  fields in *credentials-cloud-api.yml* during :ref:`cloud-credentials`.

  The other option is with `Application Credentials `__.
  For this method, provide **application_credential_id** and
  **application_credential_secret** in *credentials-cloud-api.yml*.
  If these fields are filled, **username** and **password** will be
  ignored.
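As a rough orientation, an application-credential entry in
*credentials-cloud-api.yml* could look like the sketch below. This is only an
illustration: apart from **application_credential_id** and
**application_credential_secret**, which are documented above, the
surrounding key names are assumed to follow the predefined Nova section of
the sample credentials file and may differ between releases:

::

    resource:
    - type: nova
      auth_data:
        # username/password are ignored once application
        # credentials are provided
        application_credential_id: ADD_YOUR_ID_HERE
        application_credential_secret: ADD_YOUR_SECRET_HERE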
+ Azure ~~~~~ diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst index 817cc4c..d3919c6 100644 --- a/user_documentation/rst/deployment.rst +++ b/user_documentation/rst/deployment.rst @@ -13,7 +13,7 @@ We recommend to perform the installation remotely as all your configuration file Prerequisites ============= -For cloud interfaces supported by MiCADO: +**A cloud interface supported by MiCADO** * EC2 (tested on Amazon and OpenNebula) * Nova (tested on OpenStack) @@ -22,13 +22,14 @@ For cloud interfaces supported by MiCADO: * CloudSigma * CloudBroker -For the MiCADO master: +**MiCADO master (a virtual machine on a supported cloud)** * Ubuntu 16.04 or 18.04 * (Minimum) 2GHz CPU & 3GB RAM & 15GB DISK * (Recommended) 2GHz CPU & 4GB RAM & 20GB DISK -For the host where the Ansible playbook is executed (differs depending on local or remote): +| **Ansible Remote (the host where the Ansible Playbook is executed)** +| *this could be the MiCADO Master itself, for a "local" execution of the playbook* * Ansible 2.8 or greater * curl @@ -104,34 +105,58 @@ Step 1: Download the ansible playbook. Step 2: Specify cloud credential for instantiating MiCADO workers. ------------------------------------------------------------------ -MiCADO master will use this credential against the cloud API to start/stop VM instances (MiCADO workers) to host the application and to realize scaling. Credentials here should belong to the same cloud as where MiCADO master is running. We recommend making a copy of our predefined template and edit it. MiCADO expects the credential in a file, called credentials-cloud-api.yml before deployment. Please, do not modify the structure of the template! +MiCADO master will use the credentials against the cloud API to start/stop VM +instances (MiCADO workers) to host the application and to realize scaling. +Credentials here should belong to the same cloud as where MiCADO master +is running. We recommend making a copy of our predefined template and edit it. +MiCADO expects the credential in a file, called *credentials-cloud-api.yml* +before deployment. Please, do not modify the structure of the template! :: cp sample-credentials-cloud-api.yml credentials-cloud-api.yml edit credentials-cloud-api.yml -**NOTE** If you are using Google Cloud, you must replace or fill the credentials-gce.json with your downloaded service account key file. + +Edit **credentials-cloud-api.yml** to add cloud credentials. You will find +predefined sections in the template for each cloud interface type MiCADO +supports. It is recommended to fill only the section belonging to your +target cloud. + +**NOTE** If you are using Google Cloud, you must replace or fill the +*credentials-gce.json* with your downloaded service account key file. :: cp sample-credentials-gce.json credentials-gce.json edit credentials-gce.json -Edit credentials-cloud-api.yml to add cloud credentials. You will find predefined sections in the template for each cloud interface type MiCADO supports. Fill only the section belonging to your target cloud. +It is possible to modify cloud credentials after MiCADO has been deployed, +see the section titled **Update Cloud Credentials** further down this page -Optionally you can use the `Ansible Vault `_ mechanism to keep the credential data in an encrypted format. To achieve this, create the above file using Vault with the command +Optional: Added security +~~~~~~~~~~~~~~~~~~~~~~~~ -:: + Credentials are stored in Kubernetes Secrets on the MiCADO Master. 
If you wish to keep the credential data in a secure format on the Ansible
   Remote as well, you can use the `Ansible Vault `_
   mechanism to achieve this. Simply create the above file using Vault with
   the following command

   ::

      ansible-vault create credentials-cloud-api.yml

   This will launch the editor defined in the ``$EDITOR`` environment variable
   to make changes to the file. If you wish to make any changes to the
   previously encrypted file, you can use the command

   ::

      ansible-vault edit credentials-cloud-api.yml

   Be sure to see the note about deploying a playbook with vault encrypted
   files in **Step 7**.

Step 3a: Specify security settings and credentials to access MiCADO.
--------------------------------------------------------------------
@@ -252,28 +277,30 @@ If you have used Vault to encrypt your credentials, you have to add the path to

    ansible-playbook -i hosts.yml micado-master.yml --ask-vault-pass

Optional: Build & Start Roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   Optionally, you can split the deployment of your MiCADO Master in two. The ``build`` tags prepare the node with all the necessary dependencies, libraries and images for operation. The ``start`` tags initialise the cluster and all the MiCADO core components.

   You can clone the drive of a **"built"** MiCADO Master (or otherwise make an image from it) to be reused again and again. This will greatly speed up the deployment of future instances of MiCADO.

   Running the following command will ``build`` a MiCADO Master node on an empty Ubuntu VM.

   ::

      ansible-playbook -i hosts.yml micado-master.yml --tags 'build'

   You can then run the following command to ``start`` any **"built"** MiCADO Master node which will initialise and launch the core components for operation.

   ::

      ansible-playbook -i hosts.yml micado-master.yml --tags 'start'

   As a last measure of increasing efficiency, you can also ``build`` a MiCADO Worker node.
You can then clone/snapshot/image the drive of this VM and point to it in your ADT descriptions. Before running this operation, make sure the *hosts.yml* points to the empty VM where you intend to build the worker image. Adjust the values under the key **micado-target** as needed. The following command will ``build`` a MiCADO Worker node on an empty Ubuntu VM.

   ::

      ansible-playbook -i hosts.yml build-micado-worker.yml


After deployment
================

Once the deployment has successfully finished, you can proceed with

* visiting the :ref:`dashboard`
* using the :ref:`restapi`
* playing with the :ref:`tutorials`
* creating your :ref:`applicationdescription`


Update Cloud Credentials
========================

It is possible to modify cloud credentials on an already deployed MiCADO
Master. Simply make the necessary changes to the appropriate credentials
file (using *ansible-vault* if desired) and then run the following playbook
command:

::

   ansible-playbook -i hosts.yml micado-master.yml --tags update-auth


Check the logs
==============

From d7a9f2dbb7a980168c708c532edd0155d602a7c2 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Wed, 1 Apr 2020 13:03:28 +0100
Subject: [PATCH 05/12] Add contextualisation help

---
 .../rst/application_description.rst           | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index 0b1bab4..643b7de 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -719,6 +719,11 @@ General
       type: tosca.nodes.MiCADO.<CLOUD>.Compute
       properties:
         <CLOUD-SPECIFIC PROPERTIES>

        context:
          insert: true
          cloud_config: |
            runcmd:
            - ADD_YOUR_COMMAND_HERE

      capabilities:
        host:

@@ -741,6 +746,28 @@ The **properties** section is **REQUIRED** and contains the necessary
 properties to provision the virtual machine and vary from cloud to cloud.
 Properties for each cloud are detailed further below.

**Cloud Contextualisation**

  It is possible to provide custom configuration of the deployed nodes via
  `cloud-init scripts `__. MiCADO relies on a cloud-init config to join nodes
  as workers to the cluster, so it is recommended to only add to the default
  config, except for certain cases.

  The **context** key is supported by all the cloud compute node definitions
  below. New cloud-init configurations should be defined in **cloud_config**
  and one of **append** or **insert** should be set to *true* to avoid
  overwriting the default cloud-init config for MiCADO.

  - Setting **append** to true will add the newly defined configurations
    to the end of the default cloud-init config
  - Setting **insert** to true will add the newly defined configurations
    to the start of the default cloud-init config, before the MiCADO Worker
    is fully initialised

The **capabilities** sections for all virtual machine definitions that follow
are identical and are **ENTIRELY OPTIONAL**. They are omitted in the
cloud-specific examples below.
They are filled with the following metadata to From a2ed6fca98e2fb4e1106445b1ada1a610ad6a93f Mon Sep 17 00:00:00 2001 From: jaydesl Date: Wed, 1 Apr 2020 16:06:52 +0100 Subject: [PATCH 06/12] Add v0.8.1 release notes --- user_documentation/rst/release_notes.rst | 179 ++++++++++++++++++++++- 1 file changed, 172 insertions(+), 7 deletions(-) diff --git a/user_documentation/rst/release_notes.rst b/user_documentation/rst/release_notes.rst index 257d77b..9150a60 100644 --- a/user_documentation/rst/release_notes.rst +++ b/user_documentation/rst/release_notes.rst @@ -1,9 +1,167 @@ Release Notes ************* +**Changelog since v0.5.x of MiCADO** -**v0.8.0 (30 September 2019)** +v0.8.1 (April 2020) +=================== +What's New +---------- + +Terraform for Cloud Orchestration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| Support for `Terraform `__ has been added to MiCADO! +| The TerraformAdaptor currently supports the following cloud resources: + +- OpenStack Nova Compute +- Amazon EC2 Compute +- Microsoft Azure Compute +- Google Compute Engine + +To use Terraform with MiCADO it must be **enabled** during deployment +of the MiCADO Master, and an appropriate **ADT** should be used. + +Improved Credential File Handling +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Cloud credentials are now stored in Kubernetes Secrets on the MiCADO Master. +Additionally, credentials on an already deployed MiCADO can now be updated +or modified using Ansible. + +Improved Node Contextualisation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +It is now possible to **insert** contextualisation configurations earlier +in the default *cloud-init #cloud-config* for worker nodes. This extends +the existing **append** functionality to support configuration tasks which +should precede the initialisation of the worker node (joining the Kubernetes +cluster, bringing up the IPSec tunnel, etc...) + +Fixes +----- + +Zorp Ingress +~~~~~~~~~~~~ + +The Zorp Ingress Controllers in v0.8.0 were incorrectly being deployed +alongside *every* application, even if the policy did not call for it. This +has now been resolved. + +Additionally, these workers were requesting a large amount of CPU and Memory, +which could limit scheduling on the node. Those requests have been lowered to +more reasonable values. + +Different Versioned Workers +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In previous versions of MiCADO, deployed worker nodes which did not match +the Ubuntu version of the MiCADO Master would be unable to join the +MiCADO cluster. This has now been resolved. + +Known Issues & Deprecations +--------------------------- + +Prometheus Fails to Scrape Targets +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Rarely, on some configrations of Ubuntu, IPSec drops Prometheus scrapes of +certain Prometheus Exporters deployed on worker nodes, causing the scrape to +fail with the error **context deadline exceeded**. As a temporary workaround, +the IPSec tunnel securing Master-Worker communications can be stopped by +appending **ipsec stop** to the default worker node *cloud-init #cloud-config*. + +Compute Node Inputs in ADTs +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The Occopus **input** *interface_cloud* has been deprecated and removed, +as cloud discovery is now based on TOSCA type. It will continue to be +supported (ignored) in this version of MiCADO but may raise warnings or +errors in future versions. + +The **input** *endpoint_cloud* has been deprecated in favour of +*endpoint*. 
Both Terraform and Occopus will support *endpoint_cloud*
+in this version of MiCADO but a future version will drop support.
+
+With the above changes in mind, Terraform will support v0.8.0 ADTs
+which only include EC2 or Nova Compute nodes. This can be achieved simply
+by changing **interfaces** from *Occopus* to *Terraform*, though it
+should be noted:
+
+- Terraform will auto-discover the EC2 endpoint based on the *region_name*
+  property, making the *endpoint* input no longer required. The *endpoint*
+  input can still be passed in to provide a custom endpoint.
+- For some OpenStack configurations, Terraform requires a *network_name*
+  as well as *network_id* to correctly identify networks. The *network_name*
+  property can be passed in as **properties** or **inputs**.

Full Change List
----------------

Ansible Playbook
~~~~~~~~~~~~~~~~

- Refactor tasks to be more component-specific
- Add tasks for configuring and installing Terraform
- Use the Ansible *k8s module* for managing Kubernetes resources
- Optimise cloud-init scripts by reducing *apt-get update*
- Fix Master-Worker Ubuntu mismatch bug
- Handle undefined credential file path
- Store credential data in Kubernetes Secrets
- Support updates of credentials on a deployed MiCADO Master
- Add demo ADTs for Azure & GCE
- Update QuickStart docs in README

MiCADO Dashboard
~~~~~~~~~~~~~~~~

- Bump Grafana to v6.6.2
- Bump Prometheus to v2.16.0
- Bump Kubernetes-Dashboard to v2.0.0 (rc7)
- Hide Kubernetes Secrets on Kubernetes-Dashboard

Policy Keeper
~~~~~~~~~~~~~

- Refactor PK main loop to support multiple cloud orchestrators
- Add Terraform handler for scaling (up/down and dropping specific nodes)
- Switch to the *pykube* package instead of *kubernetes*

TOSCASubmitter
~~~~~~~~~~~~~~

- Add the TerraformAdaptor
- Bump package versions

- **OccopusAdaptor**

  - Discover cloud from TOSCA type and deprecate *interface_cloud*
  - Rename *endpoint_cloud* to *endpoint*
  - Support *insert* to cloud-init cloud-config
  - Support authentication with OpenStack application credential

- **PKAdaptor**

  - Pass orchestrator info to PK

- **K8sAdaptor**

  - Lower Zorp Ingress reserved CPU and Memory
  - Only deploy Zorp Ingress with matching policy

Other
~~~~~

- Bump Kubernetes to v1.18
- Bump Flannel to v0.12
- Bump containerd.io to v.1.2.13
- Bump Occopus to v1.7 (rc6)
- Bump cAdvisor to v0.34.0
- Bump AlertManager to v0.20.0

+v0.8.0 (30 September 2019)
+==========================
 - simplify ADTs by introducing pre-defined TOSCA node types
 - add support for Kubernetes ConfigMaps, Namespaces and multi-container Pods
 - metric collection (disabled by default) is now enabled with "monitoring" policy
@@ -37,7 +195,8 @@
 - add a timeout to Kubernetes undeploy
 - simplify hosts.yml file

-**v0.7.3 (14 Jun 2019)**
+v0.7.3 (14 Jun 2019)
+====================

 - update MiCADO internal core services to run in Kubernetes pods
 - remove Consul and replace it with Prometheus' Kubernetes Service Discovery
@@ -64,11 +223,13 @@ Release Notes
 - update the cQueue demo to demonstrate "virtual machine sets"
 - fix and improve the NGINX demo

-**v0.7.2-rev1 (01 Apr 2019)**
+v0.7.2-rev1 (01 Apr 2019)
+=========================

 - fix dependency issue for Kubernetes 1.13.1 (`kubernetes/kubernetes#75683 `__)

-**v0.7.2 (25 Feb 2019)**
+v0.7.2 (25 Feb 2019)
+====================

 - add checking for minimal memory on micado master at deployment
 - support private networks on cloudsigma
@@ -93,7 +254,8 @@
Release Notes
 - add support for kubernetes secrets
 - reimplement Credential Manager using the flask-users library

-**v0.7.1 (10 Jan 2019)**
+v0.7.1 (10 Jan 2019)
+====================

 - Fix: Add SKIP back to Dashboard (defaults changed in v1.13.1)
 - Fix: URL not found for Kubernetes manifest files
@@ -104,11 +266,14 @@ Release Notes
 - Add Kubernetes service discovery support to Prometheus
 - Add new demo: nginx (HTTP request scaling)

-**v0.7.0 (12 Dec 2018)**
-
+v0.7.0 (12 Dec 2018)
+====================
 - Introduce Kubernetes as the primary container orchestration engine
 - Replace the swarm-visualiser with the Kubernetes Dashboard

+Older MiCADO Versions
+=====================
+
 **v0.6.1 (15 Oct 2018)**

 - enable VM-only deployments

From f2207ba256396a5a215ce72fc207a1f2dd78887f Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Fri, 3 Apr 2020 10:08:30 +0100
Subject: [PATCH 07/12] Add note about nodetodrop in TF

---
 user_documentation/rst/application_description.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index 643b7de..0a92177 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -1299,7 +1299,7 @@ The subsections have the following roles:

   - m_nodes: python list of nodes belonging to the kubernetes cluster
   - m_node_count: the target number of nodes
-  - m_nodes_todrop: the ids or ip addresses of the nodes to be dropped in case of downscaling
+  - m_nodes_todrop: the ids or ip addresses of the nodes to be dropped in case of downscaling **NOTE: MiCADO-Terraform supports private IPs on Azure or AWS EC2 only**
   - m_container_count: the target number of containers for the service the evaluation belongs to
   - m_time_since_node_count_changed: time in seconds elapsed since the number of nodes changed

From 1ecb9652595ea7202e479c2ac5c3b55453603de2 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Fri, 3 Apr 2020 11:53:24 +0100
Subject: [PATCH 08/12] Clarify OpenNebula support

---
 user_documentation/rst/application_description.rst | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index 0a92177..03ce5e9 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -899,9 +899,9 @@ To instantiate MiCADO workers on a cloud through EC2 interface, please use the
 template below. MiCADO **requires** region_name, image_id and instance_type
 to instantiate a VM through *EC2*.

-Both **Occopus and Terraform** support EC2 provisioning. To use Terraform,
-enable it as described in :ref:`customize` and adjust the interfaces section
-accordingly.
+**Terraform** supports provisioning on AWS EC2, and **Occopus** supports
+both AWS EC2 and OpenNebula EC2. To use Terraform, enable it as described
+in :ref:`customize` and adjust the interfaces section accordingly.

 ::

@@ -938,8 +938,7 @@ Under the **interfaces** section of an EC2 virtual machine definition, the
 **endpoint** input is required by Occopus as seen in the example above.
 For Terraform the endpoint is discovered automatically based on region.
-To customise the endpoint (e.g. for OpenNebula) pass the **endpoint** input
-in interfaces.
+To customise the endpoint, pass the **endpoint** input in interfaces.
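As a hedged illustration of that switch, an EC2 compute node pointed at
Terraform might look like the sketch below. The node name and values are
placeholders, and the *provider* input name follows the pattern of the Azure
and GCE examples elsewhere in these docs, so it is an assumption here:

::

    YOUR-VIRTUAL-MACHINE:
      type: tosca.nodes.MiCADO.EC2.Compute
      properties:
        region_name: eu-west-1
        image_id: ami-0123456789abcdef0
        instance_type: t2.small
      interfaces:
        Terraform:
          create:
            inputs:
              provider: ec2
              # endpoint: ...  (optional custom endpoint, as noted above)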
::

From 97a6a9cbec1bf57956647c1d652dacabdd07527 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Mon, 6 Apr 2020 09:26:22 +0100
Subject: [PATCH 09/12] Change MiCADO version to 0.9.0

---
 user_documentation/rst/application_description.rst | 2 +-
 user_documentation/rst/deployment.rst | 6 +++---
 user_documentation/rst/release_notes.rst | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index 03ce5e9..d04d38f 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -47,7 +47,7 @@ Example of the overall structure of an ADT
   tosca_definitions_version: tosca_simple_yaml_1_0

   imports:
-    - https://raw.githubusercontent.com/micado-scale/tosca/v0.8.1/micado_types.yaml
+    - https://raw.githubusercontent.com/micado-scale/tosca/v0.9.0/micado_types.yaml

   repositories:
     docker_hub: https://hub.docker.com/

diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst
index d3919c6..ca71d8f 100644
--- a/user_documentation/rst/deployment.rst
+++ b/user_documentation/rst/deployment.rst
@@ -96,9 +96,9 @@ Step 1: Download the ansible playbook.

 ::

-   curl --output ansible-micado-0.8.1.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.8.1/ansible-micado-0.8.1.tar.gz
-   tar -zxvf ansible-micado-0.8.1.tar.gz
-   cd ansible-micado-0.8.1/
+   curl --output ansible-micado-0.9.0.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.9.0/ansible-micado-0.9.0.tar.gz
+   tar -zxvf ansible-micado-0.9.0.tar.gz
+   cd ansible-micado-0.9.0/

 .. _cloud-credentials:

diff --git a/user_documentation/rst/release_notes.rst b/user_documentation/rst/release_notes.rst
index 9150a60..8336657 100644
--- a/user_documentation/rst/release_notes.rst
+++ b/user_documentation/rst/release_notes.rst
@@ -3,7 +3,7 @@ Release Notes

 **Changelog since v0.5.x of MiCADO**

-v0.8.1 (April 2020)
+v0.9.0 (April 2020)
 ===================

 What's New

From e3528616b73564368777efbfb22b0b9e98437ea4 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Wed, 8 Apr 2020 12:39:28 +0100
Subject: [PATCH 10/12] Update docs with IPSec issue

---
 user_documentation/rst/deployment.rst | 14 ++++++++++++--
 user_documentation/rst/release_notes.rst | 19 +++++++++++-------
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst
index ca71d8f..5cccb50 100644
--- a/user_documentation/rst/deployment.rst
+++ b/user_documentation/rst/deployment.rst
@@ -210,11 +210,21 @@ Protocol Port(s) Service
 TCP 6443 kube-apiserver
 TCP 10250-10252 kubelet, kube-controller, kube-scheduler
 UDP 8285 & 8472 flannel overlay network
+ UDP 500 & 4500 IPSec
 ======== ============= ====================

-**NOTE:** ``[web_listening_port]`` should match with the actual value specified in Step 4a.
-
-**NOTE:** MiCADO master has built-in firewall, therefore you can leave all ports open at cloud level.
+ **NOTE:** ``[web_listening_port]`` should match the actual value specified in Step 4a.
+
+ **NOTE:** MiCADO master has a built-in firewall, therefore you can leave all ports open at cloud level.
+
+ **NOTE:** On some network configurations, for example where IPSec
+ protocols **ESP (50)** and **AH (51)** are blocked, important network
+ packets can get dropped in Master-Worker communications.
This might be
+ seen as Prometheus scrapes failing with the error
+ **context deadline exceeded**, or Workers failing to join the Kubernetes
+ cluster. The IPSec tunnel securing Master-Worker communications can be
+ disabled by appending **ipsec stop** to **runcmd** in the default
+ worker node *cloud-init #cloud-config*.

 **c)** Finally, launch the virtual machine with the proper settings (capacity, ssh keys, firewall): use any of the aws, ec2, nova, etc. command-line tools or the web interface of your target cloud to launch a new VM. We recommend a VM with 2 cores, 4GB RAM, 20GB disk. Make sure you can ssh to it (password-free i.e. ssh public key is deployed) and your user is able to sudo (to install MiCADO as root). Store its IP address, which will be referred to as ``IP`` in the following steps.

diff --git a/user_documentation/rst/release_notes.rst b/user_documentation/rst/release_notes.rst
index 8336657..583f1f7 100644
--- a/user_documentation/rst/release_notes.rst
+++ b/user_documentation/rst/release_notes.rst
@@ -63,14 +63,17 @@ MiCADO cluster. This has now been resolved.
 Known Issues & Deprecations
 ---------------------------

-Prometheus Fails to Scrape Targets
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Rarely, on some configurations of Ubuntu, IPSec drops Prometheus scrapes of
-certain Prometheus Exporters deployed on worker nodes, causing the scrape to
-fail with the error **context deadline exceeded**. As a temporary workaround,
-the IPSec tunnel securing Master-Worker communications can be stopped by
-appending **ipsec stop** to the default worker node *cloud-init #cloud-config*.
+IPSec and Dropped Network Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On some network configurations, for example where IPSec protocols ESP (50) and
+AH (51) are blocked, important network packets can get dropped in
+Master-Worker communications. This might be seen as Prometheus scrapes
+failing with the error **context deadline exceeded**, or Workers failing
+to join the Kubernetes cluster. The IPSec tunnel securing
+Master-Worker communications can be disabled by appending
+**ipsec stop** to **runcmd** in the default worker node
+*cloud-init #cloud-config*.

 Compute Node Inputs in ADTs
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~

From d404d03e20b963595da0b75b7081d37f0a3e47f8 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Thu, 9 Apr 2020 12:46:36 +0100
Subject: [PATCH 11/12] Separate whats new from release notes

---
 user_documentation/rst/release_notes.rst | 157 +++--------------------
 user_documentation/rst/whats_new.rst | 102 +++++++++++++++
 2 files changed, 119 insertions(+), 140 deletions(-)
 create mode 100644 user_documentation/rst/whats_new.rst

diff --git a/user_documentation/rst/release_notes.rst b/user_documentation/rst/release_notes.rst
index 583f1f7..989a452 100644
--- a/user_documentation/rst/release_notes.rst
+++ b/user_documentation/rst/release_notes.rst
@@ -1,112 +1,14 @@ Release Notes
 *************

-**Changelog since v0.5.x of MiCADO**
+| **Changelog since v0.5.x of MiCADO**
+| See more detailed notes about upgrading in :ref:`whatsnew`

-v0.9.0 (April 2020)
-===================
-
-What's New
-----------
-
-Terraform for Cloud Orchestration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-| Support for `Terraform `__ has been added to MiCADO!
-| The TerraformAdaptor currently supports the following cloud resources: - -- OpenStack Nova Compute -- Amazon EC2 Compute -- Microsoft Azure Compute -- Google Compute Engine - -To use Terraform with MiCADO it must be **enabled** during deployment -of the MiCADO Master, and an appropriate **ADT** should be used. - -Improved Credential File Handling -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Cloud credentials are now stored in Kubernetes Secrets on the MiCADO Master. -Additionally, credentials on an already deployed MiCADO can now be updated -or modified using Ansible. - -Improved Node Contextualisation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -It is now possible to **insert** contextualisation configurations earlier -in the default *cloud-init #cloud-config* for worker nodes. This extends -the existing **append** functionality to support configuration tasks which -should precede the initialisation of the worker node (joining the Kubernetes -cluster, bringing up the IPSec tunnel, etc...) - -Fixes ------ - -Zorp Ingress -~~~~~~~~~~~~ - -The Zorp Ingress Controllers in v0.8.0 were incorrectly being deployed -alongside *every* application, even if the policy did not call for it. This -has now been resolved. - -Additionally, these workers were requesting a large amount of CPU and Memory, -which could limit scheduling on the node. Those requests have been lowered to -more reasonable values. - -Different Versioned Workers -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In previous versions of MiCADO, deployed worker nodes which did not match -the Ubuntu version of the MiCADO Master would be unable to join the -MiCADO cluster. This has now been resolved. - -Known Issues & Deprecations ---------------------------- - -IPSec and Dropped Network Packets -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -On some network configurations, for example where IPSec protocols ESP (50) and -AH (51) are blocked, important network packets can get dropped in -Master-Worker communications. This might be seen as Prometheus scrapes -failing with the error **context deadline exceeded**, or Workers failing -to join the Kubernetes cluster. To disable the IPSec tunnel securing -Master-Worker communications, it can be stopped by appending -**ipsec stop** to **runcmd** in the default worker node -*cloud-init #cloud-config*. - -Compute Node Inputs in ADTs -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The Occopus **input** *interface_cloud* has been deprecated and removed, -as cloud discovery is now based on TOSCA type. It will continue to be -supported (ignored) in this version of MiCADO but may raise warnings or -errors in future versions. - -The **input** *endpoint_cloud* has been deprecated in favour of -*endpoint*. Both Terraform and Occopus will support *endpoint_cloud* -in this version of MiCADO but a future version will drop support. - -With the above changes in mind, Terraform will support v0.8.0 ADTs -which only include EC2 or Nova Compute nodes. This can be acheieved simply -by changing **interfaces** from *Occopus* to *Terraform*, though it -should be noted: - -- Terraform will auto-discover the EC2 endpoint based on the *region_name* - property, making the *endpoint* input no longer required. The *endpoint* - input can still be passed in to provide a custom endpoint. -- For some OpenStack configurations, Terraform requires a *network_name* - as well as *network_id* to correctly identify networks. 
The *network_name* - property can be passed in as **properties** or **inputs** - -Full Change List ----------------- - -Ansible Playbook -~~~~~~~~~~~~~~~~ +v0.9.0 (9 April 2020) +===================== -- Refactor tasks to be more component-specific -- Add tasks for configuring and installing Terraform +- Refactor playbook tasks to be more component-specific +- Add playbook tasks for configuring and installing Terraform - Use the Ansible *k8s module* for managing Kubernetes resources - Optimise cloud-init scripts by reducing *apt-get update* - Fix Master-Worker Ubuntu mismatch bug @@ -115,47 +17,22 @@ Ansible Playbook - Support updates of credentials on a deployed MiCADO Master - Add demo ADTs for Azure & GCE - Update QuickStart docs in README - -MiCADO Dashboard -~~~~~~~~~~~~~~~~ - - Bump Grafana to v6.6.2 - Bump Prometheus to v2.16.0 - Bump Kubernetes-Dashboard to v2.0.0 (rc7) - Hide Kubernetes Secrets on Kubernetes-Dashboard - -Policy Keeper -~~~~~~~~~~~~~ - - Refactor PK main loop to support multiple cloud orchestrators -- Add Terraform handler for scaling (up/down and dropping specific nodes) -- Switch to the *pykube* package instead of *kubernetes* - -TOSCASubmitter -~~~~~~~~~~~~~~ - -- Add the TerraformAdaptor -- Bump package versions - -- **OccopusAdaptor** - - - Discover cloud from TOSCA type and deprecate *interface_cloud* - - Rename *endpoint_cloud* to *endpoint* - - Support *insert* to cloud-init cloud-config - - Support authentication with OpenStack application credential - -- **PKAdaptor** - - - Pass orchestrator info to PK - -- **K8sAdaptor** - - - Lower Zorp Ingress reserved CPU and Memory - - Only deploy Zorp Ingress with matching policy - -Other -~~~~~ - +- Add Terraform handler to PK for scaling (up/down and dropping specific nodes) +- Switch to the *pykube* package in PK instead of *kubernetes* +- Add the TerraformAdaptor to the TOSCASubmitter +- Bump TOSCASubmitter package versions +- Discover cloud from TOSCA ADT type and deprecate *interface_cloud* +- Rename ADT compute property *endpoint_cloud* to *endpoint* +- Support *insert* in ADT to modify cloud-init cloud-config +- Support authentication with OpenStack application credential +- Pass orchestrator info to PK during PKAdaptor translation +- Lower reserved CPU and Memory for Zorp Ingress on workers +- Only deploy Zorp Ingress to workers with matching ADT policy - Bump Kubernetes to v1.18 - Bump Flannel to v0.12 - Bump containerd.io to v.1.2.13 diff --git a/user_documentation/rst/whats_new.rst b/user_documentation/rst/whats_new.rst new file mode 100644 index 0000000..d95e9fa --- /dev/null +++ b/user_documentation/rst/whats_new.rst @@ -0,0 +1,102 @@ +.. _whatsnew: + +What's New +********** + +**This section contains detailed upgrade notes for recent versions** + +v0.9.0 +====== + +Major Enhancements +------------------ + +Terraform for Cloud Orchestration +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +| Support for `Terraform `__ has been added to MiCADO! +| The TerraformAdaptor currently supports the following cloud resources: + +- OpenStack Nova Compute +- Amazon EC2 Compute +- Microsoft Azure Compute +- Google Compute Engine + +To use Terraform with MiCADO it must be **enabled** during deployment +of the MiCADO Master, and an appropriate **ADT** should be used. + +Improved Credential File Handling +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Cloud credentials are now stored in Kubernetes Secrets on the MiCADO Master. +Additionally, credentials on an already deployed MiCADO can now be updated +or modified using Ansible. 
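As a hedged sketch of that credential-update workflow, it might look like the
commands below; the credentials file name is hypothetical and should be
adjusted to your deployment:

::

    # edit the (optionally ansible-vault encrypted) credentials file
    ansible-vault edit credentials-cloud-api.yml

    # push the updated credentials to the running MiCADO Master
    ansible-playbook -i hosts.yml micado-master.yml --tags update-auth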
+
+Improved Node Contextualisation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It is now possible to **insert** contextualisation configurations earlier
+in the default *cloud-init #cloud-config* for worker nodes. This extends
+the existing **append** functionality to support configuration tasks which
+should precede the initialisation of the worker node (joining the Kubernetes
+cluster, bringing up the IPSec tunnel, etc.)
+
+Fixes
+-----
+
+Zorp Ingress
+~~~~~~~~~~~~
+
+The Zorp Ingress Controllers in v0.8.0 were incorrectly being deployed
+alongside *every* application, even if the policy did not call for it. This
+has now been resolved.
+
+Additionally, these controllers were requesting a large amount of CPU and
+Memory, which could limit scheduling on the node. Those requests have been
+lowered to more reasonable values.
+
+Different Versioned Workers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In previous versions of MiCADO, deployed worker nodes which did not match
+the Ubuntu version of the MiCADO Master would be unable to join the
+MiCADO cluster. This has now been resolved.
+
+Known Issues & Deprecations
+---------------------------
+
+IPSec and Dropped Network Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On some network configurations, for example where IPSec protocols ESP (50) and
+AH (51) are blocked, important network packets can get dropped in
+Master-Worker communications. This might be seen as Prometheus scrapes
+failing with the error **context deadline exceeded**, or Workers failing
+to join the Kubernetes cluster. The IPSec tunnel securing
+Master-Worker communications can be disabled by appending
+**ipsec stop** to **runcmd** in the default worker node
+*cloud-init #cloud-config*.
+
+Compute Node Inputs in ADTs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Occopus **input** *interface_cloud* has been deprecated and removed,
+as cloud discovery is now based on TOSCA type. It will continue to be
+supported (ignored) in this version of MiCADO but may raise warnings or
+errors in future versions.
+
+The **input** *endpoint_cloud* has been deprecated in favour of
+*endpoint*. Both Terraform and Occopus will support *endpoint_cloud*
+in this version of MiCADO but a future version will drop support.
+
+With the above changes in mind, Terraform will support v0.8.0 ADTs
+which only include EC2 or Nova Compute nodes. This can be achieved simply
+by changing **interfaces** from *Occopus* to *Terraform*, though it
+should be noted:
+
+- Terraform will auto-discover the EC2 endpoint based on the *region_name*
+  property, making the *endpoint* input no longer required. The *endpoint*
+  input can still be passed in to provide a custom endpoint.
+- For some OpenStack configurations, Terraform requires a *network_name*
+  as well as *network_id* to correctly identify networks. The *network_name*
+  property can be passed in as **properties** or **inputs**.

From afdbc86fc756e28afd3a82eaa0074cf82de96f40 Mon Sep 17 00:00:00 2001
From: jaydesl
Date: Thu, 9 Apr 2020 13:06:53 +0100
Subject: [PATCH 12/12] Add whats new to TOC

---
 user_documentation/rst/index.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/user_documentation/rst/index.rst b/user_documentation/rst/index.rst
index ce4ac21..30d48e5 100644
--- a/user_documentation/rst/index.rst
+++ b/user_documentation/rst/index.rst
@@ -30,3 +30,4 @@ In the current release, the status of the system can be inspected through the fo
    application_description
    tutorials
    release_notes
+   whats_new