diff --git a/user_documentation/Makefile b/user_documentation/Makefile
index 37b63a4..6a3f368 100644
--- a/user_documentation/Makefile
+++ b/user_documentation/Makefile
@@ -5,7 +5,7 @@
 SPHINXOPTS    =
 SPHINXBUILD   = sphinx-build
 SPHINXPROJ    = micado
-SOURCEDIR     = .
+SOURCEDIR     = ./rst
 BUILDDIR      = _build
 
 # Put it first so that "make" without argument is like "make help".
@@ -17,4 +17,5 @@ help:
 # Catch-all target: route all unknown targets to Sphinx using the new
 # "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
 %: Makefile
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
\ No newline at end of file
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
diff --git a/user_documentation/rst/application_description.rst b/user_documentation/rst/application_description.rst
index a132eea..78905a8 100644
--- a/user_documentation/rst/application_description.rst
+++ b/user_documentation/rst/application_description.rst
@@ -10,50 +10,53 @@ Application description has four main sections:
 
 * **tosca_definitions_version**: ``tosca_simple_yaml_1_0``.
 * **imports**: a list of urls pointing to custom TOSCA types. The default url points to the custom types defined for MiCADO. Please, do not modify this url.
 * **repositories**: docker repositories with their addresses.
-* **topology_template**: the main part of the application description to define 1) kubernetes deployments (of docker containers), 2) virtual machine (under the **node_templates** section) and 3) the scaling policy under the **policies** subsection. These sections will be detailed in subsections below.
+* **topology_template**: the part of the application description that defines 1) Kubernetes deployments (of Docker containers), 2) virtual machines (under the **node_templates** section) and 3) the scaling policy (under the **policies** subsection). These sections will be detailed in subsections below.
 
-Here is an overview example for the structure of the MiCADO application
-description:
+Here is an example for the structure of the MiCADO application description:
 
 ::
 
   tosca_definitions_version: tosca_simple_yaml_1_0
 
   imports:
-  - https://raw.githubusercontent.com/micado-scale/tosca/v0.6.0/micado_types.yaml
+  - https://raw.githubusercontent.com/micado-scale/tosca/v0.7.2/micado_types.yaml
 
   repositories:
    docker_hub: https://hub.docker.com/
 
  topology_template:
    node_templates:
-     YOUR_KUBERNETES_APP:
+     YOUR-KUBERNETES-APP:
       type: tosca.nodes.MiCADO.Container.Application.Docker
       properties:
         ...
       artifacts:
         ...
+      interfaces:
+        ...
     ...
-     YOUR_OTHER_KUBERNETES_APP:
+     YOUR-OTHER-KUBERNETES-APP:
       type: tosca.nodes.MiCADO.Container.Application.Docker
       properties:
         ...
       artifacts:
         ...
+      interfaces:
+        ...
      YOUR_VIRTUAL_MACHINE:
-      type: tosca.nodes.MiCADO.Occopus..Compute
+      type: tosca.nodes.MiCADO..Compute
       properties:
-        cloud:
-          interface_cloud: ...
-          endpoint_cloud: ...
+        ...
+      interfaces:
+        ...
       capabilities:
         host:
           properties:
             ...
 
   outputs:
    ports:
-     value: { get_attribute: [ YOUR_KUBERNETES_APP, port ]}
+     value: { get_attribute: [ YOUR-KUBERNETES-APP, port ]}
 
   policies:
   - scalability:
@@ -63,55 +66,92 @@ description:
       ...
  - scalability:
      type: tosca.policies.Scaling.MiCADO
-     targets: [ YOUR_KUBERNETES_APP ]
+     targets: [ YOUR-KUBERNETES-APP ]
      properties:
        ...
  - scalability:
      type: tosca.policies.Scaling.MiCADO
-     targets: [ YOUR_OTHER_KUBERNETES_APP ]
+     targets: [ YOUR-OTHER-KUBERNETES-APP ]
      properties:
        ...
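+To make the policy stubs above concrete, here is a minimal sketch of a single ``scalability`` policy. This is illustrative only: the subsections under **properties** (``constants``, ``queries``, ``alerts``) are explained in the scaling policy section below, every value shown is a placeholder, and the ``scaling_rule`` body is pseudo-code for "add a container while the alert fires", not a verbatim rule:
+
+::
+
+  policies:
+  - scalability:
+      type: tosca.policies.Scaling.MiCADO
+      targets: [ YOUR-KUBERNETES-APP ]
+      properties:
+        constants:
+          LOWER_THRESHOLD: 25
+          UPPER_THRESHOLD: 75
+        queries:
+          THELOAD: 'avg(rate(container_cpu_usage_seconds_total[60s]))*100'
+        alerts:
+        - alert: myalert
+          expr: 'THELOAD > {{UPPER_THRESHOLD}}'
+          for: 1m
+        scaling_rule: |
+          if myalert:
+            add_one_container()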
-Specification of Kubernetes Deployments (as Docker containers)
-==============================================================
+Specification of Docker containers (to be orchestrated by Kubernetes)
+=====================================================================
+
+**NOTE** Kubernetes does not allow for underscores in any resource names (read: TOSCA node names). Names must also begin and end with an alphanumeric character.
+
+Under the node_templates section you can define one or more Docker containers and choose to orchestrate them with Kubernetes
+(see **YOUR-KUBERNETES-APP**). Each app is described as a separate node with its own definition consisting of
+four main parts: type, properties, artifacts and interfaces.
+
+The **type** keyword for Docker containers must always be ``tosca.nodes.MiCADO.Container.Application.Docker``.
+
+The **properties** section will contain the options specific to the Docker container runtime.
+
+The **artifacts** section must define the Docker image (see **YOUR_DOCKER_IMAGE**).
+
+The **interfaces** section tells MiCADO how to orchestrate the container.
+
+The *create* field *inputs* will override the **workload** metadata & spec of a bare Kubernetes Deployment manifest.
+
+The *configure* field *inputs* will override the **pod** metadata & spec of that workload.
 
-Under the node_templates section you can define one or more apps to create a Kubernetes Deployment (using Docker compose nomenclature) (see **YOUR_KUBERNETES_APP**). Each app within the Kubernetes deployment gets its own definition consisting of three main parts: type, properties and artifacts. The value of the **type** keyword for the Kubernetes Deployment of a Docker container must always be ``tosca.nodes.MiCADO.Container.Application.Docker``. The **properties** section will contain most of the setting of the app to be deployed using Kubernetes. Under the **artifacts** section the Docker image (see **YOUR_DOCKER_IMAGE**) must be defined.
+**A stripped-back definition of a node_template looks like this:**
 
 ::
 
   topology_template:
    node_templates:
-     YOUR_KUBERNETES_APP:
+     YOUR-KUBERNETES-APP:
       type: tosca.nodes.MiCADO.Container.Application.Docker
       properties:
-        ...
+        name:
+        command:
+        args:
+        env:
+        ...
       artifacts:
         image:
           type: tosca.artifacts.Deployment.Image.Container.Docker
           file: YOUR_DOCKER_IMAGE
           repository: docker_hub
+      interfaces:
+        Kubernetes:
+          create:
+            implementation: image
+            inputs:
+              ...
+          configure:
+            inputs:
+              ...
 
   outputs:
    ports:
-     value: { get_attribute: [ YOUR_KUBERNETES_APP, port ]}
+     value: { get_attribute: [ YOUR-KUBERNETES-APP, port ]}
 
-The fields under the **properties** section of the Kubernetes app are derived from a docker-compose file and converted using Kompose. You can find additional information about the properties in the `docker compose documentation ` and see what `Kompose supports here `. The syntax of the property values is currently the same as in docker-compose
-file. The Compose properties will be translated into Kubernetes specs on deployment.
+The fields under the **properties** section of the Kubernetes app are a collection of options for configuring Docker containers. The translator understands both Docker-Compose style naming and Kubernetes style naming, though
+the Kubernetes style is recommended. You can find additional information about properties in the
+`translator documentation `__. These properties will be translated
+into Kubernetes manifests on deployment.
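+As a worked example, here is the stripped-back template above with its placeholders filled in. Everything in it is illustrative (the image, port numbers and input values are stand-ins, not required values); the **ports** keywords used here are explained in the next subsection:
+
+::
+
+  topology_template:
+   node_templates:
+     my-web-app:
+      type: tosca.nodes.MiCADO.Container.Application.Docker
+      properties:
+        name: my-web-app
+        env:
+        - name: GREETING
+          value: hello
+        ports:
+        - port: 8080
+          nodePort: 30080
+      artifacts:
+        image:
+          type: tosca.artifacts.Deployment.Image.Container.Docker
+          file: nginx:latest
+          repository: docker_hub
+      interfaces:
+        Kubernetes:
+          create:
+            implementation: image
+            inputs:
+              kind: Deployment
+          configure:
+            inputs:
+              restartPolicy: Always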
-Under the **properties** section of an app (see **YOUR_KUBERNETES_APP**) you can specify the following keywords.:
+Under the **properties** section of an app (see **YOUR-KUBERNETES-APP**) here are a few common keywords:
 
-* **command**: command line expression to be executed by the container.
-* **deploy**: Orchestrated deployment options. CPU reservations should be set 0.1 lower than in Swarm (0.9 == 1.0)
-* **entrypoint**: override the default entrypoint of container.
-* **environment**: map of all required environment variables.
-* **expose**: expose ports without publishing them to the host machine.
-* **volumes**: list of bind mount (host-container) volumes for the service in the format */source/etc/data:/target/etc/data*
-* **ports**: list of published ports to the host machine. **Unlike Docker** this does not make the container accessible from the outside.
-* **labels**: map of metadata like Docker labels and/or Kubernetes instructions (see NOTE).
+* **name**: name for the container (defaults to the TOSCA node name)
+* **command**: override the default entrypoint of the container (the Docker *ENTRYPOINT*)
+* **args**: override the default arguments of the container (the Docker *CMD*)
+* **env**: list of *name:* & *value:* pairs for all required environment variables
+* **resource.requests.cpu**: CPU reservation; should be set 100m lower than the maximum (900m == 1000m)
+* **ports**: list of ports published to the host machine. You can specify these keywords in the style of a `Kubernetes Service `__:
 
-*NOTE*
+  * **targetPort**: the port to target (assumes port if not specified)
+  * **port**: the port to publish (assumes targetPort if not specified)
+  * **name**: the name of this port in the service (will be generated if not specified)
+  * **protocol**: the protocol for the port (defaults to: TCP)
+  * **nodePort**: the port (30000-32767) to expose on the host (this will create a nodePort Service unless type is explicitly set below)
+  * **type**: the type of service for this port (defaults to: ClusterIP except if nodePort is defined above)
+  * **clusterIP**: the desired (internal) IP (10.0.0.0/24) for this service (defaults to next available)
+  * **metadata**: service metadata, giving the option to set a name for the service. Explicit naming can be used to group different ports together (default grouping is by type)
 
-* **labels** can also be used to pass instructions to Kubernetes (full list: http://kompose.io/user-guide/#labels)
-**kompose.service.type: 'nodeport'** will make the container accessible at *:port* where port can be found on the Kubernetes Dashboard under *Discovery and load balancing > Services > my_app > Internal endpoints*
 
 Under the **artifacts** section you can define the docker image for the
 kubernetes app. Three fields must be defined:
@@ -120,9 +160,28 @@ kubernetes app. Three fields must be defined:
 * **file**: docker image for the kubernetes app (e.g. sztakilpds/cqueue_frontend:latest )
 * **repository**: name of the repository where the image is located. The name used here (e.g. docker_hub), must be defined at the top of the description under the **repositories** section.
 
-Kubernetes networking is inherently different to the approach taken by Docker. This is a complex subject which is worth a read: https://kubernetes.io/docs/concepts/cluster-administration/networking/
+Under the **interfaces** section you can define orchestrator-specific options; here we use the key **Kubernetes:**
+
+* **create**: *this key tells MiCADO to create a workload (Deployment/DaemonSet/Job/Pod etc...) for this container*
+
+  * **implementation**: this should always point to your image artifact
+  * **inputs**: top-level workload and workload spec options follow here... For some examples, see the `translator documentation `__
+
+    * **kind:** override the workload type (defaults to Deployment)
+    * **strategy.type:** change to Recreate to kill pods then update (defaults to RollingUpdate)
+
+* **configure**: *this key configures the Pod for this workload*
 
-Since every pod gets its own IP, which any pod can by default use to communicate with any other pod, this means there is no network to explicitly define. If **ports** is defined in the definition above, pods can reach each other over CoreDNS via their hostname (container name).
+  * **inputs**: `PodTemplateSpec `__ options follow here... For example
+
+    * **restartPolicy:** change the restart policy (defaults to Always)
+
+**A word on networking in Kubernetes**
+
+Kubernetes networking is inherently different from the approach taken by Docker/Swarm.
+This is a complex subject which is worth a read: https://kubernetes.io/docs/concepts/cluster-administration/networking/ .
+Since every pod gets its own IP, and any pod can by default use that IP to communicate with any other pod, there
+is no network to explicitly define. If the **ports** keyword is defined in the definition above, pods can reach each other over CoreDNS via their hostnames (container names).
 
 Under the **outputs** section (this key is **NOT** nested within *node_templates*)
 you can define an output to retrieve from Kubernetes via the adaptor. Currently, only port info is obtainable.
 
@@ -130,20 +189,45 @@
 Specification of the Virtual Machine
 ====================================
 
-The collection of docker containers (kubernetes applications) specified in the previous section is orchestrated by Kubernetes. This section introduces how the parameters of the virtual machine can be configured which will be hosts the Kubernetes worker node. During operation MiCADO will instantiate as many virtual machines with the parameters defined here as required during scaling. MiCADO currently supports four different cloud interfaces: CloudSigma, CloudBroker, EC2, Nova. The following ports and protocols should be enabled on the virtual machine:
+The collection of docker containers (kubernetes applications) specified in the previous section is orchestrated by Kubernetes. This section introduces how to configure the parameters of the virtual machine which will host the Kubernetes worker node. During operation MiCADO will instantiate as many virtual machines with the parameters defined here as required during scaling. MiCADO currently supports four different cloud interfaces: CloudSigma, CloudBroker, EC2, Nova.
+
+.. _workerfirewallconfig:
+
+The following ports and protocols should be enabled on the virtual machine acting as a MiCADO worker:
 
 ::
 
-   ICMP
    TCP: 22,2377,7946,8300,8301,8302,8500,8600,9100,9200
    UDP: 4789,7946,8301,8302,8600
 
 The following subsections details how to configure them.
 
+General
+-------
+
+The **capabilities** sections for all virtual machine definitions that follow are identical and are **ENTIRELY OPTIONAL**.
+They are filled with metadata to support human readability:
+
+* **num_cpus** under *host* is a readable string specifying the clock speed of the instance type
+* **mem_size** under *host* is a readable string specifying the RAM of the instance type
+* **type** under *os* is a readable string specifying the operating system type of the image
+* **distribution** under *os* is a readable string specifying the OS distro of the image
+* **version** under *os* is a readable string specifying the OS version of the image
+
+The **interfaces** sections of all virtual machine definitions that follow are **REQUIRED**, and allow you to provide orchestrator-specific inputs; in the examples below we use **Occopus**.
+
+* **create**: *this key tells MiCADO to create the VM using Occopus*
+
+  * **inputs**: Occopus-specific settings follow here
+
+    * **interface_cloud:** tells Occopus which cloud to interface with
+    * **endpoint_cloud:** tells Occopus the API endpoint of the cloud
+
+
 CloudSigma
 ----------
 
-To instantiate MiCADO workers on CloudSigma, please use the template below. MiCADO **requires** num_cpus, mem_size, vnc_password, libdrive_id and public_key_id to instantiate VM on *CloudSigma*.
+To instantiate MiCADO workers on CloudSigma, please use the template below. MiCADO **requires** num_cpus, mem_size, vnc_password, libdrive_id, public_key_id and firewall_policy to instantiate a VM on *CloudSigma*.
 
 ::
 
@@ -151,26 +235,43 @@ To instantiate MiCADO workers on CloudSigma, please use the template below. MiCA
   node_templates:
     worker_node:
       type: tosca.nodes.MiCADO.Occopus.CloudSigma.Compute
-      properties:
-        cloud:
-          interface_cloud: cloudsigma
-          endpoint_cloud: ADD_YOUR_ENDPOINT (e.g for cloudsigma https://zrh.cloudsigma.com/api/2.0 )
-      capabilities:
-        host:
-          properties:
-            num_cpus: ADD_NUM_CPUS_FREQ (e.g. 4096)
-            mem_size: ADD_MEM_SIZE (e.g. 4294967296)
-            vnc_password: ADD_YOUR_PW (e.g. secret)
-            libdrive_id: ADD_YOUR_ID_HERE (eg. 87ce928e-e0bc-4cab-9502-514e523783e3)
-            public_key_id: ADD_YOUR_ID_HERE (e.g. d7c0f1ee-40df-4029-8d95-ec35b34dae1e)
-            firewall_policy: ADD_YOUR_ID_HERE (e.g. fd97e326-83c8-44d8-90f7-0a19110f3c9d)
-
-* **num_cpu** is the speed of CPU (e.g. 4096) in terms of MHz of your VM to be instantiated. The CPU frequency required to be between 250 and 100000
+      properties:
+        num_cpus: ADD_NUM_CPUS_FREQ (e.g. 4096)
+        mem_size: ADD_MEM_SIZE (e.g. 4294967296)
+        vnc_password: ADD_YOUR_PW (e.g. secret)
+        libdrive_id: ADD_YOUR_ID_HERE (eg. 87ce928e-e0bc-4cab-9502-514e523783e3)
+        public_key_id: ADD_YOUR_ID_HERE (e.g. d7c0f1ee-40df-4029-8d95-ec35b34dae1e)
+        nics:
+        - firewall_policy: ADD_YOUR_FIREWALL_POLICY_ID_HERE (e.g. fd97e326-83c8-44d8-90f7-0a19110f3c9d)
+          ip_v4_conf:
+            conf: dhcp
+      capabilities:
+        # OPTIONAL METADATA
+        host:
+          properties:
+            num_cpus: 2GHz
+            mem_size: 2GB
+        os:
+          properties:
+            type: linux
+            distribution: ubuntu
+            version: 16.04
+      interfaces:
+        Occopus:
+          create:
+            inputs:
+              interface_cloud: cloudsigma
+              endpoint_cloud: ADD_YOUR_ENDPOINT (e.g for cloudsigma https://zrh.cloudsigma.com/api/2.0 )
+
+Under the **properties** section of a CloudSigma virtual machine definition these inputs are available:
+
+* **num_cpus** is the CPU speed of the VM to be instantiated, in terms of MHz (e.g. 4096). The CPU frequency is required to be between 250 and 100000
 * **mem_size** is the amount of RAM (e.g. 4294967296) in terms of bytes to be allocated for your VM. The memory required to be between 268435456 and 137438953472
 * **vnc_password** set the password for your VNC session (e.g. secret).
 * **libdrive_id** is the image id (e.g. 87ce928e-e0bc-4cab-9502-514e523783e3) on your CloudSigma cloud. Select an image containing a base os installation with cloud-init support!
 * **public_key_id** specifies the keypairs (e.g. d7c0f1ee-40df-4029-8d95-ec35b34dae1e) to be assigned to your VM.
-* **firewall_policy** optionally specifies network policies (you can define multiple security groups in the form of a list, e.g. fd97e326-83c8-44d8-90f7-0a19110f3c9d) of your VM.
+* **nics[.firewall_policy | .ip_v4_conf.conf]** specifies network policies (you can define multiple security groups, in the form of a list, for your VM).
+
 CloudBroker
 -----------
 
@@ -183,17 +284,30 @@ To instantiate MiCADO workers on CloudBroker, please use the template below. MiC
   node_templates:
     worker_node:
       type: tosca.nodes.MiCADO.Occopus.CloudBroker.Compute
-      properties:
-        cloud:
-          interface_cloud: cloudbroker
-          endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://cola-prototype.cloudbroker.com )
-      capabilities:
-        host:
-          properties:
-            deployment_id: ADD_YOUR_ID_HERE (e.g. e7491688-599d-4344-95ef-aff79a60890e)
-            instance_type_id: ADD_YOUR_ID_HERE (e.g. 9b2028be-9287-4bf6-bbfe-bcbc92f065c0)
-            key_pair_id: ADD_YOUR_ID_HERE (e.g. d865f75f-d32b-4444-9fbb-3332bcedeb75)
-            opened_port: ADD_YOUR_PORTS_HERE (e.g. '22,2377,7946,8300,8301,8302,8500,8600,9100,9200,4789')
+      properties:
+        deployment_id: ADD_YOUR_ID_HERE (e.g. e7491688-599d-4344-95ef-aff79a60890e)
+        instance_type_id: ADD_YOUR_ID_HERE (e.g. 9b2028be-9287-4bf6-bbfe-bcbc92f065c0)
+        key_pair_id: ADD_YOUR_ID_HERE (e.g. d865f75f-d32b-4444-9fbb-3332bcedeb75)
+        opened_port: ADD_YOUR_PORTS_HERE (e.g. '22,2377,7946,8300,8301,8302,8500,8600,9100,9200,4789')
+      capabilities:
+        # OPTIONAL METADATA
+        host:
+          properties:
+            num_cpus: 2GHz
+            mem_size: 2GB
+        os:
+          properties:
+            type: linux
+            distribution: ubuntu
+            version: 16.04
+      interfaces:
+        Occopus:
+          create:
+            inputs:
+              interface_cloud: cloudbroker
+              endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://cola-prototype.cloudbroker.com )
+
+Under the **properties** section of a CloudBroker virtual machine definition these inputs are available:
 
 * **deployment_id** is the id of a preregistered deployment in CloudBroker referring to a cloud, image, region, etc. Make sure the image contains a base OS (preferably Ubuntu) installation with cloud-init support! The id is the UUID of the deployment which can be seen in the address bar of your browser when inspecting the details of the deployment.
 * **instance_type_id** is the id of a preregistered instance type in CloudBroker referring to the capacity of the virtual machine to be deployed. The id is the UUID of the instance type which can be seen in the address bar of your browser when inspecting the details of the instance type.
@@ -212,15 +326,28 @@ To instantiate MiCADO workers on a cloud through EC2 interface, please use the t
     worker_node:
       type: tosca.nodes.MiCADO.Occopus.EC2.Compute
       properties:
-        cloud:
-          interface_cloud: ec2
-          endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://ec2.eu-west-1.amazonaws.com )
-      capabilities:
-        host:
-          properties:
         region_name: ADD_YOUR_REGION_NAME_HERE (e.g. eu-west-1)
         image_id: ADD_YOUR_ID_HERE (e.g. ami-12345678)
         instance_type: ADD_YOUR_INSTANCE_TYPE_HERE (e.g. t1.small)
+      capabilities:
+        # OPTIONAL METADATA
+        host:
+          properties:
+            num_cpus: 2GHz
+            mem_size: 2GB
+        os:
+          properties:
+            type: linux
+            distribution: ubuntu
+            version: 16.04
+      interfaces:
+        Occopus:
+          create:
+            inputs:
+              interface_cloud: ec2
+              endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://ec2.eu-west-1.amazonaws.com)
+
+Under the **properties** section of an EC2 virtual machine definition these inputs are available:
 
 * **region_name** is the region name within an EC2 cloud (e.g. eu-west-1).
 * **image_id** is the image id (e.g. ami-12345678) on your EC2 cloud. Select an image containing a base os installation with cloud-init support!
@@ -241,12 +368,6 @@ To instantiate MiCADO workers on a cloud through Nova interface, please use the
     worker_node:
       type: tosca.nodes.MiCADO.Occopus.Nova.Compute
       properties:
-        cloud:
-          interface_cloud: nova
-          endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3)
-      capabilities:
-        host:
-          properties:
         image_id: ADD_YOUR_ID_HERE (e.g. d4f4e496-031a-4f49-b034-f8dafe28e01c)
         flavor_name: ADD_YOUR_ID_HERE (e.g. 3)
         project_id: ADD_YOUR_ID_HERE (e.g. a678d20e71cb4b9f812a31e5f3eb63b0)
@@ -254,6 +375,25 @@ To instantiate MiCADO workers on a cloud through Nova interface, please use the
         key_name: ADD_YOUR_KEY_HERE (e.g. keyname)
         security_groups:
           - ADD_YOUR_ID_HERE (e.g. d509348f-21f1-4723-9475-0cf749e05c33)
+      capabilities:
+        # OPTIONAL METADATA
+        host:
+          properties:
+            num_cpus: 2GHz
+            mem_size: 2GB
+        os:
+          properties:
+            type: linux
+            distribution: ubuntu
+            version: 16.04
+      interfaces:
+        Occopus:
+          create:
+            inputs:
+              interface_cloud: nova
+              endpoint_cloud: ADD_YOUR_ENDPOINT (e.g https://sztaki.cloud.mta.hu:5000/v3)
+
+Under the **properties** section of a Nova virtual machine definition these inputs are available:
 
 * **project_id** is the id of project you would like to use on your target Nova cloud.
 * **image_id** is the image id on your Nova cloud. Select an image containing a base os installation with cloud-init support!
@@ -272,11 +412,11 @@ To utilize the autoscaling functionality of MiCADO, scaling policies can be defi
 
   topology_template:
    node_templates:
-     YOUR_KUBERNETES_APP:
+     YOUR-KUBERNETES-APP:
       type: tosca.nodes.MiCADO.Container.Application.Docker
       ...
     ...
-     YOUR_OTHER_KUBERNETES_APP:
+     YOUR-OTHER-KUBERNETES-APP:
       type: tosca.nodes.MiCADO.Container.Application.Docker
       ...
      YOUR_VIRTUAL_MACHINE:
@@ -291,12 +431,12 @@ To utilize the autoscaling functionality of MiCADO, scaling policies can be defi
       ...
  - scalability:
      type: tosca.policies.Scaling.MiCADO
-     targets: [ YOUR_KUBERNETES_APP ]
+     targets: [ YOUR-KUBERNETES-APP ]
      properties:
        ...
  - scalability:
      type: tosca.policies.Scaling.MiCADO
-     targets: [ YOUR_OTHER_KUBERNETES_APP ]
+     targets: [ YOUR-OTHER-KUBERNETES-APP ]
      properties:
        ...
@@ -334,7 +474,7 @@ The **properties** subsection defines the scaling policy itself. For monitoring
 
 The subsections have the following roles:
 
-* **sources** supports the dynamic attachment of an external exporter by specifying a list endpoints of exporters (see example above). Each item found under this subsection is configured under Prometheus to start collecting the information provided/exported by the exporters. Once done, the values of the parameters provided by the exporters become available. **NEW** MiCADO now supports Kubernetes service discovery - to define such a source, simply pass the name of the app as defined in TOSCA and do not specify any port number
+* **sources** supports the dynamic attachment of an external exporter by specifying a list of endpoints of exporters (see example above). Each item found under this subsection is configured under Prometheus to start collecting the information provided/exported by the exporters. Once done, the values of the parameters provided by the exporters become available. MiCADO supports Kubernetes service discovery; to define such a source, simply pass the name of the app as defined in TOSCA and do not specify any port number
 * **constants** subsection is used to predefined fixed parameters. Values associated to the parameters can be referred by the scaling rule as variable (see ``LOWER_THRESHOLD`` above) or in any other sections referred as Jinja2 variable (see ``MYEXPR`` above).
 * **queries** contains the list of Prometheus query expressions to be executed and their variable name associated (see ``THELOAD`` above)
 * **alerts** subsection enables the utilisation of the alerting system of Prometheus. Each alert defined here is registered under Prometheus and fired alerts are represented with a variable of their name set to True during the evaluation of the scaling rule (see ``myalert`` above).
diff --git a/user_documentation/rst/deployment.rst b/user_documentation/rst/deployment.rst
index 532a9b1..75450f7 100644
--- a/user_documentation/rst/deployment.rst
+++ b/user_documentation/rst/deployment.rst
@@ -23,12 +23,15 @@ For cloud interfaces supported by MiCADO:
 For the MiCADO master:
 
 * Ubuntu 16.04
-* 2GHz CPU & 2GB RAM
+* (Minimum) 2GHz CPU & 2GB RAM
+* (Recommended) 2GHz CPU & 4GB RAM
 
 For the host where the Ansible playbook is executed (differs depending on local or remote):
 
 * Ansible 2.4 or greater
 * curl
+* jq (to pretty-format API responses)
+* wrk (to load test nginx & wordpress demonstrators)
 
 Ansible
 -------
@@ -58,6 +61,28 @@ To install curl on Ubuntu, use this command:
 
 To install curl on other operating systems follow the `official installation guide `__.
 
+jq
+----
+
+To install jq on Ubuntu, use this command:
+
+::
+
+   sudo apt-get install jq
+
+To install jq on other operating systems follow the `official installation guide `__.
+
+wrk
+----
+
+To install wrk on Ubuntu, use this command:
+
+::
+
+   sudo apt-get install wrk
+
+To install wrk on other operating systems check the sidebar on the `github wiki `__.
+
 Installation
 ============
 
@@ -68,14 +93,14 @@ Step 1: Download the ansible playbook.
 
 ::
 
-   curl --output ansible-micado-0.7.1.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.7.1/ansible-micado-0.7.1.tar.gz
-   tar -zxvf ansible-micado-0.7.1.tar.gz
-   cd ansible-micado-0.7.1/
+   curl --output ansible-micado-0.7.2.tar.gz -L https://github.com/micado-scale/ansible-micado/releases/download/v0.7.2/ansible-micado-0.7.2.tar.gz
+   tar -zxvf ansible-micado-0.7.2.tar.gz
+   cd ansible-micado-0.7.2/
 
 Step 2: Specify cloud credential for instantiating MiCADO workers.
 ------------------------------------------------------------------
 
-MiCADO master will use this credential to start/stop VM instances (MiCADO workers) to host the application and to realize scaling. Credentials here should belong to the same cloud as where MiCADO master is running. We recommend making a copy of our predefined template and edit it. MiCADO expects the credential in a file, called credentials-cloud-api.yml before deployment. Please, do not modify the structure of the template!
+MiCADO master will use this credential against the cloud API to start/stop VM instances (MiCADO workers) to host the application and to realize scaling. Credentials here should belong to the same cloud as where MiCADO master is running. We recommend making a copy of our predefined template and editing it. MiCADO expects the credentials in a file called credentials-cloud-api.yml before deployment. Please, do not modify the structure of the template!
 
@@ -112,7 +137,7 @@ Specify the provisioning method for the x509 keypair used for TLS encryption of
 
 * The 'self-signed' option generates a new keypair with the specified hostname as subject (or 'micado-master' if omitted).
 * The 'user-supplied' option lets the user add the keypair as plain multiline strings (in unencrypted format) in the ansible_user_data.yml file under the 'cert' and 'key' subkeys respectively.
 
-Specify the default username and password for the administrative we user in the the ``authentication`` subtree.
+Specify the default username and password for the administrative user in the ``authentication`` subtree.
 
 Optionally you may use the Ansible Vault mechanism as described in Step 2 to protect the confidentiality and integrity of this file as well.
 
@@ -163,7 +188,7 @@ We recommend making a copy of our predefined template and edit it. Use the templ
 
 Edit the ``hosts`` file to set ansible variables for MiCADO master machine. Update the following parameters:
 
-* **ansible_host**: specifies the publicly reachable ip address of MiCADO master. Set the public or floating ip of the master regardless the deployment method is remote or local. The ip specified here is used by the Dashboard for webpage redirection as well
+* **ansible_host**: specifies the publicly reachable ip address of MiCADO master. Set the public or floating ``IP`` of the master regardless of whether the deployment method is remote or local. The IP specified here is used by the Dashboard for webpage redirection as well
 * **ansible_connection**: specifies how the target host can be reached. Use "ssh" for remote or "local" for local installation. In case of remote installation, make sure you can authenticate yourself against MiCADO master. We recommend to deploy your public ssh key on MiCADO master before starting the deployment
 * **ansible_user**: specifies the name of your sudoer account, defaults to "ubuntu"
 * **ansible_become**: specifies if account change is needed to become root, defaults to "True"
@@ -204,5 +229,6 @@ Accessing user-defined service
 
 In case your application contains a container exposing a service, you will have to ensure the following to access it.
 
-* First set **kompose.service.type: 'nodeport'** in the TOSCA description of your app. More information on this in the section of the documentation titled **application description**
-* The container will be accessible at *:* . Both can be found on the Kubernetes Dashboard, with **IP** under *Nodes > my_micado_vm > Addresses* and with **port** under *Discovery and load balancing > Services > my_app > Internal endpoints*
+* First set **nodePort: xxxxx** (where xxxxx is a port in the range 30000-32767) in the **properties: ports:** TOSCA description of your docker container. More information on this is in the :ref:`applicationdescription`
+* The container will be accessible at *<IP>:<port>* . Both the IP and the port values can be extracted from the Kubernetes Dashboard (in case you forget them). The **IP** can be found under the *Nodes > my_micado_vm > Addresses* menu, while the **port** can be found under the *Discovery and load balancing > Services > my_app > Internal endpoints* menu.
+
diff --git a/user_documentation/rst/index.rst b/user_documentation/rst/index.rst
index 13f4955..24f9713 100644
--- a/user_documentation/rst/index.rst
+++ b/user_documentation/rst/index.rst
@@ -1,7 +1,7 @@
 MiCADO - autoscaling framework for Kubernetes Deployments in the Cloud
-###########################################################
+######################################################################
 
-This software is developed by the `COLA project `__.
+This software is developed by the `COLA project `__ and is hosted at the `MiCADO-scale github repository `__. Please, visit the `MiCADO homepage `__ for general information about the product.
 
 Introduction
 ************
diff --git a/user_documentation/rst/release_notes.rst b/user_documentation/rst/release_notes.rst
index 06f9b13..da78d63 100644
--- a/user_documentation/rst/release_notes.rst
+++ b/user_documentation/rst/release_notes.rst
@@ -1,7 +1,32 @@
 Release Notes
 *************
 
-**v0.7.1 (10 Jan 2018)**
+**v0.7.2 (25 Feb 2019)**
+
+- add checking for minimal memory on micado master at deployment
+- support private networks on cloudsigma
+- support user-defined contextualisation
+- support re-use across other container & cloud orchestrators in ADT
+- new TOSCA to Kubernetes Manifest Adaptor
+- add support for creating DaemonSets, Jobs, StatefulSets (with limited functionality) and standalone Pods
+- add support for creating PersistentVolumes & PVClaims
+- add support for specifying custom service details (NodePort, ClusterIP, etc.)
+- minor improvements to Grafana dashboard
+- support asynchronous calls through TOSCASubmitter API
+- fix kubectl error on MiCADO Master restart
+- fix TOSCASubmitter rollback on errors
+- fix TOSCASubmitter status & output display
+- add support for encrypting master-worker communication
+- automatically provision and revoke security credentials for worker nodes
+- update default MTU to 1400 to ensure compatibility with OpenStack and AWS
+- add Credential Store security enabler
+- add Security Policy Manager security enabler
+- add Image Integrity Verifier Security enabler
+- add Crypto Engine security enabler
+- add support for kubernetes secrets
+- reimplement Credential Manager using the flask-user library
+
+**v0.7.1 (10 Jan 2019)**
 
 - Fix: Add SKIP back to Dashboard (defaults changed in v1.13.1)
 - Fix: URL not found for Kubernetes manifest files
diff --git a/user_documentation/rst/rest_api.rst b/user_documentation/rst/rest_api.rst
index 970c6b4..e643bb8 100644
--- a/user_documentation/rst/rest_api.rst
+++ b/user_documentation/rst/rest_api.rst
@@ -5,41 +5,29 @@ REST API
 
 MiCADO has a TOSCA compliant submitter to submit, update, list and remove MiCADO applications. The submitter exposes the following REST API:
 
-* To launch an application specified by a TOSCA description stored locally, use this command:
+* To launch an application specified by a TOSCA description stored locally (with an option in bold to specify an ID):
 
 ::
 
-   curl --insecure -s -F file=@[path to the TOSCA description] -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/file/
+   curl --insecure -s -F file=@[path to the TOSCA template] **-F id=[APPLICATION_ID]** -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/
 
-* To launch an application specified by a TOSCA description stored locally and specify an application id, use this command:
-
-::
-
-   curl --insecure -s -F file=@[path to the TOSCA description] -F id=[APPLICATION_ID] -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/file/
-
-* To launch an application specified by a TOSCA description stored behind a url, use this command:
-
-::
-
-   curl --insecure -s -d input="[url to TOSCA description]" -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/url/
-
-* To launch an application specified by a TOSCA description stored behind an url and specify an application id, use this command:
+* To launch an application specified by a TOSCA description stored behind a url (with an option in bold to specify an ID):
 
 ::
 
-   curl --insecure -s -d input="[url to TOSCA description]" -d id=[ID] -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/url/
+   curl --insecure -s -d input="[url to TOSCA description]" **-d id=[APPLICATION_ID]** -X POST https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/launch/
 
 * To update a running MiCADO application using a TOSCA description stored locally, use this command:
 
 ::
 
-   curl --insecure -s -F file=@"[path to the TOSCA description]" -X PUT https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/udpate/file/[APPLICATION_ID]
+   curl --insecure -s -F file=@"[path to the TOSCA description]" -X PUT https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/update/[APPLICATION_ID]
 
 * To update a running MiCADO application using a TOSCA description stored behind a url, use this command:
 
 ::
 
-   curl --insecure -s -d input="[url to TOSCA description]" -X PUT https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/udpate/file/[APPLICATION_ID]
+   curl --insecure -s -d input="[url to TOSCA description]" -X PUT https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/update/[APPLICATION_ID]
 
 * To undeploy a running MiCADO application, use this command:
 
@@ -57,19 +45,25 @@ MiCADO has a TOSCA compliant submitter to submit, update, list and remove MiCADO
 
 ::
 
-   curl --insecure -s -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/[APPLICATION_ID]
+   curl --insecure -s -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/[APPLICATION_ID]/status
+
+* To query the full execution status of MiCADO, use this command:
+
+::
+
+   curl --insecure -s -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/info_threads
 
 * To query the services of a running MiCADO application, use this command:
 
 ::
 
-   curl --insecure -s -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/[APPLICATION_ID]/services
+   curl --insecure -s -d query='services' -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/query/[APPLICATION_ID]
 
 * To query the nodes hosting a running MiCADO application, use this command:
 
 ::
 
-   curl --insecure -s -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/[APPLICATION_ID]/nodes
+   curl --insecure -s -d query='nodes' -X GET https://[username]:[password]@[IP]:[port]/toscasubmitter/v1.0/app/query/[APPLICATION_ID]
diff --git a/user_documentation/rst/tutorials.rst b/user_documentation/rst/tutorials.rst
index 3413366..74d0acf 100644
--- a/user_documentation/rst/tutorials.rst
+++ b/user_documentation/rst/tutorials.rst
@@ -3,7 +3,7 @@
 Tutorials
 *********
 
-You can find test application(s) under the subdirectories of the ‘testing’ directory. The current **stressng** test can be configured for use with CloudSigma, AWS EC2, OpenStack Nova and deployments via CloudBroker. The current **cqueue** test is configured for CloudSigma.
+You can find some demo applications under the subdirectories of the ‘testing’ directory in the downloaded (and unzipped) installation package of MiCADO.
 
 stressng
 ========
@@ -14,7 +14,11 @@ This application contains a single service, performing a constant CPU load. The
 
 * Step1: make a copy of the TOSCA file which is appropriate for your cloud - ``stressng_.yaml`` - and name it ``stressng.yaml`` (ie. by issuing the command ``cp stressng_cloudsigma.yaml stressng.yaml``)
 * Step2: fill in the requested fields beginning with ``ADD_YOUR_...`` . These will differ depending on which cloud you are using.
+
+  **Important:** Make sure you create the appropriate firewall policy for the MiCADO workers as described :ref:`here <workerfirewallconfig>`!
+
 * In CloudSigma, for example, the ``libdrive_id`` , ``public_key_id`` and ``firewall_policy`` fields must be completed. Without these, CloudSigma does not have enough information to launch your worker nodes. All information is found on the CloudSigma Web UI. ``libdrive_id`` is the long alphanumeric string in the URL when a drive is selected under “Storage/Library”. ``public_key_id`` is under the “Access & Security/Keys Management” menu as **Uuid**. ``firewall_policy`` can be found when selecting a rule defined under the “Networking/Policies” menu. The following ports must be opened for MiCADO workers: *all inbound connections from MiCADO master*
+
 * Step3: Update the parameter file, called ``_settings``. You need the ip address for the MiCADO master and should name the application by setting the APP_ID ***the application ID can not contain any underscores ( _ )** You should also change the SSL user/password/port information if they are different from the default.
 * Step4: run ``1-submit-tosca-stressng.sh`` to create the minimum number of MiCADO worker nodes and to deploy the Kubernetes Deployment including the stressng app defined in the ``stressng.yaml`` TOSCA description.
 * Step4a: run ``2-list-apps.sh`` to see currently running applications and their IDs
@@ -37,6 +41,8 @@ This application demonstrates a deadline policy using CQueue. CQueue provides a
 
   - Replace each ‘cqueue.server.ip.address’ string with the real ip of CQueue server.
   - Update each ‘ADD_YOUR_ID_HERE’ string with the proper value retrieved under your CloudSigma account.
+
+  **Important:** Make sure you create the appropriate firewall policy for the MiCADO workers as described :ref:`here <workerfirewallconfig>`!
+
 * Step5: Run ``./2-get_date_in_epoch_plus_seconds.sh 600`` to calculate the unix timestamp representing the deadline by which the items (containers) must be finished. Take the value from the last line of the output produced by the script. The value is 600 seconds from now.
 * Step6: Edit the TOSCA description file, called ``micado-cqworker.yaml``.
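+
+  The deadline computed in Step5 is just "now plus 600 seconds". If you prefer to compute the
+  timestamp by hand, the following gives the same result on systems with GNU date (the 600 is
+  illustrative; use whatever headroom your items need):
+
+  ::
+
+     # unix timestamp 600 seconds (10 minutes) from now
+     date --date="+600 seconds" +%s
+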
@@ -58,9 +64,38 @@ This application deploys a http server with nginx. The container features a buil
 
 * Step1: make a copy of the TOSCA file which is appropriate for your cloud - ``nginx_.yaml`` - and name it ``nginx.yaml``
 * Step2: fill in the requested fields beginning with ``ADD_YOUR_...`` . These will differ depending on which cloud you are using.
+
+  **Important:** Make sure you create the appropriate firewall policy for the MiCADO workers as described :ref:`here <workerfirewallconfig>`!
+
 * In CloudSigma, for example, the ``libdrive_id`` , ``public_key_id`` and ``firewall_policy`` fields must be completed. Without these, CloudSigma does not have enough information to launch your worker nodes. All information is found on the CloudSigma Web UI. ``libdrive_id`` is the long alphanumeric string in the URL when a drive is selected under “Storage/Library”. ``public_key_id`` is under the “Access & Security/Keys Management” menu as **Uuid**. ``firewall_policy`` can be found when selecting a rule defined under the “Networking/Policies” menu. The following ports must be opened for MiCADO workers: *all inbound connections from MiCADO master*
+
 * Step3: Update the parameter file, called ``_settings``. You need the ip address for the MiCADO master and should name the deployment by setting the APP_ID. ***the application ID can not contain any underscores ( _ )** The APP_NAME must match the name given to the application in TOSCA (default: **nginxapp**) You should also change the SSL user/password/port information if they are different from the default.
 * Step4: run ``1-submit-tosca-nginx.sh`` to create the minimum number of MiCADO worker nodes and to deploy the Kubernetes Deployment including the nginx app defined in the ``nginx.yaml`` TOSCA description.
 * Step4a: run ``2-list-apps.sh`` to see currently running applications and their IDs, as well as the ports forwarded to 8080 for accessing the HTTP service
 * Step5: run ``3-generate-traffic.sh`` to generate some HTTP traffic. After thirty seconds or so, you will see the system respond by scaling up containers, and eventually virtual machines to the maximum specified.
-* Step6: run ``4-undeploy-nginx.sh`` to remove the nginx deployment and all the MiCADO worker nodes
\ No newline at end of file
+* Step5a: the load test will finish after 10 minutes and the infrastructure will scale back down
+* Step6: run ``4-undeploy-nginx.sh`` to remove the nginx deployment and all the MiCADO worker nodes
+
+wordpress
+=========
+
+This application deploys a wordpress blog, complete with MySQL server and a Network File Share for persistent data storage. It is a proof-of-concept and is **NOT** production ready.
+The policy defined for this application scales up/down both nodes and the wordpress frontend container based on network load. ``wrk`` (apt-get install wrk | https://github.com/wg/wrk)
+is recommended for HTTP load testing, but you can use any load generator you wish.
+
+**Note:** make sure you have the ``jq`` tool and ``wrk`` benchmarking app installed as these are required by the helper scripts to force scaling. Best results for ``wrk`` are seen on multi-core systems.
+
+* Step1: make a copy of the TOSCA file which is appropriate for your cloud - ``wordpress_.yaml`` - and name it ``wordpress.yaml``
+* Step2: fill in the requested fields beginning with ``ADD_YOUR_...`` . These will differ depending on which cloud you are using.
+
+  **Important:** Make sure you create the appropriate firewall policy for the MiCADO workers as described :ref:`here <workerfirewallconfig>`!
+
+  * In CloudSigma, for example, the ``libdrive_id`` , ``public_key_id`` and ``firewall_policy`` fields must be completed. Without these, CloudSigma does not have enough information to launch your worker nodes. All information is found on the CloudSigma Web UI. ``libdrive_id`` is the long alphanumeric string in the URL when a drive is selected under “Storage/Library”. ``public_key_id`` is under the “Access & Security/Keys Management” menu as **Uuid**. ``firewall_policy`` can be found when selecting a rule defined under the “Networking/Policies” menu. The following ports must be opened for MiCADO workers: *all inbound connections from MiCADO master*
+
+* Step3: Update the parameter file, called ``_settings``. You need the ip address for the MiCADO master and should name the deployment by setting the APP_ID. ***the application ID can not contain any underscores ( _ )** The FRONTEND_NAME must match the name given to the application in TOSCA (default: **wordpress**) You should also change the SSL user/password/port information if they are different from the default.
+* Step4: run ``1-submit-tosca-wordpress.sh`` to create the minimum number of MiCADO worker nodes and to deploy the Kubernetes Deployments for the NFS and MySQL servers and the Wordpress frontend.
+* Step4a: run ``2-list-apps.sh`` to see currently running applications and their IDs, as well as the nodePort open on the host for accessing the HTTP service (defaults to 30010)
+* Step5: navigate to your wordpress blog (generally at <IP>:30010) and go through the setup tasks until you can see the front page of your blog
+* Step6: run ``3-generate-traffic.sh`` to generate some HTTP traffic. After thirty seconds or so, you will see the system respond by scaling up a VM and containers to the maximum specified.
+* Step6a: the load test will stop after 10 minutes and the infrastructure will scale back down
+* Step7: run ``4-undeploy-wordpress.sh`` to remove the wordpress deployment and all the MiCADO worker nodes
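+
+If you would rather drive the load test by hand than via ``3-generate-traffic.sh``, a ``wrk`` run along these lines works; the thread count, connection count, duration and URL are all illustrative values to adjust for your own setup:
+
+::
+
+   # 4 threads, 100 open connections, sustained for 10 minutes
+   wrk -t4 -c100 -d600s http://<IP>:30010/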