Add public LoadBalancers to your local Kubernetes clusters.
In cloud-based Kubernetes solutions, Services can be exposed as type "LoadBalancer" and your cloud provider will provision a LoadBalancer and start routing traffic; in other words, you get "Ingress" to your services from the outside world.
inlets-operator brings that same experience to your local Kubernetes cluster. The operator automates the creation of an inlets exit-server on public cloud, and runs the client as a Pod inside your cluster. Your Kubernetes Service will be updated with the public IP of the exit-node and you can start receiving incoming traffic immediately.
This solution is for users who want to gain incoming network access (ingress) to private Kubernetes clusters. These may be running on-premises, on your laptop, within a VM or a Docker container. It even works behind NAT, and through HTTP proxies, without the need to open firewall ports. The cost of the LoadBalancer with an IaaS provider like DigitalOcean is around 5 USD / mo, which is several times cheaper than AWS or GCP.
Watch a video walk-through where we deploy an IngressController (ingress-nginx) to KinD, and then obtain LetsEncrypt certificates using cert-manager.
The operator detects Services of type LoadBalancer, and then creates a Tunnel Custom Resource. Its next step is to provision a small VM with a public IP on the public cloud, where it will run the inlets tunnel server. Then an inlets client is deployed as a Pod within your local cluster, which connects to the server and acts like a gateway to your chosen local service.
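You can watch that sequence happen with standard kubectl commands; the names in the output below are illustrative and will vary with your Service name:

```sh
# See the Tunnel resource the operator creates for each LoadBalancer Service
kubectl get svc --all-namespaces
kubectl get tunnel --all-namespaces

# The inlets client runs as a Pod alongside your Service,
# e.g. nginx-1-tunnel-client for a Service called nginx-1
kubectl get pods --all-namespaces | grep tunnel-client
```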
Pick inlets PRO or OSS.

inlets PRO:

- Automatic end-to-end encryption of the control-plane using PKI and TLS
- Tunnel any TCP traffic at L4, e.g. Mongo, Postgres, MariaDB, Redis, NATS, SSH and TLS itself.
- Tunnel an IngressController including TLS termination and LetsEncrypt certs from cert-manager
- Punch out multiple ports such as 80 and 443 over the same tunnel
- Commercially licensed and supported. For cloud native operators and developers. Heavily discounted pricing available for personal use.
inlets OSS:

- No encryption enabled for the control-plane.
- Tunnel L7 HTTP traffic.
- Punch out only one port per tunnel; the port name must be `http`.
- Free, OSS, built for community developers.
If you transfer any secrets, login info, business data, or confidential information then you should use inlets PRO for its built-in encryption using TLS and PKI.
Inlets is a Cloud Native Tunnel and is listed on the Cloud Native Landscape under Service Proxies.
- inlets - Cloud Native Tunnel for L7 / HTTP traffic written in Go
- inlets-pro - Cloud Native Tunnel for L4 TCP
- inlets-operator - Public IPs for your private Kubernetes Services and CRD
- inletsctl - Automate the cloud for fast HTTP (L7) and TCP (L4) tunnels
Operator cloud host provisioning:
- Provision VMs/exit-nodes on public cloud
- Provision to Packet.com
- Provision to DigitalOcean
- Provision to Scaleway
- Provision to GCP
- Provision to AWS EC2
- Provision to Linode
- Provision to Azure
- Provision to Civo
- Publish stand-alone Go provisioning library/SDK
With inlets-pro configured, you get the following additional benefits:

- Automatic configuration of TLS and encryption using a secured websocket (`wss://`) for the control-port
- Tunnel pure TCP traffic
- Separate data-plane (ports given by Kubernetes) and control-plane (port `8132`)
Other features:
- Automatically update Service type LoadBalancer with a public IP
- Tunnel L7 `http` traffic
- In-cluster Role, Dockerfile and YAML files
- Raspberry Pi / armhf build and YAML file
- ARM64 (Graviton/Odroid/Packet.com) Dockerfile/build and K8s YAML files
- Control which services get a LoadBalancer using annotations
- Garbage collect hosts when Service or CRD is deleted
- CI with Travis and automated release artifacts
- One-line installer with arkade - `arkade install inlets-operator --help`
Backlog pending:
- Feel free to request features.
Check out the reference documentation for inlets-operator to get exit-nodes provisioned on different cloud providers here.
The LoadBalancer type is usually provided by a cloud controller, but when that is not available, you can use the inlets-operator to get a public IP and ingress.
The free OSS version of inlets provides an HTTP tunnel, while inlets PRO can tunnel TCP traffic and provide full functionality to an IngressController.
First create a deployment for Nginx.
For Kubernetes 1.17 and lower:
kubectl run nginx-1 --image=nginx --port=80 --restart=Always
For 1.18 and higher:
kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/contrib/nginx-sample-deployment.yaml
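The manifest at that URL is a standard nginx Deployment; a minimal equivalent is sketched below for reference only and may differ from the exact file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```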
Now create a service of type LoadBalancer via `kubectl expose`:
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer
kubectl get svc
kubectl get tunnel/nginx-1-tunnel -o yaml
kubectl logs deploy/nginx-1-tunnel-client
Check the IP of the LoadBalancer and then access it via the Internet.
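Once an IP appears, a minimal check from your own machine, assuming the nginx-1 Service above, is:

```sh
# Read the public IP assigned by the operator, then probe it over the Internet
IP=$(kubectl get svc nginx-1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -i http://$IP/
```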
inlets PRO can tunnel multiple ports, but inlets OSS takes the first port named "http" on your service. With the OSS version of inlets (see the OpenFaaS example below), make sure you give the port a name of `http`, otherwise a default of `80` will be used incorrectly.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
  namespace: openfaas
  labels:
    app: gateway
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31112
  selector:
    app: gateway
  type: LoadBalancer
```
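After applying the Service, you can watch for the operator to populate the external IP; this is just a sketch of the standard kubectl workflow, with `gateway-service.yaml` as a hypothetical file name for the manifest above:

```sh
kubectl apply -f gateway-service.yaml    # hypothetical file name for the manifest above
kubectl get svc -n openfaas gateway -w   # watch until EXTERNAL-IP changes from <pending>
```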
By default the operator will create a tunnel for every LoadBalancer service.
There are three ways to override the behaviour:
To ignore a service such as traefik, type in: `kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false`

You can also set the operator to ignore services by default and only manage them when the annotation is set to true, using the flag `-annotated-only`.

To manage a service such as traefik explicitly, type in: `kubectl annotate svc/traefik -n kube-system dev.inlets.manage=true`
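As a sketch of standard kubectl usage, you can inspect or undo the annotation like this:

```sh
# Show the current annotations on the Service
kubectl get svc/traefik -n kube-system -o jsonpath='{.metadata.annotations}'

# Remove the annotation again (the trailing dash deletes it)
kubectl annotate svc/traefik -n kube-system dev.inlets.manage-
```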
Running multiple LoadBalancer controllers together, e.g. inlets-operator and MetalLB, can cause issues as both will compete against each other when processing the service. Although the inlets-operator has the flag `-annotated-only` to filter the services, not all other LoadBalancer controllers have a similar feature.
In this case, the inlets-operator is still able to expose services by using a ClusterIP service with a Tunnel resource instead of a LoadBalancer service.
Example:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: inlets.inlets.dev/v1alpha1
kind: Tunnel
metadata:
  name: nginx
spec:
  serviceName: nginx
  auth_token: <token>
```
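As a sketch, apply the combined manifest and wait for the tunnel to come up, assuming it was saved as `nginx-tunnel.yaml`:

```sh
kubectl apply -f nginx-tunnel.yaml   # hypothetical file containing the Service and Tunnel above
kubectl get tunnel nginx -w          # wait for HOSTSTATUS to become active
```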
The public IP address of the tunnel is available in the service resource:
```
$ kubectl get services,tunnel

NAME            TYPE        CLUSTER-IP        EXTERNAL-IP       PORT(S)   AGE
service/nginx   ClusterIP   192.168.226.216   104.248.163.242   80/TCP    78s

NAME                             SERVICE   TUNNEL         HOSTSTATUS   HOSTIP            HOSTID
tunnel.inlets.inlets.dev/nginx   nginx     nginx-client   active       104.248.163.242   214795742
```
or use a jsonpath to get the value:
kubectl get service nginx --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
The operator deployment is in the `kube-system` namespace.
kubectl logs deploy/inlets-operator -n kube-system -f
Use the same commands as described in the section above.
There used to be separate deployment files in the `artifacts` folder called `operator-amd64.yaml` and `operator-armhf.yaml`. Since version `0.2.7`, Docker images are built for multiple architectures with the same tag, which means that there is now just one deployment file called `operator.yaml` that can be used on all supported architectures.
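As a rough sketch, applying that single manifest from a clone of the repository looks like this; note that the reference documentation also covers the Tunnel CRD and the cloud-provider credentials the operator needs before it can provision exit-nodes:

```sh
kubectl apply -f ./artifacts/operator.yaml
```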
The host provisioning code used by the inlets-operator is shared with inletsctl, both tools use the configuration in the grid below.
These costs should be treated as estimates and will depend on your bandwidth usage and on how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The host types provided have been chosen because they are the lowest-cost options that the maintainers could find.
Provider | Price per month | Price per hour | OS image | CPU | Memory | Boot time |
---|---|---|---|---|---|---|
Google Compute Engine | * ~$4.28 | ~$0.006 | Debian GNU Linux 9 (stretch) | 1 | 614MB | ~3-15s |
Packet | ~$51 | $0.07 | Ubuntu 16.04 | 4 | 8GB | ~45-60s |
Digital Ocean | $5 | ~$0.0068 | Ubuntu 16.04 | 1 | 512MB | ~20-30s |
Scaleway | 2.99€ | 0.006€ | Ubuntu 18.04 | 2 | 2GB | 3-5m |
- The first f1-micro instance in a GCP Project (the default instance type for inlets-operator) is free for 720 hrs (30 days) a month
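As a worked example from the table above, a single DigitalOcean exit-node at roughly $0.0068/hour comes to about 0.0068 × 24 × 30 ≈ $4.90 for a full month of uptime, in line with the $5/month figure quoted earlier.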
Contributions are welcome, see the CONTRIBUTING.md guide.
- inlets pro - L4 TCP tunnel, which can tunnel any TCP traffic with automatic, built-in encryption. Kubernetes-ready with Docker images and YAML manifests.
- inlets - provides an L7 HTTP tunnel for applications through the use of an exit node; it is used by the inlets-operator. Encryption can be configured separately.
- metallb - open source LoadBalancer for private Kubernetes clusters, no tunnelling.
- Cloudflare Argo - paid SaaS product from Cloudflare for Cloudflare customers and domains - K8s integration available through Ingress
- ngrok - a popular tunnelling tool, restarts every 7 hours, limits connections per minute, paid SaaS product with no K8s integration available
inlets and the inlets-operator are brought to you by OpenFaaS Ltd and Alex Ellis.