---
title: Scaling
description: How to configure Teleport for large-scale deployments
---

This section explains the recommended configuration settings for large-scale deployments of Teleport.

For Teleport Cloud customers, the settings in this guide are configured automatically.

(!docs/pages/includes/cloud/call-to-action.mdx!)

## Prerequisites

- Teleport v(=teleport.version=) Open Source or Enterprise.

## Hardware recommendations

Set up Teleport with a High Availability configuration (see the sketch after the table below).

| Scenario | Max Recommended Count | Proxy | Auth Server | AWS Instance Types |
|----------|-----------------------|-------|-------------|--------------------|
| Teleport SSH Nodes connected to Auth Service | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16GB RAM | m4.2xlarge |
| Teleport SSH Nodes connected to Auth Service | 50,000 | 2x 4 vCPUs, 16GB RAM | 2x 8 vCPUs, 16GB RAM | m4.2xlarge |
| Teleport SSH Nodes connected to Proxy Service through reverse tunnels | 10,000 | 2x 4 vCPUs, 8GB RAM | 2x 8 vCPUs, 16+GB RAM | m4.2xlarge |
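In a High Availability configuration, each Proxy Service instance connects to multiple Auth Service instances, typically through a load balancer. The sketch below assumes the multi-entry `auth_servers` field from older Teleport config versions and uses placeholder hostnames; adapt it to your config version and addresses.

```yaml
# teleport.yaml on a Proxy Service instance -- a minimal HA sketch.
# auth1/auth2.example.com are placeholder addresses, not real endpoints.
teleport:
  auth_servers:
    - auth1.example.com:3025
    - auth2.example.com:3025
```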

## Auth and Proxy Configuration

Increase Teleport's connection limit from the default of 15000 to 65000:

```yaml
# Teleport Auth and Proxy
teleport:
  connection_limits:
    max_connections: 65000
    max_users: 1000
```
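Apply this on both the Auth and Proxy instances, then restart the service so the new limits take effect. A minimal sketch, assuming Teleport runs as a systemd service named `teleport`:

```
$ sudo systemctl restart teleport
$ sudo systemctl status teleport
```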

## Kernel parameters

Tweak Teleport's systemd unit parameters to allow a higher number of open files:

```ini
[Service]
LimitNOFILE=65536
```
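One way to apply this without editing the packaged unit file is a systemd drop-in, again assuming the service is named `teleport`:

```
$ sudo systemctl edit teleport     # add the [Service] lines above to the override file
$ sudo systemctl daemon-reload
$ sudo systemctl restart teleport
```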

Verify that Teleport's process has high enough file limits:

```
$ cat /proc/$(pidof teleport)/limits
# Limit                     Soft Limit           Hard Limit           Units
# Max open files            65536                65536                files
```
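If the kernel's system-wide file handle cap is lower than the per-process limit, raise it as well. A sketch using the `fs.file-max` sysctl; the value shown is an illustrative assumption, not a Teleport recommendation:

```
$ sudo sysctl -w fs.file-max=262144
$ echo 'fs.file-max = 262144' | sudo tee /etc/sysctl.d/99-teleport.conf
```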

## DynamoDB configuration

When using Teleport with DynamoDB, we recommend on-demand provisioning. This allows DynamoDB to scale with cluster load.

For customers who cannot use on-demand provisioning, we recommend at least 250 WCU and 100 RCU for clusters of 10,000 nodes.
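Billing mode is set in the Auth Service's storage backend configuration. A minimal sketch, assuming your Teleport version supports the DynamoDB backend's `billing_mode`, `read_capacity_units`, and `write_capacity_units` options; the region and table name are placeholders:

```yaml
# teleport.yaml on the Auth Service -- DynamoDB storage backend sketch.
teleport:
  storage:
    type: dynamodb
    region: us-east-1            # placeholder region
    table_name: teleport-backend # placeholder table name
    billing_mode: pay_per_request
    # If on-demand is not an option, use provisioned capacity instead:
    # billing_mode: provisioned
    # write_capacity_units: 250
    # read_capacity_units: 100
```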