This is a simple Terraform example to get you started provisioning an all-in-one Teleport cluster (auth, node, proxy) on a single EC2 instance based on Teleport's pre-built AMI.
Do not use this in production! This example should be used for demo, proof-of-concept, or learning purposes only.
Teleport AMIs are built so you only need to specify environment variables to bring a fully configured instance online. See `data.tpl` or our documentation to learn more about supported environment variables.
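For illustration, here is a minimal sketch of the kind of environment file such a template renders. The variable names below are hypothetical placeholders rather than the authoritative list; `data.tpl` is the source of truth:

```sh
# Hypothetical rendering of the environment file the AMI consumes.
# Variable names are illustrative; see data.tpl for the real ones.
TELEPORT_ROLE="auth,node,proxy"            # all-in-one: every role on one instance
TELEPORT_CLUSTER_NAME="TeleportCluster1"   # matches TF_VAR_cluster_name below
TELEPORT_DOMAIN_NAME="cluster.example.com" # matches TF_VAR_route53_domain below
TELEPORT_S3_BUCKET="teleport.example.com"  # matches TF_VAR_s3_bucket_name below
USE_LETSENCRYPT="true"                     # matches TF_VAR_use_letsencrypt below
```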
A series of systemd units bootstraps the instance via several bash scripts.
While this may not be sufficient for all use cases, it's a great proof-of-concept that you can fork and customize to your liking. Check out our AWS AMI generation code if you're interested in adapting this to your requirements.
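If you're curious what the bootstrap actually did on a live instance, you can inspect it with systemd's own tooling. A quick sketch (unit names vary by AMI version, so enumerate them rather than guessing):

```sh
# Enumerate whatever Teleport-related units the AMI installed...
systemctl list-units --type=service 'teleport*'

# ...then read recent logs from one of them, e.g. the main service.
journalctl -u teleport.service --no-pager -n 50
```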
This Terraform example will configure the following AWS resources:

- Teleport all-in-one (auth, node, proxy) single-cluster EC2 instance
- DynamoDB tables (Teleport state and events)
- S3 bucket (session recording storage)
- Route53 `A` record
- Security groups and IAM roles

Basic usage:

- `make plan` and verify the plan is building what you expect.
- `make apply` to begin provisioning.
- `make destroy` to delete the provisioned resources.

Project layout:

File | Description |
---|---|
cluster.tf | EC2 instance template and provisioning. |
cluster_iam.tf | IAM role provisioning. Permits the EC2 instance to talk to AWS resources (SSM, S3, DynamoDB, etc.). |
cluster_sg.tf | Security Group provisioning. Ingress network rules. |
data.tf | Misc variables used for provisioning AWS resources. |
data.tpl | Template for Teleport configuration. |
dynamo.tf | DynamoDB table provisioning. Tables used for Teleport state and events. |
route53.tf | Route53 zone creation. Requires a hosted zone to configure SSL. |
s3.tf | S3 bucket provisioning. Bucket used for session recording storage. |
ssm.tf | Teleport license distribution (if using Teleport Enterprise). |
vars.tf | Inbound variables for Teleport configuration. |
Update the included Makefile to define your configuration, then:

1. Run `make apply`.
2. SSH to your new instance: `ssh ec2-user@<cluster_domain>`.
3. Create a Teleport user, permitting login as the local `ec2-user`:
   - OSS: `tctl users add <username> ec2-user`
   - Enterprise: `tctl users add --roles=admin <username> --logins=ec2-user`
4. Success! You've configured a fully functional Teleport cluster.
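From there, users can connect from their own machines with `tsh`. A minimal sketch, assuming the example domain from the Makefile below and Teleport's default web proxy port (3080):

```sh
# Authenticate against the proxy (cluster.example.com is this guide's
# example route53_domain; substitute your own).
tsh login --proxy=cluster.example.com:3080 --user=<username>

# Open an SSH session through Teleport as the local ec2-user.
tsh ssh ec2-user@<cluster_domain>
```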
```makefile
# Set up Terraform variables in a separate environment file, or inline here

# Region to run in - we currently have AMIs in the following regions:
# ap-south-1, ap-northeast-2, ap-southeast-1, ap-southeast-2, ap-northeast-1, ca-central-1, eu-central-1, eu-west-1, eu-west-2
# sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2
TF_VAR_region ?= "us-east-1"

# Cluster name is a unique name for this cluster; it must not contain spaces or other special characters
TF_VAR_cluster_name ?= "TeleportCluster1"

# AWS SSH key pair name to provision in installed instances, must be a key pair available in the above defined region (AWS Console > EC2 > Key Pairs)
TF_VAR_key_name ?= "example"

# Full absolute path to the license file, on the machine executing Terraform, for Teleport Enterprise.
# This license will be copied into AWS SSM and then pulled down on the auth nodes to enable Enterprise functionality
TF_VAR_license_path ?= "/path/to/license"

# AMI name contains the version of Teleport to install, and whether to use the OSS or Enterprise version
# These AMIs are published by Teleport and shared publicly whenever a new version of Teleport is released
# To list available AMIs:
# OSS: aws ec2 describe-images --owners 126027368216 --filters 'Name=name,Values=gravitational-teleport-ami-oss*'
# Enterprise: aws ec2 describe-images --owners 126027368216 --filters 'Name=name,Values=gravitational-teleport-ami-ent*'
# FIPS 140-2 images are also available for Enterprise customers; look for '-fips' at the end of the AMI's name
TF_VAR_ami_name ?= "gravitational-teleport-ami-ent-5.0.1"

# Route 53 hosted zone to use, must be a root zone registered in AWS, e.g. example.com
TF_VAR_route53_zone ?= "example.com"

# Subdomain to set up in the zone above, e.g. cluster.example.com
# This will be used for users connecting to the Teleport proxy
TF_VAR_route53_domain ?= "cluster.example.com"

# Bucket name to store encrypted LetsEncrypt certificates.
TF_VAR_s3_bucket_name ?= "teleport.example.com"

# Email to be used for the LetsEncrypt certificate registration process.
TF_VAR_email ?= "support@example.com"

# Set to true to use LetsEncrypt to provision certificates
TF_VAR_use_letsencrypt ?= true

# Set to true to use ACM (AWS Certificate Manager) to provision certificates
# If you wish to use a pre-existing ACM certificate rather than having Terraform generate one for you, you can import it:
# terraform import aws_acm_certificate.cert <certificate_arn>
TF_VAR_use_acm ?= false
```
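The `describe-images` commands in the comments above dump full JSON for every matching image. If you only want the newest AMI's name and ID, a JMESPath query can sort by creation date; a sketch (swap the name filter for the OSS prefix if needed):

```sh
# Print the name and ID of the most recently published Enterprise AMI
# in the current region (126027368216 is the publisher account above).
aws ec2 describe-images \
  --owners 126027368216 \
  --filters 'Name=name,Values=gravitational-teleport-ami-ent*' \
  --query 'sort_by(Images, &CreationDate)[-1].[Name,ImageId]' \
  --output text
```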
```sh
# plan
make plan
```
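Because the Makefile assigns everything with `?=`, a `TF_VAR_*` already present in the environment takes precedence over the Makefile default, so you can try one-off values without editing the file (assuming the Makefile passes these variables through to Terraform):

```sh
# Override individual variables for a single run; ?= won't clobber them.
TF_VAR_region="us-west-2" TF_VAR_cluster_name="TeleportCluster2" make plan
```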
Please see the AMIS.md file for a list of public Teleport AMI IDs that you can use.