
Copied over Teleport's example "starter cluster" from
https://github.com/gravitational/teleport/tree/master/examples/aws/terraform/starter-cluster

Duane Waddle 4 years ago
parent
commit
bf3147cebf

+ 62 - 0
base/teleport-starter-cluster/Makefile

@@ -0,0 +1,62 @@
+# Set up terraform variables in a separate environment file, or inline here
+
+# This region must support the services used here (DynamoDB, S3, SSM)
+TF_VAR_region ?= us-gov-east-1
+
+# Cluster name: a unique name for this cluster; must not contain spaces or other special characters
+TF_VAR_cluster_name ?= teleporttest
+
+# AWS SSH key name to provision on created instances; the key must already exist in the region
+TF_VAR_key_name ?= duane.waddle
+
+# Full absolute path to the license file for Teleport Enterprise or Pro.
+# This license will be copied into SSM and then pulled down on the auth nodes to enable Enterprise/Pro functionality
+TF_VAR_license_path ?= ~/Downloads/license.pem
+
+# AMI name contains the version of Teleport to install, and whether to use OSS or Enterprise version
+# These AMIs are published by Gravitational and shared as public whenever a new version of Teleport is released
+# To list available AMIs:
+# OSS: aws ec2 describe-images --filters 'Name=name,Values=gravitational-teleport-ami-oss*'
+# Enterprise: aws ec2 describe-images --filters 'Name=name,Values=gravitational-teleport-ami-ent*'
+TF_VAR_ami_name ?= teleport-fips
+
+# Route 53 zone to use, should be the zone registered in AWS, e.g. example.com
+TF_VAR_route53_zone ?= xdrtest.accenturefederalcyber.com
+
+# Subdomain to set up in the zone above, e.g. cluster.example.com
+# This will be used for internet access for users connecting to teleport proxy
+TF_VAR_route53_domain ?= teleporttest
+
+# Bucket name to store encrypted LetsEncrypt certificates.
+TF_VAR_s3_bucket_name ?= xdr-teleporttest
+
+# Email of your support org, used for the LetsEncrypt cert registration process.
+TF_VAR_email ?= xdr.eng@accenturefederal.com
+
+# Set to true to use LetsEncrypt to provision certificates
+TF_VAR_use_letsencrypt ?= true
+
+# Set to true to use ACM (AWS Certificate Manager) to provision certificates
+# If you wish to use a pre-existing ACM certificate rather than having Terraform generate one for you, you can import it:
+# terraform import aws_acm_certificate.cert <certificate_arn>
+TF_VAR_use_acm ?= false
+
+export
+
+# Plan launches terraform plan
+.PHONY: plan
+plan:
+	terraform init
+	terraform plan
+
+# Apply launches terraform apply
+.PHONY: apply
+apply:
+	terraform init
+	terraform apply
+
+# Destroy deletes the provisioned resources
+.PHONY: destroy
+destroy:
+	terraform init
+	terraform destroy

+ 118 - 0
base/teleport-starter-cluster/README.md

@@ -0,0 +1,118 @@
+# Teleport Terraform AWS AMI Simple Example
+
+This is a simple Terraform example to get you started provisioning an all-in-one Teleport cluster (auth, node, proxy) on a single ec2 instance based on Teleport's pre-built AMI.
+
+Do not use this in production! This example should be used for demo, proof-of-concept, or learning purposes only.
+
+## How does this work?
+
+Teleport AMIs are built so you only need to specify environment variables to bring a fully configured instance online. See `data.tpl` or our [documentation](https://gravitational.com/teleport/docs/aws_oss_guide/#single-oss-teleport-amis-manual-gui-setup) to learn more about supported environment variables.
+
+A series of systemd [units](https://github.com/gravitational/teleport/tree/master/assets/aws/files/system) bootstrap the instance, via several bash [scripts](https://github.com/gravitational/teleport/tree/master/assets/aws/files/bin).
+
+While this may not be sufficient for all use cases, it's a great proof-of-concept that you can fork and customize to your liking. Check out our AWS AMI [generation code](https://github.com/gravitational/teleport/tree/master/assets/aws) if you're interested in adapting this to your requirements.
+
+This Terraform example will configure the following AWS resources:
+
+- Teleport all-in-one (auth, node, proxy) single cluster ec2 instance
+- DynamoDB tables (cluster state, cluster events, ssl lock)
+- S3 bucket (session recording storage)
+- Route53 `A` record
+- Security Groups and IAM roles
+
+## Instructions
+
+### Build Requirements
+
+- terraform v0.12+ [install docs](https://learn.hashicorp.com/terraform/getting-started/install.html)
+- awscli v1.14+ [install docs](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)
+
+### Usage
+
+- `make plan` and verify the plan is building what you expect.
+- `make apply` to begin provisioning.
+- `make destroy` to delete the provisioned resources.
+
+### Project layout
+
+File           | Description
+-------------- | ---------------------------------------------------------------------------------------------
+cluster.tf     | EC2 instance template and provisioning.
+cluster_iam.tf | IAM role provisioning. Permits the ec2 instance to talk to AWS resources (SSM, S3, DynamoDB, etc.)
+cluster_sg.tf  | Security Group provisioning. Ingress network rules.
+data.tf        | Provider and data sources (VPC, subnets, AMI, Route53 zone) used for provisioning.
+data.tpl       | Template for Teleport configuration.
+dynamo.tf      | DynamoDB table provisioning. Tables used for Teleport state and events.
+route53.tf     | Route53 `A` record creation. Requires a pre-existing hosted zone to configure SSL.
+s3.tf          | S3 bucket provisioning. Bucket used for session recording storage.
+ssm.tf         | Teleport license distribution (if using Teleport enterprise).
+vars.tf        | Inbound variables for Teleport configuration.
+
+### Steps
+
+Update the included Makefile to define your configuration.
+
+1. Run `make apply`.
+2. SSH to your new instance. `ssh ec2-user@<cluster_domain>`.
+3. Create a user (this will create a Teleport User and permit login as the local ec2-user).
+   - OSS:
+   `tctl users add <username> ec2-user` 
+   - Enterprise (requires a role):
+    `tctl users add --roles=admin <username> --logins=ec2-user` 
+4. Click the registration link provided in the output. Set a password and configure your 2FA token.
+5. Success! You've configured a fully functional Teleport cluster.
+
+```bash
+# Set up Terraform variables in a separate environment file, or inline here
+
+# Region to run in - we currently have AMIs in the following regions:
+# ap-south-1, ap-northeast-2, ap-southeast-1, ap-southeast-2, ap-northeast-1, ca-central-1, eu-central-1, eu-west-1, eu-west-2
+# sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2
+TF_VAR_region ?="us-east-1"
+
+# Cluster name: a unique name for this cluster, with no spaces or other special characters
+TF_VAR_cluster_name ?="TeleportCluster1"
+
+# AWS SSH key pair name to provision in installed instances, must be a key pair available in the above defined region (AWS Console > EC2 > Key Pairs)
+TF_VAR_key_name ?="example"
+
+# Full absolute path to the license file, on the machine executing Terraform, for Teleport Enterprise.
+# This license will be copied into AWS SSM and then pulled down on the auth nodes to enable Enterprise functionality
+TF_VAR_license_path ?="/path/to/license"
+
+# AMI name contains the version of Teleport to install, and whether to use OSS or Enterprise version
+# These AMIs are published by Teleport and shared as public whenever a new version of Teleport is released
+# To list available AMIs:
+# OSS: aws ec2 describe-images --owners 126027368216 --filters 'Name=name,Values=gravitational-teleport-ami-oss*'
+# Enterprise: aws ec2 describe-images --owners 126027368216 --filters 'Name=name,Values=gravitational-teleport-ami-ent*'
+# FIPS 140-2 images are also available for Enterprise customers, look for '-fips' on the end of the AMI's name
+TF_VAR_ami_name ?="gravitational-teleport-ami-ent-5.0.1"
+
+# Route 53 hosted zone to use, must be a root zone registered in AWS, e.g. example.com
+TF_VAR_route53_zone ?="example.com"
+
+# Subdomain to set up in the zone above, e.g. cluster.example.com
+# This will be used for users connecting to Teleport proxy
+TF_VAR_route53_domain ?="cluster.example.com"
+
+# Bucket name to store encrypted LetsEncrypt certificates.
+TF_VAR_s3_bucket_name ?="teleport.example.com"
+
+# Email to be used for LetsEncrypt certificate registration process.
+TF_VAR_email ?="support@example.com"
+
+# Set to true to use LetsEncrypt to provision certificates
+TF_VAR_use_letsencrypt ?=true
+
+# Set to true to use ACM (AWS Certificate Manager) to provision certificates
+# If you wish to use a pre-existing ACM certificate rather than having Terraform generate one for you, you can import it:
+# terraform import aws_acm_certificate.cert <certificate_arn>
+TF_VAR_use_acm ?=false
+
+# plan
+make plan
+```
+
+## Public Teleport AMI IDs
+
+Please [see the AMIS.md file](../AMIS.md) for a list of public Teleport AMI IDs that you can use.

+ 31 - 0
base/teleport-starter-cluster/cluster.tf

@@ -0,0 +1,31 @@
+// Configuration data for teleport.yaml generation
+data "template_file" "node_user_data" {
+  template = file("data.tpl")
+
+  vars = {
+    region                   = var.region
+    cluster_name             = var.cluster_name
+    email                    = var.email
+    domain_name              = var.route53_domain
+    dynamo_table_name        = aws_dynamodb_table.teleport.name
+    dynamo_events_table_name = aws_dynamodb_table.teleport_events.name
+    locks_table_name         = aws_dynamodb_table.teleport_locks.name
+    license_path             = var.license_path
+    s3_bucket                = var.s3_bucket_name
+    use_acm                  = var.use_acm
+    use_letsencrypt          = var.use_letsencrypt
+  }
+}
+
+// Auth, node, proxy (aka Teleport Cluster) on single AWS instance
+resource "aws_instance" "cluster" {
+  key_name                    = var.key_name
+  ami                         = data.aws_ami.base.id
+  instance_type               = var.cluster_instance_type
+  subnet_id                   = tolist(data.aws_subnet_ids.all.ids)[0]
+  vpc_security_group_ids      = [aws_security_group.cluster.id]
+  associate_public_ip_address = true
+  user_data                   = data.template_file.node_user_data.rendered
+  iam_instance_profile        = aws_iam_instance_profile.cluster.id
+}
+
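
Not part of the upstream example, but a small hedged addition to this file can make `make apply` friendlier: a Terraform `output` (hypothetical name) surfacing the instance's public address, which is otherwise only visible via Route53 or the console.

```hcl
// Convenience output (sketch): print the cluster's public IP at the end of apply
output "cluster_public_ip" {
  value = aws_instance.cluster.public_ip
}
```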

+ 180 - 0
base/teleport-starter-cluster/cluster_iam.tf

@@ -0,0 +1,180 @@
+/* 
+An IAM Role and Policies are used to permit
+EC2 instances to communicate with various AWS
+resources.
+*/
+
+// IAM Role
+resource "aws_iam_role" "cluster" {
+  name = "${var.cluster_name}-cluster"
+
+  assume_role_policy = <<EOF
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Principal": {"Service": "ec2.amazonaws.com"},
+            "Action": "sts:AssumeRole"
+        }
+    ]
+}
+EOF
+
+}
+
+// IAM Profile
+resource "aws_iam_instance_profile" "cluster" {
+  name       = "${var.cluster_name}-cluster"
+  role       = aws_iam_role.cluster.name
+  depends_on = [aws_iam_role_policy.cluster_s3]
+}
+
+// Policy to permit cluster to talk to S3 (Session recordings)
+resource "aws_iam_role_policy" "cluster_s3" {
+  name = "${var.cluster_name}-cluster-s3"
+  role = aws_iam_role.cluster.id
+
+  policy = <<EOF
+{
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:ListBucket",
+         "s3:ListBucketVersions"
+      ],
+       "Resource": ["arn:aws:s3:::${aws_s3_bucket.storage.bucket}"]
+     },
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:PutObject",
+         "s3:GetObject",
+         "s3:GetObjectVersion"
+       ],
+       "Resource": ["arn:aws:s3:::${aws_s3_bucket.storage.bucket}/*"]
+     }
+   ]
+}
+EOF
+
+}
+
+// Policy to permit cluster to access SSM (Enterprise license handling)
+resource "aws_iam_role_policy" "cluster_ssm" {
+  name = "${var.cluster_name}-cluster-ssm"
+  role = aws_iam_role.cluster.id
+
+  policy = <<EOF
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": [
+              "ssm:DescribeParameters",
+              "ssm:GetParameters",
+              "ssm:GetParametersByPath",
+              "ssm:GetParameter",
+              "ssm:PutParameter",
+              "ssm:DeleteParameter"
+            ],
+            "Resource": "arn:aws:ssm:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:parameter/teleport/${var.cluster_name}/*"
+        },
+        {
+         "Effect":"Allow",
+         "Action":[
+            "kms:Decrypt"
+         ],
+         "Resource":[
+            "arn:aws:kms:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:key/${data.aws_kms_alias.ssm.target_key_id}"
+         ]
+      }
+    ]
+}
+EOF
+
+}
+
+// Policy to permit cluster to access DynamoDB tables (Cluster state, events, and SSL)
+resource "aws_iam_role_policy" "cluster_dynamo" {
+  name = "${var.cluster_name}-cluster-dynamo"
+  role = aws_iam_role.cluster.id
+
+  policy = <<EOF
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "AllActionsOnTeleportDB",
+            "Effect": "Allow",
+            "Action": "dynamodb:*",
+            "Resource": "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/${aws_dynamodb_table.teleport.name}"
+        },
+        {
+            "Sid": "AllActionsOnTeleportEventsDB",
+            "Effect": "Allow",
+            "Action": "dynamodb:*",
+            "Resource": "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/${aws_dynamodb_table.teleport_events.name}"
+        },
+        {
+            "Sid": "AllActionsOnTeleportEventsIndexDB",
+            "Effect": "Allow",
+            "Action": "dynamodb:*",
+            "Resource": "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/${aws_dynamodb_table.teleport_events.name}/index/*"
+        },
+        {
+            "Sid": "AllActionsOnTeleportStreamsDB",
+            "Effect": "Allow",
+            "Action": "dynamodb:*",
+            "Resource": "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/${aws_dynamodb_table.teleport.name}/stream/*"
+        },
+        {
+            "Sid": "AllActionsOnLocks",
+            "Effect": "Allow",
+            "Action": "dynamodb:*",
+            "Resource": "arn:aws:dynamodb:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:table/${aws_dynamodb_table.teleport_locks.name}"
+        }
+    ]
+}
+EOF
+
+}
+
+// Policy to permit cluster to access Route53 (SSL)
+resource "aws_iam_role_policy" "cluster_route53" {
+  name = "${var.cluster_name}-cluster-route53"
+  role = aws_iam_role.cluster.id
+
+  policy = <<EOF
+{
+    "Version": "2012-10-17",
+    "Id": "certbot-dns-route53 policy",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": [
+                "route53:ListHostedZones",
+                "route53:GetChange"
+            ],
+            "Resource": [
+                "*"
+            ]
+        },
+        {
+            "Effect" : "Allow",
+            "Action" : [
+                "route53:ChangeResourceRecordSets"
+            ],
+            "Resource" : [
+                "arn:aws:route53:::hostedzone/${data.aws_route53_zone.cluster.zone_id}"
+            ]
+        }
+    ]
+}
+EOF
+
+}

+ 55 - 0
base/teleport-starter-cluster/cluster_sg.tf

@@ -0,0 +1,55 @@
+/* 
+Security Groups and Rules for Cluster.
+
+Note: Please see our Production Guide for network security
+recommendations. 
+https://gravitational.com/teleport/docs/production/#firewall-configuration
+*/
+
+// Create a Security Group
+resource "aws_security_group" "cluster" {
+  name   = "${var.cluster_name}-cluster"
+  vpc_id = data.aws_vpc.default.id
+
+  tags = {
+    TeleportCluster = var.cluster_name
+  }
+}
+
+// Permit inbound to SSH
+resource "aws_security_group_rule" "cluster_ingress_ssh" {
+  type              = "ingress"
+  from_port         = 22
+  to_port           = 22
+  protocol          = "tcp"
+  cidr_blocks       = ["0.0.0.0/0"]
+  security_group_id = aws_security_group.cluster.id
+}
+// Permit inbound to Teleport Web interface
+resource "aws_security_group_rule" "cluster_ingress_web" {
+  type              = "ingress"
+  from_port         = 3080
+  to_port           = 3080
+  protocol          = "tcp"
+  cidr_blocks       = ["0.0.0.0/0"]
+  security_group_id = aws_security_group.cluster.id
+}
+// Permit inbound to Teleport services
+resource "aws_security_group_rule" "cluster_ingress_services" {
+  type              = "ingress"
+  from_port         = 3022
+  to_port           = 3025
+  protocol          = "tcp"
+  cidr_blocks       = ["0.0.0.0/0"]
+  security_group_id = aws_security_group.cluster.id
+}
+
+// Permit all outbound traffic
+resource "aws_security_group_rule" "cluster_egress" {
+  type              = "egress"
+  from_port         = 0
+  to_port           = 0
+  protocol          = "-1"
+  cidr_blocks       = ["0.0.0.0/0"]
+  security_group_id = aws_security_group.cluster.id
+}
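
The demo rules above open SSH, the web UI, and the Teleport service ports to 0.0.0.0/0. Per the production-guide note at the top of this file, a hardened variant would scope at least SSH to a trusted range; a sketch, where 203.0.113.0/24 is a documentation placeholder CIDR:

```hcl
// Restrict SSH ingress to a known CIDR instead of the whole internet
// (203.0.113.0/24 is a placeholder -- substitute your own range)
resource "aws_security_group_rule" "cluster_ingress_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.0/24"]
  security_group_id = aws_security_group.cluster.id
}
```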

+ 40 - 0
base/teleport-starter-cluster/data.tf

@@ -0,0 +1,40 @@
+provider "aws" {
+  region = var.region
+}
+
+data "aws_vpc" "default" {
+  default = true
+}
+
+data "aws_subnet_ids" "all" {
+  vpc_id = data.aws_vpc.default.id
+}
+
+data "aws_ami" "base" {
+  most_recent = true
+  owners      = ["126027368216"]
+
+  filter {
+    name   = "name"
+    values = [var.ami_name]
+  }
+}
+
+data "aws_route53_zone" "cluster" {
+  name = var.route53_zone
+}
+
+data "aws_caller_identity" "current" {
+}
+
+data "aws_region" "current" {
+  name = var.region
+}
+
+data "aws_availability_zones" "available" {
+}
+
+// KMS key alias that SSM uses to encrypt parameters
+data "aws_kms_alias" "ssm" {
+  name = var.kms_alias_name
+}

+ 18 - 0
base/teleport-starter-cluster/data.tpl

@@ -0,0 +1,18 @@
+#!/bin/bash
+cat >/etc/teleport.d/conf <<EOF
+TELEPORT_ROLE=auth,node,proxy
+EC2_REGION=${region}
+TELEPORT_AUTH_SERVER_LB=localhost
+TELEPORT_CLUSTER_NAME=${cluster_name}
+TELEPORT_DOMAIN_ADMIN_EMAIL=${email}
+TELEPORT_DOMAIN_NAME=${domain_name}
+TELEPORT_EXTERNAL_HOSTNAME=${domain_name}
+TELEPORT_DYNAMO_TABLE_NAME=${dynamo_table_name}
+TELEPORT_DYNAMO_EVENTS_TABLE_NAME=${dynamo_events_table_name}
+TELEPORT_LICENSE_PATH=${license_path}
+TELEPORT_LOCKS_TABLE_NAME=${locks_table_name}
+TELEPORT_PROXY_SERVER_LB=${domain_name}
+TELEPORT_S3_BUCKET=${s3_bucket}
+USE_LETSENCRYPT=${use_letsencrypt}
+USE_ACM=${use_acm}
+EOF
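
With this commit's Makefile defaults, the template above would render roughly as follows on the instance (a sketch; table names follow `dynamo.tf`, the rest come from the Makefile):

```bash
# /etc/teleport.d/conf as rendered from data.tpl (approximate)
TELEPORT_ROLE=auth,node,proxy
EC2_REGION=us-gov-east-1
TELEPORT_AUTH_SERVER_LB=localhost
TELEPORT_CLUSTER_NAME=teleporttest
TELEPORT_DOMAIN_ADMIN_EMAIL=xdr.eng@accenturefederal.com
TELEPORT_DOMAIN_NAME=teleporttest
TELEPORT_EXTERNAL_HOSTNAME=teleporttest
TELEPORT_DYNAMO_TABLE_NAME=teleporttest
TELEPORT_DYNAMO_EVENTS_TABLE_NAME=teleporttest-events
TELEPORT_LICENSE_PATH=~/Downloads/license.pem
TELEPORT_LOCKS_TABLE_NAME=teleporttest-locks
TELEPORT_PROXY_SERVER_LB=teleporttest
TELEPORT_S3_BUCKET=xdr-teleporttest
USE_LETSENCRYPT=true
USE_ACM=false
```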

+ 135 - 0
base/teleport-starter-cluster/dynamo.tf

@@ -0,0 +1,135 @@
+/* 
+DynamoDB is used to store cluster state, event
+metadata, and a simple locking mechanism for SSL 
+cert generation and renewal.
+*/
+
+// DynamoDB table for storing cluster state
+resource "aws_dynamodb_table" "teleport" {
+  name           = var.cluster_name
+  read_capacity  = 10
+  write_capacity = 10
+  hash_key       = "HashKey"
+  range_key      = "FullPath"
+  server_side_encryption {
+    enabled = true
+  }
+
+  lifecycle {
+    ignore_changes = [
+      read_capacity,
+      write_capacity,
+    ]
+  }
+
+  attribute {
+    name = "HashKey"
+    type = "S"
+  }
+
+  attribute {
+    name = "FullPath"
+    type = "S"
+  }
+
+  stream_enabled   = "true"
+  stream_view_type = "NEW_IMAGE"
+
+  ttl {
+    attribute_name = "Expires"
+    enabled        = true
+  }
+
+  tags = {
+    TeleportCluster = var.cluster_name
+  }
+}
+
+// DynamoDB table for storing cluster events
+resource "aws_dynamodb_table" "teleport_events" {
+  name           = "${var.cluster_name}-events"
+  read_capacity  = 10
+  write_capacity = 10
+  hash_key       = "SessionID"
+  range_key      = "EventIndex"
+
+  server_side_encryption {
+    enabled = true
+  }
+
+  global_secondary_index {
+    name            = "timesearch"
+    hash_key        = "EventNamespace"
+    range_key       = "CreatedAt"
+    write_capacity  = 10
+    read_capacity   = 10
+    projection_type = "ALL"
+  }
+
+  lifecycle {
+    ignore_changes = [
+      read_capacity,
+      write_capacity,
+    ]
+  }
+
+  attribute {
+    name = "SessionID"
+    type = "S"
+  }
+
+  attribute {
+    name = "EventIndex"
+    type = "N"
+  }
+
+  attribute {
+    name = "EventNamespace"
+    type = "S"
+  }
+
+  attribute {
+    name = "CreatedAt"
+    type = "N"
+  }
+
+  ttl {
+    attribute_name = "Expires"
+    enabled        = true
+  }
+
+  tags = {
+    TeleportCluster = var.cluster_name
+  }
+}
+
+// DynamoDB table for simple locking mechanism
+resource "aws_dynamodb_table" "teleport_locks" {
+  name           = "${var.cluster_name}-locks"
+  read_capacity  = 5
+  write_capacity = 5
+  hash_key       = "Lock"
+
+  billing_mode = "PROVISIONED"
+
+  lifecycle {
+    ignore_changes = [
+      read_capacity,
+      write_capacity,
+    ]
+  }
+
+  attribute {
+    name = "Lock"
+    type = "S"
+  }
+
+  ttl {
+    attribute_name = "Expires"
+    enabled        = true
+  }
+
+  tags = {
+    TeleportCluster = var.cluster_name
+  }
+}

+ 14 - 0
base/teleport-starter-cluster/route53.tf

@@ -0,0 +1,14 @@
+/* 
+Route53 is used to configure SSL for this cluster. A
+Route53 hosted zone must exist in the AWS account for
+this automation to work. 
+*/
+
+// Create an A record pointing at the instance's public IP
+resource "aws_route53_record" "cluster" {
+  zone_id = data.aws_route53_zone.cluster.zone_id
+  name    = var.route53_domain
+  type    = "A"
+  ttl     = "300"
+  records = [aws_instance.cluster.public_ip]
+}

+ 20 - 0
base/teleport-starter-cluster/s3.tf

@@ -0,0 +1,20 @@
+/* 
+Configuration of S3 bucket for certs and replay
+storage. Uses server side encryption to secure
+session replays and SSL certificates.
+*/
+
+// S3 bucket for cluster storage
+resource "aws_s3_bucket" "storage" {
+  bucket        = var.s3_bucket_name
+  acl           = "private"
+  force_destroy = true
+
+  server_side_encryption_configuration {
+    rule {
+      apply_server_side_encryption_by_default {
+        sse_algorithm = "AES256"
+      }
+    }
+  }
+}
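
The IAM policy in `cluster_iam.tf` grants `s3:ListBucketVersions` and `s3:GetObjectVersion`, which only matter if bucket versioning is enabled; the bucket above never turns it on. A hedged sketch of adding it, using the pre-4.0 AWS provider's inline `versioning` block:

```hcl
resource "aws_s3_bucket" "storage" {
  bucket        = var.s3_bucket_name
  acl           = "private"
  force_destroy = true

  // Optional: version objects so the *Version IAM grants have effect
  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```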

+ 8 - 0
base/teleport-starter-cluster/ssm.tf

@@ -0,0 +1,8 @@
+// SSM for Teleport Enterprise license storage and retrieval
+resource "aws_ssm_parameter" "license" {
+  count     = var.license_path != "" ? 1 : 0
+  name      = "/teleport/${var.cluster_name}/license"
+  type      = "SecureString"
+  value     = file(var.license_path)
+  overwrite = true
+}

+ 70 - 0
base/teleport-starter-cluster/vars.tf

@@ -0,0 +1,70 @@
+// AWS region to deploy into; must support DynamoDB
+variable "region" {
+  type = string
+}
+
+// Teleport cluster name to set up
+variable "cluster_name" {
+  type = string
+}
+
+// Path to Teleport Enterprise license file
+variable "license_path" {
+  type    = string
+  default = ""
+}
+
+// AMI name to use
+variable "ami_name" {
+  type = string
+}
+
+// DNS and letsencrypt integration variables
+// Zone name to host DNS record, e.g. example.com
+variable "route53_zone" {
+  type = string
+}
+
+// Domain name to use for Teleport proxy,
+// e.g. proxy.example.com
+variable "route53_domain" {
+  type = string
+}
+
+// S3 Bucket to create for encrypted letsencrypt certificates
+variable "s3_bucket_name" {
+  type = string
+}
+
+// Email for LetsEncrypt domain registration
+variable "email" {
+  type = string
+}
+
+// SSH key name to provision instances with
+variable "key_name" {
+  type = string
+}
+
+// Whether to use LetsEncrypt to provision and renew certificates
+variable "use_letsencrypt" {
+  type = string
+}
+
+// Whether to use Amazon-issued certificates via ACM or not
+// This must be set to true for any use of ACM whatsoever, regardless of whether Terraform generates/approves the cert
+variable "use_acm" {
+  type = string
+}
+
+// KMS alias used to encrypt/decrypt SSM parameters
+variable "kms_alias_name" {
+  type    = string
+  default = "alias/aws/ssm"
+}
+
+// Instance type for cluster
+variable "cluster_instance_type" {
+  type    = string
+  default = "t3.nano"
+}