If you are used to Docker, you know that containers are ephemeral: shut one down or spawn a new one, and anything you wrote to its disk is gone. That's one of the reasons a lot of people just write their files to S3 buckets instead.
But what if you are hosting your own database? Or your own Nginx caching layer? Or what if you want to save your files directly to disk for performance or security reasons?
EBS & EFS volumes
Until Fargate platform version 1.4, I believe tasks only had access to ephemeral task storage; only after the release of platform version 1.4 could you get EFS volumes attached to your Docker containers. Make sure you understand the differences in performance, cost & limitations before choosing where your data lives.
For the sake of example, I'm going to show you how to attach an EFS volume to a Docker container running on AWS ECS Fargate, using Terraform.
Creating your EFS Volume
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_file_system
resource "aws_efs_file_system" "efs-nginx" {
  creation_token   = "efs-nginx"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = true

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }
}
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/efs_mount_target
resource "aws_efs_mount_target" "efs-mt-nginx" {
  file_system_id  = aws_efs_file_system.efs-nginx.id
  subnet_id       = "my-subnet-id"
  security_groups = [aws_security_group.nginx-sg.id]
}
The above creates the EFS file system and a mount target, so the volume can later be attached by our containers.
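Keep in mind that EFS needs one mount target per Availability Zone your tasks run in. If your service spans multiple subnets, the single mount target above can be generalized with for_each (the subnet ids below are placeholders for your own):

```hcl
resource "aws_efs_mount_target" "efs-mt-nginx" {
  # Assumption: one subnet per Availability Zone your tasks use.
  for_each = toset(["my-subnet-a", "my-subnet-b"])

  file_system_id  = aws_efs_file_system.efs-nginx.id
  subnet_id       = each.value
  security_groups = [aws_security_group.nginx-sg.id]
}
```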
Task Definition
resource "aws_ecs_task_definition" "nginx" {
  family                   = "nginx"
  task_role_arn            = aws_iam_role.nginx_execution_role.arn
  execution_role_arn       = aws_iam_role.nginx_execution_role.arn
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = 512
  memory                   = 1024
  container_definitions    = file("task-definitions/nginx.json")

  volume {
    name = "nginxCache"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.efs-nginx.id
      root_directory = "/"
    }
  }
}
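Since EFS support only landed in Fargate platform version 1.4, the ECS service that runs this task should pin that version explicitly rather than rely on the default. A sketch, assuming your own cluster and subnet values:

```hcl
resource "aws_ecs_service" "nginx" {
  name             = "nginx"
  cluster          = "my-cluster-arn" # assumption: your ECS cluster
  task_definition  = aws_ecs_task_definition.nginx.arn
  desired_count    = 2
  launch_type      = "FARGATE"
  platform_version = "1.4.0" # EFS volumes require platform version >= 1.4.0

  network_configuration {
    subnets         = ["my-subnet-id"]
    security_groups = [aws_security_group.nginx-sg.id]
  }
}
```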
The task-definitions/nginx.json file is pretty standard. The only thing you need to add to your own task definition is:
"mountPoints": [
  {
    "sourceVolume": "nginxCache",
    "containerPath": "/nginx/cache"
  }
],
"ulimits": [
  {
    "name": "nofile",
    "softLimit": 999999,
    "hardLimit": 999999
  }
]
Two things to pay attention to here:
- The Nginx cache folder is /nginx/cache, so that is our mount point.
- 99% of you might never hit this problem, but sometimes the default limits imposed by the ECS Docker environment won't match your needs; in this case the default is nofile=1024:4096, which can be too low for a busy Nginx.
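For context, here is what a minimal task-definitions/nginx.json carrying those two settings could look like. This is a sketch: the container name, image tag, and port mapping are assumptions, and logging configuration is omitted.

```json
[
  {
    "name": "nginx",
    "image": "nginx:latest",
    "essential": true,
    "portMappings": [
      { "containerPort": 80, "protocol": "tcp" }
    ],
    "mountPoints": [
      {
        "sourceVolume": "nginxCache",
        "containerPath": "/nginx/cache"
      }
    ],
    "ulimits": [
      {
        "name": "nofile",
        "softLimit": 999999,
        "hardLimit": 999999
      }
    ]
  }
]
```

Note that "sourceVolume" must match the volume name declared in the Terraform task definition.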
Security Groups
Another, not so obvious, configuration you need to pay attention to: the security group shared between your EFS mount target and your application (Nginx) needs an ingress rule allowing TCP traffic on port 2049, the standard NFS port on AWS.
ingress {
  from_port = 2049
  to_port   = 2049
  protocol  = "tcp"
  self      = true
}
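Putting it together, the nginx-sg referenced by the mount target could look something like this. The VPC id is an assumption, and self = true is what lets members of the group (your Fargate tasks and the mount target) talk to each other over NFS:

```hcl
resource "aws_security_group" "nginx-sg" {
  name   = "nginx-sg"
  vpc_id = "my-vpc-id" # assumption: your VPC id

  # Allow NFS (TCP 2049) between members of this security group,
  # i.e. between the Fargate tasks and the EFS mount target.
  ingress {
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    self      = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```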
Without this, your fate is to deal with all kinds of networking errors, with tons of messages that don't explain what is going on, and your mount won't work.
What now?
- When your Nginx containers boot up, the cache will be shared between them, regardless of how many containers you have. The same approach can be applied to whatever use case you might have.
- Caching Nginx on EFS might not be the most glamorous example, but it is still valid for any sort of long-term persistence and shared state between containers.
Stay safe!