Amazon Elastic File System (EFS) provides shared, persistent storage for your ECS containers. Use it when your agents need data to survive container restarts or deployments.

When to Use EFS

  • Agents like Pal use DuckDB to store structured data locally. Without EFS, this data is lost when containers restart.
  • Each ag infra up creates new containers. EFS ensures your data persists across deployments.
  • If you scale to multiple ECS tasks, EFS provides a shared filesystem that all containers can access.

Pal Agent Requirements

The Pal agent requires EFS in production. Without EFS, Pal’s DuckDB data (notes, bookmarks, research) is lost whenever containers restart or redeploy.
Agent           | EFS Required? | Why
Pal             | Yes           | Stores DuckDB at /data/pal.db
Knowledge Agent | No            | Uses PostgreSQL (RDS)
MCP Agent       | No            | Stateless

DuckDB Single Worker Requirement

DuckDB requires single-writer access. The template sets --workers 1 in the uvicorn command. Do not increase workers if using Pal or any DuckDB-based agent.
Multiple workers cause “database is locked” errors and potential data corruption.
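For reference, the serve command in the template looks roughly like this (the module path and port are illustrative assumptions, not copied from the template):
uvicorn main:app --host 0.0.0.0 --port 8000 --workers 1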

Data Persistence Summary

Data Type                       | Storage          | Survives Restart?
Agent memory & sessions         | PostgreSQL (RDS) | ✓ Yes
Knowledge embeddings            | PostgreSQL (RDS) | ✓ Yes
Pal’s DuckDB (notes, bookmarks) | Local /data      | ✗ No (needs EFS)
File uploads                    | Local filesystem | ✗ No (needs EFS or S3)

Architecture

ECS tasks mount the EFS file system at /data through mount targets in the subnets your tasks run in; access is restricted by a dedicated EFS security group that allows NFS traffic from the app containers.
Setup Guide

1. Create an EFS File System

Create a new file system in your AWS region:
aws efs create-file-system \
  --creation-token agentos-efs \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --region us-east-1
Save the FileSystemId from the response (e.g., fs-0123456789abcdef0).
Use generalPurpose performance mode for most workloads. Only use maxIO for highly parallelized applications.
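If you lose the response, you can look the ID up again by creation token (a quick check using the token from the command above):
aws efs describe-file-systems \
  --creation-token agentos-efs \
  --query 'FileSystems[0].FileSystemId' \
  --output text \
  --region us-east-1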
2. Create an Access Point

Access points provide application-specific entry points with user/permission mapping:
aws efs create-access-point \
  --file-system-id fs-0123456789abcdef0 \
  --posix-user Uid=61000,Gid=61000 \
  --root-directory "Path=/data,CreationInfo={OwnerUid=61000,OwnerGid=61000,Permissions=755}" \
  --region us-east-1
Save the AccessPointId from the response (e.g., fsap-0123456789abcdef0).
The UID/GID 61000 matches the non-root user in the AgentOS container. This ensures your application can read and write to EFS.
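To double-check the mapping, you can inspect the access point’s POSIX user (a sketch using the example ID from above):
aws efs describe-access-points \
  --access-point-id fsap-0123456789abcdef0 \
  --query 'AccessPoints[0].PosixUser' \
  --region us-east-1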
3. Configure Infrastructure Settings

Update infra/settings.py with your EFS IDs:
infra/settings.py
infra_settings = InfraSettings(
    ...
    # EFS for persistent storage
    efs_file_system_id="fs-0123456789abcdef0",
    efs_access_point_id="fsap-0123456789abcdef0",
)
4. Create Mount Targets

EFS needs mount targets in each subnet your ECS tasks use. First, deploy to create the EFS security group:
ag infra up prd:aws
Then get the security group ID:
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=*-efs-sg" \
  --query 'SecurityGroups[0].GroupId' \
  --output text
Create mount targets in each subnet:
# Replace with your subnet IDs and security group
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0abc123def456789a \
  --security-groups sg-0123456789abcdef0

aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0def456789abc123b \
  --security-groups sg-0123456789abcdef0
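Mount targets take a minute or two to become available. Before redeploying, you can poll their state (IDs are the examples from the steps above):
aws efs describe-mount-targets \
  --file-system-id fs-0123456789abcdef0 \
  --query 'MountTargets[*].[MountTargetId,SubnetId,LifeCycleState]' \
  --output table
Wait until every mount target reports available.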
5. Verify and Redeploy

Redeploy to pick up the EFS configuration:
ag infra up prd:aws -y
Verify the mount by checking your container logs:
aws logs tail /ecs/{infra_name}-prd --follow
You should see your application start without errors. Data written to /data now persists across restarts.
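If ECS Exec is enabled on your service (an assumption; it is not configured in this guide), you can also confirm the mount from inside a running task. Cluster, task, and container names below are placeholders:
aws ecs execute-command \
  --cluster <cluster-name> \
  --task <task-id> \
  --container <container-name> \
  --interactive \
  --command "df -h /data"
The output should show a filesystem mounted at /data.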

Settings Reference

Setting             | Type | Description
efs_file_system_id  | str  | EFS file system ID (e.g., fs-0123456789abcdef0)
efs_access_point_id | str  | Access point ID (e.g., fsap-0123456789abcdef0). Optional but recommended for permission mapping.

How It Works

When you configure EFS settings, the infrastructure automatically:
  1. Creates a security group (*-efs-sg) allowing NFS traffic (port 2049) from your app containers
  2. Configures an ECS volume with transit encryption enabled
  3. Mounts the volume at /data in your container
The relevant code in prd_resources.py:
# EFS Volume configuration
prd_efs_volume = EcsVolume(
    name="efs-data-volume",
    efs_volume_configuration={
        "fileSystemId": efs_file_system_id,
        "transitEncryption": "ENABLED",
        "authorizationConfig": {
            "accessPointId": efs_access_point_id,
            "iam": "DISABLED",
        },
    },
)

# Mount point in FastApi app
prd_fastapi = FastApi(
    ...
    ecs_volumes=[prd_efs_volume],
    ecs_container_mount_points=[
        {"sourceVolume": "efs-data-volume", "containerPath": "/data"}
    ],
)

Cost

EFS pricing is based on storage used:
Storage Class     | Price (US East)
Standard          | $0.30/GB-month
Infrequent Access | $0.016/GB-month
Archive           | $0.008/GB-month
Example costs:
  • 1 GB of agent data: ~$0.30/month
  • 10 GB of documents: ~$3.00/month
There’s no minimum fee. You only pay for what you use.
Enable lifecycle policies to automatically move infrequently accessed files to cheaper storage classes.
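For example, a policy that moves files to Infrequent Access after 30 days without access can be applied with a single call (a sketch using the example file system ID from the setup steps):
aws efs put-lifecycle-configuration \
  --file-system-id fs-0123456789abcdef0 \
  --lifecycle-policies "TransitionToIA=AFTER_30_DAYS"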

Troubleshooting

Ensure you’ve created mount targets in the same subnets specified in infra/settings.py. Each subnet needs its own mount target.
aws efs describe-mount-targets --file-system-id fs-xxx
Check that your access point uses UID/GID 61000 to match the container user. Verify with:
aws efs describe-access-points --access-point-id fsap-xxx
Check that the EFS security group allows inbound NFS (port 2049) from your app security group:
aws ec2 describe-security-groups --filters "Name=group-name,Values=*-efs-sg"
Ensure your application writes to /data, not another directory. Check your DATA_DIR environment variable.
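One way to check is to inspect the container environment in the registered task definition (the task definition name below follows the log-group naming pattern used earlier and is an assumption):
aws ecs describe-task-definition \
  --task-definition {infra_name}-prd \
  --query 'taskDefinition.containerDefinitions[0].environment'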