Grafana Cloud

Collect Amazon Elastic Container Service or AWS Fargate OpenTelemetry data

You can configure Grafana Alloy or AWS ADOT to collect OpenTelemetry-compatible data from Amazon Elastic Container Service (ECS) or AWS Fargate and forward it to any OpenTelemetry-compatible endpoint.

Metrics are available from several sources: ECS itself, the underlying instances when you run on EC2, X-Ray, and your own applications. You can also collect logs and traces from applications instrumented for Prometheus or OTLP.

  1. Collect task and container metrics
  2. Collect application telemetry
  3. Collect EC2 instance metrics
  4. Collect application logs

Before you begin

  • Ensure that you have basic familiarity with instrumenting applications with OpenTelemetry.
  • Have an available Amazon ECS or AWS Fargate deployment.
  • Identify where Alloy writes received telemetry data.
  • Be familiar with the concept of Components in Alloy.

Collect task and container metrics

In this configuration, you add an OpenTelemetry Collector to the task running your application, and it uses the ECS task metadata endpoint to gather task and container metrics in your cluster.

You can choose between two collector implementations:

  • You can use ADOT, the AWS Distro for OpenTelemetry Collector. ADOT has native support for scraping task and container metrics, and it ships with default configurations that you can select in the task definition.

  • Alternatively, you can use Alloy as a collector alongside the Prometheus ECS exporter, which exposes the ECS metadata endpoint metrics in Prometheus format.

Configure ADOT

If you use ADOT as a collector, add a container to your task definition and use a custom configuration you define in your AWS Systems Manager Parameter Store.

You can find sample OpenTelemetry configuration files in the AWS Observability repository. You can use these samples as a starting point and add the appropriate exporter configuration to send metrics to a Prometheus or OpenTelemetry endpoint.

Refer to otel-prometheus to learn how to configure the Prometheus remote write exporter (Amazon Managed Service for Prometheus in the example).
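As a sketch, an ADOT configuration that forwards task metrics to Amazon Managed Service for Prometheus might combine the awsecscontainermetrics receiver with the prometheusremotewrite exporter and the sigv4auth extension. The region, workspace ID, and scrape interval below are placeholders to replace with your own values:

```yaml
receivers:
  awsecscontainermetrics:
    collection_interval: 30s  # example interval, adjust as needed

extensions:
  sigv4auth:
    region: "<region>"  # placeholder

exporters:
  prometheusremotewrite:
    # Placeholder endpoint for an Amazon Managed Service for Prometheus workspace
    endpoint: "https://aps-workspaces.<region>.amazonaws.com/workspaces/<workspace-id>/api/v1/remote_write"
    auth:
      authenticator: sigv4auth

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      receivers: [awsecscontainermetrics]
      exporters: [prometheusremotewrite]
```

To send data to a non-AWS Prometheus-compatible endpoint instead, replace the sigv4auth extension with the authentication mechanism your endpoint expects.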

Complete the following steps to create a sample task. Refer to the ADOT documentation for more information.

  1. Create a Parameter Store entry to hold the collector configuration file.

    1. Open the AWS Console.

    2. In the AWS Console, choose Parameter Store.

    3. Choose Create parameter.

    4. Create a parameter with the following values:

      • Name: collector-config
      • Tier: Standard
      • Type: String
      • Data type: Text
      • Value: Copy and paste your custom OpenTelemetry configuration file.
  2. Download the ECS Fargate or ECS EC2 task definition template from GitHub.

  3. Edit the task definition template and add the following parameters.

    • {{region}}: The region to send the data to.
    • {{ecsTaskRoleArn}}: The AWSOTTaskRole ARN.
    • {{ecsTaskExecutionRoleArn}}: The AWSOTTaskExecutionRole ARN.
    • Add an environment variable named AOT_CONFIG_CONTENT. Select ValueFrom to tell ECS to get the value from the Parameter Store, and set the value to collector-config.
  4. Follow the ECS Fargate setup instructions to create a task definition using the template.
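For illustration, the relevant fragment of the edited task definition could look like the following. The container name and image tag are examples, and collector-config must match the name of the Parameter Store entry you created in step 1:

```json
{
  "containerDefinitions": [
    {
      "name": "aws-otel-collector",
      "image": "amazon/aws-otel-collector:latest",
      "secrets": [
        {
          "name": "AOT_CONFIG_CONTENT",
          "valueFrom": "collector-config"
        }
      ]
    }
  ]
}
```

The secrets block with valueFrom is how ECS injects the Parameter Store content into the AOT_CONFIG_CONTENT environment variable at task startup.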

Configure Alloy

Use the following as a starting point for your Alloy configuration:

Alloy
prometheus.scrape "stats" {
  targets    = [
    { "__address__" = "localhost:9779" },
  ]
  metrics_path = "/metrics"
  scheme       = "http"
  forward_to   = [prometheus.remote_write.default.receiver]
}

// Additional OpenTelemetry configuration as in [ecs-default-config]
// OTLP receiver
// statsd
// Use the alloy convert command to use one of the AWS ADOT files
// https://grafana.com/docs/alloy/latest/reference/cli/convert/
...

prometheus.remote_write "default" {
  endpoint {
    url = sys.env("PROMETHEUS_REMOTE_WRITE_URL")

    basic_auth {
      username = sys.env("PROMETHEUS_USERNAME")
      password = sys.env("PROMETHEUS_PASSWORD")
    }
  }
}

This configuration sets up a scrape job for the container metrics and exports them to a Prometheus-compatible endpoint.

Complete the following steps to create a sample task.

  1. Create a Parameter Store entry to hold the collector configuration file.

    1. Open the AWS Console.

    2. In the AWS Console, choose Parameter Store.

    3. Choose Create parameter.

    4. Create a parameter with the following values:

      • Name: collector-config
      • Tier: Standard
      • Type: String
      • Data type: Text
      • Value: Copy and paste your custom Alloy configuration file.
  2. Download the ECS Fargate or ECS EC2 task definition template from GitHub.

  3. Edit the task definition template and add the following parameters.

    • {{region}}: The region to send the data to.

    • {{ecsTaskRoleArn}}: The AWSOTTaskRole ARN.

    • {{ecsTaskExecutionRoleArn}}: The AWSOTTaskExecutionRole ARN.

    • Set the container image to grafana/alloy:<VERSION>, for example grafana/alloy:latest or a specific version such as grafana/alloy:v1.5.0.

    • Add a custom environment variable named ALLOY_CONFIG_CONTENT. Select ValueFrom to tell ECS to get the value from the Parameter Store, and set the value to collector-config. Alloy doesn’t read this variable directly, but you use it with the command below to pass the configuration.

    • Add environment variables for Prometheus remote write:

      • PROMETHEUS_REMOTE_WRITE_URL
      • PROMETHEUS_USERNAME
      • PROMETHEUS_PASSWORD - For increased security, create a password in AWS Secrets Manager and reference the ARN of the secret in the ValueFrom field.
    • In the Docker configuration, change the Entrypoint to bash,-c.

    • {{command}}: "echo \"$ALLOY_CONFIG_CONTENT\" > /tmp/config_file && exec alloy run --server.http.listen-addr=0.0.0.0:12345 /tmp/config_file". This command writes the configuration from the environment variable to a file and then runs Alloy with that configuration. Make sure you don’t omit the double quotes around the command.

    • Alloy doesn’t support collecting container metrics from the ECS metadata endpoint directly. If you need those metrics, add a second container running the Prometheus exporter:

      1. Add a container to the task.
      2. Set the container name to ecs-exporter.
      3. Set the image to quay.io/prometheuscommunity/ecs-exporter:latest.
      4. Add tcp/9779 as a port mapping.
  4. Follow the ECS Fargate setup instructions to create a task definition using the template.
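As an illustration, the container definitions produced by the steps above could look like the following fragment. The image tags, environment values, and ARNs are placeholders; collector-config must match your Parameter Store entry, and for PROMETHEUS_PASSWORD prefer a secrets entry referencing AWS Secrets Manager:

```json
{
  "containerDefinitions": [
    {
      "name": "alloy",
      "image": "grafana/alloy:latest",
      "entryPoint": ["bash", "-c"],
      "command": [
        "echo \"$ALLOY_CONFIG_CONTENT\" > /tmp/config_file && exec alloy run --server.http.listen-addr=0.0.0.0:12345 /tmp/config_file"
      ],
      "secrets": [
        { "name": "ALLOY_CONFIG_CONTENT", "valueFrom": "collector-config" }
      ],
      "environment": [
        { "name": "PROMETHEUS_REMOTE_WRITE_URL", "value": "https://<your-prometheus-endpoint>/api/prom/push" },
        { "name": "PROMETHEUS_USERNAME", "value": "<username>" }
      ]
    },
    {
      "name": "ecs-exporter",
      "image": "quay.io/prometheuscommunity/ecs-exporter:latest",
      "portMappings": [
        { "containerPort": 9779, "protocol": "tcp" }
      ]
    }
  ]
}
```

The Alloy container scrapes the ecs-exporter container on port 9779, matching the localhost:9779 target in the sample Alloy configuration.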

Collect EC2 instance metrics

For ECS Clusters running on EC2, you can collect instance metrics by using AWS ADOT or Alloy in a separate ECS task deployed as a daemon.

Alloy

You can follow the steps described in Configure Alloy to create another task, with the following changes:

  • Only add the Alloy container, not the Prometheus exporter, and run the task as a daemon so it automatically runs one instance per node in your cluster.
  • Update your Alloy configuration to collect metrics from the instance. The configuration varies depending on the type of EC2 node. Refer to the collect documentation for details.
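As a starting point, a minimal Alloy configuration for node-level metrics on Linux EC2 instances could use the prometheus.exporter.unix component. This is a sketch; adjust the enabled collectors and the remote write settings for your environment:

```alloy
// Expose node-level (host) metrics from the EC2 instance.
prometheus.exporter.unix "node" { }

// Scrape the exporter and forward the samples.
prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = sys.env("PROMETHEUS_REMOTE_WRITE_URL")

    basic_auth {
      username = sys.env("PROMETHEUS_USERNAME")
      password = sys.env("PROMETHEUS_PASSWORD")
    }
  }
}
```

Because the task runs as a daemon, one copy of this configuration runs on each EC2 node in the cluster.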

ADOT

The approach described in the AWS OpenTelemetry documentation uses the OpenTelemetry awscontainerinsightreceiver receiver, which ADOT includes.

You need to use a custom configuration Parameter Store entry based on the sample configuration file to route the telemetry to your final destination.
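A sketch of such a configuration pairs the awscontainerinsightreceiver with a Prometheus remote write exporter. The orchestrator setting and endpoint below are illustrative; base your entry on the sample configuration file and your own destination:

```yaml
receivers:
  awscontainerinsightreceiver:
    container_orchestrator: ecs

exporters:
  prometheusremotewrite:
    # Placeholder: replace with your Prometheus-compatible endpoint
    endpoint: "https://<your-prometheus-endpoint>/api/v1/remote_write"

service:
  pipelines:
    metrics:
      receivers: [awscontainerinsightreceiver]
      exporters: [prometheusremotewrite]
```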

Collect application telemetry

To collect metrics and traces emitted by your application, use the OTLP endpoints exposed by the collector sidecar container regardless of the collector implementation. Specify localhost as the host name, which is the default for most instrumentation libraries and agents.

For Prometheus endpoints, add a scrape job to the ADOT or Alloy configuration, using localhost as the host along with the service's port and metrics path.
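For example, assuming the prometheus.remote_write component from the earlier Alloy configuration, an Alloy sidecar could receive OTLP data from the application and scrape a hypothetical Prometheus endpoint on port 8080:

```alloy
// Receive OTLP metrics and traces from the application over localhost.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  http {
    endpoint = "0.0.0.0:4318"
  }

  output {
    metrics = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

// Forward OTLP data to your OpenTelemetry-compatible endpoint.
otelcol.exporter.otlp "default" {
  client {
    endpoint = sys.env("OTLP_ENDPOINT")
  }
}

// Scrape a Prometheus endpoint exposed by the application in the same task.
// The port and path are examples; match your service.
prometheus.scrape "app" {
  targets = [
    { "__address__" = "localhost:8080" },
  ]
  metrics_path = "/metrics"
  forward_to   = [prometheus.remote_write.default.receiver]
}
```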

Collect logs

The easiest way to collect application logs in ECS is to use the AWS FireLens log driver. Depending on your use case, you can forward your logs to the collector container in your task using the Fluent Bit plugin for OpenTelemetry or using the Fluent Bit Loki plugin. You can also send everything directly to your final destination.
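As an illustration, a container's logConfiguration could route logs through FireLens using the Fluent Bit Loki plugin. The URL, credentials, and label keys below are examples to replace with your own:

```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "loki",
      "Url": "https://<username>:<password>@<your-loki-endpoint>/loki/api/v1/push",
      "Labels": "{job=\"firelens\"}",
      "RemoveKeys": "container_id,ecs_task_arn",
      "LabelKeys": "container_name,ecs_task_definition,source,ecs_cluster",
      "LineFormat": "key_value"
    }
  }
}
```

This also requires a FireLens log router container (Fluent Bit with the Loki plugin) in the task, declared with a firelensConfiguration block.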