May 21, 2020

Running Load Tests From the Cloud With k6

Our teams have been using k6 for a while to load test our applications and we’ve yet to find a scenario that it can’t handle. k6 load tests are written in JavaScript so you not only get the full power of ES6 but also a ton of useful k6 modules and the ability to load your own custom modules. As we already use JavaScript/node for writing acceptance tests, k6 was a perfect match for load testing.
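
For anyone new to k6, a minimal test script looks something like the following sketch (the URL and stages here are placeholders rather than one of our real tests):

import http from 'k6/http';
import { check, sleep } from 'k6';

// Default load profile; can be overridden from the CLI (--stage) or via K6_STAGES.
export let options = {
  stages: [
    { duration: '1m', target: 10 },
    { duration: '2m', target: 100 },
  ],
};

export default function () {
  // Placeholder endpoint - our real tests exercise the payment gateway API.
  let res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}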

There are a few ways to run k6 load tests:

  1. Install k6 on your machine
  2. Run tests with the k6 Docker image
  3. Use k6 cloud, the commercial side of k6 (previously LoadImpact)

We already use k6 cloud as it provides a convenient way to run and monitor load tests. Running from the cloud is as simple as executing k6 cloud instead of k6 run:

k6 cloud -e GWC_URL=http://example.com -e GWC_APIKEY=sk_xxx --stage "1m:10,2m:100,3m:500,5m:1000" card-payment.js

Once the test has been uploaded to k6 cloud you can run or even schedule it directly from the k6 web app.

There are a few cases where none of the above methods for running k6 are suitable:

  • You need to run load tests against an HTTP application inside your private network (e.g. internal APIs)
  • You do not wish to make your load tests public (k6 run natively supports loading scripts from a URL)
  • You have complex load tests or use private modules making the default docker image unsuitable
  • You want to run extended soak tests that are not covered by your k6 cloud plan

Our solution to the above is to build our own Docker image that contains our load tests and private modules and run it on our own infrastructure. We structure our load tests as follows:

├── Dockerfile
├── README.md
└── test
    ├── card-payment-auto-capture.js
    ├── card-payment-void.js
    ├── card-payment.js
    └── lib
        ├── config.js
        ├── payments-client.js
        └── test-helper.js
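
The contents of lib/ aren’t shown here, but to give a flavour, a config module might do little more than expose the environment variables we pass in with -e, which k6 surfaces on the global __ENV object (this is an illustrative sketch, not the real module):

// lib/config.js - hypothetical sketch
export const config = {
  baseUrl: __ENV.GWC_URL,    // e.g. http://example.com
  apiKey: __ENV.GWC_APIKEY,  // e.g. sk_xxx
};

// card-payment.js (and the other tests) can then import it with a relative path:
// import { config } from './lib/config.js';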

The Dockerfile is relatively straightforward since we can use the existing k6 image as a base. Note that we need to reset the entrypoint of the base image so we can override the command that is run:

FROM loadimpact/k6:0.26.1
ENV SCRIPT card-payment.js
COPY ./test /test
WORKDIR /test
# Override the entry point of the base k6 image
ENTRYPOINT []
CMD ["sh", "-c", "k6 run $SCRIPT"]

We use the SCRIPT environment variable to indicate the load test we wish to run. This can be combined with any of k6’s own environment variables, such as K6_STAGES, which sets multiple stages of virtual users and durations.
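
With this in place we can build the image and run any of the tests locally by overriding those variables (assuming we tag the image gateway-load-tests):

docker build -t gateway-load-tests .
docker run --rm \
  -e SCRIPT=card-payment-void.js \
  -e K6_STAGES=1m:10,2m:100 \
  -e GWC_URL=http://example.com \
  -e GWC_APIKEY=sk_xxx \
  gateway-load-tests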

Running k6 Load Tests from AWS

We run the majority of our applications on Amazon ECS and it’s also a great fit for running our load tests. ECS lets you not only run Docker containers as long-running services (e.g. an API) but also run tasks (containers) ad hoc or on a schedule.

In order to scale up our load tests we can make use of Fargate which, as the strapline suggests, allows you to “run containers without managing the underlying servers/clusters”. Auto-scaling is built-in and we can optimise costs by not paying for any hardware when our load tests are not running.

The ECS task definition isn’t particularly special so I’ve omitted some of the defaults:

{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "dnsSearchDomains": null,
      "environmentFiles": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "secretOptions": null,
        "options": {
          "awslogs-group": "/ecs/gateway-load-tests",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "entryPoint": [],
      "portMappings": [],
      "command": [],
      "linuxParameters": null,
      "cpu": 0,
      "environment": [
        {
          "name": "GWC_APIKEY",
          "value": "sk_XXX"
        },
        {
          "name": "GWC_URL",
          "value": "http://example.com"
        },
        {
          "name": "k6_STAGES",
          "value": "1m:1,2m:3,7m:10"
        },
        {
          "name": "SCRIPT",
          "value": "card-payment.js"
        }
      ],
      "resourceRequirements": null,
      "ulimits": [
        {
          "name": "nofile",
          "softLimit": 10000,
          "hardLimit": 50000
        }
      ],
      "image": "123456789.dkr.ecr.eu-west-2.amazonaws.com/ckotech/gateway-load-tests:latest",
      "name": "gateway-load-tests"
    }
  ],
  "placementConstraints": [],
  "memory": "512",
  "compatibilities": [
    "FARGATE"
  ],
  "taskDefinitionArn": "arn:aws:ecs:eu-west-2:123456789:task-definition/gateway-load-tests-fargate:1",
  "family": "gateway-load-tests",
  "networkMode": "awsvpc",
  "cpu": "256",
}

Note that we’re increasing the nofile ulimits, which you’ll likely need to do if you’re making a lot of concurrent HTTP requests. I’ve also set some default environment variables for the load test task that can be overridden on each run.
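
If you’d rather not click through the console, the task definition can also be registered with the CLI (assuming the JSON above is saved as task-definition.json):

aws ecs register-task-definition --cli-input-json file://task-definition.json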

Once you’ve pushed your Docker image and created your task definition you can run it by selecting “Run Task” from the Actions menu (the AWS documentation covers this in full).

When you run a task in Fargate you need to specify the subnet and security groups of the cluster in which the container will be run. This can be a bit monotonous to do each time from the console so we can use the AWS CLI instead:

aws ecs run-task \
--launch-type FARGATE \
--count 1 \
--task-definition gateway-load-tests-fargate:1 \
--cluster default \
--network-configuration "awsvpcConfiguration={subnets=[subnet-12345],securityGroups=[sg-67890],assignPublicIp=DISABLED}" \
--overrides '{"containerOverrides":[{"name":"gateway-load-tests","environment":[{"name":"SCRIPT","value":"card-payment-auto-capture.js"}]}]}'

The above command illustrates how you can override or provide additional environment variables. If the inline JSON becomes difficult to manage (or you’re in parsing hell on Windows) then you can always load the overrides from a file with --overrides file://overrides.json.
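
For example, the inline overrides above could live in overrides.json, with any other variables you want to change per run (K6_STAGES, GWC_URL and so on) added to the same environment array:

{
  "containerOverrides": [
    {
      "name": "gateway-load-tests",
      "environment": [
        { "name": "SCRIPT", "value": "card-payment-auto-capture.js" }
      ]
    }
  ]
}

The run-task command then references it with --overrides file://overrides.json in place of the inline JSON.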

Finally, if you have CloudWatch logging enabled you can check the status of your load test using the CLI too:

aws logs get-log-events \
--log-group-name /ecs/gateway-load-tests \
--log-stream-name ecs/gateway-load-tests/931a6446-86fc-44c3-a2ce-6b7b78ed010c
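
If you don’t know the log stream name up front, you can list the streams for the group first (the stream prefix comes from the awslogs options in the task definition):

aws logs describe-log-streams \
--log-group-name /ecs/gateway-load-tests \
--order-by LastEventTime \
--descending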

For completeness, the command for running the same task on existing EC2 container instances is as follows:

aws ecs run-task \
--launch-type EC2 \
--count 1 \
--task-definition gateway-load-tests-fargate:1 \
--cluster default \
--overrides '{"containerOverrides":[{"name":"gateway-load-tests","environment":[{"name":"SCRIPT","value":"card-payment-auto-capture.js"}]}]}'
