Mocking AWS at the Network Level

We use AWS pretty heavily at Taskless, with their compute-on-demand making it easy for us to scale our bi-directional API Gateway from zero to thousands of concurrent requests with no additional effort. It's truly wonderful; that is, until it's time to write tests.

The path most developers take is to grab the aws-sdk-client-mock library, which works really well, assuming you don't need presigning, aren't using AWS Timestream, and can keep all your AWS libraries on the exact same version to avoid type conflicts.

I'm also assuming you're here because you either need presigning, are using AWS Timestream, think force-upgrading every AWS client library for a single fix is a tough pill to swallow, or are otherwise stuck on a traditional mocking approach. The good news is that there is another way. We just have to move from the node / module layer down to the network layer.

What Are You Testing, Really?

First, the realization. We don't actually care what the @aws-sdk/* libraries do.

There are a lot of internals in the AWS client libraries (middleware, Smithy, signing, and so on). Every one of them has its own abstractions, inputs, outputs, and naming conventions. Some concepts, like middleware, are impossible to mock at the module level. None of that matters, though. What we do care about is how our app reacts to network conditions, including the requests made by the SDK libraries.

And that we can test.

  • We can introduce network delays and ensure the libraries we depend on handle these events well
  • We can simulate a service being down
  • We use APIs instead of SDKs, which are less volatile
  • We gain a better understanding about how AWS makes its network requests
  • We catch if a dependency suddenly starts phoning home

So how do you go about mocking the network? There's nock and Mock Service Worker (MSW). Both are excellent libraries; I just prefer MSW so I can use the same testing patterns in node.js and the browser. While these next sections are written for MSW, they can easily be applied to a nock-based setup.

Preparing for Network Mocks

Setting up for network-based mocking requires a little bit of pre-work. Specifically, we want to knock out all of our AWS environment variables with dummy values; this ensures that if a request ever does fall through to the network layer, it can't trigger a billable event. When launching any node.js app, including tests, you can include a NODE_OPTIONS environment variable, which passes additional options through to the node process. In the case of a multi-process test runner like AVA, this ensures the options are carried to each worker. For a single-process runner like Jest, any node options are passed through as if they were placed directly on the command line.

{
    // ...
    "scripts": {
        "test": "NODE_OPTIONS='-r ./test.setup.js' ava"
    }
}

The above snippet adjusts our test command to include additional node options. -r <file> tells node.js to require a file before running any other code. In this case, we want to load a bootstrap file. Our bootstrap takes care of removing all AWS environment variables for us, first by explicitly deleting every aws_-prefixed value, then by setting suitable test values for access keys, secrets, and tokens. I'm using dotenv for readability, but you can also set process.env explicitly if you prefer.

const dotenv = require("dotenv");
const util = require("node:util");

// Deeper object inspection makes debugging failed assertions easier
util.inspect.defaultOptions.depth = 10;

// Scorched earth: drop every aws_-prefixed environment variable
for (const key of Object.keys(process.env)) {
  if (key.toLowerCase().startsWith("aws_")) {
    delete process.env[key];
  }
}

// Replace them with harmless dummy credentials and a fixed region
dotenv.populate(
  process.env,
  {
    AWS_ACCESS_KEY_ID: "testing",
    AWS_SECRET_ACCESS_KEY: "testing",
    AWS_SECURITY_TOKEN: "testing",
    AWS_SESSION_TOKEN: "testing",
    AWS_DEFAULT_REGION: "us-east-1",
    AWS_REGION: "us-east-1",
    // DEBUG: "*",
  },
  { override: true }
);

One important reason we remove all aws_ items is that the AWS_PROFILE environment variable messes with presigning. If the variable is set, Smithy middleware will attempt to authenticate and load the profile in question. So don't be clever; take a scorched-earth approach to the aws_* environment variables and ensure they're all replaced.

With confidence that our AWS account won't see any surprise billing, we can follow MSW's Getting Started guide for node.js and create our handlers. We can verify MSW is working because running our tests will report unhandled requests, and our first call to AWS should automatically fail.
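
If you haven't wired MSW into node.js before, the setup is small. Here's a minimal sketch; the file and module names are placeholders of my own, not prescribed by the guide.

// test.msw.ts (hypothetical filename): starts MSW's request interception for node.js tests
import { setupServer } from "msw/node";

import { handlers } from "./handlers"; // the handler array we build next

export const server = setupServer(...handlers);

// MSW warns about unhandled requests by default, which is how our first
// un-mocked AWS call makes itself known.
server.listen();

// In your test lifecycle: call server.resetHandlers() between tests and
// server.close() once the run is finished.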

To make it easier to mock our network requests, we'll want to create a default handler.

The Default Handler at *

Unhandled requests from MSW are fine, but I find the default message isn't always helpful in telling you where or why something fell through to an unhandled request. I recommend adding these two handlers at the end of your chain, ensuring you get actionable errors when tests try to call out to AWS.

import { http, type RequestHandler } from "msw";

const handlers: RequestHandler[] = [
  // force intercept of any attempts to call the local AWS Credential Provider
  http.all("http://169.254.169.254/*", () => {
    throw new Error("Attempted to call local AWS Credential Provider");
  }),

  // Provide additional info beyond MSW's default unhandled request error
  http.all("*", async (info) => {
    const request = info.request.clone();

    const debug = {
      url: request.url,
      method: request.method,
      headers: Object.fromEntries(request.headers.entries()),
      body: await request.text(),
    };

    console.error("Unhandled Request");
    console.error(JSON.stringify(debug, null, 2));
  }),
];

Our first handler (the 169.254...) takes care of the AWS Credential Provider. In some scenarios, such as when AWS_PROFILE is set to a dummy value, the credential provider is called automatically by the AWS client. Adding a catch-all for the credential provider tells us immediately if AWS is attempting to verify our test credentials.

The second handler is a better debugger. Instead of a thrown error that tells you the network request came from within AWS, you can unpack the URL, method, headers, and body. Usually this additional data makes it much easier to see what request isn't being mocked. As a bonus, this gives you all the information you need to write a matching handler of your own.

Make a Lot of Mocks

There's no limit to the number of mocks you can have. Don't be afraid to have a dozen handlers for https://dynamodb.*.amazonaws.com, and return undefined if you don't want to handle the request. A return value of undefined from a handler tells MSW to try the next one in-sequence. For example, I check DynamoDB handlers for a specific table using the following TypeScript:

import type { GetItemInput } from "@aws-sdk/client-dynamodb";

export const isGetForTable = async (original: Request, table?: string) => {
  // clone() so the handler can still read the original request body
  const request = original.clone();
  const body = (await request.json()) as GetItemInput;
  if (table) {
    return body.TableName === table;
  }

  return true;
};

DynamoDB operates off of JSON, making it trivial to check the TableName for a match. When adding these checks, don't forget to clone() the request object! Because MSW uses the built-in Request object, you can only read from the request body once, just like when you're using fetch().

Most of your helpers will focus on "is this a <blank> command" and "is this a <blank> command for resource <blank>". You only have to write these once, and then you can reuse them anywhere.
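
For example, a table-scoped handler built on isGetForTable might look like the sketch below. The "users" table name, the import path, and the GetItem response shape are placeholders of mine, not fixtures from a real project.

import { http, HttpResponse } from "msw";

import { isGetForTable } from "./helpers/dynamodb"; // hypothetical path

export const usersTableGetHandler = http.post(
  "https://dynamodb.*.amazonaws.com/*",
  async ({ request }) => {
    // Not a GetItem against our table? Return undefined so MSW tries the next handler
    if (!(await isGetForTable(request, "users"))) {
      return undefined;
    }

    // Simplified response body; real DynamoDB items use attribute-value maps like these
    return HttpResponse.json({
      Item: { pk: { S: "user#123" }, name: { S: "Ada" } },
    });
  }
);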

A Little JSON, A Little XML, A Few Surprises

As you mock, you're going to discover that some clients operate on XML while others work with JSON, even though all v3 endpoints support JSON now. Just roll with it and follow the XML responses in the AWS API docs when required. You'll know when this happens because, despite returning JSON, the AWS client will complain about a missing < or an unhandled { in the response.

Some services, like AWS Timestream, make multiple requests. The first request (the one you'd normally associate with an endpoint) just retrieves the real endpoint, and the second request goes to this discovered endpoint. When you find these discovery-based services, take advantage of wildcard routes to simplify the network mocks.
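
Here's a hedged sketch of what that can look like for Timestream. The hostnames, the x-amz-target check, and the response shapes are my assumptions; verify them against the requests your debug handler prints.

import { http, HttpResponse } from "msw";

// One wildcard route catches both the discovery call and the discovered endpoint
export const timestreamHandler = http.post(
  "https://*.timestream.*.amazonaws.com/*",
  async ({ request }) => {
    const target = request.headers.get("x-amz-target") ?? "";

    // First request: the SDK asks where to really send its traffic
    if (target.includes("DescribeEndpoints")) {
      return HttpResponse.json({
        Endpoints: [
          {
            // Keep the discovered address inside the same wildcard so the
            // follow-up request is also caught by this handler
            Address: "query-cell1.timestream.us-east-1.amazonaws.com",
            CachePeriodInMinutes: 1440,
          },
        ],
      });
    }

    // Second request: the actual query against the discovered endpoint
    return HttpResponse.json({ ColumnInfo: [], Rows: [], QueryId: "test-query" });
  }
);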

Finally, while all of this may seem more difficult than the usual mocking pattern, stick with it. The AWS API itself is absurdly stable. Seriously. The SQS API version is tagged November 5, 2012. So once you get these mocks working to your liking, they'll continue to work for the foreseeable future. And if AWS changes something that causes your network requests to change, you'll find out about it immediately.

Supplement: Known Weird AWS Replies

Because not everything mocked in AWS is obvious, I'll add specific notes about libraries and mocks as I uncover them.

SQS Uses XML

SQS is used in almost all infrastructure, but its responses are XML. Even worse, you have to include an MD5 digest that is checked by a Smithy middleware. The following snippet creates XML success and error responses for the SQS XML API.

import { createHash } from "node:crypto";
import { HttpResponse } from "msw";

export const sendMessageXMLError = (
  code = "TestingError",
  message = "This is a custom error"
) => /* XML */ `
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>${code}</Code>
  <Message>${message}</Message>
  <RequestId>7fe4446e-b452-53f7-8f85-181e06f2dd99</RequestId>
</Error>
`;

export const sendMessageXMLResponse = (messageBody: string) => /* XML */ `
<?xml version="1.0" encoding="UTF-8"?>
<SendMessageResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
    <SendMessageResult>
        <MessageId>374cec7b-d0c8-4a2e-ad0b-67be763cf97e</MessageId>
        <MD5OfMessageBody>${createHash("md5")
          .update(messageBody)
          .digest("hex")}</MD5OfMessageBody>
    </SendMessageResult>
    <ResponseMetadata>
        <RequestId>7fe4446e-b452-53f7-8f85-181e06f2dd99</RequestId>
    </ResponseMetadata>
</SendMessageResponse>
`;

export const createMockSendMessageResponse = (messageBody: string): Response =>
  HttpResponse.xml(sendMessageXMLResponse(messageBody), {
    status: 200,
  });
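
And a hedged usage sketch to tie it together: the SQS host pattern and the form-encoded MessageBody field are my assumptions based on the query-style API these XML responses imply; adjust them to whatever your debug handler shows.

import { http } from "msw";

import { createMockSendMessageResponse } from "./sqs-mocks"; // hypothetical path

export const sqsSendMessageHandler = http.post(
  "https://sqs.*.amazonaws.com/*",
  async ({ request }) => {
    // Clone before reading; the body can only be consumed once
    const params = new URLSearchParams(await request.clone().text());
    const messageBody = params.get("MessageBody") ?? "";

    // Echo the body back so the MD5 digest satisfies the Smithy checksum middleware
    return createMockSendMessageResponse(messageBody);
  }
);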

S3 PutObjectCommand Requires an Empty Body

When putting an object into S3 via PutObjectCommand, Smithy (the engine underneath the AWS client libraries) expects an HTTP body. A test response needs to include both an empty body and a Content-Length header of 0. This helper makes it easier to create responses for the command.

import { HttpResponse } from "msw";

type MockResponseBodyOptions = {
  time?: Date;
  etag?: string;
  requestId?: string;
};

export const createMockPutResponse = (
  options?: MockResponseBodyOptions
): Response =>
  // An explicitly empty body, paired with the Content-Length: 0 header below
  HttpResponse.arrayBuffer(new ArrayBuffer(0), {
    status: 200,
    // https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_ResponseSyntax
    headers: {
      Etag: options?.etag ?? "1b2cf535f27731c974343645a3985328",
      "Last-Modified": options?.time
        ? options.time.toISOString()
        : new Date().toISOString(),
      "x-amz-request-id":
        options?.requestId ?? "7fe4446e-b452-53f7-8f85-181e06f2dd99",
      "content-length": "0",
    },
  });
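
A hedged usage sketch: the virtual-hosted S3 URL pattern below is my assumption; swap in a path-style or custom-endpoint pattern if that's what your client is configured for.

import { http } from "msw";

import { createMockPutResponse } from "./s3-mocks"; // hypothetical path

// Any PutObject against any bucket gets an empty 200 with Content-Length: 0
export const s3PutObjectHandler = http.put(
  "https://*.s3.*.amazonaws.com/*",
  () => createMockPutResponse()
);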