
A recipe for running containers in your Tape tests

Categories: docker, tape

tl;dr - Add a few functions that run docker (or any other container manager, like podman) during your E2E tests, so those tests can effortlessly exercise “real” versions of your dependencies.

After developing a bunch of nice new features, testing is important – while I rarely find myself writing intricate mocks unless I'm in a Java or Ruby codebase, E2E tests are what I spend most of my time writing. They offer the greatest value – it doesn't matter that you can trigger a divide-by-zero bug in one function given abnormal inputs if no one can use the credit card checkout on your app. Since we do want to avoid those pesky divide-by-zeroes, I depend on the type system to do most of the rest for me (along with some unit/integration tests) – I almost exclusively write in compile-time-typechecked languages these days.

One piece of code I've found myself carrying from project to project (whether for clients or for myself) is a set of functions for running docker containers before and after tests (and test suites), so that a larger dependency can be exercised as part of an E2E test. While not quite at the level of a user-facing E2E test, I find API-level E2E tests to be a fantastic tool for maintaining my own sanity, and they have helped me catch breaking bugs.

One of the most recent iterations of this code was written for Scout APM, which develops an Application Monitoring tool of the same name. I develop their NodeJS Agent, which makes use of this code.

The code

Without much further ado, here is the core of the abstraction – startContainer:

/**
 * Start a container in a child process for use with tests
 *
 * @param {Test} t - the test (tape) instance
 * @param {Partial<TestContainerStartOpts>} optOverrides - overrides for the container start options (ex. image name, tag, port/env bindings)
 * @returns {Promise<ContainerAndOpts>} A promise that resolves to the spawned child process along with the options used to start it
 */
export function startContainer(
    t: Test,
    optOverrides: Partial<TestContainerStartOpts>,
): Promise<ContainerAndOpts> {
    const opts = new TestContainerStartOpts(optOverrides);

    // Build port mapping arguments
    const portMappingArgs: string[] = [];
    Object.entries(opts.portBinding).forEach(([containerPort, localPort]) => {
        portMappingArgs.push("-p");
        portMappingArgs.push(`${localPort}:${containerPort}`);
    });

    // Build env mapping arguments
    const envMappingArgs: string[] = [];
    Object.entries(opts.envBinding).forEach(([envVarName, value]) => {
        envMappingArgs.push("-e");
        envMappingArgs.push(`${envVarName}=${value}`);
    });

    const args = [
        "run",
        "--name", opts.containerName,
        ...portMappingArgs,
        ...envMappingArgs,
        opts.imageWithTag(),
    ];

    // Spawn the docker container
    t.comment(`spawning container [${opts.imageName}:${opts.tagName}] with name [${opts.containerName}]...`);
    const containerProcess = spawn(
        opts.dockerBinPath,
        args,
        {detached: true, stdio: "pipe"} as SpawnOptions,
    );
    opts.setExecutedStartCommand(`${opts.dockerBinPath} ${args.join(" ")}`);

    let resolved = false;
    let stdoutListener;
    let stderrListener;

    const makeListener = (
        type: "stdout" | "stderr",
        emitter: Readable | null,
        expected: string,
        resolve: (res: ContainerAndOpts) => void,
        reject: (err?: Error) => void,
    ) => {
        if (!emitter) {
            return () => reject(new Error(`[${type}] pipe was not Readable`));
        }

        return (line: string | Buffer) => {
            line = line.toString();
            if (!line.includes(expected)) { return; }

            if (type === "stdout" && stdoutListener) { emitter.removeListener("data", stdoutListener); }
            if (type === "stderr" && stderrListener) { emitter.removeListener("data", stderrListener); }

            if (!resolved) {
                resolve({containerProcess, opts});
            }

            resolved = true;
        };
    };

    // Wait until process is listening on the given socket port
    const promise = new Promise((resolve, reject) => {
        // If there's a waitFor specified then we're going to have to listen before we return

        // Wait for specific output on stdout
        if (opts.waitFor && opts.waitFor.stdout) {
            stdoutListener = makeListener("stdout", containerProcess.stdout, opts.waitFor.stdout, resolve, reject);
            if (containerProcess.stdout) { containerProcess.stdout.on("data", stdoutListener); }
            return;
        }

        // Wait for specific output on stderr
        if (opts.waitFor && opts.waitFor.stderr) {
            stderrListener = makeListener("stderr", containerProcess.stderr, opts.waitFor.stderr, resolve, reject);
            if (containerProcess.stderr) { containerProcess.stderr.on("data", stderrListener); }
            return;
        }

        // Wait for a given amount of time
        if (opts.waitFor && opts.waitFor.milliseconds) {
            waitMs(opts.waitFor.milliseconds)
                .then(() => resolve({containerProcess, opts}));
            return;
        }

        // Wait for a given function to evaluate to true
        if (opts.waitFor && opts.waitFor.fn) {
            // Check every second for function to evaluate to true
            const startTime = new Date().getTime();
            const interval = setInterval(() => {
                // Ensure opts are still properly formed
                if (!opts || !opts.waitFor || !opts.waitFor.fn || !opts.waitFor.fn.timeoutMs) {
                    clearInterval(interval);
                    reject(new Error("waitFor object became improperly formed"));
                    return;
                }

                // If we've waited too long then clear interval and exit
                const elapsedMs = new Date().getTime() - startTime;
                if (elapsedMs >= opts.waitFor.fn.timeoutMs) {
                    clearInterval(interval);
                    reject(new Error("function never resolved to true before timeout"));
                    return;
                }

                // If we haven't waited too long, check the function
                opts.waitFor.fn.check({containerProcess, opts})
                    .then(res => {
                        if (!res) { return; }
                        clearInterval(interval);
                        resolve({containerProcess, opts});
                    })
                    .catch(() => undefined);
            }, 1000);

            return;
        }

        containerProcess.on("close", code => {
            if (code !== 0) {
                t.comment("container process closing, piping output to stdout...");
                if (containerProcess.stdout) { containerProcess.stdout.pipe(process.stdout); }
                // t.comment(`command: [${opts.executedStartCommand}]`);
                reject(new Error(`Failed to start container (code ${code}), output will be piped to stdout`));
                return;
            }

            resolve({containerProcess, opts});
        });

    });

    return timeout(promise, opts.startTimeoutMs)
        .catch(err => {
            // If we timed out clean up some waiting stuff, shutdown the process
            // since none of the listeners may have triggered, clean them up
            if (err instanceof TimeoutError) {
                if (opts.waitFor && opts.waitFor.stdout && containerProcess.stdout) {
                    containerProcess.stdout.removeListener("data", stdoutListener);
                }

                if (opts.waitFor && opts.waitFor.stderr && containerProcess.stderr) {
                    containerProcess.stderr.removeListener("data", stderrListener);
                }

                containerProcess.kill();
            }

            // Re-throw the error
            throw err;
        });
}
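
For reference, startContainer leans on a handful of small types and helpers that aren't shown: ContainerAndOpts, TestContainerStartOpts (with its waitFor options), and waitMs. The real definitions live in the agent's test utilities; the sketch below is a rough reconstruction of their shapes, so treat the defaults and the name randomization as illustrative rather than canonical.

import { ChildProcess } from "child_process";

// What the start promise resolves to: the spawned process plus the options used to start it
export interface ContainerAndOpts {
    containerProcess: ChildProcess;
    opts: TestContainerStartOpts;
}

// The different ways to decide that a container has finished starting
export interface WaitFor {
    stdout?: string;       // wait for a line on stdout containing this string
    stderr?: string;       // wait for a line on stderr containing this string
    milliseconds?: number; // wait a fixed amount of time
    fn?: {
        timeoutMs: number;
        check: (cao: ContainerAndOpts) => Promise<boolean>;
    };
}

// Options governing how a test container gets started (sketch)
export class TestContainerStartOpts {
    public dockerBinPath: string = "/usr/bin/docker";
    public imageName: string = "";
    public tagName: string = "latest";
    public containerName: string = "";
    public portBinding: {[containerPort: number]: number} = {};
    public envBinding: object = {};
    public waitFor?: WaitFor;
    public startTimeoutMs: number = 60 * 1000;
    public killTimeoutMs: number = 10 * 1000;
    public executedStartCommand: string = "";

    constructor(overrides?: Partial<TestContainerStartOpts>) {
        Object.assign(this, overrides);

        // Generate a unique container name if one wasn't provided
        if (!this.containerName) {
            this.containerName = `test-${this.imageName}-${Math.floor(Math.random() * 100000)}`;
        }
    }

    public imageWithTag(): string {
        return `${this.imageName}:${this.tagName}`;
    }

    public setExecutedStartCommand(cmd: string) {
        this.executedStartCommand = cmd;
    }
}

// Promise-based sleep
export function waitMs(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
}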

And here is the other end of the process, which is much shorter – killContainer:

// Kill a running container
export function killContainer(t: Test, opts: TestContainerStartOpts): Promise<number> {
    const args = ["kill", opts.containerName];

    // Spawn the docker container
    t.comment(`attempting to kill [${opts.containerName}]...`);
    const dockerKillProcess = spawn(
        opts.dockerBinPath,
        args,
        { detached: true, stdio: "ignore"},
    );

    const promise = new Promise((resolve, reject) => {
        dockerKillProcess.on("close", code => {
            resolve(code);
        });
    });

    return timeout(promise, opts.killTimeoutMs);
}
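
Both functions also wrap their promise in a timeout helper that rejects with a TimeoutError if nothing settles in time (that's what the catch in startContainer checks for). The real helper lives in the same test utilities; a minimal version might look like this:

// Error used to signal that a promise didn't settle within its allotted time
export class TimeoutError extends Error {
    constructor(msg?: string) {
        super(msg);
        this.name = "TimeoutError";
    }
}

// Reject with a TimeoutError if the given promise doesn't settle within ms milliseconds
export function timeout<T>(promise: Promise<T>, ms: number): Promise<T> {
    return new Promise<T>((resolve, reject) => {
        const timer = setTimeout(() => reject(new TimeoutError(`timed out after ${ms}ms`)), ms);
        promise
            .then(res => { clearTimeout(timer); resolve(res); })
            .catch(err => { clearTimeout(timer); reject(err); });
    });
}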

The code here isn't very groundbreaking – all we're doing is running a known binary and doing the bare minimum to keep track of the generated child process – but it is absolutely a game changer for writing meaningful tests in the presence of dependencies like databases.

Here’s an example of some tests that depend on this functionality:

let PG_CONTAINER_AND_OPTS: TestUtil.ContainerAndOpts | null = null;

// Pseudo test that will start a containerized postgres instance
TestUtil.startContainerizedPostgresTest(test, cao => {
    PG_CONTAINER_AND_OPTS = cao;
});

test("SELECT query during a request is recorded", {timeout: TestUtil.PG_TEST_TIMEOUT_MS}, t => {
    const scout = new Scout(buildScoutConfiguration({
        allowShutdown: true,
        monitor: true,
    }));

    // Setup a PG Client that we'll use later
    let client: Client;

    // ... more test code ...
});

// Pseudo test that will stop a containerized postgres instance that was started
TestUtil.stopContainerizedPostgresTest(test, () => PG_CONTAINER_AND_OPTS);

You're probably wondering at this point where the startContainer call I promised is! Well, startContainerizedPostgresTest actually uses it, a bit deeper down. I'll go into that and show what the specialization looks like.

Specialization: postgres

If you've been reading this blog for a while, you'll know I love Postgres – it's the best F/OSS database (and arguably the best database, period) ever created, and a fantastic tool. Out of the box it's perfect for new companies that can spare just a little time to define their data model up front (or don't, and use json columns), and it scales vertically very well (it scales horizontally too, with a bit more effort).

Since Scout's NodeJS Agent has a postgres integration, testing to ensure that spans are captured properly was necessary, so I wrote a little specialization of startContainer, wrapped in a Tape test, for postgres:

// Utility function to start a postgres instance
export function startContainerizedPostgresTest(
    test: any,
    cb: (cao: ContainerAndOpts) => void,
    containerEnv?: object,
    tagName?: string,
) {
    tagName = tagName || POSTGRES_IMAGE_TAG;
    const envBinding = Object.assign({}, POSTGRES_CONTAINER_DEFAULT_ENV, containerEnv);

    test("Starting postgres instance", (t: Test) => {
        let port: number;
        let containerAndOpts: ContainerAndOpts;

        getPort()
            .then(p => port = p)
            .then(() => {
                const portBinding = {5432: port};
                return startContainer(t, {
                    imageName: POSTGRES_IMAGE_NAME,
                    tagName,
                    portBinding,
                    envBinding,
                    waitFor: {stdout: POSTGRES_STARTUP_MESSAGE},
                });
            })
            .then(cao => containerAndOpts = cao)
            .then(() => {
                const opts = containerAndOpts.opts;
                t.comment(`Started container [${opts.containerName}] on local port ${opts.portBinding[5432]}`);
                cb(containerAndOpts);
            })
            .then(() => t.end())
            .catch(err => {
                if (containerAndOpts) {
                    return killContainer(t, containerAndOpts.opts)
                        .then(() => t.end(err));
                }

                return t.end(err);
            });
    });
}
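
The POSTGRES_* constants referenced above come from the same utilities, and getPort is a small helper that grabs a free local port (the get-port package on npm does exactly this). The values below are illustrative rather than the canonical ones; the startup message is the line postgres prints once it can accept connections:

const POSTGRES_IMAGE_NAME = "postgres";
const POSTGRES_IMAGE_TAG = "12";

// Line postgres prints once it's ready to accept connections
const POSTGRES_STARTUP_MESSAGE = "database system is ready to accept connections";

// Default container env for tests (trust local connections; never do this in production)
const POSTGRES_CONTAINER_DEFAULT_ENV = {
    POSTGRES_HOST_AUTH_METHOD: "trust",
};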

And the accompanying function for killing the postgres container:

// Utility function to stop a postgres instance
export function stopContainerizedPostgresTest(test: any, provider: () => ContainerAndOpts | null) {
    stopContainerizedInstanceTest(test, provider, "postgres");
}

// Generic function for making a test that stops a containerized instance of some dependency
export function stopContainerizedInstanceTest(test: any, provider: () => ContainerAndOpts | null, name: string) {
    test(`Stopping containerized ${name} instance...`, (t: Test) => {
        const containerAndOpts = provider();
        if (!containerAndOpts) {
            throw new Error("no container w/ opts object provided, can't stop container");
        }

        const opts = containerAndOpts.opts;

        killContainer(t, opts)
            .then(code => t.pass(`successfully stopped container [${opts.containerName}], with code [${code}]`))
            .then(() => t.end())
            .catch(err => t.end(err));
    });
}
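
The postgres specialization waits on startup output, but the waitFor.fn branch in startContainer exists for dependencies that don't print anything convenient. Here's a rough sketch of what polling with a real connection could look like instead, reusing getPort, startContainer, and killContainer from above along with Client from the pg package:

import { Client } from "pg";

test("postgres is up (polled with a client)", (t: Test) => {
    getPort()
        .then(localPort => startContainer(t, {
            imageName: "postgres",
            tagName: "12",
            portBinding: {5432: localPort},
            envBinding: {POSTGRES_HOST_AUTH_METHOD: "trust"},
            waitFor: {
                fn: {
                    timeoutMs: 30 * 1000,
                    // Considered "up" once a trivial query succeeds
                    check: ({opts}) => {
                        const client = new Client({host: "localhost", port: opts.portBinding[5432], user: "postgres"});
                        return client.connect()
                            .then(() => client.query("SELECT 1"))
                            .then(() => client.end())
                            .then(() => true)
                            .catch(() => false);
                    },
                },
            },
        }))
        .then(cao => killContainer(t, cao.opts))
        .then(() => t.end())
        .catch(err => t.end(err));
});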

Ideas for improvement

This code is far from perfect, but it is very useful in practice! There are a few ways I think it could be improved:

  • Use an abstraction like docker-cli-js instead of running shell commands
  • Make the behavior on test failure a bit more configurable (at present if a test fails, the resources don’t get cleaned up)
  • A better UX for interfacing into tests (maybe overriding & extending tape’s test function?)

While I don't have any near-term plans to pursue these improvements, I thought they were worth sharing.
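
To make that last bullet a bit more concrete, one direction (purely a sketch, not something I've built) would be a wrapper around tape's test that owns the container lifecycle, so the container gets killed even when the test body fails:

// Hypothetical wrapper: start a container, run the test body, always kill the container afterwards
function testWithContainer(
    name: string,
    containerOpts: Partial<TestContainerStartOpts>,
    body: (t: Test, cao: ContainerAndOpts) => Promise<void>,
) {
    test(name, (t: Test) => {
        let cao: ContainerAndOpts;

        startContainer(t, containerOpts)
            .then(started => cao = started)
            .then(() => body(t, cao))
            .then(() => killContainer(t, cao.opts))
            .then(() => t.end())
            .catch(err => {
                // Clean up the container (if it ever started) before failing the test
                const cleanup = cao ? killContainer(t, cao.opts) : Promise.resolve(0);
                cleanup.then(() => t.end(err), () => t.end(err));
            });
    });
}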

Wrapup

Well, it was great to dive into this (it'd been on my list) and share code that is hopefully a little useful. I also realized that I've written about this before, though that time with more of a focus on how to make it work with GitLab CI.

Oh well, one more post can't hurt – hopefully this one was a bit more useful.