
Configuration of services running in a container

  • Java
  • PostgreSQL
  • LocalStack

In this guide, you will learn how to:

  • Initialize containers by copying files into them

  • Run supporting commands inside Docker containers using execInContainer()


What we are going to achieve in this guide

We are going to use a Postgres container and create the database schema by copying SQL scripts into the container. We are also going to learn how to initialize a LocalStack container and create S3 buckets by executing commands inside the container.

Getting Started

Create a new Java project with the Maven or Gradle build tool and add the following dependencies.

dependencies {
   implementation "org.postgresql:postgresql:42.6.0"
   implementation "ch.qos.logback:logback-classic:1.4.8"

   testImplementation "software.amazon.awssdk:s3:2.20.42"
   // Reason for including AWS Java SDK V1: https://github.com/testcontainers/testcontainers-java/issues/1442
   testImplementation "com.amazonaws:aws-java-sdk-s3:1.12.445"
   testImplementation "org.junit.jupiter:junit-jupiter"
   testImplementation "org.testcontainers:junit-jupiter"
   testImplementation "org.testcontainers:postgresql"
   testImplementation "org.testcontainers:localstack"
}
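If you use Maven instead of Gradle, the equivalent dependency declarations could look like the following sketch. The versions mirror the Gradle snippet above; the Testcontainers artifacts are shown without versions because the Gradle snippet also omits them (implying a Testcontainers BOM is managed elsewhere), so you would add a testcontainers-bom import in <dependencyManagement>:

```xml
<!-- Sketch of the Maven equivalent; versions mirror the Gradle snippet above -->
<dependencies>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.6.0</version>
  </dependency>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.8</version>
  </dependency>
  <dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>localstack</artifactId>
    <scope>test</scope>
  </dependency>
  <!-- plus the junit-jupiter and AWS SDK v1/v2 S3 test dependencies from the Gradle snippet -->
</dependencies>
```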

Initializing the container by copying files into it

While using Testcontainers for testing, sometimes we need to initialize the container (or the application within the container) by placing files in a certain location within the container.

For example, while using the Postgres database you may want to create the database schema using SQL scripts before running your tests. You can initialize the Postgres database within the Docker container by placing the SQL scripts in the /docker-entrypoint-initdb.d directory.

Let us create a schema creation SQL script, init-db.sql, in the src/test/resources directory.

create table customers (
     id bigint not null,
     name varchar not null,
     primary key (id)
);

Now, let’s write a test using the Testcontainers Postgres module.

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.MountableFile;

@Testcontainers
class CustomerServiceTest {

   @Container
   static PostgreSQLContainer<?> postgres =
           new PostgreSQLContainer<>("postgres:16-alpine")
                   .withCopyFileToContainer(
                           MountableFile.forClasspathResource("init-db.sql"),
                           "/docker-entrypoint-initdb.d/");

   @Test
   void shouldGetCustomers() {
       // query the customers table created by init-db.sql
   }
}

Here we have used the withCopyFileToContainer(MountableFile mountableFile, String containerPath) method to copy the init-db.sql file from the classpath into the /docker-entrypoint-initdb.d/ directory within the container. When you run the test, Testcontainers will spin up a Postgres Docker container with the SQL script copied into the /docker-entrypoint-initdb.d/ directory, where it will be executed automatically before your tests run. You can then use those tables from your tests.
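To see the copied script in action end to end, here is a minimal standalone sketch (not part of the guide's test class; it assumes a local Docker daemon and init-db.sql on the test classpath) that starts the container, inserts a row, and queries the customers table over JDBC:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.MountableFile;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // Start a throwaway Postgres container initialized with init-db.sql
        try (PostgreSQLContainer<?> postgres =
                 new PostgreSQLContainer<>("postgres:16-alpine")
                     .withCopyFileToContainer(
                         MountableFile.forClasspathResource("init-db.sql"),
                         "/docker-entrypoint-initdb.d/")) {
            postgres.start();

            // Connection settings are read from the container itself
            try (Connection conn = DriverManager.getConnection(
                     postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
                 Statement stmt = conn.createStatement()) {

                // The customers table already exists, created by the init script
                stmt.executeUpdate("insert into customers(id, name) values (1, 'George')");

                try (ResultSet rs = stmt.executeQuery("select count(*) from customers")) {
                    rs.next();
                    System.out.println("rows = " + rs.getLong(1));
                }
            }
        }
    }
}
```

Note that the JDBC URL, username, and password are obtained from the container at runtime, so no connection settings need to be hard-coded.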

We can also copy files from any path on the host using MountableFile.forHostPath(String path) as follows:

static PostgreSQLContainer<?> postgres =
       new PostgreSQLContainer<>("postgres:16-alpine")
               .withCopyFileToContainer(
                       // illustrative host path; point this at your actual script
                       MountableFile.forHostPath("/host/path/to/init-db.sql"),
                       "/docker-entrypoint-initdb.d/");

Initializing the container by executing commands inside container

Some Docker containers provide CLI tools to perform various actions by running commands within the container. While using Testcontainers, we may want to perform some initialization tasks before running our tests.

You can use the container.execInContainer(String... command) API to run any available command inside a running container.
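To see execInContainer() in isolation, here is a minimal sketch (assuming a local Docker daemon; the alpine image tag and the command are illustrative) that runs a command in a throwaway container and reads its output:

```java
import org.testcontainers.containers.Container;
import org.testcontainers.containers.GenericContainer;

public class ExecDemo {
    public static void main(String[] args) throws Exception {
        // Start a plain Alpine container that simply stays alive
        try (GenericContainer<?> alpine =
                 new GenericContainer<>("alpine:3.18")
                     .withCommand("sleep", "3600")) {
            alpine.start();

            // Run a command inside the running container and capture the result
            Container.ExecResult result =
                alpine.execInContainer("echo", "hello from inside");

            System.out.println(result.getStdout().trim());
            System.out.println(result.getExitCode());
        }
    }
}
```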

For example, Testcontainers provides the LocalStack module, which can be used for testing AWS service integrations. Let's suppose we are testing the scenario of a file upload into an S3 bucket. Here we may want to create the S3 bucket before running the tests.

Let us see how we can use the LocalStack module, create an S3 bucket using execInContainer(), and assert that the bucket exists in your test.

@Testcontainers
class LocalStackTest {

  static final String bucketName = "mybucket";

  static URI s3Endpoint;
  static String accessKey;
  static String secretKey;
  static String region;

  @Container
  static LocalStackContainer localStack = new LocalStackContainer(
      DockerImageName.parse("localstack/localstack:2.0"));

  @BeforeAll
  static void beforeAll() throws IOException, InterruptedException {
    s3Endpoint = localStack.getEndpointOverride(S3);
    accessKey = localStack.getAccessKey();
    secretKey = localStack.getSecretKey();
    region = localStack.getRegion();

    localStack.execInContainer("awslocal", "s3", "mb", "s3://" + bucketName);
  }

  @Test
  void shouldListBuckets() {
    StaticCredentialsProvider credentialsProvider = StaticCredentialsProvider.create(
        AwsBasicCredentials.create(accessKey, secretKey));

    S3Client s3 = S3Client.builder()
        .endpointOverride(s3Endpoint)
        .credentialsProvider(credentialsProvider)
        .region(Region.of(region))
        .build();

    List<String> s3Buckets = s3.listBuckets().buckets().stream()
        .map(Bucket::name)
        .collect(Collectors.toList());

    assertTrue(s3Buckets.contains(bucketName));
  }
}

Here we have created an S3 bucket by running the command "awslocal s3 mb s3://mybucket" via localStack.execInContainer(...). "awslocal" is a command-line tool provided as part of the LocalStack Docker image. Similarly, you can run any arbitrary valid command inside the running container.

We can also run any command and get the output and exit code as follows:

Container.ExecResult execResult =
    localStack.execInContainer("awslocal", "s3", "ls");
String stdout = execResult.getStdout();
int exitCode = execResult.getExitCode();
assertEquals(0, exitCode);
Note: The withClasspathResourceMapping(), withCopyFileToContainer(), and execInContainer() methods are inherited from GenericContainer, so they are available in all Testcontainers modules that extend GenericContainer as well.
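Because these methods live on GenericContainer, the same copy-and-exec pattern works for any image, not just the specialized modules. As a sketch (assuming a local Docker daemon; the alpine image tag and file paths are illustrative), the following copies a host file into a plain container and reads it back with execInContainer():

```java
import java.nio.file.Files;
import java.nio.file.Path;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.MountableFile;

public class GenericCopyDemo {
    public static void main(String[] args) throws Exception {
        // Create a temp file on the host to copy into the container
        Path hello = Files.createTempFile("hello", ".txt");
        Files.writeString(hello, "hello file");

        try (GenericContainer<?> alpine =
                 new GenericContainer<>("alpine:3.18")
                     .withCommand("sleep", "3600")
                     .withCopyFileToContainer(
                         MountableFile.forHostPath(hello),
                         "/tmp/hello.txt")) {
            alpine.start();

            // Prove the file arrived by reading it back inside the container
            String contents = alpine
                .execInContainer("cat", "/tmp/hello.txt")
                .getStdout();
            System.out.println(contents);
        }
    }
}
```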


We have learned how to configure test dependency containers with Testcontainers by copying files into the containers and by executing commands inside the running containers. These container customization features help set up containers in the desired state for our test scenarios.

To learn more about Testcontainers visit http://testcontainers.com