Breeze – Simple Deployments

Learn how to use and scale Docker in local, test, and production

Getting Started With Docker: Running Laravel with Docker Compose

2019-02-06 7 min read Leo Sjöberg

With interest in Docker steadily growing in both development and production, it can be hard for Docker beginners to figure out the best way to run Docker in their environment. This series of blog posts will take you from running a simple local Docker environment with docker-compose (this article) all the way to deploying in production with Kubernetes, Helm, and Terraform.

Docker Compose

Before getting started, I want to make a quick note on a tool called Docker Compose that we’ll be using. Docker Compose is a way to manage a set of Docker containers for a single project, without complicated docker commands and per-container management.
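To see what that saves you, here is roughly what starting a similar stack by hand could look like (a sketch; the network and container names are illustrative, not part of this article's setup):

```shell
# Without Compose: create a network and start each container by hand,
# wiring them together manually.
docker network create myapp
docker run -d --network myapp --name mysql -p 3306:3306 mysql:8
docker run -d --network myapp --name fpm -v "$PWD":/var/www/html myapp-fpm
docker run -d --network myapp --name nginx -p 80:80 -v "$PWD":/var/www/html nginx:latest

# With Compose: one declarative file, one command.
docker-compose up -d
```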


For this setup, we will have

  1. A docker-compose file to declare our containers
  2. An nginx config file
  3. A PHP container definition (defined by a Dockerfile)

So first, let’s get the boilerplate out of the way: the docker-compose.yml file, usually stored in the root of your project. This file holds information about which containers we are using in our project:

version: '3.7'

services:
  nginx:
    image: nginx:latest
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
      - ./:/var/www/html
    ports:
      - "80:80"
  fpm:
    container_name: myapp-fpm
    build: .
    volumes:
      - ./:/var/www/html
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_DATABASE: homestead
    ports:
      - "3306:3306"

This is all the config you’ll need. Well, this, the PHP setup, and an nginx config. Let’s quickly go through it to make sure it’s not all too confusing.

First, in the docker-compose file, you declare the syntax version; right now, both 2.x and 3.x are supported (many people still use 2.x, and it remains a valid choice). After that, you declare services. A service, in the context of Docker Compose, is just a way to keep track of a container. Services are declared by a name at the top level, and then use either an image or a build configuration for the container.

You might notice that the fpm service has the following block:

build: .

build specifies which folder should be used as the build context for our container. Docker will look for a file called Dockerfile in that directory, so let’s have a look at that. To create the Dockerfile, simply run

touch Dockerfile

That Dockerfile is where we put all the configuration you would usually apply when setting up a new server:

FROM php:7.3-fpm

RUN apt-get update && apt-get install -y \
    curl \
    libssl-dev \
    zlib1g-dev \
    libicu-dev

RUN docker-php-ext-configure intl
RUN docker-php-ext-install pdo_mysql mbstring intl opcache

RUN usermod -u 1000 www-data

WORKDIR /var/www/html

So what we’re doing here is using the official php-fpm 7.3 image as our base (if you want a smaller footprint, feel free to use the Alpine variant). We then run apt-get install to install various libraries that are needed by Laravel, just like you would on a regular OS. After that, you’ll see the commands docker-php-ext-configure and docker-php-ext-install. These are helpers provided specifically by the official PHP image that make configuring and enabling PHP extensions a lot easier. We then change the UID of the existing www-data user (the user php-fpm runs as, which by default has access to /var/www) to 1000, which typically matches your host user on Linux and avoids file permission headaches with the mounted code. Then we set the working directory to /var/www/html, making it our “default”, so to speak, from which php-fpm starts any action.

The FPM image is automatically set up to start php-fpm when the container starts, and to let other containers connect to it on port 9000 (which we will later do from nginx).

Phew, that was lengthy, but the good news is we’re done with a lot of the legwork.

So back to our docker-compose.yml, you might notice the fpm and nginx containers both have a volumes key. volumes is the way in which you bind your local directory to the container, so that any changes you make locally also end up in the container, sort of like how you connect your local directory to a VM through Vagrant with NFS. The line - ./:/var/www/html binds our current directory (./) on the host to the /var/www/html directory in the container.
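As a quick illustration of the bind mount (assuming the containers from this article are up, and using the myapp-fpm container name from the compose file):

```shell
# Create a file on the host...
echo "hello from the host" > hello.txt

# ...and it's immediately visible inside the container, no rebuild needed.
docker exec myapp-fpm cat /var/www/html/hello.txt
```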

You might also have noticed that we have another volume declaration on the nginx service: ./default.conf:/etc/nginx/conf.d/default.conf. This binds a local configuration file over Nginx’s default configuration, and avoids the hassle of building a custom Nginx image. We don’t yet have that file, so let’s create it!

Simply run

touch default.conf

As for the content of your newly created default.conf,

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass fpm:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
For the most part, this looks like any old default Nginx configuration, with one subtle difference: if you look at our location ~ \.php$ block, you can see that we use fastcgi_pass fpm:9000;. In a VM, you might pass requests to a local port like 127.0.0.1:9000, or to the FPM socket. In Docker, you simply use the name of your service (as specified in the docker-compose file) as the host. IP numbers be gone!

Alright, there we are, almost done!

Let’s jump back to our docker-compose.yml yet again and look for more things we don’t understand… How about the mysterious ports? ports does a very simple thing: it binds a host port to a container port. So - "80:80" means you can access the container’s port 80 on your port 80. Your port is to the left of the colon (:), the container’s port is to the right. We do this to be able to hit our webserver by just accessing localhost: since our local port 80 is bound to the nginx container, it’s basically like having nginx run locally, but without you having gone through the messy installation. WIN! We do the same thing for the mysql service, so that we can connect to our database from our own machine as well. This lets us connect to our database on 127.0.0.1:3306, which means it’s super easy to set up a tool like Sequel Pro too.
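For example, if you have a MySQL client installed locally, you can connect straight to the containerised database (the credentials come from the environment variables in docker-compose.yml):

```shell
# Connect from the host through the published 3306 port.
mysql -h 127.0.0.1 -P 3306 -u homestead -psecret homestead
```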

Last but not least, we have all those gnarly environment variables. They are used by the mysql Docker image the first time you start your project’s mysql service, to set up the initial configuration. It will automatically

  1. Create a root user with the root password you specified,
  2. Create a database with the name you specified,
  3. Create a user with the username and password you specified that only has access to the specified database.

This is all without you doing any configuration at all!
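For Laravel to actually use this database, your project’s .env should point at the mysql service. Note that DB_HOST is the service name, not localhost, because Laravel runs inside the fpm container; the remaining values just mirror the compose file:

```ini
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
```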

Pulling it together

Now that everything is configured, to actually build the containers and run our environment, simply run docker-compose up -d. Once the containers are built, you should be able to access your Laravel project on localhost. Use docker-compose stop to stop the running containers (docker-compose down will remove/destroy the containers, as opposed to just stopping them).
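Day to day, the workflow with these commands might look something like this:

```shell
docker-compose up -d           # build (if needed) and start all services in the background
docker-compose ps              # check that nginx, fpm and mysql are running
docker-compose logs -f nginx   # follow a single service's logs
docker-compose stop            # stop the containers, keeping them around
docker-compose down            # stop AND remove the containers
```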

Running Commands

The straightforward way to run commands inside a container is with docker exec. So the way you would execute an Artisan command would be

docker exec -it myapp-fpm php artisan migrate

Note here that I used myapp-fpm, rather than just fpm. That’s because we’re using the docker CLI directly rather than docker-compose, so we need to use the full container name. You can find the exact name with docker ps. The default naming scheme is {folder}_{service}_1, but we specified a manual override with container_name in docker-compose.yml. In a similar manner, if you desperately need to get into your container, you can access bash interactively inside it:

docker exec -it myapp-fpm bash


So that’s the quickstart on Docker with Laravel. It might seem like a lot of work, but you quite quickly realise that the vast majority of this is reusable, and you actually end up saving both time and resources by using Docker for local development.