Docker on localhost

A long time ago (well… only 7 years in fact), I started my career in web development. I still remember what my local environment looked like, and how many hours, or even days, I lost because of my poor setup.

Today, I will tell you the story behind my addiction to Docker through the successive evolutions of a huge e-commerce project on which I have been working since its early stages.

I will go quickly over the first steps; they are not the most interesting part anyway.

This specific project is composed of five main components. The first one is mandatory, while the second one can easily be replaced by MySQL and the others can be omitted, even if that impacts the application behavior and makes testing more difficult.

Software       Role
Zend Server    Web server
Percona        Main database server
Redis          Cache server
Varnish        HTTP accelerator
MongoDB        Second database server

Last but not least, the MySQL database still weighs about 30 GB once all the data not needed on a local environment, like customer details or orders, has been removed. And the restore process must be as fast as possible, because someone who wants to retrieve the latest version of the catalog is blocked until the restoration is completed.

Stone Age

That period was when we were using Windows with Zend Server.

In order to build the environment, you download the right executable from the official website and install it. A WAMP stack is then deployed with a default configuration for each service. This kind of installation may be sufficient for simple projects, but you will quickly reach its limits if you want to make adjustments or add services not supported on Windows platforms.

Basically, it’s like using WAMP with multiple powerful features provided by Zend.


Pros:
  • Easy to install because everything is shipped in one executable, like WAMP.
  • You can export/import your server configuration thanks to the Zend Server interface.

Cons:
  • Totally different from production, like WAMP.
  • Impossible to use software not supported on Windows (Percona, Redis, etc.).
  • You have to use MySQL capabilities if you want to restore your local database.

Bronze Age

That period was when we were using Windows with VirtualBox.

Shortly after the beginning of the project, we decided to migrate to something closer to what we had in production. We chose VirtualBox as a replacement for our development environment, with everything packaged into one huge virtual machine.


Pros:
  • Easy to install and configure: all services and configurations are saved into the VM image.
  • Isolated from the OS and can be used as a sandbox.
  • Closer to the production environment.

Cons:
  • The VM image is about 50 GB… It takes hours to export and additional hours to import.
  • You have to use MySQL capabilities if you want to restore your local database.

Iron Age

That period was when we were using macOS with VirtualBox.

Everything ran in exactly the same conditions as before. It was a quick win when we migrated from Windows, chosen in order to avoid losing too much time.

Middle Ages

That period was when we were using macOS with a native installation.

Thanks to Homebrew, a package manager for macOS, it's possible to easily install almost all the software we need. The downside is that we are moving away from a production-like setup, and it's still difficult to switch between different kinds of projects.


Pros:
  • Easy to install because almost everything can be installed with Homebrew.
  • Easy to share the configuration, with a drag-and-drop into a chat room for example.

Cons:
  • Still different from production, like MAMP.
  • Difficult to switch between stacks using different services.
  • You have to use MySQL capabilities if you want to restore your local database.

Modern Era

That period was… wait, it’s now!

If you don’t know yet what Docker is, I suggest you have a look at this overview.

I personally discovered Docker at a PHP conference. Everybody was talking about it, but it was still unknown to me… So I got my hands on it as soon as I got home.

My first implementation was a monolithic container into which I put everything that was configured inside our previous virtual machine. That’s definitely not what you should do with Docker, but it was sufficient for a proof of concept and it convinced me to go further.

My second implementation is a multi-container environment based on Docker Compose (I started using it when it was still called Fig). It currently contains eight different services: Apache, Blackfire, Maildev, MongoDB, MySQL, Nginx, Redis and Varnish. Everything needed to simulate our production stack as closely as possible. If you are curious, you can check my work on GitHub. Several services may appear or disappear in the meantime, since I’m currently working on an infrastructure migration for the project for which that environment was initially built.
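To give you an idea of what such a setup looks like, here is a trimmed sketch of a docker-compose.yml declaring a few of those services. The image tags, service names and paths are illustrative assumptions, not the actual project file; note the named volume for MySQL data, which will matter later.

```yaml
version: "2"

services:
  nginx:
    image: nginx:1.11
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      # Named volume holding the raw MySQL data files.
      - mysql_data:/var/lib/mysql

  redis:
    image: redis:3

volumes:
  mysql_data:
```

With a file like this in place, `docker-compose up -d` is enough to bring the whole stack up in the background.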

At this point, it would be difficult to get any closer to production, since every service can now be configured independently. However, one question remains: how do we speed up the database restore process? Even with this environment and a custom MySQL configuration, it still takes too long.

It's time to use the Docker magic!

From the official documentation: “Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.” In other words, it’s where persistent data like MySQL database files are stored. I will not explain the mechanisms behind Docker volumes in detail; instead, let’s have a look at these two commands.
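They look roughly like this. This is a sketch: the data-container name `mysql_data` and the data path `/var/lib/mysql` are assumptions (the real values depend on your setup), but the flags match the breakdown that follows.

```shell
# Backup: archive the MySQL data directory from the volumes of an
# existing container into backup.tar in the current directory.
docker run --rm --volumes-from mysql_data -v "$(pwd)":/backup busybox \
    sh -c "tar cvf /backup/backup.tar /var/lib/mysql"

# Restore: extract backup.tar from the current directory back into
# the MySQL data directory (paths in the archive are relative to /).
docker run --rm --volumes-from mysql_data -v "$(pwd)":/backup busybox \
    sh -c "tar xvf /backup/backup.tar"
```
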

Don’t be afraid by the format, I will explain it piece by piece.

Instruction             Description
docker run              Run a command in a new container.
--rm                    Automatically remove the container when it exits.
--volumes-from XXXXX    Mount volumes from the specified container(s).
-v $(pwd):/backup       Bind-mount the current directory into /backup.
busybox                 Name of the image to be used by the new container.
sh -c "XXXXX"           Command to be executed within the new container.

To summarize…

The first command runs a new container that reuses the data of an existing container and compresses the MySQL directory into an archive called backup.tar within the current directory.

The second command runs a new container that reuses the data of an existing container and uncompresses an archive called backup.tar from the current directory into the MySQL directory.

Pretty simple when reformulated, isn't it?

Furthermore, because these commands go through the filesystem directly via Docker, MySQL is no longer an issue when importing a huge database. The bottleneck becomes the resources allocated to Docker instead. No more waiting for hours on a MySQL restoration!


Pros:
  • Only one command is enough to bootstrap the whole environment once Docker is installed.
  • Once the environment is configured, it can be shared across the whole team easily.
  • Docker can be used on Linux, Windows and macOS.
  • It’s possible to restore MySQL data in less than 15 minutes instead of several hours.


I think the reasons behind my heavy usage of Docker for local environments are obvious now. It has become a tool almost as essential as Git in my work as a developer. If you work on complex stacks, you should definitely give it a try.

Thanks for reading!