How to find dependencies of containerized Windows apps


Running applications in Windows containers keeps your server clean. The container image must contain all the dependencies that the application needs to run, for example all its DLLs. But sometimes it's hard to figure out why an application doesn't run in a container. Here's my way to find out what's missing.

Process Monitor

To find out what's going on in a Windows container I often use the Sysinternals Process Monitor. It can capture all major syscalls in Windows, such as file system activity, process starts, registry access, and network activity.

But how can we use procmon to monitor inside a Windows container?

Well, I heard today that you can run procmon from the command line to start and stop capturing events. I tried running procmon inside a Windows container, but it doesn't work correctly at the moment.
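
For reference, a capture can be started and stopped from the command line like this (these are the documented Sysinternals flags; as said, inside a container this doesn't work reliably yet):

procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\trace.pml
procmon.exe /Terminate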

So the next possibility is to run procmon on the container host.

On Windows 10 you only have Hyper-V containers. These are "black boxes" from your host operating system's point of view. Process Monitor cannot look inside Hyper-V containers.


To investigate a Windows container we need the "normal" Windows containers without running in Hyper-V isolation. The best solution I came up with is to run a Windows Server 2016 VM and install Process Monitor inside that VM.

When you run a Windows container you can see the container processes in the Task Manager of the Server 2016 VM. And Process Monitor can also see what these processes are doing. We have made some containers out of "glass" to look inside.


Example: PostgreSQL

Let's try this out and put the PostgreSQL database server into a Windows container.

The following Dockerfile downloads the ZIP file of PostgreSQL 10.2, extracts all files and removes the ZIP file again.

# escape=`
FROM microsoft/windowsservercore:10.0.14393.2007 AS download

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

# the PostgreSQL version referenced in the download URL below
ENV PG_VERSION 10.2-1

RUN Invoke-WebRequest $('https://get.enterprisedb.com/postgresql/postgresql-{0}-windows-x64-binaries.zip' -f $env:PG_VERSION) -OutFile 'postgres.zip' -UseBasicParsing ; `
    Expand-Archive postgres.zip -DestinationPath C:\ ; `
    Remove-Item postgres.zip

Now build and run a first container to try out the postgres.exe inside the container.

docker build -t postgres .
docker run -it postgres cmd

Navigate into the C:\pgsql\bin folder and run postgres.exe -h.
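
Inside the container's CMD prompt that is:

cd C:\pgsql\bin
postgres.exe -h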

Screenshot: postgres.exe shows no output

As you can see, nothing happens. No output. You are just back at the CMD prompt.

Now it's time to install procmon.exe on the container host and run it.

Open a PowerShell terminal in your Windows Server 2016 VM and run

iwr -usebasicparsing https://live.sysinternals.com/procmon.exe -outfile procmon.exe

Screenshot: installing procmon

Now run procmon.exe, define a filter to show only file activity for DLL files, and start capturing.

Screenshot: defining the procmon filter

I have a prepared filter available for download: depends.PMF
Go to Filter, then Organize Filters... and then Import...

Now in your container run postgres.exe -h again.

As you can see, Process Monitor captures file access to \Device\HarddiskVolumeXX\pgsql\bin\ which is a folder in your container.

Screenshot: procmon capturing postgres file access

The interesting part is which DLLs cannot be found. MSVCR120.dll is missing - one of the Visual C++ Runtime DLLs.

For other applications you might have to look for config files or folders that are missing and stop your app from running in a Windows container.

Let's append the installation of the missing runtime to the Dockerfile with the next few lines:

RUN Invoke-WebRequest 'http://download.microsoft.com/download/0/5/6/056DCDA9-D667-4E27-8001-8A0C6971D6B1/vcredist_x64.exe' -OutFile vcredist_x64.exe ; `
    Start-Process vcredist_x64.exe -ArgumentList '/install', '/passive', '/norestart' -NoNewWindow -Wait ; `
    Remove-Item vcredist_x64.exe

Build the image and run another container and see if it works now.

Screenshot: postgres.exe shows its usage

Yes, this time we see the postgres.exe usage, so it seems we have solved all our dependency problems.

Go NanoServer

Now we have a Windows Server Core image with the PostgreSQL server. The image size is 11.1 GByte. Let's go one step further and make it a much smaller NanoServer image.

In NanoServer we cannot run MSI packages or vcredist installers, and soon there will also be no PowerShell. But with a so-called multi-stage build it's easy to COPY deploy the PostgreSQL binaries and dependencies into a fresh NanoServer image.

We append some more lines to our Dockerfile. The most important line is the second FROM line, which starts a new stage with the smaller NanoServer image.

FROM microsoft/nanoserver:10.0.14393.2007

Then we COPY the pgsql folder from the first stage into the NanoServer image, as well as the important runtime DLLs.

COPY --from=download /pgsql /pgsql
COPY --from=download /windows/system32/msvcp120.dll /pgsql/bin/msvcp120.dll
COPY --from=download /windows/system32/msvcr120.dll /pgsql/bin/msvcr120.dll

Set the PATH variable to have all tools accessible, expose the standard port and define a command.

RUN setx /M PATH "C:\pgsql\bin;%PATH%"

EXPOSE 5432

CMD ["postgres"]

Now build the image again and try it out with

docker run postgres postgres.exe --help

Screenshot: docker run postgres in NanoServer

We still see the usage, so the binaries also work fine in NanoServer. The final postgres image is down to 1.64 GByte.
If you do this with a NanoServer 1709 or Insider image the size is even smaller at 738 MByte. You can have a look at the compressed sizes on the Docker Hub at stefanscherer/postgres-windows.


Process Monitor can help you find issues that prevent applications from running properly in Windows containers. Run it from a Server 2016 container host to observe your own or a third-party application.

I hope you find this blog post useful and I'd love to hear your feedback and experience about Windows containers. Just drop a comment below or ping me on Twitter @stefscherer.

Terraforming a Windows Insider Server in Azure


There may be different ways to run the Windows Insider Server Preview builds in Azure. Here's my approach to run a Windows Docker engine with the latest Insider build.

Build the Azure VM

On your local machine clone the packer-windows repo which has a Terraform template to build an Azure VM. The template chooses a v3 machine which is able to run nested VMs.

Create a VM in Azure with Terraform

You need Terraform on your local machine which can be installed with a package manager.


On a Mac:

brew install terraform

On Windows:

choco install terraform

Now clone the GitHub repo and go to the template.

git clone https://github.com/StefanScherer/packer-windows
cd packer-windows/nested/terraform

Adjust the variables.tf file with resource group name, account name and password, region and other things. You also need some information for Terraform to create resources in your Azure account. Please read the Azure Provider documentation for details on how to obtain these values.

export ARM_SUBSCRIPTION_ID="uuid"
export ARM_CLIENT_ID="uuid"
export ARM_CLIENT_SECRET="uuid"
export ARM_TENANT_ID="uuid"

terraform init
terraform apply

This command will take some minutes until the VM is up and running. It also runs a provision script to install further tools for you.

RDP into the Packer builder VM

Now log into the Azure VM with an RDP client. This VM has Hyper-V installed as well as Packer and Vagrant, the tools we will use next.

Screenshot: Packer and Vagrant in the Azure VM

Build the Insider VM

The next step is to build the Windows Insider Server VM. We will use Packer for the task. This produces a Vagrant box file that can be re-used locally on a Windows 10 machine.

Clone the packer-windows repo and run the Packer build with the manually downloaded Insider ISO file.

git clone https://github.com/StefanScherer/packer-windows
cd packer-windows

packer build --only=hyperv-iso --var iso_url=~/Downloads/Windows_InsiderPreview_Server_2_17074.iso windows_server_insider_docker.json

This command will take some minutes as it also downloads the Insider Docker images to have them cached when you start a new VM.

Add the box file so it can be used by Vagrant.

vagrant box add windows_server_insider_docker windows_server_insider_docker_hyperv.box

Boot the Insider VM

Now we're using Vagrant to boot the Insider VM. I'll use my windows-docker-machine Vagrant template which I also use locally on a Mac or Windows 10 laptop.

git clone https://github.com/StefanScherer/windows-docker-machine
cd windows-docker-machine
vagrant plugin install vagrant-reload

vagrant up --provider hyperv insider

This will spin up a VM and create TLS certificates for the Docker engine running in the Windows Insider Server VM.

You could use it from the Azure VM, but we want to make the nested VM reachable from our laptop.

Now retrieve the IP address of this nested VM to add some port mappings so we can access the nested VM from our local machine.

vagrant ssh-config

Use the IP address shown for the next commands, e.g.

netsh interface portproxy add v4tov4 listenport=2376 listenaddress=<Azure-VM-IP> connectport=2376 connectaddress=<nested-VM-IP>
netsh interface portproxy add v4tov4 listenport=9000 listenaddress=<Azure-VM-IP> connectport=9000 connectaddress=<nested-VM-IP>
netsh interface portproxy add v4tov4 listenport=3390 listenaddress=<Azure-VM-IP> connectport=3389 connectaddress=<nested-VM-IP>
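
You can verify the mappings afterwards with:

netsh interface portproxy show all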

Create Docker TLS for external use

As we want to access this Docker engine from our local laptop we have to re-create the TLS certs with the FQDN of the Azure VM.

Screenshot: RDP to the Azure VM and the nested VM

Now RDP into the nested VM through port 3390 from your laptop.

You will see a CMD terminal. Run powershell to enter a PowerShell terminal.

Run the create-machine.ps1 provision script again with the IP address and the FQDN of the Azure VM. Also specify the path of your local home directory (in my case -machineHome /Users/stefan) to make the docker-machine configuration work.

C:\Users\demo\insider-docker-machine\scripts\create-machine.ps1 -machineHome /Users/stefan -machineName az-insider -machineIp <Azure-VM-IP> -machineFqdn az-insider-01.westeurope.cloudapp.azure.com

Run Docker containers

You can copy the generated TLS certificates from the nested VM through the RDP session back to your home directory in $HOME/.docker/machine/machines folder.


Then you can easily switch the Docker environment variables locally on your Mac:


eval $(docker-machine env az-insider)

or Windows:

docker-machine env az-insider | iex

Now you should be able to run Docker commands like

docker images
docker run -it microsoft/nanoserver-insider cmd


We have used a lot of tools to create this setup. If you do this only once it seems like more steps than needed. But keep in mind the Insider builds are shipped regularly, so you will do some steps again and again.

To repeat some of these steps, tools like Packer and Vagrant help you build VMs faster, just as Docker helps you ship your apps faster.

  • Packer helps you repeat building a VM from a new ISO.
  • Vagrant helps you repeat booting fresh VMs. Destroy early and often. Rebuild is cheap.
  • Docker helps you repeat creating and running applications.

If you have another approach to run Insider builds in Azure please let me know. I'd love to hear your story. Please use the comments below if you have questions or want to share your setup.

If you liked this blog post please share it with your friends. You can follow me on Twitter @stefscherer to stay updated with Windows containers.

A sneak peek at LCOW


Last week a major pull request to support Linux Containers on Windows (LCOW) landed in the master branch of the Docker project. With that feature enabled you will be able to run both Linux and Windows containers side-by-side with a single Docker engine.

So let's have a look at how a Windows 10 developer machine will look in the near future.

Screenshot: LCOW on Windows 10

  • The Docker command docker ps lists all your running Linux and Windows containers.
  • You can use volumes to share data between containers and the host.
  • The containers can talk to each other over the container networks.
  • You can publish ports to your host and use localhost. But wait, this is still a Windows Insider feature coming to Windows 10 1803 release.

Running Linux containers

At the moment you need to specify the --platform option to pull Linux images. This option is also needed when the specific Docker image is a multi-arch image for both Linux and Windows.

docker pull --platform linux alpine

Once you have pulled Linux images you can run them without the --platform option.

docker run alpine uname -a

To allow Windows to run Linux containers, a small Hyper-V VM is needed. The LinuxKit project provides an image for LCOW at https://github.com/linuxkit/lcow.

Shared volumes

Let's see how containers of different platforms can share data in a simple way. You can bind mount a volume into Linux and Windows containers.

Screenshot: LCOW in action with shared volumes

The following example shares a folder from the host with a Linux and Windows container.

First create a folder on the Windows 10 host:

cd \
mkdir host

Run a Linux container

On the Windows 10 host run a Linux container and bind mount the folder as /test in the Linux container.

docker run -it -v C:\host:/test alpine sh

In the Linux container create a file in that mounted volume.

uname -a > test/hello-from-linux.txt

Run a Windows container

On the Windows 10 host run a Windows container and bind mount the folder as C:\test in the Windows container.

docker run -i -v C:\host:C:\test microsoft/nanoserver:1709 cmd

In the Windows container create a file in that mounted volume.

ver > test\hello-from-windows.txt


On the Windows 10 host list the files in the shared folder

PS C:\> dir host

    Directory: C:\host

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        1/21/2018   4:32 AM             85 hello-from-linux.txt
-a----        1/21/2018   4:33 AM             46 hello-from-windows.txt

This is super convenient for development environments to share configuration files or even source code.

Drafting mixed workloads

With Docker Compose you can spin up a mixed container environment. I just did these first steps to spin up a Linux and Windows web server.

version: "3.2"


    image: nginx
      - type: bind
        source: C:\host
        target: /test
      - 80:80

    image: stefanscherer/hello-dresden:0.0.3-windows-1709
      - type: bind
        source: C:\host
        target: C:\test
      - 81:3000

      name: nat
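
With a compose file like this (the service names above are my reconstruction, the key structure follows the Compose file v3 format) both containers come up with a single command:

docker-compose up -d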

Think of a Linux database and a Windows front end, or vice versa...

Build your own LCOW test environment

If you want to try LCOW yourself I suggest spinning up a fresh Windows 10 1709 VM.


I have tested LCOW with a Windows 10 1709 VM in Azure. Choose a v3 machine to have a nested hypervisor, which you will need to run Hyper-V containers.

Containers and Hyper-V

Enable the Containers feature and Hyper-V feature:

Enable-WindowsOptionalFeature -Online -FeatureName containers -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart


Now install the LinuxKit image for LCOW. I grabbed the latest from a CircleCI artifact, but soon there will be a new release in the linuxkit/lcow repo.

Invoke-WebRequest -OutFile "$env:TEMP\linuxkit-lcow.zip" "https://23-111085629-gh.circle-artifacts.com/0/release.zip"
Expand-Archive -Path "$env:TEMP\linuxkit-lcow.zip" -DestinationPath "$env:ProgramFiles\Linux Containers" -Force

Docker nightly build

Now download and install the Docker engine. As this pull request only landed in master branch we have to use the nightly build for now.

Invoke-WebRequest -OutFile "$env:TEMP\docker-master.zip" "https://master.dockerproject.com/windows/x86_64/docker.zip"
Expand-Archive -Path "$env:TEMP\docker-master.zip" -DestinationPath $env:ProgramFiles -Force

The next command installs the Docker service and enables the experimental features.

. $env:ProgramFiles\docker\dockerd.exe --register-service --experimental

Set the PATH variable to have the Docker CLI available.

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";$($env:ProgramFiles)\docker", [EnvironmentVariableTarget]::Machine)

Now reboot the machine to finish the Containers and Hyper-V installation. After the reboot the Docker engine should be up and running and the Docker CLI can be used from the PowerShell terminal.
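
From PowerShell that reboot plus a quick sanity check could look like this:

Restart-Computer -Force

# after the reboot, in a new PowerShell terminal:
docker version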

Local Vagrant environment

If you have Vagrant installed with Hyper-V or VMware as your hypervisor, you can spin up a local test environment with a few commands.

First clone my GitHub repo docker-windows-box which has an LCOW environment to play with.

git clone https://github.com/StefanScherer/docker-windows-box
cd docker-windows-box
cd lcow
vagrant up

This will download the Vagrant base box if needed, spin up the Windows 10 VM and automatically install all the features shown above.


With all these new Docker features coming to Windows in the next few months, Windows 10 is evolving into the most interesting developer platform of 2018.

Imagine what's possible: Use a docker-compose.yml to spin up a mixed scenario with Linux and Windows containers, live debug your app from Visual Studio Code, and much more.

If you liked this blog post please share it with your friends. You can follow me on Twitter @stefscherer to stay updated with Windows containers.

PoC: How to build images for 1709 without 1709


First of all: Happy Halloween! In this blog post you'll see some spooky things - or magic? Anyway I found a way to build Windows Docker images based on the new 1709 images without running on 1709. Sounds weird?

Disclaimer: The tools and described workflow to build such images on old Windows Server versions may break at any time. It works for me and some special cases, but it does not mean it works for any other use-case.

The 2016 <-> 1709 gap

As you might know from my previous blog post there is a gap between the old and new Windows images. You cannot pull the new 1709 Docker images on current Windows Server 2016. This means you also cannot build images without updating your build machines to Windows Server 1709.


My favorite CI service for Windows is AppVeyor. They provide a Windows Server 2016 build agent with Docker and the latest base images installed. So it is very simple and convenient to build your Windows Docker images there. For example all my dockerfiles-windows Dockerfiles are built there and the images are pushed to Docker Hub.

I guess it will take a while until we can choose another build agent to start building for 1709 there.

But what should I do in the meantime?

  • Should I build all 1709 images manually on a local VM?
  • Or spin up a VM in Azure? That has been possible since today.

But then I don't have the nice GitHub integration. And I have to do all the maintenance of a CI server (cleaning up disk space and so on) myself. Oh I don't want to go that way.

Docker images have layers

Let's have a closer look at what a Docker image looks like. Each Docker image consists of one or more layers. Each layer is read-only. Any change will be done in a new layer on top of the underlying ones.

For example the Windows Docker image of a Node.js application looks more or less like this:


At the bottom you find the Windows base image, then we add the Node.js runtime. Then we can add our application code on top of that. This is how a Dockerfile works. Every FROM, RUN, ... is an extra layer.

Technically all layers are just tarballs with files and directories in them. So when the application and framework layers are independent of the OS layer, it should be possible to rearrange them with a new OS layer.
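
You can look at these tarballs yourself: exporting an image with the standard Docker CLI writes all layers into one archive (shown here in a Linux/macOS shell; the image name is just an example):

docker image save -o myimage.tar myimage
tar -tf myimage.tar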

Rebase Docker image

That is what I have tried to find out. I studied the Docker Hub API and wrote a proof of concept to "rebase" a given Windows Docker image to swap the old Windows OS layers with another one.

The tool works only with information from Docker Hub so it only retrieves metadata and pushes a new manifest back to the Docker Hub. This avoids downloading hundreds of megabytes for the old nanoserver images.

Use cases

  • Easily apply Windows Updates to an existing Windows app in seconds. Only the update layer will be swapped.
  • Provide your app for all available Windows Update layers to avoid download.
  • Sync multiple images based on different Windows Update layers to the current one to avoid downloading several different update layers for a multi-container application.
  • Create images for Server 1709 without having a machine for it.

But there are also some limitations:
  • You cannot move an app from a windowsservercore image to the nanoserver image.
  • You also cannot move PowerShell scripts into the 1709 nanoserver image as there is no PowerShell installed.
  • Your framework or application layer may have modified the Windows registry at build time. It then carries the old registry that may not fit the new base layer.
  • Moving such old application layers on top of new base layers is some kind of time travel. Be warned that this tool may create corrupt images.

You can find the rebase-docker-image tool on GitHub. It is a Node.js command line tool which is also available on NPM.
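
Installing the tool is a one-liner:

npm install -g rebase-docker-image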

The usage looks like this:

$ rebase-docker-image \
    stefanscherer/hello-freiburg:windows \
    -t stefanscherer/hello-freiburg:1709 \
    -b microsoft/nanoserver:1709

You specify the existing image, e.g. "stefanscherer/hello-freiburg:windows", which is based on nanoserver 10.0.14393.x.

With the -t option you specify the target image name where the final manifest should be pushed.

The -b option specifies the base image you want to use, so most of the time the "microsoft/nanoserver:1709" image.


When we run the tool it does its job in only a few seconds:

Retrieving information about source image stefanscherer/hello-freiburg:windows
Retrieving information about source base image microsoft/nanoserver:10.0.14393.1715
Retrieving information about target base image microsoft/nanoserver:1709
Rebasing image
Pushing target image stefanscherer/hello-freiburg:1709

Now on a Windows Server 1709 we can run the application.


I tried this tool with some other Windows Docker images and was able to rebase the golang:1.9-nanoserver image to have a Golang build environment for 1709 without rebuilding the Golang image by myself.

But I also found situations where the rebase didn't work, so don't expect it to work everywhere.

AppVeyor CI pipeline

I also want to show you a small CI pipeline using AppVeyor to build a Windows image with curl.exe installed and provide two variants of that Docker image, one for the old nanoserver and one with the nanoserver:1709 image.

The Dockerfile uses a multi-stage build. In the first stage we download and extract curl and its DLLs. The second stage starts again with the empty nanoserver (the fat one for Windows Server 2016) and then we just COPY deploy the binary into the fresh image. An ENTRYPOINT finishes the final image.

FROM microsoft/nanoserver AS download
# the curl version to download; 7.56.1 matches the tags used below
ARG CURL_VERSION=7.56.1
ADD https://skanthak.homepage.t-online.de/download/curl-$CURL_VERSION.cab curl.cab
RUN expand /R curl.cab /F:* .

FROM microsoft/nanoserver
COPY --from=download /curl/AMD64/ /
COPY --from=download /curl/CURL.LIC /
ENTRYPOINT ["curl.exe"]

This image can be built on AppVeyor and pushed to the Docker Hub.

The push.ps1 script pushes this image to Docker Hub.

docker push stefanscherer/curl-windows:$version-2016

Then the rebase tool will be installed and the 1709 variant will be pushed as well to Docker Hub.

npm install -g rebase-docker-image
rebase-docker-image `
  stefanscherer/curl-windows:$version-2016 `
  -t stefanscherer/curl-windows:$version-1709 `
  -b microsoft/nanoserver:1709

To provide my users the best experience I also draft a manifest list, just like we did for multi-arch images at the Captains Hack day. The final "tag" then contains both Windows OS variants.

On Windows you can use Chocolatey to install the manifest-tool. In the future this feature will be integrated into the Docker CLI as "docker manifest" command.

choco install -y manifest-tool
manifest-tool push from-spec manifest.yml

The manifest.yml lists both images and joins them together to the final stefanscherer/curl-windows image.

image: stefanscherer/curl-windows:7.56.1
tags: ['7.56', '7', 'latest']
manifests:
  - image: stefanscherer/curl-windows:7.56.1-2016
    platform:
      architecture: amd64
      os: windows
  - image: stefanscherer/curl-windows:7.56.1-1709
    platform:
      architecture: amd64
      os: windows

So on both Windows Server 2016 and Windows Server 1709 the users can run the same image and it will work.

PS C:\Users\demo> docker run stefanscherer/curl-windows
curl: try 'curl --help' or 'curl --manual' for more information

This requires the next Docker 17.10 EE version to work correctly, but it should be available soon. Older Docker engines may pick the wrong image from the manifest list and fail to run it.


This way to "rebase" Docker images works astonishingly good, but keep in mind that this is not a general purpose solution. It is always better to use the correct version on the host to rebuild your Docker images the official way.

Please use the comments below if you have further questions or share what you think about this idea.


A closer look at Docker on Windows Server 1709


Today Microsoft has released Windows Server 1709 in Azure. The ISO file is also available in the MSDN subscription to build local VMs. But spinning up a cloud VM makes it easier for more people.

So let's go to Azure and create a new machine. The interesting VM for me is "Windows Server, version 1709 with Containers" as it comes with Docker preinstalled.

Screenshot: searching for 1709 in the Azure portal

After a few minutes you can RDP into the machine. But watch out, it is only a Windows Server Core, so there is no full desktop. But for a Docker host this is good enough.

Docker 17.06.1 EE preinstalled

As you can see the VM comes with the latest Docker 17.06.1 EE and the new 1709 base images installed.

Smaller "1709" base images

The great news is that the base images got smaller. For comparison, here are the images of a Windows Server 2016:

Screenshot: Windows Server 2016 images

So with Windows Server 1709 the WindowsServerCore image is only half the size of the original. And the NanoServer image is about a quarter of the size, with only 93 MB on the Docker Hub.


That makes the NanoServer image really attractive for deploying modern microservices. As you can see, the "latest" tag is still pointing to the old image. As the 1709 release is a semi-annual release supported for the next 18 months and the current Windows Server 2016 is the LTS version, the latest tags still point to the older, thicker images.

So when you want to go small, then use the "1709" tags:

  • microsoft/windowsservercore:1709
  • microsoft/nanoserver:1709
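
Pulling these smaller base images is straightforward:

docker pull microsoft/windowsservercore:1709
docker pull microsoft/nanoserver:1709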

Where is PowerShell?

The small size of the NanoServer image comes at a cost: there is no PowerShell installed inside the NanoServer image.

So is that really a problem?

Yes and no. Some people have started to write Dockerfiles and installed software using PowerShell in the RUN instructions. This will be a breaking change.

The good news is that there will be a PowerShell Docker image based on the small nanoserver:


Currently there is PowerShell 6.0.0 Beta 9 available and you can run it with

docker run -it microsoft/powershell:nanoserver

As you can see PowerShell takes 53 MB on top of the 93 MB nanoserver.

So if you really want to deploy your software with PowerShell, then you might use this base image in your FROM instruction.

But if you deploy a Golang, Node.js, .NET Core application you probably don't need PowerShell.

My experience with Windows Dockerfiles is that the common tasks are

  • downloading a file, zip, tarball from the internet
  • extracting the archive
  • setting an environment variable like PATH

These steps could be done with tools like curl (yes, I think of the real one and not the curl alias in PowerShell :-) and some other tools like unzip, tar, ... that are way smaller than the complete PowerShell runtime.

I did a small proof of concept to put some of the tools mentioned into a NanoServer image. You can find the Dockerfile and others in my dockerfiles-windows GitHub repo.


As you can see it only takes about 2 MB to have download and extracting tools. The remaining cmd.exe in the NanoServer image is still good enough to run these tools in the RUN instructions of a Dockerfile.
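
As a sketch, a Dockerfile using such a tools image could download and extract a Node.js runtime without any PowerShell (the base image name below is hypothetical, the URL follows the official nodejs.org download pattern):

FROM stefanscherer/nanoserver-tools:1709
RUN curl.exe -fSL -o node.zip https://nodejs.org/dist/v8.9.3/node-v8.9.3-win-x64.zip && unzip node.zip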

Multi-stage builds

Another approach to build small images based on NanoServer comes with Docker 17.06. You can use multi-stage builds which brings you so much power and flexibility into a Dockerfile.

You can start with a bigger image, for example the PowerShell image or even the much bigger WindowsServerCore image. In that stage of the Dockerfile you have all scripting languages or even build tools or MSI support.

The final stage then uses the smallest NanoServer image and COPY instructions to deploy your production image.

Can I use my old images on Server 1709?

Well, it depends. Let's test this with a popular application from portainer.io. When we try to run the application on a Windows Server 1709 we get the following error message: The operating system of the container does not match the operating system of the host.


We can make it work when we run the old container with Hyper-V isolation:
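
The only change needed is the isolation flag, e.g. (further portainer options omitted here):

docker run -d --isolation=hyperv portainer/portainer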


For the Hyper-V isolation we need Hyper-V installed. This works in Azure with the v3 machines that allow nested virtualization. If you are using Windows 10 1709 with Hyper-V then you can also run old images in Docker 4 Windows.

But there are many other situations where you are out of luck:

  • other cloud providers that do not have nested virtualization
  • VirtualBox

So my recommendation is to create new Docker images based on 1709 that can be used on Windows 10 1709 or Windows Server 1709, even without Hyper-V. Another advantage is that your users have much smaller downloads and can run your apps much faster.

Can I use the 1709 images on Server 2016?

No. If you try to run one of the 1709 based images on a Windows Server 2016 you see the following error message. Even running it with --isolation=hyperv does not help here as the underlying VM compute of your host does not have all the new features needed.



With Docker on Windows Server 1709 the container images get much smaller. Your downloads are faster and smaller, the containers start faster. If you're interested in Windows Containers then you should switch over to the new server version. The upcoming Linux Containers on Windows feature will run only on Windows 10 1709/Windows Server 1709 and above.

As a software vendor providing Windows Docker images you should provide both variants as people still use Windows 10 and Windows Server 2016 LTS. In a following blog post I'll show a way that makes it easy for your users to just run your container image regardless of the host operating system they have.

I hope you are as excited as I am about the new features of the new Windows Server 1709. If you have questions feel free to drop a comment below.


Cross-build a Node.js app with Docker and deploy to IBM Cloud


After the DockerCon EU and the Moby Summit in Copenhagen last week we also had an additional Docker Captain's Hack Day. After introducing our current projects to the other Captains we also had time to work together on some ideas.

"Put all Captains available into a room, feed them well and see what's happening."


Modernizing Swarm Visualizer

One of the ideas was Swarm Visualizer 2.0. Michael Irwin came up with the idea to rewrite the current Visualizer to be event driven, use a modern React framework and clean up the code base.

The old one uses a dark theme and shows lots of details for the services with small fonts.

Screenshot: the old Swarm visualizer

Here's a screenshot of an early version of the new UI. With a click on one of the tasks you get more details about that task and its service. All information is updated immediately when you update the service (e.g. add or remove labels).

Screenshot: the new Swarm visualizer

You can try this new Swarm visualizer yourself with the following command:

docker container run \
  --name swarm-viz \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mikesir87/swarm-viz

I joined Michael's table as I was curious whether we can have this visualizer for Windows, too. Especially since the new Windows Server 1709 makes mapping the Docker API into a Windows container as easy as on Linux.

In this blog post I focus on how to build a Node.js app with Docker and don't look into the details of the app itself. I'll show how to improve the Dockerfile to build for multiple platforms and finally how to build a CI pipeline for that. You can find the project on github.com/mikesir87/swarm-viz.

Initial Dockerfile

The application is built inside a Docker container. So you can even build it without any developer tools installed, you only need Docker.

Let's have a look at the first version of the Dockerfile for the Linux image. It is a multi-stage build with three stages:

# Build frontend
FROM node:8.7-alpine as frontend
WORKDIR /app
COPY client/package.json .
RUN npm install
COPY client/ .
RUN npm run build

# Build backend
FROM node:8.7-alpine as backend
WORKDIR /app
COPY api/package.json .
RUN npm install
COPY api/ .
RUN npm run build

# Put them together
FROM node:8.7-alpine
WORKDIR /app
COPY api/package.json .
RUN npm install --production
COPY --from=backend /app/dist /app/dist
COPY --from=frontend /app/build /app/build
CMD node /app/dist/index.js

The first stage uses FROM node:8.7-alpine to build the frontend in a container.

The second stage builds the backend in another Alpine container. During that build you also need some development dependencies that aren't needed for the final image.

In the third stage only the dependencies that are relevant at runtime are installed with npm install --production. All artifacts needed from the other stages are also copied into the final image.

Make FROM more flexible for Windows

I tried to build the app for Windows Server 1709 and had to create a second Dockerfile, as I had to use another FROM because node does not have a Windows variant in the official images. And Windows Server 1709 just came out, so I had to create a Node.js base image for Windows myself.

So what I did was copy the Dockerfile to Dockerfile.1709 and change all the

FROM node:8.7-alpine

lines into

FROM stefanscherer/node-windows:1709

But now we have duplicated the Dockerfile "code" for only this little difference.

Fortunately you now can use build arguments for the FROM instruction. So with only a little change we can have ONE Dockerfile for Linux and Windows.

ARG node=node:8.7-alpine
FROM $node as frontend


On Linux you still can build the image as before without any change.

On Windows I now was able to use this Dockerfile with

docker image build -t viz `
  --build-arg node=stefanscherer/node-windows:1709 .

and use a Windows Node.js base image for all stages. First pull request done. Check! 😊

And running the manually built image in Windows Server 1709 looks very similar to Linux:

docker container run `
  -p 3000:3000 `
  -u ContainerAdministrator `
  -v //./pipe/docker_engine://./pipe/docker_engine `
  viz

Going multi-arch

We showed the Windows Swarm visualizer to other Captains and we discussed how to go to more platforms. Phil Estes, a very active member of the Docker community who's helping push the multi-architecture support in Docker forward and the maintainer of the manifest-tool, commented:

With Golang it is easy to build multi-arch images, just cross-build a static binary with GOARCH=bar go build app.go and copy the binary in an empty FROM scratch image.

Hm, we use Node.js here, so what has to be done instead?


Well, instead of the scratch image we need the node image for the Node.js runtime. So we had to choose the desired architecture and then copy all sources and dependencies into that image.

Our Node.js application uses Express, Dockerode and some other dependencies, that are platform independent. So this simple copy approach should do it, we thought.

We added another build stage in the Dockerfile where we switch to the desired platform. You may know, the node image on Docker Hub is already a multi-arch image. But in this case we want to build - let's say on Linux/amd64 - for another platform like the IBM s390 mainframe.

With another build argument to specify the target platform for the final stage we came up with this:

ARG node=node:8.7-alpine
ARG target=node:8.7-alpine

FROM $node as frontend
# ... (frontend, backend and proddeps stages as before) ...

FROM $target
COPY --from=proddeps /app /app
CMD node /app/dist/index.js


As Phil works for IBM he could easily verify our approach. We built an IBM version of the Swarm visualizer with

docker image build -t mikesir87/swarm-viz \
  --build-arg target=s390x/node:8.7 .

and pushed it to the Docker Hub. Phil then pulled and started the container in IBM Cloud and showed us the visualizer UI. Hurray!


The second pull request was accepted. Check! 🎉

Now we needed some more automation to build and push the Docker images.

Adding a multi-arch CI pipeline

I've done that several times for my Raspberry Pi projects, so I cherry-picked the relevant parts from other repos. For the CI pipeline we chose Travis CI, but any other CI cloud service could be used that allows multi-stage builds.

The .travis.yml uses a matrix build for all architectures. Currently we're building it for only two platforms:

sudo: required

services:
  - docker

env:
  matrix:
    - ARCH=amd64
    - ARCH=s390x

script:
  - ./travis-build.sh


The travis-build.sh is then called for each architecture of the matrix and runs the corresponding build.

docker image build -t mikesir87/swarm-viz \
    --build-arg target=$ARCH/node:8.7 .


As a final step in the .travis.yml we push every image to Docker Hub and tag it with the Git commit id. At this early stage of the project this is good enough. Later on you can think of tagged release builds etc.

The travis-deploy.sh pushes the Docker image for each architecture to the Docker Hub with a different tag using the $ARCH variable we get from the matrix build.

docker image push "$image:linux-$ARCH-$TRAVIS_COMMIT"

In the amd64 build we additionally download and use the manifest-tool to push a manifest list with the final tag.

manifest-tool push from-args \
    --platforms linux/amd64,linux/s390x \
    --template "$image:OS-ARCH-$TRAVIS_COMMIT" \
    --target "$image:latest"

You can verify that the latest tag is already a manifest list with another Docker image provided by Phil

$ docker container run --rm mplatform/mquery mikesir87/swarm-viz
Image: mikesir87/swarm-viz:latest
 * Manifest List: Yes
 * Supported platforms:
   - amd64/linux
   - s390x/linux

Future improvements

In the near future we will also add a Windows build using AppVeyor CI to provide Windows images and also put them into the manifest list. This step would also be needed for Golang projects as you cannot use the empty scratch image on Windows.


If you watch closely you'll see that we have used node:8.7 for the final stage. There is no multi-arch alpine image, so there also is no node:8.7-alpine multi-arch image. But the maintainers of the official Docker images are working hard to add this missing piece to have small images for all architectures.

$ docker container run --rm mplatform/mquery node:8.7-alpine
Image: node:8.7-alpine
 * Manifest List: Yes
 * Supported platforms:
   - amd64/linux


At the end of the Hack day we were really excited about how far we had come in only a few hours and learned that cross-building Node.js apps with Docker and deploying them as multi-arch Docker images isn't that hard.

Best of all, the users of your Docker images don't have to think about these details. They just can run your image on any platform. Just use the command I showed at the beginning as this already uses the multi-arch variant of the next Swarm visualizer app.

So give multi-arch a try in your next Node.js project to run your app on any platform!

If you want to learn more about multi-arch (and you want to see Phil with a bow tie) then I can recommend the Docker Multi-arch All the Things talk from DockerCon EU with Phil Estes and Michael Friis.

In my latest multi-arch slidedeck there are also more details about the upcoming docker manifest command that will replace the manifest-tool in the future.

Thanks Michael for coming up with that idea, thanks Phil for the manifest-tool and testing the visualizer. Thanks Dieter and Bret for the photos. You can follow us on Twitter to see what these Captains are doing next.


DockerCon: LCOW and Windows Server 1709


Last week was a busy week as a Docker Captain. Many of us came to Copenhagen to DockerCon EU 2017. You may have heard of the surprising news about Kubernetes coming to Docker. But there were also some other new announcements about Windows Containers.

Docker on Windows Workshop

On Monday I helped Elton Stoneman in his Docker on Windows Workshop. This time it was a full-day workshop and it was fully packed with 50 people.

Photo: Docker on Windows Workshop

Elton always runs the workshop at a rapid pace, but don't worry, the workshop material is all publicly available on GitHub. So we went through dockerizing ASP.NET apps, adding Prometheus, Grafana and an ELK stack for monitoring, building a Jenkins CI pipeline and finally running a Docker Swarm. There are lots of things to look up in the material. If you prefer a book, I can recommend his Docker on Windows book which is also fully packed with many tips and tricks.

LCOW - The Inside Story

One of my favorite talks was by John Starks from Microsoft about Linux Containers on Windows - The Inside Story. He explained how LinuxKit is used to run a small Hyper-V VM for the Linux containers to provide the Linux kernel. On his Windows 10 1709 machine he also gave pretty good live demos. The video is online and is worth watching.

Photo: Linux and Windows containers side-by-side

In the photo you can see an alpine and a nanoserver container running side-by-side. So you will no longer need to switch between Linux and Windows containers, it just works. He also showed that volumes work between Linux and Windows containers. This demo was done with a special Docker engine, as not all pull requests had been merged yet. But it was still challenging for me to try this on my own machine ...

Windows Server 1709

During the DockerCon week Microsoft has announced the availability of Windows Server Version 1709 for download. I first looked at the Azure Portal, but found nothing yet. I also couldn't find the downloads.

So after the LCOW talk I used a Windows 10 VM in Azure and installed the Fall Creators Update to have 1709 on that desktop machine. I found the missing pull request, compiled a Docker engine from source, and then I had my LCOW moment:


When you see this working for the first time and know what technical details had to be solved to make it look so simple and easy - awesome!

The next day I found the Windows Server 1709 ISO in my MSDN subscription. So I could start working on a Packer template in my packer-windows GitHub repo to automate the creation of such Windows VMs. But DockerCon is for meeting people and learning new things: Nicholas Dille went another very interesting way to build a VM with Docker without running Windows Setup.

Smaller Windows images

In the last months we could follow the progress of the Windows Server in several Insider builds. I blogged about the smaller NanoServer Insider images in July going down to 80-90 MByte. Now with the new release of Windows Server 1709 and Windows 10 version 1709 we can use official images.

  • microsoft/windowsservercore:1709
  • microsoft/nanoserver:1709
  • microsoft/dotnet:2.0.0-*-nanoserver-1709
  • microsoft/aspnet:4.7.1-windowsservercore-1709

The biggest discussion is about having no PowerShell in the small nanoserver image. For me it's a nice fit to just COPY deploy microservices into the Windows image.

I haven't seen an official PowerShell base image based on nanoserver, but there is at least the beta version

  • microsoft/powershell:6.0.0-beta.9-nanoserver-1709

I also have pushed some images to the Docker Hub to get started with other languages and tools.


If you don't have Hyper-V installed in Windows Server 1709 (maybe you are running a VM in the cloud) then you cannot run older Windows Docker images on the new server. All images have to be built based on the new 1709 base images.

Windows 10 users always use Hyper-V to run Linux or Windows containers, so you don't feel that hard constraint on your developer machine.

It will be interesting to see how the multiple Windows versions evolve and when the next Insider program is giving us early access to the upcoming features.

Captains Hack Day

On our Docker Captains Hack Day Michael Irwin started a better Swarm Visualizer 2.0. During the day we added a first CI pipeline and - of course - Windows support. But not only Windows! With some magic multi-stage multi-arch builds we also managed to cross-build the visualizer on an Intel machine and create a Docker image for IBM s390x mainframes. Phil Estes tested the image in the IBM Cloud. I'll write a more detailed blog post about how to cross-build Node.js apps with Docker.


That was a fascinating week at DockerCon. Thanks to Jenny, Ashlinn, Victor, Mano ... for making this event so wonderful. I had a lot of hallway tracks to talk with many people about Windows containers in development and production. Share and learn!


Use Docker to Search in 320 Million Pwned Passwords


This week Troy Hunt, a security researcher, announced a freely downloadable list of pwned passwords. Troy is the creator of Have I Been Pwned?, a website and service that will notify you when one of your registered email addresses has been compromised by a data breach.

In his latest blog post he introduced 306 Million Freely Downloadable Pwned Passwords with an update of another 14 Million just on the following day. He also has set up an online search at https://haveibeenpwned.com/Passwords

You can enter passwords and check if they have been compromised. But do not enter actively used passwords here, even if Troy is a nice person living in sunny Australia.

Screenshot: the Pwned Passwords online service

My recommendation is

  1. If you are in doubt if your password has been pwned, just change it first and then check the old one in the online form.
  2. Use a Password manager like 1Password to create an individual long random password for each service you use.

But the huge password list is still quite interesting to work with.

Let's build a local search

What you can do is download the list of passwords (about 5 GByte compressed) and search locally in a safe place. You won't get the cleartext passwords, but only SHA1 sums of them. But we can create SHA1 sums of the passwords we want to search for in this huge list.

You can download the files that are compressed with 7-Zip. You also need a tool to create a SHA1 sum of a plain text. And then you need another tool, a database or algorithm to quickly search in that text file that has nearly 320 Million lines.
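
Creating such a SHA1 sum is a one-liner with openssl, for example:

echo -n 'Passw0rd!' | openssl sha1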

Use Docker for the task

I immediately thought of a container that has all these tools installed. But I didn't want to add the huge password lists into that container, as that would build a Docker image of about 12 GByte, or probably a 5-6 GByte compressed image on the Docker Hub.

The password files should be persisted locally on your laptop and mounted into the container to search in them with the tools needed for the task.

And I want to use some simple tools to get the work done. A first idea was born in the comments of Troys blog post where someone showed a small bash script with grep to search in the file.

I first tried grep, but this took about 2 minutes to find the hash in the file. So I searched a little bit and found sgrep - a tool to grep in sorted files. Luckily the password files are sorted by the SHA1 hash. But I found only the source code and there is no standard package to install it. So we also need a C compiler for that.

In times before Docker you had a lot of stress installing many tools on your computer. But let's see how Docker can help us with all these steps.

Build the Docker image

I found the sources of sgrep on GitHub and we will need Make and a C compiler to build the sgrep binary.

I will use a multi-stage build Dockerfile and explain every single line. You can build the image line by line and see the benefits of build caches while working on the Dockerfile. So after adding a line to the file you can run docker build -t pwned-passwords . to build and update the image.

For the beginning let's choose a small Linux base image. We will name the first stage as build. So the Dockerfile starts with

FROM alpine:3.6 AS build

The next step is we have to install Git, Make and the C compiler with its header files.

RUN apk update && apk add git make gcc musl-dev

Now we clone the GitHub repo with the source code of sgrep.

RUN git clone https://github.com/colinscape/sgrep

In the next line I'll create a bin folder that is needed for the build process. Then we go to the source directory and run the make command as there is a Makefile in that directory.

RUN mkdir sgrep/bin && cd sgrep/src && make

After these steps we have the sgrep binary compiled for Alpine Linux. But we also have installed a ton of other tools.

Now put all these instructions into a Dockerfile and build the Docker image.

$ docker build -t pwned-passwords .

Let's inspect all image layers we have created so far.

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" pwned-passwords
78171a118279	24.5kB	/bin/sh -c mkdir sgrep/bin && cd sgrep/src...
2323bcb14b5f	93.6kB	/bin/sh -c git clone https://github.com/co...
8ec1470030af	119MB	/bin/sh -c apk update && apk add git make ...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see we now have a Docker image of more than 120 MByte, but the sgrep binary is only 15 KByte. Yes, this is no typo. Yes, we will grep through GBytes of data with a tiny 15 KByte binary.

Multi-stage build for the win

With Docker 17.05 and newer you can now add another FROM instruction and start with a new stage in your Dockerfile. The last stage will create the final Docker image. So every instruction after the last FROM defines what goes into the image you want to share eg. on Docker Hub.

So let's start our final stage of our Docker image build with

FROM alpine:3.6

The last stage does not need a name. Now we have an empty Alpine Linux again, all the 120 MByte of development environment won't make it into the final image. But if you build the Docker image more than once the temporary layers are still there and will be reused if they are unmodified. So the Docker build cache helps you speed up while working on the shell script.

In the previous build stage we have created the much faster sgrep command. What we now need is a small shell script that converts a plaintext password into a SHA1 sum and runs the sgrep command.

To create a SHA1 sum I'll use the openssl command. And it would be nice if the shell script could download the huge files for us. As the files are compressed with 7-Zip we also need wget to download and 7z to extract them.

In the next instruction we install OpenSSL and the 7-Zip tool.

RUN apk update && apk add openssl p7zip

The COPY instruction has an option --from where you can specify another named stage of your build. So we copy the compiled sgrep binary from the build stage into the local bin directory.

COPY --from=build /sgrep/bin/sgrep /usr/local/bin/sgrep

The complete shell script is called search and can be found in my pwned-passwords GitHub repo. Just assume we have it in the current directory. The next COPY instruction copies it from your real machine into the image layer.

COPY search /usr/local/bin/search

As the last line of the Dockerfile we define an entrypoint to run this shell script if we run the Docker container.

ENTRYPOINT ["/usr/local/bin/search"]

Now append these lines to the Dockerfile and build the complete image. You will see that the first layers are already cached and only the last stage will be built.

The search script

You can find the search script in my GitHub repo as well as the Dockerfile. You only need these two tiny files to build the Docker image yourself.

#!/bin/sh
set -e

if [ ! -d /data ]; then
  echo "Please run this container with a volume mounted at /data."
  echo "docker run --rm -v \$(pwd):/data pwned-passwords $*"
  exit 1
fi

FILES="pwned-passwords-1.0.txt pwned-passwords-update-1.txt"
for i in $FILES; do
  if [ ! -f "/data/$i" ]; then
    echo "Downloading $i"
    wget -O "/tmp/$i.7z" "https://downloads.pwnedpasswords.com/passwords/$i.7z"
    echo "Extracting $i to /data"
    7z x -o/data "/tmp/$i.7z"
    rm "/tmp/$i.7z"
  fi
done

# the password to check is passed as the first argument
if [[ $1 != "" ]]; then
  PWD=$1
fi
echo "checking $PWD"

hash=`echo -n $PWD | openssl sha1 | awk '{print $2}' | awk 'BEGIN { getline; print toupper($0) }'`
echo "Hash is $hash"

totalcount=0
for i in $(sgrep -c $hash /data/*.txt); do
  file=$(echo "$i" | cut -f1 -d:)
  count=$(echo "$i" | cut -f2 -d:)
  if [[ $count -ne 0 ]]; then
    echo "Oh no - pwned! Found $count occurences in $file"
  fi
  totalcount=$(( $totalcount + $count ))
done

if [[ $totalcount -eq 0 ]]; then
  echo "Good news - no pwnage found!"
  exit 1
fi

Build the final image

Now with these two files, Dockerfile and search shell script build the small Docker image.

$ docker build -t pwned-passwords .

Let's have a look at the final image layers with

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" stefanscherer/pwned-passwords
24eca60756c8	0B	/bin/sh -c #(nop)  ENTRYPOINT ["/usr/local...
c1a9fc5fdb78	1.04kB	/bin/sh -c #(nop) COPY file:ea5f7cefd82369...
a1f4a26a50a4	15.7kB	/bin/sh -c #(nop) COPY file:bf96562251dbd1...
f99b3a9601ea	10.7MB	/bin/sh -c apk update && apk add openssl p...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see, OpenSSL and 7-Zip take about 10 MByte, the 16 KByte sgrep binary and the 1 KByte shell script are sitting on top of the 4 MByte Alpine base image.

I have also pushed this image to the Docker Hub with a compressed size of about 7 MByte. If you trust me, you can use this Docker image as well. But you will learn more about how multi-stage builds feel if you build the image yourself.
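
If you just want to try my prebuilt image, pulling it is a one-liner, and the rest of this post works the same with it:

docker pull stefanscherer/pwned-passwords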

Search for pwned passwords

We now have a small 14.7 MByte Linux Docker image to search for pwned passwords.

Run the container with a folder mounted to /data. If you forget this, the script will show you how to run it.

The first time you run the container, it will download the two password files (5 GByte), which may take some minutes depending on your internet connection.

After the script has downloaded everything two files should appear in the current folder. For me it looks like this:

file list

Now search for passwords by adding a plaintext password as an argument

$ docker run --rm -v $(pwd):/data pwned-passwords troyhunt
Hash is 0CCE6A0DD219810B5964369F90A94BB52B056494
Oh no - pwned! Found 1 occurrences in /data/pwned-passwords-1.0.txt

If you don't trust my script or the sgrep command, then run the container without network connectivity

$ docker run --rm -v $(pwd):/data --network none pwned-passwords secret4949
Hash is 6D26C5C10FF089BFE81AB22152E2C0F31C58E132
Good news - no pwnage found!

So you're in luck: you can securely check that your password secret4949 hasn't been breached. But beware, this is still not a good password :-)

Run pwned-passwords

Works on Windows

If you have Docker installed on your Windows machine, you can also use my Docker image or build the image yourself.

With Docker for Windows it only depends on the shell you use.

For PowerShell the command to run the image is

docker run --rm -v "$(pwd):/data" pwned-passwords yourpass


And if you prefer the classic CMD shell use this command

docker run --rm -v "%cd%:/data" pwned-passwords yourpass

CMD shell

On my Windows 7 machine I have to use Docker Machine, but even here you can easily search for pwned passwords. All you have to do is mount a directory for the password files as /data into the container.

docker run --rm -v "/c/Users/stefan.scherer/pwned:/data" stefanscherer/pwned-passwords troyhunt

Windows 7 with pwned-passwords image


You now know that there are millions of passwords out there that may be used in brute force attacks against other online services.

So please use a password manager instead of predictable patterns for modifying passwords across different services.

You have also learned how Docker can keep your computer clean while still compiling some open source projects from source code.

You have seen the benefits of multi-stage builds to create and share minimal Docker images without the development environment.

And you now have the possibility to search your current passwords in a safe place without leaking them to the internet. Some other online service may collect all the data entered into a form. So keep your passwords secret, and change them if they show up in one of these lists.

If you want to hear more about Docker, follow me on Twitter @stefscherer.

<![CDATA[Exploring new NanoServer Insider images]]>

Last week the first Insider preview container images appeared on the Docker Hub. They promise us much smaller sizes to have more lightweight Windows images for our applications.

To use these Insider container images you also need an Insider preview of Windows Server 2016 or Windows 10. Yes, this is

https://stefanscherer.github.io/exploring-new-nanoserver-insider-images/5986d4ec688a490001540975Tue, 18 Jul 2017 09:42:41 GMT

Last week the first Insider preview container images appeared on the Docker Hub. They promise us much smaller sizes to have more lightweight Windows images for our applications.

To use these Insider container images you also need an Insider preview of Windows Server 2016 or Windows 10. Yes, this is another great announcement: you can get early access and give feedback on the upcoming version of Windows Server. So let's grab it.

Windows Server Insider

  1. Register at Windows Insider program https://insider.windows.com and join the Windows Server Insider program.

  2. Download the Windows Server Insider preview ISO from https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Now you can create a VM and install Docker. You can either build the VM manually and follow the docs "Using Insider Container Images" on how to install Docker and pull the Insider container images, or you can use my Packer template and Vagrant environment to automate these steps. The walkthrough is described in the corresponding GitHub repo.


Windows Insider images

There are four new Docker images available with a much smaller footprint.

Windows Insider images

  • microsoft/windowsservercore-insider
  • microsoft/nanoserver-insider
  • microsoft/nanoserver-insider-dotnet
  • microsoft/nanoserver-insider-powershell

The Windows Server Core Insider image went down from 5 GB to only 2 GB, which saves a lot of bandwidth and download time.

You may wonder why there are three Nano Server Insider images and why there is one without PowerShell.

Aiming at the smallest Windows base image

If we compare the image sizes of the current microsoft/nanoserver image (its base layer plus update layer) with the new Insider images, you can see the reason.

NanoServer sizes

If you want to ship your application in a container image you don't want to ship a whole operating system, but only the parts needed to run the application.

And shipping smaller images means shipping faster. For many applications you do not need e.g. PowerShell inside your base image at runtime, which would add another 54 MByte to download from the Docker registry.

Let's have a look at the current Windows Docker images available on the Docker Hub. To run a Golang webserver, for example, on an empty Windows Docker host, you have to pull the 2 MB binary plus the two NanoServer base layers with hundreds of MB to run it in a container.

docker pull whoami

Of course these base images have to be downloaded only once, as other NanoServer container images will reuse the same base layers. But if you have worked with Windows containers for a longer time you may have noticed that you still have to download updated layers from time to time that pull another 122 MB.
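
You can check these layer sizes on your own machine with docker history, for example:

docker history microsoft/nanoserver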

And if the NanoServer base image is much smaller, then the updates will also be smaller and faster to download.

With the new Insider container images you can build and run containerized .NET core applications that are still smaller than the NanoServer + PowerShell base image.


Another example is providing a Node.js container image based on the new NanoServer Insider image with only 92 MByte. We have just cut off three hundred MB.

Node.js NanoServer sizes

If we compare that with some of the Linux Node.js container images, we are at about the size of the slim images.

Node.js slim image sizes

Multi-stage build

Building such small Windows images comes at a cost: you have to live without PowerShell. But the new multi-stage build feature introduced with Docker 17.05 really helps you here, as you can still use PowerShell before the final image layers are built.

If you haven't heard about multi-stage builds, the concept is to have multiple FROM instructions in a Dockerfile. Only the lines from the last FROM until the end of the file build the final container image; this is also called the last stage. In all the other stages you don't have to optimize too much and can use the build cache much better. You can read more about multi-stage builds at the Docker Blog.

Let's have a closer look at how to build a small Node.js base image. You can find the complete Dockerfile on GitHub.

In the first stage I'm lazy and even use the microsoft/windowsservercore-insider image. The reason is that I'm using the GPG tools to verify the downloads, and these tools don't run very well in NanoServer at the moment.

# escape=`
FROM microsoft/windowsservercore-insider as download
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest ... 
RUN Expand-Archive ...

The Dockerfile has a second FROM instruction which then uses the smallest Windows base image. In that stage you normally COPY deploy files and folders from previous stages. In our case we copy the Node.js installation folder into the final image.

The one RUN instruction sets the PATH environment variable with the setx command instead of PowerShell commands.

FROM microsoft/nanoserver-insider
COPY --from=download /nodejs /nodejs
RUN setx PATH "%PATH%;C:\nodejs;%APPDATA%\npm"
CMD [ "node.exe" ]

Users of such a Node.js base image can work as usual, COPY deploying their source tree and node_modules folder into that image and running the application as a small container.

FROM stefanscherer/node-windows:8.1.4-insider
COPY . /code
CMD ["node.exe", "app.js"]

So all you have to do is change the FROM instruction to the smaller insider Node.js image.

Further Insider images

I have pushed some of my first Insider images to the Docker Hub so it may be easier for you to try out different languages.

  • stefanscherer/node-windows:6.11.1-insider
  • stefanscherer/node-windows:8.1.4-insider
  • stefanscherer/golang-windows:1.8.3-insider
  • stefanscherer/dockertls-windows:insider

If you want to see how these images are built, then you can find the Dockerfiles in the latest pull requests of my https://github.com/StefanScherer/dockerfiles-windows repo.

Docker Volumes

If you have worked with Docker volumes on Windows you may know this already. Node.js and other tools and languages have problems when they want to get the real name of a file or folder that is mapped from the Docker host into the container.

Node.js for example thinks the file is in the folder C:\ContainerMappedDirectories, but cannot find the file there. There is a workaround described in Elton Stoneman's blog post "Introducing the 'G' Drive" to map it to another drive letter.

With the new Insider preview I see a great improvement on that topic. Running normal Windows containers without Hyper-V isolation, there is no longer a symbolic link.

If we run the Node.js container interactively and map the folder C:\code into the container, we can list the C: drive and see that the code folder is a normal directory.

docker run -v C:\code:C:\code stefanscherer/node-windows:8.1.4-insider cmd /c dir

docker run volume

With this setup you are able to mount your source code into the Node.js container and run it, e.g. with nodemon, to live-reload it after changing it on the host.

Unfortunately this is not available with Hyper-V isolation, which is the default on Windows 10 Insider machines.

Running the same command with --isolation=hyperv shows the symlinked directory which Node.js cannot handle at the moment.

docker run -v C:\code:C:\code --isolation=hyperv stefanscherer/node-windows:8.1.4-insider cmd /c dir

docker run volume hyperv

But this improvement in native Windows containers looks very promising and should solve a lot of headaches for the maintainers of Git for Windows, Golang, Node.js and so on.


Having smaller Windows container images is a huge step forward. I encourage you to try out the much smaller images. You'll learn how it feels to work with them and you can give valuable feedback to the Microsoft Containers team shaping the next version of Windows Server.

Can we make even smaller images? I don't know, but let's find out. How about naming the new images? Please make suggestions at the Microsoft Tech Community https://techcommunity.microsoft.com.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer to stay up to date with Windows containers.

<![CDATA[Use multi-stage builds for smaller Windows images]]>

I'm still here in Austin, TX at DockerCon 2017 and I want to show you one of the announcements that is very useful to build small Windows Docker images.

On Tuesday's first keynote at DockerCon Solomon Hykes introduced the most impressive feature for me that will make it in version

https://stefanscherer.github.io/use-multi-stage-builds-for-smaller-windows-images/5986d4ec688a490001540974Wed, 19 Apr 2017 22:52:00 GMT

I'm still here in Austin, TX at DockerCon 2017 and I want to show you one of the announcements that is very useful to build small Windows Docker images.

On Tuesday's first keynote at DockerCon, Solomon Hykes introduced the feature that impressed me the most and that will make it into version 17.05.0 of Docker: multi-stage builds.

announcement at DockerCon about multi-stage builds

The demo in the keynote only showed Linux images, but you can use this feature for Windows images as well.

How did we build smaller images in the past?

As we know, each instruction in a Dockerfile like COPY or RUN builds a new layer of the image. So everything you do in e.g. a RUN instruction is atomic and saved into one layer. It was common practice to use multi-line RUN instructions to clean up temporary files and cache folders before the instruction ends, to minimize the size of that layer.
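
As an illustration, such a cleanup-style instruction might have looked like this sketch (the download URL is a placeholder, assuming a PowerShell SHELL instruction and the backtick escape character):

# download, extract and clean up in one layer to keep the layer small
RUN Invoke-WebRequest https://example.com/tool.zip -OutFile tool.zip ; `
    Expand-Archive tool.zip -DestinationPath C:\tool ; `
    Remove-Item tool.zip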

To me it always looked like a workaround, and a little too technical, having to know where all these temporary files must be wiped out. So it is great to remove this noise from your Dockerfiles.

Another workaround used in addition was to create two Dockerfiles and a script to simulate such stages: copy files from the first Docker image back to the host and then into the second Docker image. This could lead to errors if you had old temp folders on your host where the results of the first build were copied. So it is good that we can remove this complexity and avoid such build scripts entirely.

Multi-stage build on Windows

The idea behind multi-stage builds is that you can define two or more build stages, and only the layers of the last stage get into the final Docker image.

The first stage

As you can see in the slide, you can start with a first stage and do whatever you like in there. Maybe you need a complete build environment like MSBuild, or the Golang compiler, or dev dependencies to run Node.js tests with your sources.

The FROM instruction can now be followed by a stage name, e.g. build. I recommend introducing that name in your Dockerfile, as we will need it again later. This is how your Dockerfile could then look:

FROM microsoft/windowsservercore as build

You no longer need to use multi-line RUN instructions if you never liked them. Just keep your Dockerfile simple, readable and maintainable for your team colleagues. An additional advantage is that you can use the Docker build cache much better.

Think of a giant multi-line RUN instruction with three big downloads, uncompress and cleanup steps, where the third download crashes due to internet connectivity problems. Then you have to do all the other downloads again when you restart the docker build.

So relax and just download one file per RUN instruction; even put the uncompress step into another RUN layer. It doesn't matter for the final image size.
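
In a build stage the same steps can simply become separate instructions, each cached on its own (again with a placeholder URL):

# each step gets its own cached layer in the build stage
RUN Invoke-WebRequest https://example.com/tool.zip -OutFile tool.zip
RUN Expand-Archive tool.zip -DestinationPath C:\tool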

The last stage

The magic comes into the Dockerfile as you can use more than one FROM instruction. Each FROM starts a new build stage, and all lines from the last FROM onward make it into the final Docker image. The last stage does not need to have a name like the previous ones.

In this last stage you define the minimal runtime environment for your containerised application.

The COPY instruction now has a new option --from where you can specify from which stage you want to copy files or directories into the current stage.

Enough theory. Let's have a look at some real use-cases I already tried out.

Build a Golang program

A simple multi-stage Dockerfile to build a Golang binary from source could look like this:

FROM golang:nanoserver as gobuild
WORKDIR /code
COPY . /code
RUN go build webserver.go

FROM microsoft/nanoserver
COPY --from=gobuild /code/webserver.exe /webserver.exe
EXPOSE 8000
CMD ["\\webserver.exe"]

The first four lines describe the normal build. We copy the source code into the Golang build environment and build the Windows binary with it.

Then, with the second FROM instruction, we choose an empty NanoServer image. With this we skip about 100 MByte of compressed Golang build environment for the production image.

The COPY --from=gobuild instruction copies the final Windows binary from the gobuild stage into the final stage.

The last two lines are just the normal things you do: exposing the port on which your app is listening and describing the command that should be called when running a container with it.

This Dockerfile can now easily be built as always with

docker build -t webserver .

The final Docker image only has a 2 MByte compressed layer in addition to the NanoServer base layers.

You can find a full example for such a simple Golang webserver in my dockerfiles-windows repo, the final Docker Hub image is available at stefanscherer/whoami:windows-amd64-1.2.0.

Install MongoDB MSI in NanoServer

Another example for multi-stage builds: you can use them to install MSI packages and put the installed programs and files into a NanoServer image.

Well, you cannot install MSI packages in NanoServer directly, but you can start with the Windows Server Core image in the build stage and then switch to NanoServer in the final stage.

If you know where the software has been installed you can COPY deploy them in the final stage into the image.

The Dockerfile how to build a MongoDB NanoServer image is also available on GitHub.

The first stage more or less looks like this:

FROM microsoft/windowsservercore as msi
RUN "download MSI page"
RUN "check SHA sum of download"
RUN "run MSI installer"

and the final stage looks like this:

FROM microsoft/nanoserver
COPY --from=msi C:\mongodb\ C:\mongodb\
RUN "put MongoDB binaries into PATH"
VOLUME C:\data\db
EXPOSE 27017
CMD ["mongod.exe"]

Another pro tip: If you really want small Windows Docker images you should also avoid RUN or ENV instructions in the last stage.

The final MongoDB NanoServer image is available at stefanscherer/mongo-windows:3.4.2-nano.
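
To verify that only the runtime files made it into the final stage, you can inspect the image layers again, for example with:

docker history stefanscherer/mongo-windows:3.4.2-nano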


With multi-stage builds coming into Docker 17.05 we will be able to

  • put all build stages into a single Dockerfile to use only one simple docker build command
  • use the build cache by using single line RUN instructions
  • start with ServerCore, then switch to NanoServer
  • use the latest NanoServer image with all security updates installed for the last stage, even if the upstream build image may be out of date

This gives you an idea what you will be able to do once you have Docker 17.05 or later installed.

Update 2017-05-07: I build all my dockerfiles-windows Docker images with AppVeyor, and it is very easy to upgrade to Docker 17.05.0-ce during the build with the script update-docker-ce.ps1. For local Windows Server 2016 VMs you can use this script as well. Sure, at the moment we have to switch from the EE to the CE edition until 17.06.0-ee also brings this feature. Your images will still run on 17.03.1-ee production servers.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer.

<![CDATA[Yes, you can "Docker" on Windows 7]]>

This week I was asked to help automate a task to get some Linux binaries and files packaged into a tarball. Some developers tried to spin up a Linux virtual machine and run a script to install tools and then do the packaging. Although I also like and use Vagrant

https://stefanscherer.github.io/yes-you-can-docker-on-windows-7/5986d4ec688a490001540973Fri, 31 Mar 2017 17:02:07 GMT

This week I was asked to help automate a task to get some Linux binaries and files packaged into a tarball. Some developers tried to spin up a Linux virtual machine and run a script to install tools and then do the packaging. Although I still like and use Vagrant very often, it seemed to me that using Docker would be easier to maintain, as this could be done in a one-shot container.

The hard facts - Windows 7 Enterprise

The bigger problem was the fact that in some companies you still find Windows 7 Enterprise. It may be a delayed rollout of new notebooks that keep the employees on that old desktop platform.

So using Docker for Windows was not an option, as it only works on Windows 10 Pro with Hyper-V. That looks like a good setup for new notebooks, but if you want to use Docker now you have to look for other solutions.

Locked-in Hypervisor

The next obstacle was that for Vagrant it is better to use VMware Workstation on Windows 7 instead of VirtualBox. There may also be a company policy to use one specific hypervisor, as the knowledge is already there from using other server products in the datacenter.

So going with the Docker Toolbox was also no option, as it comes with VirtualBox to run the Linux boot2docker VM.

Embrace your environment

So we went with a manual installation of some Docker tools to get a Linux Docker VM running on the Windows 7 machine. Luckily the developers already had the Chocolatey package manager installed.

Let's recap what I found on the notebooks

  • Windows 7 Enterprise
  • VMware Workstation 9/10/11/12

Well, there is a tool called Docker Machine to create local Docker VMs very easily, and there is a VMware Workstation plugin available. All these tools are also available as Chocolatey packages.

So what we did on the machines was installing three packages with these simple commands in an administrator terminal.

choco install -y docker
choco install -y docker-machine
choco install -y docker-machine-vmwareworkstation

Then we closed the administrator terminal as the next commands can be done in normal user mode.

My host is my castle

Every developer installs the tools they need for their work. Installing them on the host machine - your desktop or notebook - leads to machines that differ from each other.

Creating the Docker Machine, we ran into a "works on my machine, but doesn't work on your machine" problem I hadn't seen before.

Something while setting up the Linux VM just went wrong. It turned out that copying the Docker TLS certs with SSH just didn't work. A deeper look at what else was installed on the host revealed that some SSH client implementations just don't work very well.

Luckily there is a lesser-known option in the docker-machine binary to ignore external SSH clients and use the built-in implementation.

With that knowledge we were able to create a VMware Docker Machine on that laptop with

docker-machine --native-ssh create -d vmwareworkstation default

Using the good old PowerShell on the Windows 7 notebook helps you to use that Linux Docker VM by setting some environment variables.

docker-machine env | iex

After that you can run docker version, for example, to retrieve the client and server versions, which are both up-to-date community editions.

docker version

Quite exciting to be able to use that Windows 7 notebook with the latest Docker tools installed.

So hopefully Docker, and using containers for more and more development tasks, helps keep their notebooks clean: fewer tools installed on the host, and more tools running in containers instead.

I can C: a problem

Using that Docker Machine VM worked really well until we faced another problem: building some Docker images, we ran out of disk space. Oh no - although the Windows 7 notebooks had been upgraded with a 1 TB SSD, the C: partition hadn't been enlarged, for some historical reasons.

Face palm

Docker Machine creates the Linux VMs in the current user's home directory. This is a good idea, but with a 120 GB partition and only 7 GB left on C:, we had to fix that. Taking a deep breath and embracing the environment, we came to the following solution.

We destroyed the Docker Machine again (because it's so easy) and also removed the .docker folder, replacing it with a junction point to a folder that resides on a bigger partition of the SSD.

docker-machine rm -f default
rm $env:USERPROFILE\.docker
mkdir D:\docker
cmd /c mklink /J $env:USERPROFILE\.docker D:\docker

Then we recreated the Docker Machine with the command from above and set the environment variables again.

docker-machine --native-ssh create -d vmwareworkstation default
docker-machine env | iex

And hurray - it worked. The VM with its disk resides on the bigger D: drive and we don't have to set any other global environment variables.

With that setup I made the developers happy. They could start using Docker without waiting for new hardware or asking their admins to resize or reformat their partitions.

We soon had a small Dockerfile and put the already existing provisioning scripts into an image. So we finished the task with a one-shot Linux container that can be thrown away much more easily than a whole VM.

Daily work

To recap how to use this Docker Machine you normally do the following steps after booting your notebook.

docker-machine start
docker-machine env | iex

Then you can work with this default Linux Docker VM.

Planning your hardware update

The story ended well, but I recommended thinking ahead and planning the next hardware update. Before they just get the next notebook generation, they should think about which hypervisor they want to use in the future.

Using Windows 10 Enterprise with the built-in Hyper-V would be easier. You can run native Windows containers with it and use Docker for Windows to switch between Linux and Windows containers. Using Vagrant with Hyper-V also gets better and better.

But if company policy still restricts you to e.g. VMware, then you can also use the steps above to create a Linux Docker Machine. You cannot use Windows containers directly on a Windows 10 machine then, as Hyper-V does not work in parallel with other hypervisors. In that case you might spin up a Windows Server 2016 VM using my Windows Docker Machine setup. With that you can easily switch between Linux and Windows containers using the docker-machine env command.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. I love to hear about your enterprise setup and how to make Docker work on your developer's machines. You can follow me on Twitter @stefscherer.

<![CDATA[7 Reasons to attend DockerCon]]>

I'm more than happy that I can make it to DockerCon in Austin, Texas. It is only a few weeks until the workshops and conference starts April, 17th. If you still need some good reasons why you should attend I can give you some ideas. And you will get 10%

https://stefanscherer.github.io/7-reasons-to-attend-dockercon/5986d4ec688a490001540972Wed, 29 Mar 2017 22:43:00 GMT

I'm more than happy that I can make it to DockerCon in Austin, Texas. It is only a few weeks until the workshops and conference start on April 17th. If you still need some good reasons why you should attend, I can give you some ideas. And you will get a 10% discount with the code CaptainStefan.


On Monday I'll be at the workshop Modernizing monolithic ASP.NET applications with Docker, where you can get some hands-on experience with Windows containers. You cannot find a better place if you want to get started with Docker on Windows: Michael Friis and Elton Stoneman from Docker, and myself, can answer all your questions.

See some Docker Swarm demos

Come to the Community Theater on Tuesday, Apr 18th, 1:00 PM to see my live demo Swarm 2 Go and how our team at SEAL Systems has built a portable multi-arch data center with Raspberry Pi and UP boards.


You will have the chance to play chaos monkey and unplug cables to see Docker swarm mode in action. With the help of LEDs we can visualise failures and how the Docker swarm gets healthy again. All the steps to build such a cluster are available in an open source repo.

Learn about Docker on Windows

Docker is no longer a thing only on Linux. There are several talks about Docker on the Windows platform that I want to see.

And I also recommend visiting the Microsoft booth to hopefully see some Docker swarm mode on Windows Servers. I really look forward to seeing the latest news and talking with some of the Microsoft Container and Networking team.

Multiple platforms

If you think Docker is only Linux on Intel machines, then comparing it to an instrument it may look like this.


But as you can see from the talks above, Docker is available on multiple platforms: Linux and Windows, from small ARM devices like the Raspberry Pi to big IBM machines.

So the whole spectrum of Docker looks more like this, and once you have learned the Docker commands you are able to play this:


So it is time to learn how easy it is to deploy your applications for more than one platform.

See you at DockerCon! Ping me on Twitter @stefscherer or with the DockerCon app to get in touch with me during that conference week.

<![CDATA[How to run encrypted Windows websites with Docker and Træfɪk]]>

Nowadays we read it all the time that every website should be encrypted. Adding TLS certificates to your web server sounds like a hard task to do. You have to renew your certificates before they become invalid. I don't run public websites on a regular basis, so I - like

https://stefanscherer.github.io/how-to-run-encrypted-windows-websites-with-docker-and-traefik/5986d4ec688a490001540971Fri, 10 Mar 2017 22:21:00 GMT

Nowadays we read it all the time that every website should be encrypted. Adding TLS certificates to your web server sounds like a hard task to do. You have to renew your certificates before they become invalid. I don't run public websites on a regular basis, so I - like many others, I guess - have heard of Let's Encrypt, but never really tried it.

But let's learn new things and try it out. I also promised in the interview in John Willis' Dockercast that I would write a blog post about it. With some modern tools you will see that it's not very complicated to run your Windows website with TLS certificates.

In this blog post I will show you how to run your website in Windows containers with Docker. You can develop your website locally in a container and push it to your server. Another Windows container runs the Træfɪk proxy, which helps us with the TLS certificates as well as with its dynamic configuration to add more than just one website.

Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends like Docker to register and update its configuration for each newly started container.

This picture gives you an overview of the architecture:

Traefik architecture

Normally Træfɪk runs inside a container, and it is well known in the Linux Docker community. A few weeks ago I saw that there are also Windows binaries available. Let's see if we can use Træfɪk in a Windows container to provide encrypted HTTPS traffic to other Windows containers running our IIS website or other web services.

Step 1: Create a Windows Docker host in Azure

First of all we need a Windows Server 2016 machine with Docker in the cloud. I will use Azure, as Microsoft provides a VM template for that. This server will be our webserver later on, with its own DNS name and TLS certs, running our website.

Go to the Windows Containers quick start guide at docs.microsoft.com and press the "Deploy to Azure" button.

Deploy to Azure

This will bring you to the Azure portal where you can customize the virtual machine. Create a new resource group, choose the location where the server should be running and a public DNS name, as well as the size of the VM.

Customize machine

After you click on "Purchase" the deployment starts which should take only a few minutes.

Azure starts deployment

In the meantime click on the cube symbol on the left. That will show you all resource groups you have.

This Windows + Docker template already creates inbound security rules for HTTPS port 443 as well as the Docker TLS port 2376. So for our purposes we don't need to add more inbound rules.

Step 2: Buy a domain and update DNS records

For Let's Encrypt you need your own domain name to get TLS certificates. For my tests I ordered a domain name at GoDaddy. But after I walked through the steps I realised that Træfɪk can also automatically update your DNS records when you use DNSimple, CloudFlare etc.

But for first-time domain name users like me I'll show you the manual steps. In my case I went to my domain provider and configured the DNS records.

Get the public IP address

Before we can update the DNS record we need the public IP address of the VM. This IP address is also used for the Docker TLS certificates we will create later on.

In the Azure Portal, open the resource group and click on the public IP address.

Resource group

Write down or copy the IP address shown here.

Public IP address

Go back to your domain provider and enter the public IP address in the A record. If you want to run multiple websites within Docker containers, add a CNAME resource record for each sub domain you need. For this tutorial I have added portainer and whoami as additional sub domains.

Update DNS records

After some minutes all the DNS servers should know your domain name with the new IP address of your Windows Server 2016.
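
You can verify the propagation from your local machine, for example with nslookup (using my domain here as an example):

nslookup whoami.schererstefan.xyz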

Step 3: Secure Docker with TLS

We now log into the Docker host with RDP. You can use the DNS name provided by Azure or use your own domain name. But before you connect with RDP, add a shared folder to your RDP session so you can copy the Docker TLS client certificates back to your local machine later. With this you will also be able to control your Windows Docker engine directly from your local computer.

In this example I shared my desktop folder with the Windows VM.

Add folder in RDP client

Now login with the username and password entered at creation time.

Login with RDP

Create Docker TLS certs

To use Docker remotely it is recommended to use client certificates, so nobody without those certs can talk to your Docker engine. The same applies if a Windows container wants to communicate with the Docker engine: using the unprotected port 2375 would give every container the possibility to gain access to your Docker host.

Open a PowerShell terminal as an administrator to run a Windows container that creates the TLS certificates for your Docker engine. I have already blogged about DockerTLS in more detail, so we just use it here as a tool.

Retrieve all local IP addresses so that the TLS certificate will be valid for connections from the host itself, as well as from other Windows containers that need to talk to your Docker engine.

$ips = ((Get-NetIPAddress -AddressFamily IPv4).IPAddress) -Join ','

Also create a local folder for the client certificates.

mkdir ~\.docker

Now run the DockerTLS tool with docker run, just append the public IP address from above to the list of IP_ADDRESSES. Also adjust the SERVER_NAME variable to your domain name.

docker run --rm `
  -e SERVER_NAME=schererstefan.xyz `
  -e IP_ADDRESSES=$ips,52.XXXXXXX.198 `
  -v "C:\ProgramData\docker:C:\ProgramData\docker" `
  -v "$env:USERPROFILE\.docker:C:\Users\ContainerAdministrator\.docker" `
  stefanscherer/dockertls-windows

Run dockertls

Docker will pull the Windows image from Docker Hub and create the TLS certificates in the correct folders for your Docker engine.

Afterwards you have to restart the Docker engine to use the TLS certificates. The Docker engine then additionally listens on TCP port 2376.

restart-service docker

Restart docker

Add firewall exception for Docker

This step is needed to let other Windows containers talk to the Docker engine on port 2376. But it also has another benefit: with these certs you can use the Docker client on your local machine to communicate with the Windows Docker engine in Azure. But I will start Træfɪk later on from the Docker host itself, as we need some volume mount points.

The Windows Server's firewall is active, so we now have to add an exception to allow inbound traffic on port 2376. The network security group for the public IP address already has an inbound rule to the VM. This firewall exception now allows the connection to the Docker engine.

Add firewall exception

From now on you can connect to the Docker engine listening on port 2376 from the internet.

Copy Docker client certs to your local machine

To set up a working communication from your local machine, copy the Docker client certificates from the virtual machine through the RDP session back to your local machine.

Copy Docker TLS certs to client

On your local machine try to connect with the remote Windows Docker engine with TLS encryption and the client certs.

$ DOCKER_CERT_PATH=~/Desktop/.docker DOCKER_TLS_VERIFY=1 docker -H tcp://schererstefan.xyz:2376 version

Docker client from Mac

Now you are able to start and stop containers as you like.

Step 4: Run Træfɪk and other services

Now comes the fun part. We use Docker and Docker Compose to describe which containers we want to run.

Install Docker Compose

To spin up all our containers I use Docker Compose and a docker-compose.yml file that describes all services.

The Windows VM does not come with Docker Compose. So we have to install Docker Compose first. If you are working remotely you can use your local installation of Compose and skip this step.

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-Windows-x86_64.exe" `
  -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

If you prefer Chocolatey, use choco install docker-compose instead.

Create data folders on Docker host

You need to persist some data outside of the Docker containers, so we create some data folders. Træfɪk retrieves the TLS certs, and these should be persisted outside of the container. Otherwise you run into the Let's Encrypt rate limit of 20 requests per week to obtain new certificates. This happened to me while trying different things with Træfɪk and starting and killing the container lots of times.

PS C:\Users\demo> mkdir sample
PS C:\Users\demo> cd sample
PS C:\Users\demo\sample> mkdir traefikdata
PS C:\Users\demo\sample> mkdir portainerdata


For a first test we define two services: the traefik service and an example web server called whoami. This tutorial should just give you an idea, and you can extend the YAML file to your needs. Run an IIS website? Put it into a container image. And another IIS website? Just run a separate container with that other website in it. You see, you don't have to mix multiple sites; just leave each one alone in its own microservice image.

Open up an editor and create the YAML file.

PS C:\Users\demo\sample> notepad docker-compose.yml
version: '2.1'

services:
  traefik:
    image: stefanscherer/traefik-windows
    ports:
      - "8080:8080"
      - "443:443"
    volumes:
      - ./traefikdata:C:/etc/traefik
      - ${USERPROFILE}/.docker:C:/etc/ssl:ro

  whoami:
    image: stefanscherer/whoami-windows
    depends_on:
      - traefik
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:whoami.schererstefan.xyz"

networks:
  default:
    external:
      name: nat

I have already built a Træfɪk Windows Docker image that you can use. There might be an official image in the future. If you don't want to use my image, just use this Dockerfile and replace image: stefanscherer/traefik-windows with build: ., so Docker Compose will build the Træfɪk image for you.

The Dockerfile looks very simple, as we directly add the Go binary to the NanoServer Docker image and define some volumes and labels.

FROM microsoft/nanoserver

ADD https://github.com/containous/traefik/releases/download/v1.2.0-rc2/traefik_windows-amd64 /traefik.exe

VOLUME C:/etc/traefik
VOLUME C:/etc/ssl

ENTRYPOINT ["/traefik", "--configfile=C:/etc/traefik/traefik.toml"]

# Metadata
LABEL org.label-schema.vendor="Containous" \
      org.label-schema.url="https://traefik.io" \
      org.label-schema.name="Traefik" \
      org.label-schema.description="A modern reverse-proxy" \
      org.label-schema.version="v1.2.0-rc2"


Træfɪk needs a configuration file where you specify your email address for the Let's Encrypt certificate requests. You will also need the IP address of the container network so that Træfɪk can contact your Docker engine.

$ip = (Get-NetIPAddress -AddressFamily IPv4 `
   | Where-Object -FilterScript { $_.InterfaceAlias -Eq "vEthernet (HNS Internal NIC)" }).IPAddress
Write-Host $ip

Now open an editor to create the traefik.toml file.

PS C:\Users\demo\sample> notepad traefikdata\traefik.toml

Enter that IP address at the endpoint of the [docker] section. Also adjust the domain names to your own.

address = ":8080"

domain = "schererstefan.xyz"
endpoint = "tcp://"
watch = true

ca = "C:/etc/ssl/ca.pem"
cert = "C:/etc/ssl/cert.pem"
key = "C:/etc/ssl/key.pem"

# Sample entrypoint configuration when using ACME
  address = ":443"


# Email address used for registration
# Required
email = "you@yourmailprovider.com"

storage = "c:/etc/traefik/acme.json"
entryPoint = "https"

   main = "schererstefan.xyz"
   sans = ["whoami.schererstefan.xyz", "portainer.schererstefan.xyz", "www.schererstefan.xyz"]

Open firewall for all container ports used

Please notice that the Windows firewall is also active for the container network. The whoami service listens on port 8000 in each container. To let Træfɪk connect to the whoami containers you have to add a firewall exception for port 8000.

Docker automatically adds a firewall exception for all ports mapped to the host with ports: in the docker-compose.yml. But for the exposed ports this does not happen automatically.
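
One way to add such an exception is from an administrator PowerShell; the rule name here is arbitrary:

# allow inbound traffic to the whoami containers on port 8000
New-NetFirewallRule -DisplayName "whoami 8000" -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow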

Spin up Træfɪk and whoami

Now it's time to spin up the two containers.

docker-compose up

You can see the output of each container and stop them by pressing CTRL+C. If you want to run them detached in the background, use

docker-compose up -d

To see the output of the services you can use docker-compose logs traefik or docker-compose logs whoami at any time.

Træfɪk now fetches TLS certificates for your domain with the given subdomains. Træfɪk also listens for starting and stopping containers.

Test with a browser

Now open a browser on your local machine and try your TLS-encrypted website with the subdomain whoami. You should see a text like I'm 3e1f17ecbba3, which is the hostname of the container.

Now let's try Træfɪk's load balancing feature by scaling up the whoami service.

docker-compose scale whoami=3

Now there are three whoami containers running, and Træfɪk knows all three of them. Each request to the subdomain will be load balanced to one of these containers. You can SHIFT-reload the page in your browser and see that each request returns another hostname.
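
You can also watch the load balancing from a PowerShell prompt by requesting the page a few times (my domain used as an example):

1..3 | ForEach-Object { (Invoke-WebRequest https://whoami.schererstefan.xyz -UseBasicParsing).Content }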

Test whoami service with browser

So we have a secured HTTPS connection to our Windows containers.


The power of Docker is that you can run multiple services on one machine if you have resources left. So let's add another web server, let's choose an IIS server.

Add these lines to the docker-compose.yml.

  www:
    image: microsoft/iis
    expose:
      - 80
    depends_on:
      - traefik
    labels:
      - "traefik.backend=www"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:www.schererstefan.xyz"

Remember to add a firewall exception for port 80 manually, just as for port 8000 above. After that, spin up the IIS container with

docker-compose up -d www

And check the new sub domain. You will see the welcome page of IIS.

IIS welcome page


Let's add another useful service to monitor your Docker engine. Portainer is a very good UI for that task and it is also available as a Windows Docker image.

Add another few lines to our docker-compose.yml.

  portainer:
    image: portainer/portainer
    command: -H tcp://<ip-of-your-container-network>:2376 --tlsverify
    volumes:
      - ./portainerdata:C:/data
      - ${USERPROFILE}/.docker:C:/certs
    depends_on:
      - traefik
    labels:
      - "traefik.backend=portainer"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:portainer.schererstefan.xyz"

Portainer also needs the client certs to communicate with the Docker engine. Another volume mount point is used to persist data like your admin login outside the container.

Now run Portainer with

docker-compose up -d portainer

Then open your browser on your local machine with the subdomain. When you open it for the first time, Portainer will ask you for an admin password. Enter the password you want to use and then log in with it.

Portainer login

Now you have a UI to see all running containers, all downloaded Docker images, etc.

Portainer dashboard

Portainer containers


What we have learned is that Træfɪk works pretty well on Windows. It helps us secure our websites with TLS certificates. In combination with Docker Compose you can add or remove new websites on the fly, or even scale some services with the built-in load balancer of Træfɪk.

Read more details in the Træfɪk documentation as I can give you only a short intro of its capabilities.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

<![CDATA[Setup a Windows Docker CI with AppVeyor]]>

I love GitHub and all the services around it. It enables you to work from anywhere or any device and still have your complete CI pipeline in your pocket. Everything is done with a git push. You can add services like Codeship, Travis, Circle and lots of others to

https://stefanscherer.github.io/setup-windows-docker-ci-appveyor/5986d4ec688a49000154096fFri, 10 Mar 2017 05:54:00 GMT

I love GitHub and all the services around it. It enables you to work from anywhere or any device and still have your complete CI pipeline in your pocket. Everything is done with a git push. You can add services like Codeship, Travis, Circle and lots of others to build and test your code and even the pull requests you get from others.

But I'm on Windows

To build applications for Windows there is a similar cloud-based CI service called AppVeyor.

And it works pretty similarly to the other well-known services for Linux:

  1. Put a YAML file into your repo with the build, test and deploy steps
  2. Connect your repo to the cloud CI service
  3. From now on a git push will do a lot for you.

Your CI pipeline is set up in a few clicks.


Here is an example of how such a YAML file looks for AppVeyor. This is from a small C/C++ project I made a long time ago during a holiday without Visual Studio at hand. I just created the GitHub repo, added the appveyor.yml, and voila - I got a compiled and statically linked Windows binary at GitHub releases.

version: 1.0.{build}
configuration: Release
platform: x64
build:
  project: myfavoriteproject.sln
  verbosity: minimal
test: off
artifacts:
  - path: x64/Release/myfavoriteproject.exe
    name: Release
deploy:
  - provider: GitHub
    auth_token:
      secure: xxxxx

The build workers in AppVeyor are fully armed with lots of development tools, so you can build projects in several languages like Node.js, .NET, Ruby, Python, Java ...

Docker build

AppVeyor now has released a new build worker with Windows Server 2016 and Docker Enterprise Edition 17.03.0-ee-1 pre-installed. That instantly enables you to build, test and publish Windows Docker images in the same lightweight way.

Docker build with AppVeyor

All you have to do is select the new build worker by adding image: Visual Studio 2017 to your appveyor.yml. No more work is needed to get a full Windows Docker engine for your build.

The following appveyor.yml gives you an idea of how easy an automated Docker build for Windows can be:

version: 1.0.{build}
image: Visual Studio 2017

environment:
  DOCKER_USER:
    secure: xxxxxxx
  DOCKER_PASS:
    secure: yyyyyyy

install:
  - docker version

build_script:
  - docker build -t me/myfavoriteapp .

test_script:
  - docker run me/myfavoriteapp

deploy_script:
  - docker login -u="$env:DOCKER_USER" -p="$env:DOCKER_PASS"
  - docker push me/myfavoriteapp

This is a very simple example. For the tests you can think of more sophisticated checks, e.g. using Pester, Serverspec or Cucumber. For the deploy steps you can decide when to run them, e.g. only for a tagged build to push a new release.
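
A deploy step that only runs for tagged builds could be sketched like this, using AppVeyor's APPVEYOR_REPO_TAG variable (the image name is a placeholder):

deploy_script:
  - ps: if ($env:APPVEYOR_REPO_TAG -eq "true") { docker push me/myfavoriteapp }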

Docker Compose

You are not limited to building a single Docker image and running one container. Your build agent is a full Windows Docker host, so you can also install Docker Compose and spin up a multi-container application. The nice thing about AppVeyor is that the builders also have Chocolatey preinstalled. So you only have to add a single short command to your appveyor.yml to download and install Docker Compose.

choco install docker-compose

Docker Swarm

You might also turn the Docker engine into a single-node Docker swarm manager to work with the new docker stack deploy command. You can create a Docker swarm with this command

docker swarm init
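
After that, a stack deployment during the build could look like this sketch; the stack name myapp is just a placeholder:

docker stack deploy -c docker-compose.yml myapp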

Add project to build

Adding AppVeyor to one of your GitHub repos is very simple. Sign in to AppVeyor with your GitHub account and select your project to add.

AppVeyor add project

Now you can also check the pull requests you or others create on GitHub.

GitHub pull request checks green

You can click on the green checkmark to view the console output of the build.

AppVeyor pull request build green

Tell me a secret

To push to the Docker Hub we need to configure some secrets in AppVeyor. After you are logged in to AppVeyor you can select the "Encrypt data" menu item from the drop down menu or use the link https://ci.appveyor.com/tools/encrypt

There you can enter your cleartext secret and it creates the encrypted configuration data you can use in your appveyor.yml.

Appveyor encrypt configuration data

These secret variables are not injected into pull request builds, so nobody can fork your repo and send you an ls env: pull request to expose those variables in the build output.

Immutable builds

One of the biggest advantages over self-hosting a CI pipeline is that you get immutable builds. You just do not have to care about the dirt and dust your build leaves behind on the build worker. AppVeyor - like all other cloud-based CI systems - just throws away the used build worker, and you get another fresh one for the next build.

AppVeyor immutable build

Even if you build Windows Docker images you don't have to clean up your Docker host. You can concentrate on your code, the build and your tests, and forget about maintaining your CI workers.


I have some GitHub repos that already use AppVeyor to build Windows Docker images, so you can have a look at how my setup works:


AppVeyor is my #1 when it comes to automated Windows builds. With the Docker support built-in it becomes even more interesting.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

<![CDATA[Is there a Windows Docker image for ...?]]>

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit from already available official Docker images for Windows.

These Docker images are well maintained and you can just start and put

https://stefanscherer.github.io/is-there-a-windows-docker-image-for/5986d4ec688a490001540970Tue, 21 Feb 2017 23:56:58 GMT

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit from already available official Docker images for Windows.

These Docker images are well maintained, and you can just put your application code inside and easily run your application in a Windows container.

Someone else has done the hard work of figuring out how to install the runtime or compiler for language XYZ into a Windows Server Core container or even a NanoServer container.

Prefer NanoServer

So starting to work with NanoServer is really easy with Docker, as you only have to choose the right image for the FROM instruction in your Dockerfile. You can start with the windowsservercore images, but I encourage you to test with nanoserver as well. For these languages it is easy to switch, and the final Docker images are much smaller.

So let's have a look at which languages are already available. The corresponding Docker Hub page normally has a short intro on how to use these Docker images.


The Go programming language is available on the Docker Hub as image golang. To get the latest Go 1.8 for either Windows Server Core or NanoServer you choose one of these.

  • FROM golang:windowsservercore
  • FROM golang:nanoserver

Have a look at the tags page if you want another version or if you want to pin a specific version of Golang.


When you hear Java you might immediately think of Oracle Java. But searching for alternatives I found three OpenJDK distros for Windows. One of them recently made it into the official openjdk Docker images. Both Windows Server Core and NanoServer are supported.

  • FROM openjdk:windowsservercore
  • FROM openjdk:nanoserver

If you prefer Oracle Java for private installations, you can build a Docker image with the Dockerfiles provided in the oracle/docker-images repository.


For Node.js there are pull requests awaiting a CI build agent for Windows to make it into the official node images.

In the meantime you can use one of my maintained images, for example the latest Node LTS version for both Windows Server Core and NanoServer:

  • FROM stefanscherer/node-windows:6
  • FROM stefanscherer/node-windows:6-nano

You also can find more tags and versions at the Docker Hub.


The script language Python is available as Windows Server Core Docker image at the official python images. Both major versions of Python are available.

  • FROM python:3-windowsservercore
  • FROM python:2-windowsservercore

I also have a Python Docker image for NanoServer with Python 3.6 to create smaller Docker images.

  • FROM stefanscherer/python-windows:nano

.NET Core

Microsoft provides Linux and Windows Docker images for .NET Core at microsoft/dotnet. For Windows it is NanoServer only, but this is no disadvantage, as you should plan for the smaller NanoServer images anyway.

  • FROM microsoft/dotnet:nanoserver


For ASP.NET there are Windows Server Core Docker images for the major versions 3 and 4 with IIS installed at microsoft/aspnet.

  • FROM microsoft/aspnet:4.6.2-windowsservercore
  • FROM microsoft/aspnet:3.5-windowsservercore


The number of programming languages provided in Windows Docker images is growing. This makes it relatively easy to port Linux applications to Windows or use Docker images to distribute apps for both platforms.

Haven't found an image for your language? Have I missed something? Please let me know, and use the comments below if you have questions about how to get started. Thanks for your interest. You can follow me on Twitter @stefscherer.