Stefan Scherer's Blog

Use Docker to Search in 320 Million Pwned Passwords

https://stefanscherer.github.io/use-docker-to-search-in-320-million-pwned-passwords/ (Sat, 05 Aug 2017 00:55:07 GMT)

This week Troy Hunt, a security researcher, announced a freely downloadable list of pwned passwords. Troy is the creator of Have I Been Pwned?, a website and service that notifies you when one of your registered email addresses has been compromised by a data breach.

In his latest blog post he introduced 306 Million Freely Downloadable Pwned Passwords, with an update of another 14 million just the following day. He has also set up an online search at https://haveibeenpwned.com/Passwords

You can enter passwords and check if they have been compromised. But do not enter actively used passwords here, even if Troy is a nice person living in sunny Australia.

Pwned Passwords online service

My recommendations are:

  1. If you are in doubt whether your password has been pwned, change it first and then check the old one in the online form.
  2. Use a password manager like 1Password to create an individual, long, random password for each service you use.

But the huge password list is still quite interesting to work with.

Let's build a local search

You can download the list of passwords (about 5 GByte compressed) and search locally in a safe place. You won't get the cleartext passwords, only their SHA1 sums. But we can create SHA1 sums of the passwords we want to look up in this huge list.

The files are compressed with 7-Zip, so you need a tool to extract them. You also need a tool to create a SHA1 sum of a plain text. And then you need another tool, a database or an algorithm, to quickly search a text file that has nearly 320 million lines.
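As a quick illustration, here is how such a SHA1 sum looks for the (terrible) example password password; this is also the openssl output format the search script relies on later:

$ echo -n "password" | openssl sha1
(stdin)= 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8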

Use Docker for the task

I immediately thought of a container that has all these tools installed. But I didn't want to add the huge password lists to that container, as it would result in a Docker image of about 12 GByte locally, or probably a 5-6 GByte compressed image on the Docker Hub.

The password files should be persisted locally on your laptop and mounted into the container to search in them with the tools needed for the task.

And I want to use some simple tools to get the work done. A first idea was born in the comments of Troy's blog post, where someone showed a small bash script using grep to search the file.

I first tried grep, but it took about 2 minutes to find a hash in the file. So I searched a little and found sgrep, a tool to grep in sorted files. Luckily the password files are sorted by the SHA1 hash. But I found only the source code; there is no standard package to install it. So we also need a C compiler.

In times before Docker you had the hassle of installing many tools on your computer. Let's see how Docker can help us with all these steps.

Build the Docker image

I found the sources of sgrep on GitHub, and we will need Make and a C compiler to build the sgrep binary.

I will use a multi-stage build Dockerfile and explain every single line. You can build the image line by line and see the benefits of the build cache while working on the Dockerfile. After adding a line to the file, run docker build -t pwned-passwords . to build and update the image.

To begin, let's choose a small Linux base image. We will name the first stage build. So the Dockerfile starts with

FROM alpine:3.6 AS build

Next, we install Git, Make and the C compiler with its header files.

RUN apk update && apk add git make gcc musl-dev

Now we clone the GitHub repo with the source code of sgrep.

RUN git clone https://github.com/colinscape/sgrep

In the next line I create a bin folder that is needed for the build process. Then we change into the source directory and run the make command, as there is a Makefile in that directory.

RUN mkdir sgrep/bin && cd sgrep/src && make

After these steps we have the sgrep binary compiled for Alpine Linux. But we have also installed a ton of other tools.

Now put all these instructions into a Dockerfile and build the Docker image.
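The complete first stage now looks like this:

FROM alpine:3.6 AS build
RUN apk update && apk add git make gcc musl-dev
RUN git clone https://github.com/colinscape/sgrep
RUN mkdir sgrep/bin && cd sgrep/src && make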

$ docker build -t pwned-passwords .

Let's inspect all image layers we have created so far.

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" pwned-passwords
78171a118279	24.5kB	/bin/sh -c mkdir sgrep/bin && cd sgrep/src...
2323bcb14b5f	93.6kB	/bin/sh -c git clone https://github.com/co...
8ec1470030af	119MB	/bin/sh -c apk update && apk add git make ...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see, we now have a Docker image of more than 120 MByte, but the sgrep binary is only 15 KByte. Yes, this is no typo: we will grep through GBytes of data with a tiny 15 KByte binary.

Multi-stage build for the win

With Docker 17.05 and newer you can add another FROM instruction and start a new stage in your Dockerfile. The last stage creates the final Docker image, so every instruction after the last FROM defines what goes into the image you want to share, e.g. on the Docker Hub.

So let's start our final stage of our Docker image build with

FROM alpine:3.6

The last stage does not need a name. Now we have an empty Alpine Linux again; none of the 120 MByte of development environment will make it into the final image. But if you build the Docker image more than once, the temporary layers are still there and will be reused if they are unmodified. So the Docker build cache speeds things up while you work on the shell script.

In the previous build stage we have created the much faster sgrep command. What we now need is a small shell script that converts a plaintext password into a SHA1 sum and runs the sgrep command.

To create a SHA1 sum I'll use the openssl command. And it would be nice if the shell script could download the huge files for us. As the files are compressed with 7-Zip, we also need wget to download them and 7z to extract them.

In the next instruction we install OpenSSL and the 7-Zip tool.

RUN apk update && apk add openssl p7zip

The COPY instruction has an option --from where you can specify another named stage of your build. So we copy the compiled sgrep binary from the build stage into the local bin directory.

COPY --from=build /sgrep/bin/sgrep /usr/local/bin/sgrep

The complete shell script is called search and can be found in my pwned-passwords GitHub repo. Just assume we have it in the current directory. The next COPY instruction copies it from your real machine into the image layer.

COPY search /usr/local/bin/search

As the last line of the Dockerfile we define an entrypoint to run this shell script if we run the Docker container.

ENTRYPOINT ["/usr/local/bin/search"]

Now append these lines to the Dockerfile and build the complete image. You will see that the first layers are already cached and only the last stage is built.
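For reference, the complete second stage consists of these lines:

FROM alpine:3.6
RUN apk update && apk add openssl p7zip
COPY --from=build /sgrep/bin/sgrep /usr/local/bin/sgrep
COPY search /usr/local/bin/search
ENTRYPOINT ["/usr/local/bin/search"]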

The search script

You can find the search script in my GitHub repo as well as the Dockerfile. You only need these two tiny files to build the Docker image yourself.

#!/bin/sh
set -e

if [ ! -d /data ]; then
  echo "Please run this container with a volume mounted at /data."
  echo "docker run --rm -v \ $(pwd):/data pwned-passwords $*"
  exit 1
fi

FILES="pwned-passwords-1.0.txt pwned-passwords-update-1.txt"
for i in $FILES
do
  if [ ! -f "/data/$i" ]; then
    echo "Downloading $i"
    wget -O "/tmp/$i.7z" "https://downloads.pwnedpasswords.com/passwords/$i.7z"
    echo "Extracting $i to /data"
    7z x -o/data "/tmp/$i.7z"
    rm "/tmp/$i.7z"
  fi
done

if [[ $1 != "" ]]
then
PWD=$1
else
PWD="password"
echo "checking $PWD"
fi

hash=`echo -n "$PASS" | openssl sha1 | awk '{print $2}' | awk 'BEGIN { getline; print toupper($0) }'`
echo "Hash is $hash"
totalcount=0
for i in $(sgrep -c $hash /data/*.txt)
do
  file=$(echo "$i" | cut -f1 -d:)
  count=$(echo "$i" | cut -f2 -d:)
  if [ "$count" -ne 0 ]; then
    echo "Oh no - pwned! Found $count occurences in $file"
  fi
  totalcount=$(( $totalcount + $count ))
done
if [ "$totalcount" -eq 0 ]; then
  echo "Good news - no pwnage found!"
else
  exit 1
fi

Build the final image

Now, with these two files - the Dockerfile and the search shell script - build the small Docker image.

$ docker build -t pwned-passwords .

Let's have a look at the final image layers with

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" stefanscherer/pwned-passwords
24eca60756c8	0B	/bin/sh -c #(nop)  ENTRYPOINT ["/usr/local...
c1a9fc5fdb78	1.04kB	/bin/sh -c #(nop) COPY file:ea5f7cefd82369...
a1f4a26a50a4	15.7kB	/bin/sh -c #(nop) COPY file:bf96562251dbd1...
f99b3a9601ea	10.7MB	/bin/sh -c apk update && apk add openssl p...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see, OpenSSL and 7-Zip take about 10 MByte, and the 16 KByte sgrep binary and the 1 KByte shell script sit on top of the 4 MByte Alpine base image.

I have also pushed this image to the Docker Hub with a compressed size of about 7 MByte. If you trust me, you can use this Docker image as well. But you will get a better feel for multi-stage builds if you build the image yourself.
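A minimal session with the prebuilt image looks like this (replace yourpass with the password you want to check):

$ docker pull stefanscherer/pwned-passwords
$ docker run --rm -v $(pwd):/data stefanscherer/pwned-passwords yourpass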

Search for pwned passwords

We now have a small 14.7 MByte Linux Docker image to search for pwned passwords.

Run the container with a folder mounted to /data. If you forget this, the script will show you how to run it.

When you run the container for the first time, it will download the two password files (5 GByte), which may take some minutes depending on your internet connection.

After the script has downloaded everything two files should appear in the current folder. For me it looks like this:

file list

Now search for passwords by adding a plaintext password as an argument

$ docker run --rm -v $(pwd):/data pwned-passwords troyhunt
Hash is 0CCE6A0DD219810B5964369F90A94BB52B056494
Oh no - pwned! Found 1 occurrence(s) in /data/pwned-passwords-1.0.txt

If you don't trust my script or the sgrep command, then run the container without network connectivity

$ docker run --rm -v $(pwd):/data --network none pwned-passwords secret4949
Hash is 6D26C5C10FF089BFE81AB22152E2C0F31C58E132
Good news - no pwnage found!

So you're in luck: you can securely check that your password secret4949 hasn't been breached. But beware, this is still not a good password :-)

Run pwned-passwords

Works on Windows

If you have Docker installed on your Windows machine, you can also use my Docker image or build the image yourself.

With Docker for Windows it only depends on the shell you use.

For PowerShell the command to run the image is

docker run --rm -v "$(pwd):/data" pwned-passwords yourpass

PowerShell

And if you prefer the classic CMD shell use this command

docker run --rm -v "%cd%:/data" pwned-passwords yourpass

CMD shell

On my Windows 7 machine I have to use Docker Machine, but even here you can easily search for pwned passwords. All you have to do is mount a directory for the password files as /data into the container.

docker run --rm -v "/c/Users/stefan.scherer/pwned:/data" stefanscherer/pwned-passwords troyhunt

Windows 7 with pwned-passwords image

Conclusion

You now know that there are millions of passwords out there that may be used in brute force attacks against other online services.

So please use a password manager instead of predictable patterns for modifying passwords across different services.

You also have learned how Docker can keep your computer clean but still compile some open source projects from source code.

You have seen the benefits of multi-stage builds to create and share minimal Docker images without the development environment.

And you now have the possibility to search for your current passwords in a safe place without leaking them to the internet. Some other online service may collect all the data entered into its forms. So keep your passwords secret and change any that have been pwned.

If you want to hear more about Docker, follow me on Twitter @stefscherer.

Exploring new NanoServer Insider images

https://stefanscherer.github.io/exploring-new-nanoserver-insider-images/ (Tue, 18 Jul 2017 09:42:41 GMT)

Last week the first Insider preview container images appeared on the Docker Hub. They promise much smaller sizes for more lightweight Windows images for our applications.

To use these Insider container images you also need an Insider preview of Windows Server 2016 or Windows 10. Yes, this is another great announcement: you can get early access and give feedback on the upcoming version of Windows Server. So let's grab it.

Windows Server Insider

  1. Register at Windows Insider program https://insider.windows.com and join the Windows Server Insider program.

  2. Download the Windows Server Insider preview ISO from https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Now you can create a VM and install Docker. You can either build the VM manually and follow the docs "Using Insider Container Images" on how to install Docker and pull the Insider container images, or you can use my Packer template and Vagrant environment to automate these steps. The walkthrough is described at

https://github.com/StefanScherer/insider-docker-machine

Windows Insider images

There are four new Docker images available with a much smaller footprint.

Windows Insider images

  • microsoft/windowsservercore-insider
  • microsoft/nanoserver-insider
  • microsoft/nanoserver-insider-dotnet
  • microsoft/nanoserver-insider-powershell

The Windows Server Core Insider image got down from 5 GB to only 2 GB which saves a lot of bandwidth and download time.

You may wonder why there are three Nano Server Insider images and why there is one without PowerShell.

Aiming for the smallest Windows base image

If we compare the image sizes of the current microsoft/nanoserver image, with its base layer and update layer, to the new Insider images, you can see the reason.

NanoServer sizes

If you want to ship your application in a container image you don't want to ship a whole operating system, but only the parts needed to run the application.

And shipping smaller images means shipping faster. For many applications you do not need, e.g., PowerShell inside your base image at runtime, which would add another 54 MByte to download from the Docker registry.

Let's have a look at the current Windows Docker images available on the Docker Hub. To run a Golang webserver, for example, on an empty Windows Docker host, you have to pull the 2 MB binary plus the two NanoServer base layers with hundreds of MB to run it in a container.

docker pull whoami

Of course these base images have to be downloaded only once, as other NanoServer container images will use the same base image. But if you have worked with Windows containers for a longer time, you may have noticed that you still have to download a different update layer from time to time, pulling another 122 MB.

And if the NanoServer base image is much smaller, then the updates will also be smaller and faster to download.

With the new Insider container images you can build and run containerized .NET core applications that are still smaller than the NanoServer + PowerShell base image.

Node.js

Another example is providing a Node.js container image based on the new NanoServer Insider image with only 92 MByte. We have just cut off three hundred MB.

Node.js NanoServer sizes

If we compare that with some of the Linux Node.js container images, we are at about the size of the slim images.

Node.js slim image sizes

Multi-stage build

Building such small Windows images comes at a cost: you have to live without PowerShell. But the new multi-stage builds introduced with Docker 17.05 really help, as you can still use PowerShell before the final image layers are built.

If you haven't heard about multi-stage builds, the concept is to have multiple FROM instructions in a Dockerfile. Only the last FROM until the end of the file builds the final container image; this is also called the last stage. In all the other stages you don't have to optimize too much and can make much better use of the build cache. You can read more about multi-stage builds at the Docker Blog.

Let's have a closer look how to build a small Node.js base image. You can find the complete Dockerfile on GitHub.

In the first stage I'm lazy and even use the microsoft/windowsservercore-insider image. The reason is that I'm using the GPG tools to verify the downloads, and these tools don't run quite well in NanoServer at the moment.

# escape=`
FROM microsoft/windowsservercore-insider as download
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest ... 
RUN Expand-Archive ...

The Dockerfile has a second FROM instruction which then uses the smallest Windows base image. In that stage you normally use COPY to deploy files and folders from previous stages. In our case we copy the Node.js installation folder into the final image.

The one RUN instruction sets the PATH environment variable with the setx command instead of PowerShell commands.

FROM microsoft/nanoserver-insider
ENV NPM_CONFIG_LOGLEVEL info
COPY --from=download /nodejs /nodejs
RUN setx PATH "%PATH%;C:\nodejs;%APPDATA%\npm"
CMD [ "node.exe" ]

Users of such a Node.js base image can work as usual, using COPY to deploy their source tree and node_modules folder into the image and running the application as a small container.

FROM stefanscherer/node-windows:8.1.4-insider
WORKDIR /code
COPY . /code
CMD ["node.exe", "app.js"]

So all you have to do is change the FROM instruction to the smaller insider Node.js image.

Further Insider images

I have pushed some of my first Insider images to the Docker Hub so it may be easier for you to try out different languages.

  • stefanscherer/node-windows:6.11.1-insider
  • stefanscherer/node-windows:8.1.4-insider
  • stefanscherer/golang-windows:1.8.3-insider
  • stefanscherer/dockertls-windows:insider

If you want to see how these images are built, then you can find the Dockerfiles in the latest pull requests of my https://github.com/StefanScherer/dockerfiles-windows repo.

Docker Volumes

If you have worked with Docker volumes on Windows you may know this already: Node.js and other tools and languages have problems when they want to get the real name of a file or folder that is mapped from the Docker host into the container.

Node.js for example thinks the file is in the folder C:\ContainerMappedDirectories, but cannot find the file there. There is a workaround described in Elton Stoneman's blog post "Introducing the 'G' Drive" to map it to another drive letter.

With the new Insider preview I see a great improvement on that topic. When running normal Windows containers without Hyper-V isolation, there is no longer a symbolic link.

If we run the Node.js container interactively and map the folder C:\code into the container, we can list the C: drive and see that the code folder is a normal directory.

docker run -v C:\code:C:\code stefanscherer/node-windows:8.1.4-insider cmd /c dir

docker run volume

With this setup you are able to mount your source code into the Node.js container and run it, e.g. with nodemon, to live-reload it after changes on the host.
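Here is a sketch of that workflow; it assumes nodemon is installed as a devDependency of the project mounted at C:\code and that app.js is the entrypoint:

docker run -v C:\code:C:\code -w C:\code stefanscherer/node-windows:8.1.4-insider cmd /c node_modules\.bin\nodemon app.js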

Unfortunately this is not available with the Hyper-V isolation that is the default on Windows 10 Insider machines.

Running the same command with --isolation=hyperv shows the symlinked directory which Node.js cannot handle at the moment.

docker run -v C:\code:C:\code --isolation=hyperv stefanscherer/node-windows:8.1.4-insider cmd /c dir

docker run volume hyperv

But this improvement in native Windows containers looks very promising and should solve a lot of headaches for the maintainers of Git for Windows, Golang, Node.js and so on.

Conclusion

Having smaller Windows container images is a huge step forward. I encourage you to try out the much smaller images. You'll learn how it feels to work with them and you can give valuable feedback to the Microsoft Containers team shaping the next version of Windows Server.

Can we make even smaller images? I don't know, but let's find out. How about naming the new images? Please make suggestions at the Microsoft Tech Community https://techcommunity.microsoft.com.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer to stay up to date with Windows containers.

Use multi-stage builds for smaller Windows images

https://stefanscherer.github.io/use-multi-stage-builds-for-smaller-windows-images/ (Wed, 19 Apr 2017 22:52:00 GMT)

I'm still here in Austin, TX at DockerCon 2017 and I want to show you one of the announcements that is very useful to build small Windows Docker images.

On Tuesday's first keynote at DockerCon, Solomon Hykes introduced the feature that impressed me most and that will make it into Docker version 17.05.0: multi-stage builds.

announcement at DockerCon about multi-stage builds

The demo in the keynote only showed Linux images, but you can use this feature for Windows images as well.

How did we build smaller images in the past?

As we know, each instruction in a Dockerfile like COPY or RUN builds a new layer of the image. So everything you do in, e.g., a RUN instruction is atomic and saved into one layer. It was common practice to use multi-line RUN instructions to clean up temporary files and cache folders before the instruction ends, to minimize the size of that layer.
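A typical example of that practice (Linux-flavored and illustrative only) installs build dependencies, builds, and removes everything again within one RUN instruction:

RUN apk add --no-cache --virtual .build-deps gcc make \
 && ./configure && make && make install \
 && apk del .build-deps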

To me it always looked like a workaround, and a little too technical, having to know where all these temporary files have to be wiped out. So it is great to remove this noise from your Dockerfiles.

Another workaround used in addition was to create two Dockerfiles and a script to simulate such stages, copying files from the first Docker image back to the host and then into the second Docker image. This could lead to errors if old temp folders on your host still contained results from earlier builds. So it is good that we can remove this complexity and avoid such build scripts entirely.

Multi-stage build on Windows

The idea behind multi-stage builds is that you can define two or more build stages, and only the layers of the last stage get into the final Docker image.

The first stage

As you can see in the nice slide you can start with a first stage and do what you like in there. Maybe you need a complete build environment like MSBuild, or the Golang compiler or dev dependencies to run Node.js tests with your sources.

The FROM instruction can now be followed by a stage name, e.g. build. I recommend introducing that in your Dockerfile, as we will need this name again later. This is how your Dockerfile could look:

FROM microsoft/windowsservercore as build

You no longer need to use multi-line RUN instructions if you never liked them. Just keep your Dockerfile simple, readable and maintainable for your team colleagues. An added advantage is that you can make much better use of the Docker build cache.

Think of a giant multi-line RUN instruction with three big downloads, uncompress and cleanup steps, where the third download crashes due to internet connectivity. Then you have to repeat all the other downloads when you start the docker build again.

So relax and just download one file per RUN instruction; even put the uncompress step into another RUN layer. It doesn't matter for the final image size.
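In a Windows Dockerfile this could look like the following sketch (hypothetical URL, and assuming SHELL is set to PowerShell); each step is cached in its own layer:

RUN Invoke-WebRequest https://example.com/tool.zip -OutFile tool.zip
RUN Expand-Archive tool.zip -DestinationPath C:\tool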

The last stage

The magic comes into the Dockerfile as you can use more than one FROM instruction. Each FROM starts a new build stage, and all lines from the last FROM onwards make it into the final Docker image. The last stage does not need a name like the previous ones.

In this last stage you define the minimal runtime environment for your containerised application.

The COPY instruction now has a new option --from where you can specify from which stage you want to copy files or directories into the current stage.

Enough theory. Let's have a look at some real use-cases I already tried out.

Build a Golang program

A simple multi-stage Dockerfile to build a Golang binary from source could look like this:

FROM golang:nanoserver as gobuild
COPY . /code
WORKDIR /code
RUN go build webserver.go

FROM microsoft/nanoserver
COPY --from=gobuild /code/webserver.exe /webserver.exe
EXPOSE 8080
CMD ["\\webserver.exe"]

The first four lines describe the normal build. We copy the source codes into the Golang build environment and build the Windows binary with it.

Then with the second FROM instruction we choose an empty NanoServer image. With this we skip about 100 MByte of compressed Golang build environment layers in the production image.

The COPY --from=gobuild instruction copies the final Windows binary from the gobuild stage into the final stage.

The last two lines are just the normal things you do, expose the port on which your app is listening and describing the command that should be called when running a container with it.

This Dockerfile can now easily be built as always with

docker build -t webserver .

The final Docker image only has a 2 MByte compressed layer in addition to the NanoServer base layers.

You can find a full example for such a simple Golang webserver in my dockerfiles-windows repo, the final Docker Hub image is available at stefanscherer/whoami:windows-amd64-1.2.0.

Install MongoDB MSI in NanoServer

Another example for this multi-stage build is that you can use it to install MSI packages and put the installed programs and files into a NanoServer image.

Well, you cannot install MSI packages in NanoServer directly, but you can start with the Windows Server Core image in the build stage and then switch to NanoServer in the final stage.

If you know where the software has been installed, you can use COPY to deploy it into the image in the final stage.

The Dockerfile how to build a MongoDB NanoServer image is also available on GitHub.

The first stage more or less looks like this:

FROM microsoft/windowsservercore as msi
RUN "download MSI page"
RUN "check SHA sum of download"
RUN "run MSI installer"

and the final stage looks like this:

FROM microsoft/nanoserver
COPY --from=msi C:\mongodb\ C:\mongodb\
...
RUN "put MongoDB binaries into PATH"
VOLUME C:\data\db
EXPOSE 27017
CMD ["mongod.exe"]

Another pro tip: If you really want small Windows Docker images you should also avoid RUN or ENV instructions in the last stage.

The final MongoDB NanoServer image is available at stefanscherer/mongo-windows:3.4.2-nano.

Conclusion

With multi-stage builds coming into Docker 17.05 we will be able to

  • put all build stages into a single Dockerfile to use only one simple docker build command
  • use the build cache by using single line RUN instructions
  • start with ServerCore, then switch to NanoServer
  • use the latest NanoServer image with all security updates installed for the last stage, even if the upstream build layers may be out of date

This gives you an idea what you will be able to do once you have Docker 17.05 or later installed.

Update 2017-05-07: I build all my dockerfiles-windows Windows Docker images with AppVeyor, and it is very easy to upgrade to Docker 17.05.0-ce during the build with the script update-docker-ce.ps1. For local Windows Server 2016 VMs you could use this script as well. Sure, at the moment we have to switch from the EE to the CE edition until 17.06.0-ee also brings this feature. Your images will still run on 17.03.1-ee production servers.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer.

Yes, you can "Docker" on Windows 7

https://stefanscherer.github.io/yes-you-can-docker-on-windows-7/ (Fri, 31 Mar 2017 17:02:07 GMT)

This week I was asked to help automate a task: getting some Linux binaries and files packaged into a tarball. Some developers had tried to spin up a Linux virtual machine and run a script to install tools and then do the packaging. Although I also like Vagrant and still use it very often, it seemed to me that using Docker would be easier to maintain, as this could be done in a one-shot container.

The hard facts - Windows 7 Enterprise

The bigger problem was the fact that in some companies you still find Windows 7 Enterprise. It may be a delayed rollout of new notebooks that keeps the employees on that old desktop platform.

So using Docker for Windows was no option as it only works with Windows 10 Pro with Hyper-V. This looks like a good setup for new notebooks, but if you want to use Docker now you have to look for other solutions.

Locked-in Hypervisor

The next obstacle was that for Vagrant it is better to use VMware Workstation on Windows 7 instead of VirtualBox. There may also be a company policy to use one specific hypervisor, as the knowledge is already there from using other server products in the datacenter.

So going down the Docker Toolbox route was also no option, as it comes with VirtualBox to run the Linux boot2docker VM.

Embrace your environment

So we went with a manual installation of some Docker tools to get a Linux Docker VM running on the Windows 7 machine. Luckily the developers already had the Chocolatey package manager installed.

Let's recap what I found on the notebooks:

  • Windows 7 Enterprise
  • VMware Workstation 9/10/11/12

Well, there is a tool, Docker Machine, to create local Docker VMs very easily, and there is a VMware Workstation plugin available. All these tools are also available as Chocolatey packages.

So what we did on the machines was installing three packages with these simple commands in an administrator terminal.

choco install -y docker
choco install -y docker-machine
choco install -y docker-machine-vmwareworkstation

Then we closed the administrator terminal as the next commands can be done in normal user mode.

My host is my castle

Every developer installs the tools they need for their work. Installing them on the host machine - your desktop or notebook - leads to machines that all differ from one another.

Creating the Docker Machine we ran into a "works on my machine, but doesn't work on your machine" problem I hadn't seen before.

Something just went wrong while setting up the Linux VM. It turned out that copying the Docker TLS certs over SSH simply didn't work. A deeper look at what else was installed on the host revealed that some SSH client implementations just don't work very well.

Luckily there is a lesser-known option in the docker-machine binary to ignore external SSH clients and use the built-in implementation.

With that knowledge we were able to create a VMware Docker Machine on that laptop with

docker-machine --native-ssh create -d vmwareworkstation default

Using the good old PowerShell on the Windows 7 notebook helps you to use that Linux Docker VM by setting some environment variables.

docker-machine env | iex

After that you can run docker version, for example, to retrieve the client and server versions, which are both the up-to-date community edition

docker version

Quite exciting to be able to use that Windows 7 notebook with the latest Docker tools installed.

So hopefully Docker, and the use of containers in more and more development tasks, helps keep these notebooks clean: fewer tools installed on the host and more tools running in containers.

I can C: a problem

Using that Docker Machine VM worked really well until we faced another problem: building some Docker images, we ran out of disk space. Although the Windows 7 notebooks had been upgraded with a 1 TB SSD, the C: partition had not been enlarged, for historical reasons.

Face palm

Docker Machine creates the Linux VMs in the current user's home directory. This is a good idea, but with a 120 GB partition and only 7 GB left on C:, we had to fix it. Taking a deep breath and embracing that environment, we came to the following solution.

We destroyed the Docker Machine again (because it's so easy) and removed the .docker folder, replacing it with a link to a folder that resides on a bigger partition of the SSD.

docker-machine rm -f default
rm $env:USERPROFILE\.docker
mkdir D:\docker
cmd /c mklink /J $env:USERPROFILE\.docker D:\docker

Then we recreated the Docker Machine with the command from above and set the environment variables again.

docker-machine --native-ssh create -d vmwareworkstation default
docker-machine env | iex

And hurray - it worked. The VM with its disk resides on the bigger D: drive and we don't have to set any other global environment variables.

With that setup I made the developers happy. They could start using Docker without waiting for new hardware or asking their admins to resize or reformat their partitions.

We soon had a small Dockerfile and put the already existing provisioning scripts into an image. So we finished the task running a Linux container that can be thrown away much more easily than a whole VM.

Daily work

To recap how to use this Docker Machine you normally do the following steps after booting your notebook.

docker-machine start
docker-machine env | iex

Then you can work with this default Linux Docker VM.

Planning your hardware update

The story ended well, but I recommended thinking ahead and planning the next hardware update. Before they simply get the next notebook generation, they should think about which hypervisor they want to use in the future.

Using Windows 10 Enterprise with the built-in Hyper-V would be easier. You can run native Windows containers with it and use Docker for Windows to switch between Linux and Windows containers. Using Vagrant with Hyper-V is also getting better and better.

But if company policy still restricts you to, e.g., VMware, then you can also use the steps above to create a Linux Docker Machine. You cannot use Windows containers directly on such a Windows 10 machine either, as Hyper-V does not work in parallel with other hypervisors. In that case you might spin up a Windows Server 2016 VM using my Windows Docker Machine setup. With that you can easily switch between Linux and Windows containers using the docker-machine env command.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. I love to hear about your enterprise setup and how to make Docker work on your developer's machines. You can follow me on Twitter @stefscherer.

7 Reasons to attend DockerCon

https://stefanscherer.github.io/7-reasons-to-attend-dockercon/ (Wed, 29 Mar 2017 22:43:00 GMT)

I'm more than happy that I can make it to DockerCon in Austin, Texas. It is only a few weeks until the workshops and conference start on April 17th. If you still need some good reasons to attend, I can give you some ideas. And you will get a 10% discount with the code CaptainStefan.

Workshops

On Monday I'll be at the workshop Modernizing monolithic ASP.NET applications with Docker, where you can get some hands-on experience with Windows containers. There is no better place to get started with Docker on Windows. Michael Friis and Elton Stoneman from Docker, as well as myself, can answer all your questions.

See some Docker Swarm demos

Come to the Community Theater on Tuesday, Apr 18th, 1:00 PM to see my live demo Swarm 2 Go and how our team at SEAL Systems has built a portable multi-arch data center with Raspberry Pi and UP boards.

picloud

You will have the chance to play chaos monkey and unplug cables to see Docker swarm mode in action. With the help of LEDs we can visualise failures and how the Docker swarm becomes healthy again. All the steps to build such a cluster are available in an open source repo.

Learn about Docker on Windows

Docker is no longer a Linux-only thing. There are several talks about Docker on the Windows platform that I want to see.

I also recommend visiting the Microsoft booth to hopefully see some Docker swarm mode on Windows Server. I really look forward to seeing the latest news and talking with some of the Microsoft Container and Networking team.

Multiple platforms

If you think Docker is only Linux on Intel machines, then, compared to an instrument, it may look like this.

keyboard

But as you can see from the talks above, Docker is available on multiple platforms: Linux, Windows, from small ARM devices like the Raspberry Pi to big IBM machines.

So the whole spectrum of Docker looks more like this, and once you have learned the Docker commands you are able to play it:

organ

So it is time to learn how easy it is to deploy your applications for more than one platform.

See you at DockerCon! Ping me on Twitter @stefscherer or with the DockerCon app to get in touch with me during that conference week.

How to run encrypted Windows websites with Docker and Træfɪk

https://stefanscherer.github.io/how-to-run-encrypted-windows-websites-with-docker-and-traefik/ (Fri, 10 Mar 2017 22:21:00 GMT)

Nowadays we read all the time that every website should be encrypted. Adding TLS certificates to your web server sounds like a hard task: you have to renew your certificates before they expire. I don't run public websites on a regular basis, so I - like many others, I guess - had heard of Let's Encrypt, but never really tried it.

But let's learn new things and try it out. I also promised in the interview in John Willis' Dockercast that I would write a blog post about it. As you will see, with some modern tools it's not very complicated to run your Windows website with TLS certificates.

In this blog post I will show you how to run your website in Windows containers with Docker. You can develop your website locally in a container and push it to your server. Another Windows container runs the Træfɪk proxy, which helps us with the TLS certificates as well as with its dynamic configuration to add more than just one website.

Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends, like Docker, to register and update its configuration for each newly started container.

This picture gives you an overview of the architecture:

Traefik architecture

Normally Træfɪk runs inside a container, and it is well known in the Linux Docker community. A few weeks ago I saw that there are also Windows binaries available. Let's see if we can use Træfɪk in a Windows container to provide encrypted HTTPS traffic to other Windows containers running our IIS website or other web services.

Step 1: Create a Windows Docker host in Azure

First of all we need a Windows Server 2016 machine with Docker in the cloud. I will use Azure, as Microsoft provides a VM template for that. This server will later be our webserver, with its own DNS name and TLS certs, running our website.

Go to the Windows Containers quick start guide at docs.microsoft.com and press the "Deploy to Azure" button.

Deploy to Azure

This will bring you to the Azure portal, where you can customize the virtual machine. Create a new resource group, choose the location where the server should run and a public DNS name, as well as the size of the VM.

Customize machine

After you click on "Purchase" the deployment starts which should take only a few minutes.

Azure starts deployment

In the meantime click on the cube symbol on the left. That will show you all resource groups you have.

This Windows + Docker template already creates inbound security rules for HTTPS port 443 as well as the Docker TLS port 2376. So for our purposes we don't need to add more inbound rules.

Step 2: Buy a domain and update DNS records

For Let's Encrypt you need your own domain name to get TLS certificates. For my tests I ordered a domain name at GoDaddy. But after I walked through the steps I realised that Træfɪk can also update your DNS records automatically when you use DNSimple, CloudFlare etc.

But for first-time domain name users like me, I'll show you the manual steps. In my case I went to my domain provider and configured the DNS records.

Get the public IP address

Before we can update the DNS record we need the public IP address of the VM. This IP address is also used for the Docker TLS certificates we will create later on.

In the Azure Portal, open the resource group and click on the public IP address.

Resource group

Write down or copy the IP address shown here.

Public IP address

Go back to your domain provider and enter the public IP address in the A record. If you want to run multiple websites within Docker containers, add a CNAME resource record for each subdomain you need. For this tutorial I have added portainer and whoami as additional subdomains.
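The resulting records could look like this (with <public-ip> standing for the address you just copied; illustrative layout, as every provider's UI differs):

A      @          <public-ip>
CNAME  portainer  schererstefan.xyz
CNAME  whoami     schererstefan.xyz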

Update DNS records

After some minutes all the DNS servers should know your domain name with the new IP address of your Windows Server 2016.

Step 3: Secure Docker with TLS

We now log into the Docker host with RDP. You can use the DNS name provided by Azure or your own domain name. Before you connect with RDP, add a shared folder to your RDP session so you can copy the Docker TLS client certificates back to your local machine. With this you will also be able to control your Windows Docker engine directly from your local computer.

In this example I shared my desktop folder with the Windows VM.

Add folder in RDP client

Now login with the username and password entered at creation time.

Login with RDP

Create Docker TLS certs

To use Docker remotely it is recommended to use client certificates, so nobody without those certs can talk to your Docker engine. The same applies if a Windows container wants to communicate with the Docker engine: using just the unprotected port 2375 would give every container the chance to gain access to your Docker host.

Open a PowerShell terminal as an administrator to run a Windows container that creates TLS certificates for your Docker engine. I have already blogged about DockerTLS in more detail, so we just use it here as a tool.

Retrieve all local IP addresses so that the TLS certificate is valid for the host itself, as well as for other Windows containers that talk to your Docker engine.

$ips = ((Get-NetIPAddress -AddressFamily IPv4).IPAddress) -Join ','

Also create a local folder for the client certificates.

mkdir ~\.docker

Now run the DockerTLS tool with docker run; just append the public IP address from above to the list in IP_ADDRESSES. Also adjust the SERVER_NAME variable to your domain name.

docker run --rm `
  -e SERVER_NAME=schererstefan.xyz `
  -e IP_ADDRESSES=$ips,52.XXXXXXX.198 `
  -v "C:\ProgramData\docker:C:\ProgramData\docker" `
  -v "$env:USERPROFILE\.docker:C:\Users\ContainerAdministrator\.docker" `
  stefanscherer/dockertls-windows

Run dockertls

Docker will pull the Windows image from Docker Hub and create the TLS certificates in the correct folders for your Docker engine.

Afterwards you have to restart the Docker engine to use the TLS certificates. The Docker engine then additionally listens on TCP port 2376.

restart-service docker

Restart docker

Add firewall exception for Docker

This step is needed to let other Windows containers talk to the Docker engine on port 2376. But it also has another benefit: with these certs you can use the Docker client on your local machine to communicate with the Windows Docker engine in Azure. I will still start Træfɪk later on from the Docker host itself, as we need some volume mount points.

The Windows Server's firewall is active, so we now have to add an exception to allow inbound traffic on port 2376. The network security group for the public IP address already has an inbound rule to the VM. This firewall exception now allows the connection to the Docker engine.

Add firewall exception

From now on you can connect to the Docker engine listening on port 2376 from the internet.

Copy Docker client certs to your local machine

To setup a working communication from your local machine, copy the Docker client certificates from the virtual machine through the RDP session back to your local machine.

Copy Docker TLS certs to client

On your local machine try to connect with the remote Windows Docker engine with TLS encryption and the client certs.

$ DOCKER_CERT_PATH=~/Desktop/.docker DOCKER_TLS_VERIFY=1 docker -H tcp://schererstefan.xyz:2376 version

Docker client from Mac

Now you are able to start and stop containers as you like.

Step 4: Run Træfɪk and other services

Now comes the fun part. We use Docker and Docker Compose to describe which containers we want to run.

Install Docker Compose

To spin up all our containers I use Docker Compose and a docker-compose.yml file that describes all services.

The Windows VM does not come with Docker Compose. So we have to install Docker Compose first. If you are working remotely you can use your local installation of Compose and skip this step.

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-Windows-x86_64.exe" `
  -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

If you prefer Chocolatey, use choco install docker-compose instead.

Create data folders on Docker host

You need to persist some data outside of the Docker containers, so we create some data folders. Træfɪk retrieves the TLS certs, and these should be persisted outside of the container. Otherwise you run into the Let's Encrypt rate limit of 20 requests per week for obtaining new certificates. This happened to me while trying different things with Træfɪk and starting and killing the container lots of times.

PS C:\Users\demo> mkdir sample
PS C:\Users\demo> cd sample
PS C:\Users\demo\sample> mkdir traefikdata
PS C:\Users\demo\sample> mkdir portainerdata

docker-compose.yml

For a first test we define two services: the traefik service and an example web server called whoami. This tutorial should just give you an idea; you can extend the YAML file to your needs. Run an IIS website? Put it into a container image. Another IIS website? Just run a separate container with that other website in it. You see, you don't have to mix multiple sites; just keep each one in its own microservice image.

Open up an editor and create the YAML file.

PS C:\Users\demo\sample> notepad docker-compose.yml
version: '2.1'
services:
  traefik:
    image: stefanscherer/traefik-windows
    ports:
      - "8080:8080"
      - "443:443"
    volumes:
      - ./traefikdata:C:/etc/traefik
      - ${USERPROFILE}/.docker:C:/etc/ssl:ro

  whoami:
    image: stefanscherer/whoami-windows
    depends_on:
      - traefik
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:whoami.schererstefan.xyz"

networks:
  default:
    external:
      name: nat

I already have built a Træfɪk Windows Docker image that you can use. There might be an official image in the future. If you don't want to use my image, just use this Dockerfile and replace the image: stefanscherer/traefik-windows with build: ., so Docker Compose will build the Træfɪk image for you.

The Dockerfile looks very simple as we directly add the Go binary to the Nanoserver Docker image and define some volumes and labels.

FROM microsoft/nanoserver

ADD https://github.com/containous/traefik/releases/download/v1.2.0-rc2/traefik_windows-amd64 /traefik.exe

VOLUME C:/etc/traefik
VOLUME C:/etc/ssl

EXPOSE 80
ENTRYPOINT ["/traefik", "--configfile=C:/etc/traefik/traefik.toml"]

# Metadata
LABEL org.label-schema.vendor="Containous" \
      org.label-schema.url="https://traefik.io" \
      org.label-schema.name="Traefik" \
      org.label-schema.description="A modern reverse-proxy" \
      org.label-schema.version="v1.2.0-rc2" \
      org.label-schema.docker.schema-version="1.0"

traefik.toml

Træfɪk needs a configuration file where you specify your email address for the Let's Encrypt certificate requests. You will also need the IP address of the container network so that Træfɪk can contact your Docker engine.

$ip=(Get-NetIPAddress -AddressFamily IPv4 `
   | Where-Object -FilterScript { $_.InterfaceAlias -Eq "vEthernet (HNS Internal NIC)" } `
   ).IPAddress
Write-Host $ip

Now open an editor to create the traefik.toml file.

PS C:\Users\demo\sample> notepad traefikdata\traefik.toml

Enter that IP address as the endpoint in the [docker] section. Also adjust the domain names.

[web]
address = ":8080"

[docker]
domain = "schererstefan.xyz"
endpoint = "tcp://172.24.128.1:2376"
watch = true

[docker.tls]
ca = "C:/etc/ssl/ca.pem"
cert = "C:/etc/ssl/cert.pem"
key = "C:/etc/ssl/key.pem"

# Sample entrypoint configuration when using ACME
[entryPoints]
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]

# Email address used for registration
#
# Required
#
email = "you@yourmailprovider.com"

storage = "c:/etc/traefik/acme.json"
entryPoint = "https"

[[acme.domains]]
   main = "schererstefan.xyz"
   sans = ["whoami.schererstefan.xyz", "portainer.schererstefan.xyz", "www.schererstefan.xyz"]

Open firewall for all container ports used

Please note that the Windows firewall is also active for the container network. The whoami service listens on port 8000 in each container. To let Træfɪk connect to the whoami containers, you have to add a firewall exception for port 8000.

Docker automatically adds a firewall exception for all ports mapped to the host with ports: in the docker-compose.yml. But for the exposed ports this does not happen automatically.
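One way to add that exception for the whoami port is a PowerShell one-liner on the Docker host (a sketch; adjust the name and port to your service):

New-NetFirewallRule -DisplayName "whoami 8000" -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow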

Spin up Træfɪk and whoami

Now it's time to spin up the two containers.

docker-compose up

You can see the output of each container and stop them by pressing CTRL+C. If you want to run them detached in the background, use

docker-compose up -d

To see the output of the services you can use docker-compose logs traefik or docker-compose logs whoami at any time.

Træfɪk now fetches TLS certificates for your domain with the given subdomains. Træfɪk also listens for containers starting and stopping.

Test with a browser

Now open a browser on your local machine and try your TLS-encrypted website with the subdomain whoami. You should see a text like I'm 3e1f17ecbba3, which is the hostname of the container.

Now let's try Træfɪk's load balancing feature by scaling up the whoami service.

docker-compose scale whoami=3

Now there are three whoami containers running, and Træfɪk knows all three of them. Each request to the subdomain is load balanced to one of these containers. You can SHIFT-reload the page in your browser and see that each request returns another hostname.

Test whoami service with browser
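You can also watch the round-robin from a terminal on your local machine, assuming curl is installed; each request should return a different hostname:

$ curl https://whoami.schererstefan.xyz
$ curl https://whoami.schererstefan.xyz
$ curl https://whoami.schererstefan.xyz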

So we have a secured HTTPS connection to our Windows containers.

IIS

The power of Docker is that you can run multiple services on one machine if you have resources left. So let's add another web server; let's choose an IIS server.

Add these lines to the docker-compose.yml.

  www:
    image: microsoft/iis
    expose:
      - 80
    depends_on:
      - traefik
    labels:
      - "traefik.backend=www"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:www.schererstefan.xyz"

Remember to add a firewall exception for port 80 manually. After that spin up the IIS container with

docker-compose up -d www

And check the new sub domain. You will see the welcome page of IIS.

IIS welcome page

Portainer

Let's add another useful service to monitor your Docker engine. Portainer is a very good UI for that task and it is also available as a Windows Docker image.

Add another few lines to our docker-compose.yml.

  portainer:
    image: portainer/portainer
    command: -H tcp://172.24.128.1:2376 --tlsverify
    volumes:
      - ./portainerdata:C:/data
      - ${USERPROFILE}/.docker:C:/certs
    depends_on:
      - traefik
    labels:
      - "traefik.backend=portainer"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:portainer.schererstefan.xyz"

Portainer also needs the client certs to communicate with the Docker engine. Another volume mount point is used to persist data like your admin login outside the container.

Now run Portainer with

docker-compose up -d portainer

Then open your browser on your local machine with the subdomain. When you open it for the first time, Portainer will ask you for an admin password. Enter the password you want to use and then log in with it.

Portainer login

Now you have a UI to see all running containers, all downloaded Docker images, etc.

Portainer dashboard

Portainer containers

Conclusion

What we have learned is that Træfɪk works pretty well on Windows. It helps us secure our websites with TLS certificates. In combination with Docker Compose you can add or remove websites on the fly, or even scale some services with Træfɪk's built-in load balancer.

Read more details in the Træfɪk documentation, as I can only give you a short intro to its capabilities.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

Setup a Windows Docker CI with AppVeyor

https://stefanscherer.github.io/setup-windows-docker-ci-appveyor/ (Fri, 10 Mar 2017 05:54:00 GMT)

I love GitHub and all the services around it. It enables you to work from anywhere or any device and still have your complete CI pipeline in your pocket. Everything is done with a git push. You can add services like Codeship, Travis, Circle and lots of others to build and test your code, and even the pull requests you get from others.

But I'm on Windows

To build applications for Windows there is a similar cloud based CI service, called AppVeyor.

And it works pretty similar to the other well known services for Linux:

  1. Put a YAML file into your repo with the build, test and deploy steps
  2. Connect your repo to the cloud CI service
  3. From now on a git push will do a lot for you.

Your CI pipeline is set up in a few clicks.

appveyor.yml

Here is an example of how such a YAML file looks for AppVeyor. This one is from a small C/C++ project I made a long time ago during a holiday, without Visual Studio at hand. I just created the GitHub repo, added the appveyor.yml, and voila - I got a compiled and statically linked Windows binary at GitHub releases.

version: 1.0.{build}
configuration: Release
platform: x64
build:
  project: myfavoriteproject.sln
  verbosity: minimal
test: off
artifacts:
- path: x64/Release/myfavoriteproject.exe
  name: Release
deploy:
- provider: GitHub
  auth_token:
    secure: xxxxx

The build worker in AppVeyor is fully armed with lots of development tools, so you can build projects in several languages like Node.js, .NET, Ruby, Python, Java ...

Docker build

AppVeyor now has released a new build worker with Windows Server 2016 and Docker Enterprise Edition 17.03.0-ee-1 pre-installed. That instantly enables you to build, test and publish Windows Docker images in the same lightweight way.

Docker build with AppVeyor

All you have to do is select the new build worker by adding image: Visual Studio 2017 to your appveyor.yml. There is no more work needed to get a full Windows Docker engine for your build.

The following appveyor.yml gives you an idea how easy an automated Docker build for Windows can be:

version: 1.0.{build}
image: Visual Studio 2017

environment:
  DOCKER_USER:
    secure: xxxxxxx
  DOCKER_PASS:
    secure: yyyyyyy
install:
  - docker version

build_script:
  - docker build -t me/myfavoriteapp .

test_script:
  - docker run me/myfavoriteapp

deploy_script:
  - docker login -u="$env:DOCKER_USER" -p="$env:DOCKER_PASS"
  - docker push me/myfavoriteapp

This is a very simple example. For the test step you can think of more sophisticated tests using Pester, Serverspec or Cucumber. For the deploy steps you can decide when to run them, e.g. only for a tagged build to push a new release.
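
As a sketch of how such a guarded deploy could look (the image name is the one from above, and APPVEYOR_REPO_TAG is an environment variable AppVeyor sets to "true" for tag builds):

deploy_script:
  - ps: |
      if ($env:APPVEYOR_REPO_TAG -eq "true") {
        docker login -u="$env:DOCKER_USER" -p="$env:DOCKER_PASS"
        docker push me/myfavoriteapp
      }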

Docker Compose

You are not limited to building a single Docker image and running one container. Your build agent is a full Windows Docker host, so you can also install Docker Compose and spin up a multi-container application. The nice thing about AppVeyor is that the build workers also have Chocolatey preinstalled. So you only have to add a single short command to your appveyor.yml to download and install Docker Compose.

choco install docker-compose
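
In an appveyor.yml this could look like the following sketch (a docker-compose.yml with your services in the repo is an assumption here; docker-compose ps just lists the started containers):

install:
  - choco install docker-compose

test_script:
  - docker-compose up -d
  - docker-compose ps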

Docker Swarm

You might also turn the Docker engine into a single-node Docker swarm manager to work with the new docker stack deploy command. You can create a Docker Swarm with this command

docker swarm init
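
After that, a stack described in a compose file can be deployed and inspected like this (a sketch; the stack name myapp and the compose file are assumptions):

docker stack deploy --compose-file docker-compose.yml myapp
docker stack services myapp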

Add project to build

Adding AppVeyor to one of your GitHub repos is very simple. Sign in to AppVeyor with your GitHub account and select your project to add.

AppVeyor add project

Now you can also check the pull requests you or others create on GitHub.

GitHub pull request checks green

You can click on the green checkmark to view the console output of the build.

AppVeyor pull request build green

Tell me a secret

To push to the Docker Hub we need to configure some secrets in AppVeyor. After you are logged in to AppVeyor you can select the "Encrypt data" menu item from the drop down menu or use the link https://ci.appveyor.com/tools/encrypt

There you can enter your cleartext secret and it creates the encrypted configuration data you can use in your appveyor.yml.

Appveyor encrypt configuration data

These secret variables don't get injected into pull request builds, so nobody can fork your repo and send you an ls env: pull request to expose those variables in the output.

Immutable builds

One of the biggest advantages over self-hosting a CI pipeline is that you get immutable builds. You just do not have to care about the dirt and dust your build leaves behind on the build worker. AppVeyor - like all other cloud based CI systems - just throws away the build worker and you get another empty one for the next build.

AppVeyor immutable build

Even if you build Windows Docker images you don't have to clean up your Docker host. You can concentrate on your code, the build and your tests, and forget about maintaining your CI workers.

Examples

I have some GitHub repos that already use AppVeyor to build Windows Docker images, so you can have a look at how my setup works:

Conclusion

AppVeyor is my #1 when it comes to automated Windows builds. With the Docker support built-in it becomes even more interesting.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Is there a Windows Docker image for ...?]]>

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit from already available official Docker images for Windows.

These Docker images are well maintained and you can just start and put

]]>
https://stefanscherer.github.io/is-there-a-windows-docker-image-for/5986d4ec688a490001540970Tue, 21 Feb 2017 23:56:58 GMT

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit from already available official Docker images for Windows.

These Docker images are well maintained and you can just start and put your application code inside and run your application easily in a Windows container.

Someone else has done the hard work of figuring out how to install the runtime or compiler for language XYZ into a Windows Server Core container or even a Nanoserver container.

Prefer NanoServer

So starting to work with NanoServer is really easy with Docker, as you only have to choose the right image for the FROM instruction in your Dockerfile. You can start with windowsservercore images, but I encourage you to test with nanoserver as well. For these languages it is easy to switch and the final Docker images are much smaller.
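
To see the difference yourself, you can pull both variants of an image and compare their sizes (a quick sketch; the tags may have moved since this was written):

docker pull golang:windowsservercore
docker pull golang:nanoserver
docker images golang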

So let's have a look at which languages are already available. The corresponding Docker Hub page normally has a short intro on how to use these Docker images.

Go

The Go programming language is available on the Docker Hub as image golang. To get the latest Go 1.8 for either Windows Server Core or NanoServer you choose one of these.

  • FROM golang:windowsservercore
  • FROM golang:nanoserver

Have a look at the tags page if you want another version or if you want to pin a specific version of Golang.

Java

When you hear Java you might immediately think of Oracle Java. But searching for alternatives I found three OpenJDK distros for Windows. One of them recently made it into the official openjdk Docker images. Both Windows Server Core and NanoServer are supported.

  • FROM openjdk:windowsservercore
  • FROM openjdk:nanoserver

If you prefer Oracle Java for private installations, you can build a Docker image with the Dockerfiles provided in the oracle/docker-images repository.

Node.JS

For Node.js there are pull requests awaiting a CI build agent for Windows to make it into the official node images.

In the meantime you can use one of my maintained images, for example the latest Node LTS version for both Windows Server Core and NanoServer:

  • FROM stefanscherer/node-windows:6
  • FROM stefanscherer/node-windows:6-nano

You also can find more tags and versions at the Docker Hub.

Python

The scripting language Python is available as a Windows Server Core Docker image at the official python images. Both major versions of Python are available.

  • FROM python:3-windowsservercore
  • FROM python:2-windowsservercore

I also have a Python Docker image for NanoServer with Python 3.6 to create smaller Docker images.

  • FROM stefanscherer/python-windows:nano

.NET Core

Microsoft provides Linux and Windows Docker images for .NET Core at microsoft/dotnet. For Windows it is NanoServer only, but this is no disadvantage as you should plan for the smaller NanoServer images.

  • FROM microsoft/dotnet:nanoserver

ASP.NET

For ASP.NET there are Windows Server Core Docker images for the major versions 3 and 4 with IIS installed at microsoft/aspnet.

  • FROM microsoft/aspnet:4.6.2-windowsservercore
  • FROM microsoft/aspnet:3.5-windowsservercore

Conclusion

The number of programming languages provided in Windows Docker images is growing. This makes it relatively easy to port Linux applications to Windows or use Docker images to distribute apps for both platforms.

Haven't found an image for your language? Have I missed something? Please let me know, and use the comments below if you have questions how to get started. Thanks for your interest. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Getting started with Docker Swarm-mode on Windows 10]]>

Last Friday I noticed a blog post that Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10. A long-awaited feature to use Docker Swarm on Windows, so it's time to test-drive it.

You may wonder why this feature is available on

]]>
https://stefanscherer.github.io/docker-swarm-mode-windows10/5986d4ec688a49000154096dMon, 13 Feb 2017 01:31:00 GMT

Last Friday I noticed a blog post that Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10. A long-awaited feature to use Docker Swarm on Windows, so it's time to test-drive it.

You may wonder why this feature is available on Windows 10 and not on Windows Server 2016. Sure, it would make more sense in production to run a Docker Swarm on multiple servers. The reason is that the Insider preview is the fastest channel to ship new features. Unfortunately there is no equivalent for Windows Server editions.

So if you need it for Windows Server you have to wait a little longer. You can indeed test Swarm-Mode on Windows Server 2016 with Docker 1.13, but only without the Overlay network. To test Swarm-Mode with the Overlay network you will need some machines running Windows 10 Insider 15031.

Preparation

In my case I use Vagrant to spin up Windows VM's locally on my notebook. The advantage is that you can describe some test scenarios with a Vagrantfile and share it on GitHub.

I already have played with Docker Swarm-Mode in December and created a Vagrant environment with some Windows Server 2016 VM's. I'll re-use this scenario and just replace the underlying Vagrant box.

So the hardest part is to build a Windows 10 Insider 15031 VM. The latest ISO file with Windows 10 Insider 15025 is a good starting point. You have to switch to the Fast Ring to fetch the latest updates for Insider 15031.

Normally I use Packer with my packer-windows templates available on GitHub to automatically create such Vagrant boxes. In this case I only have a semi-automated template. Download the ISO file, build a VM with the windows_10_insider.json template and update it to Insider 15031 manually. With such a VM, build the final Vagrant box with the windows_10_docker.json Packer template.

What we now have is a Windows 10 Insider 15031 VM with the Containers and Hyper-V features activated, Docker 1.13.1 installed and both Microsoft Docker images downloaded. All the time consuming things should be done in a Packer build to make the final vagrant up a breeze.

In my case I had to add the Vagrant box with

vagrant box add windows_10_docker ./windows_10_insider_15031_docker_vmware.box

Vagrant 1.9.1 is able to use linked clones for VMware Fusion, VirtualBox and Hyper-V. So you need this big Vagrant box only once on disk. For the Docker Swarm only a clone will be started for each VM to save time and disk space.

Create the Swarm

Now we use the prepared Vagrant environment and adjust it

git clone https://github.com/StefanScherer/docker-windows-box
cd docker-windows-box/swarm-mode
vi Vagrantfile

In the Vagrantfile I had to change only the name of the box after config.vm.box to the newly added Vagrant box. This is like changing the FROM in a Dockerfile.

git diff Vagrantfile

I also adjusted the memory a little bit to spin up more Hyper-V containers.

But now we are ready to create the Docker Swarm with a simple

vagrant up

This will spin up three Windows 10 VM's and build the Docker Swarm automatically for you. Thanks to linked clones and the well prepared Vagrant basebox it takes only a few minutes to have a complete Docker Swarm up and running.

docker node ls

After all three VM's are up and running, go into the first VM and open a PowerShell terminal. With

docker node ls

you can check if your Docker Swarm is active.

Create a network

Now we create a new overlay network with

docker network create --driver=overlay sample

You can list all networks with docker network ls as there are already some others.

Create a whoami service

With this new overlay network we start a simple service. I've prepared a Windows version of the whoami service. This is a simple webserver that just responds with its internal container hostname.

docker service create --name=whoami --endpoint-mode dnsrr `
  --network=sample stefanscherer/whoami-windows:latest

At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot publish ports externally right now. More to come in the near future.

Run visualizer

To make it more visible what happens in the next steps, I recommend running the Visualizer. On the first VM you can run the Visualizer with this script:

C:\vagrant\scripts\run-visualizer.ps1

Now open a browser with another helper script:

C:\vagrant\scripts\open-visualizer.ps1

Now you can scale up the service to spread it over your Docker swarm.

docker service scale whoami=4

This will bring up the service on all three nodes and one of the nodes is running two instances of the whoami service.

Visualizer

Just play around scaling the service up and down a little bit.
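
A quick way to watch the scheduler at work is to change the replica count and list the tasks with their nodes (a sketch using the docker service commands of Docker 1.13):

docker service scale whoami=2
docker service ps whoami
docker service scale whoami=4
docker service ps whoami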

Build and create another service

As I've mentioned above you cannot publish ports and there is no routing mesh at the moment. So the next thing is to create another service that accesses the whoami service inside the overlay network. On Linux you would probably use curl to do that. I tried a simple PowerShell script to do the same.

Two small files are needed to create a Docker image. First the simple script askthem.ps1:

while ($true) {
  (Invoke-WebRequest -UseBasicParsing http://whoami:8000).Content
  Start-Sleep 1
}

As you can see the PowerShell script will access the webserver with the hostname whoami on port 8000.

Now put this script into a Docker image with this Dockerfile:

FROM microsoft/nanoserver
COPY askthem.ps1 askthem.ps1
CMD ["powershell", "-file", "askthem.ps1"]

Now build the Docker image with

docker build -t askthem .

Now we can start the second service that consumes the whoami service.

docker service create --name=askthem --network=sample askthem:latest

You should now see one instance of the newly created askthem service. Let's have a look at the logs. As this Vagrant environment enables the experimental features of Docker, we are able to get the logs with this command:

docker service logs askthem

In my case I was lucky and the askthem service got a response from one of the whoami containers running on a different Docker node.

Windows 10 Swarm-Mode

I haven't figured out why all the responses are from the same container. Maybe PowerShell or the askthem container itself caches the DNS requests.
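
To narrow that down, a small diagnostic loop could resolve the service name on every iteration and print which IPs the DNS round robin returns (just a sketch; it uses the .NET Dns class that is available from PowerShell):

while ($true) {
  # resolve the service name each time to see all IPs behind it
  [System.Net.Dns]::GetHostAddresses("whoami") |
    ForEach-Object { $_.IPAddressToString }
  Start-Sleep 1
}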

But it still proves that overlay networking is working across multiple Windows machines.

More to play with

The Vagrant environment has some more things prepared. You can also spin up Portainer, which gives you a nice UI for your Docker swarm. You can have a look at your nodes, the Docker images, the containers and services running and so on.

I also found out that you can scale services in the Portainer UI by changing the replicas. Running Visualizer and Portainer side-by-side demonstrates that:

Visualizer and Portainer

Conclusion

I think this setup can help you try out the new Overlay network in Windows 10 Insider, and hopefully in Windows Server 2016 very soon as well.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Dockerizing Ghost and Buster to run a blog on GitHub pages]]>

I'm running this blog for nearly three years now. One of my first posts was a description of how to set up Ghost for GitHub pages. In the past I've installed lots of tools on my Mac to run Ghost and Buster locally.

I still like this setup hosting only the static

]]>
https://stefanscherer.github.io/dockerizing-ghost-buster/5986d4ec688a49000154096eSat, 11 Feb 2017 18:46:46 GMT

I'm running this blog for nearly three years now. One of my first posts was a description of how to set up Ghost for GitHub pages. In the past I've installed lots of tools on my Mac to run Ghost and Buster locally.

I still like this setup hosting only the static files at GitHub without maintaining an online server. But over time you also have to update Ghost, the Node version used and so on. That's why I have revisited my setup to make it easier for me to update Ghost by running all tools in Docker containers.

Requirements

  • Docker for Mac
  • git (is already installed)
  • docker-compose (already installed with D4M)

You can find my setup and all files in my GitHub repo StefanScherer/ghost-buster-docker.

As I'm upgrading from my local Ghost installation to this dockerized version I already have some content, the static files and my GitHub pages repo. Please refer to my old blog post on how to create your repo. The following commands should give you an idea how to set up the two folders content and static.

git clone https://github.com/YOURNAME/ghost-buster-docker
cd ghost-buster-docker
mkdir content
git clone https://github.com/YOURNAME/YOURNAME.github.io static

docker-compose.yml

To simplify running Ghost and Buster I've created a docker-compose.yml with all the published ports and volume mount points.

There are three services

  • ghost
  • buster
  • preview
version: '2.1'

services:
  ghost:
    image: ghost:0.11.4
    volumes:
      - ./content:/var/lib/ghost
    ports:
      - 2368:2368

  buster:
    image: stefanscherer/buster
    command: /buster.sh
    volumes:
      - ./static:/static
      - ./buster.sh:/buster.sh

  preview:
    image: nginx
    volumes:
      - ./static:/usr/share/nginx/html:ro
    ports:
      - 2369:80

Edit content with Ghost

To create a new blog post or edit existing posts, you spin up the ghost container with

docker-compose up -d ghost

and then open up your browser at http://localhost:2368/ghost to log in and edit content. As you can see the folder content is mapped into the ghost container to persist your Ghost blog data and images on your host machine.

Generate static files

To generate the static HTML pages we use the second service with Buster installed. This is not a real long-running service, so we do not "up" but "run" it with

docker-compose run buster

Now you have updated files in the static folder. You may edit the local script buster.sh to fix some links that Buster has broken in my pages in the past.

Preview static files

From time to time it is useful to check the generated static HTML files before pushing them to GitHub pages. The third service runs a webserver with the created static pages.

docker-compose up -d preview

Open your browser at http://localhost:2369 and check if everything looks good. In my setup I've added Disqus and first wanted to try out the results of modifying the post.hbs file of the theme.

Deploy static files

If you are happy with the new static files it's time to push them. I've added a small script deploy.sh to do the final steps on the host as only git is used here. As I'm using GitHub with SSH and a passphrase I don't want to put that into a container. Have a look at the shell script and you will see that it's only a git add && git commit && git push script.

./deploy.sh
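
The script is roughly equivalent to this sketch (the commit message and branch are assumptions; the real script is in the ghost-buster-docker repo):

cd static
git add -A
git commit -m "Update static site"
git push origin master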

Conclusion

I think this setup will help me in the future to update Ghost more easily.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Winspector - a tool to inspect your and other's Windows images]]>

In my previous blog post I showed you how to get Windows Updates into your container images. But how do you know whether the underlying Docker image you use in the FROM line of your Dockerfile also uses the correct version of the Windows base image?

Is there a way

]]>
https://stefanscherer.github.io/winspector/5986d4ec688a49000154096cSun, 08 Jan 2017 14:00:00 GMT

In my previous blog post I showed you how to get Windows Updates into your container images. But how do you know whether the underlying Docker image you use in the FROM line of your Dockerfile also uses the correct version of the Windows base image?

Is there a way to look into container images without downloading them?

There are several services like imagelayers.io, microbadger, shields.io and others which provide badges and online views for existing Docker images at Docker Hub. Unfortunately not all support Windows images at the moment.

Enter winspector

I found an inspector tool written in Python that might be useful for that task. I've enhanced it and created a tool called winspector which is available as Docker image stefanscherer/winspector for Windows and Linux. With this tool you can inspect any Windows Docker images on the Docker Hub.

Winspector will show you

  • The creation date of the image and the Docker version and Windows version used at build time.
  • The number of layers down to the Windows base image
  • Which Windows base image the given image depends on. So you know whether a random Windows image uses the up-to-date Windows base image or not.
  • The size of each layer. This is useful when you try to optimize your image size.
  • The "application size" without the Windows base layers. So you get an idea of how small your Windows application image really is and what other users have to download, provided that they already have the base image.
  • The history of the image. It tries to reconstruct the Dockerfile commands that have been used to build the image.

Run it from Windows

If you have Docker running with Windows containers, use this command to run the tool with any given image name and an optional tag.

docker run --rm stefanscherer/winspector microsoft/iis

run from windows

At the moment the Docker image depends on the windowsservercore base image. I'll try to move it to nanoserver to reduce download size for Windows 10 users.

Run it from Mac / Linux

If you have a Linux Docker engine running, just use the exact same command as on Windows. The Docker image stefanscherer/winspector is a multiarch Docker image and Docker will pull the correct OS specific image for you automatically.

docker run --rm stefanscherer/winspector microsoft/iis

run from mac

Inspecting some images

Now let's try winspector and inspect a random Docker image. We could start with the Windows base image itself.

$ docker run --rm stefanscherer/winspector microsoft/windowsservercore

Even for this image it can show you some details:

Image name: microsoft/windowsservercore
Tag: latest
Number of layers: 2
Sizes of layers:
  sha256:3889bb8d808bbae6fa5a33e07... - 4069985900 byte
  sha256:3430754e4d171ead00cf67667... - 913145061 byte
Total size (including Windows base layers): 4983130961 byte
Application size (w/o Windows base layers): 0 byte
Windows base image used:
  microsoft/windowsservercore:10.0.14393.447 full
  microsoft/windowsservercore:10.0.14393.693 update

As you can see the latest windowsservercore image has two layers. The sizes shown here are the download sizes of the compressed layers. The smaller one is the layer that will be replaced by a newer update layer with the next release.

How big is the winspector image?

Now let's have a look at the winspector Windows image to see what winspector can retrieve for you.

$ docker run --rm stefanscherer/winspector stefanscherer/winspector:windows-1.4.3

The (shortened) output looks like this:

Image name: stefanscherer/winspector
Tag: windows-1.4.3
Number of layers: 14
Schema version: 1
Architecture: amd64
Created: 2017-01-15 21:35:22 with Docker 1.13.0-rc7 on windows 10.0.14393.693
Sizes of layers:
  ...

Total size (including Windows base layers): 360497565 byte
Application size (w/o Windows base layers): 27188879 byte
Windows base image used:
  microsoft/nanoserver:10.0.14393.447 full
  microsoft/nanoserver:10.0.14393.693 update
History:
  ...

So the winspector Windows image is about 27 MByte and it uses the latest nanoserver base image.

Inspecting Linux images

And winspector is not restricted to Windows images, you can inspect Linux images as well.

If you run

$ docker run --rm stefanscherer/winspector stefanscherer/winspector:linux-1.4.3

then winspector will show you

Image name: stefanscherer/winspector
Tag: linux-1.4.3
Number of layers: 8
Schema version: 1
Architecture: amd64
Created: 2017-01-15 21:34:21 with Docker 1.12.3 on linux 
Sizes of layers:
  ...
Total size (including Windows base layers): 32708231 byte
Application size (w/o Windows base layers): 32708231 byte
Windows base image used:
  It does not seem to be a Windows image
History:
  ...

As you can see the Linux image is about 32 MByte.

So once you have downloaded the latest Windows base images like windowsservercore or nanoserver the download experience is the same for both platforms.

Conclusion

With winspector you can check for any Windows container image on the Docker Hub which version of Windows it uses.

You can also see how big each image layer is and learn how to optimize commands in your Dockerfile to create smaller Windows images.

The tool is open source on GitHub at github.com/StefanScherer/winspector. It is community driven, so feel free to send me feedback in the form of issues or pull requests.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Keep your Windows Containers up to date]]>

Last year in October, Microsoft released Windows Server 2016 and with it official support for Windows Containers. If you have tried Windows Containers already and built some Windows Container images you may wonder how to implement an update strategy.

How can I install Windows Updates in my container

]]>
https://stefanscherer.github.io/keep-your-windows-containers-up-to-date/5986d4ec688a49000154096bSun, 08 Jan 2017 09:23:21 GMT

Last year in October, Microsoft released Windows Server 2016 and with it official support for Windows Containers. If you have tried Windows Containers already and built some Windows Container images you may wonder how to implement an update strategy.

How can I install Windows Updates in my container image?

Working with containers is not the same as working with real servers or VM's you support for months or years. A container image is a static snapshot of the filesystem (and Windows registry and so on) at a given time.

You won't enter a running container and run the Windows Update there. But how should we do it then?

Container images have layers

First have a look at how a container image is made up. It is not just a snapshot. A container image consists of multiple layers. When you look at your Dockerfile you normally use a line like FROM microsoft/windowsservercore.

Your container image then uses the Windows base image that contains a layer with all the files needed to run Windows containers.

If you have some higher level application you may use other prebuilt container images like FROM microsoft/iis or FROM microsoft/aspnet. These images also re-use microsoft/windowsservercore as their base image.

Windows app image layers

On top of that you build your own application image with your code and content needed to run the application in a self-contained Windows container.

Behind the scenes your application image now uses several layers that will be downloaded from the Docker Hub or any other container registry. The same layers can be re-used by other images. If you build multiple ASP.NET applications as Docker images they will re-use the same layers below.

But now back to our first question: How to apply Windows Updates in a container image?

The Windows base images

Let's have a closer look at the Windows base images. Microsoft provides two base images: windowsservercore and nanoserver. Both base images are updated on a regular basis to roll out all security fixes and bug fixes. You might know that the base image for windowsservercore is about 4 to 5 GByte to download.

So do we have to download the whole base image each time for each update?

If we look closer at how the base images are built, we see that they contain two layers: one big base layer that will be used for a longer period of time, and a smaller update layer that contains only the patched and updated files for the new release.

Windows Server Core updates

So updating to a newer Windows base image version isn't painful as only the update layer must be pulled from the Docker Hub.

But in the long term it does not make sense to stick to the old base layer forever. Security scanners will mark it as vulnerable, and with it all the images that are built from it. And the update layer will increase in size with each new release. So from time to time there is a "breaking" change that replaces the base layer, and a new base layer will be used for upcoming releases. We have seen that with the latest release in December.

Windows Server Core major update

From time to time you will have to download the big new base layer, which is about 4 GByte for windowsservercore (and only about 240 MByte for nanoserver, so try to use nanoserver wherever you can), when you want to use the latest Windows image release.

Keep or update?

Should I avoid updating the Windows image to revision 576 to keep my downloads small? No!

My recommendation is to update all your Windows container images and rebuild them with the newest Windows image. You have to download that bigger base layer only once and all your container images will re-use it.

Perhaps your application code also has some updates you want to ship. It's a good time to ship it on top of the newest Windows base image. So I recommend running

docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver

before you build new Windows container images to have the latest OS base image with all security fixes and bug fixes in it.

If you want to keep track of which version of the Windows image you use, you can use the tags provided for each release.

Instead of using only the latest version in your Dockerfile

FROM microsoft/windowsservercore

you can append the tag

FROM microsoft/windowsservercore:10.0.14393.576

But I still recommend updating the tag after a new Windows image has been published.

You can find the tags for windowsservercore and nanoserver on the Docker Hub.
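
If you want to check which Windows build an image you have already pulled is based on, you can inspect it (a sketch; the OsVersion field is filled for Windows images in recent Docker versions):

docker inspect --format "{{.OsVersion}}" microsoft/windowsservercore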

What about the framework images?

Typically you build your application on top of some kind of framework like ASP.NET, IIS or a runtime language like Node.js, Python and so on. You should have a look at the update cycles of these framework images. The maintainers have to rebuild the framework images after a new release of the Windows base image comes out.

If you see some of your framework images lag behind, encourage the maintainer to update the Windows base image and to rebuild the framework image.

With such updated framework images - they hopefully come with a new version tag - you can rebuild your application.

TL/DR

So your part to get Windows Updates into your Windows Container images is to choose the newer image in your Dockerfile and rebuild your application image with it.

If you haven't used version tags of the image below, do a docker pull ... of that image to make sure you have the updated one before you rebuild.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[How to protect a Windows 2016 Docker engine with TLS]]>

Today I have started a Windows Server 2016 VM with Container support in Azure. This is pretty easy as there is a prebuilt VM with the Docker base images. But I want a secured connection from my laptop to the Windows Docker engine running in Azure.

There is a tutorial

]]>
https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/5986d4ec688a49000154096aSun, 23 Oct 2016 22:35:19 GMT

Today I have started a Windows Server 2016 VM with Container support in Azure. This is pretty easy as there is a prebuilt VM with the Docker base images. But I want a secured connection from my laptop to the Windows Docker engine running in Azure.

There is a tutorial Protect the Docker daemon socket at the website of Docker which uses the openssl tool to create all the certificates etc. But how should we do this on Windows?

Just containerize what's there

I have seen the DockerTLS script in a GitHub repo from Microsoft. But this script installs OpenSSL on my machine, which I don't want.

My first thought was, let's put this script + OpenSSL into a Docker image and run it in a Windows container.

So this Dockerfile was my first attempt: just use Chocolatey to install OpenSSL and download the PowerShell script from the Microsoft GitHub repo. Done. The script can run in a safe environment and I don't have to install software on my Docker host.

DockerTLS

But there is still work to do on the host to configure the Docker engine which I wanted to automate a little more. So it would be great to have a tool that can

  • generate all TLS certs
  • create or update the Docker daemon.json file
  • put the client certs into my home directory

But still we need a program or script with OpenSSL to do that. I thought this tool should be deployed in a Docker image and shared on the Docker Hub. And here it is:

docker run dockertls

dockertls

The script generate-certs.ps1 creates the TLS certs and copies them to the folders that would be used on the Docker host. The script would work directly on a Docker host if you have OpenSSL/LibreSSL installed.

The dockertls Docker image is created with this Dockerfile. It installs LibreSSL from OpenBSD (thanks to Michael Friis for that optimization) and copies the PowerShell script inside the image.

You can find the full source code of the dockertls image in my dockerfiles-windows GitHub repo if you want to build the Docker image yourself.

Otherwise you can just use the dockertls Docker image from the Docker Hub.

Dry run

If you don't trust me or my Docker image, you can do a dry run with some temporary folders that the container can copy files into without touching your Docker host.

Just create two folders:

mkdir server
mkdir client\.docker

Now run the Windows container with the environment variables SERVER_NAME and IP_ADDRESSES as well as two volume mounts to write the certs back to the host:

docker run --rm `
  -e SERVER_NAME=$(hostname) `
  -e IP_ADDRESSES=127.0.0.1,192.168.254.123 `
  -v "$(pwd)\server:C:\ProgramData\docker" `
  -v "$(pwd)\client\.docker:C:\Users\ContainerAdministrator\.docker" `
  stefanscherer/dockertls-windows

Afterwards check the folders:

dir server\certs.d
dir server\config
dir client\.docker

You will see that there are three pem files for the server, the daemon.json file as well as three pem files for the client.
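
For orientation, a daemon.json configured for TLS typically looks roughly like this (an illustrative sketch; compare it with the file the container actually generated):

{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"],
  "tlsverify": true,
  "tlscacert": "C:\\ProgramData\\docker\\certs.d\\ca.pem",
  "tlscert": "C:\\ProgramData\\docker\\certs.d\\server-cert.pem",
  "tlskey": "C:\\ProgramData\\docker\\certs.d\\server-key.pem"
}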

Of course you could manually copy the files and try them out. But this Docker image can do this for you as well.

Full run

You may have to create the .docker folder in your home directory.

mkdir $env:USERPROFILE\.docker

Now run the container with the correct paths on the host so it can copy all certs and configs to the right place. The script can read an existing daemon.json and update it to keep all other configuration untouched.

docker run --rm `
  -e SERVER_NAME=$(hostname) `
  -e IP_ADDRESSES=127.0.0.1,192.168.254.123 `
  -v "C:\ProgramData\docker:C:\ProgramData\docker" `
  -v "$env:USERPROFILE\.docker:C:\Users\ContainerAdministrator\.docker" `
  stefanscherer/dockertls-windows

Now you have to restart the Docker service in an administrator shell with

restart-service docker

One last step is needed on your host. You have to open port 2376 in your firewall so you can access the machine from the outside. But then you're done on your host.
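
A sketch for opening the port with the built-in NetSecurity cmdlets (the rule name is arbitrary):

New-NetFirewallRule -DisplayName "Docker TLS" `
  -Direction Inbound -Protocol TCP -LocalPort 2376 -Action Allow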

You can recreate the TLS certs with the same command and just restart the Docker service afterwards.

Test TLS connection

Now test the connection to the TLS secured Docker service with

docker --tlsverify `
  --tlscacert=$env:USERPROFILE\.docker\ca.pem `
  --tlscert=$env:USERPROFILE\.docker\cert.pem `
  --tlskey=$env:USERPROFILE\.docker\key.pem `
  -H=tcp://127.0.0.1:2376 version

Or just set some environment variables

$env:DOCKER_HOST="tcp://127.0.0.1:2376"
$env:DOCKER_TLS_VERIFY="1"
docker version
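
The Docker client looks for the pem files in your .docker folder by default. If you keep them elsewhere, you can point the client at them with the standard DOCKER_CERT_PATH variable:

$env:DOCKER_CERT_PATH="$env:USERPROFILE\.docker"
docker version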

Azure

In an Azure VM you should use the DNS name of the VM in the SERVER_NAME environment variable and the public and local IP addresses of that machine in IP_ADDRESSES.

docker-run

You have to open the firewall port 2376 on your Windows Docker host.

For Azure you also have to add an incoming rule for port 2376 in your network security group.

Then you have to securely transfer the three client pem files from your Azure VM to your laptop.

I've done that on my old Windows 10 machine, which is only a 32-bit machine, but I can still work with the Windows 2016 Docker engine running in Azure.

docker-version

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Run Linux and Windows Containers on Windows 10]]>

At DockerCon 2016 in Seattle Docker announced the public beta of Docker for Windows. With this you can work with Docker running Linux containers in a very easy way on Windows 10 Pro with Hyper-V installed. In the meantime there is a stable version and a beta channel to retrieve

]]>
https://stefanscherer.github.io/run-linux-and-windows-containers-on-windows-10/5986d4ec688a490001540968Sat, 24 Sep 2016 12:55:29 GMT

At DockerCon 2016 in Seattle Docker announced the public beta of Docker for Windows. With this you can work with Docker running Linux containers in a very easy way on Windows 10 Pro with Hyper-V installed. In the meantime there is a stable version and a beta channel to retrieve newer versions.

And Microsoft has added the Containers feature in the Windows 10 Anniversary Update. With some installation steps you are able to run Windows Hyper-V Containers on your Windows 10 machine.

But there is a little bit of confusion about which sort of containers can be started with each of the two installations. And you can't run both Docker engines side-by-side without some adjustments.

This is because each of the installations uses the same default named pipe //./pipe/docker_engine, causing one of the engines to fail to start.

Beta 26 to rule them all

Beginning with the Docker for Windows Beta 26 there is a very easy approach to solve this confusion. You only have to install Docker for Windows with the MSI installer. There is a new menu item in the Docker tray icon to switch between Linux and Windows containers.

switching

As you can see in the video you don't have to change environment variables or use the -H option of the Docker client to talk to the other Docker engine.

So if you download Docker for Windows beta or switch to the beta channel in your installation you can try this out yourself.

The installer will activate the Containers feature if you haven't done that yet. A reboot is required to add this feature.

From now on you can easily switch with the menu item in the tray icon.

There is also a command line tool to switch the engine. In a PowerShell window you can type

& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon

and it switches from Linux to Windows or vice versa. Take care to type the option exactly as shown here, as it is case-sensitive.

Proxy for the rescue

But how does the switching work without the need to use another named pipe or socket from the Docker client?

The answer is that there is a proxy process com.docker.proxy.exe running which listens on the default named pipe //./pipe/docker_engine.

If you switch from Linux to Windows, the Windows Docker engine dockerd.exe will be started for you, listening on another named pipe //./pipe/docker_engine_windows, and a newly started proxy process redirects to it.

Under the hood

I have installed the Sysinternals Process Monitor tool to learn what happens while switching from Linux to Windows containers. With the Process Tree function you can see a timeline with green bars showing when each process started or exited.

The following screenshot shows the processes before and after the switch. I switched at about the middle of the green bar.

The current com.docker.proxy.exe (above dockerd.exe in the list) that talked to the MobyLinuxVM exits, as the dark green bar highlights.

The dockerd.exe Windows Docker engine is started, as well as a new com.docker.proxy.exe (below dockerd.exe) which talks to the Windows Docker engine.

So just after the switch you still can use the docker.exe Client or your Docker integration in your favorite editor or IDE without any environment changes.

Running both container worlds in parallel

The proxy process just switches the connection to the Docker engine. After such a switch both the Linux and the Windows Docker engines are running.

Run a Linux web server

To try this out we first switch back to the Linux containers. Now we run the default nginx web server on port 80

docker run -p 80:80 -d nginx

then switch to the Windows containers with

& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon

docker-run-nginx

Now let's run some Windows containers. But first we check whether the Linux container is still running and reachable with

start http://localhost

With the start command you open Edge with the welcome page of the nginx running in a Linux container

nginx

Yes, the Linux container is still running.

Build a Windows web server

On Windows 10 you can only run Nanoserver containers. There is no IIS docker image for Nanoserver. Ignite update: You can run Nanoserver AND windowsservercore containers on Windows 10.

But to demo how simple nanoserver containers can be, I'll keep the following sample as it is. So we create our own small Node.js web server. First we write the simple web server app

notepad app.js

Enter this code as the mini web server in the file app.js and save the file.

var http = require('http');
var port = 81;

function handleRequest(req, res) {
  res.end('Hello from Windows container, path = ' + req.url);
}

var server = http.createServer(handleRequest);

server.listen(port);

Now we build a Windows Docker image with that application. We open another editor to create the Dockerfile with this command

notepad Dockerfile.

Enter this as the Dockerfile. As you can see only the FROM line is different from a typical Linux Dockerfile. This one uses a Windows base image from the Docker Hub.

FROM stefanscherer/node-windows:6.7.0-nano

COPY app.js app.js

CMD [ "node", "app.js" ]

Save the file and build the Docker image with the usual command

docker build -t webserver .

Run the Windows web server as a Docker container with

docker run -p 81:81 -d webserver

docker-run-webserver

At the moment you can't connect to the container directly via 127.0.0.1. But it is possible to use the IP address of the container. We need the ID or name of the container, so list the running containers with

docker ps

Then open the browser with the container's IP address:

start http://$(docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" grave_thompson):81

docker-inspect

Additionally the port forwarding from the host to the container allows you to contact the web server on port 81 from another machine.
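
From a second machine this can be tested with a plain HTTP request (a sketch; replace the placeholder with the IP address of your Windows 10 host):

curl http://<windows10-host>:81/hello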

curl-to-windows-10

And yes, the Windows container is also handling requests.

Conclusion

The new Docker for Windows beta combines the two container worlds and simplifies building Docker images for both Linux and Windows, making a Windows 10 machine a good development platform for both.

And with a little awareness of when to switch to the right Docker engine, both Linux and Windows containers can run side-by-side.

Please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Adding Hyper-V support to 2016 TP5 Docker VM]]>

Back in June I attended DockerCon in Seattle. Besides lots of new features in Docker 1.12 we heard about Windows Server and Docker and upcoming features in the Windows Docker engine.

Another highlight for me after the conference was a visit at the Microsoft Campus in Redmond

]]>
https://stefanscherer.github.io/adding-hyper-v-support-to-2016-tp5-docker-vm/5986d4ec688a490001540969Thu, 04 Aug 2016 19:59:37 GMT

Back in June I attended DockerCon in Seattle. Besides lots of new features in Docker 1.12 we heard about Windows Server and Docker and upcoming features in the Windows Docker engine.

Another highlight for me after the conference was a visit to the Microsoft Campus in Redmond to meet the Windows Container team around Taylor Brown. After a meeting and lunch we talked about making my Packer template for a Windows Server 2016 TP5 Docker VM work with Hyper-V. At that time my Packer template supported only VirtualBox and VMware, with a blog post describing how to build it.

So Patrick Lang from Microsoft and I started to have a look at the pull request mitchellh/packer#2576 by Taliesin Sisson that adds a Hyper-V builder to Packer. After a couple of days (I was already back in Germany, so we worked across time zones) we improved the template through GitHub and finally got it working.

packer build, vagrant up

If you haven't heard about Packer and Vagrant let me explain it with the following diagram. If you want to create a VM from an ISO file you normally click through your hypervisor UI and then follow the installation steps inside the VM.

packer build, vagrant up

With Packer you can automate that step of building a VM from an ISO file: put all the steps into a Packer template and then just share the template so others can run

packer build template.json

In our case the output is a Vagrant box. That is a compressed VM ready to be used with the next tool - Vagrant. Vagrant takes a box, creates a copy of it and turns it on, so you can work again and again with the same predefined VM that was built by Packer. You want to turn your VM on? Just type

vagrant up

You want to stop the VM after work? Just type

vagrant halt

You want to try something out and want to undo all that to start over with the clean state. Just destroy it and start it again.

vagrant destroy
vagrant up

There are many more commands, and even snapshots can be used. The advantage is that you don't have to know all the buttons in your hypervisor. Both Packer and Vagrant are available for Windows, Mac and Linux and also support multiple hypervisors and even cloud providers.
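
Snapshots, for example, look like this (a sketch; the snapshot name clean is arbitrary, and the snapshot commands are available since Vagrant 1.8):

vagrant snapshot save clean
vagrant snapshot restore clean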

So you only have to learn one or both of these tools and you're done if you have to work with VM's.

Adding Hyper-V builder

The Packer template for a VM has one or more builder sections. The Hyper-V section looks like this and contains the typical steps

  • Adding files for a virtual floppy for the first boot
  • Defining disk size, memory and CPU's
  • How to log in to the VM

    {
      "vm_name":"WindowsServer2016TP5Docker",
      "type": "hyperv-iso",
      "disk_size": 41440,
      "boot_wait": "0s",
      "headless": false,
      "guest_additions_mode":"disable",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum_type": "{{user `iso_checksum_type`}}",
      "iso_checksum": "{{user `iso_checksum`}}",
      "floppy_files": [
        "./answer_files/2016/Autounattend.xml",
        "./floppy/WindowsPowershell.lnk",
        "./floppy/PinTo10.exe",
        "./scripts/disable-winrm.ps1",
        "./scripts/docker/enable-winrm.ps1",
        "./scripts/microsoft-updates.bat",
        "./scripts/win-updates.ps1"
      ],
      "communicator":"winrm",
      "winrm_username": "vagrant",
      "winrm_password": "vagrant",
      "winrm_timeout" : "4h",
      "shutdown_command": "shutdown /s /t 10 /f /d p:4:1 /c \"Packer Shutdown\"",
      "ram_size_mb": 2048,
      "cpu": 2,
      "switch_name":"{{user `hyperv_switchname`}}",
      "enable_secure_boot":true
    },

Packer can also download ISO files from a download link to make automation very easy.

The installation of a Windows Server 2016 VM can be automated with an Autounattend.xml file. This file contains the information to set up the Windows VM until the WinRM service is up and running and Packer can log in from the host machine to run further provisioning scripts to set up the VM with additional installations.

In case of the Windows Server 2016 TP5 Docker VM we additionally install Docker 1.12 and pull the Windows base OS docker images into the VM.

All these steps defined in the Packer template build a good Vagrant box with Docker preinstalled and the base docker images already pulled, as it takes some time to download them the first time.

So after a vagrant destroy you still have the Windows OS docker images installed and can work with a clean installation again. Only from time to time when there is a new OS docker image version you have to rebuild your Vagrant box with Packer.

Build the Hyper-V Vagrant box

To build the Vagrant box locally on a Windows 10 machine you only need the Hyper-V feature activated and you need a special version of packer.exe (notice: with choco install packer you only get the upstream packer where the hyperv builder is not integrated yet). The packer.exe with hyperv builder can be downloaded at https://dl.bintray.com/taliesins/Packer/.

Clone my packer template from GitHub and build it with these commands:

git clone https://github.com/StefanScherer/packer-windows
cd packer-windows
packer build --only=hyperv-iso windows_2016_docker.json

This will take some time downloading and caching the ISO file, booting, installing the software and pulling the first Docker images.

Share Vagrant boxes with Atlas

Another advantage of Vagrant is that you can share Vagrant base boxes through Atlas, a service by HashiCorp. So only one person has to run Packer to build the Vagrant box and can provide it for other team members or the community.

packer atlas vagrant

Others can create a Vagrantfile with the box name of one of the provided Vagrant boxes. That name will be used at the first vagrant up to download the correct Vagrant box for the hypervisor to be used.

Even Microsoft has its first Vagrant box at Atlas, which can be used with VirtualBox only at the moment. But it is only a matter of time until more Hyper-V based Vagrant boxes show up in Atlas, as well as boxes for other hypervisors.

If you don't have a Vagrantfile you can even create a simple one to start a new test environment with two commands and a suitable Vagrant box from Atlas.

vagrant init Microsoft/EdgeOnWindows10
vagrant up --provider virtualbox

Vagrant itself can log into the VM through WinRM and run further provisioning scripts to set up a good development or test environment. It is just a decision what to install in a Vagrant box with Packer and what to install with Vagrant afterwards. You decide how much flexibility you want, or whether you prefer a faster vagrant up experience with a fully provisioned Vagrant box that was built once with a longer-running Packer build.

docker-windows-box

If you are looking for a test environment for Windows Docker containers you might have a look at my docker-windows-box GitHub repo that installs Git and some additional Docker tools to get started working on some Windows Dockerfiles.

docker windows box

Conclusion

I'm happy that there is a Hyper-V builder for Packer that really works. Vagrant already has a Hyper-V provider built in so you can have the same experience running and working with VM's as others have with VMware or VirtualBox.

With such a TP5 Vagrant box it is very easy to get in touch with Windows Docker containers, regardless of whether you are working on Windows 10 with Hyper-V or from your Mac or Linux machine with another hypervisor.

Packer multiprovider

Please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>