Stefan Scherer's Blog (https://stefanscherer.github.io/)

PoC: How to build images for 1709 without 1709

https://stefanscherer.github.io/poc-build-images-for-1709-without-1709/ (Tue, 31 Oct 2017 23:55:00 GMT)

First of all: Happy Halloween! In this blog post you'll see some spooky things - or magic? Anyway I found a way to build Windows Docker images based on the new 1709 images without running on 1709. Sounds weird?

Disclaimer: The tools and described workflow to build such images on old Windows Server versions may break at any time. It works for me and some special cases, but it does not mean it works for any other use-case.

The 2016 <-> 1709 gap

As you might know from my previous blog post there is a gap between the old and new Windows images. You cannot pull the new 1709 Docker images on current Windows Server 2016. This means you also cannot build images without updating your build machines to Windows Server 1709.

AppVeyor

My favorite CI service for Windows is AppVeyor. They provide a Windows Server 2016 build agent with Docker and the latest base images installed. So it is very simple and convenient to build your Windows Docker images there. For example all my dockerfiles-windows Dockerfiles are built there and the images are pushed to Docker Hub.

I guess it will take a while until we can choose another build agent to start building for 1709 there.

But what should I do in the meantime?

  • Should I build all 1709 images manually on a local VM?
  • Or spin up a VM in Azure? That has been possible since today.

But then I don't have the nice GitHub integration. And I have to do all the maintenance of a CI server (cleaning up disk space and so on) myself. Oh I don't want to go that way.

Docker images have layers

Let's take a closer look at what a Docker image looks like. Each Docker image consists of one or more layers. Each layer is read-only. Any change is made in a new layer on top of the underlying ones.

For example the Windows Docker image of a Node.js application looks more or less like this:

[Image: layers of a Windows Docker image]

At the bottom you find the Windows base image, then we add the Node.js runtime. Then we can add our application code on top of that. This is how a Dockerfile works. Every FROM, RUN, ... is an extra layer.

Technically all layers are just tarballs containing files and directories. So if the application and framework layers are independent of the OS layers, it should be possible to rearrange them on top of a new OS layer.
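You can see these layers with the docker history command. The output below is only a rough illustration (IDs, commands and sizes will differ), but it shows the idea:

$ docker history stefanscherer/hello-freiburg:windows
IMAGE          CREATED        CREATED BY                        SIZE
a1b2c3d4e5f6   2 weeks ago    ... CMD [...]                     ...
b2c3d4e5f6a7   2 weeks ago    ... COPY file: ...                ...
<missing>      ...            Install update 10.0.14393.1715    ...
<missing>      ...            Apply image 10.0.14393.0          ...

The two <missing> lines at the bottom are the Windows base and update layers. The idea of rebasing is to keep the layers above them and exchange these two.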

Rebase Docker image

That is what I have tried to find out. I studied the Docker Hub API and wrote a proof of concept to "rebase" a given Windows Docker image to swap the old Windows OS layers with another one.

The tool works only with information from Docker Hub so it only retrieves metadata and pushes a new manifest back to the Docker Hub. This avoids downloading hundreds of megabytes for the old nanoserver images.
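To get an idea of what such a manifest looks like, you can inspect one with a Docker CLI that already has the experimental docker manifest command (mentioned again later in this post). The output here is shortened and the digests are just placeholders:

$ docker manifest inspect microsoft/nanoserver:1709
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": { "digest": "sha256:...", ... },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
      "size": ...,
      "digest": "sha256:..."
    }
  ]
}

Rebasing then means replacing the layer digests that belong to the old base image with the digests of the new base image and pushing the result as a new manifest.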

Use cases

  • Easily apply Windows Updates to an existing Windows app in seconds. Only the update layer will be swapped.
  • Provide your app for all available Windows Update layers to avoid download.
  • Sync multiple images that are based on different Windows Update layers to the current one, to avoid downloading several different update layers for a multi-container application.
  • Create images for Server 1709 without having a machine for it.

Limits

  • You cannot move an app from a windowsservercore image to the nanoserver image.
  • You also cannot move PowerShell scripts into the 1709 nanoserver image as there is no PowerShell installed.
  • Your framework or application layer may have modified the Windows registry at build time. It then carries the old registry entries, which may not fit the new base layer.
  • Moving such old application layers on top of new base layers is some kind of time travel. Be warned that this tool may create corrupt images.

You can find the rebase-docker-image tool on GitHub. It is a Node.js command line tool which is also available on NPM.

The usage looks like this:

$ rebase-docker-image \
    stefanscherer/hello-freiburg:windows \
    -t stefanscherer/hello-freiburg:1709 \
    -b microsoft/nanoserver:1709

You specify the existing image, e.g. "stefanscherer/hello-freiburg:windows", which is based on nanoserver 10.0.14393.x.

With the -t option you specify the target image name to which the final manifest should be pushed.

The -b option specifies the base image you want to use, so most of the time the "microsoft/nanoserver:1709" image.

[Image: how rebase-docker-image swaps the base layers]

When we run the tool it does its job in only a few seconds:

Retrieving information about source image stefanscherer/hello-freiburg:windows
Retrieving information about source base image microsoft/nanoserver:10.0.14393.1715
Retrieving information about target base image microsoft/nanoserver:1709
Rebasing image
Pushing target image stefanscherer/hello-freiburg:1709
Done.

Now on a Windows Server 1709 we can run the application.

[Screenshot: hello-freiburg running on Windows Server 1709]

I tried this tool with some other Windows Docker images and was able to rebase the golang:1.9-nanoserver image to have a Golang build environment for 1709 without rebuilding the Golang image by myself.

But I also found situations where the rebase didn't work, so don't expect it to work everywhere.

AppVeyor CI pipeline

I also want to show you a small CI pipeline using AppVeyor to build a Windows image with curl.exe installed and provide two variants of that Docker image, one for the old nanoserver and one with the nanoserver:1709 image.

The Dockerfile uses a multi-stage build. In the first stage we download and extract curl and its DLLs. The second stage starts again with the empty nanoserver (the fat one for Windows Server 2016) and then we just COPY deploy the binary into the fresh image. An ENTRYPOINT finishes the final image.

FROM microsoft/nanoserver AS download
ENV CURL_VERSION 7.56.1
WORKDIR /curl
ADD https://skanthak.homepage.t-online.de/download/curl-$CURL_VERSION.cab curl.cab
RUN expand /R curl.cab /F:* .

FROM microsoft/nanoserver
COPY --from=download /curl/AMD64/ /
COPY --from=download /curl/CURL.LIC /
ENTRYPOINT ["curl.exe"]

This image can be built on AppVeyor and pushed to the Docker Hub.

The push.ps1 script pushes this image to Docker Hub.

docker push stefanscherer/curl-windows:$version-2016

Then the rebase tool will be installed and the 1709 variant will be pushed as well to Docker Hub.

npm install -g rebase-docker-image
rebase-docker-image `
  stefanscherer/curl-windows:$version-2016 `
  -t stefanscherer/curl-windows:$version-1709 `
  -b microsoft/nanoserver:1709

To provide my users the best experience I also draft a manifest list, just like we did for multi-arch images at the Captains Hack day. The final "tag" then contains both Windows OS variants.

On Windows you can use Chocolatey to install the manifest-tool. In the future this feature will be integrated into the Docker CLI as "docker manifest" command.

choco install -y manifest-tool
manifest-tool push from-spec manifest.yml

The manifest.yml lists both images and joins them together to the final stefanscherer/curl-windows image.

image: stefanscherer/curl-windows:7.56.1
tags: ['7.56', '7', 'latest']
manifests:
  -
    image: stefanscherer/curl-windows:7.56.1-2016
    platform:
      architecture: amd64
      os: windows
  -
    image: stefanscherer/curl-windows:7.56.1-1709
    platform:
      architecture: amd64
      os: windows

So on both Windows Server 2016 and Windows Server 1709 the users can run the same image and it will work.

PS C:\Users\demo> docker run stefanscherer/curl-windows
curl: try 'curl --help' or 'curl --manual' for more information

This requires the upcoming Docker 17.10 EE version to work correctly, but it should be available soon. Older Docker engines may pick the wrong image from the manifest list and fail to run it.

Conclusion

This way to "rebase" Docker images works astonishingly good, but keep in mind that this is not a general purpose solution. It is always better to use the correct version on the host to rebuild your Docker images the official way.

Please use the comments below if you have further questions, or share what you think about this idea.

Stefan
@stefscherer

A closer look at Docker on Windows Server 1709

https://stefanscherer.github.io/docker-on-windows-server-1709/ (Tue, 31 Oct 2017 23:18:14 GMT)

Today Microsoft has released Windows Server 1709 in Azure. The ISO file is also available in the MSDN subscription to build local VMs. But spinning up a cloud VM makes it easier for more people.

So let's go to Azure and create a new machine. The interesting VM for me is "Windows Server, version 1709 with Containers" as it comes with Docker preinstalled.

[Screenshot: Azure search for "1709"]

After a few minutes you can RDP into the machine. But watch out, it is only a Windows Server Core, so there is no full desktop. But for a Docker host this is good enough.

Docker 17.06.1 EE preinstalled

As you can see the VM comes with the latest Docker 17.06.1 EE and the new 1709 base images installed.
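Right after logging in you can check what is preinstalled. The exact output will differ; this is just a sketch of what you should see on such a VM:

PS C:\Users\demo> docker version
Client:
 Version:      17.06.1-ee
 OS/Arch:      windows/amd64
Server:
 Version:      17.06.1-ee
 OS/Arch:      windows/amd64

PS C:\Users\demo> docker images
REPOSITORY                    TAG     IMAGE ID   CREATED   SIZE
microsoft/windowsservercore   1709    ...        ...       ...
microsoft/nanoserver          1709    ...        ...       ...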

Smaller "1709" base images

One great piece of news is that the base images got smaller. For comparison, here are the images of a Windows Server 2016:

Windows Server 2016 images

So with Windows Server 1709 the WindowsServerCore image is only half the size of the original, and the NanoServer image is about a quarter of the size, with only 93 MB on the Docker Hub.

[Screenshot: microsoft/nanoserver tags on the Docker Hub]

That makes the NanoServer image really attractive for deploying modern microservices with it. As you can see, the "latest" tag still points to the old image. As the 1709 release is a semi-annual release supported for the next 18 months, and the current Windows Server 2016 is the LTS version, the latest tags still point to the older, bigger images.

So when you want to go small, then use the "1709" tags:

  • microsoft/windowsservercore:1709
  • microsoft/nanoserver:1709
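For example, on a 1709 host you can pull them explicitly like this:

docker pull microsoft/windowsservercore:1709
docker pull microsoft/nanoserver:1709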

Where is PowerShell?

The small size of the NanoServer image comes with a cost: There is no PowerShell installed inside the NanoServer image.

So is that really a problem?

Yes and no. Some people have started to write Dockerfiles and installed software using PowerShell in the RUN instructions. This will be a breaking change.

The good news is that there will be a PowerShell Docker image based on the small nanoserver:

[Screenshot: microsoft/powershell on the Docker Hub]

Currently there is PowerShell 6.0.0 Beta 9 available and you can run it with

docker run -it microsoft/powershell:nanoserver

As you can see PowerShell takes 53 MB on top of the 93 MB nanoserver.

So if you really want to deploy your software with PowerShell, then you might use this base image in your FROM instruction.

But if you deploy a Golang, Node.js, .NET Core application you probably don't need PowerShell.

My experience with Windows Dockerfiles is that the common tasks are

  • downloading a file, zip, tarball from the internet
  • extracting the archive
  • Setting an environment variable like PATH

These steps could be done with tools like curl (yes, I mean the real one and not the curl alias in PowerShell :-)) and some other tools like unzip, tar, ... that are way smaller than the complete PowerShell runtime.
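Just as a sketch, such a download-and-extract sequence in cmd.exe could look like this. The Node.js version and URL are only an example, and the curl/unzip tools are assumed to be present in the image:

curl -fSLo node.zip https://nodejs.org/dist/v8.7.0/node-v8.7.0-win-x64.zip
unzip node.zip
setx PATH "%PATH%;C:\node-v8.7.0-win-x64"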

I did a small proof of concept to put some of the tools mentioned into a NanoServer image. You can find the Dockerfile and others in my dockerfiles-windows GitHub repo.

[Screenshot: busybox-windows image on the Docker Hub]

As you can see it only takes about 2 MB to add download and extraction tools. The remaining cmd.exe in the NanoServer image is still good enough to run these tools in the RUN instructions of a Dockerfile.

Multi-stage builds

Another approach to build small images based on NanoServer comes with Docker 17.06. You can use multi-stage builds which brings you so much power and flexibility into a Dockerfile.

You can start with a bigger image, for example the PowerShell image or even the much bigger WindowsServerCore image. In that stage of the Dockerfile you have all the scripting languages, build tools or MSI support you need.

The final stage then uses the smallest NanoServer image and COPY deploys the build artifacts into your production image.

Can I use my old images on Server 1709?

Well, it depends. Let's test this with a popular application from portainer.io. When we try to run the application on a Windows Server 1709 we get the following error message: The operating system of the container does not match the operating system of the host.

[Screenshot: portainer image failing on Windows Server 1709]

We can make it work when we run the old container with Hyper-V isolation:

[Screenshot: portainer running with Hyper-V isolation]

For Hyper-V isolation we need Hyper-V installed. This works in Azure with the v3 machines that allow nested virtualization. If you are using Windows 10 1709 with Hyper-V then you can also run old images in Docker for Windows.
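As a sketch (the image name is only an example of an older, 2016-based image), running such a container with Hyper-V isolation looks like this:

docker run -d -p 9000:9000 --isolation=hyperv portainer/portainer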

But there are many other situations where you are out of luck:

  • other cloud providers that do not offer nested virtualization
  • VirtualBox

So my recommendation is to create new Docker images based on 1709 that can be used with Windows 10 1709, or Windows Server 1709 even without Hyper-V. Another advantage is that your users have much smaller downloads and can run your apps much faster.

Can I use the 1709 images on Server 2016?

No. If you try to run one of the 1709 based images on a Windows Server 2016 you see the following error message. Even running it with --isolation=hyperv does not help here, as the underlying host compute layer does not have all the new features needed.

[Screenshot: 1709-based image failing on Windows Server 2016]

Conclusion

With Docker on Windows Server 1709 the container images get much smaller. Your downloads are faster and smaller, the containers start faster. If you're interested in Windows Containers then you should switch over to the new server version. The upcoming Linux Containers on Windows feature will run only on Windows 10 1709/Windows Server 1709 and above.

As a software vendor providing Windows Docker images you should provide both variants, as people still use Windows 10 and Windows Server 2016 LTS. In a following blog post I'll show a way that makes it easy for your users to just run your container image regardless of the host operating system they have.

I hope you are as excited as I am about the new features of the new Windows Server 1709. If you have questions feel free to drop a comment below.

Stefan
@stefscherer

Cross-build a Node.js app with Docker and deploy to IBM Cloud

https://stefanscherer.github.io/cross-build-nodejs-with-docker/ (Mon, 30 Oct 2017 22:37:03 GMT)

After the DockerCon EU and the Moby Summit in Copenhagen last week we also had an additional Docker Captain's Hack Day. After introducing our current projects to the other Captains we also had time to work together on some ideas.

"Put all Captains available into a room, feed them well and see what's happening."

[Photo: Docker Captains Hack Day]

Modernizing Swarm Visualizer

One of the ideas was Swarm Visualizer 2.0. Michael Irwin came up with the idea to rewrite the current Visualizer to be event driven, use a modern React framework and clean up the code base.

The old one uses a dark theme and shows lots of details for the services with small fonts.

old swarm visualizer

Here's a screenshot of an early version of the new UI. With a click on one of the tasks you get more details about that task and its service. All information is updated immediately when you update the service (e.g. add or remove labels).

new swarm visualizer

You can try this new Swarm visualizer yourself with the following command:

docker container run \
  --name swarm-viz \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mikesir87/swarm-viz

I joined Michael's table as I was curious whether we could have this visualizer for Windows, too. Especially since the new Windows Server 1709 makes mapping the Docker API into a Windows container as easy as on Linux.

In this blog post I focus on how to build a Node.js app with Docker and don't look into the details of the app itself. I'll show how to improve the Dockerfile to build for multiple platforms and finally how to build a CI pipeline for that. You can find the project on github.com/mikesir87/swarm-viz.

Initial Dockerfile

The application is built inside a Docker container. So you can even build it without any developer tools installed, you only need Docker.

Let's have a look at the first version of the Dockerfile for the Linux image. It is a multi-stage build with three stages:

# Build frontend
FROM node:8.7-alpine as frontend
WORKDIR /app
COPY client/package.json .
RUN npm install
COPY client/ .
RUN npm run build

# Build backend
FROM node:8.7-alpine as backend
WORKDIR /app
COPY api/package.json .
RUN npm install
COPY api/ .
RUN npm run build

# Put them together
FROM node:8.7-alpine
EXPOSE 3000
WORKDIR /app
COPY api/package.json .
RUN npm install --production
COPY --from=backend /app/dist /app/dist
COPY --from=frontend /app/build /app/build
CMD node /app/dist/index.js

The first stage uses FROM node:8.7-alpine to build the frontend in a container.

The second stage builds the backend in another Alpine container. During that build you also need some development dependencies that aren't needed for the final image.

In the third stage only the dependencies that are relevant at runtime are installed with npm install --production. All artifacts needed from the other stages are also copied into the final image.
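With that Dockerfile in place, the whole Linux image can be built with a single command; no local Node.js installation is needed:

docker image build -t mikesir87/swarm-viz .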

Make FROM more flexible for Windows

I tried to build the app for Windows Server 1709 and had to create a second Dockerfile, as I had to use another FROM because node does not have a Windows variant in the official images. And Windows Server 1709 had just come out, so I had to create a Node.js base image for Windows myself.

So what I did was copying the Dockerfile to Dockerfile.1709 and changed all the

FROM node:8.7-alpine

lines into

FROM stefanscherer/node-windows:1709

But now we have duplicated the Dockerfile "code" for only this little difference.

Fortunately you now can use build arguments for the FROM instruction. So with only a little change we can have ONE Dockerfile for Linux and Windows.

ARG node=node:8.7-alpine
FROM $node as frontend

[Image: diff adding the node build argument]

On Linux you still can build the image as before without any change.

On Windows I now was able to use this Dockerfile with

docker image build -t viz `
  --build-arg node=stefanscherer/node-windows:1709 .

and use a Windows Node.js base image for all stages. First pull request done. Check! 😊

And running the manually built image in Windows Server 1709 looks very similar to Linux:

docker container run `
  -p 3000:3000 `
  -u ContainerAdministrator `
  -v //./pipe/docker_engine://./pipe/docker_engine `
  viz

Going multi-arch

We showed the Windows Swarm visualizer to other Captains and we discussed how to go to more platforms. Phil Estes, a very active member of the Docker community who's helping push the multi-architecture support in Docker forward and the maintainer of the manifest-tool, commented:

With Golang it is easy to build multi-arch images, just cross-build a static binary with GOARCH=bar go build app.go and copy the binary in an empty FROM scratch image.

Hm, we use Node.js here, so what has to be done instead?

[Photo: Captains working together at the Hack Day]

Well, instead of the scratch image we need the node image for the Node.js runtime. So we had to choose the desired architecture and then copy all sources and dependencies into that image.

Our Node.js application uses Express, Dockerode and some other dependencies, that are platform independent. So this simple copy approach should do it, we thought.

We added another build stage in the Dockerfile where we switch to the desired platform. You may know, the node image on Docker Hub is already a multi-arch image. But in this case we want to build - let's say on Linux/amd64 - for another platform like the IBM s390 mainframe.

With another build argument to specify the target platform for the final stage we came up with this:

ARG node=node:8.7-alpine
ARG target=node:8.7-alpine

FROM $node as frontend
...

FROM $target
EXPOSE 3000
COPY --from=proddeps /app /app
CMD node /app/dist/index.js

[Image: diff adding the target build argument]

As Phil works for IBM he could easily verify our approach. We built an IBM version of the Swarm visualizer with

docker image build -t mikesir87/swarm-viz \
  --build-arg target=s390x/node:8.7 .

and pushed it to the Docker Hub. Phil then pulled and started the container in IBM Cloud and showed us the visualizer UI. Hurray!

[Photo: the visualizer deployed to IBM Cloud]

The second pull request was accepted. Check! 🎉

Now we needed some more automation to build and push the Docker images.

Adding a multi-arch CI pipeline

I've done that several times for my Raspberry Pi projects, so I cherry-picked the relevant parts from other repos. For the CI pipeline we chose Travis CI, but any other CI cloud service that supports multi-stage builds could be used.

The .travis.yml uses a matrix build for all architectures. Currently we're building it for only two platforms:

sudo: required

services:
 - docker

env:
  matrix:
    - ARCH=amd64
    - ARCH=s390x

script:
  - ./travis-build.sh

build

The travis-build.sh is then called for each architecture of that matrix and runs the corresponding build.

docker image build -t mikesir87/swarm-viz \
    --build-arg target=$ARCH/node:8.7 .

deploy

As a final step in the .travis.yml we push every image to Docker Hub and tag it with the Git commit id. At this early stage of the project this is good enough. Later on you can think of tagged release builds etc.

The travis-deploy.sh pushes the Docker image for each architecture to the Docker Hub with a different tag using the $ARCH variable we get from the matrix build.

docker image push "$image:linux-$ARCH-$TRAVIS_COMMIT"

In the amd64 build we additionally download and use the manifest-tool to push a manifest list with the final tag.

manifest-tool push from-args \
    --platforms linux/amd64,linux/s390x \
    --template "$image:OS-ARCH-$TRAVIS_COMMIT" \
    --target "$image:latest"

You can verify that the latest tag is already a manifest list with another Docker image provided by Phil

$ docker container run --rm mplatform/mquery mikesir87/swarm-viz
Image: mikesir87/swarm-viz:latest
 * Manifest List: Yes
 * Supported platforms:
   - amd64/linux
   - s390x/linux

Future improvements

In the near future we will also add a Windows build using AppVeyor CI to provide Windows images and also put them into the manifest list. This step would also be needed for Golang projects as you cannot use the empty scratch image on Windows.

[Image: the multi-arch CI pipeline]

If you look closely you'll see that we have used node:8.7 for the final stage. There is no multi-arch alpine image, so there also is no node:8.7-alpine multi-arch image. But the maintainers of the official Docker images are working hard to add this missing piece to have small images for all architectures.

$ docker container run --rm mplatform/mquery node:8.7-alpine
Image: node:8.7-alpine
 * Manifest List: Yes
 * Supported platforms:
   - amd64/linux

Conclusion

At the end of the Hack Day we were really excited about how far we had come in only a few hours, and learned that cross-building Node.js apps with Docker and deploying them as multi-arch Docker images isn't that hard.

Best of all, the users of your Docker images don't have to think about these details. They can just run your image on any platform. Just use the command I showed at the beginning, as it already uses the multi-arch variant of the new Swarm visualizer app.

So give multi-arch a try in your next Node.js project to run your app on any platform!

If you want to learn more about multi-arch (and you want to see Phil with a bow tie) then I can recommend the Docker Multi-arch All the Things talk from DockerCon EU with Phil Estes and Michael Friis.

In my latest multi-arch slide deck there are also more details about the upcoming docker manifest command that will replace the manifest-tool in the future.

Thanks Michael for coming up with that idea, thanks Phil for the manifest-tool and testing the visualizer. Thanks Dieter and Bret for the photos. You can follow us on Twitter to see what these Captains are doing next.

Stefan
@stefscherer

DockerCon: LCOW and Windows Server 1709

https://stefanscherer.github.io/dockercon-lcow-and-windows-server-1709/ (Sat, 28 Oct 2017 00:18:11 GMT)

Last week was a busy week as a Docker Captain. Many of us came to Copenhagen to DockerCon EU 2017. You may have heard of the surprising news about Kubernetes coming to Docker. But there were also some other new announcements about Windows Containers.

Docker on Windows Workshop

On Monday I helped Elton Stoneman in his Docker on Windows Workshop. This time it was a full-day workshop and it was fully packed with 50 people.

Docker on Windows Workshop

Elton always runs the workshop at a rapid pace, but don't worry, the workshop material is all publicly available on GitHub. So we went through dockerizing ASP.NET apps, adding Prometheus, Grafana and an ELK stack for monitoring, building a Jenkins CI pipeline and finally running a Docker Swarm. There are lots of things to look up in the material. If you prefer a book, I can recommend his Docker on Windows book which is also fully packed with many tips and tricks.

LCOW - The Inside Story

One of my favorite talks was by John Starks from Microsoft about Linux Containers on Windows - The Inside Story. He explained how LinuxKit is used to run a small Hyper-V VM for the Linux containers to provide the Linux kernel. On his Windows 10 1709 machine he also gave pretty good live demos. The video is online and is worth watching.

Linux and Windows containers side-by-side

In the photo you can see an alpine and a nanoserver container running side-by-side. So you will no longer need to switch between Linux and Windows containers, it just works. He also showed that volumes work between Linux and Windows containers. This demo was done with a special Docker engine, as not all pull requests have been merged yet. But it was still challenging for me to try this on my own machine ...

Windows Server 1709

During the DockerCon week Microsoft has announced the availability of Windows Server Version 1709 for download. I first looked at the Azure Portal, but found nothing yet. I also couldn't find the downloads.

So after the LCOW talk I used a Windows 10 VM in Azure and installed the Fall Creators Update to have 1709 on that desktop machine. I found the missing pull request and compiled a Docker engine from source and then I had my LCOW moment:

[Screenshot: docker run with LCOW on Windows 10 1709]

When you see this working for the first time and know what technical details had to be solved to make it look so simple and easy - awesome!

The next day I found the Windows Server 1709 ISO in my MSDN subscription. So I could start working on a Packer template in my packer-windows GitHub repo to automate the creation of such Windows VMs. But DockerCon is for meeting people and learning new things: Nicholas Dille went another very interesting way to build a VM with Docker without running Windows Setup.

Smaller Windows images

In the last months we could follow the progress of Windows Server in several Insider builds. I blogged about the smaller NanoServer Insider images in July, going down to 80-90 MByte. With the new release of Windows Server 1709 and Windows 10 version 1709 we can now use official images.

  • microsoft/windowsservercore:1709
  • microsoft/nanoserver:1709
  • microsoft/dotnet:2.0.0-*-nanoserver-1709
  • microsoft/aspnet:4.7.1-windowsservercore-1709

The biggest discussion is about having no PowerShell in the small nanoserver image. For me it's a nice fit to just COPY deploy microservices into the Windows image.

I haven't seen an official PowerShell base image based on nanoserver, but there is at least the beta version

  • microsoft/powershell:6.0.0-beta.9-nanoserver-1709

I also have pushed some images to the Docker Hub to get started with other languages and tools.

[Screenshot: Windows 1709 images on the Docker Hub]

If you don't have Hyper-V installed in Windows Server 1709 (maybe you are running a VM in the cloud) then you cannot run older Windows Docker images on the new server. All images have to be built based on the new 1709 base images.

Windows 10 users always use Hyper-V to run Linux or Windows containers, so you don't feel that hard constraint on your developer machine.

It will be interesting to see how the multiple Windows versions evolve and when the next Insider program is giving us early access to the upcoming features.

Captains Hack Day

On our Docker Captains Hack Day Michael Irwin started a better Swarm Visualizer 2.0. During the day we added a first CI pipeline and - of course - Windows support. But not only Windows! With some magic multi-stage multi-arch builds we also managed to cross-build the visualizer on an Intel machine and create a Docker image for IBM s390x mainframes. Phil Estes tested the image in the IBM Cloud. I'll write a more detailed blog post about how to cross-build Node.js apps with Docker.

[Photo: Docker Captains Hack Day]

That was a fascinating week at DockerCon. Thanks to Jenny, Ashlinn, Victor, Mano ... for making this event so wonderful. I had a lot of hallway tracks to talk with many people about Windows containers in development and production. Share and learn!

Stefan
@stefscherer

Use Docker to Search in 320 Million Pwned Passwords

https://stefanscherer.github.io/use-docker-to-search-in-320-million-pwned-passwords/ (Sat, 05 Aug 2017 00:55:07 GMT)

This week Troy Hunt, a security researcher, announced a freely downloadable list of pwned passwords. Troy is the creator of Have I Been Pwned?, a website and service that will notify you when one of your registered email addresses has been compromised by a data breach.

In his latest blog post he introduced 306 Million Freely Downloadable Pwned Passwords, with an update of another 14 Million just the following day. He also has set up an online search at https://haveibeenpwned.com/Passwords

You can enter passwords and check if they have been compromised. But do not enter actively used passwords here, even if Troy is a nice person living in sunny Australia.

Pwned Passwords online service

My recommendation is

  1. If you are in doubt if your password has been pwned, just change it first and then check the old one in the online form.
  2. Use a Password manager like 1Password to create an individual long random password for each service you use.

But the huge password list is still quite interesting to work with.

Let's build a local search

What you can do is download the list of passwords (about 5 GByte compressed) and search locally in a safe place. You won't get the cleartext passwords, but only SHA1 sums of them. But we can create SHA1 sums of the passwords we want to search in this huge list.

You can download the files, which are compressed with 7-Zip. You also need a tool to create the SHA1 sum of a plain text password. And then you need another tool, a database or an algorithm to quickly search in a text file that has nearly 320 million lines.
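Creating such a SHA1 sum is a one-liner with openssl (the exact output format may vary slightly between OpenSSL versions):

$ echo -n "password" | openssl sha1
(stdin)= 5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8

The password list stores these hashes in uppercase, which is why the search script shown later converts the hash with toupper().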

Use Docker for the task

I immediately thought of a container that has all these tools installed. But I didn't want to add the huge password lists into that container, as that would result in a Docker image of about 12 GByte locally, or probably a 5-6 GByte compressed image on the Docker Hub.

The password files should be persisted locally on your laptop and mounted into the container to search in them with the tools needed for the task.

And I want to use some simple tools to get the work done. A first idea was born in the comments of Troy's blog post where someone showed a small bash script with grep to search in the file.

I first tried grep, but this took about 2 minutes to find a hash in the file. So I searched a little bit and found sgrep - a tool to grep in sorted files. Luckily the password files are sorted by the SHA1 hash. But I found only the source code and there is no standard package to install it. So we also need a C compiler for that.
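To give you an idea of the difference (the timings are only illustrative), a plain grep scans the whole multi-gigabyte file linearly, while sgrep does a binary search in the sorted file:

$ time grep -c 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt
# roughly two minutes in my test

$ time sgrep -c 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8 pwned-passwords-1.0.txt
# returns almost instantly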

In times before Docker you had a lot of stress installing many tools on your computer. But let's see how Docker can help us with all these steps.

Build the Docker image

I found the Sources of sgrep on GitHub and we will need Make and a C compiler to build the sgrep binary.

I will use a multi-stage build Dockerfile and explain every single line. You can build the image line by line and see the benefits of build caches while working on the Dockerfile. So after adding a line to the file you can run docker build -t pwned-passwords . to build and update the image.

For the beginning let's choose a small Linux base image. We will name the first stage as build. So the Dockerfile starts with

FROM alpine:3.6 AS build

The next step is we have to install Git, Make and the C compiler with its header files.

RUN apk update && apk add git make gcc musl-dev

Now we clone the GitHub repo with the source code of sgrep.

RUN git clone https://github.com/colinscape/sgrep

In the next line I'll create a bin folder that is needed for the build process. Then we go to the source directory and run the make command as there is a Makefile in that directory.

RUN mkdir sgrep/bin && cd sgrep/src && make

After these steps we have the sgrep binary compiled for Alpine Linux. But we also have installed a ton of other tools.

Now put all these instructions into a Dockerfile and build the Docker image.

$ docker build -t pwned-passwords .

Let's inspect all image layers we have created so far.

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" pwned-passwords
78171a118279	24.5kB	/bin/sh -c mkdir sgrep/bin && cd sgrep/src...
2323bcb14b5f	93.6kB	/bin/sh -c git clone https://github.com/co...
8ec1470030af	119MB	/bin/sh -c apk update && apk add git make ...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see we now have a Docker image of more than 120 MByte, but the sgrep binary is only 15 KByte. Yes, this is no typo. Yes, we will grep through GByte of data with a tiny 15 KByte binary.

Multi-stage build for the win

With Docker 17.05 and newer you can now add another FROM instruction and start with a new stage in your Dockerfile. The last stage will create the final Docker image. So every instruction after the last FROM defines what goes into the image you want to share eg. on Docker Hub.

So let's start our final stage of our Docker image build with

FROM alpine:3.6

The last stage does not need a name. Now we have an empty Alpine Linux again, all the 120 MByte of development environment won't make it into the final image. But if you build the Docker image more than once the temporary layers are still there and will be reused if they are unmodified. So the Docker build cache helps you speed up while working on the shell script.

In the previous build stage we have created the much faster sgrep command. What we now need is a small shell script that converts a plaintext password into a SHA1 sum and runs the sgrep command.

To create a SHA1 sum I'll use the openssl command. And it would be nice if the shell script could download the huge files for us. As the files are compressed with 7-Zip we also need wget to download them and 7z to extract them.

In the next instruction we install OpenSSL and the 7-Zip tool.

RUN apk update && apk add openssl p7zip

The COPY instruction has an option --from where you can specify another named stage of your build. So we copy the compiled sgrep binary from the build stage into the local bin directory.

COPY --from=build /sgrep/bin/sgrep /usr/local/bin/sgrep

The complete shell script is called search and can be found in my pwned-passwords GitHub repo. Just assume we have it in the current directory. The next COPY instruction copies it from your real machine into the image layer.

COPY search /usr/local/bin/search

As the last line of the Dockerfile we define an entrypoint to run this shell script if we run the Docker container.

ENTRYPOINT ["/usr/local/bin/search"]

Now append these lines to the Dockerfile and build the complete image. You will see that the first layers are already cached and only the last stage will be built.

The search script

You can find the search script in my GitHub repo as well as the Dockerfile. You only need these two tiny files to build the Docker image yourself.

#!/bin/sh
set -e

if [ ! -d /data ]; then
  echo "Please run this container with a volume mounted at /data."
  echo "docker run --rm -v \ $(pwd):/data pwned-passwords $*"
  exit 1
fi

FILES="pwned-passwords-1.0.txt pwned-passwords-update-1.txt"
for i in $FILES
do
  if [ ! -f "/data/$i" ]; then
    echo "Downloading $i"
    wget -O "/tmp/$i.7z" "https://downloads.pwnedpasswords.com/passwords/$i.7z"
    echo "Extracting $i to /data"
    7z x -o/data "/tmp/$i.7z"
    rm "/tmp/$i.7z"
  fi
done

if [[ $1 != "" ]]
then
PWD=$1
else
PWD="password"
echo "checking $PWD"
fi

hash=`echo -n $PWD | openssl sha1 | awk '{print $2}' | awk 'BEGIN { getline; print toupper($0)  }'`
echo "Hash is $hash"
totalcount=0
for i in $(sgrep -c $hash /data/*.txt)
do
  file=$(echo "$i" | cut -f1 -d:)
  count=$(echo "$i" | cut -f2 -d:)
  if [[ $count -ne 0 ]]; then
    echo "Oh no - pwned! Found $count occurences in $file"
  fi
  totalcount=$(( $totalcount + $count ))
done
if [[ $totalcount -eq 0 ]]; then
  echo "Good news - no pwnage found!"
else
  exit 1
fi

Build the final image

Now with these two files, Dockerfile and search shell script build the small Docker image.

$ docker build -t pwned-passwords .

Let's have a look at the final image layers with

$ docker history --format "{{.ID}}\t{{.Size}}\t{{.CreatedBy}}" stefanscherer/pwned-passwords
24eca60756c8	0B	/bin/sh -c #(nop)  ENTRYPOINT ["/usr/local...
c1a9fc5fdb78	1.04kB	/bin/sh -c #(nop) COPY file:ea5f7cefd82369...
a1f4a26a50a4	15.7kB	/bin/sh -c #(nop) COPY file:bf96562251dbd1...
f99b3a9601ea	10.7MB	/bin/sh -c apk update && apk add openssl p...
7328f6f8b418	0B	/bin/sh -c #(nop)  CMD ["/bin/sh"]
<missing>	3.97MB	/bin/sh -c #(nop) ADD file:4583e12bf5caec4...

As you can see, OpenSSL and 7-Zip take about 10 MByte, and the 16 KByte sgrep binary and the 1 KByte shell script are sitting on top of the 4 MByte Alpine base image.

I also have pushed this image to the Docker Hub with a compressed size of about 7 MByte. If you trust me, you can use this Docker image as well. But you will learn more about how multi-stage builds feel if you build the image yourself.

Search for pwned passwords

We now have a small 14.7 MByte Linux Docker image to search for pwned passwords.

Run the container with a folder mounted to /data. If you forget this, the script will show you how to run it.

When running the container for the first time it will download the two password files (5 GByte), which may take some minutes depending on your internet connectivity.

After the script has downloaded everything two files should appear in the current folder. For me it looks like this:

file list

Now search for passwords by adding a plaintext password as an argument

$ docker run --rm -v $(pwd):/data pwned-passwords troyhunt
Hash is 0CCE6A0DD219810B5964369F90A94BB52B056494
Oh no - pwned! Found 1 occurences in /data/pwned-passwords-1.0.txt

If you don't trust my script or the sgrep command, then run the container without network connectivity:

$ docker run --rm -v $(pwd):/data --network none pwned-passwords secret4949
Hash is 6D26C5C10FF089BFE81AB22152E2C0F31C58E132
Good news - no pwnage found!

So you are in luck, you can securely check that your password secret4949 hasn't been breached. But beware, this is still not a good password :-)

Run pwned-passwords

Works on Windows

If you have Docker installed on your Windows machine, you can also use my Docker image or build the image yourself.

With Docker for Windows it only depends on the shell you use.

For PowerShell the command to run the image is

docker run --rm -v "$(pwd):/data" pwned-passwords yourpass

PowerShell

And if you prefer the classic CMD shell use this command

docker run --rm -v "%cd%:/data" pwned-passwords yourpass

CMD shell

On my Windows 7 machine I have to use Docker Machine, but even here you can easily search for pwned passwords. All you have to do is mount a directory for the password files as /data into the container.

docker run --rm -v "/c/Users/stefan.scherer/pwned:/data" stefanscherer/pwned-passwords troyhunt

Windows 7 with pwned-passwords image

Conclusion

You now know that there are millions of passwords out there that may be used in a brute force attack against other online services.

So please use a password manager instead of predictable patterns for modifying passwords across different services.

You also have learned how Docker can keep your computer clean but still compile some open source projects from source code.

You have seen the benefits of multi-stage builds to create and share minimal Docker images without the development environment.

And you now have the possibility to search for your current passwords in a safe place without leaking them to the internet. Some other online service may collect all the data entered into a form. So keep your passwords secret, and change them if you are in doubt.

If you want to hear more about Docker, follow me on Twitter @stefscherer.

Exploring new NanoServer Insider images

https://stefanscherer.github.io/exploring-new-nanoserver-insider-images/ (Tue, 18 Jul 2017 09:42:41 GMT)

Last week the first Insider preview container images appeared on the Docker Hub. They promise much smaller sizes to give us more lightweight Windows images for our applications.

To use these Insider container images you also need an Insider preview of Windows Server 2016 or Windows 10. Yes, this is another great announcement that you can get early access and give feedback to the upcoming version of Windows Server. So let's grab it.

Windows Server Insider

  1. Register at Windows Insider program https://insider.windows.com and join the Windows Server Insider program.

  2. Download the Windows Server Insider preview ISO from https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Now you can create a VM and install Docker. You can either build the VM manually and follow the docs "Using Insider Container Images" on how to install Docker and pull the Insider container images. Or you can use my Packer template and Vagrant environment to automate these steps. The walkthrough is described at

https://github.com/StefanScherer/insider-docker-machine

Windows Insider images

There are four new Docker images available with a much smaller footprint.

[Image: the four new Windows Insider images]

  • microsoft/windowsservercore-insider
  • microsoft/nanoserver-insider
  • microsoft/nanoserver-insider-dotnet
  • microsoft/nanoserver-insider-powershell

The Windows Server Core Insider image got down from 5 GB to only 2 GB which saves a lot of bandwidth and download time.

You may wonder why there are three Nano Server Insider images and why there is one without PowerShell.

Aiming the smallest Windows base image

If we compare the image sizes of the current microsoft/nanoserver image (with its base layer and update layer) with the new Insider images, you can see the reason.

NanoServer sizes

If you want to ship your application in a container image you don't want to ship a whole operating system, but only the parts needed to run the application.

And shipping faster means shipping smaller images. For many applications you do not need e.g. PowerShell inside your base image at runtime, which would take another 54 MByte to download from the Docker registry.

Let's have a look at current Windows Docker images available on the Docker Hub. To run a Golang webserver for example on an empty Windows Docker host you have to pull the 2MB binary and the two NanoServer base layers with hundreds of MB to run it in a container.

docker pull whoami

Of course these base images have to be downloaded only once as other NanoServer container images will use the same base image. But if you work with Windows containers for a longer time you may have noticed that you still have to download different update layers from time to time that pull another 122 MB.

And if the NanoServer base image is much smaller then the updates also will be smaller and faster to download.

With the new Insider container images you can build and run containerized .NET core applications that are still smaller than the NanoServer + PowerShell base image.

Node.js

Another example is providing a Node.js container image based on the new NanoServer Insider image with only 92 MByte. We have just cut off three hundred MB.

Node.js NanoServer sizes

If we compare that with some of the Linux Node.js container images, we are at about the size of the slim images.

Node.js slim image sizes

Multi-stage build

Building such small Windows images comes at a cost. You have to live without PowerShell. But the new multi-stage builds introduced with Docker 17.05 really help you, and you can use PowerShell before the final image layers are built.

If you haven't heard about multi-stage builds, the concept is to have multiple FROM instructions in a Dockerfile. Only the part from the last FROM to the end of the file builds the final container image. This is also called the last stage. In all the other stages you don't have to optimize too much and can use the build cache much better. You can read more about multi-stage builds at the Docker Blog.

Let's have a closer look how to build a small Node.js base image. You can find the complete Dockerfile on GitHub.

In the first stage I'm lazy and even use the microsoft/windowsservercore-insider image. The reason is that I'm using the GPG tools to verify the downloads, and these tools don't run quite well in NanoServer at the moment.

# escape=`
FROM microsoft/windowsservercore-insider as download
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN Invoke-WebRequest ... 
RUN Expand-Archive ...

The Dockerfile has a second FROM instruction which then uses the smallest Windows base image. In that stage you normally COPY deploy files and folders from previous stages. In our case we copy the Node.js installation folder into the final image.

The one RUN instruction sets the PATH environment variable with the setx command instead of PowerShell commands.

FROM microsoft/nanoserver-insider
ENV NPM_CONFIG_LOGLEVEL info
COPY --from=download /nodejs /nodejs
RUN setx PATH "%PATH%;C:\nodejs;%APPDATA%\npm"
CMD [ "node.exe" ]

Users of such a Node.js base image can work as usual by COPY deploying their source tree and node_modules folder into that image and running the application as a small container.

FROM stefanscherer/node-windows:8.1.4-insider
WORKDIR /code
COPY . /code
CMD ["node.exe", "app.js"]

So all you have to do is change the FROM instruction to the smaller insider Node.js image.

Further Insider images

I have pushed some of my first Insider images to the Docker Hub so it may be easier for you to try out different languages.

  • stefanscherer/node-windows:6.11.1-insider
  • stefanscherer/node-windows:8.1.4-insider
  • stefanscherer/golang-windows:1.8.3-insider
  • stefanscherer/dockertls-windows:insider

If you want to see how these images are built, then you can find the Dockerfiles in the latest pull requests of my https://github.com/StefanScherer/dockerfiles-windows repo.

Docker Volumes

If you have worked with Docker volumes on Windows you may know this already. Node.js and other tools and languages have problems when they want to get the real name of a file or folder that is mapped from the Docker host into the container.

Node.js for example thinks the file is in the folder C:\ContainerMappedDirectories, but cannot find the file there. There is a workaround described in Elton Stoneman's blog post "Introducing the 'G' Drive" to map it to another drive letter.

With the new Insider preview I see a great improvement on that topic. When running normal Windows containers without Hyper-V isolation there is no longer a symbolic link.

If we run the Node.js container interactively and map the folder C:\code into the container, we can list the C: drive and see that the code folder is a normal directory.

docker run -v C:\code:C:\code stefanscherer/node-windows:8.1.4-insider cmd /c dir

[Screenshot: docker run with a mapped volume]

With this setup you are able to mount your source code into the Node.js container and run it, e.g. with nodemon, to live reload it after changing it on the host.
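A rough sketch of such a workflow could look like this, assuming your app and its package.json live in C:\code and installing nodemon on the fly:

docker run -it -p 3000:3000 `
  -v C:\code:C:\code -w C:\code `
  stefanscherer/node-windows:8.1.4-insider `
  cmd /c "npm install -g nodemon && nodemon app.js"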

Unfortunately this is not available with the Hyper-V isolation that is the default on Windows 10 Insider machines.

Running the same command with --isolation=hyperv shows the symlinked directory which Node.js cannot handle at the moment.

docker run -v C:\code:C:\code --isolation=hyperv stefanscherer/node-windows:8.1.4-insider cmd /c dir

[Screenshot: docker run with a mapped volume under Hyper-V isolation]

But this improvement in native Windows containers looks very promising to solve a lot of headaches for all the maintainers of Git for Windows, Golang, Node.js and so on.

Conclusion

Having smaller Windows container images is a huge step forward. I encourage you to try out the much smaller images. You'll learn how it feels to work with them and you can give valuable feedback to the Microsoft Containers team shaping the next version of Windows Server.

Can we make even smaller images? I don't know, but let's find out. How about naming the new images? Please make suggestions at the Microsoft Tech Community https://techcommunity.microsoft.com.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer to stay up to date with Windows containers.

Use multi-stage builds for smaller Windows images

https://stefanscherer.github.io/use-multi-stage-builds-for-smaller-windows-images/ (Wed, 19 Apr 2017 22:52:00 GMT)

I'm still here in Austin, TX at DockerCon 2017 and I want to show you one of the announcements that is very useful to build small Windows Docker images.

In Tuesday's first keynote at DockerCon, Solomon Hykes introduced the feature that impressed me most and that will make it into version 17.05.0 of Docker: multi-stage builds.

announcement at DockerCon about multi-stage builds

The demo in the keynote only showed Linux images, but you can use this feature for Windows images as well.

How did we build smaller images in the past?

As we know, each instruction in a Dockerfile like COPY or RUN builds a new layer of the image. So everything you do in e.g. a RUN instruction is atomic and saved into one layer. It was a common practice to use multi-line RUN instructions to clean up temporary files and cache folders before that instruction ends, to minimize the size of that layer.

For me it always looked like a workaround and a little too technical to know where all these temporary files have to be wiped out. So it is great to remove this noise out of your Dockerfiles.

Another workaround was to create two Dockerfiles and a script to simulate such stages: copy files from the first Docker image back to the host and then into the second Docker image. This could lead to errors if you have old temp folders on your host from which you copy the results of the first build. So it is good that we can remove this complexity and avoid such build scripts entirely.

Multi-stage build on Windows

The idea behind multi-stage builds is that you can define two or more build stages and only the layers of the last stage get into the final Docker image.

The first stage

As you can see in the nice slide you can start with a first stage and do what you like in there. Maybe you need a complete build environment like MSBuild, or the Golang compiler or dev dependencies to run Node.js tests with your sources.

The FROM instruction now can be followed by a stage name, e.g. build. I recommend introducing that in your Dockerfile as we will need this name later again. This is how your Dockerfile then could look like:

FROM microsoft/windowsservercore as build

You do not need to use multi-line RUN instructions any more if you never liked them. Just keep your Dockerfile simple, readable and maintainable for your team colleagues. An additional advantage is that you can use the Docker build cache much better.

Think of a giant multi-line RUN instruction with three big downloads, uncompress and cleanup steps, where the third download crashes due to internet connectivity issues. Then you have to do all the other downloads again when you restart the docker build.

So relax and just download one file per RUN instruction, even put the uncompress into another RUN layer, it doesn't matter for the final image size.

The last stage

The magic comes into the Dockerfile as you can use more than one FROM instruction. Each FROM starts a new build stage, and all lines beginning from the last FROM will make it into the final Docker image. The last stage does not need to have a name like the previous ones.

In this last stage you define the minimal runtime environment for your containerised application.

The COPY instruction now has a new option --from where you can specify from which stage you want to copy files or directories into the current stage.

Enough theory. Let's have a look at some real use-cases I already tried out.

Build a Golang program

A simple multi-stage Dockerfile to build a Golang binary from source could look like this:

FROM golang:nanoserver as gobuild
COPY . /code
WORKDIR /code
RUN go build webserver.go

FROM microsoft/nanoserver
COPY --from=gobuild /code/webserver.exe /webserver.exe
EXPOSE 8080
CMD ["\\webserver.exe"]

The first four lines describe the normal build. We copy the source codes into the Golang build environment and build the Windows binary with it.

Then with the second FROM instruction we choose an empty NanoServer image. With this we skip about 100 MByte of compressed Golang build environment images for the production image.

The COPY --from=gobuild instruction copies the final Windows binary from the gobuild stage into the final stage.

The last two lines are just the normal things you do, expose the port on which your app is listening and describing the command that should be called when running a container with it.

This Dockerfile now can be easily be built as always with

docker build -t webserver .

The final Docker image only has a 2 MByte compressed layer in addition to the NanoServer base layers.

You can find a full example for such a simple Golang webserver in my dockerfiles-windows repo, the final Docker Hub image is available at stefanscherer/whoami:windows-amd64-1.2.0.

Install MongoDB MSI in NanoServer

Another example for this multi-stage build is that you can use it to install MSI packages and put the installed programs and files into a NanoServer image.

Well, you cannot install MSI packages in NanoServer directly, but you can start with the Windows Server Core image in the build stage and then switch to NanoServer in the final stage.

If you know where the software has been installed you can COPY those files into the image in the final stage.

The Dockerfile how to build a MongoDB NanoServer image is also available on GitHub.

The first stage more or less looks like this:

FROM microsoft/windowsservercore as msi
RUN "download MSI page"
RUN "check SHA sum of download"
RUN "run MSI installer"

and the final stage looks like this:

FROM microsoft/nanoserver
COPY --from=msi C:\mongodb\ C:\mongodb\
...
RUN "put MongoDB binaries into PATH"
VOLUME C:\data\db
EXPOSE 27017
CMD ["mongod.exe"]
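
To give you a rough idea what the shortened RUN instructions of the msi stage could wrap, here is a sketch in PowerShell. The URL, checksum and MSI property name are placeholders - the real Dockerfile on GitHub is the reference:

# Placeholder values - take the real URL and SHA256 sum from the MongoDB download page
$url    = 'https://example.com/mongodb-3.4.2-signed.msi'
$sha256 = '0000000000000000000000000000000000000000000000000000000000000000'
Invoke-WebRequest -Uri $url -OutFile 'mongodb.msi' -UseBasicParsing
if ((Get-FileHash -Path 'mongodb.msi' -Algorithm SHA256).Hash -ne $sha256) {
  throw 'SHA256 checksum of mongodb.msi does not match'
}
# Silent MSI install into C:\mongodb so the final stage knows where to COPY from
Start-Process msiexec.exe -ArgumentList '/i', 'mongodb.msi', '/qn', 'INSTALLLOCATION=C:\mongodb' -Wait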

Another pro tip: If you really want small Windows Docker images you should also avoid RUN or ENV instructions in the last stage.

The final MongoDB NanoServer image is available at stefanscherer/mongo-windows:3.4.2-nano.

Conclusion

With multi-stage builds coming into Docker 17.05 we will be able to

  • put all build stages into a single Dockerfile to use only one simple docker build command
  • use the build cache by using single line RUN instructions
  • start with ServerCore, then switch to NanoServer
  • use the latest NanoServer image with all security updates installed for the last stage, even if the upstream build layer may be out of date

This gives you an idea what you will be able to do once you have Docker 17.05 or later installed.

Update 2017-05-07: I build all my dockerfiles-windows Windows Docker images with AppVeyor and it is very easy to upgrade to Docker 17.05.0-ce during the build with the script update-docker-ce.ps1. For local Windows Server 2016 VMs you could use this script as well. Sure, at the moment we have to switch from the EE to the CE edition until 17.06.0-ee also brings this feature. Your images will still run on 17.03.1-ee production servers.

Please use the comments below if you have further ideas, questions or improvements to share. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Yes, you can "Docker" on Windows 7]]>

This week I was asked to help automate a task to get some Linux binaries and files packaged into a tarball. Some developers tried to spin up a Linux virtual machine and run a script to install tools and then do the packaging. Although I also like and use Vagrant

]]>
https://stefanscherer.github.io/yes-you-can-docker-on-windows-7/5986d4ec688a490001540973Fri, 31 Mar 2017 17:02:07 GMT

This week I was asked to help automate a task to get some Linux binaries and files packaged into a tarball. Some developers tried to spin up a Linux virtual machine and run a script to install tools and then do the packaging. Although I still like and use Vagrant very often, it seemed to me that using Docker would be easier to maintain as this could be done in a one-shot container.

The hard facts - Windows 7 Enterprise

The bigger problem was the fact that in some companies you still find Windows 7 Enterprise. It may be a delayed rollout of new notebooks that keeps the employees on that old desktop platform.

So using Docker for Windows was no option as it only works with Windows 10 Pro with Hyper-V. This looks like a good setup for new notebooks, but if you want to use Docker now you have to look for other solutions.

Locked-in Hypervisor

The next obstacle was that for Vagrant it is better to use VMware Workstation on Windows 7 instead of VirtualBox. There may also be a company policy to use one specific hypervisor, as the knowledge is already there from using other server products in the datacenter.

So going down to the Docker Toolbox was also no option as it comes with VirtualBox to run the Linux boot2docker VM.

Embrace your environment

So we went with a manual installation of some Docker tools to get a Linux Docker VM running on the Windows 7 machine. Luckily the developers already had the Chocolatey package manager installed.

Let's recap what I found on the notebooks

  • Windows 7 Enterprise
  • VMware Workstation 9/10/11/12

Well, there is a tool called Docker Machine to create local Docker VMs very easily, and there is a VMware Workstation plugin available. All these tools are also available as Chocolatey packages.

So what we did on the machines was installing three packages with these simple commands in an administrator terminal.

choco install -y docker
choco install -y docker-machine
choco install -y docker-machine-vmwareworkstation

Then we closed the administrator terminal as the next commands can be done in normal user mode.

My host is my castle

Every developer installs the tools they need for their work. Installing them on the host machine - your desktop or notebook - leads to machines that all look slightly different over time.

While creating the Docker Machine we ran into a "works on my machine, but doesn't work on your machine" problem I hadn't seen before.

Something just went wrong while setting up the Linux VM. It turned out that copying the Docker TLS certs with SSH didn't work. Taking a deeper look at what else was installed on the host, we found that some SSH client implementations just don't work very well.

Luckily there is a lesser known option in the docker-machine binary to ignore external SSH clients and use the built-in implementation.

With that knowledge we were able to create a VMware Docker Machine on that laptop with

docker-machine --native-ssh create -d vmwareworkstation default

Using the good old PowerShell on the Windows 7 notebook helps you to use that Linux Docker VM by setting some environment variables.

docker-machine env | iex

After that you can for example run docker version to retrieve the client and server versions, which are both the up-to-date community editions.

docker version

Quite exciting to be able to use that Windows 7 notebook with the latest Docker tools installed.

So hopefully Docker and using containers in more and more development tasks helps keep their notebooks clean, so they install fewer tools on the host and instead run more tools in containers.

I can C: a problem

Using that Docker Machine VM worked really well until we faced another problem. Building some Docker images we ran out of disk space. Oh no: although the Windows 7 notebooks had been upgraded with a 1 TB SSD, the C: partition hadn't been enlarged for some historical reasons.

Face palm

Docker Machine creates the Linux VMs in the current user's home directory. This is a good idea, but with a 120 GB partition and only 7 GB left on C: we had to fix it. Taking a deep breath and embracing that environment, we came to the following solution.

We destroyed the Docker Machine again (because it's so easy) and also removed the .docker folder, to link it to a folder that resides on a bigger partition of the SSD.

docker-machine rm -f default
rm $env:USERPROFILE\.docker
mkdir D:\docker
cmd /c mklink /J $env:USERPROFILE\.docker D:\docker

Then we recreated the Docker Machine with the command from above and set the environment variables again.

docker-machine --native-ssh create -d vmwareworkstation default
docker-machine env | iex

And hurray - it worked. The VM with its disk resides on the bigger D: drive and we don't have to set any other global environment variables.

With that setup I made the developers happy. They could start using Docker without waiting for new hardware or asking their admins to resize or reformat their partitions.

We soon had a small Dockerfile and put the already existing provisioning scripts into an image. So we finished the task by running a Linux container that can be thrown away much more easily than a whole VM.

Daily work

To recap how to use this Docker Machine you normally do the following steps after booting your notebook.

docker-machine start
docker-machine env | iex

Then you can work with this default Linux Docker VM.
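
If you do these two steps every day, a small helper function in your PowerShell profile can save some typing. Just a sketch, using the machine name default from above:

# Start the default Docker Machine and point the Docker client to it
function Start-DockerVM {
  docker-machine start default
  docker-machine env default | Invoke-Expression
  docker version
}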

Planning your hardware update

The story ended well, but I recommended thinking ahead and planning the next hardware update. So before they just get the next notebook generation they should think about which hypervisor they want to use in the future.

Using Windows 10 Enterprise with the built-in Hyper-V would be easier. You can run native Windows containers with it and use Docker for Windows to switch between Linux and Windows containers. Using Vagrant with Hyper-V also gets better and better.

But if company policy still restricts you to e.g. VMware then you can also use the steps above to create a Linux Docker machine. In that case you cannot use Windows containers directly on the Windows 10 machine either, as Hyper-V does not work in parallel with other hypervisors. Instead you might spin up a Windows Server 2016 VM using my Windows Docker Machine setup. With that you can easily switch between Linux and Windows containers using the docker-machine env command.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. I love to hear about your enterprise setup and how to make Docker work on your developer's machines. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[7 Reasons to attend DockerCon]]>

I'm more than happy that I can make it to DockerCon in Austin, Texas. It is only a few weeks until the workshops and conference starts April, 17th. If you still need some good reasons why you should attend I can give you some ideas. And you will get 10%

]]>
https://stefanscherer.github.io/7-reasons-to-attend-dockercon/5986d4ec688a490001540972Wed, 29 Mar 2017 22:43:00 GMT

I'm more than happy that I can make it to DockerCon in Austin, Texas. It is only a few weeks until the workshops and the conference start on April 17th. If you still need some good reasons why you should attend I can give you some ideas. And you will get a 10% discount with the code CaptainStefan.

Workshops

On Monday I'll be at the workshop Modernizing monolithic ASP.NET applications with Docker where you can get some hands-on experience with Windows containers. You cannot have a better place if you want to get started with Docker on Windows. Michael Friis and Elton Stoneman from Docker and myself can answer all your questions.

See some Docker Swarm demos

Come to the Community Theater on Tuesday, Apr 18th, 1:00 PM to see my live demo Swarm 2 Go and how our team at SEAL Systems has built a portable multi-arch data center with Raspberry Pi and UP boards.

picloud

You will have the chance to play the chaos monkey and unplug cables to see Docker swarm mode in action. With the help of LEDs we can visualise failures and how the Docker swarm becomes healthy again. All steps to build such a cluster are available in an open source repo.

Learn about Docker on Windows

Docker is no longer a thing only on Linux. There are several talks about Docker on the Windows platform that I want to see.

I also recommend visiting the Microsoft booth to hopefully see some Docker swarm mode on Windows Server. I really look forward to seeing the latest news and talking with some of the Microsoft Container and Networking team.

Multiple platforms

If you think Docker is only Linux on Intel machines, then - compared to an instrument - it may look like this.

keyboard

But as you can see from the talks above, Docker is available on multiple platforms: Linux and Windows, from small ARM devices like the Raspberry Pi to big IBM machines.

So the whole spectrum of Docker looks more like this, and once you have learned the Docker commands you are able to play this:

organ

So it is time to learn how easy it is to deploy your applications for more than one platform.

See you at DockerCon! Ping me on Twitter @stefscherer or with the DockerCon app to get in touch with me during that conference week.

]]>
<![CDATA[How to run encrypted Windows websites with Docker and Træfɪk]]>

Nowadays we read it all the time that every website should be encrypted. Adding TLS certificates to your web server sounds like a hard task to do. You have to update your certificates before they get invalid. I don't run public websites on a regular basis, so I - like

]]>
https://stefanscherer.github.io/how-to-run-encrypted-windows-websites-with-docker-and-traefik/5986d4ec688a490001540971Fri, 10 Mar 2017 22:21:00 GMT

Nowadays we read it all the time that every website should be encrypted. Adding TLS certificates to your web server sounds like a hard task. You have to renew your certificates before they become invalid. I don't run public websites on a regular basis, so I - like many others, I guess - have heard of Let's Encrypt, but never really tried it.

But let's learn new things and try it out. I also promised in the interview in John Willis' Dockercast that I would write a blog post about it. With some modern tools, as you will see, it's not very complicated to run your Windows website with TLS certificates.

In this blog post I will show you how to run your website in Windows containers with Docker. You can develop your website locally in a container and push it to your server. Another Windows container runs the Træfɪk proxy, which helps us with the TLS certificates as well as with its dynamic configuration to add more than just one website.

Træfɪk is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease. It supports several backends like Docker to register and update its configuration for each newly started container.

This picture gives you an overview of the architecture:

Traefik architecture

Normally Træfɪk runs inside a container and it is well known in the Linux Docker community. A few weeks ago I saw that there are also Windows binaries available. Let's see if we can use Træfɪk in a Windows container to provide encrypted HTTPS traffic to other Windows containers running our IIS website or another web service.

Step 1: Create a Windows Docker host in Azure

First of all we need a Windows Server 2016 machine with Docker in the cloud. I will use Azure as Microsoft provides a VM template for that. This server will later be our webserver running our website, with its own DNS name and TLS certs.

Go to the Windows Containers quick start guide at docs.microsoft.com and press the "Deploy to Azure" button.

Deploy to Azure

This will bring you to the Azure portal where you can customize the virtual machine. Create a new resource group, choose the location where the server should run and a public DNS name, as well as the size of the VM.

Customize machine

After you click on "Purchase" the deployment starts which should take only a few minutes.

Azure starts deployment

In the meantime click on the cube symbol on the left. That will show you all resource groups you have.

This Windows + Docker template already creates inbound security rules for HTTPS port 443 as well as the Docker TLS port 2376. So for our purposes we don't need to add more inbound rules.

Step 2: Buy a domain and update DNS records

For Let's Encrypt you need your own domain name to get TLS certificates. For my tests I ordered a domain name at GoDaddy. But after I walked through the steps I realised that Træfɪk can also automatically update your DNS records when you use DNSimple, CloudFlare etc.

But for first time domain name users like me I'll show you the manual steps. In my case I went to my domain provider and configured the DNS records.

Get the public IP address

Before we can update the DNS record we need the public IP address of the VM. This IP address is also used for the Docker TLS certificates we will create later on.

In the Azure Portal, open the resource group and click on the public IP address.

Resource group

Write down or copy the IP address shown here.

Public IP address

Go back to your domain provider and enter the public IP address in the A record. If you want to run multiple websites within Docker containers, add a CNAME resource record for each sub domain you need. For this tutorial I have added portainer and whoami as additional sub domains.

Update DNS records

After some minutes all the DNS servers should know your domain name with the new IP address of your Windows Server 2016.

Step 3: Secure Docker with TLS

We now log into the Docker host with RDP. You can use the DNS name provided by Azure or use your domain name. But before you connect with RDP, add a shared folder to your RDP session so you can also copy back the Docker TLS client certificates to your local machine. With this you will also be able to control your Windows Docker engine directly from your local computer.

In this example I shared my desktop folder with the Windows VM.

Add folder in RDP client

Now login with the username and password entered at creation time.

Login with RDP

Create Docker TLS certs

To use Docker remotely it is recommended to use client certificates, so nobody without those certs can talk to your Docker engine. The same applies if a Windows container wants to communicate with the Docker engine. Using just the unprotected port 2375 would give every container the chance to gain access to your Docker host.

Open a PowerShell terminal as an administrator to run a Windows container that can be used to create TLS certificates for your Docker engine. I already have blogged about DockerTLS in more detail so we just use it here as a tool.

Retrieve all local IP addresses so that the TLS certificate allows connections from the host itself as well as from other Windows containers talking to your Docker engine.

$ips = ((Get-NetIPAddress -AddressFamily IPv4).IPAddress) -Join ','

Also create a local folder for the client certificates.

mkdir ~\.docker

Now run the DockerTLS tool with docker run, just append the public IP address from above to the list of IP_ADDRESSES. Also adjust the SERVER_NAME variable to your domain name.

docker run --rm `
  -e SERVER_NAME=schererstefan.xyz `
  -e IP_ADDRESSES=$ips,52.XXXXXXX.198 `
  -v "C:\ProgramData\docker:C:\ProgramData\docker" `
  -v "$env:USERPROFILE\.docker:C:\Users\ContainerAdministrator\.docker" `
  stefanscherer/dockertls-windows

Run dockertls

Docker will pull the Windows image from Docker Hub and create the TLS certificates in the correct folders for your Docker engine.

Afterwards you have to restart the Docker engine to use the TLS certificates. The Docker engine then additionally listens on TCP port 2376.

restart-service docker

Restart docker

Add firewall exception for Docker

This step is needed to make other Windows containers talk to the Docker engine at port 2376. But it also has another benefit: with these certs you can use the Docker client on your local machine to communicate with the Windows Docker engine in Azure. I will still start Træfɪk later on from the Docker host itself as we need some volume mount points.

The Windows Server's firewall is active, so we now have to add an exception to allow inbound traffic on port 2376. The network security group for the public IP address already has an inbound rule to the VM. This firewall exception now allows the connection to the Docker engine.

Add firewall exception

From now on you can connect to the Docker engine listening on port 2376 from the internet.

Copy Docker client certs to your local machine

To setup a working communication from your local machine, copy the Docker client certificates from the virtual machine through the RDP session back to your local machine.

Copy Docker TLS certs to client

On your local machine try to connect to the remote Windows Docker engine with TLS encryption and the client certs.

$ DOCKER_CERT_PATH=~/Desktop/.docker DOCKER_TLS_VERIFY=1 docker -H tcp://schererstefan.xyz:2376 version

Docker client from Mac

Now you are able to start and stop containers as you like.
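
If your local machine also runs Windows, a PowerShell equivalent could look like this sketch. The certificate path is an assumption, adjust it to wherever you copied the certs to:

$env:DOCKER_CERT_PATH = "$env:USERPROFILE\Desktop\.docker"   # adjust to your cert folder
$env:DOCKER_TLS_VERIFY = "1"
$env:DOCKER_HOST = "tcp://schererstefan.xyz:2376"
docker version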

Step 4: Run Træfɪk and other services

Now comes the fun part. We use Docker and Docker Compose to describe which containers we want to run.

Install Docker Compose

To spin up all our containers I use Docker Compose and a docker-compose.yml file that describes all services.

The Windows VM does not come with Docker Compose. So we have to install Docker Compose first. If you are working remotely you can use your local installation of Compose and skip this step.

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-Windows-x86_64.exe" `
  -UseBasicParsing -OutFile $Env:ProgramFiles\docker\docker-compose.exe

If you prefer Chocolatey, use choco install docker-compose instead.

Create data folders on Docker host

You need to persist some data outside of the Docker containers, so we create some data folders. Træfɪk retrieves the TLS certs and these should be persisted outside of the container. Otherwise you run into the Let's Encrypt rate limit of 20 requests per week when obtaining new certificates. This happened to me while trying different things with Træfɪk and starting and killing the container lots of times.

PS C:\Users\demo> mkdir sample
PS C:\Users\demo> cd sample
PS C:\Users\demo\sample> mkdir traefikdata
PS C:\Users\demo\sample> mkdir portainerdata

docker-compose.yml

For a first test we define two services, the traefik service and an example web server called whoami. This tutorial should just give you an idea and you can extend the YAML file to your needs. Run an IIS website? Put it into a container image. And another IIS website? Just run a separate container with that other website in it. You see, you don't have to mix multiple sites; just keep each of them in its own microservice image.

Open up an editor and create the YAML file.

PS C:\Users\demo\sample> notepad docker-compose.yml

version: '2.1'
services:
  traefik:
    image: stefanscherer/traefik-windows
    ports:
      - "8080:8080"
      - "443:443"
    volumes:
      - ./traefikdata:C:/etc/traefik
      - ${USERPROFILE}/.docker:C:/etc/ssl:ro

  whoami:
    image: stefanscherer/whoami-windows
    depends_on:
      - traefik
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:whoami.schererstefan.xyz"

networks:
  default:
    external:
      name: nat

I have already built a Træfɪk Windows Docker image that you can use. There might be an official image in the future. If you don't want to use my image, just use this Dockerfile and replace the image: stefanscherer/traefik-windows with build: ., so Docker Compose will build the Træfɪk image for you.

The Dockerfile looks very simple as we directly add the Go binary to the Nanoserver Docker image and define some volumes and labels.

FROM microsoft/nanoserver

ADD https://github.com/containous/traefik/releases/download/v1.2.0-rc2/traefik_windows-amd64 /traefik.exe

VOLUME C:/etc/traefik
VOLUME C:/etc/ssl

EXPOSE 80
ENTRYPOINT ["/traefik", "--configfile=C:/etc/traefik/traefik.toml"]

# Metadata
LABEL org.label-schema.vendor="Containous" \
      org.label-schema.url="https://traefik.io" \
      org.label-schema.name="Traefik" \
      org.label-schema.description="A modern reverse-proxy" \
      org.label-schema.version="v1.2.0-rc2" \
      org.label-schema.docker.schema-version="1.0"

traefik.toml

Træfɪk needs a configuration file where you specify your email address for the Let's Encrypt certificate requests. You will also need the IP address of the container network so that Træfɪk can contact your Docker engine.

$ip=(Get-NetIPAddress -AddressFamily IPv4 `
   | Where-Object -FilterScript { $_.InterfaceAlias -Eq "vEthernet (HNS Internal NIC)" } `
   ).IPAddress
Write-Host $ip

Now open an editor to create the traefik.toml file.

PS C:\Users\demo\sample> notepad traefikdata\traefik.toml

Enter that IP address as the endpoint in the [docker] section. Also adjust the domain names to yours.

[web]
address = ":8080"

[docker]
domain = "schererstefan.xyz"
endpoint = "tcp://172.24.128.1:2376"
watch = true

[docker.tls]
ca = "C:/etc/ssl/ca.pem"
cert = "C:/etc/ssl/cert.pem"
key = "C:/etc/ssl/key.pem"

# Sample entrypoint configuration when using ACME
[entryPoints]
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]

# Email address used for registration
#
# Required
#
email = "you@yourmailprovider.com"

storage = "c:/etc/traefik/acme.json"
entryPoint = "https"

[[acme.domains]]
   main = "schererstefan.xyz"
   sans = ["whoami.schererstefan.xyz", "portainer.schererstefan.xyz", "www.schererstefan.xyz"]

Open firewall for all container ports used

Please note that the Windows firewall is also active for the container network. The whoami service listens on port 8000 in each container. To make Træfɪk connect to the whoami containers you have to add a firewall exception for port 8000.

Docker automatically adds a firewall exception for all ports mapped to the host with ports: in the docker-compose.yml. But for the exposed ports this does not happen automatically.
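
One way to add such an exception on the Docker host is the New-NetFirewallRule cmdlet. This is just a sketch for the whoami port 8000, adjust the port and rule name for other services (for example port 80 for the IIS container later on):

New-NetFirewallRule -DisplayName "Allow container port 8000" -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow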

Spin up Træfɪk and whoami

Now it's time to spin up the two containers.

docker-compose up

You can see the output of each container and stop them by pressing CTRL+C. If you want to run them detached in the background, use

docker-compose up -d

To see the output of the services you can use docker-compose logs traefik or docker-compose logs whoami at any time.

Træfɪk now fetches TLS certificates for your domain with the given sub domains. Træfɪk listens for starting and stopping containers.

Test with a browser

Now open a browser on your local machine and try your TLS encrypted website with the subdomain whoami. You should see a text like I'm 3e1f17ecbba3 which is the hostname of the container.

Now let's try Træfɪk's load balancing feature by scaling up the whoami service.

docker-compose scale whoami=3

Now there are three whoami containers running and Træfɪk knows all three of them. Each request to the subdomain will be load balanced to one of these containers. You can SHIFT-reload your page in the browser and see that each request returns another hostname.

Test whoami service with browser

So we have a secured HTTPS connection to our Windows containers.

IIS

The power of Docker is that you can run multiple services on one machine if you have resources left. So let's add another web server, let's choose an IIS server.

Add these lines to the docker-compose.yml.

  www:
    image: microsoft/iis
    expose:
      - 80
    depends_on:
      - traefik
    labels:
      - "traefik.backend=www"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:www.schererstefan.xyz"

Remember to add a firewall exception for port 80 manually. After that spin up the IIS container with

docker-compose up -d www

And check the new sub domain. You will see the welcome page of IIS.

IIS welcome page

Portainer

Let's add another useful service to monitor your Docker engine. Portainer is a very good UI for that task and it is also available as a Windows Docker image.

Add another few lines to our docker-compose.yml.

  portainer:
    image: portainer/portainer
    command: -H tcp://172.24.128.1:2376 --tlsverify
    volumes:
      - ./portainerdata:C:/data
      - ${USERPROFILE}/.docker:C:/certs
    depends_on:
      - traefik
    labels:
      - "traefik.backend=portainer"
      - "traefik.frontend.entryPoints=https"
      - "traefik.frontend.rule=Host:portainer.schererstefan.xyz"

Portainer also needs the client certs to communicate with the Docker engine. Another volume mount point is used to persist data like your admin login outside the container.

Now run Portainer with

docker-compose up -d portainer

Then open your browser on your local machine with the subdomain. When you open it for the first time Portainer will ask you for an admin password. Enter a password you want to use and then log in with it.

Portainer login

Now you have a UI to see all running containers, all downloaded Docker images etc.

Portainer dashboard

Portainer containers

Conclusion

What we have learned is that Træfɪk works pretty well on Windows. It helps us secure our websites with TLS certificates. In combination with Docker Compose you can add or remove websites on the fly or even scale some services with the built-in load balancer of Træfɪk.

Read more details in the Træfɪk documentation as I can give you only a short intro of its capabilities.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Setup a Windows Docker CI with AppVeyor]]>

I love GitHub and all the services around it. It enables you to work from anywhere or any device and still have your complete CI pipeline in your pocket. Everything is done with a git push. You can add services like Codeship, Travis, Circle and lots of others to

]]>
https://stefanscherer.github.io/setup-windows-docker-ci-appveyor/5986d4ec688a49000154096fFri, 10 Mar 2017 05:54:00 GMT

I love GitHub and all the services around it. It enables you to work from anywhere or any device and still have your complete CI pipeline in your pocket. Everything is done with a git push. You can add services like Codeship, Travis, Circle and lots of others to build and test your code and even the pull requests you get from others.

But I'm on Windows

To build applications for Windows there is a similar cloud based CI service, called AppVeyor.

And it works pretty similar to the other well known services for Linux:

  1. Put a YAML file into your repo with the build, test and deploy steps
  2. Connect your repo to the cloud CI service
  3. From now on a git push will do a lot for you.

Your CI pipeline is set up in a few clicks.

appveyor.yml

Here is an example of what such a YAML file looks like for AppVeyor. This is from a small C/C++ project I made a long time ago during a holiday without Visual Studio at hand. I just created that GitHub repo, added the appveyor.yml and voila - I got a compiled and statically linked Windows binary at GitHub releases.

version: 1.0.{build}
configuration: Release
platform: x64
build:
  project: myfavoriteproject.sln
  verbosity: minimal
test: off
artifacts:
- path: x64/Release/myfavoriteproject.exe
  name: Release
deploy:
- provider: GitHub
  auth_token:
    secure: xxxxx

The build worker in AppVeyor is fully armed with lots of development tools, so you can build projects for several languages like Node.js, .NET, Ruby, Python, Java ...

Docker build

AppVeyor has now released a new build worker with Windows Server 2016 and Docker Enterprise Edition 17.03.0-ee-1 pre-installed. That instantly enables you to build, test and publish Windows Docker images in the same lightweight way.

Docker build with AppVeyor

All you have to do is select the new build worker by adding image: Visual Studio 2017 to your appveyor.yml. There is no more work to do to get a full Windows Docker engine for your build.

The following appveyor.yml gives you an idea how easy an automated Docker build for Windows can be:

version: 1.0.{build}
image: Visual Studio 2017

environment:
  DOCKER_USER:
    secure: xxxxxxx
  DOCKER_PASS:
    secure: yyyyyyy
install:
  - docker version

build_script:
  - docker build -t me/myfavoriteapp .

test_script:
  - docker run me/myfavoriteapp

deploy_script:
  - docker login -u="$env:DOCKER_USER" -p="$env:DOCKER_PASS"
  - docker push me/myfavoriteapp

This is a very simple example. For the tests you can think of something more sophisticated like using Pester, Serverspec or Cucumber. For the deploy steps you can decide when to run them, e.g. only for a tagged build to push a new release.
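
As an idea how such a condition could look in a deploy_script step, here is a sketch in PowerShell using AppVeyor's built-in environment variables APPVEYOR_REPO_TAG and APPVEYOR_REPO_TAG_NAME:

# Sketch: only push the image when AppVeyor builds a Git tag
if ($env:APPVEYOR_REPO_TAG -eq "true") {
  docker login -u="$env:DOCKER_USER" -p="$env:DOCKER_PASS"
  docker tag me/myfavoriteapp "me/myfavoriteapp:$env:APPVEYOR_REPO_TAG_NAME"
  docker push "me/myfavoriteapp:$env:APPVEYOR_REPO_TAG_NAME"
}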

Docker Compose

You are not limited to building a single Docker image and running one container. Your build agent is a full Windows Docker host, so you can also install Docker Compose and spin up a multi-container application. The nice thing about AppVeyor is that the build workers also have Chocolatey preinstalled. So you only have to add a single short command to your appveyor.yml to download and install Docker Compose.

choco install docker-compose

Docker Swarm

You might also turn the Docker engine into a single node Docker swarm manager to work with the new docker stack deploy command. You can create a Docker Swarm with this command

docker swarm init

Add project to build

Adding AppVeyor to one of your GitHub repos is very simple. Sign in to AppVeyor with your GitHub account and select your project to add.

AppVeyor add project

Now you can also check the pull requests you or others create on GitHub.

GitHub pull request checks green

You can click on the green checkmark to view the console output of the build.

AppVeyor pull request build green

Tell me a secret

To push to the Docker Hub we need to configure some secrets in AppVeyor. After you are logged in to AppVeyor you can select the "Encrypt data" menu item from the drop down menu or use the link https://ci.appveyor.com/tools/encrypt

There you can enter your cleartext secret and it creates the encrypted configuration data you can use in your appveyor.yml.

Appveyor encrypt configuration data

These secret variables don't get injected into pull request builds, so nobody can fork your repo and send you an ls env: pull request to expose those variables in the output.

Immutable builds

One of the biggest advantages over self-hosting a CI pipeline is that you get immutable builds. You just do not have to care about the dirt and dust your build left on the build worker. AppVeyor - like all other cloud based CI systems - just throws away the build worker and you get another empty one for the next build.

AppVeyor immutable build

Even if you build Windows Docker images you don't have to clean up your Docker host. You can concentrate on your code, the build and your tests, and forget about maintaining your CI workers.

Examples

I have some GitHub repos that already use AppVeyor to build Windows Docker images, so you can have a look at how my setup works:

Conclusion

AppVeyor is my #1 when it comes to automated Windows builds. With the Docker support built-in it becomes even more interesting.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Is there a Windows Docker image for ...?]]>

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit of already available official Docker images for Windows.

These Docker images are well maintained and you can just start and put

]]>
https://stefanscherer.github.io/is-there-a-windows-docker-image-for/5986d4ec688a490001540970Tue, 21 Feb 2017 23:56:58 GMT

Do you want to try out Windows containers, but don't want to start too low level? If you are using one of the following programming languages you can benefit from already available official Docker images for Windows.

These Docker images are well maintained and you can just start and put your application code inside and run your application easily in a Windows container.

Someone else did the hard work of figuring out how to install the runtime or compiler for language XYZ into a Windows Server Core container or even a NanoServer container.

Prefer NanoServer

So starting to work with NanoServer is really easy with Docker as you only choose the right image for the FROM instruction in your Dockerfile. You can start with windowsservercore images, but I encourage you to test with nanoserver as well. For these languages it is easy to switch and the final Docker images are much smaller.

So let's have a look at which languages are already available. The corresponding Docker Hub page normally has a short intro on how to use these Docker images.

Go

The Go programming language is available on the Docker Hub as image golang. To get the latest Go 1.8 for either Windows Server Core or NanoServer you choose one of these.

  • FROM golang:windowsservercore
  • FROM golang:nanoserver

Have a look at the tags page if you want another version or if you want to pin a specific version of Golang.

Java

When you hear Java you might immediately think of Oracle Java. But searching for alternatives I found three OpenJDK distros for Windows. One of them recently made it into the official openjdk Docker images. Both Windows Server Core and NanoServer are supported.

  • FROM openjdk:windowsservercore
  • FROM openjdk:nanoserver

If you prefer Oracle Java for private installations, you can build a Docker image with the Dockerfiles provided in the oracle/docker-images repository.

Node.js

For Node.js there are pull requests awaiting a CI build agent for Windows to make it into the official node images.

In the meantime you can use one of my maintained images, for example the latest Node LTS version for both Windows Server Core and NanoServer:

  • FROM stefanscherer/node-windows:6
  • FROM stefanscherer/node-windows:6-nano

You also can find more tags and versions at the Docker Hub.

Python

The scripting language Python is available as a Windows Server Core Docker image at the official python images. Both major versions of Python are available.

  • FROM python:3-windowsservercore
  • FROM python:2-windowsservercore

I also have a Python Docker image for NanoServer with Python 3.6 to create smaller Docker images.

  • FROM stefanscherer/python-windows:nano

.NET Core

Microsoft provides Linux and Windows Docker images for .NET Core at microsoft/dotnet. For Windows it is NanoServer only, but this is no disadvantage as you should plan for the smaller NanoServer images.

  • FROM microsoft/dotnet:nanoserver

ASP.NET

For ASP.NET there are Windows Server Core Docker images for the major versions 3 and 4 with IIS installed at microsoft/aspnet.

  • FROM microsoft/aspnet:4.6.2-windowsservercore
  • FROM microsoft/aspnet:3.5-windowsservercore

Conclusion

The number of programming languages provided in Windows Docker images is growing. This makes it relatively easy to port Linux applications to Windows or use Docker images to distribute apps for both platforms.

Haven't found an image for your language? Have I missed something? Please let me know, and use the comments below if you have questions on how to get started. Thanks for your interest. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Getting started with Docker Swarm-mode on Windows 10]]>

Last Friday I noticed a blog post that Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10. A long awaited feature to use Docker Swarm on Windows, so it's time to test-drive it.

Well you wonder why this feature is available on

]]>
https://stefanscherer.github.io/docker-swarm-mode-windows10/5986d4ec688a49000154096dMon, 13 Feb 2017 01:31:00 GMT

Last Friday I noticed a blog post that Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10. A long awaited feature to use Docker Swarm on Windows, so it's time to test-drive it.

Well, you may wonder why this feature is available on Windows 10 and not on Windows Server 2016. Sure, it will make more sense in production running a Docker Swarm on multiple servers. The reason is that the Insider preview is the fastest channel to ship new features. Unfortunately there is no equivalent for the Windows Server editions.

So if you need it for Windows Server you have to wait a little longer. You can indeed test Swarm-Mode on Windows Server 2016 and Docker 1.13, but only without the Overlay network. To test Swarm-Mode with Overlay network you will need some machines running Windows 10 Insider 15031.

Preparation

In my case I use Vagrant to spin up Windows VM's locally on my notebook. The advantage is that you can describe some test scenarios with a Vagrantfile and share it on GitHub.

I had already played with Docker Swarm-Mode in December and created a Vagrant environment with some Windows Server 2016 VMs. I'll re-use this scenario and just replace the underlying Vagrant box.

So the hardest part is to build a Windows 10 Insider 15031 VM. The latest ISO file with Windows 10 Insider 15025 is a good starting point. You have to switch to the Fast Ring to fetch the latest updates for Insider 15031.

Normally I use Packer with my packer-windows templates available on GitHub to automatically create such Vagrant boxes. In this case I only have a semi-automated template. Download the ISO file, build a VM with the windows_10_insider.json template and update it to Insider 15031 manually. With such a VM, build the final Vagrant box with the windows_10_docker.json Packer template.

What we now have is a Windows 10 Insider 15031 VM with the Containers and Hyper-V features activated, Docker 1.13.1 installed and both Microsoft Docker images downloaded. All the time consuming things should be done in a Packer build to make the final vagrant up a breeze.

In my case I had to add the Vagrant box with

vagrant box add windows_10_docker ./windows_10_insider_15031_docker_vmware.box

Vagrant 1.9.1 is able to use linked clones for VMware Fusion, VirtualBox and Hyper-V. So you need this big Vagrant box only once on disk. For the Docker Swarm only a linked clone will be started for each VM to save time and disk space.

Create the Swarm

Now we use the prepared Vagrant environment and adjust it

git clone https://github.com/StefanScherer/docker-windows-box
cd docker-windows-box/swarm-mode
vi Vagrantfile

In the Vagrantfile I had to change only the name of the box after config.vm.box to the newly added Vagrant box. This is like changing the FROM in a Dockerfile.

git diff Vagrantfile

I also adjusted the memory a little bit to spin up more Hyper-V containers.

But now we are ready to create the Docker Swarm with a simple

vagrant up

This will spin up three Windows 10 VMs and build the Docker Swarm automatically for you. Using linked clones and the well prepared Vagrant base box it takes only a few minutes to have a complete Docker Swarm up and running.

docker node ls

After all three VM's are up and running, go into the first VM and open a PowerShell terminal. With

docker node ls

you can check if your Docker Swarm is active.

Create a network

Now we create a new overlay network with

docker network create --driver=overlay sample

You can list all networks with docker network ls as there are already some others.

Create a whoami service

With this new overlay network we start a simple service. I've prepared a Windows version of the whoami service. This is a simple webserver that just responds with its internal container hostname.

docker service create --name=whoami --endpoint-mode dnsrr `
  --network=sample stefanscherer/whoami-windows:latest

At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot publish ports externally right now. More to come in the near future.

Run visualizer

To make what happens in the next steps more visible I recommend running the Visualizer. On the first VM you can run the Visualizer with this script:

C:\vagrant\scripts\run-visualizer.ps1

Now open a browser with another helper script:

C:\vagrant\scripts\open-visualizer.ps1

Now you can scale up the service to spread it over your Docker swarm.

docker service scale whoami=4

This will bring up the service on all three nodes, with one of the nodes running two instances of the whoami service.

Visualizer

Just play around scaling the service up and down a little bit.

Build and create another service

As I've mentioned above, you cannot publish ports and there is no routing mesh at the moment. So the next thing is to create another service that accesses the whoami service inside the overlay network. On Linux you would probably use curl to do that. I just tried a simple PowerShell script to do the same.

Two small files are needed to create a Docker image. First the simple script askthem.ps1:

while ($true) {
  (Invoke-WebRequest -UseBasicParsing http://whoami:8080).Content
  Start-Sleep 1
}

As you can see the PowerShell script will access the webserver with the hostname whoami on port 8080.

Now put this script into a Docker image with this Dockerfile:

FROM microsoft/nanoserver
COPY askthem.ps1 askthem.ps1
CMD ["powershell", "-file", "askthem.ps1"]

Now build the Docker image with

docker build -t askthem .

We now can start the second service that consumes the whoami service.

docker service create --name=askthem --network=sample askthem:latest

You now should see one instance of the newly created askthem service. Let's have a look at the logs. As this Vagrant environment enables the experimental features of Docker we are able to get the logs with this command:

docker service logs askthem

In my case I was lucky and the askthem service got a response from one of the whoami containers running on a different Docker node.

Windows 10 Swarm-Mode

I haven't figured out why all the responses are from the same container. Maybe PowerShell or the askthem container itself caches the DNS requests.
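
One way to dig into that would be to resolve the service name explicitly on every request. Just a sketch of a modified askthem.ps1, assuming the DNS name whoami from above and that the .NET DNS API is available in the NanoServer image:

while ($true) {
  # Show which IP addresses the DNS round robin currently returns for the service name
  $ips = [System.Net.Dns]::GetHostAddresses("whoami") | ForEach-Object { $_.IPAddressToString }
  Write-Host "whoami resolves to: $($ips -join ', ')"
  (Invoke-WebRequest -UseBasicParsing http://whoami:8080).Content
  Start-Sleep 1
}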

But it still proves that overlay networking is working across multiple Windows machines.

More to play with

The Vagrant environment has some more things prepared. You can also spin up Portainer, which gives you a nice UI for your Docker swarm. You can have a look at your nodes, the Docker images, the running containers and services and so on.

I also found out that you can scale services in the Portainer UI by changing the replicas. Running Visualizer and Portainer side-by-side demonstrates that:

Visualizer and Portainer

Conclusion

I think this setup can help you try out the new overlay network in the Windows 10 Insider builds, and hopefully in Windows Server 2016 very soon as well.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Dockerizing Ghost and Buster to run a blog on GitHub pages]]>

I'm running this blog for nearly three years now. One of my first posts was the description how to setup Ghost for GitHub pages. In the past I've installed lots of tools on my Mac to run Ghost and Buster locally.

I still like this setup hosting only the static

]]>
https://stefanscherer.github.io/dockerizing-ghost-buster/5986d4ec688a49000154096eSat, 11 Feb 2017 18:46:46 GMT

I've been running this blog for nearly three years now. One of my first posts was a description of how to set up Ghost for GitHub pages. In the past I've installed lots of tools on my Mac to run Ghost and Buster locally.

I still like this setup, hosting only the static files at GitHub without maintaining an online server. But over time you also have to update Ghost, the Node version used and so on. That's why I have revisited my setup to make it easier for me to update Ghost by running all tools in Docker containers.

Requirements

  • Docker for Mac
  • git (is already installed)
  • docker-compose (already installed with D4M)

You can find my setup and all files in my GitHub repo StefanScherer/ghost-buster-docker.

As I'm upgrading from my local Ghost installation to this dockerized version I already have some content, the static files and my GitHub pages repo. Please refer to my old blog post on how to create your repo. The following commands should give you an idea of how to set up the two folders content and static.

git clone https://github.com/YOURNAME/ghost-buster-docker
cd ghost-buster-docker
mkdir content
git clone https://github.com/YOURNAME/YOURNAME.github.io static

docker-compose.yml

To simplify running Ghost and Buster I've created a docker-compose.yml with all the published ports and volume mount points.

There are three services

  • ghost
  • buster
  • preview

version: '2.1'

services:
  ghost:
    image: ghost:0.11.4
    volumes:
      - ./content:/var/lib/ghost
    ports:
      - 2368:2368

  buster:
    image: stefanscherer/buster
    command: /buster.sh
    volumes:
      - ./static:/static
      - ./buster.sh:/buster.sh

  preview:
    image: nginx
    volumes:
      - ./static:/usr/share/nginx/html:ro
    ports:
      - 2369:80

Edit content with Ghost

To create new blog posts or edit existing ones you spin up the ghost container with

docker-compose up -d ghost

and then open up your browser at http://localhost:2368/ghost (the port published in the docker-compose.yml) to log in and edit content. As you can see, the folder content is mapped into the ghost container to persist your Ghost blog data and images on your host machine.

Generate static files

To generate the static HTML pages we use the second service with Buster installed. This is no real service, so we do not "up" but "run" it with

docker-compose run buster

Now you have updated files in the static folder. You may edit the local script buster.sh to fix some links that Buster has broken in my pages in the past.

Preview static files

From time to time it is useful to check the generated static HTML files before pushing them to GitHub pages. The third service runs a webserver with the generated static pages.

docker-compose up -d preview

Open your browser at http://localhost:2369 and check if everything looks good. In my setup I've added Disqus and first wanted to try out the results of modifying the post.hbs file of the theme.

Deploy static files

If you are happy with the new static files it's time to push them. I've added a small script deploy.sh to do the final steps on the host as only git is used here. As I'm using GitHub with SSH and a passphrase I don't want to put that into a container. Have a look at the shell script and you will see that it's only a git add && git commit && git push script.

./deploy.sh
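
In essence such a deploy script boils down to a few git commands run in the static folder. A rough sketch (the real deploy.sh in the repo may differ slightly):

cd static
git add -A
git commit -m "Update static site"
git push origin master
cd ..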

Conclusion

I think this setup will help me in the future to update Ghost more easily.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>
<![CDATA[Winspector - a tool to inspect your and other's Windows images]]>

In my previous blog post I showed you how to get Windows Updates into your container images. But how do you know if your underlying Docker image you use in the FROM line of your Dockerfile also uses the correct version of the Windows base image?

Is there a way

]]>
https://stefanscherer.github.io/winspector/5986d4ec688a49000154096cSun, 08 Jan 2017 14:00:00 GMT

In my previous blog post I showed you how to get Windows Updates into your container images. But how do you know if the Docker image you use in the FROM line of your Dockerfile also uses the correct version of the Windows base image?

Is there a way to look into container images without downloading them?

There are several services like imagelayers.io, microbadger, shields.io and others which provide badges and online views for existing Docker images at Docker Hub. Unfortunately not all support Windows images at the moment.

Enter winspector

I found an inspector tool written in Python that might be useful for that task. I've enhanced it and created a tool called winspector, which is available as the Docker image stefanscherer/winspector for Windows and Linux. With this tool you can inspect any Windows Docker image on the Docker Hub.

Winspector will show you

  • The creation date of the image and the Docker version and Windows version used at build time.
  • The number of layers down to the Windows base image
  • Which Windows base image the given image depends on. So you know whether a random Windows image uses the up to date Windows base image or not.
  • The size of each layer. This is useful when you try to optimize your image size.
  • The "application size" without the Windows base layers. So you get an idea how small your Windows application image really is and what other users have to download provided that they already have the base image.
  • The history of the image. It tries to reconstruct the Dockerfile commands that have been used to build the image.

Run it from Windows

If you have Docker running with Windows containers, use this command to run the tool with any given image name and an optional tag.

docker run --rm stefanscherer/winspector microsoft/iis

run from windows

At the moment the Docker image depends on the windowsservercore base image. I'll try to move it to nanoserver to reduce download size for Windows 10 users.

Run it from Mac / Linux

If you have a Linux Docker engine running, just use the exact same command as on Windows. The Docker image stefanscherer/winspector is a multiarch Docker image and Docker will pull the correct OS specific image for you automatically.

docker run --rm stefanscherer/winspector microsoft/iis

run from mac

Inspecting some images

Now let's try winspector and inspect a random Docker image. We could start with the Windows base image itself.

$ docker run --rm stefanscherer/winspector microsoft/windowsservercore

Even for this image it can show you some details:

Image name: microsoft/windowsservercore
Tag: latest
Number of layers: 2
Sizes of layers:
  sha256:3889bb8d808bbae6fa5a33e07... - 4069985900 byte
  sha256:3430754e4d171ead00cf67667... - 913145061 byte
Total size (including Windows base layers): 4983130961 byte
Application size (w/o Windows base layers): 0 byte
Windows base image used:
  microsoft/windowsservercore:10.0.14393.447 full
  microsoft/windowsservercore:10.0.14393.693 update

As you can see the latest windowsservercore image has two layers. The sizes shown here are the download sizes of the compressed layers. The smaller one is the layer that will be replaced by a newer update layer with the next release.

How big is the winspector image?

Now let's have a look at the winspector Windows image to see what winspector can retrieve for you.

$ docker run --rm stefanscherer/winspector stefanscherer/winspector:windows-1.4.3

The (shortened) output looks like this:

Image name: stefanscherer/winspector
Tag: windows-1.4.3
Number of layers: 14
Schema version: 1
Architecture: amd64
Created: 2017-01-15 21:35:22 with Docker 1.13.0-rc7 on windows 10.0.14393.693
Sizes of layers:
  ...

Total size (including Windows base layers): 360497565 byte
Application size (w/o Windows base layers): 27188879 byte
Windows base image used:
  microsoft/nanoserver:10.0.14393.447 full
  microsoft/nanoserver:10.0.14393.693 update
History:
  ...

So the winspector Windows image is about 27 MByte and it uses the latest nanoserver base image.

Inspecting Linux images

And winspector is not restricted to Windows images, you can inspect Linux images as well.

If you run

$ docker run --rm stefanscherer/winspector stefanscherer/winspector:linux-1.4.3

then winspector will show you

Image name: stefanscherer/winspector
Tag: linux-1.4.3
Number of layers: 8
Schema version: 1
Architecture: amd64
Created: 2017-01-15 21:34:21 with Docker 1.12.3 on linux 
Sizes of layers:
  ...
Total size (including Windows base layers): 32708231 byte
Application size (w/o Windows base layers): 32708231 byte
Windows base image used:
  It does not seem to be a Windows image
History:
  ...

As you can see the Linux image is about 32 MByte.

So once you have downloaded the latest Windows base images like windowsservercore or nanoserver the download experience is the same for both platforms.

Conclusion

With winspector you can check for any Windows container image on the Docker Hub which version of Windows it uses.

You can also see how big each image layer is and learn how to optimize commands in your Dockerfile to create smaller Windows images.

The tool is open source on GitHub at github.com/StefanScherer/winspector. It is community driven, so feel free to send me feedback in the form of issues or pull requests.

As always, please leave a comment if you have questions or improvements or want to share your thoughts. You can follow me on Twitter @stefscherer.

]]>