Tuesday, August 4, 2015

Provisioning a RHEL GCE machine with packer "sudo: sorry, you must have a tty to run sudo"

Red Hat has long had an insane requiretty default for sudo (a "Defaults requiretty" line in /etc/sudoers), something that is fixed in later versions. Packer rightly doesn't allocate a tty by default, so if you provision a RHEL machine you'll likely see this:
sudo: sorry, you must have a tty to run sudo
In fact RHEL is quite the special snowflake: you need to set ssh_pty, and also specify a username, since root ssh is disabled by default. Here's my working builder:
    {
      "name": "rhel-7.1",
      "type": "googlecompute",
      "account_file": "account.json",
      "project_id": "YOUR_PROJECT_ID",
      "source_image": "rhel-7-v20150710",
      "zone": "us-central1-f",
      "instance_name": "rhel-7",
      "image_name": "rhel-7-{{timestamp}}",
      "machine_type": "n1-standard-1",
      "ssh_pty": "true",
      "ssh_username": "someuser"
    }
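If you'd rather fix the image than work around it, the other common approach is to comment out the requiretty default with a shell provisioner. A sketch of the sed involved, demonstrated here against a sample file rather than the real /etc/sudoers (in a provisioner you'd target /etc/sudoers as root, and this assumes GNU sed):

```shell
# Comment out the "Defaults requiretty" line that triggers the sudo error.
# Demonstrated on a sample file; a real provisioner would edit /etc/sudoers.
printf 'Defaults    requiretty\n' > sudoers.sample
sed -i 's/^Defaults[[:space:]]\{1,\}requiretty/# &/' sudoers.sample
cat sudoers.sample
```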

Thursday, July 30, 2015

Docker cheat sheet

Intended as a living document; just some basics for now.

Build from the Dockerfile in the current directory and tag the image as "demo":
docker build -t demo .
There's a good Dockerfile reference document and Dockerfile best practices document that you should read when writing a Dockerfile. Make sure you have a .dockerignore file that excludes all unnecessary stuff from the image to reduce bloat and reduce the amount of context that needs to be sent to the docker daemon on each rebuild.
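As a starting point, a .dockerignore might look like this (the entries are illustrative; tailor them to your project):

```
.git
*.pyc
__pycache__
node_modules
tmp/
```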

Run bash inside the container to poke around inside it:
docker run -it ubuntu bash
List local available images:
docker images
See what containers are running:
docker ps
Bash helper functions (credit to raiford). "dckr clean" is particularly useful when building an image that results in lots of orphans due to errors in the Dockerfile:
if which docker &> /dev/null; then
  function dckr {
    case $1 in
      clean)
        # Clean up orphaned images.
        docker rmi -f $(docker images -q -f dangling=true)
        ;;
      fullclean)
        # Delete all docker images, with prompt.
        read -r -p "Delete all docker images? [y/N] " response
        if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; then
          docker rmi $(docker images -q)
        fi
        ;;
      killall)
        # Kill all running docker containers.
        docker kill $(docker ps -q)
        ;;
      *)
        echo "Commands: clean, fullclean, killall"
        ;;
    esac
  }
fi
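The delete-all prompt guards the destructive rmi behind a yes/no check. A standalone POSIX-sh sketch of the same accept-set (no docker required; the function name is just for illustration):

```shell
# Succeed only for y/Y/yes-style answers, fail otherwise.
confirm() {
  case $1 in
    [yY]|[yY][eE][sS]) return 0 ;;
    *) return 1 ;;
  esac
}
confirm "YES" && echo "would delete"
confirm "no" || echo "would skip"
```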

Docker vs. Vagrant

Docker and Vagrant are somewhat similar technologies, so what are their relative strengths and weaknesses and when should you choose one over the other? I'm fairly new to both, but here's a compare-and-contrast based on what I've learned so far.

The tl;dr is that Docker is really best for running applications in production and fast testing across linux flavors. Vagrant handles Windows and OS X in addition to Linux, and is good for automating building and packaging of software, and testing where you need a full OS stack running. If you need to build a .dmg or a .exe installer for your software, Vagrant is a good place to do that.

Guest OS
  Docker: Linux only (for now, see update below), based on linux containers.
  Vagrant: Mac (on Mac hardware), Windows, Linux, etc. Relies on VirtualBox or VMware.

Host OS
  Docker: Mac/Win/Linux. Windows and OS X use boot2docker, which is essentially a small linux VirtualBox VM that runs the linux containers.
  Vagrant: Mac/Win/Linux.

Configuration
  Docker: A Dockerfile describes the steps to build the docker container, which holds everything needed to run the program. You can start from standard images downloaded from Docker Hub, such as Ubuntu (an extremely minimal ubuntu install), and you can upload your own images.
  Vagrant: A Vagrantfile describes what OS you want, any host<->guest file shares, and any provisioning scripts that should be run when the vm ("box") is started. You can start from standard fairly-minimal boxes for many OSes downloaded from Atlas, and upload your own box.

Building
  Docker: When you build a docker image it creates a new container for each instruction in the Dockerfile and commits it to the image. Each step is cacheable, so if you modify something in the Dockerfile it only needs to re-execute from that point onward.
  Vagrant: Many people use Vagrant without building their own box, but you can build your own. It's essentially a case of getting the VM running how vagrant likes it (packer can help here), making your customizations, then exporting and uploading.

Running
  Docker: When you docker "run" something it creates a container from the image specified, and runs your application inside. All of the regular system stuff you might expect to be running (rsyslog, ntpd, sshd, cron) isn't. You have Ubuntu installed in there, but it isn't running Ubuntu. The "run" command allows you to specify shared folders with the host.
  Vagrant: You run the full-blown OS inside a VM with all the bells and whistles (rsyslog, sshd, etc.) you would expect. When you "vagrant up" it brings up the VM and runs any provisioning scripts you specified in the Vagrantfile. This is typically where you do all the installing necessary to get your build environment ready; the provision script is the heart of the reproducible build you're creating. Once the provision script is done you'll typically have a Makefile or other scripts that SSH into the environment and do the actual software building.

SSH access
  Docker: If you're trying to ssh into your container, you're almost certainly doing it wrong.
  Vagrant: SSH is a core part of the technology and used heavily; vagrant invisibly manages selecting ports for forwarding ssh to each of your VMs.

Startup time
  Docker: <1 second.
  Vagrant: <1 minute.

Suitable for
  Docker: Production and dev/test.
  Vagrant: Dev/test only.

Further reading

If you read this far you should also read this, where the authors of both tools explain the differences:
"Vagrant is a tool for managing virtual machines. Docker is a tool for building and deploying applications by packaging them into lightweight containers."

Update Aug 24, 2015

Microsoft has thrown their weight behind docker, and will be implementing shared-kernel containerization for Windows Server, so you will be able to run Windows Server containers on Windows Server, as Mark explains here. You'll still need linux to run linux containers natively, and boot2docker will continue to be a way to bring linux containers to other OSes. Once container-capable windows servers are available from the Microsoft/Amazon/Google clouds, much of the niche that Vagrant occupies will have eroded, and windows server docker containers should be a better solution to the problem of being unable to share windows build VMs due to licensing.

Changing the IP address of the docker0 interface on Ubuntu

Docker picks an address range for the docker0 bridge that it thinks is unused. Sometimes it makes a bad choice. The docs tell you that this can be changed; however, it's not a particularly obvious process. Here's how you do it on Ubuntu.

First, stop the daemon:
sudo service docker stop
Edit /etc/default/docker and add a line like this:
DOCKER_OPTS="--bip=<new address>"
Then bring the interface down:
sudo ip link set docker0 down
Then delete the bridge:
sudo brctl delbr docker0
Then restart the service:
sudo service docker start
If you don't delete the bridge docker will fail to start. But it won't write anything to the logs. If you run it interactively you'll see the problem:
$ sudo docker -d --bip=<new address>
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) 
INFO[0000] [graphdriver] using prior storage driver "aufs" 
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1 
FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: bridge IPv4 (<old address>) does not match requested configuration 

Friday, May 1, 2015

Measure windows cmd.exe command execution time

The *nix command 'time' is very useful for timing how long commands take. To do the same with the windows command line you have two options. The best is to use PowerShell's 'Measure-Command', in this case measuring 2 seconds of sleep:
C:\>powershell "Measure-Command{sleep 2}"

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 992
Ticks             : 19924681
TotalDays         : 2.30609733796296E-05
TotalHours        : 0.000553463361111111
TotalMinutes      : 0.0332078016666667
TotalSeconds      : 1.9924681
TotalMilliseconds : 1992.4681
By default Measure-Command will suppress stdout:
C:\Users\vagrant>powershell "Measure-Command{ps}"

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 78
Ticks             : 781767
TotalDays         : 9.04822916666667E-07
TotalHours        : 2.171575E-05
TotalMinutes      : 0.001302945
TotalSeconds      : 0.0781767
TotalMilliseconds : 78.1767
If you want to see it you can just pipe it to Out-Default:
C:\>powershell "Measure-Command{ps | Out-Default}"

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
     23       5     3272       3540    46     0.06   3056 cmd


    221      15     4388       3620    95     0.09   2496 wmpnetwk

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 309
Ticks             : 3097644
TotalDays         : 3.58523611111111E-06
TotalHours        : 8.60456666666667E-05
TotalMinutes      : 0.00516274
TotalSeconds      : 0.3097644
TotalMilliseconds : 309.7644
Or if you want to time a bat file:
C:\>powershell "Measure-Command{& C:\mybatfile.bat | Out-Default}"
There is another way to time commands using the %TIME% environment variable. However, there is a gotcha: this example doesn't work, because both %TIME% references are expanded once when the line is parsed, not as each command executes:
::This is broken
C:\>echo %TIME% && sleep 2 && echo %TIME%
You can work around this by running up a new cmd.exe with /v:on, which enables delayed environment variable expansion for variables referenced with "!". The downside to this approach is that you need to do the maths yourself, and you may need to redirect the output of your target command or your echo statements to a file, to avoid the first timestamp disappearing off the screen if there is lots of output.
C:\>cmd /v:on /c "echo !TIME! && sleep 2 && echo !TIME!"
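The !TIME! trick boils down to "take two timestamps and subtract". For comparison, the same idea in POSIX shell (whole-second resolution; a sketch, not a substitute for 'time'):

```shell
# Capture epoch-second timestamps around the command and subtract.
start=$(date +%s)
sleep 2
end=$(date +%s)
echo "elapsed: $((end - start)) seconds"
```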

Thursday, April 30, 2015

Installing a windows MSI on the commandline, detecting MSI parameters (aka properties)

Installing an MSI is fairly straightforward. Here's an example of installing python to a designated directory, without any GUI prompting:
msiexec /i "python-2.7.9.amd64.msi" /passive TARGETDIR="C:\tools\python2"
But how do I know about the TARGETDIR option, I hear you say? Amazingly, the msiexec binary can't tell you what parameters (public external properties) the MSI will accept. TARGETDIR appears to be a standard InstallShield property, but the MSI can expose others. The only way I found that doesn't require additional tools was to enumerate the properties by dropping the MSI onto this piece of VBScript:
Call GetArguments(ArgArray)

If IsArray(ArgArray) Then

    For Each ArrayElement In ArgArray
        WScript.Echo GetMSIProperties(ArrayElement)
    Next

Else

    WScript.Echo "Drag and drop MSI-File over the Script"

End If

' ----------------------------------------
Private Function GetMSIProperties(strMSIFile)

    Dim oWI : Set oWI = CreateObject("WindowsInstaller.Installer")
    Dim oDB : Set oDB = oWI.OpenDatabase(strMSIFile, 2)
    Dim oView : Set oView = oDB.OpenView("Select * From Property")
    Dim oRecord

    oView.Execute

    ' Walk every row of the Property table, one space-delimited line per row.
    Do
        Set oRecord = oView.Fetch
        If oRecord Is Nothing Then Exit Do

        iColumnCount = oRecord.FieldCount
        rowData = Empty
        delim = " "

        For iColumn = 1 To iColumnCount
            If iColumn = iColumnCount Then delim = vbLf
            rowData = rowData & oRecord.StringData(iColumn) & delim
        Next

        strMessage = strMessage & rowData
    Loop

    Set oRecord = Nothing
    Set oView = Nothing
    Set oDB = Nothing
    Set oWI = Nothing

    GetMSIProperties = strMessage

End Function

' ----------------------------------------
Private Function GetArguments(SourceArray)

    Dim iCount : iCount = 0

    If WScript.Arguments.Count > 0 Then

        ' SourceArray is ByRef, so this fills the caller's ArgArray.
        ReDim SourceArray(WScript.Arguments.Count - 1)

        For Each Argument In WScript.Arguments
            SourceArray(iCount) = Argument
            iCount = iCount + 1
        Next

        GetArguments = SourceArray

    End If

End Function
Running the script will pop up a GUI window with information about the MSI, including the available properties. If you're building MSIs regularly you probably already have the relevant SDK installed, and can use Orca to inspect the Property table.

Tuesday, April 28, 2015

Using tox to test code under multiple python versions

Tox is a great project that makes testing against different python versions simple, standing on the shoulders of pip and virtualenv. For our project the tox specification is incredibly simple:
$ cat tox.ini 
[tox]
envlist = py27, py34

[testenv]
commands = nosetests -v
deps = -rrequirements.txt
This tells tox to create two virtualenvs, one for python 2.7 and one for python 3.4, install dependencies listed in requirements.txt, and then run tests with "nosetests -v". You can run the tests against all of the configured environments with just:
tox
Or a particular subset like this:
tox -e py34
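For completeness, nosetests discovers modules and functions named test_*. A minimal hypothetical test module (the names are made up) that passes on both py27 and py34 looks like:

```python
# test_example.py -- a minimal nose-discoverable test module (hypothetical).

def test_addition():
    # Plain asserts are all nose needs.
    assert 1 + 1 == 2

def test_concatenation():
    # Behaves identically on Python 2.7 and 3.4.
    assert "a" + "b" == "ab"
```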