Monday, October 26, 2015

Return a new return_value for each call when mocking tempfile in python

Here is a python snippet that mocks out tempfile.NamedTemporaryFile() to return a StringIO instead, adding each one to a list of file handles so the test can inspect what was written.

The somewhat unintuitive bit here is that by passing side_effect a function, mock will use the result of that function as the return_value for the mocked tempfile.NamedTemporaryFile(). Or as the docs say: "The function is called with the same arguments as the mock, and unless it returns DEFAULT, the return value of this function is used as the return value."

def _GetTempFile(self, *args, **kwargs):
  # Return a fresh in-memory file object for every call.
  fake_file = StringIO.StringIO()
  fake_file.flush = mock.MagicMock()
  self.output_streams.append(fake_file)
  return fake_file

with mock.patch.object(tempfile, "NamedTemporaryFile", side_effect=self._GetTempFile):
  # mocked code...
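The whole pattern can be sketched as a self-contained example. Here unittest.mock stands in for the standalone mock package, and names like output_streams are illustrative:

```python
import io
import tempfile
from unittest import mock

output_streams = []

def _get_temp_file(*args, **kwargs):
    # Each call hands back a fresh in-memory file, recorded for later checks.
    fake = io.StringIO()
    output_streams.append(fake)
    return fake

with mock.patch.object(tempfile, "NamedTemporaryFile", side_effect=_get_temp_file):
    first = tempfile.NamedTemporaryFile()
    second = tempfile.NamedTemporaryFile()

# side_effect's return value became the mock's return value, fresh each call.
assert first is not second
assert output_streams == [first, second]
```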

Debugging pyinstaller binary install missing import "ImportError: No module named google.protobuf"

We were missing an import in our pyinstaller, the error was:
ImportError: No module named google.protobuf
There is a simple fix for this: you just need to add an __init__.py. But after doing that, pyinstaller still didn't seem to be picking up the library, so I wanted to check what it was actually importing. Here's how you can do that. First, unzip the self-extracting installer:
unzip myinstaller.exe
Then inspect it with pyi-archive_viewer (which you should have if you pip installed pyinstaller):
$ pyi-archive_viewer myprogram.exe
 pos, length, uncompressed, iscompressed, type, name
[(0, 5392645, 5392645, 0, 'z', 'out00-PYZ.pyz'),
 (5392645, 174, 239, 1, 'm', 'struct'),
 (5392819, 1161, 2711, 1, 'm', 'pyi_os_path'),
 (5393980, 4803, 13033, 1, 'm', 'pyi_archive'),
 (5398783, 3976, 13864, 1, 'm', 'pyi_importers'),
 (5402759, 1682, 3769, 1, 's', '_pyi_bootstrap'),
 (5404441, 4173, 13142, 1, 's', 'pyi_carchive'),
 (5408614, 316, 646, 1, 's', 'pyi_rth_Tkinter'),
 (5408930, 433, 918, 1, 's', 'pyi_rth_pkgres'),
 (5409363, 981, 2129, 1, 's', 'pyi_rth_win32comgenpy'),
 (5410344, 941, 2291, 1, 's', 'client')]
? O out00-PYZ.pyz
{'BaseHTTPServer': (False, 1702685L, 8486),
 'ConfigParser': (False, 1070260L, 9034),
 'Cookie': (False, 2449334L, 9023),
 'Crypto': (True, 1151586L, 621),
 'Crypto.Cipher': (True, 4004931L, 1053),
 'Crypto.Cipher.AES': (False, 2553735L, 1656),
 'Crypto.Cipher.ARC4': (False, 5042987L, 1762),

[snip]

 'google': (True, 3743324L, 102),
 'google.protobuf': (True, 328350L, 111),
 'google.protobuf.descriptor': (False, 2112676L, 9136),
 'google.protobuf.descriptor_database': (False, 344343L, 1713),
 'google.protobuf.descriptor_pb2': (False, 4695199L, 6549),
 'google.protobuf.descriptor_pool': (False, 2080592L, 6890),
 'google.protobuf.internal': (True, 1403354L, 120),
 'google.protobuf.internal.api_implementation': (False, 5300293L, 697),
 'google.protobuf.internal.containers': (False, 4954671L, 3244),
 'google.protobuf.internal.cpp_message': (False, 1447735L, 8287),
 'google.protobuf.internal.decoder': (False, 1318001L, 7942),
 'google.protobuf.internal.encoder': (False, 5337352L, 7379),
 'google.protobuf.internal.enum_type_wrapper': (False, 5202459L, 1039),
 'google.protobuf.internal.message_listener': (False, 2642102L, 1112),
 'google.protobuf.internal.python_message': (False, 4574735L, 13240),
 'google.protobuf.internal.type_checkers': (False, 4723000L, 3257),
 'google.protobuf.internal.wire_format': (False, 4040106L, 2834),
 'google.protobuf.message': (False, 370977L, 3410),
 'google.protobuf.pyext': (True, 650580L, 116),
 'google.protobuf.pyext.cpp_message': (False, 349637L, 763),
 'google.protobuf.reflection': (False, 2578008L, 2557),
 'google.protobuf.symbol_database': (False, 3971343L, 2075),
 'google.protobuf.text_encoding': (False, 2736366L, 1447),
 'google.protobuf.text_format': (False, 4991343L, 8425),

This shows that google.protobuf was successfully included by pyinstaller. In my case pyinstaller was doing the right thing, I had another step in the installer generation process that was using an older version of the installer that was still missing the proto library.

Thursday, September 17, 2015

Patching a python class attribute in a unit test using mock

I often want to change a class attribute in a unit test. While this use case is sort of covered in the official documentation, there's no actual example of it. So say you have a class:
class MyClass(object):
  myattr = 1
and you want to test the functionality of the class, but need to change the value of myattr from the default:
import mock
import modulename

with mock.patch.object(modulename.MyClass, "myattr", new=50):
  # do test stuff
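As a runnable sketch (using unittest.mock rather than the standalone mock package; the class and values are from the example above):

```python
from unittest import mock

class MyClass(object):
    myattr = 1

def read_attr():
    # Any code that reads MyClass.myattr sees the patched value.
    return MyClass.myattr

with mock.patch.object(MyClass, "myattr", new=50):
    assert read_attr() == 50

# The original value is restored when the context manager exits.
assert MyClass.myattr == 1
```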

Friday, September 4, 2015

Snippet for verifying a SHA256 hash of a downloaded file in bash

This snippet will download openssl and verify the hash against a hard-coded value. This is useful on old operating systems (specifically CentOS 5.11) that can't actually establish an SSL connection to lots of sites.
#!/bin/bash

SSL_VERSION=1.0.2d
SSL_SHA256=671c36487785628a703374c652ad2cebea45fa920ae5681515df25d9f2c9a8c8
RETRIEVED_HASH=$(wget -q -O - http://www.openssl.org/source/openssl-${SSL_VERSION}.tar.gz | tee openssl-${SSL_VERSION}.tar.gz | sha256sum | cut -d' ' -f1)
if [ "${RETRIEVED_HASH}" != "${SSL_SHA256}" ]; then
  echo "Bad hash for openssl-${SSL_VERSION}.tar.gz, quitting"
  exit 1
fi
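The same verification in Python, should you need it outside bash, is a couple of lines with hashlib. The helper name and the test vector here are mine, not from the original script:

```python
import hashlib

def sha256_matches(data, expected_hex):
    # Hash the downloaded bytes and compare against the pinned digest.
    return hashlib.sha256(data).hexdigest() == expected_hex

# Known vector: SHA-256 of the empty byte string.
EMPTY_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
assert sha256_matches(b"", EMPTY_SHA256)
```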

Tuesday, August 4, 2015

Provisioning a RHEL GCE machine with packer "sudo: sorry, you must have a tty to run sudo"

RedHat has long had an insane requiretty default for sudo, something that is fixed in later versions. Packer rightly doesn't set a tty by default, so if you provision a RHEL machine you'll likely see this:
sudo: sorry, you must have a tty to run sudo
In fact RHEL is quite the special snowflake: you need to set ssh_pty and also specify a username, since root ssh is disabled by default. Here's my working builder:
    {
      "name": "rhel-7.1",
      "type": "googlecompute",
      "account_file": "account.json",
      "project_id": "YOUR_PROJECT_ID",
      "source_image": "rhel-7-v20150710",
      "zone": "us-central1-f",
      "instance_name": "rhel-7",
      "image_name": "rhel-7-{{timestamp}}",
      "machine_type": "n1-standard-1",
      "ssh_pty": "true",
      "ssh_username": "someuser"
    }

Thursday, July 30, 2015

Docker cheat sheet

Intended as a living document, just some basics for now.

Build from the Dockerfile in the current directory and tag the image as "demo":
docker build -t demo .
Build but don't cache the intermediate containers. This is useful if the previous build failed and the intermediate container is broken:
docker build --no-cache .
There's a good Dockerfile reference document and Dockerfile best practices document that you should read when writing a Dockerfile. Make sure you have a .dockerignore file that excludes all unnecessary stuff from the image to reduce bloat and reduce the amount of context that needs to be sent to the docker daemon on each rebuild.

Run bash inside the container to poke around inside it:
docker run -it ubuntu bash
Share a host directory with the container:
docker run -it -v /home/me/somedir:/mounted_inside ubuntu:latest bash
List local available images:
docker images
See what containers are running:
docker ps
Bash helper functions (credit to raiford). "dckr clean" is particularly useful when building an image that results in lots of orphans due to errors in the Dockerfile:
if which docker &> /dev/null; then
  function dckr {
    case $1 in
      clean)
        # Clean up orphaned images.
        docker rmi -f $(docker images -q -f dangling=true)
        ;;
      cleanall)
        # Delete All Docker images with prompt.
        read -r -p "Delete all docker images? [y/N] " response
        if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; then
          docker rmi $(docker images -q)
        fi
        ;;
      killall)
        # Kill all running docker processes
        docker kill $(docker ps -q)
        ;;
      *)
        echo "Commands: clean, cleanall, killall"
        ;;
    esac
  }
fi
To run a previously created container with bash, start it as normal and then use exec (this assumes your original container can actually run successfully):
docker start [container id]
docker exec -it [container id] /bin/bash

Docker vs. Vagrant

Docker and Vagrant are somewhat similar technologies, so what are their relative strengths and weaknesses and when should you choose one over the other? I'm fairly new to both, but here's a compare-and-contrast based on what I've learned so far.

The tl;dr is that Docker is really best for running applications in production and fast testing across linux flavors. Vagrant handles Windows and OS X in addition to Linux, and is good for automating building and packaging of software, and testing where you need a full OS stack running. If you need to build a .dmg or a .exe installer for your software, Vagrant is a good place to do that.

Guest OS
  Docker: Linux only (for now; see update below), based on Linux containers.
  Vagrant: Mac (on Mac hardware), Windows, Linux, etc. Relies on VirtualBox or VMware.

Host OS
  Docker: Mac/Win/Linux. Windows and OS X use boot2docker, which is essentially a small Linux VirtualBox VM that runs the Linux containers.
  Vagrant: Mac/Win/Linux.

Configuration
  Docker: A Dockerfile describes the steps to build the docker container, which holds everything needed to run the program. You can start from standard images downloaded from dockerhub, such as Ubuntu (an extremely minimal ubuntu install), and you can upload your own images.
  Vagrant: A Vagrantfile describes what OS you want, any host<->guest file shares, and any provisioning scripts that should be run when the vm ("box") is started. You can start from standard fairly-minimal boxes for many OSes downloaded from Atlas, and upload your own box.

Building
  Docker: When you build a docker image it creates a new container for each instruction in the Dockerfile and commits it to the image. Each step is cacheable, so if you modify something in the Dockerfile, it only needs to re-execute from that point onward.
  Vagrant: Many people use Vagrant without building their own box, but you can build your own. It's essentially a case of getting the VM running how vagrant likes it (packer can help here), making your customizations, then exporting and uploading.

Running
  Docker: When you docker "run" something it creates a container from the image specified and runs your application inside. All of the regular system stuff you might expect to be running (rsyslog, ntpd, sshd, cron) isn't. You have Ubuntu installed in there, but it isn't running Ubuntu. The "run" command allows you to specify shared folders with the host.
  Vagrant: You run the full-blown OS inside a VM with all the bells and whistles (rsyslog, sshd, etc.) you would expect. When you "vagrant up" it brings up the VM and runs any provisioning scripts you specified in the Vagrantfile. This is typically where you do all the installing necessary to get your build environment ready; the provision script is the heart of the reproducible build you're creating. Once the provision script is done you'll typically have a Makefile or other scripts that will SSH into the environment and do the actual software building.

SSH access
  Docker: If you're trying to ssh into your container, you're almost certainly doing it wrong.
  Vagrant: SSH is a core part of the technology and used heavily; vagrant invisibly manages selecting ports for forwarding ssh to each of your VMs.

Startup time
  Docker: <1 second.
  Vagrant: <1 minute.

Suitable for
  Docker: Production and dev/test.
  Vagrant: Dev/test only.

Further reading

If you read this far you should also read this, where the authors of both tools explain the differences:
"Vagrant is a tool for managing virtual machines. Docker is a tool for building and deploying applications by packaging them into lightweight containers."

Update Aug 24, 2015

Microsoft has thrown their weight behind docker, and will be implementing shared-kernel containerization for Windows Server, so you will be able to run Windows Server containers on Windows Server as Mark explains here. You'll still need linux to run linux containers natively, and boot2docker will continue to be a way to bring linux containers to other OSes. Once container-capable windows servers are available from Microsoft/Amazon/Google clouds, much of the niche that Vagrant occupies will have been eroded, and windows server docker containers should be a better solution to the problem of being unable to share windows build VMs due to licensing.

Changing the IP address of the docker0 interface on Ubuntu

Docker picks an address range for the docker0 range that it thinks is unused. Sometimes it makes a bad choice. The docs tell you that this can be changed, however, it's not a particularly obvious process. Here's how you do it on Ubuntu.

First, stop the daemon:
sudo service docker stop
Edit /etc/default/docker and add a line like this:
DOCKER_OPTS="--bip=192.168.1.1/24"
Then bring the interface down:
sudo ip link set docker0 down
Then delete the bridge:
sudo brctl delbr docker0
Then restart the service:
sudo service docker start
If you don't delete the bridge, docker will fail to start, but it won't write anything to the logs. If you run it interactively you'll see the problem:
$ sudo docker -d --bip=192.168.1.1/24
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) 
INFO[0000] [graphdriver] using prior storage driver "aufs" 
WARN[0000] Running modprobe bridge nf_nat failed with message: , error: exit status 1 
FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: bridge IPv4 (10.0.42.1) does not match requested configuration 192.168.1.1 

Friday, May 1, 2015

Measure windows cmd.exe command execution time

The *nix command 'time' is very useful for timing how long commands take. To do the same with the windows command line you have two options. The best is to use powershell's 'Measure-Command', in this case measuring 2 seconds of sleep:
C:\>powershell "Measure-Command{sleep 2}"


Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 1
Milliseconds      : 992
Ticks             : 19924681
TotalDays         : 2.30609733796296E-05
TotalHours        : 0.000553463361111111
TotalMinutes      : 0.0332078016666667
TotalSeconds      : 1.9924681
TotalMilliseconds : 1992.4681
By default Measure-Command will suppress stdout:
C:\Users\vagrant>powershell "Measure-Command{ps}"


Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 78
Ticks             : 781767
TotalDays         : 9.04822916666667E-07
TotalHours        : 2.171575E-05
TotalMinutes      : 0.001302945
TotalSeconds      : 0.0781767
TotalMilliseconds : 78.1767
If you want to see it you can just pipe it to Out-Default:
C:\>powershell "Measure-Command{ps | Out-Default}"

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
     23       5     3272       3540    46     0.06   3056 cmd

[snip]

    221      15     4388       3620    95     0.09   2496 wmpnetwk

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 309
Ticks             : 3097644
TotalDays         : 3.58523611111111E-06
TotalHours        : 8.60456666666667E-05
TotalMinutes      : 0.00516274
TotalSeconds      : 0.3097644
TotalMilliseconds : 309.7644
Or if you want to time a bat file:
C:\>powershell "Measure-Command{& C:\mybatfile.bat | Out-Default}"
There is another way to time commands using the %TIME% environment variable. However there is a gotcha: this example doesn't work because both %TIME% references are expanded once, when the line is parsed, rather than as each command executes:
::This is broken
C:\>echo %TIME% && sleep 2 && echo %TIME%
10:47:36.10
10:47:36.10
You can work around this by running up a new cmd.exe with /v:on which enables delayed environment variable expansion when you specify them with "!". The downside to this approach is that you need to do the maths yourself, and you may need to redirect the output of your target command or your echo statements to a file to avoid the first one disappearing off the screen if there is lots of output.
C:\>cmd /v:on /c "echo !TIME! && sleep 2 && echo !TIME!"
10:42:15.71
10:42:17.73
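Doing "the maths yourself" on the two !TIME! stamps is mechanical. A small Python helper (mine, assuming same-day timestamps with centisecond precision) shows the arithmetic:

```python
def elapsed_seconds(start, end):
    # cmd.exe %TIME% looks like "10:42:15.71": H:M:S plus centiseconds.
    def to_seconds(stamp):
        hms, centi = stamp.rsplit(".", 1)
        hours, minutes, seconds = (int(part) for part in hms.split(":"))
        return hours * 3600 + minutes * 60 + seconds + int(centi) / 100.0
    return round(to_seconds(end) - to_seconds(start), 2)

# The timestamps from the delayed-expansion example above:
assert elapsed_seconds("10:42:15.71", "10:42:17.73") == 2.02
```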

Thursday, April 30, 2015

Installing a windows MSI on the commandline, detecting MSI parameters (aka properties)

Installing a MSI is fairly straightforward. Here's an example of installing python without any gui prompting to a designated directory.
msiexec /i "python-2.7.9.amd64.msi" /passive TARGETDIR="C:\tools\python2"
But how do I know about the TARGETDIR option, I hear you say? Amazingly, the msiexec binary can't tell you what parameters (public external properties) an MSI will accept. TARGETDIR is a standard Windows Installer property, but an MSI can expose others. The only way I found that doesn't require additional tools was to enumerate the properties by dropping the MSI onto this piece of VBScript:
Call GetArguments(ArgArray)

If IsArray(ArgArray) Then
    For Each ArrayElement In ArgArray
        WScript.Echo GetMSIProperties(ArrayElement)
    Next
Else
    WScript.Echo "Drag and drop MSI-File over the Script"
End If

' ----------------------------------------
Private Function GetMSIProperties(strMSIFile)
    Dim oWI : Set oWI = CreateObject("WindowsInstaller.Installer")
    Dim oDB : Set oDB = oWI.OpenDatabase(strMSIFile, 2)
    Dim oView : Set oView = oDB.OpenView("Select * From Property")
    Dim oRecord
    oView.Execute

    Do
        Set oRecord = oView.Fetch
        If oRecord Is Nothing Then Exit Do

        iColumnCount = oRecord.FieldCount
        rowData = Empty
        delim = " "

        For iColumn = 1 To iColumnCount
            If iColumn = iColumnCount Then delim = vbLf
            rowData = rowData & oRecord.StringData(iColumn) & delim
        Next

        strMessage = strMessage & rowData
    Loop

    Set oRecord = Nothing
    Set oView = Nothing
    Set oDB = Nothing
    Set oWI = Nothing

    GetMSIProperties = strMessage
End Function

' ----------------------------------------
Private Function GetArguments(SourceArray)
    Dim iCount : iCount = 0

    If WScript.Arguments.Count > 0 Then
        ReDim ArgArray(WScript.Arguments.Count - 1)

        For Each Argument In WScript.Arguments
            ArgArray(iCount) = Argument
            iCount = iCount + 1
        Next

        iCount = Null
        GetArguments = ArgArray
    End If
End Function
Which will pop up a GUI window with information about the MSI including the available properties. If you're building MSIs regularly you probably already have the relevant SDK installed and can use Orca to inspect the property table.

Tuesday, April 28, 2015

Using tox to test code under multiple python versions

Tox is a great project that makes testing against different python versions simple, standing on the shoulders of pip and virtualenv. For our project the tox specification is incredibly simple:
$ cat tox.ini 
[tox]
envlist = py27, py34

[testenv]
commands = nosetests -v
deps =
    nose
    -rrequirements.txt
This tells tox to create two virtualenvs, one for Python 2.7 and one for Python 3.4, install nose plus the dependencies listed in requirements.txt, and then run the tests with "nosetests -v". You can run the tests against all of the configured environments with just
tox
Or a particular subset like this:
tox -e py34
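For completeness, here's a trivial test module of the kind "nosetests -v" would discover and run under both interpreters (the file name and tests are illustrative, not from our project):

```python
# test_basics.py -- nose discovers test_* functions; plain asserts suffice.

def test_addition():
    assert 1 + 1 == 2

def test_string_formatting():
    # Runs unchanged under both Python 2.7 and 3.4.
    assert "{0}-{1}".format("a", "b") == "a-b"
```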

Thursday, April 23, 2015

Python requirements.txt: including other files, and installing specific git commits

Some notes on the lesser-known aspects of requirements.txt files.

Recursively including other requirements files

You can recursively include other requirements files like this:
$ cat requirements.txt 
-r client_requirements.txt
-r server_requirements.txt

$ cat client_requirements.txt 
mox==0.5.3
pexpect==3.3

$ cat server_requirements.txt 
pexpect==3.3

$ pip install -r requirements.txt
Unfortunately, this particular configuration won't actually work. You get:
Double requirement given: pexpect==3.3 (from -r server_requirements.txt (line 1)) (already in pexpect==3.3 (from -r client_requirements.txt (line 2)), name='pexpect')
pip can't actually handle complex dependency resolution, which means that if you have the same dependency in more than one file, even if the versions don't conflict, it will refuse to install anything.

Installing specific git commits

If you've ever had to work around a bug in a released version you might find yourself doing something like this:
git clone -b develop https://github.com/pyinstaller/pyinstaller.git
cd pyinstaller
git reset --hard edb5d438d8df5255a5c8f70f42f11f75aa4e08cf
python setup.py install
But pip can actually do all of that for you, this effectively does the same thing:
pip install git+https://github.com/pyinstaller/pyinstaller.git@edb5d438d8df5255a5c8f70f42f11f75aa4e08cf#egg=PyInstaller
and you can put that line into your requirements.txt:
$ cat requirements.txt
-r client_requirements.txt
-r server_requirements.txt
git+https://github.com/pyinstaller/pyinstaller.git@edb5d438d8df5255a5c8f70f42f11f75aa4e08cf#egg=PyInstaller
When this commit goes into a pip release, you'll also be able to specify install options inside requirements.txt.

Tuesday, April 21, 2015

Applying a diff patch on windows

Applying a diff patch can be done on windows relatively painlessly using git. The catch is that git might try to mess with the line endings. Use this to tell it to leave the line endings alone and apply the patch:
git config --global core.autocrlf false
git apply -p0 mypatch.patch

Thursday, April 16, 2015

Writing windows .bat files for cmd.exe: cheatsheet

Here's a cheatsheet of some things I've discovered when writing a .bat script to build things.

For comments you can use "rem" i.e. remark but "::" is nicer IMHO:
rem This is a comment
:: This is too
To set environment variables: (Note that ALLUSERSPROFILE will only get expanded once, so if it changes this will break)
:: Permanently in HKLM which will persist for all shell sessions:
SETX /M PATH "%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

:: or temporarily, just for this one
SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
A few warning notes here. %PATH% expands to the combination of user and system paths, i.e. the contents of both of these keys:
HKEY_CURRENT_USER\Environment
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SessionManager\Environment
This means that if you're doing a setx as above and there is stuff in HKCU then you are duplicating it in the path every time. This is a problem because of the 1024 character limit on %PATH%. So you may want to clear out the HKCU path:
SETX PATH ""
Download a file with powershell:
powershell -NoProfile -ExecutionPolicy unrestricted -Command "(new-object System.Net.WebClient).DownloadFile('https://downloads.activestate.com/ActivePerl/releases/5.20.2.2001/ActivePerl-5.20.2.2001-MSWin32-x86-64int-298913.msi', 'ActivePerl-5.20.2.2001-MSWin32-x86-64int-298913.msi')"
There seems to be no actual equivalent of the bash "set -e", i.e. exit if any command returns an error, but you can do it per command like this, where the number is the error code:
choco install git -y || echo "git install failed" && exit /b 1
Recursively delete a directory, with no prompting:
rd /s /q openssl-1.0.2a
Get a visual studio environment for building:
:: 64 bit
call "%PROGRAMFILES% (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\vcvars64.bat"

::32 bit
call "%PROGRAMFILES% (x86)\Microsoft Visual Studio 12.0\VC\bin\vcvars32.bat"
Copy a whole directory recursively. Create the destination directory if necessary:
xcopy C:\Build-OpenSSL-VC-64\include C:\pkg\include\ /s /e /h /y
Calling other .bat files

You basically have two options, either create a completely new process (so changes made to environment variables, CWD etc. will not affect the parent):
cmd /c "mybat.bat"
Or use call, which is effectively like inlining all the commands from that .bat file:
call mybat.bat
Note that if you don't use call, some_other_command in this example will never get executed since you're essentially saying "replace everything from here on with the contents of mybat.bat":
mybat.bat
some_other_command
Mounting a shared folder

This will mount a virtualbox shared folder, clearing any existing drive mapped to X:
net use x: /delete
net use x: /persistent:no \\VBOXSVR\host
x:

Monday, April 13, 2015

Vagrant, Packer, and Boxcutter FTW: Create a windows build environment with one command

A quick intro to vagrant and packer (both Hashicorp projects) and boxcutter, a community library of packer configs.  From reading the project websites it's not immediately obvious how all of these relate and complement each other; they are all vm-related after all.  I'll introduce each and then give an end-to-end example.

Vagrant

Firstly vagrant: it allows you to specify your build environment in a way that is completely repeatable by anyone in the world.  No more "builds fine for me".  You can test in exactly the same environment, with the same versions of dependencies, installed by the same scripts.

So what's special about that? Why not just have a base vm and an install script that sets everything up?  For a long time we essentially did that but a bit worse: we had a server with a set of build VMs on it.  Everyone who wanted to use the VMs would add their ssh key, and copy a giant ssh_config which set up all the port forwards needed to talk to the VMs and build the project.  We never quite got around to automating the dependency installation, so whenever a dependency needed updating we'd need to ssh in and upgrade each VM manually.  The server lived in one timezone, but developers were also on the other side of the world so copying chunks of data over for building was slow and timeouts were fairly common.

With vagrant we saw the following advantages:

  • Bringing up a new vm is fast, so there's no need to keep them around and potentially contaminate your new build with previous build products.  If you always provision a new vm to build then maintaining dependencies is as simple as updating the provisioning scripts.
  • Port conflicts for multiple VMs are managed automatically, no need for the big ssh_config
  • Builds are performed on local VMs, so no waiting on network copies to servers on the other side of the world. Shared folders provide a simple way to move data in and out of the VM.
  • Testing additional operating systems and architectures is simple thanks to the atlas catalog of vagrant boxes.
  • Updating dependencies across multiple architectures and operating systems is as simple as modifying the provisioning script.  While this isn't technically an advantage of vagrant, it encourages this kind of automation.
  • Instead of describing how to set up a build environment in pages of documentation, you can add a Vagrantfile and some scripts to your project and reduce all that documentation to a single command like 'make'.
  • We don't need to store the virtual machines ourselves, just upload a base build box to the catalog.  When people build our project vagrant will fetch the VM and check the hash of the downloaded VM matches what we specify in our Vagrantfile (or at least it will soon).
OK that sounds good, but I want to make my own base VMs and it's tedious.  Enter packer.

Packer

So vagrant helps you automate provisioning and running a build environment working from a base VM.  Packer helps you with building the base VM in the first place which opens up a lot of possibilities for testing and automation.  For the purposes of this post, packer is just going to help us avoid the tedious GUI click fest of installing a windows build VM and getting it into a usable state.

Since packer basically eats a JSON config and turns it into a virtual machine, you expect to find an example of a packer config to follow right? You're not the first person to want a Windows 7 VM.  Enter boxcutter.

Boxcutter

Boxcutter is basically a giant library of packer configs that generate almost any VM you can possibly think of.  For this post we're going to use the windows boxcutter repo.

First install vagrant, packer, and whatever virtualisation software you plan to use (e.g. virtualbox or vmware: make sure you have the latest version) then get the boxcutter repo:
$ git clone https://github.com/boxcutter/windows.git boxcutter-windows
Then get a windows VM (this took around 45min for me, but is spectacularly less work than doing it yourself):
$ make virtualbox/eval-win7x64-enterprise
The boxcutter scripts include all the tricks for minimising disk usage (doing a defrag, zero-ing free disk space etc.), so this is about as small as it can possibly be:
$ ls -lh box/virtualbox/eval-win7x64-enterprise-nocm-1.0.4.box 
-rw-r----- 1 user group 3.2G Apr 13 14:31 box/virtualbox/eval-win7x64-enterprise-nocm-1.0.4.box
Add it into vagrant:
$ vagrant box add box/virtualbox/eval-win7x64-enterprise-nocm-1.0.4.box --name eval-win7x64
Reference it in your Vagrantfile something like this:
  config.vm.define "eval-win7x64" do |box|
    box.vm.box = "eval-win7x64"
    box.vm.guest = :windows
    box.vm.communicator = "winrm"
  end
Fire it up:
$ vagrant up eval-win7x64
Log in:
$ vagrant ssh eval-win7x64
Last login: Mon Apr 13 14:19:31 2015 from 10.0.2.2
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\vagrant>
Troubleshooting

Note if you see errors like these:
==> virtualbox-iso: Provisioning with shell script: script/vagrant.bat
    virtualbox-iso: 'C:/Windows/Temp/script.bat' is not recognized as an internal or external command,
    virtualbox-iso:
    virtualbox-iso: operable program or batch file.
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Script exited with non-zero exit status: 1
or
==> virtualbox-iso: Error detaching ISO: VBoxManage error: VBoxManage: error: Assertion failed: [SUCCEEDED(rc)] at '/build/buildd/virtualbox-4.3.10-dfsg/src/VBox/Main/src-server/MachineImpl.cpp' (10875) in nsresult Machine::saveStorageControllers(settings::Storage&).
==> virtualbox-iso: VBoxManage: error: COM RC = E_ACCESSDENIED (0x80070005).
==> virtualbox-iso: VBoxManage: error: Please contact the product vendor!
==> virtualbox-iso: VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component SessionMachine, interface IMachine, callee nsISupports
==> virtualbox-iso: VBoxManage: error: Context: "SaveSettings()" at line 888 of file VBoxManageStorageController.cpp
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Error detaching ISO: VBoxManage error: VBoxManage: error: Assertion failed: [SUCCEEDED(rc)] at '/build/buildd/virtualbox-4.3.10-dfsg/src/VBox/Main/src-server/MachineImpl.cpp' (10875) in nsresult Machine::saveStorageControllers(settings::Storage&).
VBoxManage: error: COM RC = E_ACCESSDENIED (0x80070005).
VBoxManage: error: Please contact the product vendor!
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component SessionMachine, interface IMachine, callee nsISupports
VBoxManage: error: Context: "SaveSettings()" at line 888 of file VBoxManageStorageController.cpp
Then you need to upgrade your virtualbox. I saw these when using the virtualbox shipped with ubuntu.

Thursday, April 9, 2015

Creating a new github release (creating tags in git)

To create a new github release you'll want to first create a git tag in your repo. There's a good article on tagging here.

Show tags:
$ git tag
20150409
Tag a particular commit:
git tag -a 20150408 4abcdefg051f382098493f6043482f13437adf05 -m "Pre format change"
Push your new tag:
$ git push origin 20150408
You can now see it in the github web interface and can add notes etc. and turn it into a release.

Thursday, March 26, 2015

Windows cacls.exe cheatsheet

The windows cacls.exe is a confusing, un-intuitive, and poorly documented tool. I'm recording a few of my more frequent examples here as a cheat sheet.

Give all users full control over all files and subdirs in a directory:
cacls.exe target_dir /t /e /g Users:f

Tuesday, March 24, 2015

[SOLVED] fatal error C1902: Program database manager mismatch; please check your installation

I was running the Visual Studio compiler by SSHing into a Cygwin SSH server running on Windows and running "cmd /c msbuild", which gave me this error:
fatal error C1902: Program database manager mismatch; please check your installation
But strangely, if I connected over RDP and ran exactly the same command in a Windows shell... it worked. This email led me to the solution, so I'm giving it a signal boost. It turns out that if you authenticate to the SSH server with a key, you run as a different user in a different login environment, and that somehow messes up the Visual Studio build process. You can see the difference by running whoami as below.

If you authenticated with a password you should see a proper username (in this case vagrant):
$ /cygdrive/c/WINDOWS/system32/whoami.exe
build-pc\vagrant
But if you authenticate with a key, you'll see something like this:
$ /cygdrive/c/WINDOWS/system32/whoami.exe
grr-build-pc\cyg_server

Monday, March 23, 2015

Building a Windows Vagrant VM to use WinRM

You basically have two options for talking to Windows vagrant VMs: WinRM or SSH. For SSH you'll need to install Cygwin's OpenSSH server, or another SSH server. If you use Cygwin you'll end up SSHed into the Cygwin environment, which may confuse scripts that expect a regular Windows command shell. If that's the case, you need WinRM. I followed these instructions, which also have some great tips for reducing the box size.

But I still had some trouble setting up WinRM. As a quick-and-dirty check before you go to the effort of creating the box, add a port forward in VirtualBox for the WinRM port 5985, and try hitting this URL:
$ wget http://localhost:5985/wsman
--2015-03-23 13:25:00--  http://localhost:5985/wsman
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5985... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5985... connected.
HTTP request sent, awaiting response... 405 
2015-03-23 13:25:00 ERROR 405: (no description).
You should get a successful connection, but an error as above, because this isn't a valid WinRM call. If that doesn't work, make sure you followed the vagrant instructions for enabling WinRM, and try running this inside the VM:
winrm quickconfig -q
In my case quickconfig complained that:
WinRM firewall exception will not work since one of the network connection types on this machine is set to Public. Change the network connection type to either Domain or Private and try again.
But when I looked in the network connection manager I couldn't change the VirtualBox host networking adapter type as advised. This is a known problem where Windows 7 network connections get stuck as Public; I fixed it with this tool from Microsoft. You can verify your WinRM configuration is functioning correctly by installing pywinrm and running a command:
In [33]: import winrm

In [34]: s = winrm.Session('127.0.0.1', auth=('vagrant', 'vagrant'))

In [35]: s.run_cmd('ver').std_out
Out[35]: '\r\nMicrosoft Windows [Version 6.1.7601]\r\n'

Switching a git checked-out repo from HTTPS to SSH

Occasionally I'll clone a git repo (or a tool will do it for me) over HTTPS like this:
$ git clone https://github.com/google/gxui.git
Cloning into 'gxui'...
remote: Counting objects: 906, done.
remote: Total 906 (delta 0), reused 0 (delta 0), pack-reused 905
Receiving objects: 100% (906/906), 477.53 KiB | 0 bytes/s, done.
Resolving deltas: 100% (548/548), done.
Checking connectivity... done.
$ cd gxui/
$ git remote -v
origin https://github.com/google/gxui.git (fetch)
origin https://github.com/google/gxui.git (push)
But later on I might want to push code, which is more convenient over SSH since I have a GitHub SSH key set up. Switch the remote like this:
$ git remote set-url origin git@github.com:google/gxui.git
$ git remote -v
origin git@github.com:google/gxui.git (fetch)
origin git@github.com:google/gxui.git (push)
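If you do this a lot, the transformation is mechanical: drop the `https://github.com/` prefix and replace it with `git@github.com:`. Here's a small sketch in Python (the `https_to_ssh` helper is mine, not part of git or any library):

```python
def https_to_ssh(url):
    """Rewrite an HTTPS github.com remote URL into its SSH form."""
    prefix = "https://github.com/"
    if not url.startswith(prefix):
        raise ValueError("not an HTTPS github.com URL: %r" % url)
    # git's SSH syntax uses a colon between host and repo path.
    return "git@github.com:" + url[len(prefix):]

print(https_to_ssh("https://github.com/google/gxui.git"))
# git@github.com:google/gxui.git
```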

Wednesday, March 4, 2015

Install an OS X .dmg or .pkg from the command line

If you have a dmg, you need to attach it:
hdiutil attach GPG_Suite-2015.02-b5-1161.dmg
Then install it:
sudo installer -pkg /Volumes/GPG\ Suite/Install.pkg -target /

Wednesday, February 25, 2015

Packaging an existing virtualbox vm for use with vagrant

It's not super-obvious from the vagrant documentation, but you can use the "vagrant package" command to package up an existing VirtualBox VM. This will shut down the VM, create the metadata.json, and zip up the disk. To use it you need to know the name that VirtualBox has for your VM, list them with:
$ VBoxManage list vms
"ubuntu-lucid64" {aaaabbbb-cccc-dddd-1234}
"CentOS build" {aaaabbbb-cccc-dddd-1234}
Package up the vagrant VM with:
$ vagrant package --base "CentOS build" --output centos_5.11_64
$ vagrant box add centos_5.11_64 --name centos_5.11_64

Friday, February 20, 2015

Workaround for broken vagrant up ssh "unsupported encryption type"

Vagrant is still not playing nicely with SSH certificates loaded into ssh-agent. In my case this seemed to only be a problem during provisioning (i.e. "vagrant up"); using "vagrant ssh" after the box was up worked fine. The error is:
The private key you're attempting to use with this Vagrant box uses
an unsupported encryption type. The SSH library Vagrant uses does not support
this key type. Please use `ssh-rsa` or `ssh-dss` instead. Note that
sometimes keys in your ssh-agent can interfere with this as well,
so verify the keys are valid there in addition to standard
file paths.
You can try clearing out some keys from ssh agent with:
$ ssh-add -D
All identities removed.
Except ssh-add is probably lying if you're running Goobuntu: the keys are still there. There's all sorts of confusion about this behaviour, which seems to be the fault of gnome-keyring, which allegedly only lets you delete manually added keys. If your SSH certs are loaded automatically, it seems you're out of luck.

By far the easiest workaround is to simply temporarily disable the ssh agent:
SSH_AUTH_SOCK="" vagrant up

Python: distinguish strings from other iterable objects

I often want to do a sanity check on a parameter that is supposed to be an iterable. The problem is that strings are also iterable, but iterating over the characters in a string is basically never what I want. This will raise if I'm passed a string or something that isn't iterable.
import collections

def CheckForIterable(someparam):
  if (isinstance(someparam, basestring) or not isinstance(someparam, collections.Iterable)):
    raise ValueError("Expected an iterable, got %s" % str(someparam))
In [50]: CheckForIterable("a")
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
----> 1 CheckForIterable("a")

 in CheckForIterable(someparam)
      1 def CheckForIterable(someparam):
      2   if (isinstance(someparam, basestring) or not isinstance(someparam, collections.Iterable)):
----> 3     raise ValueError("Expected an iterable, got %s" % str(someparam))

ValueError: Expected an iterable, got a

In [51]: CheckForIterable(["a"])

In [52]: CheckForIterable((1,2))

In [53]: CheckForIterable((1))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
----> 1 CheckForIterable((1))

 in CheckForIterable(someparam)
      1 def CheckForIterable(someparam):
      2   if (isinstance(someparam, basestring) or not isinstance(someparam, collections.Iterable)):
----> 3     raise ValueError("Expected an iterable, got %s" % str(someparam))

ValueError: Expected an iterable, got 1
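For reference, on Python 3 `basestring` is gone and the iterable ABC lives in `collections.abc`, so a sketch of the same check would look like this (the function name is mine, not from any library):

```python
from collections.abc import Iterable

def check_for_iterable(someparam):
    # Strings are iterable, but iterating over their characters is
    # almost never what we want, so reject them explicitly.
    if isinstance(someparam, str) or not isinstance(someparam, Iterable):
        raise ValueError("Expected a non-string iterable, got %r" % (someparam,))

check_for_iterable(["a"])   # fine
check_for_iterable((1, 2))  # fine
```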

Tuesday, February 10, 2015

Building an OS X vagrant VMware Fusion VM

Install VMware Fusion and vagrant, and pay for the vagrant VMware Fusion plugin. Follow the instructions in the email to install the plugin, then download and install the license, something like:

vagrant plugin install vagrant-vmware-fusion
vagrant plugin license vagrant-vmware-fusion ~/license.lic
This blog has great instructions to build the OS X base box. Boiling it down:
  • Install the base OS in VMWare Fusion with user/pass of vagrant/vagrant. This is fairly easy with modern VMware Fusion and an installer dmg.
  • Install VMWare tools from the VMWare menu.
  • Install all updates:
    sudo softwareupdate --install --all
  • If you want to be able to install and use Homebrew, install the relevant Command Line Tools for XCode from apple (developer account required).
  • If you want to be able to make packages, copy PackageMaker.app to /Applications from the Auxiliary Tools for XCode Late - July 2012 from apple (developer account required).
  • Enable remote logon (i.e. SSH) via System Preferences > Sharing
  • Setup passwordless sudo, vagrant ssh, and ssh config as vagrant recommends.
  • If you created any snapshots during this process, delete them
  • Zero out the free disk space (not sure this is actually necessary, but the original post claims better compression):
    $ diskutil secureErase freespace 0 Macintosh\ HD
    $ sudo halt
    
  • Defrag and compress as vagrant recommends:
    cd ~/Documents/Virtual\ Machines.localized/OS\ X\ 10.8.vmwarevm
    /Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -d Virtual\ Disk.vmdk
    /Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -k Virtual\ Disk.vmdk
    
  • Copy the relevant files to a new directory:
    mkdir ~/vagrantbox
    cd ~/vagrantbox/
    cp ~/Documents/Virtual\ Machines.localized/OS\ X\ 10.8.vmwarevm/Virtual\ Disk.vmdk .
    cp ~/Documents/Virtual\ Machines.localized/OS\ X\ 10.8.vmwarevm/OS\ X\ 10.8.nvram .
    cp ~/Documents/Virtual\ Machines.localized/OS\ X\ 10.8.vmwarevm/OS\ X\ 10.8.vm* .
    
  • Create a basic metadata.json
    echo '{"provider":"vmware_fusion"}' > metadata.json
    
  • Zip it up:
    tar -cvzf OS_X_10.8.5.box ./*
    
  • Add it to vagrant:
    vagrant box add OS_X_10.8.5 OS_X_10.8.5.box
    
  • Add a config to your Vagrantfile:
      config.vm.define "OS_X_10.8.5" do |box|
        box.vm.box = "OS_X_10.8.5"
        # Random keypair generation and insertion doesn't seem to work on OS X
        box.ssh.insert_key = false
        box.vm.provider "vmware_fusion" do |v|
          v.vmx["memsize"] = "4096"
          v.vmx["numvcpus"] = "2"
        end
      end
    
  • Try it out:
    vagrant up OS_X_10.8.5
If you're seeing this error message, you didn't install the VMware Fusion vagrant plugin:
The provider 'vmware_fusion' could not be found, but was requested to
back the machine 'OS_X_10.8.5'. Please use a provider that exists.

Tuesday, January 13, 2015

Python: Find the first true element (i.e. one that matches a condition) in an array

Say you want to check whether any value of one array is in another array, and you only care if there is any match at all, not what the value is or how many matches there are. Once you have found a match you want to stop processing. Here's a one-liner using itertools.ifilter that returns the first match, or None if nothing matches. In my case both arrays already held only unique values.
In [33]: from itertools import ifilter

In [34]: a=[1,2,3]

In [35]: b=[6, 8, 2, 5, 3]

In [36]: next(ifilter(lambda x: x in b, a), None)
Out[36]: 2

In [37]: b=[6, 8, 5]

In [38]: next(ifilter(lambda x: x in b, a), None)
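On Python 3, `itertools.ifilter` is gone; the built-in `filter` is already lazy, and if you genuinely only need a yes/no answer, `any()` with a generator expression short-circuits the same way. A quick sketch:

```python
a = [1, 2, 3]
b = [6, 8, 2, 5, 3]

# First matching element, or None if nothing matches.
first = next(filter(lambda x: x in b, a), None)
print(first)  # 2

# Just a boolean; also stops at the first match.
print(any(x in b for x in a))  # True
```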