Monday, October 26, 2015

Return a new return_value for each call when mocking tempfile in python

Here is a python snippet that mocks out tempfile.NamedTemporaryFile() to use a StringIO and add it to an array of file handles.

The somewhat unintuitive bit here is that by passing side_effect a function, mock will use the result of that function as the return_value for the mocked tempfile.NamedTemporaryFile(). Or as the docs say: "The function is called with the same arguments as the mock, and unless it returns DEFAULT, the return value of this function is used as the return value."

def _GetTempFile(self, *args, **kwargs):
  # Return an in-memory StringIO instead of a real temp file, and keep a
  # reference to it so the test can inspect what got written.
  fake_file = StringIO.StringIO()
  fake_file.flush = mock.MagicMock()
  self.temp_files.append(fake_file)
  return fake_file

with mock.patch.object(tempfile, "NamedTemporaryFile", side_effect=self._GetTempFile):
  # mocked code...
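
For context, here's a minimal sketch of how this might hang together inside a unittest.TestCase. The class name, test name, and the temp_files list are just illustrative, not from the original code:

import StringIO
import tempfile
import unittest

import mock


class TempFileTest(unittest.TestCase):

  def setUp(self):
    # Collect every fake "temp file" handed out during the test.
    self.temp_files = []

  def _GetTempFile(self, *args, **kwargs):
    fake_file = StringIO.StringIO()
    fake_file.flush = mock.MagicMock()
    self.temp_files.append(fake_file)
    return fake_file

  def testEachCallGetsAFreshFile(self):
    with mock.patch.object(tempfile, "NamedTemporaryFile",
                           side_effect=self._GetTempFile):
      # Code under test would go here; each NamedTemporaryFile() call
      # gets its own StringIO, all of which end up in self.temp_files.
      first = tempfile.NamedTemporaryFile()
      second = tempfile.NamedTemporaryFile()
      first.write("hello")
    self.assertIsNot(first, second)
    self.assertEqual(self.temp_files[0].getvalue(), "hello")


if __name__ == "__main__":
  unittest.main()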

Debugging a missing import in a pyinstaller binary: "ImportError: No module named google.protobuf"

We were missing an import in our pyinstaller binary; the error was:
ImportError: No module named google.protobuf
There is a simple fix for this: you just need to add a hidden import for the missing module (there's a sketch of what that looks like at the end of this post). But after doing that it seemed pyinstaller still wasn't picking up the library, so I wanted to check what it was actually importing. Here's how you can do that. First, unzip the self-extracting installer:
unzip myinstaller.exe
Then inspect it with pyi-archive_viewer (which you should have if you pip installed pyinstaller):
$ pyi-archive_viewer myprogram.exe
 pos, length, uncompressed, iscompressed, type, name
[(0, 5392645, 5392645, 0, 'z', 'out00-PYZ.pyz'),
 (5392645, 174, 239, 1, 'm', 'struct'),
 (5392819, 1161, 2711, 1, 'm', 'pyi_os_path'),
 (5393980, 4803, 13033, 1, 'm', 'pyi_archive'),
 (5398783, 3976, 13864, 1, 'm', 'pyi_importers'),
 (5402759, 1682, 3769, 1, 's', '_pyi_bootstrap'),
 (5404441, 4173, 13142, 1, 's', 'pyi_carchive'),
 (5408614, 316, 646, 1, 's', 'pyi_rth_Tkinter'),
 (5408930, 433, 918, 1, 's', 'pyi_rth_pkgres'),
 (5409363, 981, 2129, 1, 's', 'pyi_rth_win32comgenpy'),
 (5410344, 941, 2291, 1, 's', 'client')]
? O out00-PYZ.pyz
{'BaseHTTPServer': (False, 1702685L, 8486),
 'ConfigParser': (False, 1070260L, 9034),
 'Cookie': (False, 2449334L, 9023),
 'Crypto': (True, 1151586L, 621),
 'Crypto.Cipher': (True, 4004931L, 1053),
 'Crypto.Cipher.AES': (False, 2553735L, 1656),
 'Crypto.Cipher.ARC4': (False, 5042987L, 1762),


 'google': (True, 3743324L, 102),
 'google.protobuf': (True, 328350L, 111),
 'google.protobuf.descriptor': (False, 2112676L, 9136),
 'google.protobuf.descriptor_database': (False, 344343L, 1713),
 'google.protobuf.descriptor_pb2': (False, 4695199L, 6549),
 'google.protobuf.descriptor_pool': (False, 2080592L, 6890),
 'google.protobuf.internal': (True, 1403354L, 120),
 'google.protobuf.internal.api_implementation': (False, 5300293L, 697),
 'google.protobuf.internal.containers': (False, 4954671L, 3244),
 'google.protobuf.internal.cpp_message': (False, 1447735L, 8287),
 'google.protobuf.internal.decoder': (False, 1318001L, 7942),
 'google.protobuf.internal.encoder': (False, 5337352L, 7379),
 'google.protobuf.internal.enum_type_wrapper': (False, 5202459L, 1039),
 'google.protobuf.internal.message_listener': (False, 2642102L, 1112),
 'google.protobuf.internal.python_message': (False, 4574735L, 13240),
 'google.protobuf.internal.type_checkers': (False, 4723000L, 3257),
 'google.protobuf.internal.wire_format': (False, 4040106L, 2834),
 'google.protobuf.message': (False, 370977L, 3410),
 'google.protobuf.pyext': (True, 650580L, 116),
 'google.protobuf.pyext.cpp_message': (False, 349637L, 763),
 'google.protobuf.reflection': (False, 2578008L, 2557),
 'google.protobuf.symbol_database': (False, 3971343L, 2075),
 'google.protobuf.text_encoding': (False, 2736366L, 1447),
 'google.protobuf.text_format': (False, 4991343L, 8425),

This shows that google.protobuf was successfully included by pyinstaller. In my case pyinstaller was doing the right thing; I had another step in the installer generation process that was using an older version of the installer, which was still missing the proto library.
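
For reference, here's roughly what the hidden-import fix looks like in a spec file. This is just a sketch, not our exact setup: the script and spec names are made up, and you can achieve the same thing with pyinstaller's --hidden-import command line flag.

# myprogram.spec -- illustrative sketch; build with: pyinstaller myprogram.spec
a = Analysis(
    ['myprogram.py'],
    # Tell pyinstaller about imports it can't discover via static analysis.
    hiddenimports=['google.protobuf'],
)
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, name='myprogram')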

Thursday, September 17, 2015

Patching a python class attribute in a unit test using mock

I often want to change a class attribute in a unit test. While this is sort of covered in the official documentation, there's no actual example of this particular use case. So say you have a class:
class MyClass(object):
  myattr = 1
and you want to test the functionality of the class, but need to change the value of myattr from the default:
import mock
import modulename

with mock.patch.object(modulename.MyClass, "myattr", new=50):
  # do test stuff
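
It's worth noting that the patch only applies inside the with block; mock restores the original value on exit, so other tests aren't affected. A quick sketch, assuming the MyClass above lives in modulename:

import mock
import modulename

with mock.patch.object(modulename.MyClass, "myattr", new=50):
  assert modulename.MyClass.myattr == 50  # patched value inside the block
assert modulename.MyClass.myattr == 1     # original value restored on exit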

Friday, September 4, 2015

Snippet for verifying a SHA256 hash of a downloaded file in bash

This snippet will download openssl and verify the hash against a hard-coded value. This is useful on old operating systems (specifically CentOS 5.11) that can't actually establish an SSL connection to lots of sites.

# SSL_VERSION and SSL_SHA256 (the expected hash) must be set beforehand.
RETRIEVED_HASH=$(wget -q -O - http://www.openssl.org/source/openssl-${SSL_VERSION}.tar.gz | tee openssl-${SSL_VERSION}.tar.gz | sha256sum | cut -d' ' -f1)
if [ "${RETRIEVED_HASH}" != "${SSL_SHA256}" ]; then
  echo "Bad hash for openssl-${SSL_VERSION}.tar.gz, quitting"
  exit 1
fi

Tuesday, August 4, 2015

Provisioning a RHEL GCE machine with packer "sudo: sorry, you must have a tty to run sudo"

RedHat has long had an insane requiretty default for sudo, something that is fixed in later versions. Packer rightly doesn't set a tty by default, so if you provision a RHEL machine you'll likely see this:
sudo: sorry, you must have a tty to run sudo
In fact RHEL is quite the special snowflake: you need to set ssh_pty and also specify a username, since root ssh is disabled by default. Here's my working builder:
      "name": "rhel-7.1",
      "type": "googlecompute",
      "account_file": "account.json",
      "project_id": "YOUR_PROJECT_ID",
      "source_image": "rhel-7-v20150710",
      "zone": "us-central1-f",
      "instance_name": "rhel-7",
      "image_name": "rhel-7-{{timestamp}}",
      "machine_type": "n1-standard-1",
      "ssh_pty": "true",
      "ssh_username": "someuser"

Thursday, July 30, 2015

Docker cheat sheet

Intended as a living document, just some basics for now.

Build from the Dockerfile in the current directory and tag the image as "demo":
docker build -t demo .
There's a good Dockerfile reference document and Dockerfile best practices document that you should read when writing a Dockerfile. Make sure you have a .dockerignore file that excludes all unnecessary stuff from the image to reduce bloat and reduce the amount of context that needs to be sent to the docker daemon on each rebuild.

Run bash inside the container to poke around inside it:
docker run -it ubuntu bash
Share a host directory with the container:
docker run -it -v /home/me/somedir:/mounted_inside ubuntu:latest bash
List local available images:
docker images
See what containers are running:
docker ps
Bash helper functions (credit to raiford). "dckr clean" is particularly useful when building an image that results in lots of orphans due to errors in the Dockerfile:
if which docker &> /dev/null; then
  function dckr {
    case $1 in
      clean)
        # Clean up orphaned (dangling) images.
        docker rmi -f $(docker images -q -f dangling=true)
        ;;
      rmi_all)
        # Delete all docker images, with a prompt.
        read -r -p "Delete all docker images? [y/N] " response
        if [[ $response =~ ^([yY][eE][sS]|[yY])$ ]]; then
          docker rmi $(docker images -q)
        fi
        ;;
      killall)
        # Kill all running docker containers.
        docker kill $(docker ps -q)
        ;;
      *)
        echo "Commands: clean, rmi_all, killall"
        ;;
    esac
  }
fi

Docker vs. Vagrant

Docker and Vagrant are somewhat similar technologies, so what are their relative strengths and weaknesses and when should you choose one over the other? I'm fairly new to both, but here's a compare-and-contrast based on what I've learned so far.

The tl;dr is that Docker is really best for running applications in production and fast testing across linux flavors. Vagrant handles Windows and OS X in addition to Linux, and is good for automating building and packaging of software, and testing where you need a full OS stack running. If you need to build a .dmg or a .exe installer for your software, Vagrant is a good place to do that.

Guest OS
  Docker: Linux only (for now, see update below), based on linux containers.
  Vagrant: Mac (on Mac hardware), Windows, Linux, etc. Relies on VirtualBox or VMWare.

Host OS
  Docker: Mac/Win/Linux. Windows and OS X use boot2docker, which is essentially a small linux virtualbox VM that runs the linux containers.
  Vagrant: Mac/Win/Linux.

Configuration
  Docker: A Dockerfile describes the steps to build the docker image, which holds everything needed to run the program. You can start from standard images downloaded from dockerhub, such as Ubuntu (an extremely minimal ubuntu install), and you can upload your own images.
  Vagrant: A Vagrantfile describes what OS you want, any host<->guest file shares, and any provisioning scripts that should be run when the VM ("box") is started. You can start from standard fairly-minimal boxes for many OSes downloaded from Atlas, and upload your own box.

Building
  Docker: When you build a docker image it creates a new container for each instruction in the Dockerfile and commits it to the image. Each step is cacheable, so if you modify something in the Dockerfile it only needs to re-execute from that point onward.
  Vagrant: Many people use Vagrant without building their own box, but you can build your own. It's essentially a case of getting the VM running how vagrant likes it (packer can help here), making your customizations, then exporting and uploading.

Running
  Docker: When you docker "run" something it creates a container from the image specified and runs your application inside it. All of the regular system stuff you might expect to be running (rsyslog, ntpd, sshd, cron) isn't. You have Ubuntu installed in there, but it isn't running Ubuntu. The "run" command allows you to specify shared folders with the host.
  Vagrant: You run the full-blown OS inside a VM with all the bells and whistles (rsyslog, sshd, etc.) you would expect. When you "vagrant up" it brings up the VM and runs any provisioning scripts you specified in the Vagrantfile. This is typically where you do all the installing necessary to get your build environment ready; the provision script is the heart of the reproducible build you're creating. Once the provision script is done you'll typically have a Makefile or other scripts that SSH into the environment and do the actual software building.

SSH access
  Docker: If you're trying to ssh into your container, you're almost certainly doing it wrong.
  Vagrant: SSH is a core part of the technology and used heavily; vagrant invisibly manages selecting ports for forwarding ssh to each of your VMs.

Startup time
  Docker: <1 second
  Vagrant: <1 minute

Suitable for
  Docker: Production and dev/test
  Vagrant: Dev/test only

Further reading

If you read this far you should also read this, where the authors of both tools explain the differences:
"Vagrant is a tool for managing virtual machines. Docker is a tool for building and deploying applications by packaging them into lightweight containers."

Update Aug 24, 2015

Microsoft has thrown their weight behind docker, and will be implementing shared-kernel containerization for Windows Server, so you will be able to run Windows Server containers on Windows Server as Mark explains here. You'll still need linux to run linux containers natively, and boot2docker will continue to be a way to bring linux containers to other OSes. Once container-capable windows servers are available from Microsoft/Amazon/Google clouds, much of the niche that Vagrant occupies will have been eroded, and windows server docker containers should be a better solution to the problem of being unable to share windows build VMs due to licensing.