Monday, October 27, 2014

Authenticode signing Windows executables on Linux

We had a particularly painful build-and-sign workflow that required multiple trips between Linux and Windows. I looked around and found the following options for signing Windows binaries on Linux:
  • jsign, a Java implementation
  • signcode from the Mono project, as suggested by Mozilla. It's in the mono-devel Ubuntu package.
  • osslsigncode, an OpenSSL-based implementation of Authenticode signing that uses curl to make the timestamp requests.
The Mozilla instructions are good for getting your keys and certs into a format that will work with these tools. Some minor additions to those below:

openssl pkcs12 -in authenticode.pfx -nocerts -nodes -out key.pem
openssl rsa -in key.pem -outform PVK -pvk-strong -out authenticode.pvk
openssl pkcs12 -in authenticode.pfx -nokeys -nodes -out cert.pem
cat Thawte_Primary_Root_CA_Cross.cer >> cert.pem
openssl crl2pkcs7 -nocrl -certfile cert.pem -outform DER -out authenticode.spc
shred -u key.pem
Once you're done here you have authenticode.pvk with your encrypted private key, and authenticode.spc with your public certs. Appending the cross cert is necessary to make signature validation work with some tools. The Windows GUI "Properties|Digital Signatures|Details" dialog will tell you "This digital signature is OK", but if you check with signtool verify on Windows, you'll find it isn't:
>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe


SignTool Error: Signing Cert does not chain to a Microsoft Root Cert.

Number of files successfully Verified: 0
Number of warnings: 0
Number of errors: 1
I suspect the GUI uses the local cert store and/or APIs that automatically fetch the required cross cert, but signtool and third-party signature verifiers do not. With the cross cert added to the spc as above, the binary verifies correctly and the output mentions the MS cross cert:
Z:\signing\windows>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe


Cross Certificate Chain:
    Issued to: Microsoft Code Verification Root


Successfully verified: my.exe
If you use Bit9 it's also worth checking that it will verify your binary using the dascli.exe tool:
>"C:\Program Files (x86)\Bit9\Parity Agent\DasCLI.exe" certinfo my.exe
CertValidated[Y] Detached[N] Publisher[My Inc]

So, back to signing on Linux. At first I tried installing mono and using "signcode". It claims to succeed:
$ signcode sign -spc authenticode.spc -v authenticode.pvk -a sha1 -$ commercial -n MyApp -t -tr 5 my.exe
Mono SignCode - version
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Enter password for authenticode.pvk: 
And in the process it echoes your password in cleartext!?! This is something I was prepared to fix with a "read -s -p 'Password'" wrapper script like this guy, but the signature was no good. I could see it appended in a hex editor, but Windows didn't give me a Digital Signature tab in the GUI and signtool couldn't find it either:
>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe
SignTool Error: No signature found.

Number of files successfully Verified: 0
Number of warnings: 0
Number of errors: 1
It's possible that there's something weird about our exe that caused this to fail. Someone else reported a similar problem but then later claimed it was due to a corrupted exe. In any case, not being particularly wedded to, or happy with, mono and signcode at this point I tried osslsigncode, which worked fine and produced a valid signature.
sudo apt-get install libcurl4-openssl-dev libssl-dev
# from the unpacked osslsigncode source tree:
./configure && make
sudo make install
osslsigncode sign -certs authenticode.spc -key authenticode.pvk -n "MyApp" -t -in my.exe -out my_signed.exe
Update: After coming across this Mozilla post, I suspect my problem with Mono's signcode was that it may not support 64-bit binaries, but I didn't go back to check.

Monday, October 6, 2014

Python: add an element to an array only if a filter function returns True

This post was going to be about a fairly obscure feature of Python I found, but is now a minor cautionary tale about trying to be too clever :)

I was looking for an elegant solution to the problem of appending elements to a list only if a filter function returns true. The long (and in retrospect much more readable) way to write this is something like:
filter_result = self.FilterFunction(response)
if filter_result:
  processed_responses.append(filter_result)
There is in fact a one-liner that can do this for you, but since it's fairly obscure it makes the code much harder to understand.
processed_responses += filter(None, [self.FilterFunction(response)])
This works because when the first argument to filter is None, the effect is to remove all items from the sequence that evaluate to False. In this case that means if self.FilterFunction(response) returns a falsy value you get an empty list, and appending the empty list has no effect on processed_responses. If it returns something truthy, you append a single element.
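Here's a minimal runnable sketch of the trick; keep_positive is a stand-in of my own for self.FilterFunction, returning the value when it passes and None otherwise:

```python
def keep_positive(x):
    # Stand-in for self.FilterFunction: returns the value when it passes
    # the filter, and None (falsy) when it doesn't.
    return x if x > 0 else None

processed_responses = []
for response in [3, -1, 5]:
    # filter(None, ...) drops falsy items, so a failed filter contributes
    # nothing and a passing one contributes exactly one element.
    processed_responses += filter(None, [keep_positive(response)])

print(processed_responses)  # [3, 5]
```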

Obvious huh?

Mocking out Python OS-specific imports

Testing Python code that needs to run on a different OS is painful. A major part of the difficulty is that even though you can (somewhat) easily mock out the API calls used, you can't import the code successfully because the modules only exist on the target OS. Let's take an example of code that imports and calls functions from the win32api module. How do you mock out the import so you can test it on Linux? I know of two main approaches.

One is the proxy module. Basically you define a module to hide all of the OS-specific imports behind, and do a conditional import in that module. So instead of having code like this:
import win32api
you do
import windows_imports
and then in the windows_imports module:
import platform

if platform.system() == "Windows":
  import win32api
  import winerror
  import wmi
Then inside your tests you need to create stubs to replace your API calls, e.g. for windows_imports.win32api.GetLogicalDriveStrings. Theoretically this should be fairly straightforward, but when I started down this path it got complicated quickly and I struggled to make it work. In the end I gave up and settled on the second approach, as below.
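To illustrate what such a stub looks like, here's a hypothetical sketch: the windows_imports proxy is faked with types.ModuleType, and list_drives stands in for the production code under test (both names are mine, not from any real codebase):

```python
import types

# Fake the proxy module for a Linux test run; on Windows the real
# windows_imports module would have imported win32api itself.
windows_imports = types.ModuleType("windows_imports")
windows_imports.win32api = types.SimpleNamespace(
    # On Windows, GetLogicalDriveStrings returns one NUL-separated string.
    GetLogicalDriveStrings=lambda: "C:\\\x00D:\\\x00")

def list_drives():
    # Stand-in for production code that calls through the proxy module.
    raw = windows_imports.win32api.GetLogicalDriveStrings()
    return [d for d in raw.split("\x00") if d]

print(list_drives())  # ['C:\\', 'D:\\']
```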

The second approach, described here, is to delay the import of the OS specific code in your tests until after you modify sys.modules to stub out all the OS-specific modules. This has the distinct advantage of leaving your production code untouched, and having all the complexity in your test code. Using the mock library makes this much easier. Below is an example of mocking out a WMI call made from python.
import mock
import unittest


class WindowsTests(unittest.TestCase):

  def setUp(self):
    self.wmimock = mock.MagicMock()
    self.win32com = mock.MagicMock()
    self.win32com.client = mock.MagicMock()
    modules = {
        "_winreg": mock.MagicMock(),
        "pythoncom": mock.MagicMock(),
        "pywintypes": mock.MagicMock(),
        "win32api": mock.MagicMock(),
        "win32com": self.win32com,
        "win32com.client": self.win32com.client,
        "win32file": mock.MagicMock(),
        "win32service": mock.MagicMock(),
        "win32serviceutil": mock.MagicMock(),
        "winerror": mock.MagicMock(),
        "wmi": self.wmimock,
    }

    self.module_patcher = mock.patch.dict("sys.modules", modules)
    self.module_patcher.start()

    # Now we're ready to do the import
    from myrepo.actions import windows
    self.windows = windows

  def tearDown(self):
    self.module_patcher.stop()

  def testEnumerateInterfaces(self):
    # Stub out wmi.WMI().Win32_NetworkAdapterConfiguration(IPEnabled=1)
    wmi_object = self.wmimock.WMI.return_value
    wmi_object.Win32_NetworkAdapterConfiguration.return_value = [
        mock.MagicMock(IPAddress=["192.168.1.20"])]  # placeholder WMI result

    enumif = self.windows.EnumerateInterfaces()
    interface_dict_list = list(enumif.RunWMIQuery())

HOWTO change the extension of lots of files while keeping the original filename

With rename you get all the power of perl regex substitution in a simple interface. e.g. to backup all the .yaml files in a directory you could use:
rename --no-act 's/(.*)\.yaml$/$1.yaml.bak/' *.yaml
Remove the --no-act to actually make the changes.

The next logical progression is to want to do this more than once, so adding a timestamp to the backup is desirable, but I couldn't think of a way to make rename do this (you can't put backticks in the regex, for instance). So here's a workaround:
find . -name '*.yaml' -exec mv {} {}.bak.`date +%Y-%m-%dT%H:%M:%S%z` \;
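If you'd rather avoid the shell quoting pitfalls, the same backup can be sketched in Python; backup_yaml and its directory argument are my own names, not part of any tool:

```python
from datetime import datetime
from pathlib import Path

def backup_yaml(directory="."):
    # One timestamp for the whole run, mirroring the single `date` call
    # in the find command above.
    stamp = datetime.now().strftime("%Y-%m-%dT%H:%M:%S%z")
    renamed = []
    # rglob matches recursively, like find's default behavior.
    for path in Path(directory).rglob("*.yaml"):
        target = path.with_name(path.name + ".bak." + stamp)
        path.rename(target)
        renamed.append(target)
    return renamed
```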