Sunday, November 23, 2014

Thunderbolt DMA attacks on OS X

The current TL;DR on Thunderbolt DMA attacks is that the VT-d IOMMU is doing its job: Ivy Bridge Macs (2012 and later hardware) running OS X >= 10.8.2 are not vulnerable to the easy direct-write-to-memory attacks we saw popularized with FireWire.

While Inception claims to work with Thunderbolt, it's really only a FireWire attack, so you need a Thunderbolt-to-FireWire converter, and it's subject to the same limitations as normal FireWire, as described at the end of this post.

There was a great 2013 Black Hat talk by Russ Sevinsky that covered lots of chip-level reverse engineering for Thunderbolt, but ultimately he didn't come up with an attack (it's an excellent description of the reverse engineering process though). More recently snare described how to set up an attack on Thunderbolt with an FPGA board connected to a Mac via a Thunderbolt-PCIe device. But the IOMMU foiled his efforts on modern hardware. Snare says he's working on trying to bypass VT-d, so there may be interesting developments in the future.

For now you should probably still be more worried about snare's other work using PCIe option ROMs as a bootkit.

Sunday, November 9, 2014

SMTP email conversation threads with python: making gmail associate messages into a thread

I have some python software that sends emails, and I wanted gmail to group messages that were related to the same subject into a conversation. It's not immediately obvious how this works, and there's plenty of bad advice out there, including people stating that you just need to add "RE:" to the subject line, which is wrong.

The way conversation threads are constructed is by using the SMTP "Message-ID" as a reference to the original email in the "In-Reply-To" and "References" headers. RFC2822 has the details, explains some fairly complex multi-parent cases, and includes some good examples. My use case was very simple, just two messages I wanted associated. The first message is sent with a message-id, the second one references it:
import email.utils
from email.mime.multipart import MIMEMultipart
import smtplib

# First message: generate a Message-ID up front so the reply can reference it.
myid = email.utils.make_msgid()
msg = MIMEMultipart("alternative")
msg["Subject"] = "test"
msg["From"] = "myuser@mycompany.com"
msg["To"] = "myuser@mycompany.com"
msg.add_header("Message-ID", myid)
s = smtplib.SMTP("smtp.mycompany.com")
s.sendmail("myuser@mycompany.com", ["myuser@mycompany.com"], msg.as_string())

# Second message: reference the first message's ID so it gets threaded as a reply.
msg = MIMEMultipart("alternative")
msg["Subject"] = "test"
msg["From"] = "myuser@mycompany.com"
msg["To"] = "myuser@mycompany.com"
msg.add_header("In-Reply-To", myid)
msg.add_header("References", myid)
s = smtplib.SMTP("smtp.mycompany.com")
s.sendmail("myuser@mycompany.com", ["myuser@mycompany.com"], msg.as_string())
Note that it is up to the host generating the message ID to guarantee it is unique. This function gives you an RFC-compliant message ID and uses a datestamp to make it unique. If you are sending lots of mail that may not be enough, so you can pass it extra data that will get appended to the ID:
In [3]: import email.utils

In [4]: email.utils.make_msgid()
Out[4]: '<20141110055935.21441.10732@myhost.mycompany.com>'

In [5]: email.utils.make_msgid('extrarandomsauce')
Out[5]: '<20141110060140.21441.5878.extrarandomsauce@myhost.mycompany.com>'
I wasn't particularly careful with my first test message ID and sent a non-compliant one :) Gmail recognizes this and politely fixes it for you:
Message-ID: <545c12bc.240ada0a.6dba.5421SMTPIN_ADDED_BROKEN@gmr-mx.google.com>
X-Google-Original-Message-ID: testafasdfasdfasdfasdf
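If you want a cheap sanity check before sending, something rough like this does the trick (it's nowhere near full RFC 2822 message-id validation, just enough to catch a bare string like the one above):
import re

def looks_like_msgid(msgid):
  # Very rough check: angle brackets around something@something.
  # Not a real RFC 2822 msg-id parser.
  return re.match(r"^<[^<>@\s]+@[^<>@\s]+>$", msgid) is not None

looks_like_msgid("testafasdfasdfasdfasdf")                             # False
looks_like_msgid("<20141110055935.21441.10732@myhost.mycompany.com>")  # True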

Wednesday, November 5, 2014

Splitting an array into fixed-size chunks with python

If you're looking for a way to split data into fixed-size chunks with python, you're likely to run across this recipe from the itertools documentation:
from itertools import izip_longest

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
which certainly works, but why it works is less than obvious. In my case I was working with a small array where the data length was guaranteed to be a multiple of 4. I ended up using this, which is less sexy but more comprehensible:
[myarray[i:i+4] for i in xrange(0, len(myarray), 4)]
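To see why the recipe works, and how the two approaches differ when the length isn't an exact multiple, here's a quick illustration (Python 2, to match the snippets above): the same iterator appears n times in args, so izip_longest pulls n consecutive items per output tuple, padding the last one with fillvalue, whereas plain slicing just leaves the final chunk short.
from itertools import izip_longest

data = "ABCDEFG"

# grouper-style: one iterator, referenced three times
print list(izip_longest(*[iter(data)] * 3, fillvalue="x"))
# [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]

# slicing-style: last chunk is simply shorter
print [data[i:i+3] for i in xrange(0, len(data), 3)]
# ['ABC', 'DEF', 'G']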

Monday, October 27, 2014

Authenticode signing windows executables on linux

We had a particularly painful build and sign workflow that required multiple trips between linux and windows.  I looked around and found the following options for signing windows binaries on linux:
  • jsign, a java implementation
  • signcode from the Mono project, as suggested by Mozilla. It's in the mono-devel Ubuntu package.
  • osslsigncode, an OpenSSL-based implementation of Authenticode signing that uses curl to make the timestamp requests.
The Mozilla instructions are good for getting your keys and certs into a format that will work with these tools. Some minor additions to those are below:

# Extract the private key from the PFX
openssl pkcs12 -in authenticode.pfx -nocerts -nodes -out key.pem
# Convert the key to PVK format (prompts for a passphrase)
openssl rsa -in key.pem -outform PVK -pvk-strong -out authenticode.pvk
# Extract the certificates
openssl pkcs12 -in authenticode.pfx -nokeys -nodes -out cert.pem
# Append the cross cert so verification chains to a Microsoft root
cat Thawte_Primary_Root_CA_Cross.cer >> cert.pem
# Bundle the certs into an SPC
openssl crl2pkcs7 -nocrl -certfile cert.pem -outform DER -out authenticode.spc
# Securely delete the unencrypted key
shred -u key.pem
Once you're done here you have authenticode.pvk with your encrypted private key, and authenticode.spc with your public certs. Appending the cross cert is necessary to make signature validation work with some tools. Without it, the Windows GUI "Properties|Digital Signatures|Details" dialog will tell you "This digital signature is OK" but if you check with signtool verify on Windows, you'll find it isn't:
>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe

[snip]

SignTool Error: Signing Cert does not chain to a Microsoft Root Cert.

Number of files successfully Verified: 0
Number of warnings: 0
Number of errors: 1
I suspect the GUI uses the local cert store and/or APIs that automatically fetch the required cross cert, but signtool and third-party signature verifiers do not. With the cross cert added to the SPC as above, the signature verifies correctly and the output mentions the MS cross cert:
Z:\signing\windows>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe

[snip]

Cross Certificate Chain:
    Issued to: Microsoft Code Verification Root

[snip]

Successfully verified: my.exe
If you use Bit9 it's also worth checking that it will verify your binary using the dascli.exe tool:
>"C:\Program Files (x86)\Bit9\Parity Agent\DasCLI.exe" certinfo my.exe
File[C:my.exe]
Elapsed[630ms]
CertValidated[Y] Detached[N] Publisher[My Inc]
FileVerified[Y]

[snip]
So, back to signing on Linux. At first I tried installing mono and using "signcode". It claims to succeed:
$ signcode sign -spc authenticode.spc -v authenticode.pvk -a sha1 -$ commercial -n MyApp -t http://timestamp.verisign.com/scripts/timestamp.dll -tr 5 my.exe
Mono SignCode - version 3.2.8.0
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Enter password for authenticode.pvk: 
MY_GODDAM_PASSWORD_IN_CLEARTEXT
Success
And in the process it echoes your password in cleartext!?! This is something I was prepared to fix with a "read -s -p 'Password'" wrapper script like this guy (a Python sketch of that idea is below, after the osslsigncode commands), but the signature was no good. I could see it appended in a hex editor, but Windows didn't give me a Digital Signature tab in the GUI and signtool couldn't find it either:
>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe" verify /v /kp my.exe

Verifying: my.exe
SignTool Error: No signature found.

Number of files successfully Verified: 0
Number of warnings: 0
Number of errors: 1
It's possible that there's something weird about our exe that caused this to fail. Someone else reported a similar problem but later claimed it was due to a corrupted exe. In any case, not being particularly wedded to, or happy with, mono and signcode at this point, I tried osslsigncode, which worked fine and produced a valid signature.
# osslsigncode uses libcurl for the timestamp requests
sudo apt-get install libcurl4-openssl-dev
./configure
make
sudo make install
osslsigncode sign -certs authenticode.spc -key authenticode.pvk -n "MyApp" -t http://timestamp.verisign.com/scripts/timstamp.dll -in my.exe -out my_signed.exe
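As an aside, the cleartext password echo from signcode can be worked around with a wrapper along the lines of the "read -s" idea mentioned above. Here's a rough Python sketch of that idea; it assumes signcode will accept the password on stdin when stdin isn't a terminal (which is what the shell wrapper relies on too, so verify it against your Mono version), and the file names are just the ones from the examples above:
#!/usr/bin/env python
# Prompt for the PVK password without echoing it, then feed it to signcode
# on stdin. Pass the exe(s) to sign as arguments to this script.
import getpass
import subprocess
import sys

password = getpass.getpass("PVK password: ")
cmd = ["signcode", "sign",
       "-spc", "authenticode.spc",
       "-v", "authenticode.pvk",
       "-a", "sha1", "-$", "commercial",
       "-n", "MyApp",
       "-t", "http://timestamp.verisign.com/scripts/timestamp.dll",
       "-tr", "5"] + sys.argv[1:]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
proc.communicate(password + "\n")
sys.exit(proc.returncode)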
Update: After coming across this mozilla post, I suspect my problem with mono's signcode was that signcode may not support 64-bit, but I didn't go back to check.

Monday, October 6, 2014

Python: add an element to an array only if a filter function returns True

This post was going to be about a fairly obscure feature of Python I found, but is now a minor cautionary tale about trying to be too clever :)

I was looking for an elegant solution to the problem of appending elements to a list only if a filter function returns True. The long (and in retrospect much more readable) way to write this is something like:
filter_result = self.FilterFunction(response)
if filter_result:
  processed_responses.append(filter_result)
There is in fact a one-liner that can do this for you, but since it's fairly obscure it makes the code much harder to understand.
processed_responses += filter(None, [self.FilterFunction(response)])
This works because when the first argument to filter is None, the effect is to remove all items from the sequence that evaluate to False. In this case that means if self.FilterFunction(response) returns something falsy you get an empty list, and adding the empty list has no effect on processed_responses. If it returns something truthy, you append a single element.
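A quick illustration of the building blocks (Python 2, where filter returns a list; the "some_result" value is just a stand-in):
# falsy results are filtered out, truthy ones survive
filter(None, [None])             # -> []
filter(None, ["some_result"])    # -> ['some_result']

processed_responses = []
processed_responses += filter(None, [None])           # no-op
processed_responses += filter(None, ["some_result"])  # appends one element
# processed_responses is now ['some_result']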

Obvious huh?

Mocking out python OS specific imports

Testing python code that needs to run on a different OS is painful. A major part of the difficulty is that even though you can (somewhat) easily mock out the API calls used, you can't import the code successfully because the modules only exist on the target OS. Let's take an example of code that imports and calls functions from the win32api module. How do you mock out the import so you can test it on linux? I know of two main approaches.

One is the proxy module approach. Basically you define a module to hide all of the OS-specific imports behind, and do a conditional import in that module. So instead of having code like this:
import win32api
win32api.GetLogicalDriveStrings()
you do
import windows_imports
windows_imports.win32api.GetLogicalDriveStrings()
and then in windows_imports/__init__.py:
import platform

if platform.system() == "Windows":
  import win32api
  import winerror
  import wmi
Then inside your tests you need to create stubs to replace your API calls, e.g. for windows_imports.win32api.GetLogicalDriveStrings (a bare-bones sketch of that step follows). Theoretically this should be fairly straightforward, but when I started down this path it got fairly complicated and I struggled to make it work. In the end I gave up and settled on the second approach, further below.
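For reference, the stubbing step might look something like this minimal sketch (the return value and test body are made up, and in my case it got messier than this in practice):
import mock

import windows_imports  # the proxy package from above

def testListDrives():
  # create=True because win32api won't exist as an attribute on linux
  with mock.patch.object(windows_imports, "win32api", create=True) as win32api:
    win32api.GetLogicalDriveStrings.return_value = "C:\\\x00D:\\\x00"
    # ... exercise the production code that calls
    # windows_imports.win32api.GetLogicalDriveStrings() here ...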

The second approach, described here, is to delay the import of the OS specific code in your tests until after you modify sys.modules to stub out all the OS-specific modules. This has the distinct advantage of leaving your production code untouched, and having all the complexity in your test code. Using the mock library makes this much easier. Below is an example of mocking out a WMI call made from python.
import mock
import test_fixture
import unittest

class WindowsTests(unittest.TestCase):

  def setUp(self):
    self.wmimock = mock.MagicMock()
    self.win32com = mock.MagicMock()
    self.win32com.client = mock.MagicMock()
    modules = {
        "_winreg": mock.MagicMock(),
        "pythoncom": mock.MagicMock(),
        "pywintypes": mock.MagicMock(),
        "win32api": mock.MagicMock(),
        "win32com": self.win32com,
        "win32com.client": self.win32com.client,
        "win32file": mock.MagicMock(),
        "win32service": mock.MagicMock(),
        "win32serviceutil": mock.MagicMock(),
        "winerror": mock.MagicMock(),
        "wmi": self.wmimock
        }

    self.module_patcher = mock.patch.dict("sys.modules", modules)
    self.module_patcher.start()

    # Now we're ready to do the import
    from myrepo.actions import windows
    self.windows = windows

  def tearDown(self):
    self.module_patcher.stop()

  def testEnumerateInterfaces(self):

    # Stub out wmi.WMI().Win32_NetworkAdapterConfiguration(IPEnabled=1)
    wmi_object = self.wmimock.WMI.return_value
    wmi_object.Win32_NetworkAdapterConfiguration.return_value = [
        test_fixture.WMIWin32NetworkAdapterConfigurationMockResults()]

    enumif = self.windows.EnumerateInterfaces()
    interface_dict_list = list(enumif.RunWMIQuery())

HOWTO change the extension of lots of files while keeping the original filename

With rename you get all the power of Perl regex substitution in a simple interface. For example, to back up all the .yaml files in a directory you could use:
rename --no-act 's/(.*)\.yaml$/$1.yaml.bak/' *.yaml
Remove the --no-act to actually make the changes.

The next logical progression is to want to do this more than once, so adding a timestamp to the backup is desirable, but I couldn't think of a way to make rename do this (you can't put backticks in the regex, for instance). So here's a workaround:
find . -name '*.yaml' -exec mv {} {}.bak.`date +%Y-%m-%dT%H:%M:%S%z` \;
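If you'd rather avoid the shell quoting games entirely, here's a rough Python equivalent (a sketch, non-recursive, renaming files in the current directory only):
# Rename every .yaml file in the current directory to a timestamped .bak,
# mirroring the find/mv one-liner above.
import datetime
import glob
import os

stamp = datetime.datetime.now().strftime("%Y-%m-%dT%H%M%S")
for path in glob.glob("*.yaml"):
  os.rename(path, "%s.bak.%s" % (path, stamp))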