Thursday, July 14, 2016

Running modern python on Ubuntu LTS

The python version on your Ubuntu LTS may be slightly behind the latest, or years behind, depending on where you are in the release cycle. Here's how to run a newer python without interfering with the system one. Download the latest python source, then build and install it:
sudo apt-get install build-essential libreadline-dev libsqlite3-dev
./configure --enable-ipv6 --enable-unicode=ucs4
make
sudo make install
Your new python is now in /usr/local/bin/python2.7. To use it, specify it in any virtualenvs you create. Make it an alias so you never forget:
alias virtualenv='virtualenv --python=/usr/local/bin/python2.7'

Tuesday, July 12, 2016

Run a different command on an existing docker container using exec

To get a bash shell in a previously created container, start it as normal and then use exec (this assumes your original container can actually run successfully):
docker start [container id]
docker exec -it [container id] /bin/bash
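The same pattern works for one-off, non-interactive commands; a sketch, where the container name and the command are just examples:

```shell
# Run a single command inside the running container and print its output.
# "mycontainer" is a hypothetical name; container ids work as well.
docker exec mycontainer cat /etc/hostname
```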

Thursday, July 7, 2016

Creating a Google Cloud service account that can only create new objects in a single bucket

I wanted a service account that can only create new objects in a single bucket, and have those objects be publicly readable by default. Use case is a travis deployer that publishes build artifacts.
  1. create a service account. Currently this is under "IAM & Admin | Service Accounts" in the Google Cloud UI.
  2. In the IAM screen the new service account is over-privileged; remove all project-level privileges from it there (which causes it to disappear from the IAM list). We will grant it permission over the bucket only.
  3. Create your bucket, then make new objects world-readable by default (you can also set AllUsers in the UI):
    gsutil defacl ch -u AllUsers:R gs://mybucket
  4. Give your writer access to the bucket. It seems there is no way to limit the permission to create only (options are read/write/owner).
  5. Test the permissions of your service account:
    gcloud auth activate-service-account --key-file mysecretfile.json serviceaccountname
    gcloud auth list
    # Check your service account is the active account, then try copying to the bucket you authorized, and another bucket which should fail.
    gsutil cp test gs://mybucket
    gsutil cp test gs://someotherbucket
  6. You can then set the default object permissions for the bucket via the UI so that new objects are world readable by default.
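Step 4 above has no command; here is a sketch of how it might look with gsutil, where the service account email is a made-up example:

```shell
# Grant the service account write permission on the bucket's ACL.
# The email address is hypothetical; use the one from step 1.
gsutil acl ch -u travis-deploy@myproject.iam.gserviceaccount.com:W gs://mybucket
```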

Tuesday, June 7, 2016

Make test pypi the default pip installer

It's possible to make the testpypi index the default for pip, but still retrieve any dependencies not on testpypi from the production repo. You just need a pip.conf like this:
$ mkdir ~/.pip
$ cat ~/.pip/pip.conf
[global]
index-url =
extra-index-url =
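A filled-in sketch of that pip.conf, assuming the index URLs as they were at the time (test PyPI has since moved to https://test.pypi.org/simple/):

```ini
# Hypothetical ~/.pip/pip.conf; the URLs are assumptions from 2016.
# testpypi is the default index, production pypi fills in the gaps.
[global]
index-url = https://testpypi.python.org/pypi
extra-index-url = https://pypi.python.org/simple/
```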

Sunday, May 22, 2016

Lowe's OC821 Iris Outdoor Video Camera

Some quick links to help others find information about using the Lowe's OC821 outdoor video camera without paying for the overpriced Lowe's security monitoring system.

Honestly, though, it looks like this camera was designed to be used via the API from the Iris hub, which I don't want to pay for. I'm going to replace it with something (Dropcam or similar) that doesn't require ongoing fees and has a better phone app.

Wednesday, May 18, 2016

Bash default value environment variable that can be overridden

Often in bash scripts I want a constant that can be overridden: something I expect people to want to change, but that isn't worth creating command-line options for.

Here's how to do it:

: "${OVERRIDABLE:=thedefault}"

And it works like this:
$ bash ./ 
$ OVERRIDABLE="overridden" bash ./ 
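Spelled out as a complete script (the echo is just there to show the value):

```shell
#!/usr/bin/env bash
# Use the value of OVERRIDABLE from the environment if it is set and
# non-empty, otherwise fall back to "thedefault". The ":" builtin
# evaluates its arguments and discards them, so this line exists only
# for the side effect of the := assignment.
: "${OVERRIDABLE:=thedefault}"
echo "$OVERRIDABLE"
```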

Friday, May 13, 2016

launchd ThrottleInterval

Apple's documentation of the launchd options leaves a lot to be desired. It omits important details and is fairly ambiguous about lots of things. Various people are trying to document it themselves, so here's another addition, for ThrottleInterval.

The launchd.plist man page says:

ThrottleInterval
This key lets one override the default throttling policy imposed on jobs by launchd. The value is in seconds, and by default, jobs will not be spawned more than once every 10 seconds. The principle behind this is that jobs should linger around just in case they are needed again in the near future. This not only reduces the latency of responses, but it encourages developers to amortize the cost of program invocation.

What it really means is this:

By default jobs are expected to run for at least 10 seconds. If they run for less than 10 seconds, they will be respawned "10 - runtime" seconds after they die. Exit code is ignored, all that matters is runtime. If a job runs for more than 10 seconds then exits, it will be respawned immediately (assuming all other restart conditions are met).

So instead of just throttling how often a service gets restarted, ThrottleInterval also implies minimum runtime. Which is surprising to more than just me.

You'll see a message like this in the logs if the service dies inside the ThrottleInterval:

[1] ( Service only ran for 3 seconds. Pushing respawn out by 7 seconds.
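For concreteness, a minimal sketch of a plist that raises the window to 30 seconds, so a job that runs for less than 30 seconds gets its respawn pushed out accordingly. The label and program path are examples:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and program path are hypothetical examples -->
    <key>Label</key>
    <string>com.example.myservice</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/myservice</string>
    </array>
    <key>KeepAlive</key>
    <true/>
    <key>ThrottleInterval</key>
    <integer>30</integer>
</dict>
</plist>
```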