Wednesday, December 22, 2010
HOWTO add headers to the Django test client
To mess with the HTTP headers used by the Django test client, make your request with your custom headers in a dict. From inside a Django test method it looks like this:
self.client.get('/some/path/', **{'HTTP_USER_AGENT':'silly-human', 'REMOTE_ADDR':'127.0.0.1'})
Friday, December 17, 2010
Import multiple vcf files into gmail
You can import contacts into gmail - it currently accepts csv files or vcf files. Unfortunately you can only upload one file at a time, which isn't much use if you are trying to import a full phone backup of hundreds of vcf files. It turns out vcf files have their own internal structure, so you can just combine them all into one giant file (using cat on linux):
cat *.vcf > ../allinone.vcf
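If the backups are spread across subdirectories, a find-based variant (same idea, sketched) collects them all:
find . -type f -name '*.vcf' -exec cat {} + > allinone.vcf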
Sunday, December 12, 2010
HOWTO: Convert a darcs repository to mercurial
It seems the "easiest" way to convert a darcs repo to a mercurial repo is via git =( This uses darcs-fast-export, git-fast-import and hg convert. Other solutions such as tailor and "darcs2hg.py" exist, but they seem hard(er) to set up or have errors.
- Get darcs fast-export from bzr-fast-import:
git clone git://vmiklos.hu/bzr-fastimport
- The only thing you care about in the entire bzr-fast-import repo is "darcs-fast-export" (a python script) in bzr-fastimport/exporters/darcs.
- Checkout the darcs project
mkdir project; cd project; darcs get codebox:/code/project project.darcs; cd ..
- Convert darcs to git (in Ubuntu karmic git-fast-import isn't in PATH)
mkdir project/project.git; cd project/project.git; git init; ../../darcs-fast-export.py ../project.darcs | /usr/lib/git-core/git-fast-import; cd ..
- Convert git to hg
hg convert project.git project.hg; cd project.hg; hg update
Wednesday, December 8, 2010
Generate SSL certificates for openvpn with easy-rsa
Easy-rsa is distributed with openvpn (on Ubuntu anyway), and makes generating SSL certs a lot easier.
Here is typical usage:
cd /usr/share/doc/openvpn/examples/easy-rsa/2.0
[edit vars with your site-specific info]
source ./vars
./clean-all
./build-dh    # takes a long time, consider backgrounding
./pkitool --initca
./pkitool --server myserver
./pkitool client1
Keys and certs are written to the "keys" directory.
Tuesday, December 7, 2010
HOWTO rotate large numbers of images in Ubuntu
I needed a way for my wife to rotate a whole bunch of JPEG images. It had to be simple to use (i.e. GUI not command line).
I stumbled across a blog that suggested the nautilus-image-converter plugin, which worked perfectly. It has a simple right-click interface that allows you to rotate images in place or use a renaming scheme. It can also do resizing.
Brilliant.
Thursday, December 2, 2010
Apache and client side SSL certificate verification
To require client SSL certificate verification, add this to your apache config:
SSLVerifyClient require
SSLVerifyDepth 1
SSLCipherSuite HIGH:MEDIUM
SSLCACertificateFile /etc/ssl/ca_that_signed_client_certs.pem
And to log what is going on with the SSL client cert verification, use something like this:
ErrorLog /var/log/apache2/error.log
LogLevel info
CustomLog /var/log/apache2/access.log combined
CustomLog /var/log/apache2/ssl.log "%t %h %{SSL_PROTOCOL}x verify:%{SSL_CLIENT_VERIFY}x %{SSL_CLIENT_S_DN}x \"%r\" %b"
Tuesday, November 30, 2010
Commercial SSL certificate untrusted - what did I pay for?
I recently bought a commercial SSL certificate, and was slightly mystified as to why the browser was calling it untrusted. How could they possibly be selling certs that Firefox doesn't trust? After some head scratching I realised the answer was that I needed to install the intermediate certificates (provided by the CA) on the server side, to complete the chain of trust.
During the SSL certificate exchange the web server (in this case Apache) can provide the client with additional certificates to enable it to establish a chain of trust. Use the SSLCertificateChainFile directive in your site config, something like:
SSLCertificateChainFile /etc/apache2/ssl/ExternalCARoot1.crt
SSLCertificateChainFile /etc/apache2/ssl/CACompanySecureServerCA.crt
According to the apache help, you can cat these two together and just specify one file. Say the browser trusts RootCA1, it can check that RootCA1 signed ExternalCARoot1.crt, which signed CACompanySecureServerCA.crt, which signed my certificate. Without those intermediate certificates, the browser cannot establish trust.
Saturday, November 27, 2010
Making blogger look prettyish: removing the attribution footer and increasing the post width
The new templates provided by blogger go a long way to making it look prettier. There are IMHO a few fundamental problems. The first is the attribution footer gadget - that is nice for the original designer, but I don't need to advertise for them on my blog. A lot of people seem to be trying to change this behaviour.
To remove the attribution footer, search in your css for 'attribution' and use html comments to comment out those sections. Check with preview to see if they are gone. When you click 'save template' blogger will ask if you want to delete the attribution gadget. You can delete it, and it will stay gone.
Next, making the post wider. Blogger is stuck being optimised for small screen sizes no-one uses any more. To increase the post width, change the 'value' attribute of this tag (search for 'content.width'):
<b:variable default='930px' name='content.width' type='length' value='1000px'/>
And to change the width of your gadget panel, change value of:
<b:variable default='360px' name='main.column.right.width' type='length' value='370px'/>
Wednesday, November 24, 2010
Set file modification time of a JPEG to the EXIF time
After editing a photo, it is nice to be able to set the file modification time back to its original so filesystem date sorting is still sensible. This can be achieved by reading the "Exif.Photo.DateTimeOriginal" or "Exif.Image.DateTime" out of the JPEG header. exiv2 will do this for you:
exiv2 -T rename *.JPG
To do every file recursively under a directory, cd into the directory and use this:
find . -type f -iname "*.jpg" -print0 | xargs -0 exiv2 -T rename
Monday, October 4, 2010
HOWTO do a DNS zone transfer
Use dig to get a list of nameservers and then perform a DNS zone transfer:
dig -t NS transfer.me.com
dig -t AXFR transfer.me.com @ns1.transfer.me.com
Monday, September 27, 2010
World meeting time planner
Planning a meeting or phonecall across multiple timezones can be tough.
I've used a few online tools, but my current favourite is a gnome app called slashtime by Andrew Cowie. I saw Andrew speak at Linux Conf Au in 2009 on GUI design, using slashtime as an example, and it was a great talk.
Highly recommended. It would be great to see a Debian package so it is apt-gettable.
Sunday, September 19, 2010
HOWTO list windows shares from a linux box using smbclient
To get a list of all SMB shares exposed by a windows box, use smbclient:
smbclient -L my.windows.host --user=myfirst.mylast
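Once you know a share name, you can connect to it interactively with the same tool (the share name here is a made-up example):
smbclient //my.windows.host/myshare --user=myfirst.mylast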
Friday, September 17, 2010
jQuery: host locally or use googleapis.com CDN?
There are two main ways to host your blob of jQuery minimised javascript: on your own webserver, or on Google's. Which is better? A large amount of time has been spent debating that very topic.
What are the pros and cons of using Google?
Pros
- Fast, geographically distributed, reliable CDN
- Free, saves using your bandwidth
- If many people use the Google version of jQuery (and they do), it is highly likely the user will have it in their browser cache, given the long expiry times Google sets when they serve the file (although there are some caveats). This means the user probably won't have to request the file at all. Even if users regularly clear their browser cache it is likely to be cached by a proxy.
Cons
- The CDN might be down, or slow, which will impact your site.
- If there is no Internet connection it won't work (definitely not the best choice for internal webservers)
- If you already have other javascript bundled into a minimised file (common practice), it is an extra web request that needs to be made, when you could just include it in the bundle.
- You are giving Google information about your customers (i.e. forcing them to make a request to Google). Given the large amount of caching this will not be comprehensive, and there is a reasonable chance you are running Google analytics anyway.
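A common middle ground (my sketch, not from the discussion above; the local path is a placeholder) is to try the CDN first and fall back to a local copy:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
  // If the CDN request failed, window.jQuery is undefined; load a local copy instead.
  if (typeof jQuery == 'undefined') {
    document.write('<script src="/static/js/jquery.min.js" type="text\/javascript"><\/script>');
  }
</script>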
Using the django-admin time and date widgets in your own django app: too hard, use jQuery
The django doco encourages you to use the django admin widgets (such as the time and date picker) in your own apps. Unfortunately actually doing this turns out to be more work than using external widgets like the excellent jQuery-ui.
jQuery comes with good tutorials and doco. jQuery-ui, which builds on jQuery, has a great array of helpful widgets. If, like me, you just want a date picker widget, it is super easy.
You'll want to add something like the following to your template (in your base template if it will be used on every page):
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js" type="text/javascript"></script>
<script type="text/javascript">
  jQuery(function() {
    jQuery('.vDateField').datepicker({
      dateFormat: 'yy-mm-dd',
      constrainInput: 'true',
      maxDate: '+1m',
      minDate: '0d'
    });
  });
</script>
To make it look pretty jQuery has you covered, with ThemeRoller. Pick a style you like, customise it if you need to, and download your new CSS. Drop that in your template too.
Tuesday, September 14, 2010
Taking a disk image and creating a hash of the data with one read of the source disk
I have previously blogged about how to take a disk image over the network. The more common case is you want to make a forensic copy of a locally-connected disk. Usually this is a disk you connect using a write blocker, such as one from wiebetech, to prevent any changes being made to the source disk.
This command takes a forensic image and a hash of the original disk at the same time, requiring only one read of the source disk:
mkfifo /tmp/disk.dat
sha1sum /tmp/disk.dat &
dd bs=256k if=/dev/sdc | tee /tmp/disk.dat > /mnt/destination/disk.dd
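As a sanity check, you can hash the destination image afterwards and compare it against the digest sha1sum printed from the fifo:
sha1sum /mnt/destination/disk.dd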
Monday, September 13, 2010
Chrome (chromium) browser makes random 10 character HEAD requests on startup
I recently saw this in a proxy log:
http://yyvssjupua/ 192.168.20.1/- - HEAD - from squid.
http://mskwuzkkpu/ 192.168.20.1/- - HEAD - from squid.
http://dfoigxiyyl/ 192.168.20.1/- - HEAD - from squid.
What the? After talking to the user, who told me he was running chromium, I found out this was legit chromium behaviour. Apparently some ISPs will send you to a page with their advertising if you visit a url that has a DNS lookup failure. Bastards. To combat this, on startup chrome makes three requests to random domains that are guaranteed to generate lookup failures. If it gets a HTML page back, chrome knows to disable the 'did you mean' functionality (which asks whether you meant to visit the host or perform a search for it) so it doesn't keep pointing you at the ISP's ad page. Smart!
Saturday, September 11, 2010
Cisco 'show everything' and password cracking
To do a 'show everything' on a cisco device, use 'show tech-support'. This includes show run, process listings, interface info, and basically every bit of information you can get through running other commands. Note that user type 7 passwords (see below) are automatically sanitised from the output.
Cisco still uses a terrible password encryption scheme for user passwords that can be trivially cracked. The following user password uses the weak encryption (you can tell by the number 7 preceding the hash):
username jdoe password 7 07362E590E1B1C041B1E124C0A2F2E206832752E1A01134D
While user passwords are encrypted using this weak scheme, enable passwords are MD5 hashes that look like this (note the 5):
enable secret 5 $1$iUjJ$cDZ03KKGh7mHfX2RSbDqP.
Cisco is stuck using the reversible encryption scheme for the near future due to the need to support certain authentication protocols (notably CHAP).
Enable (MD5) passwords can be cracked using standard tools such as John the Ripper or rainbow tables.
Type 7 passwords can be cracked with the following simple perl script.
#!/usr/bin/perl -w
# $Id: ios7decrypt.pl,v 1.1 1998/01/11 21:31:12 mesrik Exp $
#
# Credits for original code and description hobbit@avian.org,
# SPHiXe, .mudge et al. and for John Bashinski
# for Cisco IOS password encryption facts.
#
# Use for any malice or illegal purposes strictly prohibited!
#
@xlat = ( 0x64, 0x73, 0x66, 0x64, 0x3b, 0x6b, 0x66, 0x6f, 0x41,
          0x2c, 0x2e, 0x69, 0x79, 0x65, 0x77, 0x72, 0x6b, 0x6c,
          0x64, 0x4a, 0x4b, 0x44, 0x48, 0x53, 0x55, 0x42 );
while (<>) {
    if (/(password|md5)\s+7\s+([\da-f]+)/io) {
        if (!(length($2) & 1)) {
            $ep = $2; $dp = "";
            ($s, $e) = ($2 =~ /^(..)(.+)/o);
            for ($i = 0; $i < length($e); $i += 2) {
                $dp .= sprintf "%c", hex(substr($e, $i, 2)) ^ $xlat[$s++];
            }
            s/7\s+$ep/$dp/;
        }
    }
    print;
}
Booting and/or mounting a raw disk image under windows
CERT has developed a cool tool called LiveView that allows you to boot a raw disk image (such as one produced by 'dd') using VMWare. LiveView preserves disk integrity by writing all disk changes to a separate file. The tool works under windows and linux, and boots a range of Windows versions.
Alternatively, if you just want to mount a disk image in windows (something that is trivial in linux using the loopback device), there is a tool called imdiskinst that can help you out.
Thursday, September 9, 2010
HOWTO dump out all email attachments from a Microsoft PST archive
On ubuntu install 'readpst' and 'uudeview', then:
readpst -o mbox -j4 mpst.pst
Which will use 4 processes to give you a bunch of mbox files in the 'mbox' directory. Then, extract all the attachments:
cd mbox
uudeview -p ../pst_attachments *
Wednesday, August 25, 2010
Where there is awk, there is sed
I couldn't do a post on awk without also quickly covering sed. Bruce Barnett has written a great tutorial for sed that is worth reading.
Sed is your friend for applying regexes to files (well, streams really, since it is the 'stream editor'). The regex syntax is the same as for vim, and since that is my primary editor I only tend to use sed where the files are large and would take ages to load into vim.
Some quick examples to illustrate what I'm talking about:
sed s'/user root/user nothingtoseehere/g' < /var/log/auth.log
sed s'!session closed for user \([^ ]*\)!\1 closed a session!g' < /var/log/auth.log
sed s'!session closed for user \([^ ]*\)!&, allegedly!g' < /var/log/auth.log
Escaped parentheses \(\) capture a value, and & refers to the whole match. You can also use sed like grep: by default it prints every line, but that can be disabled with "-n", and you can cause matching lines to be printed by appending a "p" option:
sed -n 's/pattern/&/p' < file.txt
OR, if we aren't making a substitution:
sed -n '/pattern/p' < file.txt
OR, you can just use grep -e, which is less confusing:
grep -e "pattern" file.txt
Rewriting a file in-place, making a backup with the .bak extension:
sudo sed -i.bak s'!XKBOPTIONS=""!XKBOPTIONS="ctrl:nocaps"!' /etc/default/keyboard
AWK - selecting columns for output
AWK is a super-handy old-skool UNIX tool. There are plenty of good tutorials out there for it, but I'm jotting down some basic uses here for my own benefit.
I have used AWK mainly to select and print certain columns from input, like this, which will print the 1st and 7th columns:
awk '{print $1,$7}' < /var/log/syslog
Column break-up is determined by the value of the FS (input field separator) variable, which is space by default (in POSIX mode this actually means space and tab, but not newline). You can change this with:
awk 'BEGIN {FS=";"}{print $1,$7}' < /var/log/syslog
OR
awk -F: '{print $1,$7}' < /var/log/syslog
The output from awk is separated by the OFS (output field separator) variable, also a space by default. To write out CSV you might use:
cat /var/log/syslog | awk -F: 'BEGIN{OFS=","}{print $1,$3}'
There is plenty more you can do with awk, including simple programming tasks such as counting and summing. cut is a simple alternative if all you want to do is cut fields from an input stream, but it doesn't take much to hit its limitations. Consider the output of last, the first two columns of which look like this:
user     pts/0
user     pts/1
reboot   system boot
This awk command will print the first two columns correctly:
last | awk '{print $1,$2}'
Whereas this cut command:
last | cut -d" " -f1-5
won't produce the first two columns cleanly, and we need to specify 5 fields to try to skip the empty ones. The problem is that there are variable numbers of spaces between the username and the tty line.
Monday, August 23, 2010
Tips for hardening apache on ubuntu for django deployment
There is good doco for deploying django on apache with mod_python or wsgi. Here are a couple of extra tips for Ubuntu. First, edit /etc/apache2/conf.d/security and enable:
ServerTokens Prod
ServerSignature Off
TraceEnable Off
And in your apache config, add this inside the "Location /" directive with the other django stuff:
Options -Indexes -Includes -MultiViews +SymLinksIfOwnerMatch
Take a look at Apache's security tips, and it is also worth understanding how the Apache configuration directives (Directory, Location, etc.) work.
Friday, August 20, 2010
SSH client config
For Internet-connected hosts, running SSH on a different port is a really good idea since it cuts down the noise of authentication attempts from bots looking for weak passwords. Running on a different port is not a substitute for a secure configuration (ie. no root login, key-only auth) - it is purely useful in cutting down log noise.
Unfortunately you have to remember which port you chose :) To minimise the hassle you should add entries in your client /etc/ssh/ssh_config:
Host nickname
    Port 43210
    HostName mysshserver
    User myuser
Now you can use "ssh nickname" and ssh will translate that to:
ssh -p 43210 mysshserver
Monday, August 16, 2010
Installing Windows 7 onto a netbook using USB
I wanted to install Windows 7 onto a netbook to replace an aging desktop as my only windows-on-metal box. This is a breeze with modern linux distros, but is of course far harder than it needs to be for windows.
I had a crack at using unetbootin on Linux with the Windows 7 ISO, despite a suspicious lack of mention of support for windows, and sure enough it didn't boot (unetbootin didn't give me any errors, it just didn't boot).
More googling turned up the Windows 7 USB/DVD Download Tool, which converts a Windows install ISO into a bootable USB installer - exactly what I wanted. After half an hour of downloading and installing dependencies (due to the lack of windows package management and a bizarre need to do the Genuine Windows check), I had the tool installed and it happily created a bootable USB for me.
This one worked perfectly, and dropped Windows 7 starter onto the netbook.
Sunday, August 15, 2010
HOWTO call a python superclass method
Use the python 'super' method to call a superclass __init__:
class Foo(Bar):
    def __init__(self):
        super(Foo, self).__init__()
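A minimal self-contained sketch (Bar here is a stand-in base class, not from the original post):
class Bar(object):
    def __init__(self):
        self.ready = True  # state set up by the superclass

class Foo(Bar):
    def __init__(self):
        super(Foo, self).__init__()  # runs Bar.__init__, so Foo().ready is True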
Saturday, August 14, 2010
Working with Amazon S3 storage
I was using duplicity on Amazon S3 storage for backup, but gave it up because it was waaaay too slow (I believe the slowness was mainly duplicity, rather than network traffic or S3). So, time to delete the data from S3. I logged onto the Amazon S3 web interface, but found it pretty useless: I had hundreds of files to delete and there was no way to 'select all', or even delete a whole bucket at once. In fact, I couldn't even get the web interface to delete a single file for me. Seems like it is in Amazon's interest to make deleting data hard...
So I installed the 's3cmd' package on Ubuntu, which worked a treat. Setup with:
s3cmd --configure
Then to delete all the data in a bucket:
s3cmd del s3://data.bucket.name/*
s3cmd rb s3://data.bucket.name
Thursday, August 12, 2010
Python named tuples
Python named tuples are a good way to make your code more readable when using tuples. Instead of using numerical dereferences like:
In [49]: c = ('abc','adefa','aaaa')
In [50]: c[0]
Out[50]: 'abc'
You can create a namedtuple class:
In [51]: from collections import namedtuple
In [53]: MyTup = namedtuple('MyTup', 'first second other')
In [54]: t = MyTup("aa", "bb", other="cc")
In [55]: t.first
Out[55]: 'aa'
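A named tuple is still an ordinary tuple, so indexing and unpacking keep working; continuing the session above (_asdict is a standard namedtuple helper):
In [56]: t[0]
Out[56]: 'aa'
In [57]: t._asdict()['other']
Out[57]: 'cc'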
Postfix internal network information in 'Received' header
With the default Postfix configuration, a "Received" header line is added for every hop, which is fine, but I was surprised to learn a line is also added for mail sent to the local Postfix instance, i.e. 127.0.0.1. It looks something like this:
from mybox.internal.lan (localhost [127.0.0.1])
Assuming this is your last hop before the Internet, you are best off just adding your public dns name as the first entry in /etc/hosts (it also gets appended to the Message-ID header value).
However, if you have more internal mail hops you don't want the world knowing about, you will need to create a header_checks rule that removes them (bear in mind this will make diagnosing problems harder...). Put a line like this in /etc/postfix/main.cf:
header_checks = regexp:/etc/postfix/header_checks
And put your regexes in /etc/postfix/header_checks:
/^Received:\sfrom\smybox.internal.lan/ IGNORE
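After editing main.cf or the header_checks file, reload Postfix so the change takes effect (regexp tables do not need postmap):
sudo /etc/init.d/postfix reload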
Wednesday, August 11, 2010
Adding a defeat for a DNAT rule to allow SSH packets to hit the local box
I've been using SSH to pump packets down a VPN like this:
iptables -A PREROUTING -t nat -d $external_ip -j DNAT --to-destination $tun
iptables -A POSTROUTING -t nat -s $tun -o eth0 -j SNAT --to-source $external_ip
The problem is I need SSH packets to hit the local interface (i.e. not go down the VPN). Solution: add a REDIRECT rule before the DNAT in the PREROUTING chain:
iptables -A PREROUTING -t nat -d $external_ip -p tcp --dport 22 -j REDIRECT
The REDIRECT target sends packets to localhost (really the same as DNAT with --to-destination 127.0.0.1).
HOWTO change the preferred application for PDF on Ubuntu
I recently installed adobe reader, and it stole "preferred application" status for PDFs away from evince.
To check the default for PDF use:
xdg-mime query default application/pdf
Which, in my case was "AdobeReader.desktop". To change it:
xdg-mime default evince.desktop application/pdf
HOWTO Setup OpenVPN on Ubuntu
The Ubuntu community doco has a decent HOWTO that I won't reproduce, and the O'Reilly article has a good summary of the openssl commands you need to generate the certs (or you could read my openssl posts). Just a few extra notes.
If you want to tie a client to a particular VPN ip address, create a file in:
/etc/openvpn/ccd/clientname
where "clientname" is the Common Name from the certificate your client uses.
In this file put:
ifconfig-push 192.168.1.8 192.168.1.5
This will tie the "clientname" box to 192.168.1.8. There appears to be a lot of confusion on the web and in forums as to what should be in the second parameter. The doco states it is the remote-netmask. In this case "192.168.1.5" is the local end of the point-to-point link, which works. If the doco is right "255.255.255.0" might be more correct. As an aside, the address allocation is in successive /30 subnets (so last octet is 1,2,5,6,9) to be compatible with Windows.
If you also want all traffic from the client to exit via the VPN (ie. have the VPN as the default route) add this special sauce after the ifconfig-push line:
push "redirect-gateway def1 bypass-dhcp"
This tells openvpn that you want to use the VPN as the default gateway but still use local DHCP.
Monday, August 9, 2010
HOWTO create an LVM-based virtual machine with KVM/libvirt
A quick google didn't turn up any well-researched benchmarks for performance of VM image files vs. LVM-based VMs (see here for an attempt), but it makes sense to me to eliminate the file loopback overhead by using LVM-based VMs.
There are a few good HOWTOs out there, which I have boiled down into basics:
First, build your VM using JeOS
sudo vmbuilder kvm ubuntu --dest=/data/kvm/temp-ubuntu-amd64 --bridge=br0 --mem=2048 -c mybox.vmbuild.cfg
I tried the vmbuilder '--raw' option and pointed it at my LVM, but vmbuilder seemed to silently ignore it. So we will have to convert the image file instead.
The "raw" output option for qemu-img should do the trick, but I believe I hit a known bug, because I got:
qemu-img: Error while formatting '/dev/vg01/mybox'
Using "host_device" worked (you could also just convert to raw then dd):
qemu-img convert disk0.qcow2 -O host_device /dev/vg01/mybox
You then need to update your KVM definition file to point the hard disk at the logical volume.
Thursday, August 5, 2010
Linux runlevel configuration - start services on boot for Red Hat and Ubuntu
To configure runlevels on Ubuntu use the 'update-rc.d' tool. For example to ensure the '/etc/init.d/blah' script gets started and stopped in the normal runlevels, use:
sudo update-rc.d blah defaults
The equivalent tool on Red Hat is 'chkconfig', use:
sudo chkconfig blah on
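To sanity-check the result, list the rc symlinks that were created (first command for Ubuntu, second for Red Hat):
ls /etc/rc?.d/ | grep blah
chkconfig --list blah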
Thursday, July 15, 2010
HOWTO reset mysql database root password
Despite having a pretty decent password storage system, I occasionally find myself without a root password for my MySQL dev servers. MySQL has a procedure for resetting the password that involves using an init file. Personally I like this (similar) solution better because you don't need to worry about syntax errors in your SQL since you have an interactive prompt. Here is the summary:
/etc/init.d/mysql stop
mysqld_safe --skip-grant-tables &
mysql -u root mysql
UPDATE user SET password=PASSWORD("ualue=42") WHERE user="root";
FLUSH PRIVILEGES;
/etc/init.d/mysql restart
Saturday, July 10, 2010
Font size too small or too big in Gnome or Xfce
I've recently had some font-size battles. Ever had the problem where you get your resolution right, but the font is either tiny or so giant you can barely use the menu dialogs?
The solution: DPI.
For gnome: Go to System->Preferences->Appearance->Fonts->Details and change the 'resolution: dots per inch' value.
For xfce: Go to Applications->Settings->Appearance->Fonts and change the 'Custom DPI Settings' value.
The commands
xdpyinfo | grep resolution
and
xrdb -query
are useful for determining your current DPI values.
You can also fiddle with the system font settings, and in Firefox you can change Edit->Preferences->Content->Fonts&Colors->Advanced->Minimum font size. You can also change Firefox's zoom level with Ctrl-Alt-+ and Ctrl-Alt--. Ctrl-Alt-0 will set the zoom back to the default. By default the zoom settings are remembered on a per-site basis, but you can change this in about:config.
For your terminal fonts, you can use the fixed width system font settings, or change your terminal profile with Edit->Profile Preferences.
Friday, May 28, 2010
Red Hat RHCE course learnings
I recently did the RH300 fast-track Red Hat Certified Engineer course, and learnt a few things. Some of the things I learnt were specific to red hat, some were genuinely new, and some were things I knew but had forgotten. Here is a summary, in no particular order.
Red Hat specific
There is a free version of Satellite Server called Spacewalk that works for Fedora and Centos clients. Satellite requires Oracle (suck), but Red Hat is working on a mysql/postgres solution. Incidentally our instructor hinted that RHEL desktop might be dropped soon in favour of free Fedora desktops for enterprise customers.
Red Hat is looking at removing the --no-deps option from RPM because too many people break their install with it.
To update your yum repo with createrepo, you should delete the repodata directory. Since creating a repo is slow with many packages, you should separate out your corporate packages that require regular changes into a separate repo.
You can do a kickstart from GRUB directly by adding some kernel params (the ksdevice removes a prompt on devices with multiple interfaces, and noipv6 cuts out a long ipv6 timeout):
ks=http://my.kickstart/box.cfg ksdevice=eth0 noipv6
The easy way to configure network services on Red Hat is to use the 'setup' ncurses UI, which calls out to the relevant Text User Interfaces (TUIs) such as 'system-config-network-tui'.
Configure and manage LVM with system-config-lvm. It is an impressive GUI, and makes resizing partitions and file systems really easy.
Install the setroubleshoot server and use the GUI to find selinux problems (it usually gives you a sensible solution):
sealert -b
To change a selinux context to a reference context, ie. give the file the same context as /home:
chcon --reference=/home /home/new
You can also change a selinux context by specifying one of the many pre-built contexts available from the targeted policy in the /etc/selinux directory:
chcon -t public_content_t /shared
chcon -t samba_share_t /windowsshare
To check selinux status (e.g. is it enforcing?):
sestatus
To see the selinux status of files/processes, add 'Z' to the usual tools:
ls -lZ
ps -auxZ
You can also see status and change it with (also handles iptables):
system-config-securitylevel-tui
iptables rules are stored in /etc/sysconfig/iptables and can be edited directly, or rules applied and then saved with 'service iptables save'.
If you have more than one mail agent installed (e.g. sendmail and postfix), you can switch between them with the 'system-switch-mail' ncurses gui, which is a frontend to the alternatives system.
Other Random Learnings
The kernel limits the number of partitions possible on IDE and SCSI/SATA disks in different ways (in addition to the MBR limitations of 3 primary and 1 extended). For IDE (/dev/hd*), the max number of partitions is 63, for SCSI/SATA (/dev/sd*) it is 15.
To view and set disk labels for ext2/3:
dumpe2fs /dev/sda1
e2label /dev/sda1 label
dumpe2fs will also show you the mount defaults for the partition, which is handy to know when /etc/fstab just says 'defaults'. You can change these defaults with 'tune2fs'; this adds the 'acl' option:
tune2fs -o acl /dev/sda1
To get disk labels and UUIDs:
blkid
Then if you want to know which device has that label or UUID:
findfs UUID="sdfkshdfkjshdkjshdf"
Backup a disk partition table with:
sfdisk -d /dev/sda > sda.backup
To find out which process is holding a mount point:
fuser -m /mnt/point
and to kill all those processes (be careful):
fuser -km /mnt/point
To get serial numbers and other information from the system management BIOS (handy when you are in a different location to a server):
dmidecode
x86info has a nicer formatted version of the /proc/cpuinfo CPU information.
To see if a NIC has a link I usually use ethtool, but that isn't installed by default, so if it is unavailable:
ip link show eth0
You can get access to the bootloader of a xen machine with:
xm create -c xenname
Tuesday, May 25, 2010
Automagically install SSH public keys into authorized_keys on a remote machine
There is a handy little utility called 'ssh-copy-id' that simplifies getting your ssh public key into a remote machine's authorized_keys. The syntax is:
ssh-copy-id [-i [identity_file]] [user@]machine
It handles creating the ssh directory and authorized_keys file on the remote side (if necessary), and setting the right file permissions. Neat!
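For example, to push the default RSA public key (user and host are placeholders):
ssh-copy-id -i ~/.ssh/id_rsa.pub myuser@mysshserver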
yum/rpm vs. apt/dpkg (apt wins)
A comparison of Yum (RHEL 5.4) vs. Apt (ubuntu lucid)
Yum/RPM Pros:
'rpm -V' is very cool. It allows you to verify files that were installed from RPMs, which includes comparing owner, group, mode, md5, size, major/minor device numbers, symlink strings, and modification times to what was originally installed by the rpm. Use 'rpm -V -a' to verify all installed packages. Despite some wishlist bugs being filed, the closest dpkg comes is 'debsums', which only does hash comparisons.
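For example (typical invocations, not from the original post):
rpm -Va           # verify every installed package
debsums --changed # dpkg side: list files whose md5sums differ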
Yum has $releasever and $basearch macros that can be put into yum repo configs. It would be great if Debian could do the same so $releasever would expand to 'lucid' or whatever distro you are running. This would make distributing standard apt sources.list files across an organisation running multiple versions much simpler.
Yum has 'localinstall', which allows you to install an rpm but satisfy its dependencies from the yum repo - neat! I believe the same is possible with apt, but it would look like:
dpkg -i single.deb; apt-get -f install
which is not exactly intuitive.
Yum/RPM Cons:
Yum has no command-line (tab) completion, which is super annoying. I found I had to either do a search for my package first, or use commands like:
yum install http-*
which resulted in the installation of heaps of stuff I didn't need.
The decision to replace or not replace config files during package upgrades is left to the individual package maintainers. This is a serious bad idea. The two strategies are:
- 'rpmsave': install the new config, back the old one up as '.rpmsave'; and
- 'rpmnew': leave the old config in place and put the new one in as '.rpmnew'.
When you uninstall a package with yum, the removal of rpms that were installed as dependencies is left up to the individual package post scripts. This means that you might install 10 dependencies, but only uninstall 8.
In contrast, Apt will uninstall each package it installed and leave configuration files in place (unless you specify --purge). Apt also has 'autoremove' (there is no yum equivalent), which allows you to remove any packages that are no longer needed. Packages that fall into this category are usually libraries that were dependencies for multiple applications, all of which have since been removed.
If you are running a 64-bit OS and point to a repository that has both 64-bit and 32-bit packages in it, apt is smart enough to realise that you probably want to install the 64-bit package. Not so with yum:
yum install openssl
might get you either the 32 or the 64 version. Worse still, running yum install openssl might try to install the 32-bit version over the top of the already installed 64-bit version.
The redhat community seems to be claiming this isn't a bug. OK, it is just a design flaw then, one that is going to start hurting a lot when the masses move to 64bit.
Yum package groups have long annoying names containing spaces that need to be escaped on the commandline. They also have 'optional' listed packages, but lack any way to install them (apart from copy paste). Apt has the same concept except it just uses meta-packages. The metapackages look and behave exactly the same as regular packages, which means you don't need a groupinstall command and they turn up in search results.
apt-get install --install-recommends
will get you all the recommended packages (you can also set this permanently in /etc/apt/apt.conf). In addition Debian has the concept of 'suggested' packages; you can also change apt.conf to install these automatically.
'yum search ftp' will turn up packages with an ftp URL (e.g. libsoup, which has nothing to do with ftp). Whoops! Apt does the right thing and searches the package title and descriptions, not the URL.
Yum search doesn't seem to handle multiple terms very well, it does an OR where it should do an AND.
Apt is way faster - the speed discrepancy is particularly noticeable when using search commands.
Sunday, April 11, 2010
Bash if statements
I'm constantly forgetting the bash if statement syntax, and it takes a while to find the right example on the web or to read the man page. Some examples below.
One liner to get the number of seconds since a file was modified:
if [ -f /var/lib/puppet/state/puppetdlock ]; then echo $[`date +%s` - `stat -c %Y /var/lib/puppet/state/puppetdlock`]; else echo 0; fi
The same if statement with line breaks:
if [ -f /var/lib/puppet/state/puppetdlock ]
then
    echo $[`date +%s` - `stat -c %Y /var/lib/puppet/state/puppetdlock`]
else
    echo 0
fi
Thursday, April 8, 2010
Microsoft's attempt to fix windows packaging: The Common Opensource Application Publishing Platform (CoApp)
I read about Garrett Serack's project to fix the mess that is windows packaging (or lack of packaging) with The Common Opensource Application Publishing Platform (CoApp) and was pretty surprised. It seems Garrett is a long time linux user who has somehow found himself working at Microsoft, and has come up with this project to try and fix things. His post is a pretty good description of just how bad packaging is on windows, and provides good background on how a lack of standardisation has got us into dependency hell. In his words:
Frankly it's taken an extremely long time to convince the powers-that-be at Microsoft that Linux's package management is stellar compared to Windows.
and
My intent is to completely do away with the [current Windows application] practice of everybody shipping every damn shared library.
Reading some of his responses to the slashdot comments, some interesting tidbits came out. He is planning to use WiX to chain MSI's together to handle dependencies, and Windows SxS to handle multiple versions of libraries.
Will it work? Who knows, it is a massive undertaking. I imagine some open-source community members might have to do some soul searching to figure out if they want to contribute to a project that will make windows better, and presumably make Microsoft more money.
Django formset.as_table() displays vertically only
Like other people, I want to be able to display a formset as an ordinary 'horizontal' table, i.e. where all headings are in their own row, separate from the data:
<table>
  <thead>
    <tr><th>column1</th><th>column2</th></tr>
  </thead>
  <tbody>
    <tr><td>form1.value1</td><td>form1.value2</td></tr>
    ...
  </tbody>
</table>
Whereas the django formset.as_table() method displays as below, with the heading on the same row as the data:
<table>
  <tr><th>column1</th><td>form1.value1</td></tr>
  <tr><th>column2</th><td>form1.value2</td></tr>
</table>
The solution is a pretty nasty gob of template code, since to fix it you need to make all the error handling and form magic explicit. Django devs: Let's have a new method or parameter to display the formset horizontal rather than vertical.
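For the record, here is a minimal sketch of that template code (my own reconstruction, not from the original post; a real version also needs per-field error handling and {{ formset.management_form }} rendered inside the form):
<table>
  <thead>
    <tr>{% for field in formset.forms.0 %}<th>{{ field.label }}</th>{% endfor %}</tr>
  </thead>
  <tbody>
    {% for form in formset.forms %}
    <tr>{% for field in form %}<td>{{ field.errors }}{{ field }}</td>{% endfor %}</tr>
    {% endfor %}
  </tbody>
</table>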
Wednesday, April 7, 2010
OpenLDAP completely broken on install under Karmic
I hate OpenLDAP. Thanks to a stupid decision by OpenLDAP packagers, the server is completely unusable after install on Ubuntu. We went from a working debconf on jaunty to a completely broken install on karmic (and it doesn't look like it will be fixed in lucid either).
An OpenLDAP install on karmic no longer creates a database, installs schemas, or creates an admin user. There are basically no good HOWTOs for this initial configuration, and all the official documentation is wrong - it says to use dpkg-reconfigure, which no longer works. A thread on the ubuntu forums is the best you will get.
All I wanted to do was move my existing (working) ldap database to another server, which should have been a 5-minute job with slapcat, slapadd.
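The move itself should only be two commands (database number 1 assumed, and slapadd run with slapd stopped):
slapcat -n1 > data.ldif
slapadd -n1 -l data.ldif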
I finally got it working using the instructions in the thread to create a database, then added the users using 'slapadd -n1' as originally intended (I never got to import the original config '-n0' successfully). There is a whole lot of black magic involved: I don't understand what installing the schemas does, and I only vaguely understand what the ldif file does. The error messages provided by openldap might as well be in another language because they are completely uninformative.
Once I had a working setup I was getting a prompt with 'I have no name!'@box. Despite there being a lot of bullshit about this being caused by permissions on ldap config files in forums, it turns out if you install 'nscd' it magically goes away. I have no idea why, but nscd also seems to be the antidote to other ldap bugs, so you might as well have it :)
It's enough to make me want to use a windows DC and likewise....
Update: according to a bug report, this doco is now up-to-date, with the same information as in the thread.
Tuesday, April 6, 2010
A quick LVM HOWTO for Ubuntu linux and SAN disk LUNs
First get your luns in order using multipath-tools.
If you have a whole bunch of LUNs presented to the same box, it can be hard to figure out which one is which. I found that multipath-tools creates devices like '/dev/disk/by-id/scsi-3600342349827399729' where the big number is the same as the WWID in the HP management console. If worst comes to worst you can present LUNs one-by-one and use the WWID to match them with what is in the HP management console.
Be aware that restarting multipath-tools didn't refresh the /dev/mapper/ list properly for me (and also threw out some segfaults, yay). I couldn't remove the kernel module because it was in use, so a reboot was the only way to ensure the /dev/mapper list was accurate.
Once you know which LUN you need to work on (in this example it is mpath0), create a LVM partition on your LUN, selecting '8e' as the partition type:
fdisk /dev/mapper/mpath0
Create your physical volume:
pvcreate /dev/mapper/mpath0-part1
At this point I noticed I had two identical devices: "/dev/mapper/mpath0p1" and "/dev/mapper/mpath0-part1". You should use "mpath0-part1" - the p1 partition disappeared for me after a reboot. Before I rebooted I tried restarting multipath-tools to see if that would remove the extra partition, but no dice (partprobe would be another thing to try...). While this partition exists, pvscan will give you an error like this (but it doesn't seem to cause any problems):
Found duplicate PV xKBwhsdfssdfkshdfkdfgdfDGiRj: using /dev/mapper/mpath0p1 not /dev/mapper/mpath0-part
Now create your volume group:
vgcreate myvgname /dev/mapper/mpath0-part1
Create your logical volume specifying size:
lvcreate -L 10G -n mylvname myvgname
OR
lvcreate -l 100%FREE -n mylvname myvgname
Slap on a filesystem:
mkfs -t ext4 /dev/myvgname/mylvname
and it is ready to be mounted. Grab the UUID for your fstab entry:
blkid /dev/myvgname/mylvname
Monday, April 5, 2010
MythTV scanning for new videos now less intuitive
In MythTV 0.22, adding new videos to the playback list is far less intuitive. You need to hit the menu key ('m') in the watch videos screen, which will bring up a menu with a 'scan for changes' option.
Wednesday, March 31, 2010
Mounting a HP HSV300 SAN lun on Ubuntu linux
To get your HP HSV300 SAN luns mounted on linux over fiber channel, first use the HP GUI to set up your luns and present them to your host. On the host install 'multipath-tools'. You will probably see these lines in dmesg:
[    4.670129] qla2xxx 0000:0e:00.0: firmware: requesting ql2400_fw.bin
[    4.673089] qla2xxx 0000:0e:00.0: Firmware image unavailable.
[    4.673091] qla2xxx 0000:0e:00.0: Firmware images can be retrieved from: ftp://ftp.qlogic.com/outgoing/linux/firmware/.
Download the firmware from the QLogic server, and put it in '/lib/firmware'. Reload the kernel module:
rmmod qla2xxx
modprobe qla2xxx
You should get a whole lot of /dev/sd* devices created - I believe there is one for each LUN times the number of paths to the storage. Put these lines in '/etc/multipath.conf':
defaults {
    udev_dir                /dev
    polling_interval        10
    selector                "round-robin 0"
    path_grouping_policy    failover
    getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
    prio_callout            "/bin/true"
    path_checker            tur
    rr_min_io               100
    rr_weight               uniform
    failback                immediate
    no_path_retry           12
    user_friendly_names     yes
    bindings_file           "/var/lib/multipath/bindings"
}
devnode_blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][[0-9]*]"
    devnode "^cciss!c[0-9]d[0-9]*"
}
devices {
    device {
        vendor                  "HP"
        product                 "HSV300"
        path_grouping_policy    group_by_prio
        getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker            tur
        path_selector           "round-robin 0"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        rr_weight               uniform
        failback                immediate
        hardware_handler        "0"
        no_path_retry           18
        rr_min_io               100
    }
}
Restart the multipath service, and you should see your LUNs in /dev/mapper/mpath*
If you have just presented new LUNs, you might also need to reload the qla2xxx module as described above.
Saturday, March 20, 2010
DViCO FusionHDTV DVB-T Dual USB Tuner Card remote control LIRC config
The remote for the DViCO Dual USB Tuner appears as part of the USB interface. It looks like this in syslog:
input: IR-receiver inside an USB DVB receiver as /devices/pci0000:00/0000:00:1d.7/usb1/1-4/input/input6
First grab the remote control config file and tack this section onto /usr/share/lirc/remotes/dvico/lircd.conf.fusionHDTV:
# Please make this file available to others
# by sending it to
#
# this config file was automatically generated
# using lirc-0.8.0(userspace) on Mon Mar 5 16:00:35 2007
#
# contributed by: Soth
#
# brand: DViCO FusionHDTV DVB-T Dual Digital
# model no. of remote control: Fusion MCE
# devices being controlled by this remote:
#
begin remote
  name  DViCO_Dual_Digital
  bits  16
  eps   30
  aeps  100
  one   0 0
  zero  0 0
  pre_data_bits 16
  pre_data 0x1
  gap   251756
  toggle_bit 0

  begin codes
    #starting at the top
    dtv        0x0179
    mp3        0x0187
    dvd        0x0185
    cpf        0x016C
    #outer circle clockwise from top
    tvpower    0x0164
    guide      0x016D
    info       0x0166
    alttab     0x000F
    skip       0x00A3
    start      0x001C
    replay     0x00A5
    dvdmenu    0x008B
    back       0x009E
    setup      0x008D
    #inner circle
    up         0x0067
    down       0x006C
    left       0x0069
    right      0x006A
    ok         0x0160
    #volume and channel
    voldn      0x0072
    volup      0x0073
    chup       0x0192
    chdn       0x0193
    #keypad
    camera     0x00D4
    live       0x0182
    folder     0x0086
    1          0x0002
    2          0x0003
    3          0x0004
    4          0x0005
    5          0x0006
    6          0x0007
    7          0x0008
    8          0x0009
    9          0x000A
    aspect     0x0173
    0          0x000B
    zoom       0x0174
    #play buttons
    rew        0x00A8
    playpause  0x00A4
    ff         0x00D0
    mute       0x0071
    stop       0x0080
    rec        0x00A7
    power      0x0074
  end codes
end remote
Then in /home/user/.lirc/mythtv (where 'user' is the system user that runs the frontend) put your config for what you want each button to do in mythtv, here are some examples (the whole file is too big to include here):
begin
    remote = DViCO_Dual_Digital
    prog = mythtv
    button = fastforward
    config = >
    repeat = 0
    delay = 0
end

begin
    remote = DViCO_Dual_Digital
    prog = mythtv
    button = rewind
    config = <
    repeat = 0
    delay = 0
end

In my setup I have a symlink:
/home/user/.mythtv/lircrc -> ../.lirc/mythtv

Lirc-aware applications look in your ~/.lircrc, which will look something like this:
include ~/.lirc/mythtv
include ~/.lirc/mplayer
include ~/.lirc/xine
include ~/.lirc/vlc
include ~/.lirc/xmame
include ~/.lirc/xmess
include ~/.lirc/totem
include ~/.lirc/elisa

I needed to do a fair bit of work to my ~/.lirc/mplayer, so I have included it below (a full list of mplayer commands is available in the mplayer documentation):
begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = playpause
    config = pause
    repeat = 0
    delay = 0
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = back
    config = quit
    repeat = 0
    delay = 0
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = stop
    config = quit
    repeat = 0
    delay = 0
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = ff
    config = seek +30
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = rew
    config = seek -30
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = right
    config = seek +30
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = left
    config = seek -30
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = up
    config = speed_incr +.1
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = down
    config = speed_incr -.1
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = ok
    config = speed_set 1
    repeat = 0
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = volup
    config = audio_delay +.1
    repeat = 3
end

begin
    remote = DViCO_Dual_Digital
    prog = mplayer
    button = voldn
    config = audio_delay -.1
    repeat = 3
end

Configure /etc/lirc/hardware.conf. The important parts are:
REMOTE="DViCO_Dual_Digital" REMOTE_MODULES="" REMOTE_DRIVER="devinput" REMOTE_DEVICE="/dev/input/by-path/pci-0000:00:1d.7-event-ir" REMOTE_LIRCD_CONF="dvico/lircd.conf.fusionHDTV" REMOTE_LIRCD_ARGS=""Restart lirc (I found you often need to restart it twice), and you should be good to go. You don't need to restart mythfrontend. The original instructions also have some good troubleshooting tips.
DViCO FusionHDTV DVB-T Dual USB Tuner Card on Linux
I'm immortalising some old notes here in case others have problems with this card. The driver is in the kernel now, so most of the below isn't required anymore, just plug and play (although you still need the firmware to get both tuners).
Drivers
I followed these instructions to get the driver source (you will need mercurial).
To update the source, cd to the directory and use:
hg pull -u http://linuxtv.org/hg/v4l-dvb
To get the USB working I also needed the dvb-usb-bluebird-01.fw firmware, which then needs to go in the firmware directory for your kernel version:
cp dvb-usb-bluebird-01.fw /lib/firmware/2.6.something/
Now you should have two adapters listed in /dev/dvb
Strangely enough when I upgraded to dapper, only the USB frontend would get loaded on boot. To get them both to load I had to go to the v4l-dvb directory and:
sudo make rmmod
sudo make insmod
DVB Utils
Follow the same instructions to get the DVB utils, then to pull changes:
hg pull -u http://linuxtv.org/hg/dvb-apps
Scan
To test the card, try scanning the channels (this is for adapter 0, s/0/1/ for adapter 1):
sudo scan -a 0 -v /usr/share/dvb/dvb-t/au-Adelaide | tee mychannels.conf
You can also try the other (+/- 166667 Hz) configs:
sudo scan -a 0 -v /usr/share/dvb/dvb-t/au-Adelaide.mod | tee mychannels.conf
sudo scan -a 0 -v /usr/share/dvb/dvb-t/au-Adelaide.mod2 | tee mychannels.conf
Tzap
Tune with tzap:
cp mychannels.conf ~/.tzap/channels.conf
tzap "ABC TV Canberra"
dvbstream
Once you have tuned with tzap, grab an mpeg with dvbstream and open in xine:
dvbstream 512 650 -o > test.mpeg
The numbers are the PIDs from the tuning line, near the end:
ABC TV:205791667:INVERSION_AUTO:BANDWIDTH_7_MHZ:FEC_3_4:FEC_3_4:QAM_64:TRANSMISSION_MODE_8K:GUARD_INTERVAL_1_16:HIERARCHY_NONE:512:650:529
xine
You can also play a dvb stream directly in xine by right clicking and selecting playlist | DVB
Friday, March 19, 2010
HOWTO create an encrypted backup disk with LUKS
First partition as normal with fdisk. Then create the encrypted header:
sudo cryptsetup --verify-passphrase --verbose --hash=sha256 \
--cipher=aes-cbc-essiv:sha256 --key-size=256 luksFormat /dev/sda1
Then create the filesystem and mount the disk:
sudo cryptsetup luksOpen /dev/sda1 crypt-backup
sudo mkfs -t ext4 /dev/mapper/crypt-backup
sudo mkdir /mnt/backup
sudo mount /dev/mapper/crypt-backup /mnt/backup
Copy your backup, then unmount with:
sudo umount /mnt/backup
sudo cryptsetup luksClose crypt-backup
HOWTO get a disk UUID
udev now uses disk UUIDs for consistent mounting, so you probably see something like this in /etc/fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
# /dev/sda1
UUID=27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE / xfs relatime,errors=remount-ro 0 1
So how can you map the /dev/sd* structure to UUIDs? Simple:
~$ ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 2010-03-20 14:15 27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE -> ../../sda1
If the list is out of date you can refresh it by restarting udev:
sudo /etc/init.d/udev restart
The disktype utility can also help:
~$ sudo disktype /dev/sda
--- /dev/sda
Block device, size 298.1 GiB (320072933376 bytes)
DOS/MBR partition map
Partition 1: 298.1 GiB (320070288384 bytes, 625137282 sectors from 63)
Type 0x83 (Linux)
XFS file system, version 4
Volume name ""
UUID 27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE
Volume size 296.6 GiB (318523899904 bytes, 77764624 blocks of 4 KiB)
And you can also use 'vol_id' (now deprecated in favour of 'blkid') to print the UUID:
~$ sudo vol_id /dev/sda1
ID_FS_USAGE=filesystem
ID_FS_TYPE=xfs
ID_FS_VERSION=
ID_FS_UUID=27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE
ID_FS_UUID_ENC=27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=
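Since vol_id is deprecated, the blkid equivalent of the above prints something like:
~$ sudo blkid /dev/sda1
/dev/sda1: UUID="27EEEEEE-CFAF-4608-8036-FFFFBAEEEEEE" TYPE="xfs"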
Wednesday, March 17, 2010
*.microsoft.com sites return empty pages when accessed via squid
The microsoft.com webserver (which serves msdn.microsoft.com and support.microsoft.com among others) sends back a chunked response ("Transfer-Encoding: chunked") to an HTTP 1.0 GET. Chunked transfer-coding is not valid for HTTP 1.0, and the HTTP 1.1 RFC is very clear:
A server MUST NOT send transfer-codings to an HTTP/1.0 client
In firefox the page renders as empty. If you view source you can see part of the page (i.e. the first chunk) was downloaded, but the full content has been truncated.
Moving squid to HTTP 1.1 should probably fix this, except according to squid.conf it won't:
# Enables HTTP/1.1 support to clients. The HTTP/1.1
# support is still incomplete with an internal HTTP/1.0
# hop, but should work with most clients. The main
# HTTP/1.1 features missing due to this is forwarding
# of requests using chunked transfer encoding (results
# in 411) and forwarding of 1xx responses (silently
# dropped)
So the solution is to drop the accept-encoding header for the microsoft.com domain:
acl support.microsoft.com dstdomain support.microsoft.com
header_access Accept-Encoding deny support.microsoft.com
OR if you are using IE, apparently you can untick "Use HTTP1.1 through proxy connections" in IE’s Advanced internet options tab.
Saturday, March 13, 2010
Managing GPS waypoints on a Garmin Etrex Vista HCx under Linux
I used to use GPSman for uploading waypoints and downloading tracks to/from my Garmin Etrex Legend. When I moved to the Vista HCx I found GPSman didn't play well with the new USB interface. I soon stumbled across QLandkarte, which is a lot more polished (at least GUI-wise) and supports the Vista HCx over USB. It is the best software I have come across for linux interaction with Garmin devices.
Monday, March 8, 2010
Django current 'active' page highlighting for navigation
It is pretty common to want to highlight the current page in your navigation bar. Unfortunately it is surprisingly hard to do with django. After reading a few blogs I decided on the following, which involves creating a custom template tag and enabling the request context processor. First create a custom template tag (don't forget the __init__.py in your templatetags directory):
#!/usr/bin/env python
# project/site/templatetags/active.py
import re

from django import template

register = template.Library()

@register.simple_tag
def active(request, pattern):
    if re.search(pattern, request.path):
        return 'current_page_item'
    return ''
Add names to your views in urls.py so you can reference the regex in the template. The prototype is '(regular expression, Python callback function [, optional dictionary [, optional name]])':
(r'^$', 'home_view', {}, 'home')
Then in your template:
{% load active %}
{% url home as home %}
<li class="{% active request home %}">Home</li>
In settings.py define the 'request' context processor (you also have to define the other defaults so you don't lose them):
TEMPLATE_CONTEXT_PROCESSORS = ('django.core.context_processors.auth',
'django.core.context_processors.debug',
'django.core.context_processors.i18n',
'django.core.context_processors.media',
'django.core.context_processors.request')
Now you just need a line in your CSS that does something different for the 'current_page_item' class.
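A minimal example of such a CSS rule:
li.current_page_item { font-weight: bold; }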
Tuesday, March 2, 2010
Re-implementing the python 'inspect' module (whoops)
I am working with python introspection to have plugins for a framework registered (mostly) automatically. Until I read this blog post I was writing most of it myself, based on how the unittest framework registers tests. Now I know there is a python inspect module that does all the heavy lifting, and the unittest module should be using it :) I learnt some interesting things about python introspection along the way that I thought I should record.
Here is my implementation of inspect.getmembers(testclasses, inspect.ismodule) to get a list of modules (.py files) from a package (directory with __init__.py):
from types import ModuleType

names = dir(testclasses)
modules = filter(lambda name: isinstance(getattr(testclasses, name), ModuleType), names)
Similarly, I was going to get the testmethods using a filter and the following method:
def isTestMethod(attrname, theclass=theclass, prefix=settings.TEST_METHOD_PREFIX):
    """It is a test method if it is callable and starts with the prefix"""
    return attrname.startswith(prefix) and hasattr(getattr(theclass, attrname), '__call__')
which can be replaced by
inspect.getmembers(theclass, inspect.ismethod)
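As a quick interactive sanity check (class and method names here are made up; this is python 2 behaviour, where methods looked up on a class still count as methods):
import inspect
import unittest

class SampleTests(unittest.TestCase):
    def test_example(self):
        pass

# prints a list of (name, method) tuples, including test_example
print inspect.getmembers(SampleTests, inspect.ismethod)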
Monday, March 1, 2010
Using django components outside of django
Using django components (such as the model for DB abstraction) outside of django has a few gotchas, since you need to set up the environment correctly first. Most of the time you can just get away with setting DJANGO_SETTINGS_MODULE:
export DJANGO_SETTINGS_MODULE=yoursite.settings
But this blog has a great summary of the different options. Another approach worth noting is the one manage.py uses:
from django.core.management import setup_environ
from mysite import settings
setup_environ(settings)
Sunday, February 28, 2010
Testing JSON django fixtures
My JSON testing fixture for django was bad, and django gave a really unhelpful error:
ValueError: No JSON object could be decoded
A line number would have been nice, jeez. To find out what the actual problem was I used the json module in ipython:
import json
a=open("testing.json",'r').read()
json.loads(a)
Which will either work, or give you a line number for the problem :)
Saturday, February 27, 2010
Accessing Australian public transport data and Google's General Transit Feed Specification (GTFS)
I've used some pretty ordinary public transport websites. To do something about the problem, I recently embarked on a mission to create a google maps overlay. The idea was to show a particular public transport provider how their information delivery could be improved. In the process I found out lots of interesting stuff about public transport information, so I thought I'd record that first.
The San Francisco Bay Area Rapid Transit (BART) website is probably the best example of delivering useful information to commuters. They have a public, open, real-time API for developers that has spawned some great tools such as iPhone apps that display real-time arrival times for your favourite stations, provide route-planning and much more.
BART also publishes a General Transit Feed Specification (GTFS) zip file, which is a format developed by Google that allows you to provide amazing public transport functionality through google maps. Whenever you plan a route with google maps you can click on 'by public transit' and it will give you a selection of routes, times, and turn-by-turn directions for any walking required. See this random example trip I made in San Francisco.
All this is well and good for the US, but what about Australia? I started looking around, and I was amazed by the TransPerth website.
Not only have they published a GTFS zip file so you can view stops and do fantastic trip planning via google maps, they also have a mobile website with real-time arrival times for bus stops and train stations, live timetables, and a 'stops near you' search on their website. The trip planner on the website is also the best I have ever used, and rivals that of google maps (they may just use a maps api...). Not surprisingly there is a Perth iPhone app, which seems to be much more feature-complete than the other capital cities' apps, due to the data provided by TransPerth, which is infinitely preferable to screen-scraping. Of all the cities currently providing GTFS files, sadly Perth is the only Australian participating.
So, step 1 in giving commuters access to better public transport information is to publish a GTFS file. This at least means you have a database of all your stop coordinates, route names, timetable info etc. Updating the GTFS file is going to be easiest if you just use the GTFS format as your schema, it looks fairly sensible. I imagine there are lots of operators with giant excel spreadsheets holding route information, so it is probably already a step up. Next, take transperth as an example - they are kicking some goals.
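To give a feel for the format, a GTFS feed is just a zip of CSV text files with prescribed columns. Here is a sketch of writing a minimal stops.txt (the stop itself is made up):
import csv

# stop_id, stop_name, stop_lat and stop_lon are the required columns in stops.txt
writer = csv.writer(open('stops.txt', 'w'))
writer.writerow(['stop_id', 'stop_name', 'stop_lat', 'stop_lon'])
writer.writerow(['S1', 'Example St before Sample Ave', '-34.92866', '138.59863'])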
PS. I was thinking - a cheap way to do GPS positioning for buses would be to stick an android phone in each one and hook it up to google latitude.
Using ssh forced-command with rsync for backups
I want to use ssh forced command to limit a backup user to just running rsync. The idea is to allow backups to be deposited without granting full shell access. The trickiest part of the problem is figuring out what command rsync will run on the server. The rsync man page gives a clue with this cryptic statement:
--server and --sender are used internally by rsync, and should never be typed by a user under normal circumstances. Some awareness of these options may be needed in certain scenarios, such as when setting up a login that can only run an rsync command. For instance, the support directory of the rsync distribution has an example script named rrsync (for restricted rsync) that can be used with a restricted ssh login.
As an aside, rrsync is a perl script that parses SSH_ORIGINAL_COMMAND and provides a way to limit rsync to certain directories. This is not a bad idea, but I always want to run the same command, so it is over-kill.
I found an insight into the --server option, which solved the mystery of what command rsync runs. Just run your regular command with '-v -v -n', and rsync will tell you. Neat!
rsync -rtz -v -v -n /home/mw/src backup@host:/home/backup/
opening connection using: ssh -l backup host rsync --server -vvntrze.iLs . /home/backup/
The actual command I will run uses one less 'v' and ditches the dry-run 'n'. So now my SSH forced command in ~/.ssh/authorized_keys looks like this:
command="rsync --server -vtrze.iLs . /home/backup/",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAA....
Chuck it in a cron, and we are done.
Thursday, February 25, 2010
HOWTO set a proxy for apt updates with a local mirror and problems with http_proxy overriding apt settings
This has annoyed me for some time now. The http_proxy environment variable in Ubuntu overrides the APT proxy settings, despite this apparently being fixed in Debian in Jan 2009. The apt configuration is more specific, and should win out over the environment variable. I'll explain why this is a problem.
Here is how it is supposed to work.
The simplest case is to set a proxy for apt by editing "/etc/apt/apt.conf" and adding this line:
Acquire::http::Proxy "http://proxy.mydom:3128";
The problems start if you have a local mirror - I do this to save bandwidth due to a large number of ubuntu installs on the network. For this config, remove any proxy lines from /etc/apt/apt.conf and create /etc/apt/apt.conf.d/30proxy:
Acquire
{
http {
Proxy "http://proxy.example.com:8080/";
Proxy::ubuntu.mydom.com "DIRECT";
}
}
With the http_proxy environment variable unset this works fine, until you go to install something like flashplugin-nonfree, which downloads a tarball from adobe. Apt completely ignores your proxy configuration and tries to download it directly:
Connecting to archive.canonical.com|91.189.88.33|:80
Which obviously doesn't work. You can set the http_proxy environment variable, but then apt won't work because it sends everything through the proxy, and the local mirror settings (ubuntu.mydom.com) you have in /etc/apt/sources.list can't go through the proxy (and shouldn't). That's what the "DIRECT" above is supposed to do.
The only way to actually make this work is described by Troy. You need to set the no_proxy environment variable:
export no_proxy="ubuntu.mydom.com"
Then make sure it actually gets kept by sudo. First get the list of var's sudo is currently preserving (look at those under "Environment variables to preserve"):
sudo sudo -V
Change /etc/sudoers with 'sudo visudo' and add:
Defaults env_keep="no_proxy http_proxy https_proxy ftp_proxy XAUTHORIZATION XAUTHORITY TZ PS2 PS1 PATH MAIL LS_COLORS KRB5CCNAME HOSTNAME HOME DISPLAY COLORS"
Check that it got kept:
sudo printenv | grep no_proxy
Chuck no_proxy and http_proxy in ~/.bashrc and you are good to go. Simple, right?
Wednesday, February 24, 2010
Copying a compressed disk image across the network using netcat and dd
Here is a handy command to copy a disk image (such as a VM) across the network compressed. We also do a SHA1 hash to make sure it copied correctly. The idea is to only read and write the data once, to make it as quick as possible.
On the box you are copying the disk from:
mkfifo /tmp/disk.dat; sha1sum /tmp/disk.dat & dd bs=256k if=mydisk.dd | tee /tmp/disk.dat | gzip -1 | nc -q 2 10.1.1.1 8181
On the box you are copying the disk to:
nc -l 8181 | gunzip | tee image.dd | sha1sum | tee image.dd.sha1

The quick and dirty version with no hash checking is below. Note if your source is OS X you want -w instead of -q in the netcat command. I've used this with two macs connected via thunderbolt/firewire, one in TDM, and one sending the image to a linux box:

dd bs=256k if=mydisk.dd | gzip -1 | nc -q 2 10.1.1.1 8181
nc -l 8181 | gunzip > image.dd
Tuesday, February 23, 2010
Django user profiles
While the Django doco is pretty good, it is a bit light-on for user profiles. User profiles are for when you want to extend the information stored per user on top of the Django defaults (firstname, lastname, email etc.). There is a good blog post that fills in the gaps, although be sure to read the comments, because there is a gotcha and a work around. Basically this is what you want:
from django.contrib import admin
from django.contrib.auth.models import User
from django.contrib.auth.admin import UserAdmin as RealUserAdmin
from site.app.models import UserProfile

class UserProfileInline(admin.StackedInline):
    model = UserProfile

class UserAdmin(RealUserAdmin):
    inlines = [UserProfileInline]

admin.site.unregister(User)
admin.site.register(User, UserAdmin)
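For completeness, the UserProfile model referenced above might look something like this (the phone field is just an example), along with the setting that tells django where to find it:
# site/app/models.py
from django.db import models
from django.contrib.auth.models import User

class UserProfile(models.Model):
    # One profile row per user
    user = models.OneToOneField(User)
    phone = models.CharField(max_length=20, blank=True)

# settings.py
AUTH_PROFILE_MODULE = 'app.UserProfile'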
Time offsets in python
I always forget how to do time offsets (i.e. current time - 20 seconds) in python. Here's how:
from datetime import datetime,timedelta
datetime.now() - timedelta(seconds=20)
Sunday, February 21, 2010
Installing django and postgres on ubuntu
To install django with postgres on ubuntu:
sudo apt-get install python-django postgresql python-psycopg2
django-admin.py startproject mysite
Edit settings.py:
DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_NAME = 'blahdb'
DATABASE_USER = 'blah'
DATABASE_PASSWORD = 'blah'
DATABASE_HOST = 'localhost'
DATABASE_PORT = ''
Use psql to create the user and database, granting all privs on the database. If you want to use django testing, your user also needs to be able to create a database. Use this syntax:
alter user django createdb;
\du django
Then syncdb and startapp.
Thursday, February 4, 2010
dpkg basics - listing installed packages etc.
Here are some basic dpkg operations that come in handy.
List all installed packages:
dpkg -l
List the files that a package installs:
dpkg -L [packagename]
Find out which package a particular file was installed by:
dpkg --search [filename]
Tuesday, February 2, 2010
HOWTO allow multiple users write access to a directory without changing umask
I have often run into the problem of giving multiple users write access to a code repo. The main problem is what permissions are set on files which are added in new commits. The default umask is 022, so you get directories as 755 and files as 644, which obviously doesn't work.
The solution I have used in the past is to change the umask in /etc/profile and /etc/login.defs to 002. You have to do both, otherwise files added via ssh and other means don't get the right mask. The disadvantage is that now all files get created as 775,664, when you only really need it for one directory. There is a better way, enter filesystem acls.
First, change your /etc/fstab to include the 'acl' option for the mount point where your repo resides:
/dev/sda1 / ext3 defaults,acl 0 0
Do some of the regular prep to make sure your files are owned right, and dirs have the setgid bit set so new files inherit the group.
chown -R user:group /code
chmod -R g+w /code
find /code -type d -exec chmod g+s {} \;
Use setfacl to set the default acls for new files and directories:
setfacl -R -m d:u::rwx,d:g::rwx,d:o:r-x /code
And check the result with 'getfacl'. Also when you use 'ls', you should see a '+' at the end of the usual permissions string that indicates there are more acls:
drwxrwsr-x+
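The getfacl output for the directory should then look something like this (owner and group will be whatever you chowned above):
~$ getfacl /code
# file: code
# owner: user
# group: group
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x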
Possibly the stupidest IT security comment I have ever read
From SANS news bites:
TOP OF THE NEWS
--High Stakes in Covert Cyber War
(January 26, 2010)
Christian Science Monitor Editor John Yemma points out that the recently disclosed long term cyber attacks against US oil companies could result in "lost jobs and higher energy prices." The attackers infiltrated the companies' networks and remained inside, quietly stealing valuable bid data, which could allow them to make bids on potentially valuable oil and gas tracts without having to invest the considerable research funds spent by the targeted companies. Evidence suggests that the attacks originated in China.
http://www.csmonitor.com/Commentary/editors-blog/2010/0126/Why-the-China-virus-hack-at-US-energy-companies-is-worrisome
(Northcutt): One sensible approach is pretty simple. We make people stand in long lines to clear customs, let's do the same thing for packets. Now before you flame me for being an idiot, I am not suggesting all packets; let's start with SMTP. If a mail message comes from a known site or country that is a major source of malicious traffic, or has a link back to such a place, force it through a series of gateways. Who pays for this? The entity that wants to deal with the US. We can call it a packet visa. Counterpoint 1: "It will never work because there are a million pathways between here and there." Ah, very true, but there are a finite number of targets, US Government including DoD, the industrial defense contractors, Fortune 500 companies, critical infrastructure, and resource brokers such as oil companies. It is the old 80/20 rule. I am betting a guy like Tom Liston can write the code in an afternoon, though it will take some DHS contractor sixty people to maintain and improve it.]
Northcutt, wtf? Does having long lines at Customs actually make your border more secure, or just slower? Presumably the security is in the checking that happens when you get to the counter, or beforehand when you book the flight. How does having a line make you more secure?
So what you would like to do is purposefully implement a DOS on SMTP? If you are so sure the sources are malicious, why not just block them instead of delivering the mail slowly? If you aren't sure enough to block them you are probably DOSing legitimate email. And what difference does it make to the attacker if the email is delivered slowly? The attack is still delivered.
I could go on, but I think this definitely wins the prize for stupidest IT security comment. I'll let you know when I read something worse.
Wednesday, January 27, 2010
HOWTO install a moinmoin wiki
This article, although old, covers the basics. What I find crap is that you need to copy the /usr/share/moin/data and underlay directories into wherever your wiki is going to live.
If you don't do this, moinmoin will helpfully give you an error saying the data directory doesn't exist or has the wrong permissions. In my case, the directory did exist with the right permissions: the problem was it didn't have all the other subdirectories needed that you get from the copy operation. Yay for clear error messages.
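Concretely, assuming your wiki instance lives in /var/local/mywiki (the path is just an example) and apache runs as www-data, the copy step looks like:
sudo cp -r /usr/share/moin/data /usr/share/moin/underlay /var/local/mywiki/
sudo chown -R www-data:www-data /var/local/mywiki/data /var/local/mywiki/underlay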
Thursday, January 21, 2010
Upgrading postgres after an ubuntu dist-upgrade
After an ubuntu dist-upgrade, there is work to be done to bring the new version of postgres online. First you need to (unintuitively) drop the new empty postgres cluster:
pg_dropcluster --stop [new version number] main
Then upgrade the existing cluster (which will create a new one):
pg_upgradecluster [old version number] main
It looks like this does a complete duplication of the data, ie. it takes ages. Once it is finished, test it works, and get your diskspace back with:
pg_dropcluster [old version number] main
HOWTO Backup a Postgres database
Easy!
pg_dump -U username -h localhost dbname > backup_2010_01_22.sql
Here's a quick and dirty way to send it over the network compressed. On the postgres box:
pg_dump -U username -h localhost dbname | gzip -1 | nc -q 2 backup 5000
And on the backup box:
nc -l 5000 > pg_backup_2010_01_22.sql.gz
Quick postgres cheatsheet
You can use '\h' for help on SQL commands and '\?' for help on psql commands. '\d' is show tables. It seems to be important that you specify '-h localhost' when logging in - it wouldn't accept my credentials without it.
On ubuntu you need to do the following to get access as 'postgres' (aka root):
sudo -u postgres psql postgres
Create dbase and user:
create database blah;
create user meblah with password 'blah';
grant all on database blah to meblah;
General stuff
psql -U blah -h localhost
\c databasename
\d
select * from tablename;
\q
Wednesday, January 20, 2010
Recovering a deleted file from Subversion
You need to point at the repository, not the checked out copy. Something like:
svn copy file:///var/local/svn/myrepo/trunk/files/blah.conf@83 blah.conf
The '-r' revision syntax doesn't work, you need to use the '@' syntax as above for the revision.
Using rsyslog to log with dynamic file names
I wanted to split logs into /host/year/month/host-yyyy-mm-dd_syslog to avoid having to have a rotate rule for each one. The first thing I tried was syslog-ng, which I found difficult to configure. It also had a memory leak that resulted in logs being lost and the box running out of memory.
Now I'm trialling rsyslogd, which looks quite good. Unfortunately it took much longer to configure than I had hoped. I wanted something fairly simple - the split of logs as above, and local logs going to the regular files to prevent any confusion when others need to use the box. The config I came up with was:
$template timeandhost_auth, "/var/log/rsyslog/%FROMHOST%/%$YEAR%/%$MONTH%/%FROMHOST%-%$NOW%-auth.log"
$template timeandhost_syslog, "/var/log/rsyslog/%FROMHOST%/%$YEAR%/%$MONTH%/%FROMHOST%-%$NOW%-syslog.log"
if $source != 'mybox' then ?timeandhost_syslog
if $source != 'mybox' and ($syslogfacility-text == 'authpriv' or $syslogfacility-text == 'auth') then ?timeandhost_auth
if $syslogfacility-text == 'cron' then -/var/log/cron.log
if $source == 'mybox' and ($syslogfacility-text == 'authpriv' or $syslogfacility-text == 'auth') then /var/log/auth.log
if $source == 'mybox' then -/var/log/syslog
if $source == 'mybox' and $syslogfacility-text == 'daemon' then -/var/log/daemon.log
if $source == 'mybox' and $syslogfacility-text == 'kern' then -/var/log/kern.log
The example config for what I wanted to do was wrong. The source is not 'localhost', but whatever the local dns name is ('mybox').
I also had to change $FileGroup to 'syslog' from 'adm' to make it work, even though this shouldn't have mattered. Without this I was getting 'Could not open dynamic file' errors where the file would be created with the right permissions and ownership, but rsyslogd then couldn't write to it.
Monday, January 18, 2010
HOWTO mount a qcow disk image with multiple partitions
To mount a qcow disk image directly you first need to convert it to raw:
qemu-img convert -O raw disk0.qcow2 disk0.dd
Take a look at the partitions using disktype or fdisk -l:
disktype disk0.dd
--- disk0.dd
Regular file, size 11.72 GiB (12583960576 bytes)
GRUB boot loader, compat version 3.2, boot drive 0xff
DOS/MBR partition map
Partition 1: 9.312 GiB (9999007744 bytes, 19529312 sectors from 32)
Type 0x83 (Linux)
Ext3 file system
UUID 4BF80E7C-244E-43EE-BEA5-0A3D97188C68 (DCE, v4)
Volume size 9.312 GiB (9999007744 bytes, 2441164 blocks of 4 KiB)
Partition 2: 1.863 GiB (1999962112 bytes, 3906176 sectors from 19529344)
Type 0x82 (Linux swap / Solaris)
Linux swap, version 2, subversion 1, 4 KiB pages, little-endian
Swap size 1.863 GiB (1999953920 bytes, 488270 pages of 4 KiB)
Our first partition is 32 sectors from the beginning, which is an offset of 32 * 512 = 16384 bytes.
Mount it:
mount -t ext3 disk0.dd /mnt/temp/ -o loop,offset=16384
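An alternative that skips the offset arithmetic entirely is kpartx (from the multipath-tools package), which creates a device-mapper node for each partition in the image. The exact /dev/mapper name depends on which loop device it grabs - check the output of the first command:
sudo kpartx -av disk0.dd
sudo mount /dev/mapper/loop0p1 /mnt/temp/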