
WordPress Development, Staging and Production Deployment

A.K.A. Keeping Your WordPress in Git

As a web application developer I’m used to having several environments to deploy to: my local workstation, the QA testing environment and our production environment. I’m also accustomed to keeping everything in version control: code, config and deployment scripts. As we prepare a new release it spends time in the QA environment and when testing is complete we move it to production. The method for deploying to QA is very similar to how we deploy to production, since we want to catch bugs in the deployment process itself.

This technique doesn’t apply so obviously to WordPress deployment. Over the years I have developed a technique for hosting a WordPress ‘development’ environment for our marketing and frontend webdev people to work on before it is released to the public. We keep all the changes in git and deploy directly from git in one command. I haven’t seen any other great solutions to the problem that a lot of your content lives in the database while a whole bunch of it also lives in the theme files (PHP and JS), so you need to ‘deploy’ the database changes alongside the file changes. Here’s my take on that.

Caveat

This technique BLOWS AWAY the production database during deployment. It is therefore not useful if you have comments enabled in WordPress. We use WordPress more like a CMS than a blog so we are free to replace the database when we deploy. The technique could probably be adapted to only deploy the essential tables (pages, posts etc) and leave the comments table alone.
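For example, a partial dump along these lines would capture the core content tables but skip comments. This is only a sketch: the table list assumes the default wp_ prefix and uses the wpdev database set up below, and exactly which tables count as ‘content’ depends on your site.

# Hypothetical partial dump: content tables only, leaving wp_comments alone.
mysqldump -u wordpress -pwordpress wpdev \
    wp_posts wp_postmeta wp_terms wp_term_taxonomy wp_term_relationships \
    > wordpress.sql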

Usage

Let’s assume the development environment is at /var/www/dev and the production environment is at /var/www/prod.

To ‘check in’ the dev version:

cd /var/www/dev
dump-n-push

To ‘check out’ the current version into production:

cd /var/www/prod
deploy
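For the curious, the scripts aren’t doing anything magic. Here is a rough sketch of what dump-n-push and deploy boil down to; it is not the actual wp-deploy code, and the dump file name and branch are assumptions.

# 'dump-n-push', roughly: dump the dev database into the working tree,
# then commit and push everything.
mysqldump -u wordpress -pwordpress wpdev > wordpress.sql
git add -A
git commit -m "content update"
git push origin master

# 'deploy', roughly: pull the latest commit and load its dump over the
# production database (this is the step that blows the old data away).
git pull origin master
mysql -u wordpress -pwordpress wpprod < wordpress.sql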

Set Up

Download the scripts from https://github.com/werkshy/wp-deploy and copy them to /usr/local/bin, which should be in your $PATH.

Everything is checked into git: WordPress files, themes, plugins, db dumps, everything.

Install WordPress in the dev environment

Download and unzip the WordPress release at /var/www/dev.

You’ll need to set up the dev database.

mysql -uroot -p
mysql> create database wpdev;
mysql> grant all on wpdev.* to wordpress identified by 'wordpress';

Set the db parameters in wp-config.php. THIS WILL NOT BE CHECKED IN.

Edit .gitignore, most importantly to block wp-config.php:

/wp-content/cache/
.DS_Store
/wp-config.php
.htaccess

Set up your webserver to serve PHP from that directory as normal (see the example Apache configs at the end of this post).

Add Everything To Git


cd /var/www/dev
git init
git add -A
git commit -m "initial commit"

Create the ‘origin’ repository

You may keep your site on a remote git repo, or in a git repo on the local machine.

Create the ‘origin’ repository:

cd /root/
mkdir wp.git
cd wp.git
git init --bare

Push your dev commit to the origin

cd /var/www/dev
dumpdb
git remote add origin /root/wp.git
git push origin master

Prepare the production environment

Checkout the files:

cd /var/www
git clone /root/wp.git prod
cd prod
cp /var/www/dev/wp-config.php .

Create the production database (use the same user as the dev one)

mysql -uroot -p
mysql> create database wpprod;
mysql> grant all on wpprod.* to wordpress;

Set the production db name in wp-config.php
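If you prefer to do that from the shell, a one-liner like the following works, assuming the copied dev config names the database ‘wpdev’:

# Swap the dev database name for the production one in the copied config.
sed -i "s/'wpdev'/'wpprod'/" wp-config.php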

Now try loading the db dump into production:

loaddb

If that all works, you can now dump and push the dev site with

dump-n-push

and you can deploy the production site from git with

deploy

Example Apache Config

Development Environment:

<VirtualHost *:80>
	ServerName dev.energyhub.com
	DocumentRoot /var/www/dev
	<Directory "/var/www/dev">
		AllowOverride All
	</Directory>
</VirtualHost>

Production Environment:

<VirtualHost *:80>
	ServerName www.energyhub.com
	DocumentRoot /var/www/prod
	<Directory "/var/www/prod">
		AllowOverride All
	</Directory>
</VirtualHost>

Release of ‘sleeper’ 0.2

I just released ‘sleeper’, a little utility script to suspend your computer if you are running a lightweight window manager like Awesome or Xmonad.


Picflick Update

Here’s Picflick v1.3.
Here’s Picflick v1.3.1.

Here’s Picflick v1.3.2.

Here’s Picflick v1.3.3.

New feature: much simplified single-script setup, getting rid of the picflick_starter wrapper script. The button now calls the picflick script which re-launches itself in a terminal window so you can see the progress. Much easier to understand and configure.
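The trick is just a guard at the top of the script, roughly like the sketch below. The variable name and terminal choice are illustrative, not the exact picflick code.

# If we're not already inside a terminal window, re-launch ourselves in one
# and keep it open (-hold) so the progress and any errors stay visible.
if [ -z "$PICFLICK_IN_TERMINAL" ]; then
	PICFLICK_IN_TERMINAL=1
	export PICFLICK_IN_TERMINAL
	exec urxvt -hold -e "$0" "$@"
fi
# ... the actual upload work continues here, inside the terminal ...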

Bug fixes:

  1. The “make install” step failed when the Picasa buttons directory did not already exist. Now fixed. (As reported by Jeff Bloemink).
  2. Only use ‘urxvt’, if available, not rxvt since the latter does not have the  -hold option (thanks again to Jeff Bloemink).
  3. Fixed typo in xterm command line (thanks to “Dr AKULAvitch”)
  4. Fixed bug when using AUTH_TOKEN in picflick script instead of ~/.flickrrc (thanks to Mathieu).

Thanks for the bug reports guys. Keep ’em coming.

Picflick home page


Claws Mail with GMail

Why Claws Mail?

I’ve been suffering more and more recently on my old Thinkpad maxed out at 1GB of RAM. Also I’ve been feeling the need to use a real mail client after a few months of having two GMail windows open (work + personal). Trusty old Thunderbird uses 40+MB of RAM on this machine for three IMAP accounts, using a couple of crucial extensions. 40MB is a large chunk of my precious memory, considering that I’m already using two instances of Firefox (one for browsing, one for web development). If the memory usage hits 1GB then everything grinds for a couple of minutes (swap is so evil on laptops!) until I can kill one of those Firefoxes, so all of my apps have to justify themselves against low-memory alternatives. Claws uses about 6MB, so I’m using it for now.

Using Claws Mail with GMail

Claws is remarkably capable as a GMail IMAP client these days. Naturally it supports IMAP over SSL and SMTP over SSL with TLS, which is required for GMail. It also has two features which Thunderbird only supports through extensions or about:config magic settings:

  • You can set Trash to be [Gmail]/Trash.
  • You can set up a shortcut key to archive emails. This isn’t obvious so here’s how:
  1. Create a label in Gmail called ‘archived’. This is just a label where you can put stuff so it isn’t in the inbox (“inbox” in Gmail is just a label too)
  2. Go to Configuration/Actions.
  3. Add a new action with Menu Name “Archive” and command as a filter action.
  4. Edit filter action, set Action = Move, and Destination = archived
  5. Save the action. You should now have the action available in the menu under Tools/Actions/Archive and can check that it works.
  6. Now, to set a shortcut key, go to Configuration/Preferences/Other/Miscellaneous and set “Enable customisable keyboard shortcuts”. Then go to Tools/Actions, and with the “archive” action highlighted press ‘Y’ to set the keyboard shortcut.

Other settings:

Tell Claws not to save sent mail, because using Google’s SMTP puts a copy in your sent folder anyway.

Set [Gmail]/Sent Mail to type ‘outbox’ and you can delete the other ‘Sent’ folder, plus you get a nice icon on the sent mail folder. You can do the same with Drafts and Trash.


Broken en_ZA locale in Ubuntu Jaunty

I’m dealing with a lot of documents with the language set to English (South African) this year, and in OpenOffice on Jaunty there’s always a ton of perfectly cromulent words being flagged as misspelled. On the command line I see an error like this:

Failure loading aff file /usr/share/myspell/dicts/en_ZA.aff

I do have all the relevant packages installed, so it seems like Jaunty has installed an affix file for myspell that OpenOffice, at least, can’t actually use.

The fix is to download the myspell en_ZA files from http://downloads.translate.org.za/spellchecker/

Back up the original files:

cd /usr/share/myspell/dicts/

sudo mv en_ZA.aff en_ZA.aff.bak

Unzip the file from translate.org.za and copy the new en_ZA.aff to /usr/share/myspell/dicts/, then do the same for en_ZA.dic.
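In shell terms the whole replacement looks roughly like this; the zip file name is a guess, and the archive is assumed to contain en_ZA.aff and en_ZA.dic at the top level:

# Extract the downloaded dictionary and drop the two files into place.
cd ~/Downloads
unzip myspell-en_ZA-*.zip
sudo cp en_ZA.aff en_ZA.dic /usr/share/myspell/dicts/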

That should give you a working spellchecker in OpenOffice.org.


Picflick 1.2

Quick update to picflick, fixing a $PATH bug in picflick_starter and using ‘nice’ when resizing the images.
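The ‘nice’ change amounts to wrapping the resize command, something like the line below; the geometry and file names are illustrative, not the actual picflick code.

# Run the resize at low priority so it doesn't hog the CPU.
nice -n 19 convert "$photo" -resize 1024x1024 "$photo_small"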


Picflick Update

Here’s an update to ‘picflick’ the Picasa-to-Flickr uploader for Linux.

The changes are mainly simplification and documentation. I had found that my picflick setup broke when I upgraded my Ubuntu version over the holidays, and the previous setup wasn’t showing me why. The new version runs everything in a terminal and skips all the other pointless notification methods (libnotify, text-to-speech and beeping!!). What was I thinking?

Get it at picflick.


picflick: Picasa To Flickr Export on Linux

I’ve adapted the pragmatic-looking picasa2flickr Picasa plugin to work in Picasa 3 on Linux. Instead of feeding the Picasa files to a graphical Flickr uploader, it uploads them automatically using a Perl utility called ‘flickr_upload’. Hopefully one or two people will find this useful.
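At its core the upload step just hands the exported files to flickr_upload, roughly like this (file names are placeholders; the exact flags the plugin passes aren’t shown here):

flickr_upload photo1.jpg photo2.jpg photo3.jpg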

Find it at picflick.


Weird problems with ‘curl -d’ and lighttpd

I’m trying to test a Google Checkout response handler for a project I’m working on. Rather than putting through sandbox orders, I’m just trying to POST a message using curl, but for some reason anything longer than about a kilobyte just doesn’t show up at the server, which is running lighttpd 1.4. Enabling some debugging showed that curl is sending an ‘Expect: 100-continue’ header:

User-Agent: curl/7.18.0 (i486-pc-linux-gnu) libcurl/7.18.0 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.1
Host: asap
Accept: */*
Content-Length: 1116
Content-Type: application/x-www-form-urlencoded
Expect: 100-continue

There’s a bug in lighttpd 1.4 which means that this ‘Expect’ header is not handled properly. It won’t be fixed in v1.4 either.

The quick workaround for this bug is to call curl with the option -H 'Expect: ' to disable the header.

By the way, I also didn’t know how to post a file of data with curl – the answer is

curl -d '@file.xml'
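Putting the two together, the test command ends up something like this (the URL is a placeholder for the real handler):

curl -d @file.xml -H 'Expect: ' http://localhost/checkout/handler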

Command-line face detection

This post explains how to:

  1. Take a facial portrait and detect the position of the face
  2. Cut a facial portrait down the center and remove half of the picture so that kids can fill it in themselves.
  3. Print a massive amount of JPEGs at once by putting them in a PDF.

I’m in the Eastern Cape, South Africa now, working with orphans and vulnerable children. Alex and I are spending some of our time on art projects in remote rural areas, and one of the projects is an idea we stole from an orphanage in Cape Town: take a digital portrait of a child’s face, crop it down the center, print it and let them draw the other side of the face. Like this one Alex did:

[Image: Alex’s half-face portrait]

The first time we did this, we went out to the rural area, took pictures of about 16 kids and then spent an hour or two processing the pictures and printing. The processing involved:

  1. Importing the pictures to Picasa, straightening some of them and cropping others. (Yes, I know Picasa is not Free/Libre, but F-spot (in Ubuntu Hardy) is dog slow to display pictures and doesn’t have the straighten function).
  2. Exporting to a directory, then opening each file in GIMP and cropping the right or left hand side of the face away.
  3. Combining all the JPEG images into a PDF so they’re easy to print.

The second time, we did the project at a school, for 60+ pupils. The straightening/cropping in Picasa took about ten minutes (since most of the pictures didn’t need much work). The open-crop-save-close process in GIMP took about thirty seconds per picture and was both repetitive and highly mouse-intensive, so we both got hand cramps after a while.

So, after watching Alex do the process for a second class at the school, I decided there must be a better way: automatic face detection. Lo and behold, five minutes of Googling got me to Torch3Vision, an image recognition toolkit with built-in face detection. It definitely works, but it takes quite a bit of setting up, so here’s a guide.

  1. Download Torch3Vision and un-tar it: tar -zxf Torch3vision2.1.tgz
  2. Build Torch3vision: cp Linux_i686.cfg.vision2.1 Linux_i686.cfg && ./torch3make
  3. Build the vision examples for face detection: cd vision2.1/examples/facedetect/ && ../../../torch3make *.cc

So now we have a working set of face-detection programmes. The command line interface isn’t too friendly, so they take a little playing around. For starters, the binaries on my Ubuntu system don’t read JPEG images (although the code seems to be there, the build system is non-standard and didn’t automatically pick up my jpeg libraries). So, I needed to convert my images to PPM format, which is one of those image formats that no-one uses but somehow is the lowest common denominator for image processing command line apps. I use the program ‘jpegtopnm’ from package ‘netpbm’.

jpegtopnm andy.jpg > andy.ppm

Of the three facial detection programs available, I found ‘mlpcascadescan’ to be the most effective and quickest, although they all have similar interfaces so this will basically be the same for all of them. We need to pass the source image and the model file, and we tell it to write the face position and to save a drawing with the face detected:

mlpcascadescan andy.ppm -savepos -draw \
-model ~/temp/models/mlp-cascade19x19-20-2-110

This command takes about 20s to run on my creaky old laptop, and creates two files. One is a greyscale visualization of the face detected (the original image was colour):

[Image: face detected, more or less]

The other file ‘andy.pos’ contains the results of face detection. Line one is the number of detections, then each subsequent line has the format x y w h, which is very easy to parse.

FACE_POS=`head -n 2 "andy.pos" | tail -n 1`
FACE_X=`echo $FACE_POS | awk '{print $1}'`
FACE_W=`echo $FACE_POS | awk '{print $3}'`
FACE_CENTER=`echo $FACE_X + $FACE_W/2 | bc`

I played around with the step-factors in the x and y directions to shave a second or so off the face detection routine, the values I chose were 0.1 and 0.2 respectively (I don’t need any accuracy in the y direction really, since my use is to cut the face down the middle).

Then, since these are portrait photographs, I can speed up face detection by setting a minimum size for the face. I experimented and one sixth of the total image width gave good results – any larger and the face detection would fail with a crash. Adding this constraint provides better than 10X speed up, since the algorithm doesn’t waste time searching for small faces.

WIDTH=`identify -format "%w" "andy.jpg"`
HEIGHT=`identify -format "%h" "andy.jpg"`
MIN_FACE_WIDTH=`echo $WIDTH / 6 | bc`

So now here’s the final face detection command:

mlpcascadescan "$ppm" -dir /tmp/ -savepos -model $MODEL \
-minWsize $MIN_FACE_WIDTH -stepxfactor $STEPX -stepyfactor $STEPY

And finally, as promised, I’ll tell you how to blank out one side of the face: of course, using Image Magick. Using the ‘chop’ or ‘crop’ commands didn’t work for this purpose, where I wanted the image to keep its dimensions but have one half just be white. So I decided to draw a white rectangle over half of the picture. I apply the image manipulation to the original JPEG file, not the temporary PPM file that I used to detect the face position.

convert -fill white \
-draw "rectangle $FACE_CENTER,0 $WIDTH,$HEIGHT" \
"andy.jpg" "andy_half.jpg"

And here’s the final result:

The script I am using to tie this all together.
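In outline, the glue amounts to something like the sketch below, stitched together from the commands above. The model path, step factors and file naming are assumptions, not the exact script, and it assumes you run it from the directory containing the JPEG.

#!/bin/sh
# Sketch: detect the face in a portrait, then blank out the right-hand half.
# The .pos file is assumed to land next to the input, as in the example above.
MODEL=~/temp/models/mlp-cascade19x19-20-2-110
STEPX=0.1
STEPY=0.2

jpg="$1"                 # e.g. andy.jpg
base="${jpg%.jpg}"
ppm="$base.ppm"

# The Torch3Vision binaries won't read JPEG here, so convert to PPM first.
jpegtopnm "$jpg" > "$ppm"

# Portrait photos: assume the face is at least a sixth of the image wide.
WIDTH=`identify -format "%w" "$jpg"`
HEIGHT=`identify -format "%h" "$jpg"`
MIN_FACE_WIDTH=`echo $WIDTH / 6 | bc`

mlpcascadescan "$ppm" -savepos -model $MODEL \
	-minWsize $MIN_FACE_WIDTH -stepxfactor $STEPX -stepyfactor $STEPY

# Line one of the .pos file is the detection count; line two is "x y w h".
FACE_POS=`head -n 2 "$base.pos" | tail -n 1`
FACE_X=`echo $FACE_POS | awk '{print $1}'`
FACE_W=`echo $FACE_POS | awk '{print $3}'`
FACE_CENTER=`echo $FACE_X + $FACE_W/2 | bc`

# Paint everything to the right of the face centre white on the original JPEG.
convert -fill white \
	-draw "rectangle $FACE_CENTER,0 $WIDTH,$HEIGHT" \
	"$jpg" "${base}_half.jpg"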

After processing all the portraits, I run a quick script to convert the jpegs to PDF and then join them into one master PDF file that I can easily print. The JPEG-PDF conversion uses Image Magick again (convert -rotate 90 file.jpg file.pdf). Joining together many PDFs into one document is easy with ‘pdfjoin’ from package ‘pdfjam’ (pdfjoin $tempfiles --outfile jpg2pdf.pdf). See the final jpg2pdf script.
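In outline, that jpg2pdf script is just a loop plus the join (the output names are placeholders):

# Convert each JPEG to a one-page PDF, rotating to landscape, then join them.
tempfiles=""
for jpg in *.jpg; do
	pdf="${jpg%.jpg}.pdf"
	convert -rotate 90 "$jpg" "$pdf"
	tempfiles="$tempfiles $pdf"
done
pdfjoin $tempfiles --outfile jpg2pdf.pdf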

But perhaps more enjoyable is to see the result after letting my limited creative talents loose:

[Image: a work of staggering complexity]