Posted by: bjb

South migration names cannot have hyphens in them. grrr.

In fact, they cannot contain anything but alphanumeric characters and underscores.
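Presumably this is because the migration name ends up in the name of a Python module (000n_name.py), and hyphens are not valid in Python identifiers. So, with a hypothetical app and migration name:

# ok
./manage.py startmigration myapp add_client_field --auto
# not ok: hyphen in the migration name
./manage.py startmigration myapp add-client-field --auto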

Posted by: bjb

If you’re getting the error:

-sh: <( compgen -f -X -- 'b' ): No such file or directory

when you’re trying to invoke filename completion, for example like this:

bjb@edouard:/etc$ ls b<tab>

where you typed a b and then tab, to find all the directory entries in /etc that start with b, then you might have the posix option set in your shell while using an old version of the bash-completion package. The version that gave me grief was 20080705. Another machine has version 1:1.2-3, which doesn’t suffer from this problem.

You can check by showing the contents of $SHELLOPTS:

echo $SHELLOPTS

(And to get the package version number: apt-cache policy bash-completion.)
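Another way to check just that one option, rather than reading the whole SHELLOPTS list:

shopt -o posix

which prints posix on or posix off.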

If the contents of the SHELLOPTS shell variable include the posix keyword, then you will need to disable posix, because the shell completion implementation in bash-completion 20080705 doesn’t seem to be posix compliant.

set +o posix

will turn OFF posix mode, and

set -o posix

will turn ON posix mode.

So to make command-line completion work, I had to give the command set +o posix. [CORRECTED 2010-12-16]

Figuring this out introduced me to an interesting shell construct that I hadn’t seen before. In the old version of /etc/bash_completion, in function _filedir, you see this:

_filedir()
{
        local IFS=$'\t\n' xspec

        _expand || return 0

        local toks=( ) tmp
        while read -r tmp; do
                [[ -n $tmp ]] && toks[${#toks[@]}]=$tmp
        done < <( compgen -d -- "$(quote_readline "$cur")" )

        if [[ "$1" != -d ]]; then
                xspec=${1:+"!*.$1"}
                while read -r tmp; do
                        [[ -n $tmp ]] && toks[${#toks[@]}]=$tmp
                done < <( compgen -f -X "$xspec" -- "$(quote_readline "$cur")" )
        fi

        COMPREPLY=( "${COMPREPLY[@]}" "${toks[@]}" )
}

It is the <( some command with output to stdout ) construct that I hadn’t seen before. It represents “process substitution” and the man page says it “is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. It takes the form of <( list ) or >( list ). The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list.”
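A minimal standalone example of process substitution, unrelated to bash_completion, is comparing the output of two commands without creating temporary files:

diff <( ls /etc ) <( ls /usr/local/etc )

Each <( ... ) expands to a filename (a FIFO or /dev/fd entry) that diff reads from.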

I also relearned about declare -f, which will show you all the function definitions your shell currently knows about.
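For example:

declare -F            # list just the names of the defined functions
declare -f            # dump every function definition
declare -f _filedir   # dump only the _filedir function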

Posted by: bjb

My app wasn’t running in the virtualenv I had prepared for it … I activated a virtualenv by sourcing /usr/local/pythonenv/DJANGO-1-1/bin/activate, and then ran ./manage.py runserver. However, according to the debug stuff I put in the template (django.get_version() and python.VERSION), the app wasn’t using the right django.

It turns out that manage.py has #!/usr/bin/python at the top — it wasn’t running the python in the path but a hardcoded path to the system python. To use the virtualenv’s python, change that top line to:

#!/usr/bin/env python

This will use the python that can be found in the path of the calling process — which is what you want when using virtualenv.
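A quick way to check (the prompt and project path here are just for illustration):

bjb@edouard:~/myproject$ source /usr/local/pythonenv/DJANGO-1-1/bin/activate
(DJANGO-1-1)bjb@edouard:~/myproject$ which python
/usr/local/pythonenv/DJANGO-1-1/bin/python
(DJANGO-1-1)bjb@edouard:~/myproject$ head -1 manage.py
#!/usr/bin/env python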

It may be that newer versions of django generate the /usr/bin/env line, but if you created your project with an older django, you might need this step.

Posted by: bjb

If you are using pip install somepackage, and it is not going into your virtualenv, perhaps you are tripping over the same thing I did.

I had activated the virtualenv by sourcing /usr/local/pythonenv/DJANGO-1-1/bin/activate, and (DJANGO-1-1) was appearing in the shell prompt. Running which pip showed /usr/local/pythonenv/DJANGO-1-1/bin/pip. Starting python and running

import sys
for p in sys.path:
    print p

showed that the system site-packages directories were not being included. So why was the package being installed to /usr/local/python2.6/site-packages?

Well, the pythonenvs had been installed as root. I was actually running sudo pip install somepackage and that meant I was losing the virtualenv environment (which sets some environment variables that don’t survive the sudo to root transition).

So the solution was to run [UPDATED 2011/01/04]

sudo /usr/local/pythonenv/DJANGO-1-1/bin/pip install -E /usr/local/pythonenv/DJANGO-1-1/ somepackage

I also threw in the --download-cache option to cut down on download time for subsequent installs into the DJANGO-1-0 and DJANGO-1-2 virtualenvs (although I subscribe to DSL, my download speeds are closer to dial-up).
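Something like this (the cache directory is an arbitrary choice):

sudo /usr/local/pythonenv/DJANGO-1-2/bin/pip install \
    -E /usr/local/pythonenv/DJANGO-1-2/ \
    --download-cache=/var/cache/pip-downloads \
    somepackage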

Posted by: bjb

I really like django, but one thing it lacks is db schema migration. I’m trying out a django package called “south” that does that. (Debian package python-django-south).

South keeps track of migrations applied by adding a table to your database called south_migrationhistory. Of course, this table has to be added before south will work … you add it using syncdb. Fortunately, south keeps track of what it does and what syncdb does.

So you start using south by installing it (and adding 'south' to INSTALLED_APPS), then running ./manage.py syncdb. Now you have some new ./manage.py commands, called startmigration and migrate (also datamigration, schemamigration and graphmigrations; maybe I’ll look at those another time). To snapshot your initial db state or to detect changes to your schema, you use the startmigration command. To apply migrations to your database, you use the migrate command.

To snapshot your initial db state, use startmigration --initial. To detect changes to your schema, use startmigration --auto. The startmigration command will, among other things, dump the current db schema into the migration file: every generated migration contains a schema declaration towards the end. It also contains forwards and backwards functions, for applying the migration and unapplying it. Schema changes are detected by comparing the model declarations in models.py with the schema dumps in the existing migrations.

Some examples, for a db named clientportal and an app named portal:

# create migration named portal/migrations/0001_initial.py
./manage.py startmigration portal initial --initial

The initial migration is already in your database, so you don’t have to apply it yourself. Then you edit the portal/models.py file, and add a field. Then you can have south detect this and create a migration that applies the change to the database.

./manage.py startmigration portal add_field --auto  

You can use the newly created migration to change the database:

# will find all migrations named 0002_add_field
# and apply them (in alpha order of app name)
./manage.py migrate 0002_add_field 

I’m not sure how to better control the migration naming and order of application. For instance, it seems that migrations are numbered sequentially within each application, but you don’t specify the number. So if you have more than one app (app a and app b), and you create migrations in this order:

b/migrations/0001_initial.py
a/migrations/0001_initial.py
b/migrations/0002_add_field.py

and then run ./manage.py migrate, the migrations will be applied in this order:

a/migrations/0001_initial.py
b/migrations/0001_initial.py
b/migrations/0002_add_field.py

And I’m afraid that if I made a new migration (add_another_field) for app a, it would be called 0002_add_another_field and would be applied with all the other migrations (on ./manage.py migrate on a new db):

a/migrations/0001_initial.py
a/migrations/0002_add_another_field.py
b/migrations/0001_initial.py
b/migrations/0002_add_field.py

A little annoying when the migrations should have been applied in this order:

b/migrations/0001_initial.py
a/migrations/0001_initial.py
b/migrations/0002_add_field.py
a/migrations/0002_add_another_field.py

Hopefully there is a way to handle it, even if I don’t yet know what it is. Just being able to specify the 000n numbers would do it.
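One thing that does help: migrate accepts an app name, so you can apply each app’s migrations explicitly, in whatever order you like:

# apply app b's migrations first, then app a's
./manage.py migrate b
./manage.py migrate a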

To list the available migrations, you can run ./manage.py migrate --list. The migrations that have been applied have a * next to them.
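The output looks roughly like this (app and migration names made up):

$ ./manage.py migrate --list
 portal
  (*) 0001_initial
  (*) 0002_add_field
  ( ) 0003_not_yet_applied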

UPDATE 2011/Jan/12: to revert to an old version: ./manage.py migrate appname 0006_shortname_not_null

This brings you to the state just after 0006_shortname_not_null was applied.

Posted by: bjb

I want to put rounded borders on a stretchy box — expandable vertically and horizontally. I searched the web for solutions to this using pure CSS and found … none.

All the solutions I found had some kind of drawback. They all require you to either have a non-stretchy box in at least one dimension, or they require that your corner graphics be non-transparent. I’m not interested in the solutions that require javascript. The javascript just adds HTML elements or changes the CSS, anyway. And it’s slower than pure HTML and CSS, and sometimes people turn off javascript. Heaven forbid that they should miss out on the rounded corners.

The non-stretchy solutions involve having a single graphic to cover two or more corners — making stretching in the dimension between the two corners impossible.

There are a few non-transparent solutions that allow you to have a stretchy box. The solutions are various ways of stretching the sides of the box out to the corners, and then putting corner graphics on top of the sides. Thus the requirement to have opaque corners. One method is the sliding doors, another the Russian doll divs, and there are more as well.

One solution uses the pseudo-elements :before and :after, which are not supported in MSIE. Unfortunately, some people who read my blog (there are one or two people) use MSIE and complain when my blog looks funny. And they won’t switch browsers (or OSs, too bad).

Some solutions to the rounded-corners problem require you to have a certain set of elements in your html structure on which to place the corner and side graphics, and then claim that the solution does not require extraneous html elements. But if you don’t want a header element or a definition list in your box, you’re out of luck for that solution.

But I want stretchy sides and transparent-background corners around arbitrary contents.

Given that in order to implement this, I’m going to have to introduce extra HTML elements anyway, I think it is worth mentioning that a table can do what I want. A three by three table, with narrow fixed-width first and third columns and rows, will allow the middle cell to stretch to accommodate its content. The first and third column and row cells can each have a different background image, and each image displays in its own space with no overlap, allowing for transparent backgrounds in the graphics. Or, if you don’t want to have to maintain the table column and row widths and heights in sync with the graphics, you can put the graphics into those cells as HTML elements. Hey, one method is as evil as the other.

So, sometimes I’m going to use tables in my HTML. It is the best possible solution for this problem at this time. I sure hope HTML 5 has some sensible help for layout.

Posted by: bjb

The first beam:

[Photos: IMG_3547-med.JPG, IMG_3548-med.JPG, IMG_3549-med.JPG]

Preparing for the second beam, at right angles to the first:

[Photos: IMG_3551-med.JPG, IMG_3552-med.JPG, IMG_3553-med.JPG]

The second beam is up, they are adjusting its position with a sledgehammer:

[Photo: IMG_3557-med.JPG]

Connect the two beams:

[Photos: IMG_3559-med.JPG, IMG_3560-med.JPG]

Take out a temporary wall:

[Photos: IMG_3656-med.JPG, IMG_3657-med.JPG]

The view without the temporary wall:

[Photos: IMG_3658-med.JPG, IMG_3727-med.JPG, IMG_3730-med.JPG]

Posted by: bjb

Here’s the replacement for the load-bearing walls (see the nice red beams up near the ceiling). John put wood 2x6 along the bottom of the beam in order to have something to attach to while doing the rest of the finishing.

[Photo]

Posted by: bjb

I did an apt-get upgrade, and then apt-get through apt-cacher didn’t work any more. It seems that if you have the perl IPv6 library (IO::Socket::INET6) installed, apt-cacher assumes you have deployed IPv6 and opens an IPv6-only listening socket.

To make all your services accept IPv4 connections along with IPv6 ones, run (as root) echo 0 > /proc/sys/net/ipv6/bindv6only and edit /etc/sysctl.d/bindv6only.conf to contain 0 rather than 1. Then restart your service(s).
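Something along these lines (Debian paths; check the exact contents of the sysctl file on your own system before blindly running the sed):

# as root: let IPv6 listening sockets accept IPv4 connections too
echo 0 > /proc/sys/net/ipv6/bindv6only
# make it persistent across reboots
sed -i 's/bindv6only *= *1/bindv6only = 0/' /etc/sysctl.d/bindv6only.conf
# restart the affected service
/etc/init.d/apt-cacher restart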

Or if you do have IPv6 running locally, you can change your apt sources.list file (/etc/apt/sources.list) to refer to ip6-localhost instead of localhost:

deb     http://ip6-localhost:3142/debian.mirror.iweb.ca/debian/ unstable main contrib non-free
deb-src http://ip6-localhost:3142/debian.mirror.iweb.ca/debian/ unstable main contrib non-free

(Other distros might use another name, like localhost6. Look in your /etc/hosts file for the right name.)

Another gotcha: If you install apt-cacher-ng, and you already have apt-cacher running, then apt-cacher-ng will attach to the remaining free interface (the IPv4 one) and apt-cacher will still be running on IPv6. The two packages don’t conflict. Yeargh. I can’t wait to see the cache corruption … Ah, no cache corruption. It starts its own cache — from scratch. Well better that than corruption I guess. But, even better to be running only one of them on all listening ports.

And another gotcha: Installing apt-cacher-ng might end up adding a proxy config to your apt config like so:

/etc/apt/apt.conf:  Acquire::http::Proxy "http://aa.bb.cc.dd:3142";

so if you use URLs like the ones above, you will have specified the proxy twice: once in the URL and once in the apt.conf file. Apt then complains with:

Failed to fetch http://aa.bb.cc.dd:3142/volatile.debian.org/debian-volatile/dists/lenny/volatile/contrib/source/Sources.gz  403  URL seems to be made for proxy but contains apt-cacher-ng port. Inconsistent apt configuration?

The fix is to remove the proxy either from apt.conf or from the sources.list entries. Note that the proxy might have been put in /etc/apt/apt.conf.d/01proxy or something like that instead.
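In other words, either delete the Acquire::http::Proxy line, or keep it and point sources.list at the real mirror instead of at the cacher, something like:

deb     http://debian.mirror.iweb.ca/debian/ unstable main contrib non-free
deb-src http://debian.mirror.iweb.ca/debian/ unstable main contrib non-free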

Well that was a lot more exciting than I’d hoped for.

Posted by: bjb

The exim4 config file is a bit annoying because it is hard to know which statements are assignments and which are conditions. Below, the debug_print, driver, and data statements (in the hub_user router) are assignments, while the domains and check_local_user statements are conditions. Note how the config file designer helpfully mixed them together.

hub_user:
  debug_print = "R: hub_user for $local_part@$domain"
  driver = redirect
  domains = +local_domains
  data = ${local_part}@DCreadhost
  check_local_user

hub_user_smarthost:
  debug_print = "R: hub_user_smarthost for $local_part@$domain"
  driver = manualroute
  domains = DCreadhost
  transport = remote_smtp_smarthost
  route_list = * DCsmarthost byname
  host_find_failed = defer
  same_domain_copy_routing = yes
  check_local_user
.endif

Anyway, to help with debugging exim4 routers, you can use the -bt option to exim4:

blueeyes:~# exim4 -bt bjb bjb@localhost bjb@blueeyes
R: system_aliases for bjb@linuxbutler.ca
R: userforward for bjb@linuxbutler.ca
R: procmail for bjb@linuxbutler.ca
bjb@linuxbutler.ca
  router = procmail, transport = procmail_pipe
R: system_aliases for bjb@localhost
R: userforward for bjb@localhost
R: procmail for bjb@localhost
bjb@localhost
  router = procmail, transport = procmail_pipe
R: system_aliases for bjb@blueeyes
R: userforward for bjb@blueeyes
R: procmail for bjb@blueeyes
bjb@blueeyes
  router = procmail, transport = procmail_pipe
blueeyes:~# 

Note to self: “satellite” mail type is for machines whose mail is forwarded to another machine. Use “smarthost” for machines on which I actually read mail.
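The mail configuration type is chosen through debconf, so switching a machine from one to the other is just:

dpkg-reconfigure exim4-config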