Posted by: bjb

I’m trying out byteflow under django 1.2, and I finally have it working. There were only a few changes to make.

First off, the databases are specified differently in django 1.2 — there is the option to connect to multiple databases now. Strangely however, django did not force me to change my database settings … I guess there is some backward compatibility stuff for now.


    DATABASE_ENGINE = 'postgresql_psycopg2'
    DATABASE_NAME = 'byteflow'
    DATABASE_USER = 'db_user'
    DATABASE_PASSWORD = 'sekrit'

versus the new style:

    DATABASES = {
        'default' : {
            'ENGINE' : 'postgresql_psycopg2',
            'NAME'   : 'byteflow',
            'USER'   : 'db_user',
            'PASSWORD' : 'sekrit',
        },
    }

There are also some deprecation warnings in the logs about the admin URL patterns (I may have added all those ‘settings.BLOG_URLCONF_ROOT’ in when using ‘bjb’ as my URL_PREFIX). The old style:

    url(r'^%sadmin/(.*)' % settings.BLOG_URLCONF_ROOT, admin.site.root, name='admin'),

is deprecated in favour of including the admin site's urls:

    url(r'^%sadmin/' % settings.BLOG_URLCONF_ROOT, include(,
But this didn’t work for me so I went back to the old way. The problem was that when I asked to edit a blog post, it brought me to the main admin page. When I clicked on the Change link, it stayed on the main admin page. I’ll have to look into that another time.

An update was required in apps/tagging/managers, in usage_for_queryset, to adapt the database query to the django 1.2 database scheme (multiple databases). I found this hint.

Also, in order to run django 1.2 on my stable machine, I set up a virtualenv (with --no-site-packages) in which to run it. Had to install all the site-packages into the virtualenv:

  • BeautifulSoup-
  • Django-1.2.1
  • PIL
  • mx
  • openid
  • psycopg2-2.0.7

That’s about it except for infrastructure:

  • easy-install
  • pip-0.8
  • setuptools-0.6c8

Well, I suppose I should have started by upgrading byteflow; I’ll have to try that another time. Maybe some of my changes have already been fixed upstream. However, I did quickly note that the byteflow install page still says it requires django 1.0.

Posted by: bjb

If you want to do some unattended operations on your postgres database, and you haven’t given the user who will run those unattended operations access to that database using ident or sameuser authorization in /etc/postgresql/M.N/main/pg_hba.conf, then you will have to supply a password upon invocation. But postgres commands generally don’t let you specify a password on the command line (and there is a good reason for this).

There are two ways to configure your admin user to be able to work on your postgres database. One is with an environment variable and the other is with a postgres password config file in the admin user’s home directory.

The environment variable to set is PGPASSWORD, for example:

    export PGPASSWORD=sekrit; pg_dump mydatabase

The config file method means writing lines like

    hostname:port:database:username:password

into a file called .pgpass in the admin user’s home directory. Don’t forget to set the permissions on ~/.pgpass to 0400, or -r--------.
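Setting the file up can itself be scripted; here is a minimal sketch in Python (the entry and the scratch path are made up — the real file is ~/.pgpass):

```python
import os
import stat

# Hypothetical credentials -- substitute your own.  A scratch path is
# used here instead of ~/.pgpass so the sketch is safe to run anywhere.
pgpass = os.path.join(os.environ.get('TMPDIR', '/tmp'), 'demo_pgpass')

with open(pgpass, 'w') as f:
    f.write('localhost:5432:byteflow:db_user:sekrit\n')

# libpq refuses to use the file unless the group/other bits are off.
os.chmod(pgpass, stat.S_IRUSR)  # 0400, i.e. -r--------
```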

The reason why postgres strongly discourages specifying the password on the command line is that it is easy for other users on the system to see that password with a simple invocation of the ps command.

Categories: , , , ,
Posted by: bjb

If you’re running a slave nameserver using bind9 and you’re getting messages like this in your logs:

    Aug 31 19:58:30 sns named[12175]: zone refused notify from non-master: 2002:1234:cdef::1234:cdef#13361

then the master is sending out notifies on an IPv6 address. Normally, you could just add that address to the “masters” list in the zone on the slave, but if the master isn’t listening on IPv6 you’ll get a bunch of other errors, like this:

    Aug 31 07:32:33 sns named[12175]: zone refresh: retry limit for master 2002:1234:cdef::1234:cdef#53 exceeded (source ::#0)
    Aug 31 07:32:33 sns named[12175]: zone Transfer started.
    Aug 31 07:35:42 sns named[12175]: transfer of '' from 2002:1234:cdef::1234:cdef#53: failed to connect: timed out
    Aug 31 07:35:42 sns named[12175]: transfer of '' from 2002:1234:cdef::1234:cdef#53: Transfer completed: 0 messages, 0 records, 0 bytes, 189.000 secs (0 bytes/sec)

There are two ways to fix this: on the slave nameserver, or on the master.

Posted by: bjb

Using my blog for its intended purpose: notes to self.

django on my server is an older version; the apps I develop on my desktop need a newer version of django. I need to deploy virtualenv on my server so I can run a newer version of django for some apps, like ipaddr and byteflow.

byteflow in particular needs jquery.js (for attaching an image to a post, via the postimage app and the TAGGING_AUTOCOMPLETE_JS_BASE_URL setting), and that is not included in django 1.0.2.

But I’m out of time for today, so this is a task for another day.

Posted by: bjb

Summary of emacs rectangle commands, with default keystrokes:

C-x r k    kill-rectangle
C-x r d    delete-rectangle
C-x r y    yank-rectangle
C-x r o    open-rectangle
C-x r c    clear-rectangle
M-x delete-whitespace-rectangle
C-x r t <text> <enter>    replace rectangle contents with text (string-rectangle)
M-x string-insert-rectangle <enter> <string> <enter>    insert string on each line of the rectangle


Categories: , ,
Posted by: bjb

I created a new django app on my development machine running Debian SID (updated Aug 14-ish) and wanted to run it on my server machine running Debian stable (lenny). It didn’t work:

    machine  |  devel         |  server
    ---------+----------------+----------------
    Debian   |  sid (Aug 14)  |  stable (lenny)
    django   |  1.2           |  1.0
    python   |  2.6           |  2.5

There were complaints about missing modules, missing middleware, etc etc. Commenting out those bits resulted in more complaints about other things. So I figured I’d just have to run a “private” copy of django for that application.

To figure out what to copy, I looked at the python-django package contents using

    dpkg -L python-django

and that pointed me to this directory full of django implementation files:

    /usr/lib/pyshared/django
So I copied that over to the stable machine next to the little application, told the application to use that django by inserting that directory at the beginning of PYTHONPATH in the wsgi script, and ran the application.
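The wsgi tweak amounted to something like this (the path is hypothetical; mine pointed at the directory containing the copied django):

```python
import sys

# Prepend the directory that *contains* the copied django/ package, so
# "import django" resolves to the private copy before the system one.
private_lib = '/srv/myapp/lib'   # hypothetical location of the copy
if private_lib not in sys.path:
    sys.path.insert(0, private_lib)
```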

It couldn’t find the module core.handlers.wsgi.

That file was there … but there was no file in core/handlers.

It turns out there were lots of missing files … and it turns out that although Debian installs the django python implementation files in /usr/lib/pyshared/django, it uses them from another directory, /usr/lib/pymodules/python2.6/django. That is a mirror directory structure containing links back to /usr/lib/pyshared/django for the files that live there, plus some other files: the missing (and usually empty) files, and a pile of .pyc files.

My guess is that Debian makes the mirror directory so that the .pyc files will not mess up the “source” install directory.

The upshot is that if you’re going to run another django by copying django to a directory local to the application and altering the PYTHONPATH, copy the /usr/lib/pymodules/python2.6/django directory and not the /usr/lib/pyshared/django directory.

But, the better solution is to use python virtual environments (venv). My app is a tiny thing that only uses django and nothing else (django was really overkill for my app) but it’s a Bad Idea to solve the “wrong-django” problem this way. For example, any time a file disappears from the new version of django, my app would still find it in the old path (which didn’t get removed when I altered PYTHONPATH).

Posted by: bjb

Today is Debian’s 17th birthday. I sent them a thank you on the thank you page.

Categories: ,
Posted by: bjb

I had to look at a 9-MB json file this weekend. Here’s how I converted it from one-line to indented multi-line:

$ sudo apt-get install python-simplejson
$ dpkg -L python-simplejson
$ less /usr/share/pyshared/simplejson/tests/

$ python
>>> import simplejson as json
>>> f = open ('big.json', 'r')
>>> oneline = ()
>>> f.close ()
>>> ds = json.loads (oneline)                  # ds = "data structure"
>>> multiline = json.dumps (ds, indent="  ")   # two spaces / indent level
>>> f = open ('formatted.json', 'w')
>>> f.write (multiline)
>>> f.close ()
>>> ^D

In reviewing the steps for this blog post, I note that there is also a “load” function, which might be even easier.

>>> import simplejson as json
>>> dir (json)
['Decimal', 'JSONDecodeError', 'JSONDecoder', 'JSONEncoder',\
 'OrderedDict', '__all__', '__author__', '__builtins__', '__doc__',\
'__file__', '__name__', '__package__', '__path__', '__version__',\
'_default_decoder', '_default_encoder', '_import_OrderedDict',\
'_import_c_make_encoder', '_speedups', '_toggle_speedups',\
'decoder', 'dump', 'dumps', 'encoder', 'load', 'loads', 'ordered_dict', ...]

Example input:

[{"pk": "5", "model": "theModel", "fields": {"two": "two", "one": "one"}}]

Example output:

    [
      {
        "pk": "5", 
        "model": "theModel", 
        "fields": {
          "two": "two", 
          "one": "one"
        }
      }
    ]
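The “load”/“dump” variant mentioned above would look something like this (a sketch; the stdlib json module has the same API as simplejson, and the one-line input file is created here so the example is self-contained):

```python
import json  # same load/dump API as simplejson

# Make a one-line input file so the sketch is self-contained.
with open('big.json', 'w') as f:
    f.write('[{"pk": "5", "model": "theModel", '
            '"fields": {"two": "two", "one": "one"}}]')

# load() and dump() work on file objects directly,
# so there is no intermediate string to hold on to.
with open('big.json') as f:
    ds = json.load(f)
with open('formatted.json', 'w') as f:
    json.dump(ds, f, indent=2)
```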

07/30: hpet_info

Categories: , ,
Posted by: bjb

HPET: High Precision Event Timer. My team is going to use it to measure some kernel activity and for starters I’ve written a userspace program that uses the timer. Well that turned out to be a bit more difficult than I thought. It’s not that well documented.

HPET is quite PC oriented, so you don’t get it on other architectures. When a machine has one, it might take over the task of the Real Time Clock and the OS timer. In each of the two machines I examined, there was one HPET block and it had three “timers” in it. Actually there was one counter and three comparators. But for the purposes of software we can consider it to be three separate but related timers. The HPET spec waxes eloquent about how you can have bunches of timers and comparators in a machine, but I haven’t seen it yet. Maybe in specialized hardware. Update I have another machine with 4 comparators; the first two are reserved by Linux and the other two are available for when /dev/hpet is opened. If you open /dev/hpet a third time (without closing any) it will say it’s busy.

The Linux kernel uses the first two timers for itself, and makes the third one available to entities that ask for it. Userspace can use it via the /dev/hpet device. You open /dev/hpet, then you can make fcntl and ioctl calls on it. Eventually you close the device. You can open the device in read-only mode, even if you’re going to set the timer, because you will never call write on it.

The API for using the HPET is rather narrow, you can’t examine all the registers or anything. You can ask the driver to fill in hpet_info for you with the INFO ioctl call. That fills in a data structure like this (I was looking at kernel 2.6.35):

struct hpet_info {
        unsigned long hi_ireqfreq;      /* Hz */
        unsigned long hi_flags; /* information */
        unsigned short hi_hpet;
        unsigned short hi_timer;
};

I could find NO documentation on what goes into this structure, aside from reading the code. And you have to read the code pretty closely and in conjunction with the HPET spec … Anyway:

hpet_info gets filled with the info for the timer device in question. In this case, with timer 2 (because that’s the “leftover” timer that gets used for /dev/hpet requests).

  • hi_ireqfreq is (not surprisingly) the requested frequency of the periodic timer. The device driver does a little arithmetic to give you a value in Hz, rather than the raw numbers from the register.

  • hi_flags contains 0 if the timer is capable of periodic (repeated) interrupts in hardware, and 2 if not (kind of a waste of a 32-bit field, oh well).

  • hi_hpet contains the id of the timer (2 in the case of timer 2) … in this block, I think. I only have one block.

  • hi_timer contains the address offset between datastructures in the kernel. I have no idea why they thought that might be interesting to userspace … Update It is supposed to be counting the number of structures, rather than giving an address offset. But it should have given 2, and it actually returned 0x40 (twice the structure size). ???

It turns out that (at least on my hardware) you can’t set a one-shot timer. When triggered, the interrupt handler adds the period into the comparator and the timer will trigger again. And again, and again, until you turn it off. I haven’t tried turning it off in the interrupt handler yet. Not sure if that’s a safe operation for the interrupt handler. I guess I’ll find out tomorrow! Update Turning the interrupt off in the interrupt handler worked. That time. YMMV.

As it turns out, it would do that even if the timer was capable of doing periodic interrupts in hardware. To use the hardware periodic interrupts, you have to stop, reset and start the whole timer block (with all the comparators). Kinda catastrophic for the block with the system clock in it, so Linux just doesn’t use the hardware period timers.

Well I’m not done looking at this, I’ll update the post when I learn more.

Categories: , ,
Posted by: bjb
I created a django project and application and the associated database. I created the tables with syncdb over a couple of development iterations. I can still run ./ syncdb in the original project with no output.

I tried copying the project to another directory, creating a new (empty) database (db-dev) and adjusting the in the new project. When I run ./ syncdb in the new project, it says:
    psycopg2.ProgrammingError: relation "fileshare_language" does not exist

Shouldn’t django create the relation (table) as part of the syncdb operation?
    class Language (models.Model):
        """Class to represent the choice of languages available so far"""
        language = models.CharField (max_length = LANGUAGE_LEN)

        def __repr__ (self):
            return self.language

    class Clients (models.Model):
        """Class to represent the clients.  This class is associated
        with the Django User Model (where the name and email address are stored)."""
        user = models.ForeignKey (User, unique = True)
        filedir = models.CharField (max_length=FILE_PATH_LEN)
        language = models.ForeignKey (Language)

    class AddClientForm (forms.Form):
        """Form for adding a client

        Also adds a django user and creates a directory"""
        username = forms.CharField (label = ugettext_lazy (u'Username'),
            widget = forms.TextInput (attrs = {'class' : 'form_object_bg' }),
            required = True)
        firstname = forms.CharField (label = ugettext_lazy (u'First Name'),
            widget = forms.TextInput (attrs = {'class' : 'form_object_bg' }))
        lastname = forms.CharField (label = ugettext_lazy (u'Last Name'),
            widget = forms.TextInput (attrs = {'class' : 'form_object_bg' }))
        email = forms.EmailField (label = ugettext_lazy (u'Email'),
            widget = forms.TextInput (attrs = {'class' : 'form_object_bg' }))
        filedir = forms.CharField (label = ugettext_lazy (u'Files Location'),
            required = True)

        language_qs = Language.objects.all ().order_by ('id')
        language_choices = []
        for ll in language_qs:
            language_choices.append ((, ll.language))
        language = forms.ChoiceField (choices = language_choices,
                                      label = ugettext_lazy (u'Language'))
        is_admin = forms.BooleanField (label =
                                       ugettext_lazy (u'Is administrator'),
                                       required = False)
        password = forms.CharField (label = ugettext_lazy (u'Password'),
            widget = forms.PasswordInput (attrs = {'class' : 'form_object_bg' }),
            required = True)
        pwd_confirm = forms.CharField (label = ugettext_lazy (u'Password Confirmation'),
            widget = forms.PasswordInput (attrs = {'class' : 'form_object_bg' }),
            required = True)
It turns out that the attempt to put the language choices in a dropdown list in the form is causing syncdb (and every other ./ command) to fail with that traceback. I suppose the quick fix is to create the table and populate it manually in the empty database, and then run syncdb. Later I can fix up the form so it doesn’t have code in the middle of the field declarations. Oops.