I am doing this on a loose heterogeneous cluster of Linux and OS X boxes, using Perl and the CLI versions of HandBrake/MediaFork. It works rather well, and is surprisingly easy to do if you know a scripting language. I've also done it with Xgrid as a grid controller.
Here's what I did:
1) Put all of your VIDEO_TS folders on a networked drive(s).
2) Put the CLI binary in the same location on each machine.
3) Write a script to find all of the VIDEO_TS folders and their titles on the drive(s).
4) Write a script to control the job distribution (this could be as easy as a loop with ssh/rsh -e calls).
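Step 3 is mostly one find call. Here's a minimal sketch — the mount point is a stand-in for your networked drive, and the snippet builds a throwaway fixture so it runs anywhere:

```shell
# Stand-in for the networked drive; the two MOVIE_* dirs are fake
# fixtures so this sketch is self-contained.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/MOVIE_A/VIDEO_TS" "$ROOT/MOVIE_B/VIDEO_TS"

# The actual work of step 3: collect every VIDEO_TS folder on the drive.
find "$ROOT" -type d -name VIDEO_TS | sort
```

From there, scanning each folder for titles is another loop (HandBrakeCLI's title-scan output is what my Perl script parses, feeding one encode job per title).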
If you are using OpenMosix, I assume that you are familiar with some scripting language, so you shouldn't have any problems.
I have a script on this site somewhere that finds VIDEO_TS and titles. Since I process an entire hard drive at once on each machine (which takes days), I didn't see much need to get fancy with queuing, just "ssh/rsh -e mediaforker.pl /Volumes/Drive#" from the controller to all of the machines.
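The controller loop really is that dumb. A dry-run sketch, with made-up hostnames and mount points (drop the echo to actually dispatch):

```shell
# Hypothetical worker machines; each one gets a whole drive to chew on.
machines="box1 box2 box3"

i=1
for m in $machines; do
    # Echo instead of executing, so the sketch is a harmless dry run.
    echo ssh "$m" "mediaforker.pl /Volumes/Drive$i"
    i=$((i + 1))
done
```

Since each job runs for days, there's no point in anything fancier than one ssh call per machine.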
My script's processing unit is a title, but if you wanted to process in smaller chunks and send them out to different machines, it could easily be modified to process by chapter (maybe even by blocks or chunks?). You would then have to write something to reassemble the chapters back into a movie.
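If you did go per-chapter, the split is simple arithmetic over HandBrake's chapter-range flag (`-c A-B`). A dry-run sketch — the chapter count, paths, and output names are all made up, and it only prints the commands it would run:

```shell
# Hypothetical title with 10 chapters, carved into chunks of 4.
CHAPTERS=10
CHUNK=4

parts=0
start=1
while [ "$start" -le "$CHAPTERS" ]; do
    end=$((start + CHUNK - 1))
    # Clamp the last chunk to the real chapter count.
    [ "$end" -gt "$CHAPTERS" ] && end=$CHAPTERS

    # Print (don't run) one encode command per chunk; each chunk could
    # go to a different machine.
    echo HandBrakeCLI -i /Volumes/Drive1/MOVIE/VIDEO_TS -t 1 \
        -c "$start-$end" -o "movie.part$start.mp4"

    parts=$((parts + 1))
    start=$((end + 1))
done
```

The reassembly step (concatenating the part files back into one movie) is the piece you'd still have to write yourself.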
Here is the post with the scripts:
I think some of the developers may have recently written some useful scripts, as well, but I haven't used them yet.