haruki zaemon

Lies, Damned Lies and Statistics

Whilst reading the latest news headlines on the ABC (Australian Broadcasting Corporation) web site just now, I happened upon what seemed like a rather interesting article entitled Brain fluid draining eases dementia: research. Fascinated, I read on.

…The study investigated 20 patients…71 per cent of our patients improved in memory and mental function and 94 per cent improved in balance and walking…

Hang on a second…71 percent of 20 patients would be…14.2 patients; 94 percent of 20 patients would be…18.8 patients. Huh?

Abstract ActiveRecord Classes by Convention

Ruby on Rails provides a very simple mechanism for specifying that a model class is an abstract base class and therefore has no corresponding database table:

class MyAbstractClass < ActiveRecord::Base
  self.abstract_class = true
  ...
end

Code can then interrogate a model class to see if it is abstract:

puts "it's abstract" if MyAbstractClass.abstract_class?

Not so hard. However, I pretty much always prefix the names of my abstract classes with, you guessed it, 'Abstract'. So, the other day I added some code to the RedHill on Rails Core Plugin to extend the definition of an abstract class to include the name:

def abstract_class?
  @@abstract_class || !(name =~ /^Abstract/).nil?
end

With that simple change, I no longer need to explicitly set self.abstract_class = true; it just works by convention.
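For anyone who wants to see the convention in isolation, here’s a stand-alone sketch in plain Ruby – no Rails involved, and the class names are invented for the example:

```ruby
# Plain-Ruby illustration of the naming convention: classes whose
# names start with "Abstract" report themselves as abstract.
class ModelBase
  def self.abstract_class?
    !(name =~ /^Abstract/).nil?
  end
end

class AbstractThing < ModelBase; end
class Thing < AbstractThing; end

AbstractThing.abstract_class?  # => true
Thing.abstract_class?          # => false
```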

I suppose I could/should have created a plugin for it but I was feeling lazy :)

Perforce Client Setup

For anyone who is unfortunate enough to work with Perforce – and so I don’t have to remember – here’s a quick-and-dirty kick-start guide to setting up a client workstation. (Note I’m on a Mac so your mileage may vary.)

First things first, install the Perforce software available from http://www.perforce.com/perforce/downloads.

Next, in order to avoid various command-line arguments, I have the following environment variables set:

export P4EDITOR="$EDITOR"
export P4USER="username"
export P4PORT="1666"
export P4HOST="clientname.local"
export P4CLIENT="clientname"

If you’re using SSH, you’ll need to create a tunnel to the server. Something like this should work:

ssh -L1666:server:1666 -p 22 -N -t -x username@server

This sets things up so all requests to localhost:1666 are routed over ssh to the remote server. You can then set up the client:

p4 client

This will launch your default editor – in my case that’s TextMate but nano/pico/emacs/vi/etc will do – and allow you to modify the following fields:

Client:  clientname
Owner:   username
Host:    clientname.local
Root:    /path/to/projects/
View:

See the documentation for an explanation of how to set up the View. In my case, I’m running a Rails application, so I have some rules to exclude various generated and client-specific directories:

-//depot/projectname/config/database.yml //clientname/projectname/config/database.yml
-//depot/projectname/db/schema.rb //clientname/projectname/db/schema.rb
-//depot/projectname/log/... //clientname/projectname/log/...
-//depot/projectname/tmp/... //clientname/projectname/tmp/...

Finally, to get a copy of the latest source code in /path/to/projects/projectname, run:

p4 sync

And because I just can’t help myself, by way of comparison, here are the equivalent instructions for Subversion:

svn co svn+ssh://username@repositoryurl/trunk/projectname

Gosh, wasn’t that difficult.

Zip and Preserve File Permissions with Ant

Yes, it’s been a while since I posted an entry related to Java! Believe it or not, we still do Java development, lots of it in fact, but it’s mostly large-scale refactoring and cleanup work on what can best be described as “legacy” applications so there’s rarely much, if anything, to write home about. That said, I have a couple of posts just itching to be written when I find some time. Until then, a relatively short entry will have to do :)

A client distributes one particular Java-based web application to hundreds of customers using a zip file. The distribution contains, among other things, the war file and some scripts for database migration, etc. It’s these scripts that cause us some headaches as they need to have execute permission. The problem arises because Ant’s built-in zip task specifically doesn’t handle file permissions. So, naturally, we concocted our own using macrodef:

<macrodef name="zipdir">
  <attribute name="destfile"/>
  <attribute name="sourcedir"/>
  <sequential>
    <echo>Building zip: @{destfile}</echo>
    <exec executable="zip" dir="@{sourcedir}">
      <arg value="-qR"/>
      <arg value="@{destfile}"/>
      <arg value="*"/>
      <arg value="-x"/>
      <arg value="*.svn*"/>
    </exec>
  </sequential>
</macrodef>

This simply calls the operating system’s – read *nix – zip command to compress the specified directory thus preserving all the file permissions that SVN lovingly maintains.

No Really, Perforce Does Suck

Ok, so after my rant yesterday I was feeling a bit better. So many people rushed to the defence of Perforce and on the authority of people I know, respect and work for – not mutually exclusive roles – I thought I’d get stuck into it and read the manuals, read news groups and even rushed out to buy a copy of Practical Perforce.

The documentation is plentiful and very informative and the support groups are very helpful. As for the book, well, the book is most excellent, a very easy read indeed and full of tonnes of really great tips – recipes, idioms, patterns, hacks, call them what you will – which just about sums up my experience thus far: Lots and lots of rather involved processes to do what I consider to be normal everyday activities. (At this point I feel compelled to direct you to an excellent article on why patterns are indicative of unsophisticated systems.)

To give you a 100% practical example, just today I committed 1600 files which I had to back out almost immediately because I realised I had broken something. Now, ignoring the whys and hows of how I managed to get myself into such a pickle, the fact is I needed to roll back a commit. Here’s what I did:

> svn merge -c -27289 svn+ssh://me@therepositoryurl
> svn commit

Tricky stuff that!

So then on my way home I picked up the book mentioned earlier and went straight to the index to find “Backing out a recent change”. Whoot! Just what I wanted to know. So here’s the deal:

> p4 files @=27289    # This lists all the files that have changed
> p4 sync @27288
> p4 add ...          # For each deleted file
> p4 edit ...         # For each changed file
> p4 sync
> p4 delete ...       # For each added file
> p4 resolve -ay
> p4 submit

Yes! Pretty impressive! And, straight from the book, reprinted without any permission whatsoever (emphasis added by yours truly):

When a change involves a lot of files, you can filter the output of the files command to produce a list of files to open. Unfortunately, files can’t be piped directly to other p4 commands because its format isn’t acceptable to them. This can be easily fixed by using a filter; namely sed.

Wow. Cool! Just what I wanted to have to do. Ok, so let’s try that:

> p4 sync @27288
> p4 files @=27289 | sed -n -e "s/#.* - delete .*//p" | p4 -x- add
> p4 files @=27289 | sed -n -e "s/#.* - edit .*//p" | p4 -x- edit
> p4 sync
> p4 files @=27289 | sed -n -e "s/#.* - add .*//p" | p4 -x- delete
> p4 resolve -ay
> p4 submit

Awesome! That’s sooooo much better. Sheesh, I might even be able to script it, fan-bloody-tastic. Thankfully, Perforce is touted as being lightning fast because unless I’m very much mistaken, that’s seven, count ’em, seven calls to the server!

So, what have we learned so far? We’ve learned that precisely the scenario I’ve been told Perforce is great at handling, it really, really, really, ok once more, really, sucks!

Oh, but there’s more. I forgot to mention that I was also working offline before I committed the original sin. When I eventually connected this is what I did:

> svn commit

Ok, so technically I did:

> svn up
> svn commit

So, what would have been the equivalent if I had been using Perforce you might ask?

> p4 sync
> p4 diff -se | p4 -x- edit
> p4 diff -sd | p4 -x- delete
> p4 submit

(As a side note, adding new files in both systems is about the same amount of work. That said, at least with subversion a simple svn status will show me which files are not yet under version control. For the life of me I can’t seem to find an easy way to do this with Perforce.)

Not too bad but technically, three times as many commands. And yes, again, I could script it but why should I need to? This is something I, as a developer, do every day. Am I mistaken for thinking that developers are by far the largest users of a tool such as this? Perhaps.

It’s no wonder Google want people to know how to use Perforce; it pretty much proves the candidate has a brain large enough to even feel like working out how to use it.

Deploying to Multiple Rails Environments

On one Rails project, we have two deployment environments: production; and UAT. Using the default Capistrano configuration makes deploying to these two environments rather difficult, so I thought I’d share our deploy.rb with a bit of explanation along the way. Ok, here goes:

For a start, we deploy to a directory that includes the environment as part of the path:

set :deploy_to, lambda { "/home/#{user}/www/#{rails_env}" }

For subversion, we checkout the code as the user who is running the deployment making sure not to cache authentication details on the server:

set :svn_user, ENV['USER']
set :svn_password, lambda { Capistrano::CLI.password_prompt('SVN Password: ') }
set :repository, lambda { "--username #{svn_user} --password #{svn_password} --no-auth-cache svnurl/trunk/#{application}" }

In both cases, we run a mongrel cluster. Because the mongrel configuration files have a lot in common and because they largely duplicate information contained within the deployment script, we generate an appropriate configuration on deployment. More on that in a bit, but for now, the common bits look like:

set :mongrel_address, "127.0.0.1"
set :mongrel_environment, lambda { rails_env }
set :mongrel_conf, lambda { "#{current_path}/config/mongrel_cluster.yml" }

Now, for the environment specific portions. For each environment we have a task that simply sets variables appropriately – I toyed with using an environment variable such as RAILS_ENV rather than the pseudo-tasks but it was more typing and I’m allergic to typing :).

For production, we want 3 mongrel instances in the cluster, listening on ports 8000-8002:

desc "Production specific setup"
task :production do
  set :rails_env, :production
  set :mongrel_servers, 3
  set :mongrel_port, 8000
end

For UAT, we want 2 mongrel instances in the cluster, listening on ports 8010-8011:

desc "UAT specific setup"
task :uat do
  set :rails_env, :uat
  set :mongrel_servers, 2
  set :mongrel_port, 8010
end

And finally, a custom deployment script based almost entirely on the built-in deploy_with_migrations with the major difference being the configuration of the mongrel cluster just prior to restart:

desc "Generic deployment"
task :deploy do
  update_code
  begin
    old_migrate_target = migrate_target
    set :migrate_target, :latest
    migrate
  ensure
    set :migrate_target, old_migrate_target
  end
  symlink
  configure_mongrel_cluster
  restart
end

That’s it really. Now whenever we need to deploy to a particular environment, say for example UAT, we do something like:

cap uat deploy

UPDATE: By request, here is our database.yml file:

common: &common
  adapter: postgresql
  username: <%= ENV['USER'] %>

development:
  database: foo_development
  <<: *common

test:
  database: foo_test
  <<: *common

uat:
  database: foo_uat
  <<: *common

production:
  database: foo_production
  <<: *common

As you can probably tell, we’re lucky enough that the database user is always the same as the user under which the application runs, and that the database itself is named according to the environment. That makes it very easy to wrap up most of the common parts – thanks go to Jon Tirsen for that YAML tip.

This could also easily be generated. I guess it just hasn’t needed any attention since it was created so YAGNI overrode DRY ;-)

Perforce: Just A Faster CVS?

So, it’s 7am-ish and I’ve had 6 or so hours of sleep to ruminate on this but yup, from a developer’s perspective, I still think Perforce sucks.

Can anyone tell me why it seems like a good idea to:

  • Require an ssh tunnel to have encrypted communication;
  • Keep a secondary workspace to enable offline revert;
  • Have a command-line tool that uses environment variables – or command-line arguments – to specify connection details;
  • Display a diff of which files changed as a tree – I just want to see the individual files not my entire project;
  • The list goes on…

I like to work offline, a lot, on planes, trains and in taxi-cabs; I like to be able to see immediately what’s changed; and I like to be able to revert everything (or only some things) several times while I’m prototyping.

With subversion I get a lot out of the box, and while there will always be nice-to-have features such as “add all unknown files”, it does pretty much everything I need.

As I moved from C to C++ to Java and then to Ruby, I felt empowered each step of the way. I had a similar experience moving from CVS to SVN. Perforce seems like a step backwards.

Google may use and recommend Perforce but when the answer to “why can’t I do …” is “you can, just write a script to …” I’m not sure I’m convinced.

ActiveRecord Identity Map for Rails Transactions

I happened to be reading a blog entry last night that mentioned some “shortcomings” in Rails’ ActiveRecord and its handling of record loading. Specifically, AR will load the same record twice, into two different instances, within the same transaction. I.e. the following test fails:

Customer.transaction do
  c = Customer.find_by_name('RedHill Consulting, Pty. Ltd.')
  assert_same c, Customer.find(c.id)
end

To be honest, I’ve not yet been burned by this but it may just catch out some, so I quickly whipped up a very basic plugin to see how difficult it would be to solve:

module RedHillConsulting
  module IdentityMap
    class Cache
      def initialize
        @objects = {}
      end

      def put(object)
        objects = @objects[object.class] ||= {}
        objects[object.id] ||= object
      end
    end

    module Base
      def self.included(base)
        base.extend(ClassMethods)
        base.class_eval do
          alias_method_chain :create, :identity_map
        end
      end

      module ClassMethods
        def self.extended(base)
          class << base
            [:instantiate, :increment_open_transactions, :decrement_open_transactions].each do |method|
              alias_method_chain method, :identity_map
            end
          end
        end

        def instantiate_with_identity_map(record)
          enlist_in_transaction(instantiate_without_identity_map(record))
        end

        def enlist_in_transaction(object)
          identity_map = Thread.current['identity_map']
          return object unless identity_map
          identity_map.put(object)
        end

        private

        def increment_open_transactions_with_identity_map
          increment_open_transactions_without_identity_map
          Thread.current['identity_map'] ||= Cache.new
        end

        def decrement_open_transactions_with_identity_map
          Thread.current['identity_map'] = nil if decrement_open_transactions_without_identity_map < 1
        end
      end

      def create_with_identity_map
        create_without_identity_map
        self.class.enlist_in_transaction(self)
        id
      end
    end
  end
end

The code essentially interferes with create and instantiate (called from find) and ensures that, within a transaction, the same record will always be returned for the same id (IdentityMap).
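If you strip away all the alias_method_chain plumbing, the identity-map idea boils down to something like this stand-alone sketch (Record and IdentityCache are invented names, not part of the plugin):

```ruby
# The first object stored for a given (class, id) pair wins; later
# puts for the same key return that original instance unchanged.
class IdentityCache
  def initialize
    @objects = {}
  end

  def put(object)
    objects = @objects[object.class] ||= {}
    objects[object.id] ||= object
  end
end

Record = Struct.new(:id, :name)
cache = IdentityCache.new
a = cache.put(Record.new(1, 'first'))
b = cache.put(Record.new(1, 'second'))
a.equal?(b)  # => true – the second put was a no-op
```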

As I mentioned, unlike all my other plugins, I’ve never used nor needed to use this one – and I’m not sure I will unless it proves to be a problem for me – but it’s yet another example of how easy it is to extend Rails to do pretty much whatever you might imagine.

Automatically Validate Uniqueness of Columns with Scope

The first cut at Schema Validations only applied validates_uniqueness_of for single-column unique indexes. That took care of 80% of the cases in my code base, but the cases where a scope was needed still lingered. Not any more.

The plugin now automatically generates validates_uniqueness_of with scope for multi-column unique indexes as well.

As always, there are some assumed conventions – which I believe will handle close to 99% of cases – around how to decide which column to validate versus which columns to consider part of the scope. One column is chosen to be validated, with all remaining columns considered part of the scope, following what I believe to be a typical composite unique-index column ordering.

So, for example, given either of the following two statements in your schema migration:

add_index :states, [:country_id, :name], :unique => true
add_index :states, [:name, :country_id], :unique => true

The plugin will generate:

validates_uniqueness_of :name, :scope => [:country_id]
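To make the convention concrete, here’s one plausible version of the split as a sketch – treating *_id columns as the scope is my illustrative assumption, not necessarily the plugin’s actual rule:

```ruby
# Order-independent: *_id columns become the scope, the remaining
# column is the one to validate (an assumption, for illustration).
def split_unique_index(columns)
  scope, rest = columns.partition { |c| c.to_s =~ /_id$/ }
  { :validate => rest.first, :scope => scope }
end

split_unique_index([:country_id, :name])  # => {:validate=>:name, :scope=>[:country_id]}
split_unique_index([:name, :country_id])  # same result, regardless of column order
```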

My next stop is to have a look at simple column constraints such as IN('male', 'female') and turn them into validates_inclusion_of :gender, :in => ['male', 'female'].
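If I do get to it, the parsing could start out as simply as this (pure speculation on my part – real check-constraint syntax will need more care than one regex):

```ruby
# Pull the quoted values out of an IN('a', 'b') constraint so they
# can feed validates_inclusion_of. A rough sketch, not robust SQL parsing.
def inclusion_values(constraint)
  constraint.scan(/'([^']*)'/).flatten
end

inclusion_values("IN('male', 'female')")  # => ["male", "female"]
```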

Perhaps tomorrow :)

validates_presence_of association Gotcha

The more I use Rails (and the more plugins I create) the more quirks I find.

Imagine I have a one-to-many relationship between Country and State:

State.belongs_to :country
Country.has_many :states

We then issue the following sequence of statements (I’ve interleaved the output of tailing the development log):

c = Country.find_by_name('Australia')
**  Country Load (0.006506)   SELECT * FROM countries WHERE (countries."name" = 'Australia' ) LIMIT 1**
s = c.states.build(:name => 'Victoria', :abbreviation => 'VIC')
s.country
**  Country Load (0.009738)   SELECT * FROM countries WHERE (countries.id = 1)**

Notice the SELECT to find the country? Now why would that be necessary? I just used .states.build on the country. I would have thought that would set the association but that doesn’t appear to be the case.

Looking at the code, my suspicions were confirmed: only the parent’s id is set. That seems decidedly odd given that we know for a fact the parent exists – we just used it to create the child.

So anyway, I’m pretty sure this is considered a “feature” but, to be honest, I can’t see why it’s the desired behaviour, beyond the fact that doing otherwise would be more work – and why would you need this if you already have the parent? Yada, yada, yada.

Well, for a start, I’d like this behaviour because I’d like to use validates_presence_of on foreign-keys and have it work for newly constructed graphs. Usually this barfs no matter what but I concocted a work-around last night and committed it to my Foreign Key Associations plugin which, if done manually, would look something like this:

class State < ActiveRecord::Base
  validates_presence_of :country_id, :if => lambda { |record| record.country.nil? }
  ...
end

Essentially this says to validate the presence of country_id but only if there isn’t an associated country. This means that for cases where the parent record is also new, the validation checks for the presence of the associated object rather than the foreign-key column. If you had simply used validates_presence_of :country_id then save would fail because country_id was still nil.
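The conditional logic is easier to see outside ActiveRecord. Here’s a mock of it in plain Ruby (MockState is a stand-in for illustration, not the real model):

```ruby
# Require the foreign key only when no in-memory parent is attached.
MockState = Struct.new(:country_id, :country) do
  def valid?
    return true unless country.nil?  # parent object present: nothing to check
    !country_id.nil?                 # otherwise the FK column must be set
  end
end

MockState.new(nil, Object.new).valid?  # => true  (new graph, parent in memory)
MockState.new(1, nil).valid?           # => true  (FK already set)
MockState.new(nil, nil).valid?         # => false (neither present)
```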

OK, that’s all very well and good but it still doesn’t help because, as shown above, the association isn’t set anyway. So, I’m now back to manually setting the association; at least the validation works, hehe.

I’m sure someone far smarter than I will point out why the behaviour as it stands is obviously the most appropriate and that no one in their right mind would want to do anything else, of course ;-)

Procrastinating in Ruby is Delicious

As I was bookmarking something on del.icio.us today, I noticed the dates on which I had bookmarked the last couple of times and wondered if there was any correlation between frequency and day of the week. So, I downloaded a summary using https://api.del.icio.us/v1/posts/all? and whipped up a little ruby script to compile some statistics:
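The script itself was a throwaway, but it was essentially this shape (the sample timestamps below are made up for illustration; the real ones come out of the posts/all XML):

```ruby
require 'time'

# Tally bookmarks by day-of-week and by hour from ISO timestamps.
times = [
  '2006-08-09T12:10:00Z',  # a Wednesday
  '2006-08-09T13:30:00Z',
  '2006-08-08T12:05:00Z',  # a Tuesday
]

by_day  = Hash.new(0)
by_hour = Hash.new(0)
times.each do |stamp|
  t = Time.parse(stamp)
  by_day[t.strftime('%A')] += 1
  by_hour[t.hour] += 1
end

by_day.sort_by  { |_, n| -n }.each { |day, n|  puts "#{day} = #{n}" }
by_hour.sort_by { |_, n| -n }.each { |hour, n| puts "#{hour} = #{n}" }
```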

Wednesday = 41
Tuesday = 39
Thursday = 37
Friday = 32
Monday = 26
Saturday = 24
Sunday = 12

Looks like Wednesday is the biggest day for bookmarking – also known as procrastinating – and what do you know? Today is…Wednesday!

So then I thought I’d see if there was anything interesting in the time of day:

12 = 26
13 = 20
4 = 17
22 = 15
0 = 14
23 = 12
5 = 12
2 = 12
20 = 10
11 = 10
1 = 10
7 = 10
3 = 9
6 = 7
21 = 7
9 = 6
15 = 4
14 = 3
8 = 3
10 = 2
19 = 2

Phew! Most of my bookmarking is done around lunchtime although an awful lot were done at 4am!

RedHill on Rails Plugin Refactoring

I mentioned in my previous entry that I’d done quite a bit of refactoring of the plugins. Among the various changes that will affect developers using them are:

  • Schema Defining (schema_defining) has been deleted;
  • Foreign Key Support (foreign_key_support) has been deleted; and
  • RedHill on Rails Core (redhillonrails_core) has been added to replace the previous two as well as subsuming some of the more generic functionality from other plugins.

So, why all these changes?

The main reason is manageability. We’re actually eating our own dog food and using these plugins in production applications and we’re adding functionality at quite a surprising rate. Each time we add something, we first put it into the plugin that needs it directly. That works great for a while but then, someday, we decide we need that functionality in two or more plugins. What to do?

Our original idea had been to create new plugins and this worked for us up to a point. Unfortunately, of late, the number of extra plugins – with very specific functionality mind you – was just getting out of hand and needed to be simplified.

In the end, we decided on a two-tiered approach to plugins: those which add functionality but no (or at least minimal) behaviour; and those that add behavioural magic.

As an example, the new core plugin adds functionality to manage foreign keys, lookup indexes, add unique column meta-data, etc. but doesn’t do anything particularly magic that will affect the running of your application.

On the other hand, the foreign key migrations, foreign key associations, schema validations, etc. plugins – which all rely on core – add funky rails magic to automatically generate foreign keys, associations, model validation, etc.

Another change we made was in the way documentation is generated. We used to manually generate a nice HTML file containing all the plugins. This was becoming rather tedious and meant that the documentation was often quite out of date. We’ve now remedied this with a nice ruby script using Erb and RDoc to generate the online documentation directly from the README files.

I also mentioned previously that we’ve added “lots” of tests. I say lots because we’re still playing catchup so relatively, there are lots but we still need lots more. As a group of developers that are ardent TDD evangelists, the conspicuous lack of tests was somewhat embarrassing to say the least. Unfortunately, testing plugins (especially those related to schema and database) is pretty difficult so we opted to bypass the whole problem and just create a standard rails app with standard rails tests and all is well again.

And lastly, besides all the extra features we’ve added (see the CHANGELOGs for the specific plugins), you’ll notice that the subversion URL has changed slightly – it used to contain an extra slash (/) which was not only unnecessary but caused SVN to regularly crap out.

My apologies to all those who have been trying to keep up, but we hope that’s the last of it. From now on, we’ll continue to beef up core as we need to and then add plugins only when we need new behaviour.

Of course we’ll always reserve the right to change our minds ;-)

Not My SQL

Everyone else’s favourite database just gave me the shits, again!

As part of my Schema Validations plugin for rails, I needed to see if a column has a default value. If it does, then there’s no point in adding a validates_presence_of as the database will add one in. Ok, sounds sensible. Works just fine under PostgreSQL but my tests were failing when run against MySQL. Specifically, there was no validation being added for integer columns marked as NOT NULL. Huh?!

After a little investigation, I noticed that the meta-data that rails was collecting for mandatory integer columns included a default of 0. So I looked in the test database and sure enough the columns all had a default of 0. But how? Why? I didn’t put a default in my migrations…

A little more investigation and I noticed that the schema dump that is generated out of the development database and then run against the test database did indeed include the very same defaults. I then looked in the development database and to my surprise found no such defaults there. Aha! Mystery solved I presumed. Rails must have a bug for MySQL.

So I go and look at the code but alas, the code is the same for both PostgreSQL and MySQL. Something else must be happening. Time to get down and dirty on the command-line.

mysql> create table foo (col1 int, col2 int not null, col3 int default null) engine=InnoDB;
mysql> show columns from foo;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| col1  | int(11) | YES  |     | NULL    |       |
| col2  | int(11) | NO   |     |         |       |
| col3  | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+

If I have a nullable column then the default default (if that makes sense) is NULL. If I mark a column as mandatory, the default default is…an empty string!? I wonder what would happen if I tried inserting a row and letting MySQL default all the values:

mysql> insert into foo () values ();
mysql> select * from foo;
+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| NULL |    0 | NULL |
+------+------+------+

You have to be shitting me! I attempted to insert a row into a table without specifying a value for a column that is marked as NOT NULL and it inserts 0!? Hold on a second…what if I force the default to be NULL so that it behaves just like every other sensible database on the planet:

mysql> create table bar (col1 int not null default null) engine=InnoDB;
ERROR 1067 (42000): Invalid default value for 'col1'

Egads! OK let me try that in PostgreSQL:

psql=# create table bar (col1 int not null default null);
CREATE TABLE

Thank-you!

Sure, I could make the assumption that 0 was never going to be a valid identifier for a record in another table but why should I have to? As far as I can tell, MySQL is just making shit up! No wonder my brother says it reminds him of using Microsoft Access.

So, now I’m left with the task of working out how to patch rails to get around this. I think I’ll just have to presume that empty strings are equivalent to NULL for mandatory columns. Sheesh.
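The patch I have in mind amounts to something like this predicate (a sketch with a stand-in column object – the real fix has to live against ActiveRecord’s MySQL column meta-data):

```ruby
# Stand-in for an ActiveRecord column description.
Col = Struct.new(:name, :null, :default)

# A NOT NULL column still deserves validates_presence_of when its
# "default" is MySQL's phantom empty string rather than a real value.
def needs_presence_validation?(column)
  return false if column.null
  column.default.nil? || column.default == ''
end

needs_presence_validation?(Col.new('col2', false, ''))  # => true
needs_presence_validation?(Col.new('col1', true, nil))  # => false
```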

Foreign Key Associations Plugin

I’ve done quite a bit of refactoring of my Ruby on Rails plugins lately which, unfortunately, broke some stuff (thanks to all those that let me know) but the upshot is a much cleaner division of responsibility between plugins; and some sorely needed unit tests.

Another of the benefits from all of this was yet another plugin, this time to automatically generate associations based on foreign-keys.

For example, given a foreign-key from a customer_id column in an orders table to an id column in a customers table, the plugin generates:

  • Order.belongs_to :customer; and
  • Customer.has_many :orders.

If there is a uniqueness constraint – eg unique index – on a foreign-key column, then the plugin will generate a has_one instead of a has_many.

For example, given a foreign-key from an order_id column with a uniqueness constraint in an invoices table to an id column in an orders table, the plugin generates:

  • Invoice.belongs_to :order; and
  • Order.has_one :invoice.
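In spirit, the generation rule is about this simple (a toy sketch – the singularisation is deliberately crude, and the FK struct is invented for the example):

```ruby
# Minimal model of a foreign key: which table/column points at what,
# and whether the column carries a uniqueness constraint.
FK = Struct.new(:table, :column, :referenced_table, :unique)

def associations_for(fk)
  child   = fk.table.sub(/s$/, '')   # crude singularise
  parent  = fk.column.sub(/_id$/, '')
  inverse = fk.unique ? "has_one :#{child}" : "has_many :#{fk.table}"
  ["#{child} belongs_to :#{parent}", "#{parent} #{inverse}"]
end

associations_for(FK.new('orders', 'customer_id', 'customers', false))
# => ["order belongs_to :customer", "customer has_many :orders"]
associations_for(FK.new('invoices', 'order_id', 'orders', true))
# => ["invoice belongs_to :order", "order has_one :invoice"]
```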

You can download the latest version directly from svn://rubyforge.org//var/svn/redhillonrails/trunk/vendor/plugins/foreign_key_associations

For all those that have asked for pure HTTP access, I hear you and I’m working on it. (It seems ./script/plugin install doesn’t understand the format of the browse repository pages on RubyForge. DOH!)

Transactional Migrations Plugin

I wrote a while ago on utilising transactional DDL in your ruby on rails migration scripts so I decided to create a plugin.

In a nutshell:

Transactional Migrations is a plugin that ensures your migration scripts – both up and down – run within a transaction. When used in conjunction with a database that supports transactional Data Definition Language (DDL) – such as PostgreSQL – this ensures that if any statement within your migration script fails, the entire script is rolled-back.
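The behaviour is easiest to see with a fake connection (this shows the idea, not the plugin’s code – the plugin hooks ActiveRecord’s migration machinery instead):

```ruby
# With transactional DDL, a failing statement mid-migration leaves
# nothing committed: the whole script is rolled back as one unit.
class FakeConnection
  attr_reader :log

  def initialize
    @log = []
  end

  def transaction
    @log << 'BEGIN'
    yield
    @log << 'COMMIT'
  rescue => e
    @log << 'ROLLBACK'
    raise e
  end

  def execute(sql)
    @log << sql
  end
end

conn = FakeConnection.new
begin
  conn.transaction do
    conn.execute('CREATE TABLE foos (...)')
    raise 'boom'  # the second half of the migration blows up
  end
rescue RuntimeError
end
conn.log  # => ["BEGIN", "CREATE TABLE foos (...)", "ROLLBACK"]
```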

Hoopla

The National Breast Cancer Foundation is Australia’s national fundraising body for breast cancer research and is hosting a charity event at beautiful Rippon Lea Estate in Melbourne on the 8th September, 2006 to raise money.

All the details are available online so get out your $100, put on some party wear and come and support a wonderful charity.

And yes, even geeks are allowed!

Ploojins

I’m not sure anyone else really bothers but today I decided I’d try listening to the Text-to-Speech version of this blog by clicking on the Listen to this article link. Apparently the software they use hasn’t caught up with Plugin as a commonly used variation of the hyphenated Plug-in. I was somewhat amused, therefore, to hear about the Ruby on Rails ploojins :).

A Plea to the Ruby on Rails Core Team

Yet another plea: Please don’t add foreign-key migrations, schema validations or for that matter acts_as_taggable or any significant number of the myriad plugins that are now available. Leave RoR as lean and as mean as possible. By all means change your assumptions and your opinions but don’t allow Rails to become the Micro$oft Word of the Ruby world – bloated and with features that < 1% of the community ever use.

I’m pretty opinionated. DHH is obviously pretty opinionated. That doesn’t mean I necessarily agree with his opinions – I clearly think foreign-keys are important – but that doesn’t prevent me from using RoR. In fact, quite the contrary. Precisely because DHH is so reluctant to add every new feature under the sun into RoR, the current feature set appeals to most of the developers who use it. This is not to say that Rails is in any way fully-featured, but what is there, most people use. So what about all that neat stuff that we all think is great and absolutely necessary but with which DHH and, no doubt, a non-trivial number of other developers in the community disagree?

I think my favourite feature of RoR is the very sophisticated plugin model. With a little thought and imagination, it’s pretty easy to implement just about any extension imaginable. And this is where the power lies in being lean, mean and opinionated. It’s much easier to add features than to take them away – actually it’s pretty easy to take them away too but it can get pretty ugly and besides, who wants to spend their time writing plugins to disable functionality? In fact I like plugins so much, I’ve started thinking about my applications as collections of plugins. Plugins work, there are lots of them, they allow you to add features that no one ever dreamed of and then, with very little effort you can, if you’re a good sort, give something back to the development community.

You probably don’t want to use much (if any) of the stuff that I think is useful and I sure as hell don’t want your manky ideas cluttering up what continues to be my favourite development environment. So, please stop inundating the RoR Trac with every little thing that you believe to be 100% necessary and start building and publishing plugins.

Schema Validations Plugin

After listening to Prag Dave’s Keynote Speech this afternoon, I was motivated to implement some of the things he’d been asking for. Here’s my first cut at it.

As the doco says, the plugin reads some – ok only one at the moment but we’ll see how many others I get done before the beer runs out – database column constraints and tries to apply the closest corresponding rails validation. The first one I implemented reads the NOT NULL constraints against columns and generates a corresponding validates_presence_of.
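The mapping itself is pleasingly mechanical. As a sketch, with a stand-in column object rather than the plugin’s real code (the length rule reflects the string-length support mentioned in the update below):

```ruby
# Stand-in for the column meta-data rails collects from the schema.
SchemaColumn = Struct.new(:name, :null, :default, :limit)

# NOT NULL with no default => validates_presence_of; a length limit
# => validates_length_of (an illustrative subset of the idea).
def validations_for(column)
  v = []
  v << "validates_presence_of :#{column.name}" if !column.null && column.default.nil?
  v << "validates_length_of :#{column.name}, :maximum => #{column.limit}" if column.limit
  v
end

validations_for(SchemaColumn.new('name', false, nil, 40))
# => ["validates_presence_of :name", "validates_length_of :name, :maximum => 40"]
```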

I literally just whipped it up with no tests or what-not and I’ve only played with it against PostgreSQL, so if it has bugs or behaves oddly for whatever reason, please let me know, send me as much info as possible and I’ll make it work. Nothing better than having real people testing it for me ;-)

UPDATE: OK, so far the beer has lasted long enough to implement validation of numbers (including specific support for integers) and lengths of strings.

UPDATE: Now calls validates_presence_of anytime you declare a belongs_to association for a NOT NULL foreign-key column.

UPDATE: Single-column unique indexes are now converted to validates_uniqueness_of.

That Weird Devil Number

This morning, my brother was sitting at his laptop attempting to get FreeBSD to mount a ReiserFS partition without much success – some permissions problem that means he can mount it as root but not from fstab – when his girlfriend sat down beside him, peered over, and asked “What’s that weird devil number?” Naturally we both had a little WTF moment. “See”, she continued, “it’s even called devil!”

On closer inspection, it seems the file permissions for /dev are, understandably, 666. Riiiiiigghhht!