Friday, March 28, 2008

DRY versus WET

You've got to have either a Snappy Acronym or a Fancy Vague Phrase (FVP) in order to be credible. So when I decided to write about what I think is wrong with DRY, I came up with 'DRY versus WET' - but I didn't know what WET meant.

DRY - Don't Repeat Yourself - is one of the Mantras of the Current Age of Programming Wisdom.

In case you don't know what a mantra is, it's a sound or phrase which you repeat over and over again while meditating. Meditating is something you do with your mind which brings peace and quiet . . . by avoiding all thought.

The problem I have with DRY is that you have to Think in order to write Good Code. In other words, you need Well Examined Thought - that's where WET comes from. [So now I've got an Acronym that Actually Means Something].

For Example:

Take a look at the Rails source code to see what over-DRY-ness does. After a while you'll see that (almost) every piece of code which could be repeated is turned into a method call. Even stuff which isn't more than a fraction of a line! This is Really DRY - so It Must Be Good. Right? Wrong!

This is Spaghetti [see The New Spaghetti for more ranting . . . I mean 'detail'].
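The kind of thing I mean looks roughly like this - a made-up sketch, not actual Rails source, with hypothetical names:

```ruby
# Over-DRY style: every scrap, even a fraction of a line, becomes its
# own method. Reading one behavior means chasing four method calls.
class OverDry
  def greeting(name)
    "#{salutation}#{separator}#{name}#{terminator}"
  end

  def salutation; "Hello"; end
  def separator;  ", ";    end
  def terminator; "!";     end
end

# Plain style: the whole behavior reads in one place.
class Plain
  def greeting(name)
    "Hello, #{name}!"
  end
end

puts OverDry.new.greeting("World")  # Hello, World!
puts Plain.new.greeting("World")    # Hello, World!
```

Both print the same thing - but one of them you can read.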

Historical Note: Spaghetti code originally referred to the overuse of GOTO (or branch) statements in FORTRAN or Assembler programs, but any program which has a high ratio of Control Transfer statements to Useful Work is really Spaghetti - because it reads like all the code was written on the sides of noodles and then all mixed up.

So . . . Don't DRY up. Get WET!

Think it's worth doing a book about?

Thursday, March 13, 2008

The New Spaghetti

Back before Structured Programming - which everyone now thinks is as natural as breathing - we had a thing called 'Spaghetti Code'.

You wrote Spaghetti Style by using lots of conditional and unconditional GO TO statements to re-use every bit of code you'd written. It was the primitive precursor of DRY [Don't Repeat Yourself].

In those days it kind of made sense - for a couple of reasons:

1. we didn't know any better

2. memory was tiny - we measured big machines in KiloBytes and wrote programs using 'overlays' [in case you don't know, an overlay is a chunk of executable code which 'overlays' another chunk in memory. We used to do that because (a) we didn't have enough memory to do the job and (b) there wasn't any automatic memory management]

The result of Spaghetti Style is that nobody could ever figure out how the programs worked and modifying them was next to impossible.

What brought this up?

I've been trying to 'learn Rails' - which is a pretty large Ruby application. Rails has close to 10,000 methods defined, about 1,500 classes, and God only knows what else. The Vocabulary is huge - and so it's a daunting task to say the least.

On top of the huge vocabulary, there's the DRY-ness.

That seems to mean that every chunk of code which is used two or more times is ripped out and turned into a method. The result is that in order to do anything, somewhere near one and a half gazillion methods are called. In order to figure out how almost anything works you have to work through a call stack which makes the old FORTRAN Spaghetti look like a 2nd grader's maze.
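Here's the call-stack effect in miniature - a hypothetical sketch (these method names are made up, not Rails'). Each method does almost nothing itself, so answering 'what does fetch_title actually do?' means walking four stack frames:

```ruby
# Five methods, one behavior. Nothing is Repeated - and nothing
# can be understood without reading all five.
def fetch_title(record);  normalize(title_of(record));       end
def title_of(record);     record[:title];                    end
def normalize(str);       strip_edges(collapse_spaces(str)); end
def collapse_spaces(str); str.gsub(/\s+/, " ");              end
def strip_edges(str);     str.strip;                         end

puts fetch_title({ title: "  The   New  Spaghetti " })
# => "The New Spaghetti"
```

Now multiply that by 10,000 methods and you have an idea of what reading the Rails source feels like.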

Why is this happening?

I'm guessing, but I think there are three factors driving this result:

1. DRY - This is a misunderstanding. DRY is a hip way of saying 'we want to maintain a single point of control so that we don't have to repeat bug fixes in the code'. DRY shouldn't be a mantra, because it's not a Principle of Good Programming. It's a technique for implementing an idea which is good programming.

Prior to formulating 'Single Point of Control' as DRY, programmers used some judgement in the size of the methods/functions they wrote. Typical size was 5 to 25 lines or so - enough to do useful work and small enough to be easily understood. Rubyists seem to be addicted to one-line methods which call other methods which call other methods which ... - but they don't Repeat!
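What 'single point of control' looks like when applied with judgement - a sketch with hypothetical names, not anybody's real code. The sales-tax rule genuinely appears in two places, so it earns its own method: one place to fix a bug or change the rate. Nothing smaller gets extracted:

```ruby
# The rule that's actually shared gets a single point of control.
TAX_RATE = 0.08

def with_tax(amount)
  (amount * (1 + TAX_RATE)).round(2)
end

# The callers stay readable, ordinary methods - no one-line chains.
def line_item_total(price, quantity)
  with_tax(price * quantity)
end

def shipping_total(base_fee)
  with_tax(base_fee)
end

puts line_item_total(10.0, 3)  # 32.4
puts shipping_total(5.0)       # 5.4
```

Change TAX_RATE once and every caller is fixed - that's the idea DRY was supposed to serve.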

2. Agile Development - This seems to go hand in hand with Test Driven Development. Both sound like good ideas, and they seem to work well for 'throw away' applications - which is what most web sites are. After crawling through the Rails Association Code, I've come to the conclusion that something critical has been lost in the Agile-ness of the Development: There's no Design Phase.

Test Driven Development has proven you don't have to have an overall plan to build code which works. You can build any twisted mess and if you continuously test, it will pass the test.

But it won't make sense, won't be cohesive, will probably be inefficient, and it will be hell to modify or learn.

Everything has to be designed if it is going to work well and be maintainable. That's true for Buildings, Cars, Motorcycles, Bridges, and, believe it or not, Software.

3. Ruby itself.

I love Ruby - almost - because it's a beautiful language and it makes it easy to do otherwise hard things. The fact that Ruby is a scripting language means we get instantaneous feedback - and I love that too.

But both of those things - along with Ruby's openness, which allows all kinds of self-modifying code [which seems to be confused with meta-programming] - make it easy to get the illusion of 'doing something' when the programmer is actually just churning through different implementations with little real progress toward the goals of the application being developed.

Does that make sense? If not, read it again.

Activity != Progress

It takes discipline and experience to handle as powerful a tool as Ruby is without creating a mess.

Don't believe me?

Just take a look at the source for Mongrel and compare [or rather Contrast it] with a bunch of the Rails code. Zed Shaw writes awesome, easily understood code. Somebody in the Rails Core doesn't.