4 Oct 2009

Distributed is the new Object Oriented

In the 80s, Object Oriented development promised a fundamental reshaping of the software development landscape, and it had distinct religious overtones. (You can tell it was religious because Object Oriented is capitalized.) It was going to be better in every way than procedural programming - everything would be reused, bugs would be eliminated, and mass love would result. Like Theravada Buddhism, once you accepted the Four Noble Truths of Encapsulation, Inheritance, Polymorphism, and Modularity, everything else followed. This fever gripped the development world for twenty years, yet thousands of developers never made the mental shift necessary to embrace it.

Leaders often made the fateful decision to rewrite existing procedural apps in object oriented technologies. Did the resulting programs run better? Um, no. Did they conquer the marketplace? God no. Did they run faster? Hell no. Windows Vista is a prime example; I'm not going to rehash any personal case histories because the pain is still too great. I'll let you know when I'm strong enough to cry.

Distributed development is as different from Object Oriented as Object Oriented is from procedural development. Most of the existing cadre of developers will never get this stuff, just as most procedural developers never figured out OO. Hadoop / MapReduce and Erlang require a rethinking of how problems should be solved, and a rethinking of what problems can be solved. Instead of figuring out how to best rewrite yesterday's apps with today's technologies, it's much better to treat them as solved problems and move on.

2 comments:

Mister G.A.G. said...

Off topic: You're cute, sir :)

Scott said...

OO failed in the embedded world. The reason was that to embrace OO, you had to embrace an abstraction over the machine. Everything was neat and tidy with objects, constructors, destructors, etc. The problem was that most developers either forgot, or more probably, never learned what was going on under the hood.

They didn't realize what was happening to the stack as you instantiated an object that had multiple layers of inheritance. The stack went nuts! All these layers of constructors creating new dynamic memory: push everything onto the stack... pop it all back off again. Over and over, ad infinitum.

All of that Big-O notation stuff from college, which was really hard to understand, was conveniently ignored. And so, when they tried rewriting embedded code, it failed gloriously.

At BNR, back in the '90s, they rewrote much of the digital switch call processing code in C++ as a test exercise. One of the key metrics in digital switching at the time was "dial tone delay"--how long from picking up the telephone handset before you hear the dial tone. The rigorous specs said it had to be 100 ms or less.

With well written procedural code, that was no problem. But when they tried an elegant OO solution, the dial tone delay was OVER A MINUTE! Ha! I can think of no finer example!