OK, full disclosure: I am not an “Architect”. At least, I am not an architect who actually knows what architecture is. I know what real architects do, and I am not one of those. The other kind (“software architect” or “IT architect”), I don’t have a clue what they actually do, or are supposed to do. So maybe I am one after all; no way to be sure. Which is fascinating, and has got me thinking lately. I think I know what they’re supposed to be doing. But as it’s not an education, a job description, or even an assignment that differentiates them from the other roles of an IT project or organization, I really can’t claim to know what defines them as a discernible entity, class, or species if you will. In other words, I would be at a loss if asked to define one, or even to guess whether a given person was one. I would be forced into making a judgement call, a wild guess. More on that to follow.
Recently, I was invited in on a discussion of an “architectural” issue at my current place of employment. They (“the architects”) laid out a plan for a new architectural component they would like to see implemented. Without going into much detail, the “architecture” proposed could actually be construed as a valid idea, when proposed on the only platform it will ever work on: PowerPoint. However, it went against almost all modern (and ancient, for that matter), proven, and accepted principles of building sturdy, performant software systems. It involved building a central lump of software for “data management” services. It would create tight dependencies to and from every other in-house system (except legacy or 3rd-party), it would represent an SPF (single point of … ah, you get it), it would be detrimental to performance in all those systems, and worst of all (dammit, I’m not even sure of that), it would effectively eradicate database integrity in all of the systems involved. Yes, that’s right. I am not making this up.
So, my initial thought was: “Wow, this governance thingy must represent a really, really, really high value to their business, since they are willing to compromise on such a grand, gigantic, majestic, no-expenses-spared scale on all other technical issues”.
But no. It doesn’t. After a somewhat heated exchange of opinions, many of which started with “Oh yeah? Well your mother…”, I got the distinct impression they just “think it’s a good idea”.
And this is where I started off this impressively informative rant: there must be a huge difference in perspective here. And I am not just talking about the mundane “different jobs, different priorities” conflicts.
As an example, in the case described, one of the goals is to get rid of the use of NULL values inside each application domain. Doesn’t sound all that controversial? May even sound like a valid idea? OK, bear with me for a second: NO, it is NOT a good idea. Why? Because NULL is a value that by definition means “I don’t know this”. That is actually what it means. I know there are some would-be geniuses who will try to persuade you that NULL is evil. That is because they haven’t learned to appreciate the pure and simple concept of NULL. Granted, if you have unintended NULL pointers in your code, it’s annoying, and may seem like an “unnecessary evil”. But they get in there for the simple reason that you have done it wrong. You have put them there, and later failed to take that into account. But enough about that. Really. Seriously. Enough. Back to our example in question:
Developer: “OK, what do you suggest we put in there instead?”
Architect: “A predefined value that means ‘null’.”
Developer: “OK, uhm, ok, erh, Why, really?”
Architect: “Because then we know it’s actually unknown, or value not supplied, or – something.” (OK, I added the bit about “something” myself, couldn’t resist it).
Developer: “So you want to get rid of this value that’s been given by definition, and replace it with something that needs a flimsy convention to make sense?”
Architect: “Yes! ” (happy, and almost seems impressed that developer got it so quickly).
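To make the developer’s objection concrete, here is a minimal sketch (in Python, with a made-up sentinel and made-up data) of what the “convened null” costs you. A magic value looks exactly like real data to every operation that touches it, whereas a genuine null refuses to be mistaken for data:

```python
# Hypothetical illustration of a "convened null": the names and the
# sentinel value (-1) are invented for this example.

UNKNOWN_AGE = -1  # the convention: -1 means "age not supplied"

ages = [34, UNKNOWN_AGE, 52]

# The sentinel happily participates in arithmetic -- no error raised,
# just a silently wrong answer dragged down by the -1.
bogus_average = sum(ages) / len(ages)

# A real null (None in Python) cannot be averaged by accident;
# you are forced to decide what "unknown" means here.
ages_with_null = [34, None, 52]
known = [a for a in ages_with_null if a is not None]
honest_average = sum(known) / len(known)

print(bogus_average)   # pulled toward zero by the fake value
print(honest_average)  # 43.0, computed only over what we actually know
```

The defined null fails loudly; the convened one fails silently, which is rather the point of the whole argument.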
And this, dear reader (nudge! Hello, still there, right?), is at the heart of my brand new hypothesis: decisions are being made by people who have conflicting world views, and completely different systems of belief. Getting technical about the ‘null’ again for the last time (promise): a definition is, in my mind, a much less ambiguous ‘value’ than one established by convention. If pressed on it, I will acknowledge that it is doable to replace the defined null with a “convened null”. However, it is an expensive maneuver, and there are no good reasons to do it. Even worse, it will, by its nature, introduce ambiguity into the system’s data models. The same goes for the discussion (from the same meeting) on database integrity. Yes, it is conceivable that you can make systems work without enforced referential integrity within their own domains, but there are no reasons to do so, and it will inevitably lead to yet more conventions and countermeasures, all of which will be prone to new problems and errors, and expensive to implement to boot.
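For the integrity point, a small sketch of what enforcement buys you, using Python’s built-in sqlite3 (the tables and columns here are invented for illustration). With a declared foreign key, an orphan row is rejected by the database itself; no team convention, code review, or cleanup batch job required:

```python
# Minimal demo of enforced referential integrity with sqlite3.
# Schema is hypothetical; SQLite only enforces foreign keys when asked.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")

db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id)
)""")

db.execute("INSERT INTO customer (id) VALUES (1)")
db.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # fine

orphan_rejected = False
try:
    # An order pointing at a customer that does not exist:
    # the database refuses it outright.
    db.execute("INSERT INTO orders (id, customer_id) VALUES (11, 99)")
except sqlite3.IntegrityError:
    orphan_rejected = True

print("orphan rejected:", orphan_rejected)
```

Drop the constraint and that second insert succeeds quietly, and now every consumer of the data needs its own convention for what a dangling `customer_id` means. Which is the meeting all over again.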
It is glaringly obvious, isn’t it? Easy to grasp?
Yes, but apparently not. So, therefore, I came up with this theory that explains it all.
To be able to understand this decision process meltdown, we need to stipulate some objective parameters. Actually, one parameter will do for this (I never said it was rocket science, did I?): I call it “The accountability of supplied facts.” It is a one-dimensional table:
“I decide to do this, because I could base my decision on facts that are:”
- Objective, mathematical Proof
- Actual Definition
- Standards-based (really just a convention that “someone else” agreed on)
- Things I choose to believe in
- Things I think are true
- Correspondent to my feelings
- Stuff I really wish could be true.
That’s the objective parameter out of the way, now we need to categorize people with regards to how they relate to “facts” in their respective categories. Like this:
| Accountability of given facts | Developer | Architect |
| --- | --- | --- |
| By Objective Proof | 100 | 60 |
| Choose to believe that | 40 | 70 |
| Really wish that | 0 | 60 |
The scale runs from 0 (zero, not ‘null’) up to and including 100, which is proven to be a “larger number” than zero. Why not?
The table seems to imply that there’s a certain difference in the way we perceive things. And indeed, transform it into a radar chart and the gap becomes plain to see.
So you see, ladies and gentlemen, there is a reason why this decision process is an ongoing struggle, one that all too often ends with software systems disappearing into the “supermassive black hole of architecture”, where nothing, not even unambiguous null values or valid foreign keys, can get out.