
Decision-making and belief systems

OK, full disclosure: I am not an “Architect”. At least, I am not an architect who actually knows what architecture is. I know what real architects do, and I am not one of those. The other kind (“software architect” or “IT architect”), I don’t have a clue what it is they actually do, or are supposed to do. So maybe I am one after all; no way to be sure. Which is fascinating, and has got me thinking lately. I think I know what it is they’re supposed to be doing. But as it’s not an education, a job description, or even an assignment that differentiates them from the other roles of an IT project or organization, I really can’t claim to know what defines them as a discernible entity, class, or species if you will. In other words, I would be at a loss if asked to define one, or even to guess whether a person (“A”) was one. I would be forced into making a judgement call, a wild guess. More on that to follow.

Recently, I was invited in on a discussion of an “architectural” issue at my current place of employment. They (“the architects”) laid out a plan for a new architectural component that they would like to see implemented. Without going into much detail, the “architecture” proposed could actually be construed as a valid idea, when proposed on the only platform it will ever work on: PowerPoint. However, it went against almost all modern (and ancient, for that matter), proven, and accepted principles of building sturdy, performant software systems. It involved building a central lump of software for “data management” services. It would create tight dependencies to and from every other in-house system (except legacy or 3rd-party ones), it would represent a SPOF (single point of … ah, you get it), it would be detrimental to performance in all those systems, and worst of all (dammit, I’m not even sure of that), it would effectively eradicate database integrity in all of the systems involved. Yes, that’s right. I am not making this up.

So, my initial thought was: “Wow, this governance thingy must represent a really, really, really high value to their business, since they are willing to compromise on such a grand, gigantic, majestic, no-expenses-spared scale on all other technical issues”.
But no. It doesn’t. After a somewhat heated exchange of opinions, many of which started with “Oh yeah? Well your mother…”, I got the distinct impression they just “think it’s a good idea”.

And this is where I started off on this impressively informative rant: there must be a huge difference in perspective here. And I am not just talking about the mundane “different jobs, different priorities” kind of conflict.

As an example, in the case described, one of the goals is to get rid of the use of NULL values inside each application domain. Doesn’t sound all that controversial? May even sound like a valid idea? OK, bear with me for a second: NO, it is NOT a good idea. Why? Because NULL is a defined value that, by definition, means “I don’t know this”. That is actually what it means. I know there are some would-be geniuses who will try to persuade you that NULL is evil. That is because they haven’t learned to appreciate the pure and simple concept of NULL. Granted, if you have unintended NULL pointers in your code, it’s annoying, and may seem like an “unnecessary evil”. But they get in there for the simple reason that you have done it wrong. You have put them there, and later failed to take that into account. But enough about that. Really. Seriously. Enough. Back to our example in question:

Developer: “OK, what do you suggest we put in there instead?”
Architect: “A predefined value that means ‘null’.”
Developer: “OK, uhm, ok, erh, Why, really?”
Architect: “Because then we know it’s actually unknown, or value not supplied, or – something.” (OK, I added the bit about “something” myself, couldn’t resist it).
Developer: “So you want to get rid of this value that’s been given by definition, and replace it with something that needs a flimsy convention to make sense?”
Architect: “Yes!” (happy, and almost seems impressed that the developer got it so quickly).

And this, dear reader (nudge! Hello, still there, right?), is at the heart of my brand new hypothesis: decisions are being made by people who have conflicting world views and completely different systems of belief. Getting technical about the ‘null’ again for the last time (promise): a definition is, in my mind, a much less ambiguous ‘value’ than one established by convention. If pressed on it, I will acknowledge that it is doable to replace the defined null with a “convened null”. However, it is an expensive maneuver, and there are no good reasons to do it. Even worse, it will, by its nature, introduce ambiguity into the system’s data models. The same goes for the discussion (from the same meeting) on database integrity. Yes, it is conceivable that you can make systems work without enforced referential integrity within their own domains, but there are no good reasons to do so, and it will inevitably lead to yet more conventions and countermeasures, all of which will be prone to new problems and errors, and expensive to implement to boot.
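
To make that concrete, here is a minimal Java sketch (the -1 sentinel and all the names are hypothetical, of course) of what a “convened null” does in practice:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Objects;

    public class ConvenedNull {
        public static void main(String[] args) {
            // "Unknown" by definition vs. "unknown" by flimsy agreement (-1)
            List<Integer> byDefinition = Arrays.asList(42, null, 38);
            List<Integer> byConvention = Arrays.asList(42, -1, 38);

            // Forgetting to handle the defined null fails loudly:
            // mapToInt without the filter throws a NullPointerException.
            double avgDefined = byDefinition.stream()
                    .filter(Objects::nonNull)
                    .mapToInt(Integer::intValue)
                    .average().orElse(Double.NaN);

            // The convened null never fails at all -- the sentinel just
            // sneaks into the arithmetic and quietly poisons the result.
            double avgConvened = byConvention.stream()
                    .mapToInt(Integer::intValue)
                    .average().orElse(Double.NaN);

            System.out.println(avgDefined);   // 40.0
            System.out.println(avgConvened);  // 26.33... -- quietly wrong
        }
    }

The defined null blows up the moment you forget about it; the convened null never complains, it just makes your numbers wrong.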
It is glaringly obvious, isn’t it? Easy to grasp?
Yes, but apparently not. So I came up with this theory that explains it all.

To be able to understand this decision-process meltdown, we need to stipulate some objective parameters. Actually, one parameter will do (I never said it was rocket science, did I?). I call it “the accountability of supplied facts”. It is a one-dimensional table:

“I decide to do this, because I could base my decision on facts that are:”

  1. Objective, mathematical Proof
  2. Actual Definition
  3. Convention
  4. Standards-based (really just a convention that “someone else” agreed on)
  5. Things I choose to believe in
  6. Things I think are true
  7. Correspondent to my feelings
  8. Stuff I really wish could be true.

That’s the objective parameter out of the way; now we need to categorize people with regard to how they relate to “facts” in their respective categories. Like this:

Accountability of given facts    Developer    Architect
By objective proof                     100           60
By definition                          100           70
By convention                          100           50
By standards                            80           60
Choose to believe that                  40           70
Think that                              20           50
Feel that                                0           50
Really wish that                         0           60

The numbers range from 0 (zero, not ‘null’) to 100, both ends inclusive; 100 is proven to be a “larger number” than zero. Why not?

The table seems to imply that there’s a certain difference in the way we perceive things. And indeed, transform this into a net chart, and this is what we find:

So you see, ladies and gentlemen, there is a reason why this decision process is an ongoing struggle, one which all too often ends with software systems collapsing into the “supermassive black hole of architecture”, where nothing, not even unambiguous null values or valid foreign keys, can get out.

The benefits of technical debt

The meta definition of debt goes something like this:

“A debt<T> is created when a creditor<T> agrees to lend a sum of assets<T> to a debtor<T,R>.

In modern society, debt<T> is usually granted with expected repayment; in most cases, plus interest”

In other words, if you borrow something, etiquette dictates that you pay it back – especially in instances of debt<money>, with interest. If you don’t, bad things will happen to you, and you’ll probably end up having to sleep under a bridge – with aching kneecaps as a possible consequence of using the wrong T for creditor (choosing creditor<bank> is usually a better idea than creditor<Tony S>).

In the case of debt<technical>, creditor<enterprise>, asset<time>, debtor<T> gets interesting.

It is worth mentioning that the consequences of technical debt actually vary according to the developer’s religion. Islam prohibits lending with interest, so a debtor<developer, Sheik Yerbouty> gets away with paying less than an instance of debtor<developer, Catholic girl>, where interest has been a concern since the days of Charles Babbage. On the other hand, an instance of debtor<developer, Jewish princess> implementing an especially inefficient piece of code will literally get away with technical murder (according to the Torah, all debts should be erased every 7 years and every 50 years).
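
For readers who would like the meta definition to actually compile, here is a minimal sketch in Java (every name, the rate, and the interest model are invented for the occasion):

    import java.time.Duration;

    // A debt<T> is created when a creditor<T> lends assets<T> to a debtor<T, R>.
    record Creditor<T>(String name) {}
    record Debtor<T, R>(String name, R religion) {}

    record Debt<T>(Creditor<T> creditor, Debtor<T, ?> debtor, T principal, double interest) {
        // Repayment is usually expected; in most cases, plus interest.
        @SuppressWarnings("unchecked")
        T expectedRepayment() {
            if (principal instanceof Duration d) {
                // debt<technical>: you borrowed time, you repay time, with interest
                return (T) Duration.ofMinutes(Math.round(d.toMinutes() * (1 + interest)));
            }
            return principal; // interest-free; see debtor<developer, Sheik Yerbouty>
        }
    }

    public class TechnicalDebt {
        public static void main(String[] args) {
            var debt = new Debt<Duration>(
                    new Creditor<>("enterprise"),
                    new Debtor<Duration, String>("consultant", "unknown"),
                    Duration.ofHours(40),  // the shortcut "saved" one week
                    0.5);                  // the technical interest rate is steep
            System.out.println(debt.expectedRepayment()); // PT60H
        }
    }

Note the religion parameter: it is carried around everywhere, but, per the Torah clause above, sometimes never read.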

Another consideration is that we (for the type definitions above) actually need two instances of debtor<T>, because one usually goes out of scope before the transaction is complete. This is especially true for the pair<debtor<consultant>, creditor<enterprise>>.

To complicate matters further, the semantics change dramatically across the domain of type parameters.

In the case of debt<financial>, creditor<institution>, asset<money>, debtor<Joe> we have just defined the driving force behind the global economy, growth and prosperity.

If we switch back to the technical debt parameterization scenario, we end up with the unit-testing mafia’s classic rhetorical dilemma: “pay now, or pay more later?” In this case debt is interpreted as bad, and should therefore be avoided at all costs. Unfortunately, their interpretation is based on their inability to grasp the concept of two temporally coupled instances of debtor<T>.

Technical debt is a good thing. It’s what makes the IT world spin. Without it, every system would work flawlessly and be so easy to maintain and extend that 95% of us would be out of a job – and would therefore probably end up having to sleep under a bridge. There is actually an argument to be made for a significant global increase in the acquisition of technical debt.

1: An enterprise with a large technical debt will likely require the services of an increasing number of developers to keep things under control (and probably also additional manpower to compensate for the software’s inability to support the core business processes). More developers will introduce more technical debt, and the cycle continues. Eventually everything will collapse, and the original software has to be replaced by new and improved software. Fortunately, the introduction of a piece of new and complex software requires the services of domain and software experts. If the software gets complex enough (and it will, if the debt is large enough), one will have to refer to the previous implementation to be absolutely sure that all the implicit knowledge contained in the old implementation is transferred to the new one. This includes representations of data and logic. If (reader<executive> == typeof(you)) Goto 1;
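
For the skeptics, the loop above also compiles. A runnable bit of satire (every constant is invented; tune to match your own enterprise):

    public class DebtCycle {
        public static void main(String[] args) {
            double debt = 1.0;    // arbitrary units of technical debt
            int developers = 5;
            int rewrites = 0;
            while (rewrites < 3) {
                if (debt >= 1000) {              // eventually everything collapses...
                    rewrites++;
                    debt = debt / 100;           // ...and the rewrite inherits the implicit knowledge
                }
                developers += debt / 10;         // hire to keep things under control
                debt *= 1.1 + developers * 0.01; // more developers, more debt
            }
            System.out.println("Rewrites: " + rewrites + ", developers employed: " + developers);
        }
    }

Run it a few times with different constants; the only stable outcome is more developers.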

Debt is good. It’s very, very good.

It provides more jobs than any enterprise can ever get rid of by downsizing. If they select the outsourcing option, the technical debt increases even faster because of a higher technical interest rate. In addition, it will also (eventually) drive your salary skywards, due to the fact that educational institutions don’t scale, but technical debt does – exponentially.

Normality and semantics are hereby restored.

[Ed: Some definitions may accidentally have been ripped from Wikipedia in the process of writing this article]

Well it’s kind of a – kind of a mass. It keeps getting bigger and bigger

This won’t hurt a bit. Just take a deep breath and relax – while we introduce you to the…

If you’re a developer, you’re probably thinking something along the lines of:

“Is it…?”
“No, it can’t be – surely…”
“By Knuth! Look at the efferent coupling on that thing! We’re all going to die!”

If you ever presented a design like that to anyone, you’d probably be out of a job. You can, however, replace the blob with a more traditional ESB long-box representation and dangle the services below it. (This is one of the oldest marketing tricks known to man, also known as the “homeomorphic swap”.)

It still has the same topology and the same coupling issues, the ones indicative of “something that knows too much” and “a certain brittleness”, but now you have become architect material.
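
If you prefer source code to slideware, here is the swap sketched in Java (all the service names are made up):

    public class HomeomorphicSwap {
        interface Billing { void charge(String order); }
        interface Inventory { void reserve(String sku); }
        interface Crm { void notifyCustomer(String msg); }

        // Exhibit A: the blob. One class that knows everything and everyone.
        static class Blob {
            final Billing billing; final Inventory inventory; final Crm crm;
            Blob(Billing b, Inventory i, Crm c) { billing = b; inventory = i; crm = c; }
            void placeOrder(String order) {
                inventory.reserve(order);
                billing.charge(order);
                crm.notifyCustomer("ordered: " + order);
            }
        }

        // Exhibit B: the "Bus". Same nodes, same edges in the dependency
        // graph; the only thing that changed is the name on the slide.
        static class EnterpriseServiceBus extends Blob {
            EnterpriseServiceBus(Billing b, Inventory i, Crm c) { super(b, i, c); }
        }

        public static void main(String[] args) {
            var bus = new EnterpriseServiceBus(
                    o -> System.out.println("charging " + o),
                    s -> System.out.println("reserving " + s),
                    m -> System.out.println(m));
            bus.placeOrder("ORDER-42"); // same coupling, better marketing
        }
    }

A homeomorphism preserves topology by definition; renaming the blob preserves the coupling just as faithfully.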

As an architect, you might be tempted to make an argument that everything is fine and dandy, because it’s loosely coupled.

Please don’t.

If there were no hardwired endpoints, and you had a service discovery mechanism powered by a Wintermute AI at your disposal – you would probably get away with it (The odds are – there are – and you don’t). That is, as long as your architecture only needs to support read operations.

The “Bus” concept is somewhat analogous to a semantic sucker punch. The word itself practically induces associations to some kind of Matrix-like hosting mechanism that suddenly made distributed computing easy.

The one thing the analyst forgot to tell you, all those years ago at that expensive conference, was that an ESB is a piece of software running on a piece of hardware. There is a high probability that your ESB actually obeys the laws of physics. I.e. it doesn’t really work, does it?

REST didn’t kill SOA – there was no need.

Been looking for this?

Three months of uninterrupted service in a hostile environment has to be a new Annoyatron record. People have literally dismantled office equipment and even been on their knees trying to locate the source of the 12 kHz sound. Successful deployment tactics seem to be associated with sound reflection and diffusion.
Annoyatrons are already difficult enough to locate when they’re installed beneath a desk or in the cushion of a chair, but if you deploy them in pairs behind perforated ceiling tiles, they seem to be almost impossible to locate – at least for middle-management execs. Great fun!

Zone mapping chart
