Wednesday, August 8, 2018

Why mandating primary keys is a mistake in defining the RM and in RDBMS design.

The relational model of data was intended as a general-purpose model.  General-purpose here means : suitable for addressing every conceivable business scenario and/or subject matter and/or problem type.  Stress : _every conceivable_.

Here's a relation schema plus FDs :

(A1, A2) with {A1->A2 , A2->A1}

That's perfectly conceivable.  Seasoned modelers will recognize this as a "bidirectional translation table".  Stress _bidirectional_.  Meaning that there *will* be users wanting to query this relation to find the A2 value given an A1 value, and there *will* be users wanting to query this relation to find the A1 value given an A2 value.  Perfectly conceivable.

Normalization theory informs us that the set of keys for this relation schema is {{A1}, {A2}}.
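Just to make the use case concrete, here's a minimal sketch of my own (Python with sqlite3, made-up table and column names) of such a bidirectional translation table in which *both* columns are declared unique, i.e. both are keys, and neither is singled out as "primary" :

import sqlite3

# A translation table in which BOTH columns are keys.  Note that neither
# column is designated "primary" ; both keys are simply declared.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE translation (
        a1 TEXT NOT NULL UNIQUE,   -- {A1} is a key
        a2 TEXT NOT NULL UNIQUE    -- {A2} is a key too
    )
""")
con.executemany("INSERT INTO translation (a1, a2) VALUES (?, ?)",
                [("one", "un"), ("two", "deux"), ("three", "trois")])

# Some users look up A2 given A1, others look up A1 given A2.
print(con.execute("SELECT a2 FROM translation WHERE a1 = ?", ("two",)).fetchone())
print(con.execute("SELECT a1 FROM translation WHERE a2 = ?", ("trois",)).fetchone())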

If the RM wants to meet its stated objective of being "general-purpose", it must sensibly support this use case.

Now, either you believe that some key being "primary" is absolutely foundational and MUST be a part of the data model and therefore MUST be an aspect of any logical database design and therefore any DBMS *must force* the designer to make certain choices about this.  And then the consequence is that those involved in [studying] the business of deriving logical database models from conceptual [business] models MUST [also be able to] provide an "algorithm" and/or list of checkpoints or some such that will allow a designer to at least make this choice on well-founded grounds.  If such well-founded grounds do not exist, then any choice is obviously entirely arbitrary, meaning the choice itself does not carry any real meaning, meaning the choice shouldn't have to be made.  Any and all supporters of the idea of mandating primary keys are invited to state their solution/approach for my (A1, A2) case.

Or you believe that "primality" of a key is meaningless and irrelevant and then, well, simply no one has any problem.  Just declare all the keys there are and use any one that suits your purposes as the identifier needed for the business use case at hand.  Observe that if "primality" of a key is not meaningless, then it must be possible to find some aspect of the behaviour of software/data systems that can be supported by "systems-with-primary-keys" but cannot be supported by systems without.  That is, if "primality" is meaningful, there must be some added value somewhere for the software/data system.  If that added value exists, it can be demonstrated.  I have never seen it.  And until it is demonstrated, the only reasonable/rational option is to treat the "primary" in "primary key" as the mere psychological distraction that it is.

Tuesday, May 15, 2018

Why 3VL is unusable in computing for humans.

The following post was triggered by the discussion at https://news.ycombinator.com/item?id=17028878 and in particular, by the response I got when I made the observation that 3VL has 19683 distinct binary logical operators.  I'll quote the relevant portion of that response here for purposes of retaining context :

Could it be that just as with the 16 binary operators, many of which have relations to one another (e.g. inverses and complements, among others) that the trinary operators could fall into similar groups, which, making the 3^9 number you mentioned seem a whole lot less complex? Could that be why it's neither necessary nor customary to work with all the operators in either sort of logic?

Well yes, they do, and I was already very much aware of that when I wrote the message this was in reply to, but I wasn't aware of the actual numbers.  So I decided to go do the maths (note in passing that the commenter could have chosen to do that just as well, and that without actually doing it, whatever he says doesn't even constitute an argument but stays at the level of gratuitous handwaving, which is often the only thing such commenters are capable of) and here are the results of that exercise.

First of all, let's inspect in some deeper detail how come there are 16 binary logical operators, yet when we're asked to sum them up we often get no farther than AND, OR, euhhhhhhhhhhhhhh, implication ?

In order to answer that I first want to look at the monadic operators in 2VL.  There are in total 4 such "theoretically possible" operators (I'll name them "T", "F", "I" and "N" respectively) :

IN \ OPER  ! T ! F ! I ! N !
-----------+---+---+---+---+
T          ! T ! F ! T ! F !
F          ! T ! F ! F ! T !

The operator named "T" returns 'true' no matter what, i.e. regardless of its input.  Likewise for the operator named "F" which returns 'false' no matter what.  At least from a programming language point of view, there is not much point in actually having these operators in the language, since invoking these is the equivalent of writing the corresponding literal.  So these can certainly be eliminated if all we want to retain is the set of "useful" operators.

The remaining two, "I" and "N" have the property that they constitute *permutations* of the applicable set of truth values : each truth value gets mapped to *some* truth value and no two truth values get mapped to the same truth value.  So these two are both total functions that are also bijections.

But of these two, "I" is not terribly useful either, because it constitutes the identity mapping : each value just gets mapped onto itself.  In programming language terms, there would be little point in ever writing code such as I(<bool xpr>) if we can just as well write <bool xpr>.

So of these four theoretically possible monadic logical 2VL operators, exactly and only one is actually useful : the one we know as "NOT".  And we'll be applying this one in the next step.
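In code terms (a small Python sketch of my own, not part of any particular library), that weeding-out goes like this :

from itertools import product

# A monadic operator over {True, False} is fully described by its two outputs,
# one for input True and one for input False.
all_monadic = list(product((True, False), repeat=2))       # 2^2 = 4 operators

constants = [op for op in all_monadic if op[0] == op[1]]   # "T" and "F"
identity  = (True, False)                                  # "I"
useful    = [op for op in all_monadic
             if op not in constants and op != identity]

print(len(all_monadic), "theoretically possible monadic operators")
print("useful ones :", useful)   # [(False, True)], i.e. the one we call NOT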

Now onto the binary logical 2VL operators.

As already stated, theoretically 16 such "operators" are possible :

IN \ OPER  ! 1 ! 2 ! 3 ! 4 ! 5 ! 6 ! 7 ! 8 ! 9 ! A ! B ! C ! D ! E ! F ! 0 !
-----------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
T,T        ! T ! T ! T ! T ! T ! T ! T ! T ! F ! F ! F ! F ! F ! F ! F ! F !
T,F        ! T ! T ! T ! T ! F ! F ! F ! F ! F ! F ! F ! F ! T ! T ! T ! T !
F,T        ! T ! T ! F ! F ! T ! T ! F ! F ! F ! F ! T ! T ! F ! F ! T ! T !
F,F        ! T ! F ! T ! F ! T ! F ! T ! F ! F ! T ! F ! T ! F ! T ! F ! T !

Now, 16 names is already getting to be a bit much to remember, so we go looking for ways to reduce this set of 16 operators to a smaller one that is manageable to remember.

As we did with the monadic operators, there are operators to be eliminated that behave like monadic "T" and "F" (binary "1" and "9").  There are also those that "just retain the value of the first IN argument" and "just retain the value of the second IN argument" (binary "4" and "6").

But let's first try something else.  One column is one operator definition.  Let's consider column named "8" (the one we know as "AND").  We could characterize this one as "TFFF" (the result values for the 4 possible input combinations chained together).

Using a "useful monadic operator", we could conceive an operation of "applying the monadic operator to this binary operator characterization" so that from "TFFF" we obtain "FTTT", and that's a characterization for some other operator (the one listed under "0").

In fact, that operation could be carried out for each of the operator characterizations "1"-"8", and we'd obtain one from "9"-"0" in each case.  What this means in language terms is that an invocation of "0" on argument values (x,y) is demonstrably equivalent to invoking "8" on argument values (x,y) and then invoking monadic "N" on that.  (I'm carefully avoiding using NOT here too much because it will matter when we get to the 3VL counterparts).

So we are able to reduce the set of 16 to the set of just "1"-"8" by observing that we can achieve the effects of "9"-"0" also by just using an extra invocation of monadic "N".
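Here's a small Python sketch of my own that carries out exactly this pairing-up over the operator characterizations, showing that the 16 collapse to 8 such pairs :

from itertools import product

# A binary 2VL operator is characterized by its 4 result values, in the order
# of the inputs (T,T), (T,F), (F,T), (F,F) -- e.g. AND is "TFFF".
all_ops = ["".join(ch) for ch in product("TF", repeat=4)]   # 16 of them

def negate(ch):
    # apply monadic "N" to a characterization
    return "".join("F" if c == "T" else "T" for c in ch)

pairs = {frozenset({ch, negate(ch)}) for ch in all_ops}
print(len(all_ops), "operators,", len(pairs), 'pairs after applying "N"')
print(sorted(frozenset({"TFFF", negate("TFFF")})))   # AND pairs up with NAND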

And only now is the point where we wish to eliminate the "degenerate" operators that just return a fixed value or just one of its input arguments, unchanged.  We then retain the following 5 binary operators :

IN \ OPER  ! 2 ! 3 ! 5 ! 7 ! 8 !
-----------+---+---+---+---+---+
T,T        ! T ! T ! T ! T ! T !
T,F        ! T ! T ! F ! F ! F !
F,T        ! T ! F ! T ! F ! F !
F,F        ! F ! T ! T ! T ! F !

We immediately recognize columns "2" and "8" as being the ones commonly known as "OR" and "AND" respectively.  But those are typically the *only* two we think of readily and immediately upon seeing the term "binary logical operator".  So what about the other three ?

First, column "7".  Upon inspection, we can see that this is in fact the definition for an operator that could be labeled "boolean equality" : it returns true iff the two input arguments are the same.  We don't usually think of that one as a "logical operator" (and in programming the need doesn't arise all that often for comparing boolean values for equality/being the same) but mathematically it is in fact very much so.  A slightly different light is shed on the situation if we consider the "negated" version, which is "logical inequality", so to speak, which is more commonly known as XOR.  That one *does* get included in some languages as a primitive !  So in fact column "7" reminds us of a useful logical binary operator we often instinctively tend to "forget".

Next, column "5".  Upon inspection, we can see that this is in fact the definition for the operator usually labeled "[material] implication".  Ah yes, the one we usually write as "not(x) or y".  Well, fair enough.  There is an odd thing about material implication ("how can an implication be true if its antecedent is false, meaning it can't be tested ?") that probably prevents syntax such as "X IMPLIES Y" or "IMPLIES(X,Y)" from being equally self-documenting/self-explaining as "X AND Y" or "AND(X,Y)".  Fair enough.

Lastly, column "3".  Here, we can see that it is in some sense "symmetric" with column 5 in that for all x,y : "3"(x,y) === "5"(y,x).  That is, in language terms, if we need to invoke operator "3" we can also achieve that effect by invoking operator "5" and just swapping the arguments.  So usually we don't bother to give this operator its own name (e.g. "IMPLIEDBY") and just let programmers invoke the other one, either through its assigned name (e.g. "IMPLIES") or its equivalent NOT/OR combo.
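Spelled out as plain Python functions (again, my own illustration), those three "forgotten" operators and the identities just mentioned look like this :

def eqv(x, y):         # column "7" : boolean equality, the negation of XOR
    return x == y

def implies(x, y):     # column "5" : material implication, not(x) or y
    return (not x) or y

def implied_by(x, y):  # column "3" : implication with the arguments swapped
    return implies(y, x)

for x in (True, False):
    for y in (True, False):
        assert eqv(x, y) == (not (x != y))          # EQV is the negation of XOR
        assert implied_by(x, y) == implies(y, x)    # "3"(x,y) === "5"(y,x)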

This ends our survey of how large sets of logical operators are reduced to a much smaller, more manageable set with a very limited number of names to remember.

We will *definitely* need that when we switch to 3VL.

In 3VL, when considering all the theoretically possible monadic operators, we end up having 27 (!!!) of those :

IN \ OPER  ! 0 ! 1 ! 2 ! 3 ! 4 ! 5 ! 6 ! 7 ! 8 ! ... ! 26 !
-----------+---+---+---+---+---+---+---+---+---+-----+----+
T          ! T ! T ! T ! T ! T ! T ! T ! T ! T !     !  F !
U          ! T ! T ! T ! U ! U ! U ! F ! F ! F !     !  F !
F          ! T ! U ! F ! T ! U ! F ! T ! U ! F !     !  F !

Of these, just like in 2VL, the ones that return T, U or F regardless of the input can be discarded, already reducing the set to be considered to 24 (!) pieces.

Of these, there are 6 that have a characteristic similar to the two that remained in 2VL : namely that they constitute permutations (total functions that are bijective) on the set of applicable truth values.

Note that this does not mean that the remaining 18 ones are insignificant or irrelevant.
For example, looking at column "8", we see that this is the definition for an operator that could be named TREAT_U_AS_F, which is the operator that is effectively applied by SQL (tacitly) to any predicate that appears in an SQL WHERE clause (if SQL finds a WHERE clause that evaluates to 'unknown', it will *not* include the row in the result set).
For another example, looking at column "2", we see that this is the definition for an operator that could be named TREAT_U_AS_T, which is the operator that is effectively applied by SQL (tacitly) to any predicate that appears in an SQL CHECK clause (if SQL finds a CHECK clause that evaluates to 'unknown', it *will* consider that CHECK constraint as "satisfied" and not reject an update on behalf of *that* CHECK constraint).
There are plenty of those.  For example, SQL "IS NULL" is the operator that (+-) maps U to T and both T and F to F.  "+-" because SQL "IS NULL" also applies to scalars and here we're purely talking truth values.
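Enumerating these 27 in Python (my own sketch ; TREAT_U_AS_F and TREAT_U_AS_T are the labels used above, and IS_UNKNOWN is just my label for the truth-valued "IS NULL" analogue) :

from itertools import product

DOMAIN = "TUF"

# A monadic 3VL operator is characterized by its outputs for inputs T, U, F.
all_monadic = list(product(DOMAIN, repeat=3))                      # 27 operators
constants   = [op for op in all_monadic if len(set(op)) == 1]      # 3 of them
perm_ops    = [op for op in all_monadic if set(op) == set(DOMAIN)] # 6 of them

TREAT_U_AS_F = ("T", "F", "F")   # column "8" : what SQL tacitly does in WHERE
TREAT_U_AS_T = ("T", "T", "F")   # column "2" : what SQL tacitly does in CHECK
IS_UNKNOWN   = ("F", "T", "F")   # the truth-valued analogue of SQL's IS NULL

print(len(all_monadic), len(constants), len(perm_ops))             # 27 3 6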

Anyway, in a first step and for our present purposes we will only consider the 6 permutation-operators (I've labeled four of them "A"-"D" because I am too lazy to figure out what their number would have been in the "0"-"26" scheme) :

IN \ OPER  ! 5 ! 7 ! A ! B ! C ! D !
-----------+---+---+---+---+---+---+
T          ! T ! T ! U ! U ! F ! F !
U          ! U ! F ! T ! F ! U ! T !
F          ! F ! U ! F ! T ! T ! U !

Of these, column "C" is the one that is usually defined as the 3VL equivalent of (2VL) "NOT", for the "attractive" property that for all x, "C"("C"(x)) === x which is then the 3VL counterpart of the 2VL tautology NOT(NOT(x)) === x.  (But note that operators "7" and "A" have this property too.)
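In Python terms (another sketch of my own), the six permutation operators and the check for that "NOT-like" property look like this :

OPS = {
    "5": {"T": "T", "U": "U", "F": "F"},   # the identity, "I"
    "7": {"T": "T", "U": "F", "F": "U"},
    "A": {"T": "U", "U": "T", "F": "F"},
    "B": {"T": "U", "U": "F", "F": "T"},   # named DEM further down
    "C": {"T": "F", "U": "U", "F": "T"},   # the usual 3VL NOT
    "D": {"T": "F", "U": "T", "F": "U"},   # named PROM further down
}

self_inverse = [name for name, op in OPS.items()
                if all(op[op[v]] == v for v in "TUF")]
print(self_inverse)   # the identity plus "7", "A" and "C"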

Now in 2VL at this point we were left with only a single operator.  So there was no question of whether that set could be further reduced.  We are not that lucky here, and we would certainly like not to have to continue with 6 distinct names (and corresponding definitions) to remember, and this just for the monadic operators alone.  Remember that in 2VL we ended up with a set of at most 5 (NOT OR AND XOR IMPLIES) and in 3VL at this point we already have a larger number than that.

More unfortunately, since the operators corresponding to columns "C", "7" and "A" each have the property just described (of "reverting their own effect", just like 2VL NOT does), no amount of chaining/nesting any one of them with itself will ever make us end up with the operator definition for any of the others (other than column "5", which is the identity operator, and which I'll refer to as "I" from this point on).

So if we want to reduce this set of 6 operators by showing that some of them are equivalent to some chaining of invocations of some of the others, we need to look at "B" and "D".

It turns out that
 
"D"("D"(x)) === "B"(x)
"B"("B"(x)) === "D"(x)
"D"("D"("D"(x))) === "I"(x)
"B"("B"("B"(x))) === "I"(x)
 
And no chaining/nesting of these ever ends up at the definition for any of "7", "A" and "C".  That makes sense if one observes that both "B" and "D" "cycle through" the values in a F->U->T->F or T->U->F->T sense, while the three others "swap two values while preserving the third".

For readability, we'll introduce the names "PROM" for "D" - because it "promotes" F to U, U to T and T round to F - and "DEM" for "B" - because it "demotes" truth values in a similar sense.  Likewise, we'll refer to the monadic operator "C" as NOT.

These observations show that a minimal set to express all of these six monadic 3VL operators must at least consist of one of {PROM DEM} and NOT (or one of the "7" and "A" operators, but that choice would be highly questionable from an intuitiveness point of view).  We have not proved that the set {PROM NOT} suffices to arrive at the operators "7" and "A" but we believe this to be the case and we proceed on the mere conjecture of this.
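That conjecture is the kind of thing a brute-force closure computation can check.  A sketch (my own, in Python) :

from itertools import permutations

PROM = {"T": "F", "U": "T", "F": "U"}   # column "D"
NOT_ = {"T": "F", "U": "U", "F": "T"}   # column "C"

def compose(f, g):
    # the operator x -> f(g(x))
    return {v: f[g[v]] for v in "TUF"}

def key(op):
    return tuple(sorted(op.items()))

# Close {PROM, NOT} under composition.
reachable = {key(PROM): PROM, key(NOT_): NOT_}
changed = True
while changed:
    changed = False
    for a in list(reachable.values()):
        for b in list(reachable.values()):
            c = compose(a, b)
            if key(c) not in reachable:
                reachable[key(c)] = c
                changed = True

all_six = {key(dict(zip("TUF", p))) for p in permutations("TUF")}
print(len(reachable), set(reachable) == all_six)   # 6 True, if the conjecture holds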

Now here's an interesting experiment.  Given any random chaining/nesting of invocations of PROM/NOT, point, in the table with the 6 operator definitions, to the particular one the nesting is equivalent to.  Do this *WITHIN THE SAME TIMEFRAME* that you would need to just count the NOTs and tell whether the count is odd or even (amounting to NOT or identity).  Do you think *ANYONE* would be capable of this ?  I don't.  Just try it.

PROM(NOT(PROM(NOT(x)))).

Timed it ?  And it can get worse.  We already observed that the 6 permutation operators are not the only _relevant_ ones (for being used in truth value computations).  Do the same exercise in the 27-column table for

TREAT_U_AS_F(PROM(TREAT_U_AS_F(PROM(NOT(x))))).

Which one of numbers 0-26 is it ?  If one is operating in an environment where these operators can *all of them* actually be used, it seems relatively important to be just as proficient in this as in just counting NOTs and assessing whether the count is even or odd.  But that's a tall order, and I believe it to be beyond any of us humans.
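For those who want to check their answers : the machine, of course, does this effortlessly.  A Python sketch of my own, reusing the operator definitions from above :

PROM         = {"T": "F", "U": "T", "F": "U"}   # column "D"
NOT_         = {"T": "F", "U": "U", "F": "T"}   # column "C"
TREAT_U_AS_F = {"T": "T", "U": "F", "F": "F"}   # column "8"

expr1 = {x: PROM[NOT_[PROM[NOT_[x]]]] for x in "TUF"}
expr2 = {x: TREAT_U_AS_F[PROM[TREAT_U_AS_F[PROM[NOT_[x]]]]] for x in "TUF"}

# The characterizations (result values for inputs T, U, F) tell you which
# single operator each nesting collapses to.
print("".join(expr1[x] for x in "TUF"))
print("".join(expr2[x] for x in "TUF"))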

This should already be giving you the start of a sense of why 3VL is, actually, totally unmasterable for the human mind.  But it gets still worse.  3VL too has binary operators.

19683 of them, theoretically speaking, to be precise.  I'm not going to provide them all in tabular form.  And if there is any set we want to see reduced for memorizability, it's this one.

Well, we sure can.  There are the three "degenerate" operators that always return T, U, F, respectively, regardless of the inputs.  Discard those from the set to retain 19680.  (BTW, the reason I'm doing this now is that these three don't reduce by the same factor of six as the rest : applying the six distinct monadic permutation operators to their TTTTTTTTT, UUUUUUUUU and FFFFFFFFF characterizations merely maps those three onto one another.)

To these remaining 19680, apply the six distinct characterization transforms to have only 3280 left, hoping that the six distinct characterization transforms applied to these 3280 will effectively cover all of the 19680 operators there are (this has *NOT* been proved).
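That divide-by-six step is also something one can simply let the machine carry out by brute force (a Python sketch of my own ; it groups the characterizations into their "orbits" under the six transforms and counts the groups) :

from itertools import permutations, product

DOMAIN = "TUF"
transforms = [dict(zip(DOMAIN, p)) for p in permutations(DOMAIN)]   # the 6 of them

groups = set()
for ch in product(DOMAIN, repeat=9):        # a characterization : 9 result values
    if len(set(ch)) == 1:
        continue                            # skip the 3 "degenerate" constants
    group = frozenset(tuple(t[v] for v in ch) for t in transforms)
    groups.add(min(group))                  # one canonical representative per group

print(len(groups))                          # the number of operators actually left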

Subsequently, inspect that set of 3280 for individual pairs that expose characteristics of "symmetry" just like IMPLIES/IMPLIEDBY did in 2VL.  (Commutative operators (like AND, OR and EQV in 2VL) are "symmetric" in this sense to themselves, so no reduction opportunities there.)

I'll have to take a guess at the actual number left here, but the next task is then to assign useful and meaningful names to the remaining two to three thousand.  And hope that memorizing them all will be doable for anyone having to study them.

So at this point I'm just going to be charitable to my critic.  Yes that set of 19683 operators can be significantly reduced.  By somewhere roundabouts 84%, which is even a higher percentage than the roundabouts 68% we could achieve in 2VL.  So I'm going to let him study/invent 3000 operator names while I'm building useful stuff using {AND OR NOT XOR IMPLIES}.

Friday, January 12, 2018

Afterthoughts on a data architects meetup


Visited a meetup of data architects yesterday. Main topic for me was the presentation with thoughts on our practices of data modeling, provocatively presented under the title “data modeling must die”. It was a very good talk. It defended ideas that have been mine as well for as long as I can remember. However, this post is about a point of disagreement. And another one.
Disagreement 1.
It was claimed that when Codd invented the relational model of data, he also made some serious mistakes. Fair enough, he has. (It may have been the case that many of those mistakes actually only crept in during the later years for reasons and circumstances that were more political than anything else, and that early Codd was even “purer” than the fiercest relational fundamentalist still walking around these days, but that’s another discussion.)
But the mistake being referred to was “inventing the relational model of data on an island”, by which it was meant that his “mistake” was to invent the RM in isolation from other phases of the process of data systems development, such as conceptual modeling.
True, the inventing happened in isolation. But dressing that up as a “mistake” he made is, eurhm, itself a mistake. One that exposes a lack of understanding of the circumstances of the day.
One, it is not even certain imo that “conceptual modeling” as a thing in its own right already existed at the time. Codd’s RM is 1969, Chen ER is 1974 ("An Introduction to Database Systems" even dates it 1976). So how *could* he have included any such thing in his thinking ? Here are two quotes from "An Introduction to Database Systems" that are most likely to illustrate accurately how Codd would probably never even have come up with the RM if he *truly, genuinely* was "working on an island, separated from any and all of those developer concerns as they typically manifest themselves while working at the conceptual level".
"It is probably obvious to you that the ideas of the E/R approach, or something very close to those ideas, MUST HAVE BEEN (emphasis mine) the informal underpinnings in Codd's mind when he first developed the formal relational model."
"In other words, in order for Codd to have constructed the (formal) relational model in the first place, he MUST HAVE HAD (emphasis mine) some (informal) "useful semantic concepts" in his mind, and those concepts MUST BASICALLY HAVE BEEN (emphasis mine) those of the E/R model, or something very like them."
Readers wanting to read more are referred to chapter 14 of said book, and pg 425 in particular, for the full discussion by Chris Date.
So why did Codd not bother with the stuff at the conceptual level ?  My answer : because he was a mathematician not an engineer.  And as a mathematician, his mindset always led him to want to be able to PIN THINGS DOWN PRECISELY, with "precisely" here carrying the meaning it has when present in the mind of a PhD in mathematics.  Which is quite different from the meaning the word might have in the mind of the average reader of this post.
And at the conceptual level, you never get to "pin things down precisely" AND THAT'S DELIBERATE.
In those days, there was analysis and there was programming. With a *very* thick Chinese Wall between the two, and often even between the people engaging in one of those two activities (at the time it was typically considered outright impossible for any person to be proficient in both). Analysis was done *on paper* and that paperwork got stored in physical binders ending up in a dust-collecting locker. I even doubt Codd ever got to see any such paper analysis work. He did get to see programs written in the “programming” side of things. Because that’s where his job was : in an environment whose prime purpose was to [develop ‘systems’ software to] support programmers in their “technical” side of the story.
Two, Codd never pretended to address the whole of the data systems development process with his RM. The RM was targeted at a very specific and narrow problem he perceived in that process, as it typically went in those days : that of programmers writing procedural code to dig out the data from where it is stored. He just aimed for a system that would permit *programmers* to do their data manipulation *more declaratively* and *less procedurally/mechanically*. Physical data independence. Nothing more than that. And the environmentals that would make such a thing conceivable and feasible in real life. Codd was even perfectly OK with not even considering how the data got into the database ! His first proposal for a data language, Alpha, *did not have INSERT/DELETE/UPDATE* ! He was perfectly fine leaving all those IMS shops as they were and do nothing but add a “mapping layer” so what came out of the mapping layer was just a relational view of data that was internally still “hierarchical”. I could go on and on about this, but my point here is : calling it a “mistake” that someone doesn’t do something he never intended to do in the first place (and possibly even didn’t have any way of knowing that doing it could be useful), is a bit over the edge.
Disagreement 2
It was claimed that “model translations MUST be automatic”. (The supporting argument being something of the ilk “otherwise it won’t happen anyway”.)
True and understandable (that otherwise it won’t happen), but reality won’t adapt itself so easily to management desiderata (“automatic” is management speak for “cheap” and that’s the only thing that matters) merely because management is management. Humans might, if they're not the manager. Reality won't. And the reality is that the path from highly conceptual, highly abstract, highly informal to fully specced out to the very last detail is achieved by *adding stuff*. And *adding stuff* means design decisions taken along the way. And automated processes are very inappropriate for making *design decisions*. (By *adding stuff* I merely mean *add new design information to the set of already available design information*, I do not mean, add new symbols or tokens to an already existing schema or drawing that is already made up in some syntax.)
When can automated systems succeed in making this kind of design decisions ? When very rigid conventions are followed. E.g. when it is okay that *every entity* modeled at the conceptual level eventually also becomes a table in the logical model/database. But that goes entirely counter to the actual purpose of modeling at the *conceptual* level ! If you take such conventions into account at the time you’re doing conceptual-level modeling, then you are deluding yourself because in fact you are actually already modeling at the logical level. Because you are already thinking of the consequences at the logical level of doing things this way or that way. The purpose of conceptual-level modeling is to be able to *communicate*. You want to express the notion that *somewhere somehow* the system is aware of a notion of, say, “customer” that is in some way related to, say, a notion of “order” that our business is about. You *SHOULD NOT NEED TO WORRY* about the *logical details* of that notion of a “customer” if all you want to do is express the fact that these notions exist and are related.
So, somewhat contrary to the undoubtedly wise people in front of the audience, I’m rather inclined to conjecture that if you try to do those “model translations” automatically, you are depriving yourself of the freedom to take those design decisions that are the “right” ones for the context at hand, because the only design decisions that *can* still be taken are those *[hardcoded] in [the implementation of]* the translation process. And such a translation process can *never* understand the context (central bank vs. small shop on the corner of the street kind of aspects), let alone take it into account, in the way that a human designer can. That is, you are depriving yourself of the opportunity to come up with the “right” designs.
A third point.
I was also surprised to find how easily even the data architects of the current generation who are genuinely motivated to improve things seem to have this kind of association that “Codd came up with SQL”. He didn’t, and he’d actively turn around in his grave hearing such nonsense (he might also just have given up turning around because it never ends). He came up with the relational model. The *data language* he proposed himself was called Alpha. Between Alpha and SQL, several query languages have seen the light of day, the most notable among them probably being QUEL. SQL itself originated as the SEQUEL language designed by Chamberlin and Boyce at IBM in the mid-1970s, and its commercial breakthrough is mostly due to what good old Larry did roundabouts 1980. It is relatively safe to assume that, once SQL was out, Codd felt about it much the same way that Dijkstra felt about BASIC and COBOL : that it was the most horrendous abomination ever conceived by a human. But that (neither the fact that the likes of Codd *have* such a denigrating opinion, nor the fact that they’re right) won’t stop adoption.