Wednesday, August 8, 2018

Why mandating primary keys is a mistake in defining the RM and in RDBMS design.

The relational model of data was intended as a general-purpose model.  General-purpose here means : suitable for addressing every conceivable business scenario and/or subject matter and/or problem type.  Stress : _every conceivable_.

Here's a relation schema plus FDs :

(A1, A2) with {A1->A2 , A2->A1}

That's perfectly conceivable.  Seasoned modelers will recognize this as a "bidirectional translation table".  Stress _bidirectional_.  Meaning that there *will* be users wanting to query this relation to find the A2 value given an A1 value, and there *will* be users wanting to query this relation to find the A1 value given an A2 value.  Perfectly conceivable.

Normalization theory informs us that the set of keys for this relation schema is {{A1} {A2}}.

If the RM wants to meet its stated objective of being "general-purpose", it must sensibly support this use case.

Now, either you believe that some key being "primary" is absolutely foundational and MUST be a part of the data model and therefore MUST be an aspect of any logical database design and therefore any DBMS *must force* the designer to make certain choices about this.  And then the consequence is that those involved in [studying] the business of deriving logical database models from conceptual [business] models MUST [also be able to] provide an "algorithm" and/or list of checkpoints or some such that will allow a designer to at least make this choice on well-founded grounds.  If such well-founded grounds do not exist, then any choice is obviously entirely arbitrary, meaning the choice itself does not carry any real meaning, meaning the choice shouldn't have to be made.  Any and all supporters of the idea of mandating primary keys are invited to state their solution/approach for my (A1, A2) case.

Or you believe that "primality" of a key is meaningless and irrelevant and then, well, simply no one has any problem.  Just declare all the keys there are and use any one that suits your purposes as the identifier needed for the business use case at hand.  Observe that if "primality" of a key is not meaningless, then it must be possible to find some aspect of the behaviour of software/data systems that can be supported by "systems-with-primary-keys" but cannot be supported by systems without.  That is, if "primality" is meaningful, there must be some added value somewhere for the software/data system.  If that added value exists, it can be demonstrated.  I have never seen it.  And until it is demonstrated, the only reasonable/rational option is to treat the "primary" in "primary key" as the mere psychological distraction that it is.

Tuesday, May 15, 2018

Why 3VL is unusable in computing for humans.

The following post was triggered by the discussion at https://news.ycombinator.com/item?id=17028878 and in particular, by the response I got when I made the observation that 3VL has 19683 distinct binary logical operators.  I'll quote the relevant portion of that response here for purposes of retaining context :

Could it be that just as with the 16 binary operators, many of which have relations to one another (e.g. inverses and complements, among others) that the trinary operators could fall into similar groups, which, making the 3^9 number you mentioned seem a whole lot less complex? Could that be why it's neither necessary nor customary to work with all the operators in either sort of logic?

Well yes, they do, and I was already very much aware of that when I wrote the message this was a reply to, but I wasn't aware of the actual numbers.  So I decided to go do the maths (note in passing that the commenter could have chosen to do that just as well, and that without actually doing it, whatever he says doesn't even constitute an argument but stays at the level of gratuitous handwaving, which is often the only thing such commenters are capable of), and here are the results of that exercise.

First of all, let's inspect in some deeper detail how come there are 16 binary logical operators, yet when we're asked to sum them up we often get no further than AND, OR, euhhhhhhhhhhhhhh, implication ?

In order to answer that I first want to look at the monadic operators in 2VL.  There are in total 4 such "theoretically possible" operators (I'll name them "T", "F", "I" and "N" respectively) :

IN \ OPER  ! T ! F ! I ! N !
-----------+---+---+---+---+
T          ! T ! F ! T ! F !
F          ! T ! F ! F ! T !

The operator named "T" returns 'true' no matter what, i.e. regardless of its input.  Likewise for the operator named "F" which returns 'false' no matter what.  At least from a programming language point of view, there is not much point in actually having these operators in the language, since invoking these is the equivalent of writing the corresponding literal.  So these can certainly be eliminated if all we want to retain is the set of "useful" operators.

The remaining two, "I" and "N" have the property that they constitute *permutations* of the applicable set of truth values : each truth value gets mapped to *some* truth value and no two truth values get mapped to the same truth value.  So these two are both total functions that are also bijections.

But of these two, "I" is not terribly useful either, because it constitutes the identity mapping : each value just gets mapped onto itself.  In programming language terms, there would be little point in ever writing code such as I(<bool xpr>) if we can just as well just write <bool xpr>.

So of these four theoretically possible monadic logical 2VL operators, exactly and only one is actually useful : the one we know as "NOT".  And we'll be applying this one in the next step.
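(For those who prefer to see this enumerated rather than tabulated, here's a tiny Python sketch that walks the four theoretically possible monadic 2VL operators and classifies them along the lines just given.  Representing truth values as the strings 'T' and 'F' is an arbitrary choice for the example.)

from itertools import product

VALUES = ('T', 'F')

# Each monadic operator is fully described by its outputs for inputs T and F.
for out_t, out_f in product(VALUES, repeat=2):
    if out_t == out_f:
        kind = "constant (a literal in disguise)"
    elif out_t == 'T':
        kind = "identity (pointless to invoke)"
    else:
        kind = "negation (the one we call NOT)"
    print(f"T->{out_t}  F->{out_f}  :  {kind}")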

Now onto the binary logical 2VL operators.

As already stated, theoretically 16 such "operators" are possible :

IN \ OPER  ! 1 ! 2 ! 3 ! 4 ! 5 ! 6 ! 7 ! 8 ! 9 ! A ! B ! C ! D ! E ! F ! 0 !
-----------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
T,T        ! T ! T ! T ! T ! T ! T ! T ! T ! F ! F ! F ! F ! F ! F ! F ! F !
T,F        ! T ! T ! T ! T ! F ! F ! F ! F ! F ! F ! F ! F ! T ! T ! T ! T !
F,T        ! T ! T ! F ! F ! T ! T ! F ! F ! F ! F ! T ! T ! F ! F ! T ! T !
F,F        ! T ! F ! T ! F ! T ! F ! T ! F ! F ! T ! F ! T ! F ! T ! F ! T !

Now 16 names is already getting a bit much to remember, so we go looking for ways to reduce this set of 16 operators to a smaller one that is manageable to remember.

As we did with the monadic operators, there are operators like monadic "T" and "F" to be eliminated (binary "1" and "9").  There are also those that "just retain the value of the first IN argument" and "just retain the value of the second IN argument" (binary "4" and "6").

But let's first try something else.  One column is one operator definition.  Let's consider column named "8" (the one we know as "AND").  We could characterize this one as "TFFF" (the result values for the 4 possible input combinations chained together).

Using a "useful monadic operator", we could conceive an operation of "applying the monadic operator to this binary operator characterization" so that from "TFFF" we obtain "FTTT", and that's a characterization for some other operator (the one listed under "0").

In fact, that operation could be carried out for each of the operator characterizations "1"-"8", and we'd obtain one from "9"-"0" in each case.  What this means in language terms is that an invocation of "0" on argument values (x,y) is demonstrably equivalent to invoking "8" on argument values (x,y) and then invoking monadic "N" on that.  (I'm carefully avoiding using NOT here too much because it will matter when we get to the 3VL counterparts).

So we are able to reduce the set of 16 to the set of just "1"-"8" by observing that we can achieve the effects of "9"-"0" also by just using an extra invocation of monadic "N".
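(Again a small Python sketch, for whoever wants to see this reduction carried out mechanically rather than take it on faith : characterize each of the 16 operators by its four output values, pair each characterization with its "N-ed" counterpart, and count what's left.  Representation choices are, once more, just for the example.)

from itertools import product

VALUES = ('T', 'F')
N = {'T': 'F', 'F': 'T'}

# One 4-tuple of outputs per operator, for the inputs (T,T) (T,F) (F,T) (F,F).
all_chars = list(product(VALUES, repeat=4))
print(len(all_chars))                             # 16

seen, representatives = set(), []
for char in all_chars:
    negated = tuple(N[v] for v in char)
    if char not in seen:
        seen.update({char, negated})
        representatives.append(char)
print(len(representatives))                       # 8 : the columns "1"-"8"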

And only now is the point where we wish to eliminate the "degenerate" operators that just return a fixed value or just one of its input arguments, unchanged.  We then retain the following 5 binary operators :

IN \ OPER  ! 2 ! 3 ! 5 ! 7 ! 8 !
-----------+---+---+---+---+---+
T,T        ! T ! T ! T ! T ! T !
T,F        ! T ! T ! F ! F ! F !
F,T        ! T ! F ! T ! F ! F !
F,F        ! F ! T ! T ! T ! F !

We immediately recognize columns "2" and "8" as being the ones commonly known as "OR" and "AND" respectively.  But those are typically the *only* two we think of readily and immediately upon seeing the term "binary logical operator".  So what about the other three ?

First, column "7".  Upon inspection, we can see that this is in fact the definition for an operator that could be labeled "boolean equality" : it returns true iff the two input arguments are the same.  We don't usually think of that one as a "logical operator" (and in programming the need doesn't arise all that often for comparing boolean values for equality/being the same) but mathematically it is in fact very much so.  A slightly different light is shed on the situation if we consider the "negated" version, which is "logical inequality", so to speak, which is more commonly known as XOR.  That one *does* get included in some languages as a primitive !  So in fact column "7" reminds us of a useful logical binary operator we often instinctively tend to "forget".

Next, column "5".  Upon inspection, we can see that this is in fact the definition for the operator usually labeled "[material] implication".  Ah yes, the one we usually write as "not(x) or y".  Well, fair enough.  There is an odd thing about material implication ("how can an implication be true if its antecedent is false meaning it can't be tested ?") that probably prohibits syntax such as "X IMPLIES Y" or "IMPLIES(X,Y)" to be equally self-documenting/self-explaining as "X AND Y" or "AND(X,Y)".  Fair enough.

Lastly, column "3".  Here, we can see that it is in some sense "symmetric" with column 5 in that for all x,y : "3"(x,y) === "5"(y,x).  That is, in language terms, if we need to invoke operator "3" we can also achieve that effect by invoking operator "5" and just swapping the arguments.  So usually we don't bother to give this operator its own name (e.g. "IMPLIEDBY") and just let programmers invoke the other one, either through its assigned name (e.g. "IMPLIES") or its equivalent NOT/OR combo.

This ends our survey of how large sets of logical operators are reduced to a much smaller, more manageable set with a very limited number of names to remember.

We will *definitely* need that when we switch to 3VL.

In 3VL, when considering all the theoretically possible monadic operators, we end up having 27 (!!!) of those :

IN \ OPER  ! 0 ! 1 ! 2 ! 3 ! 4 ! 5 ! 6 ! 7 ! 8 ! ... ! 26 !
-----------+---+---+---+---+---+---+---+---+---+-----+----+
T          ! T ! T ! T ! T ! T ! T ! T ! T ! T !     !  F !
U          ! T ! T ! T ! U ! U ! U ! F ! F ! F !     !  F !
F          ! T ! U ! F ! T ! U ! F ! T ! U ! F !     !  F !

Of these, just like in 2VL, the ones that return T, U or F regardless of the input can be discarded, already reducing the set to be considered to 24 (!) operators.

Of these, there are 6 that have a similar characteristic as the two ones that were remaining in 2VL : namely that they constitute permutations (total functions that are bijective) on the set of applicable truth values.

Note that this does not mean that the remaining 18 ones are insignificant or irrelevant.
For example, looking at column "8", we see that this is the definition for an operator that could be named TREAT_U_AS_F, which is the operator that is effectively applied by SQL (tacitly) to any predicate that appears in an SQL WHERE clause (if SQL finds a WHERE clause that evaluates to 'unknown', it will *not* include the row in the result set).
For another example, looking at column "2", we see that this is the definition for an operator that could be named TREAT_U_AS_T, which is the operator that is effectively applied by SQL (tacitly) to any predicate that appears in an SQL CHECK clause (if SQL finds a CHECK clause that evaluates to 'unknown', it *will* consider that CHECK constraint as "satisfied" and not reject an update on behalf of *that* CHECK constraint).
There are plenty of those.  For example, SQL "IS NULL" is the operator that (more or less) maps U to T and both T and F to F.  "More or less" because SQL "IS NULL" also applies to scalars, and here we're purely talking truth values.
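(Here's a small Python sketch enumerating that monadic 3VL landscape : 27 operators in total, of which 3 constants and 6 permutations, and among the remaining 18 the three familiar faces just mentioned.  The 'T'/'U'/'F' strings and the operator names are just the representation chosen for the example.)

from itertools import product, permutations

V = ('T', 'U', 'F')

all_ops = [dict(zip(V, outs)) for outs in product(V, repeat=3)]
print(len(all_ops))                                   # 27

constants = [op for op in all_ops if len(set(op.values())) == 1]
perm_ops  = [op for op in all_ops
             if tuple(op[v] for v in V) in set(permutations(V))]
print(len(constants), len(perm_ops))                  # 3 6

TREAT_U_AS_F = {'T': 'T', 'U': 'F', 'F': 'F'}         # column "8", SQL WHERE treatment
TREAT_U_AS_T = {'T': 'T', 'U': 'T', 'F': 'F'}         # column "2", SQL CHECK treatment
IS_UNKNOWN   = {'T': 'F', 'U': 'T', 'F': 'F'}         # the truth-valued "IS NULL"
print(all(op in all_ops for op in (TREAT_U_AS_F, TREAT_U_AS_T, IS_UNKNOWN)))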

Anyway, in a first step and for our present purposes we will only consider the 6 permutation-operators (I've labeled four of them "A"-"D" because I am too lazy to figure out what their number would have been in the "0"-"26" scheme) :

IN \ OPER  ! 5 ! 7 ! A ! B ! C ! D !
-----------+---+---+---+---+---+---+
T          ! T ! T ! U ! U ! F ! F !
U          ! U ! F ! T ! F ! U ! T !
F          ! F ! U ! F ! T ! T ! U !

Of these, column "C" is the one that is usually defined as the 3VL equivalent of (2VL) "NOT", for the "attractive" property that for all x, "C"("C"(x)) === x which is then the 3VL counterpart of the 2VL tautology NOT(NOT(x)) === x.  (But note that operators "7" and "A" have this property too.)

Now in 2VL at this point we were only left with 1 single operator.  So there was no question of whether that set could be further reduced.  We are not that lucky here, and we would certainly like to not have to continue with 6 distinct names (and corresponding definitions) to remember and this just for the monadic operators alone.  Remember that in 2VL we ended up with a set of at most 5 (NOT OR AND XOR IMPLIES) and in 3VL at this point we already have a larger number than those.

More unfortunately, since the operators corresponding to columns "C", "7" and "A" each have the property just described (of "reverting their own effect", just like 2VL NOT does), no amount of chaining/nesting any one of them will ever make us end up with the operator definition for any of the 4 others (excluding here column "5", which is the identity operator, and which I'll refer to as "I" from this point on).

So if we want to reduce this set of 6 operators by showing that some of them are equivalent to some chaining of invocations of some of the others, we need to look at "B" and "D".

It turns out that
 
"D"("D"(x)) === "B"(x)
"B"("B"(x)) === "D"(x)
"D"("D"("D"(x))) === "I"(x)
"B"("B"("B"(x))) === "I"(x)
 
And no chaining/nesting of these ever ends up at the definition for any of "7", "A" and "C".  That makes sense if one observes that both "B" and "D" "cycle through" the values in a F->U->T->F or T->U->F->T sense, whereas the three others "swap two values while preserving the third".

For readability, we'll introduce the names "PROM" for "D" - because it "promotes" F to U, U to T and T round to F - and "DEM" for "B" - because it "demotes" truth values in a similar sense.  Likewise, we'll refer to the monadic operator "C" as NOT.

These observations show that a minimal set to express all of these six monadic 3VL operators must at least consist of one of {PROM DEM} and NOT (or one of the "7" and "A" operators, but that choice would be highly questionable from an intuitiveness point of view).  We have not proved that the set {PROM NOT} suffices to arrive at the operators "7" and "A" but we believe this to be the case and we proceed on the mere conjecture of this.
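(The identities above, and the conjecture, are the kind of thing a few lines of Python can check by brute force.  The sketch below composes operators left to right — compose(f, g) is "f after g" — and then collects everything reachable by chaining PROM and NOT ; run it to see whether columns "7" and "A" turn up.)

V = ('T', 'U', 'F')
IDENT = {v: v for v in V}
NOT   = {'T': 'F', 'U': 'U', 'F': 'T'}     # column "C"
PROM  = {'T': 'F', 'U': 'T', 'F': 'U'}     # column "D" : F->U, U->T, T->F
DEM   = {'T': 'U', 'U': 'F', 'F': 'T'}     # column "B" : T->U, U->F, F->T
OP7   = {'T': 'T', 'U': 'F', 'F': 'U'}     # column "7"
OPA   = {'T': 'U', 'U': 'T', 'F': 'F'}     # column "A"

def compose(f, g):
    # compose(f, g)(x) == f(g(x))
    return {v: f[g[v]] for v in V}

print(compose(DEM, DEM) == PROM)                     # B(B(x)) === D(x)
print(compose(PROM, PROM) == DEM)                    # D(D(x)) === B(x)
print(compose(PROM, compose(PROM, PROM)) == IDENT)   # D(D(D(x))) === I(x)

# Everything reachable by chaining PROM and NOT, starting from the identity.
reached = {tuple(IDENT[v] for v in V): IDENT}
frontier = [IDENT]
while frontier:
    new = []
    for op in frontier:
        for gen in (PROM, NOT):
            cand = compose(gen, op)
            key = tuple(cand[v] for v in V)
            if key not in reached:
                reached[key] = cand
                new.append(cand)
    frontier = new
print(len(reached), OP7 in reached.values(), OPA in reached.values())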

Now here's an interesting experiment.  Given any random chaining/nesting of invocations of PROM/NOT, point out, in the table with the 6 operator definitions, which particular one the nesting is equivalent to.  Do this *WITHIN THE SAME TIMEFRAME* that you would need to just count the NOTs and tell whether the count is odd or even (amounting to NOT or identity).  Do you think *ANYONE* would be capable of this ?  I don't.  Just try it.

PROM(NOT(PROM(NOT(x)))).

Timed it ?  And it can get worse.  We already observed that the 6 permutation operators are not the only _relevant_ ones (for being used in truth value computations).  Do the same exercise in the 27-column table for

TREAT_U_AS_F(PROM(TREAT_U_AS_F(PROM(NOT(x))))).

Which one of number 0-26 is it ?  If one is operating in an environment where these operators can *all of them* actually be used, it seems relatively important to be just as proficient in this as in just counting NOTs and assessing whether the count is even or odd.  But that's a tall order, and I believe it to be beyond any of us humans.
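(A machine, of course, has no trouble at all with this exercise, which rather underlines that the problem lies with the human reader, not with the mathematics.  Here's a Python sketch that evaluates both chains and reports the column number in the same 0-26 numbering as the monadic table above, taking T=0, U=1, F=2 per output position.)

V = ('T', 'U', 'F')
NOT          = {'T': 'F', 'U': 'U', 'F': 'T'}
PROM         = {'T': 'F', 'U': 'T', 'F': 'U'}
TREAT_U_AS_F = {'T': 'T', 'U': 'F', 'F': 'F'}

def column_number(op):
    digit = {'T': 0, 'U': 1, 'F': 2}
    return 9 * digit[op['T']] + 3 * digit[op['U']] + digit[op['F']]

chain1 = {x: PROM[NOT[PROM[NOT[x]]]] for x in V}
chain2 = {x: TREAT_U_AS_F[PROM[TREAT_U_AS_F[PROM[NOT[x]]]]] for x in V}
print(column_number(chain1), column_number(chain2))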

This should already be giving you the beginnings of a sense of why 3VL is, actually, totally unmasterable for the human mind.  But it gets still worse.  3VL too has binary operators.

19683 of them, theoretically speaking, to be precise.  I'm not going to provide them all in tabular form.  And if there is any set we want to see reduced for memorizability, it's this one.

Well, we sure can.  There are the three "degenerate" operators that always return T, U, F, respectively, regardless of the inputs.  Discard those from the set to retain 19680.  (BTW, the reason I'm doing this now is that I can't reduce these three operators to "half a case" by applying the six distinct monadic permutation operators to their TTTTTTTTT, UUUUUUUUU, FFFFFFFFF characterizations.)

To these remaining 19680, apply the six distinct characterization transforms to be left with only 3280, hoping that the six distinct characterization transforms applied to those 3280 will effectively cover all of the 19680 operators there are (this has *NOT* been proved here, though the sketch below lets you check it by brute force).
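(Here's a Python sketch of that brute-force check : characterize each binary 3VL operator by its 9 output values, throw away the three constants, and count the classes left when characterizations that differ only by one of the six output permutations are lumped together.)

from itertools import product, permutations

V = ('T', 'U', 'F')

all_chars = set(product(V, repeat=9))        # one 9-tuple of outputs per operator
print(len(all_chars))                        # 19683
non_constant = {c for c in all_chars if len(set(c)) > 1}
print(len(non_constant))                     # 19680

perms = [dict(zip(V, p)) for p in permutations(V)]    # the six permutation operators
orbits = {frozenset(tuple(p[v] for v in char) for p in perms)
          for char in non_constant}
print(len(orbits))                           # the number of classes left over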

Subsequently, inspect that set of 3280 for individual pairs that expose characteristics of "symmetry", just like IMPLIES/IMPLIEDBY did in 2VL.  (Commutative operators (like AND, OR and EQV in 2VL) are "symmetric" in this sense to themselves, so no reduction opportunities there.)

I'll have to take a guess at the actual number left here, but the next task is then to assign useful and meaningful names to the remaining 2,000 to 3,000.  And hope memorizing them all will be doable for anyone having to study it.

So at this point I'm just going to be charitable to my critic.  Yes that set of 19683 operators can be significantly reduced.  By somewhere roundabouts 84%, which is even a higher percentage than the roundabouts 68% we could achieve in 2VL.  So I'm going to let him study/invent 3000 operator names while I'm building useful stuff using {AND OR NOT XOR IMPLIES}.

Friday, January 12, 2018

Afterthoughts on a data architects meetup


Visited a meetup of data architects yesterday. Main topic for me was the presentation with thoughts on our practices of data modeling, provokingly presented under the title “data modeling must die”. It was a very good talk. It defended ideas that have been mine as well for as long as I can remember. However this post is about a point of disagreement. And another one.
Disagreement 1.
It was claimed that when Codd invented the relational model of data, he also made some serious mistakes. Fair enough, he has. (It may have been the case that many of those mistakes actually only crept in during the later years for reasons and circumstances that were more political than anything else, and that early Codd was even “purer” than the fiercest relational fundamentalist still walking around these days, but that’s another discussion.)
But the mistake being referred to was “inventing the relational model of data on an island”, by which it was meant that his “mistake” was to invent the RM in isolation from other phases of the process of data systems development, such as conceptual modeling.
True, the inventing happened in isolation. But dressing that up as a “mistake” he made is, eurhm, itself a mistake. One that exposes a lack of understanding of the circumstances of the day.
One, it is not even certain imo that "conceptual modeling" as a thing in its own right already existed at the time.  Codd's RM is 1969, Chen ER is 1974 ("An Introduction to Database Systems" even dates it 1976).  So how *could* he have included any such thing in his thinking ?  Here are two quotes from "An Introduction to Database Systems" that are most likely to illustrate accurately how Codd would probably never even have come up with the RM if he *truly, genuinely* had been "working on an island, separated from any and all of those developer concerns as they typically manifest themselves while working at the conceptual level".
"It is probably obvious to you that the ideas of the E/R approach, or something very close to those ideas, MUST HAVE BEEN (emphasis mine) the informal underpinnings in Codd's mind when he first developed the formal relational model."
"In other words, in order for Codd to have constructed the (formal) relational model in the first place, he MUST HAVE HAD (emphasis mine) some (informal) "useful semantic concepts" in his mind, and those concepts MUST BASICALLY HAVE BEEN (emphasis mine) those of the E/R model, or something very like them."
Readers wanting to read more are referred to chapter 14 of said book, and pg 425 in particular, for the full discussion by Chris Date.
So why did Codd not bother with the stuff at the conceptual level ?  My answer : because he was a mathematician not an engineer.  And as a mathematician, his mindset always led him to want to be able to PIN THINGS DOWN PRECISELY, with "precisely" here carrying the meaning it has when present in the mind of a PhD in mathematics.  Which is quite different from the meaning the word might have in the mind of the average reader of this post.
And at the conceptual level, you never get to "pin things down precisely" AND THAT'S DELIBERATE.
In those days, there was analysis and there was programming. With a *very* thick Chinese Wall between the two, and often even between the people engaging in one of those two activities (at the time it was typically considered outright impossible for any person to be proficient in both). Analysis was done *on paper* and that paperwork got stored in physical binders ending up in a dust-collecting locker. I even doubt Codd ever got to see any such paper analysis work. He did get to see programs written in the “programming” side of things. Because that’s where his job was : in an environment whose prime purpose was to [develop ‘systems’ software to] support programmers in their “technical” side of the story.
Two, Codd never pretended to address the whole of the data systems development process with his RM. The RM was targeted at a very specific and narrow problem he perceived in that process, as it typically went in those days : that of programmers writing procedural code to dig out the data from where it is stored. He just aimed for a system that would permit *programmers* to do their data manipulation *more declaratively* and *less procedurally/mechanically*. Physical data independence. Nothing more than that. And the environmentals that would make such a thing conceivable and feasible in real life. Codd was even perfectly OK with not even considering how the data got into the database ! His first proposal for a data language, Alpha, *did not have INSERT/DELETE/UPDATE* ! He was perfectly fine leaving all those IMS shops as they were and do nothing but add a “mapping layer” so what came out of the mapping layer was just a relational view of data that was internally still “hierarchical”. I could go on and on about this, but my point here is : calling it a “mistake” that someone doesn’t do something he never intended to do in the first place (and possibly even didn’t have any way of knowing that doing it could be useful), is a bit over the edge.
Disagreement 2
It was claimed that “model translations MUST be automatic”. (The supporting argument being something of the ilk “otherwise it won’t happen anyway”.)
True and understandable (that otherwise it won't happen), but reality won't adapt itself so easily to management desiderata ("automatic" is management speak for "cheap" and that's the only thing that matters) merely because management is management.  Humans adapt, at least when they're not the manager ; reality doesn't.  And the reality is that the path from highly conceptual, highly abstract, highly informal to fully specced out to the very last detail, is achieved by *adding stuff*.  And *adding stuff* means design decisions taken along the way.  And automated processes are very inappropriate for making *design decisions*.  (By *adding stuff* I merely mean *adding new design information to the set of already available design information*, I do not mean adding new symbols or tokens to an already existing schema or drawing that is already made up in some syntax.)
When can automated systems succeed in making this kind of design decisions ? When very rigid conventions are followed. E.g. when it is okay that *every entity* modeled at the conceptual level eventually also becomes a table in the logical model/database. But that goes entirely counter to the actual purpose of modeling at the *conceptual* level ! If you take such conventions into account at the time you’re doing conceptual-level modeling, then you are deluding yourself because in fact you are actually already modeling at the logical level. Because you are already thinking of the consequences at the logical level of doing things this way or that way. The purpose of conceptual-level modeling is to be able to *communicate*. You want to express the notion that *somewhere somehow* the system is aware of a notion of, say, “customer” that is in some way related to, say, a notion of “order” that our business is about. You *SHOULD NOT NEED TO WORRY* about the *logical details* of that notion of a “customer” if all you want to do is express the fact that these notions exist and are related.
So, somewhat contrary to the undoubtedly wise people in front of the audience, I'm rather inclined to conjecture that if you try to do those "model translations" automatically, you are depriving yourself of the freedom to take those design decisions that are the "right" ones for the context at hand, because the only design decisions that *can* still be taken are those *[hardcoded] in [the implementation of]* the translation process.  And such a translation process can *never* understand the context (central bank vs. small shop on the corner of the street kind of aspects), let alone take it into account, in the way that a human designer can.  That is, you are depriving yourself of the opportunity to come up with the "right" designs.
A third point.
I was also surprised to find how easily even the data architects of the current generation who are genuinely motivated to improve things, seem to have this kind of association that “Codd came up with SQL”. He didn’t and he’d actively turn around in his grave hearing such nonsense (he might also just have given up turning around because it never ends). He came up with the relational model. The *data language* he proposed himself was called Alpha. Between Alpha and SQL, several query languages have seen the light of day, the most notable among them probably being QUEL. SQL is mostly due to what good old Larry did roundabouts 1980. It is relatively safe to assume that, once SQL was out, Codd felt about it much the same way that Dijkstra felt about BASIC and COBOL : that it was the most horrendous abomination ever conceived by a human. But that (neither the fact that the likes of Codd *have* such a denigrating opinion, nor the fact that they’re right) won’t stop adoption.

Monday, July 14, 2014

Conceptual vs. Logical modeling, part IV & conclusion

Other kinds of constraint

Still referring to the example model at

http://www.databaseanswers.org/data_models/assets/index.htm

I now want to draw your attention to that very peculiar construct close to the center of the image.  Thing is, I have never seen such a symbol before in a conceptual data diagram, and I suspect the same will hold for most of you.

So what does it express ?  The question alone illustrates an important property of using (any) language to express oneself : if you use a word or a symbol that the audience doesn't know/understand, they won't immediately get your meaning, and they'll have to resort to either guessing your intended meaning from context, or else asking you to clarify explicitly.  And if your models and/or drawings end up just being stored in some documentation library, there's a good chance that the readers won't have you around anymore for directly asking further clarification from the original author.  Leaving "guessing the meaning from context" as the only remaining option.  (As said before, guesswork always has its chance of inducing errors, no matter how unlikely or improbable.)

So, since the original author isn't available for asking questions to, let's just do the guesswork.  I guess that this curious symbol intends to express something that might be termed "exclusive subtyping" (I'm not a great fan of the word "subtyping" in contexts of conceptual modeling but never mind).  It expresses that an asset can be a "Financial" asset, or a "Physical" asset, or an "Information" asset, but never two or more of those three simultaneously.  We already touched on this, slightly, in the discussion of referential integrity : the lines from the three subtypes can be seen as relationships, of which only a single one can exist.  I'm pretty sure at one point or other, you've already run into the following notation to denote this :
 
+-----------------+
! ASSETS          !
+-----------------+
!...              !
+-----------------+
   |   |   |
 \-o---o---o-/
   |   |   |
   |   |   +-----------------------+
   |   |                           |
   |   +---------------+           |
   |                   |           |
+------------------+ +-------+ +--------+
! FINANCIAL_ASSETS ! ! ...   ! ! ...    !
+------------------+ +-------+ +--------+
!...               ! ! ...   ! ! ...    !
+------------------+ +-------+ +--------+


And this more or less raises the question, "then what about the cardinalities of those relationships ?".  The point being, the example model doesn't seem to give us any information about this.  Can there be only one, or can there be more, FINANCIAL_ASSETS entries for each ASSET ?  Can there be zero FINANCIAL_ASSETS associated with a given ASSET (even if the attributes at the ASSETS level tell us it is indeed a "financial" asset) ?  Can there be only one ASSETS associated with each FINANCIAL_ASSET, or can there be more, or even zero ?  Strictly speaking, no answer to be found in the drawing.  Guesswork needed !



Arbitrarily complex constraints

And beyond these, there are of course the "arbitrarily complex" constraints.

"The percentages of the ingredients for a recipe must sum up to 100 for each individual recipe".  No graphical notation I know of allows to specify that kind of thing.  Nevertheless, rules such as these are indeed also a part of a logical information model.



Conclusion


In general, for each information system that manages (and persists) data for its users, there exists some set of "metadata" (I'm not a big fan of that term, just using it for lack of better here and hence in scare quotes) that conveys ***every*** little bit of information about :

(a) the structure of the user's data being managed
(b) the integrity rules that apply to the user's data being managed

Such a "complete" set of information is what I call a "logical information model".



Anything less than that, i.e. any set of information that might leave unanswered some possible developer question concerning either the structure of, or the integrity in, some database, necessarily becomes a "conceptual information model" by that logic.  Note that such a definition makes the term "conceptual information model" cover the entire range of information models, from those that "leave out almost nothing compared to a fully specced logical one", to those that "leave out almost everything compared to a fully specced logical one".  Some of the existing notations are deliberately purposed for very high levels of abstraction, some of them are deliberately purposed to allow documenting much finer detail.

Thing is, all of the existing notations are purposed for _some_ level of abstraction.  Even ORM with its rich set of symbols that allows for way more expressiveness than plain simple basic ER, has more than enough kinds of things it cannot express (perhaps more on that in a next post).  And hence any information drawn in any of the available notations, must then be regarded as a conceptual model.

Some of them will expose more of the nitty gritty details, and thus perhaps come somewhat closer to the "truly logical" kinds of model ; others will expose less detail, thus deliberately staying at the higher abstraction levels, and thus staying closer to what might be considered "truly conceptual".  But none of them allows specifying every last little detail.  And the latter is what a true logical information model is all about.

Monday, July 7, 2014

Conceptual vs. logical modeling, part III

Conceptual vs logical modeling of Referential Integrity

In the previous post, a somewhat detailed inspection was made of :

(a) how popular modeling approaches support the documenting of a very common class of database constraints, categorizable as "uniqueness constraints"
(b) how those modeling approaches get into problems when the task is to document _all_ the uniqueness rules that might apply to some component of some design (and how in practice this effectively leads to under-documenting the information model)
(c) how those same modeling approaches also get into problems when the task is to document certain other database constraints that also define "uniqueness" of some sort, just not the sort usually thought of in connection with the term "uniqueness".

This post will do the same for another class of constraints of which it is commonly believed that they can be documented reasonably well, i.e. the class of "foreign key" constraints.  Whether that belief is warranted or not, depends of course on your private notion of "reasonable".

Reverting to the "Assets" example model referenced in the initial post of this little series, we see the "Assets" entity has two relationships to "parent" entities.  They respectively express that each Asset "is of some category", and "is of some Asset_type".  (Aside : the explanations in the "Asset_Categories" entity are suspiciously much alike the three "Asset Detail" entities at the bottom, betraying a highly probable redundancy in this model.  But it will make for an interesting consideration wrt views.  End of aside.)



What is _not_ expressed in models such as the one given.

Assets has two distinct relationships to "parent" entities.  That means that there will be _some_ attribute identifying, for each occurrence of an Asset, which Asset_Category and which Asset_Type it belongs to (it was already observed that some dialects omit these attributes from their rectangles, this changes the precise nature of the "problem" here only very slightly).  But what is _not_ formally and explicitly documented here, is _which_ attribute is for expressing _which_ relationship.

Now, in this particular example, of course it is obvious what the true state of affairs is, because the names of the attributes are identical between "child" and "parent" entity.  But this rule/convention as such is certainly not tenable in all situations, most notably in bill-of-material structures :

+---------------+   +----------------------------+
! THINGY        !--<! CONTAINMENT_OF_THINGIES    !
+---------------+   +----------------------------+
! ThingID    ID !--<! ContainingThingID       ID !
+---------------+   ! ContainedThingID        ID !
                    +----------------------------+

So _conventions_ will obviously have to be agreed upon to help/guarantee full understanding, or else guesswork may be needed by the reader of such models, and that guesswork constitutes tacit assumptions of some sort, even if there is 99.9999% likelihood the guesswork won't be incorrect.



We've stated that it cannot be documented "which attribute is for expressing which relationship".  A slight moderation is warranted.  It is perfectly possible to convey this information by making the connecting lines "meet the rectangles" precisely at the place where the attribute in question is mentioned inside the rectangle (that is, of course, if it is mentioned there at all).  Kind of like :

+---------------+    +----------------------------+
! THINGY        !    ! CONTAINMENT_OF_THINGIES    !
+---------------+    +----------------------------+
! ThingID    ID !-+-<! ContainingThingID       ID !
+---------------+ +-<! ContainedThingID        ID !
                     +----------------------------+


This technique documents reasonably well that the relevant attribute pairs are (ContainingThingID, ThingID) and (ContainedThingID, ThingID).

It will be clear that this will work well only for "singular" (non-composite) FKs, and at any rate even then any possibility of crossing lines or so might lead to some degree of obfuscation.  (Once again, I leave it for you to ponder whether the popular belief that composite keys aren't such a very good idea, is due precisely to these notational problems.  "You shouldn't do that because you can't document it in the drawings.")



Reverting to the original Assets example model, another thing not formally expressed (to the fullest) is the _optionality_ of the relationship.  If the vertical crossing bar at the "Asset_Categories" side of the relationship means, "ALWAYS EXACTLY 1", then there isn't a problem.  But what if the relationship were actually that each "Asset" CAN be of at most one Asset_Category, but perhaps as well OF NONE AT ALL ?  In "almost-relational" systems, this could perhaps at the logical level be addressed by making the corresponding FK nullable, but with truly relational systems, this isn't an option.  In that case, the same technique would have to be applied at the database level as is done for many-to-many relationships : an "intersection" thing would have to be defined that "materializes" the relationship.  But if we do that, where and how do we document the name of this structure that achieves this materialization ?  We could give it its separate rectangle, but this technique is sometimes criticized for creating entity bloat, and is sometimes considered undesirable, claiming it "obfuscates" to some extent the things that the drawn model is _mainly_ intended to convey to the user/reader.



Referential integrity bis

Just as was the case with uniqueness constraints, Foreign Key constraints get their bit of extra "twists" when the Temporal (historical data) dimension is added.  To begin, "temporal foreign key constraints" will almost always, and of necessity, be composite.  Whatever serves as an identifier to identify whatever thingy is relevant, PLUS a range of time.  The aforementioned problems with documenting composite foreign keys apply almost by definition.

Second, (this too is analogous to the situation with uniqueness constraints) for the range attributes that participate in a foreign key, there is a possible distinction to be made between the "traditional", equality-based treatment, and a treatment "for every individual point implied by the range".  And just as was the case with uniqueness constraints, it is even possible for a temporal foreign key to comprise >1 range attribute, and for some of those to have the equality-based treatment (matching rows in the referenced table must have exactly the same range value), while others have the "individual implied points" treatment (matching rows in the referenced table are only required to "cover" the implied points with a range value of their own).  A notation for documenting the distinction is needed, but difficult to conceive.



Referential integrity ter
Referential integrity to a view.  We've mentioned the "Categories of assets" as an aside earlier on.  Presumably, if an Asset is a "Financial_Asset", then its corresponding Asset_Category_Code must have a certain particular value.  Iow, the FK implied by that line from Financial_Asset to Assets is not really one from the Financial_Assets table to the Assets table ; rather, it is an FK from the Financial_Assets table to a subsetting(/restriction) view of the Assets table, something akin to ... REFERENCES (SELECT ... FROM ASSETS WHERE Asset_Category_Code = ...) ...
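(Spelled out as a check rather than a declaration — with field names made up for the example — the rule amounts to this Python sketch : the set of referencable identifiers is not the whole of ASSETS but only the "financial" subset of it.)

assets = [
    {"asset_id": 1, "asset_category_code": "FIN"},
    {"asset_id": 2, "asset_category_code": "PHY"},
]
financial_assets = [{"asset_id": 1}, {"asset_id": 2}]      # the second one violates

referencable = {a["asset_id"] for a in assets
                if a["asset_category_code"] == "FIN"}      # the restriction "view"
violations = [fa for fa in financial_assets
              if fa["asset_id"] not in referencable]
print(violations)                                          # [{'asset_id': 2}]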

The notational problem with our rectangles and connecting lines is completely analogous to the case with "keys on views" : we could document such things by giving the view its own rectangle, and then we've shifted the problem to documenting the definitional connect between two distinct rectangles in the schema, or we can settle for not documenting it at all and hope the info will not be lost in communication, meaning the subsequent readership of our model will have to make all the necessary assumptions, and not a single one more, and make them all correctly.  Might be a tall order.



So once again, it seems we can conclude that for certain classes of referential rule on the data, our graphical ways of modeling can suffice to document such a rule, but they certainly don't suffice to document, clearly, all possible cases of referential rule on data.  The more bits and pieces abstracted away by a graphical modeling language (= the smaller the symbol set of the language), the more conventions will need to be assumed in order to get effective and accurate communication between modeler and reader, and the more cases there will be that are, formally speaking, not documentable because the language's symbols set doesn't allow expressing some nitty gritty detail that "deviates from the usual conventions".



(Still to be continued...)

Friday, June 20, 2014

Conceptual vs. logical modeling, part II

Integrity

In the previous post, a somewhat detailed inspection was made of possible approaches for specifying, in a modeling language, some database structure, highlighting differences in informational value between those modeling languages, as well as pointing out some pieces of relevant information the specification of which is typically left completely unsupported by any of them.

But a full formal spec of a database is not only about its structure, it is also about any additional rules that apply to the constituent components of that structure.  This observation holds, regardless of whether the database is a relational one (and its constituent components are what TTM calls "relation variables", "tables" in SQL) or a graph-based or hierarchical one (and its constituent components are nodes and edges).  I'll be speaking of relvars (TTM abbreviation for relation variables) in what follows, but keep in mind that the same should apply as well, mutatis mutandis, to hierarchical and "graph-ical" models.

While the aspect of 'structure' can reasonably well be modeled in "graphical" languages (such as the various ER dialects and UML), that is much less the case with the aspect of the integrity constraints between the components of that structure.  How come ?

The essential reason is that the nature of an integrity rule/constraint can really be just anything at all.  Its "structure" is constrained only by the fact that it must be expressed exclusively in terms of the relvars that make up the database structure.  At the logical level, where all the formal details of the relvars have been fully specced out in some given language, this is achieved "easily" enough using some language based on/inspired by mathematics.  Just spell out the predicate that makes a violation a violation.  But how to devise a language that supports expressing "anything at all" at the conceptual level ?  The answer is you can't.  The only thing you can do is try to taxonomize the set of all possible constraints into certain well-defined "classes" that might indeed be expressible.  That is (sort of) exactly what has happened in database modeling land (*).  From ER modeling over IDEF1X to Halpin ORM : the set of all possible constraints is subsetted according to certain chosen criteria, and for each "identifiable" subset, a notation is devised to facilitate documenting constraints belonging to that subset.  A modeling language such as ER leaves it at that ; Halpin's "Big Brown Book" explicitly adds a (fourteenth, I believe) category "others", the leftovers that still aren't expressible using the modeling language's available symbols.

Anyway.  The fact alone that a powerful modeling approach such as ORM still has this category of "leftovers" in the constraints realm, should already suffice to show that _complete_ and fully formal specifications are in fact unachievable without a mathematical language.  For the typical categories of constraints other than those "leftovers", however, the belief seems to be fairly widespread, and firm, that current mainstream notations suffice to document all the stuff we need to know/convey about our databases.  That belief is not entirely warranted, imo, and in this post I'll be illustrating a couple of issues relating to the (common) class of uniqueness constraints.  A subsequent post will do the same for referential integrity/foreign key constraints.

(*) the sad byproduct of this state of affairs is that if one uses the word "constraint" in a database discussion, SQL practitioners will often think you must be talking either of a UNIQUE constraint or a foreign key, overlooking the fact that "there are more types of constraint between heaven and earth than are expressible in ER, and supported by SQL".



Uniqueness

The example case from the previous post is not very well suited to illustrate issues with documenting uniqueness rules.  None of the entities in that example are directly suitable for illustrating the points I want to make, so for the sake of this discussion, I'm somewhat forced to resort to a totally different example, which is likely to look ridiculous in the eyes of business modelers, but I won't mind that for the time being.

Let's say we want to model the operation of numeric addition - if you really can't bear the thought, imagine you are Euclid or Pythagoras, that arithmetic has not been invented yet and you are in the process of doing exactly that, using the latest database design technology (and with apologies upfront for my ascii modeling) :

+---------------+
! ADDITION      !
+---------------+
! N1     number !
! N2     number !
! SUM    number !
+---------------+

(Yes, and the table is indeed like

+----+----+-----+
! N1 ! N2 ! SUM !
+----+----+-----+
!  1 !  1 !   2 !
!  1 !  2 !   3 !
!  ...          !
!  2 !  1 !   3 !
!  2 !  2 !   4 !
!  ...          !
+---------------+

!!!!!!!!)



I'm pretty sure if I'd ask you what the key is here, you'd reply with "N1 and N2 combined, of course".  You get 33% from me for that answer.  There are three keys here : {N1 N2}, {N1 SUM}, and {N2 SUM}.  Granted, of course, there is also the matter of which ones you ACTUALLY WANT ENFORCED (that's what you're referring to if you wanted to argue that these latter two "do not identify an addition").  If we wanted to "enforce" the obvious consistencies/equivalences between expressions of addition and expressions of subtraction, we would indeed need to model and enforce all three (hehe.  settled that.).
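(If you want to convince yourself, here's a small Python sketch that checks each two-attribute combination for key-ness over a finite sample of ADDITION tuples.  Uniqueness over a finite sample can of course only refute key-ness, never prove it, but for addition the arithmetic itself guarantees that any two of {N1, N2, SUM} determine the third.)

from itertools import combinations

rows = [{"N1": n1, "N2": n2, "SUM": n1 + n2}
        for n1 in range(1, 6) for n2 in range(1, 6)]

def projection_is_unique(attrs):
    projections = [tuple(row[a] for a in attrs) for row in rows]
    return len(projections) == len(set(projections))

for attrs in combinations(("N1", "N2", "SUM"), 2):
    print(attrs, projection_is_unique(attrs))     # all three combinations pass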

Now how would you document all three of them in your model of "annotated rectangles" ?  You're in trouble !  (In fact, I think it is precisely _because_ of this notational problem to document multiple composite keys inside one single rectangle, that the notion of "primary" key (as distinct from "secondary / ternary / auxiliary / ..." key ????) has become so widespread as it has, and furthermore that the practice of ID-ifying really just everything has become as popular and widespread as it has.  I leave it for you to ponder whether that's a case of putting the cart before the horse or not - or one of redefining/retrofitting the problem such as to suit the most desirable solution.)



Anyway.  Supposing we do want to enforce all three keys.  How can we document this in our drawing ?  Observe in particular that each key consists of >1 attribute and each attribute effectively participates in >1 key.  The only way I can imagine to convey all the key information in our rectangle is like this :

+---------------+
! ADDITION      !
+---------------+-------+
! N1     number ! K1,K2 !
! N2     number ! K1,K3 !
! SUM    number ! K2,K3 !
+---------------+-------+

Very much like the approach of putting a P in front of the attributes, but it takes attentive and careful deciphering to read the drawing and capture the keys information correctly !  (And the sad byproduct of omitting the full keys information, e.g. for readability sake, in diagrams such as these, is indeed that typically not all keys are properly identified, let alone enforced.)

In fact, the most readable way to convey all of this information about uniqueness rules, seems to be exactly by just using syntax very similar to declarative DDL, or the declarative portion of a D language :

UNIQUE {N1,N2} , UNIQUE {N1,SUM} , UNIQUE {N2,SUM} or
UNIQUE { {N1,N2} {N1,SUM} {N2,SUM} }

And here again we are seemingly headed toward a similar conclusion : if you want to be precise _AND_ complete in what you are stating about the nature of the database that you are documenting, then of necessity you MUST resort to a language that has a much higher expressiveness than the ones you typically have available when modeling at a "higher" level of abstraction.

Incidentally, in notations such as Data Vault (a "hub" to represent the entity and separate rectangles connected to the "hub" for each attribute), the problem of documenting keys is even worse.  The only graphical way(s) I can imagine to document the existence of some meaningful "grouping" of attributes, such as their belonging to the same key, will invariably make the diagram equally unreadable because of extraneous line bloat.  Whether you try to do it by surrounding them with dotted lines or so, or by creating a new symbol for documenting the existence of a key (three extra symbols for the ADDITION Data vault) and connecting each attribute with them as appropriate (six extra connections on the diagram), it's always going to turn your beautiful neatly organized DV diagram into more of a spider web.  Fortunately, Data Vault diagrams are typically used only in DW contexts, not to document keys in the operational source systems they're concerned with, but it still goes to show that whatever conceptual notation you use, it only goes as far as it goes and _something_ will always be "missing" from it.



Uniqueness bis

Managing temporal data is somewhat of a long-standing problem in database land.  A thorough analysis of the _nature_ of the problem (and what is needed to address it) can be found in "Temporal Data & the Relational Model", pages 1-857, so I'm not going to re-iterate all of that here, but one particular problem dealt with is "temporal uniqueness".  (Aside : if you haven't yet read the book but are interested to do so, don't go order or search it now.  Updated and revised edition is to appear within a couple of months.)

Say you have

+-----------------+
! ASSET_VALUATION !
+-----------------+
! ASSET_ID     ID !
! FROM       date !
! TO         date !
! VALUE    number !
+-----------------+

or

+-----------------+
! MARRIED_TO      !
+-----------------+
! PERSON1_ID   ID !
! PERSON2_ID   ID !
! FROM       date !
! TO         date !
+-----------------+

and you want to enforce a constraint "no single date giving >1 distinct values for same ASSET_ID", or "no one married to >1 other person on same date".

The "traditional" interpretation of what a key is, will not allow you to express this.  No "traditional", equality-based, key will ever prevent overlaps between various FROM-TO combinations for the same person/asset/...  So perhaps you might be inclined to conclude "that is not a real key".  Interestingly, The "Temporal Data" book proposes to rearrange matters a bit so that expressing the constraint does become possible, and indeed in the form of "specifying a key" at that :

+-------------------+
! ASSET_VALUATION   !
+-------------------+
! ASSET_ID       ID !
! DURING date_range !
! VALUE      number !
+-------------------+

or

+-------------------+
! MARRIED_TO        !
+-------------------+
! PERSON1_ID     ID !
! PERSON2_ID     ID !
! DURING date_range !
+-------------------+

... WHEN UNPACKED ON (DURING) THEN KEY {ASSET_ID DURING} ...
... WHEN UNPACKED ON (DURING) THEN KEY {PERSON1_ID DURING} KEY {PERSON2_ID DURING} ...
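(Operationally, the WHEN UNPACKED ON (DURING) key amounts to the following Python sketch — ranges are represented here, purely for the example, as inclusive (from, to) pairs of day numbers : expand every row into the individual days its range covers, then demand ordinary key uniqueness on the expanded form.)

from collections import Counter

asset_valuation = [
    {"asset_id": 1, "during": (1, 10), "value": 100},
    {"asset_id": 1, "during": (8, 15), "value": 120},   # overlaps days 8-10
]

claims = Counter()
for row in asset_valuation:
    start, end = row["during"]
    for day in range(start, end + 1):
        claims[(row["asset_id"], day)] += 1

violations = [k for k, count in claims.items() if count > 1]
print(violations)            # the (asset_id, day) pairs claimed by more than one row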


The graphical languages such as ER that we typically use for information modeling, still let us down, somewhat, in the case we'd want to specify that level of detail.  In addition to the composite nature of the key, we'd also need to express the semantics of the "WHEN UNPACKED ON" part : namely that the values for the DURING attribute must be interpreted in a "for each individual date value covered by the range" kind of way.  The closest we could come to denoting that might be something like this :

+-------------------+
! MARRIED_TO        !
+-------------------+-----------+
! PERSON1_ID     ID ! K1        !
! PERSON2_ID     ID ! K2        !
! DURING date_range ! K1_T,K2_T !
+-------------------+-----------+

Of course, the notational problem with denoting multiple keys to the relvar has not disappeared, nor has the possible participation of a single attribute in >1 key, just an extra little bit of codification has been added (suffixing _T to a key name on the lines for the attributes where it applies) to denote the extra bit of semantics covered.  It will be clear that while such tricks are indeed possible, and potentially helpful in denoting modeled solutions to a problem that is indeed very common, once again such solutions can only go as far as they do, and taking things further will ultimately result only in making the models we draw ultimately unreadable.  Specifically in the context of temporal data management and temporal keys, observe for example that it is not necessarily the case that all range-valued attributes will _always_ have the _T "interpretation" for _all_ the keys in which they participate :

+----------------------------------+
! NOT_BEST_OF_EXAMPLES_BUT_ANYWAY  !
+----------------------------------+------+
! PRESIDENTIAL_TERM     year_range ! K1   !
! DURING                date_range ! K1_T !
! PRESIDENT_NAME               ... !      !
+----------------------------------+------+

( think   70-74 : 70-73 : NIXON   &&   70-74 : 74-74 : FORD )

Once again the conclusion seems warranted that extending expressiveness/notational support beyond current common practices, will quickly result in making the models more unreadable and thus less informative, rather than more informative.



Uniqueness ter

Another variation on the theme of uniqueness rules, is the problem of enforcing uniqueness on only a (proper) subset of all occurrences of an entity type.  Say you have

+----------------------------------+
! CAR_LICENSE_PLATE                !
+----------------------------------+------+
! CAR_CHASSIS_ID               ... ! K1   !
! TAX_LICENSE_PLATE            ... ! K2   !
! CAR_STILL_IN_ACTIVE_USE     bool !      !
+----------------------------------+------+

and for the purpose of re-using license plate numbers, you want to enforce license plate uniqueness (key K2) only for those cars that are still in active use (there are admittedly better solutions to this problem than the one modeled here, which I sometimes call the ultra-poor man's historical database, but it does serve to illustrate my point).

The aspect of the problem that makes the "key" "not documentable" is precisely the subsetting rule, i.e. the fact that the "key" is not to be enforced on the whole CAR_LICENSE_PLATE entity, but just on the subset of it that, once the database is implemented in SQL, could be found by issuing SELECT ... FROM CAR_LICENSE_PLATE WHERE CAR_STILL_IN_ACTIVE_USE = true;
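(At the logical/SQL level the rule is perfectly enforceable, by the way — the trouble is purely with documenting it in the rectangles.  A minimal sketch using Python's bundled sqlite3 and a partial unique index, assuming an SQLite build recent enough to support partial indexes :)

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE car_license_plate (
        car_chassis_id           TEXT NOT NULL UNIQUE,
        tax_license_plate        TEXT NOT NULL,
        car_still_in_active_use  INTEGER NOT NULL
    )
""")
con.execute("""
    CREATE UNIQUE INDEX active_plate
        ON car_license_plate (tax_license_plate)
        WHERE car_still_in_active_use = 1
""")
con.execute("INSERT INTO car_license_plate VALUES ('A', '1-ABC-123', 0)")
con.execute("INSERT INTO car_license_plate VALUES ('B', '1-ABC-123', 1)")   # fine
try:
    con.execute("INSERT INTO car_license_plate VALUES ('C', '1-ABC-123', 1)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)      # a second *active* use of the plate is refused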

If we absolutely wanted to be able to document the existence of this key, using the available means of adding a "Kn" annotation in the rectangles, we'd have to add a separate rectangle for the subsetted CAR_LICENSE_PLATE entity, and then we'd have shifted the problem to documenting the definitional connect/dependency of this new rectangle with/on the "original" one, the "full" entity.  That is, we've transformed the problem into one of conceptually documenting "view definitions" (and that very idea is probably seriously questionable in itself already because including the "definitional connect" smacks quite a bit of conflating conceptual/logical).  Once again, our modeling language will let us go only as far as it goes.



(still to be continued)

Friday, June 6, 2014

Conceptual vs. Logical modeling (once again)

http://www.databaseanswers.org/data_models/assets/index.htm

was presented to me as an example case in a discussion on [data] modeling at

https://www.linkedin.com/groups/data-model-SAP-Bill-Material-2357895.S.270612382

(group membership is required to view) and in particular as a case for fleshing out the distinctions between what I call 'conceptual' and 'logical' modeling.  That distinction being that "conceptual is informal, logical is formal", or "conceptual is typically incomplete, logical is always complete".  "Complete" in the sense of "full disclosure of all relevant information".  This post intends to clarify further what I mean by that, exactly.

One model being incomplete and another one being complete means there are differences in "informational value".  What & where are those differences ?  The discussion will be split into two distinct aspects : one of structure and one of integrity (and the integrity part will be kept for a later post).

Structure.

Take a look at a single rectangle in the example model, say "Assets".  Disregard the mentions of PK/FK for the time being, they're related to integrity, not to structure per se.

What information is present in the rectangle ?
  • a rectangle heading telling us that the rest of the rectangle informs us further of the nature of a concept called "Assets".
  • a rectangle body telling us that the concept "Assets" is defined to have properties named "Asset_ID", "Asset_Name", ...
What information is _not_ present in the rectangle ?
  • Most notably, the type information for each property.  Now, I've seen many similar models that _did_ include this information.  Usually limited to the set of type names known to be supported by the DBMS the system was going to be implemented on (so much for DBMS-agnostic modeling !).  Sometimes the set of type names used would include things such as "COORDINATE".  Indicating that some domain/type of that name is supposed to exist, and supposed to be known and understood correctly by the reader.  And it's the double "supposed" in there that makes such models informal/incomplete still.
  • A very nasty one, and very hideous : optionality !!!  Take a look at the Date_Disposed_of property.  Is that property going to have an "assigned value" for each and every occurrence/instance of an "Assets" type ???  Presumably not.  While it is not invalid per se to introduce some kind of concept of "nullability" at the conceptual level (of entity attributes), the thing is : the logical level's "full disclosure" requirement implies that the diagrams must then show that information !!!  (I've seen at least one dialect that used '#'/'*' symbols on the left side in the rectangles to achieve this.)  And the second thing is : diagramming languages such as E/R and UML already have a notion of optionality (albeit not for attributes but between entities), so introducing attribute-level optionality as well will, of necessity, create multiple ways of saying the same thing.  Singling out the 'disposed_of' attribute in its own "Assets_Disposed" entity (with the obvious one-to-at-most-one relationship) will do the job, but is often considered "poor practice" because of the "entity bloat" it creates (and the corresponding reduction of opportunity to "inspect just once and get the whole picture").  Otoh, it is precisely what would _need_ to be done to achieve a relational version of the logical information model, since the relational model does not allow [values for] attributes to be missing.
  • Also not incorporated in the model shown by this example, but the notion does exist : the distinction between "weak" E/R entities and "strong" E/R entities.  There are modeling dialects that would _NOT_ include the "Asset_ID" attribute in the "Asset_Valuations" entity, and the reader is supposed to infer, somehow, that "Asset_Valuation" is indeed a "weak" entity and additionally requires its "parent entity"'s primary key before "an occurrence of it can come into existence".  This particular approach induces interpretation ambiguities in two cases : (a) the parent entity has >1 identifying key (solvable only by introducing yet other artefacts such as distinctions between "primary" and "secondary" keys), and (b) the child entity has two relationships (think bill-of-material structures) to the same parent entity (there will have to be two distinctly named attributes, so you can't assume "same name as the primary key attributes in the parent", but the modeling approach of leaving them unmentioned means you can't specify which will be which ; see the small SQL sketch right after this list).  This actually belongs more in the discussion on constraint specification, so I'll pick the subject up there again if I don't forget it.
  • Also quite hideous as well as nasty : the meaning of the damn thing !!!  Will drawing a rectangle and labeling it "Assets" ensure that the understanding derived from it by some individual will be _exactly_ the same as that derived by some other individual ?  _sufficiently_ the same ?  And even if so, how would you even find out about any differences in understanding ?  Posing these questions is answering them.  Looking at the drawing, all readers will just be nodding yes because with sufficient details abstracted away, your beautiful drawing simply matches their perception of reality.  I used to have a catchphrase : "After you've abstracted away all the differences, whatever remains is identical."
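To make case (b) above concrete, here is a small SQL sketch (names and type choices invented purely for the occasion, and assuming an ASSETS table keyed on Asset_ID) of a bill-of-material structure over "Assets" :

CREATE TABLE ASSET_STRUCTURE (
  ASSEMBLY_ASSET_ID   INTEGER NOT NULL REFERENCES ASSETS (Asset_ID),   -- the "containing" asset
  COMPONENT_ASSET_ID  INTEGER NOT NULL REFERENCES ASSETS (Asset_ID),   -- the "contained" asset
  PRIMARY KEY (ASSEMBLY_ASSET_ID, COMPONENT_ASSET_ID)
);

The two referencing attributes must be distinctly named ; a modeling dialect that leaves the parent's key attributes unmentioned in the child rectangle simply has no place left to record which relationship corresponds to which attribute.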
Conclusion : while it is OK for conceptual models to leave out information such as the things mentioned (note that I do not claim completeness for the list of things I mentioned), a fully formal logical model will always have to include _all_ the pieces of the "puzzle" (a rough SQL-flavoured sketch of a couple of these points follows right after this list) :
  • All the type information.  To begin, that's a complete and fully specced inventory of all the data types that will be used in the rest of the model.  And "fully specced" really means "fully specced" here, e.g., just saying "INTEGER" will _not_ be sufficient if there is any risk that any reader might be misinterpreting the range of numbers "covered" by this name.  Sometimes it _is_ interesting for a user to know that 100000 is not an INTEGER (because >32767), or to know that -1 is not an INTEGER (because only positive numbers were anticipated).  For a central bank deciding to introduce negative interest rates, it might be interesting to know that some of their IT systems had not anticipated this and defined the domain for interest rates something like the range from 0.0000 to 99.9999 ...  And for types such as "coordinate", there is nothing in the name to suggest whether these are 2D or 3D coordinates ( (x,y) pairs vs (x,y,z) triples ).  Formal completeness requires one to state something like :

    COORDINATE : {(x,y,z) | x IN FLOAT && y IN FLOAT && z IN FLOAT}

    This definition itself depends on a definition for a thing called FLOAT.  This one in turn could be defined as

    FLOAT : { (m,e) | m IN NN && e in NN && e>=-128 && e<=127 && m>=... && m<=...}

    Now we depend on a definition for NN.  It will be clear that somewhere, somehow, something inevitably has "got to be given".  Fortunately, that something can be as simple as "the set of Natural Numbers" that everyone should know from 2nd grade math classes, or thereabouts.  And if misunderstandings and/or communication problems boil down to a lack of agreement/common understanding of what the set of natural numbers is, well then there will be very little any modeling language/methodology could possibly do to address that.
  • Assuming we are defining a fully formal logical structure according to the relational model (as distinct from "the graph-based model", which may be conceivable/imaginable, but has unfortunately never been elaborated/formally spelled out the same way the RM has been), _all_ the attributes of the relational structures, plus the type they're of (those types having been formally defined in the previous step).

    concrete example :

    ASSETS : { (Asset_ID, Asset_Category_Code, Asset_Type_Code, Asset_Name, Asset_Description, Date_Acquired, Date_Disposed_of, Other_Details) |
            Asset_ID in ... &&
            Asset_Category_Code IN ... &&
            Asset_Type_Code IN ... &&
            Asset_Name IN ... &&
            Asset_Description IN ... &&
            Date_Acquired IN GREG_CAL_DATE &&
            Date_Disposed_of IN GREG_CAL_DATE &&
            Other_Details IN ... }
  • Still assuming we are defining a fully formal logical structure according to the relational model (such that attributes cannot ever be null), the relational structures will be split out in separate parts whenever some attributes of a conceptual entity are optional.

    concrete example :

    ASSETS : { (Asset_ID, Asset_Category_Code, Asset_Type_Code, Asset_Name,
                        Asset_Description, Date_Acquired, Date_Disposed_of, Other_Details) |
                Asset_ID in ... &&
                Asset_Category_Code IN ... &&
                Asset_Type_Code IN ... &&
                Asset_Name IN ... &&
                Asset_Description IN ... &&
                Date_Acquired IN GREG_CAL_DATE &&
                Other_Details IN ... }
    ASSETS_DISPOSED : { (Asset_ID, Date_Disposed_of) |
                 Asset_ID in ... &&
                 Date_Disposed_of IN GREG_CAL_DATE }
  • And it will also have to include a precise statement of the so-called "external predicate" for each relational structure so defined.  Don't underestimate the importance of this.  An SQL table has a very precise intended meaning, and too often I've seen maintenance developers think "oh I can reuse this table for my current purpose", looking exclusively at its structure and ignoring/denying/disregarding the precise, current intended meaning completely.  It is in fact because of this associated intended meaning that "re-using existing database tables" is, in principle, the WORST POSSIBLE IDEA a db designer can have.  Except if he is 100% certain that the current external predicate matches _exactly_ with the "external predicate" he has to "introduce" into the database for achieving "his current purpose".  This is most unlikely to be the case, except if the table is an EAV table, and I've already dealt with why that approach sucks (in 99.9999% of the cases).

    (Incidentally, if you were struck by a remarkable resemblance between the way the data type definitions were stated in the foregoing point, and the way the relational structures were stated in the last, it is exactly this aspect of their being related or not to such an "external predicate" that makes the difference.  Data type definitions are just that, formal ways to define _values_ that are usable in the _relational structures that will make up the database_ to represent _meaning_.  Values in data types do not carry meaning, the relational structures that make up the database do.  E.g. "The asset identified by §Asset_ID§ has been disposed of on §Date_Disposed_of§".  Note the placeholders in between §§ marks, and that the placeholders correspond 1-1 with the attribute names.  Such a phrase could be the "external predicate" for the ASSETS_DISPOSED relational structure, and since it defines the logical meaning of the content that will be held in the concerned relational structure, it should always be an integral part of the logical model defining the database.)
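As announced, a rough SQL-flavoured sketch of two of these points : a type whose value range is an explicit part of its definition, and the "split out the optional attribute" structure.  (PostgreSQL-style ; all column type choices are invented purely for the occasion and are not part of the example model, and SQL itself is of course not the fully formal modeling language the next paragraphs are about.)

-- a type whose value range is stated explicitly rather than lurking implicitly
-- (mirroring the interest-rate anecdote above)
CREATE DOMAIN INTEREST_RATE AS NUMERIC(7,4)
  CHECK (VALUE >= 0.0000 AND VALUE <= 99.9999);

-- "no attribute can be missing" : the optional Date_Disposed_of lives in its own table
CREATE TABLE ASSETS (
  Asset_ID       INTEGER PRIMARY KEY,        -- hypothetical type choices throughout
  Asset_Name     VARCHAR(100) NOT NULL,
  Date_Acquired  DATE NOT NULL               -- other attributes omitted
);

CREATE TABLE ASSETS_DISPOSED (
  Asset_ID          INTEGER PRIMARY KEY REFERENCES ASSETS (Asset_ID),
  Date_Disposed_of  DATE NOT NULL
);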
Next question : and what kind of modeling language supports all of this ?  There are several.  Or none at all.  Depending on what you're willing to call a "modeling language".

To begin with : a language of pure maths, such as the one used in "Applied Mathematics for Database Professionals".  Very commendable reading, BTW, even if I have to add the footnote that the book doesn't really bother with formal type definitions, contenting itself with relying on a type system such as that offered by Oracle (this had mainly to do with the 9 different DBMSs the authors had been using the most throughout their careers : Oracle 4, Oracle 5, Oracle 6, Oracle 7, Oracle 8, Oracle 9, Oracle 10, Oracle 11 & Oracle 12 - end of jocular sidenote).

Anyways.  A math-like language offers everything we need to express precise type definitions and precise definitions of relational structures such as :

FLOAT : { (m,e) | m IN NN && e IN NN && e>=-128 && e<=127 && m>=... && m<=...}
COORDINATE : {(x,y,z) | x IN FLOAT && y IN FLOAT && z IN FLOAT}

ASSETS_DISPOSED : { (Asset_ID, Date_Disposed_of) | ... }

But we're left in a bit of trouble when wanting to express external predicates for our relational constructs in such a language.  Of necessity, of course : the thing is termed "external" for good reason (external = external to the system of mathematical computation that is the DBMS, hence it's a bit contradictory to expect external predicates to be expressible in math language) !



And those not well versed in using math formulae will of course quibble that they don't see themselves manipulating models expressed in such a language, and that they want an alternative.  Such an alternative exists in the form of a subset of the statements of a programming language such as Tutorial D, in particular the set of statements in that language that most developers will be inclined to label "declarative" statements (the following examples are only loosely inspired by, and not 100% valid, Tutorial D) :

TYPE FLOAT (M INT, E INT) CONSTRAINT E>=-128 && E<=127 && M>=... && M<=... ;
TYPE GREG_CAL_DATE (D INT, M INT, Y INT) CONSTRAINT ........................................... ;

VAR ASSETS_DISPOSED RELATION {Asset_ID ... , Date_Disposed_of GREG_CAL_DATE } ;

Documenting the external predicate for all the VARs (= the relational structures that make up the database, "tables" in SQL) is a matter of adding comments, or (better) some kind of PREDICATE subclause in syntaxes such as the one used here as an example.
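In plain SQL, the nearest commonly available approximation is a table comment, which at least keeps the predicate next to the structure it belongs to, even though the DBMS treats it as free text it does nothing with.  A sketch (PostgreSQL-flavoured) :

COMMENT ON TABLE ASSETS_DISPOSED IS
  'The asset identified by §Asset_ID§ has been disposed of on §Date_Disposed_of§.';

A real PREDICATE subclause inside the declaration itself, as suggested above, is of course the better option.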

The nice thing about such an approach is that these kinds of formal spec are parseable.  In a language such as Tutorial D, it means the logical definition of the database structure could be made known to any program by a mere "import logical_model" directive.  In environments using other languages, it means that stuff such as Hibernate classes can be generated 100% automagically.

And just to show that the syntax (the "how") of the language is, actually, completely irrelevant, and the only thing that matters is the meaning of the information it conveys (the "what"), here's an example of how the same information could be expressed in a (hypothetical) language that is much more OO-like :

immutable class FLOAT {
  int m;
  int e;
  constraint e>=-128 && e<=127 && m>=... && m<=... ;
}
immutable class COORDINATE {
  FLOAT x;
  FLOAT y;
  FLOAT z;
}
database class ASSETS_DISPOSED {
  ... Asset_ID;
  GREG_CAL_DATE Date_Disposed_of;
  predicate "The asset identified by §Asset_ID§ has been disposed of on §Date_Disposed_of§";
}

(If you don't recognize this immediately as an OO-style syntax, imagine "public" in front of every text line in there.)  Some of the "weird-looking" stuff ("immutable class"/"database class"/"constraint"/"predicate") in this example is deliberately intended to illustrate the kind of things that currently existing OO languages are typically still lacking in order to make them more suitable for management of data that resides in a DB, or at least for cleaner and better integration between the programming side of things and the data side of things.



To conclude on the matter of "structure" :

Any language can exist to express an information model.  The degree to which such a language allows one to express EVERY POSSIBLE RELEVANT ASPECT of [the content of] an information model is what determines its degree of suitability for expressing information models that can legitimately be regarded as fully formal logical models.  That scale of suitability is a continuum, but no existing modeling languages/dialects actually achieve the full 100% of the expressiveness that is needed.



I'll be leaving anything to do with integrity for a later post (hopefully).