Friday, January 12, 2018

Afterthoughts on a data architects meetup

Visited a meetup of data architects yesterday. The main topic for me was a presentation with thoughts on our practices of data modeling, provocatively titled “data modeling must die”. It was a very good talk. It defended ideas that have been mine as well for as long as I can remember. However, this post is about a point of disagreement. And another one.
Disagreement 1.
It was claimed that when Codd invented the relational model of data, he also made some serious mistakes. Fair enough, he did. (It may have been the case that many of those mistakes actually only crept in during the later years for reasons and circumstances that were more political than anything else, and that early Codd was even “purer” than the fiercest relational fundamentalist still walking around these days, but that’s another discussion.)
But the mistake being referred to was “inventing the relational model of data on an island”, by which it was meant that his “mistake” was to invent the RM in isolation from other phases of the process of data systems development, such as conceptual modeling.
True, the inventing happened in isolation. But dressing that up as a “mistake” he made is, eurhm, itself a mistake. One that exposes a lack of understanding of the circumstances of the day.
One, it is not even certain imo that “conceptual modeling” as a thing in its own right already existed at the time. Codd’s RM is 1969, Chen’s ER is 1976 (the paper was first presented in 1975). So how *could* he have included any such thing in his thinking ? Here are two quotes from "An Introduction to Database Systems" that most likely illustrate accurately how Codd would probably never even have come up with the RM if he *truly, genuinely* had been "working on an island, separated from any and all of those developer concerns as they typically manifest themselves while working at the conceptual level".
"It is probably obvious to you that the ideas of the E/R approach, or something very close to those ideas, MUST HAVE BEEN (emphasis mine) the informal underpinnings in Codd's mind when he first developed the formal relational model."
"In other words, in order for Codd to have constructed the (formal) relational model in the first place, he MUST HAVE HAD (emphasis mine) some (informal) "useful semantic concepts" in his mind, and those concepts MUST BASICALLY HAVE BEEN (emphasis mine) those of the E/R model, or something very like them."
Readers wanting to read more are referred to chapter 14 of said book, and pg 425 in particular, for the full discussion by Chris Date.
So why did Codd not bother with the stuff at the conceptual level ?  My answer : because he was a mathematician, not an engineer.  And as a mathematician, his mindset always led him to want to be able to PIN THINGS DOWN PRECISELY, with "precisely" here carrying the meaning it has in the mind of a PhD in mathematics.  Which is quite different from the meaning the word might have in the mind of the average reader of this post.
And at the conceptual level, you never get to "pin things down precisely" AND THAT'S DELIBERATE.
In those days, there was analysis and there was programming. With a *very* thick Chinese Wall between the two, and often even between the people engaging in one of those two activities (at the time it was typically considered outright impossible for any person to be proficient in both). Analysis was done *on paper* and that paperwork got stored in physical binders ending up in a dust-collecting locker. I even doubt Codd ever got to see any such paper analysis work. He did get to see programs written in the “programming” side of things. Because that’s where his job was : in an environment whose prime purpose was to [develop ‘systems’ software to] support programmers in their “technical” side of the story.
Two, Codd never pretended to address the whole of the data systems development process with his RM. The RM was targeted at a very specific and narrow problem he perceived in that process, as it typically went in those days : that of programmers writing procedural code to dig out the data from where it was stored. He just aimed for a system that would permit *programmers* to do their data manipulation *more declaratively* and *less procedurally/mechanically*. Physical data independence. Nothing more than that. And the environmentals that would make such a thing conceivable and feasible in real life. Codd was even perfectly OK with not considering how the data got into the database at all ! His first proposal for a data language, Alpha, *did not have INSERT/DELETE/UPDATE* ! He was perfectly fine leaving all those IMS shops as they were, doing nothing but adding a “mapping layer” so that what came out of that layer was just a relational view of data that was internally still “hierarchical”. I could go on and on about this, but my point here is : calling it a “mistake” that someone doesn’t do something he never intended to do in the first place (and possibly even didn’t have any way of knowing that doing it could be useful), is a bit over the edge.
Disagreement 2.
It was claimed that “model translations MUST be automatic”. (The supporting argument being something of the ilk “otherwise it won’t happen anyway”.)
True and understandable (that otherwise it won’t happen), but reality won’t adapt itself so easily to management desiderata (“automatic” is management speak for “cheap”, and that’s the only thing that matters) merely because management is management.  Humans adapt if they're not the manager ; reality doesn't.  And the reality is that the path from highly conceptual, highly abstract, highly informal to fully specced out to the very last detail is achieved by *adding stuff*.  And *adding stuff* means design decisions taken along the way.  And automated processes are very inappropriate for making *design decisions*.  (By *adding stuff* I merely mean *adding new design information to the set of already available design information* ; I do not mean adding new symbols or tokens to a schema or drawing that is already made up in some syntax.)
When can automated systems succeed in making this kind of design decisions ? When very rigid conventions are followed. E.g. when it is okay that *every entity* modeled at the conceptual level eventually also becomes a table in the logical model/database. But that goes entirely counter to the actual purpose of modeling at the *conceptual* level ! If you take such conventions into account at the time you’re doing conceptual-level modeling, then you are deluding yourself because in fact you are actually already modeling at the logical level. Because you are already thinking of the consequences at the logical level of doing things this way or that way. The purpose of conceptual-level modeling is to be able to *communicate*. You want to express the notion that *somewhere somehow* the system is aware of a notion of, say, “customer” that is in some way related to, say, a notion of “order” that our business is about. You *SHOULD NOT NEED TO WORRY* about the *logical details* of that notion of a “customer” if all you want to do is express the fact that these notions exist and are related.
So, somewhat contrary to the undoubtedly wise people in front of the audience, I’m rather inclined to conjecture that if you try to do those “model translations” automatically, you are depriving yourself of the freedom to take the design decisions that are the “right” ones for the context at hand, because the only design decisions that *can* still be taken are those *[hardcoded] in [the implementation of]* the translation process. And such a translation process can *never* understand the context (central bank vs. small shop on the corner of the street kind of aspects), let alone take it into account, the way a human designer can. That is, you are depriving yourself of the opportunity to come up with the “right” designs.
A third point.
I was also surprised to find how easily even the data architects of the current generation who are genuinely motivated to improve things, seem to have this kind of association that “Codd came up with SQL”. He didn’t, and he’d actively turn around in his grave hearing such nonsense (he might also just have given up turning around because it never ends). He came up with the relational model. The *data language* he proposed himself was called Alpha. Between Alpha and SQL, several query languages saw the light of day, the most notable among them probably being QUEL. SQL itself originated as SEQUEL with Chamberlin and Boyce at IBM in the mid-1970s, and its dominance is mostly due to what good old Larry did roundabouts 1980. It is relatively safe to assume that, once SQL was out, Codd felt about it much the same way that Dijkstra felt about BASIC and COBOL : that it was the most horrendous abomination ever conceived by a human. But that (neither the fact that the likes of Codd *have* such a denigrating opinion, nor the fact that they’re right) won’t stop adoption.

Monday, July 14, 2014

Conceptual vs. Logical modeling, part IV & conclusion

Other kinds of constraint

Still referring to the example model at

I now want to draw your attention to that very peculiar construct close to the center of the image.  Thing is, I have never seen such a symbol before in a conceptual data diagram, and I suspect the same will hold for most of you.

So what does it express ?  The question alone illustrates an important property of using (any) language to express oneself : if you use a word or a symbol that the audience doesn't know/understand, they won't immediately get your meaning, and they'll have to resort to either guessing your intended meaning from context, or else asking you to clarify explicitly.  And if your models and/or drawings end up just being stored in some documentation library, there's a good chance that the readers won't have you around anymore for directly asking further clarification from the original author.  Leaving "guessing the meaning from context" as the only remaining option.  (As said before, guesswork always has its chance of inducing errors, no matter how unlikely or improbable.)

So, since the original author isn't available for asking questions to, let's just do the guesswork.  I guess that this curious symbol intends to express something that might be termed "exclusive subtyping" (I'm not a great fan of the word "subtyping" in contexts of conceptual modeling but never mind).  It expresses that an asset can be a "Financial" asset, or a "Physical" asset, or an "Information" asset, but never two or more of those three simultaneously.  We already touched on this, slightly, in the discussion of referential integrity : the lines from the three subtypes can be seen as relationships, of which only a single one can exist.  I'm pretty sure at one point or other, you've already run into the following notation to denote this :
! ASSETS          !
!...              !
   |   |   |
   |   |   |
   |   |   +-----------------------+
   |   |                           |
   |   +---------------+           |
   |                   |           |
+------------------+ +-------+ +--------+
! FINANCIAL_ASSETS ! ! ...   ! ! ...    !
+------------------+ +-------+ +--------+
!...               ! ! ...   ! ! ...    !
+------------------+ +-------+ +--------+

And this more or less begs the question, "then what about the cardinalities of those relationships ?".  The point being, the example model doesn't seem to give us any information about this.  Can there be only one or can there be more FINANCIAL_ASSETS entries for each ASSET ?  Can there be zero FINANCIAL_ASSETS associated with a given ASSET (even if the attributes at the ASSETS level tell us it is indeed a "financial" asset) ?  Can there be only one ASSETS associated with each FINANCIAL_ASSET, or can there be more, or even zero ?  Strictly speaking, no answer to be found in the drawing.  Guesswork needed !
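
For what it's worth, the guessed "exclusive subtyping" rule is easy enough to state as a predicate once you leave the drawing behind.  A minimal sketch in Python (the set contents and the three subtype names are invented for illustration) :

```python
from collections import Counter

# Hypothetical contents of the three subtype "tables", keyed by asset id.
financial_assets = {1, 4}
physical_assets = {2}
information_assets = {3, 5}

def violates_exclusive_subtyping(*subtype_sets):
    """Return the asset ids appearing in more than one subtype set."""
    counts = Counter(a for s in subtype_sets for a in s)
    return {a for a, n in counts.items() if n > 1}

print(violates_exclusive_subtyping(financial_assets, physical_assets, information_assets))
# -> set() : the rule holds for this data
```

The point is not the ten lines of code, but that the rule had to be *guessed* before it could be written down at all.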

Arbitrarily complex constraints

And beyond these, there are of course the "arbitrarily complex" constraints.

"The percentages of the ingredients for a recipe must sum up to 100 for each individual recipe".  No graphical notation I know of allows to specify that kind of thing.  Nevertheless, rules such as these are indeed also a part of a logical information model.


In general, for each information system that manages (and persists) data for its users, there exists some set of "metadata" (I'm not a big fan of that term, just using it for lack of better here and hence in scare quotes) that conveys ***every*** little bit of information about :

(a) the structure of the user's data being managed
(b) the integrity rules that apply to the user's data being managed

Such a "complete" set of information is what I call a "logical information model".

Anything less than that, i.e. any set of information that leaves unanswered some possible developer question concerning either the structure of or the integrity in some database, necessarily becomes a "conceptual information model" by that logic.  Note that such a definition makes the term "conceptual information model" cover the entire range of information models, from those that "leave out almost nothing compared to a fully specced logical one", to those that "leave out almost everything compared to a fully specced logical one".  Some of the existing notations are deliberately purposed for very high levels of abstraction, some of them are deliberately purposed for documenting much finer details.

Thing is, all of the existing notations are purposed for _some_ level of abstraction.  Even ORM with its rich set of symbols that allows for way more expressiveness than plain simple basic ER, has more than enough kinds of things it cannot express (perhaps more on that in a next post).  And hence any information drawn in any of the available notations, must then be regarded as a conceptual model.

Some of them will expose more of the nitty gritty details, and thus perhaps come somewhat closer to the "truly logical" kinds of model, others will expose less detail, thus deliberately staying at the higher abstraction levels, and thus staying closer to what might be considered "truly conceptual".  But none of them allow specifying every last little detail.  And the latter is what a true logical information model is all about.

Monday, July 7, 2014

Conceptual vs. logical modeling, part III

Conceptual vs logical modeling of Referential Integrity

In the previous post, a somewhat detailed inspection was made of :

(a) how popular modeling approaches support the documenting of a very common class of database constraints, categorizable as "uniqueness constraints"
(b) how those modeling approaches get into problems when the task is to document _all_ the uniqueness rules that might apply to some component of some design (and how in practice this effectively leads to under-documenting the information model)
(c) how those same modeling approaches also get into problems when the task is to document certain other database constraints that also define "uniqueness" of some sort, just not the sort usually thought of in connection with the term "uniqueness".

This post will do the same for another class of constraints of which it is commonly believed that they can be documented reasonably well, i.e. the class of "foreign key" constraints.  Whether that belief is warranted or not, depends of course on your private notion of "reasonable".

Reverting to the "Assets" example model referenced in the initial post of this little series, we see the "Assets" entity has two relationships to "parent" entities.  They respectively express that each Asset "is of some category", and "is of some Asset_type".  (Aside : the explanations in the "Asset_Categories" entity are suspiciously much alike the three "Asset Detail" entities at the bottom, betraying a highly probable redundancy in this model.  But it will make for an interesting consideration wrt views.  End of aside.)

What is _not_ expressed in models such as the one given.

Assets has two distinct relationships to "parent" entities.  That means that there will be _some_ attribute identifying, for each occurrence of an Asset, which Asset_Category and which Asset_Type it belongs to (it was already observed that some dialects omit these attributes from their rectangles, this changes the precise nature of the "problem" here only very slightly).  But what is _not_ formally and explicitly documented here, is _which_ attribute is for expressing _which_ relationship.

Now, in this particular example, of course it is obvious what the true state of affairs is, because the names of the attributes are identical between "child" and "parent" entity.  But this rule/convention as such is certainly not tenable in all situations, most notably in bill-of-material structures :

+---------------+   +----------------------------+
+---------------+   +----------------------------+
! ThingID    ID !--<! ContainingThingID       ID !
+---------------+   ! ContainedThingID        ID !

So _conventions_ will obviously have to be agreed upon to help/guarantee full understanding, or else guesswork may be needed by the reader of such models, and that guesswork constitutes tacit assumptions of some sort, even if there is 99.9999% likelihood the guesswork won't be incorrect.

We've stated that it cannot be documented "which attribute is for expressing which relationship".  A slight qualification is warranted.  It is perfectly possible to convey this information by making the connecting lines "meet the rectangles" precisely at the place where the attribute in question is mentioned inside the rectangle (that is, of course, if it is mentioned there at all).  Kind of like :

+---------------+    +----------------------------+
+---------------+    +----------------------------+
! ThingID    ID !-+-<! ContainingThingID       ID !
+---------------+ +-<! ContainedThingID        ID !

This technique documents reasonably well that the relevant attribute pairs are (ContainingThingID, ThingID) and (ContainedThingID, ThingID).

It will be clear that this will work well only for "singular" (non-composite) FKs, and at any rate even then any possibility of crossing lines or so might lead to some degree of obfuscation.  (Once again, I leave it for you to ponder whether the popular belief that composite keys aren't such a very good idea, is due precisely to these notational problems.  "You shouldn't do that because you can't document it in the drawings.")
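
Note that declarative DDL has no such notational problem : each FK clause names its attribute pair explicitly.  A sketch of the bill-of-material case, using SQLite from Python (table and column names are my own, lowercased for SQL) :

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE things (thing_id INTEGER PRIMARY KEY)")
# Each FK clause names its attribute pair explicitly : exactly the
# information the connecting lines in a drawing leave to convention.
conn.execute("""
    CREATE TABLE bill_of_material (
        containing_thing_id INTEGER NOT NULL REFERENCES things (thing_id),
        contained_thing_id  INTEGER NOT NULL REFERENCES things (thing_id),
        PRIMARY KEY (containing_thing_id, contained_thing_id)
    )""")
conn.execute("INSERT INTO things VALUES (1), (2)")
conn.execute("INSERT INTO bill_of_material VALUES (1, 2)")
try:
    conn.execute("INSERT INTO bill_of_material VALUES (1, 99)")  # no thing 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```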

Reverting to the original Assets example model, another thing not formally expressed (to the fullest) is the _optionality_ of the relationship.  If the vertical crossing bar at the "Asset_Categories" side of the relationship means, "ALWAYS EXACTLY 1", then there isn't a problem.  But what if the relationship were actually that each "Asset" CAN be of at most one Asset_Category, but perhaps as well OF NONE AT ALL ?  In "almost-relational" systems, this could perhaps at the logical level be addressed by making the corresponding FK nullable, but with truly relational systems, this isn't an option.  In that case, the same technique would have to be applied at the database level as is done for many-to-many relationships : an "intersection" thing would have to be defined that "materializes" the relationship.  But if we do that, where and how do we document the name of this structure that achieves this materialization ?  We could give it its separate rectangle, but this technique is sometimes criticized for creating entity bloat, and is sometimes considered undesirable, claiming it "obfuscates" to some extent the things that the drawn model is _mainly_ intended to convey to the user/reader.
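
The "intersection thing" technique for an optional, at-most-one relationship can itself be sketched, again in SQLite from Python (all names invented).  The PRIMARY KEY on the intersection table enforces "at most one category per asset", and absence of a row expresses "NONE AT ALL", with no nullable FK anywhere :

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE asset_categories (category_code TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE assets (asset_id INTEGER PRIMARY KEY)")
# The optional relationship is materialized as a separate "intersection"
# table ; its PRIMARY KEY enforces "at most one category per asset".
conn.execute("""
    CREATE TABLE asset_has_category (
        asset_id      INTEGER PRIMARY KEY REFERENCES assets (asset_id),
        category_code TEXT NOT NULL REFERENCES asset_categories (category_code)
    )""")
conn.execute("INSERT INTO asset_categories VALUES ('FIN')")
conn.execute("INSERT INTO assets VALUES (1), (2)")
conn.execute("INSERT INTO asset_has_category VALUES (1, 'FIN')")  # asset 2 : no category
print(conn.execute("SELECT COUNT(*) FROM asset_has_category").fetchone()[0])  # -> 1
```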

Referential integrity bis

Just as was the case with uniqueness constraints, Foreign Key constraints get their bit of extra "twists" when the Temporal (historical data) dimension is added.  To begin, "temporal foreign key constraints" will almost always, and of necessity, be composite.  Whatever serves as an identifier to identify whatever thingy is relevant, PLUS a range of time.  The aforementioned problems with documenting composite foreign keys apply almost by definition.

Second, (this too is analogous to the situation with uniqueness constraints) for the range attributes that participate in a foreign key, there is a possible distinction to be made between the "traditional", equality-based treatment, and a treatment "for every individual point implied by the range".  And just as was the case with uniqueness constraints, it is even possible for a temporal foreign key to comprise >1 range attribute, and for some of those to have the equality-based treatment (matching rows in the referenced table must have exactly the same range value), while others have the "individual implied points" treatment (matching rows in the referenced table are only required to "cover" the implied points with a range value of their own).  A notation for documenting the distinction is needed, but difficult to conceive.
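
The distinction between the two treatments can at least be pinned down in code.  A sketch over integer "ranges", a simplifying assumption (real temporal ranges would be over dates) :

```python
def equality_match(child_range, parent_ranges):
    """'Traditional' treatment : some referenced row must carry exactly
    the same range value as the referencing row."""
    return child_range in parent_ranges

def coverage_match(child_range, parent_ranges):
    """'Individual implied points' treatment : every point implied by the
    child's range must be covered by some referenced row's range."""
    lo, hi = child_range
    covered = set()
    for plo, phi in parent_ranges:
        covered |= set(range(plo, phi + 1))
    return set(range(lo, hi + 1)) <= covered

parents = [(1, 5), (6, 10)]
print(equality_match((2, 4), parents))   # False : no parent row has exactly (2, 4)
print(coverage_match((2, 8), parents))   # True : points 2..8 are all covered
```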

Referential integrity ter
Referential integrity to a view.  We've mentioned the "Categories of assets" as an aside earlier on.  Presumably, if an Asset is a "Financial_Asset", then its corresponding Asset_Category_Code must have a certain particular value.  Iow, the FK implied by that line from Financial_Asset to Assets is not really one from the Financial_Assets table to the Assets table ; rather, it is an FK from the Financial_Assets table to a subsetting(/restriction) view of the Assets table, something akin to ... REFERENCES (SELECT ... FROM ASSETS WHERE Asset_Category_Code = ...) ...

The notational problem with our rectangles and connecting lines is completely analogous to the case with "keys on views" : we could document such things by giving the view its own rectangle, and then we've shifted the problem to documenting the definitional connection between two distinct rectangles in the schema, or we can settle for not documenting it at all and hope the info will not be lost in communication, meaning the subsequent readership of our model will have to make all the necessary assumptions, and not a single one more, and make them all correctly.  Might be a tall order.
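
The FK-to-a-view idea itself is straightforward to check procedurally, even if it is hard to draw.  A sketch (the rows and the predicate are invented) :

```python
# Hypothetical rows : ASSETS carries the category code ; FINANCIAL_ASSETS
# should only reference assets whose category is 'FIN'.
assets = [{"asset_id": 1, "category": "FIN"},
          {"asset_id": 2, "category": "PHY"}]
financial_rows = [{"asset_id": 1}, {"asset_id": 2}]

def fk_to_view_violations(children, parents, predicate):
    """Check an FK not against the parent table but against a restriction
    view of it : child ids must appear among the parent rows satisfying
    the predicate."""
    view_ids = {p["asset_id"] for p in parents if predicate(p)}
    return [c["asset_id"] for c in children if c["asset_id"] not in view_ids]

# Asset 2 sits in FINANCIAL_ASSETS but is not a 'FIN' asset : violation.
print(fk_to_view_violations(financial_rows, assets, lambda p: p["category"] == "FIN"))
# -> [2]
```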

So once again, it seems we can conclude that for certain classes of referential rule on the data, our graphical ways of modeling can suffice to document such a rule, but they certainly don't suffice to document, clearly, all possible cases of referential rule on data.  The more bits and pieces abstracted away by a graphical modeling language (= the smaller the symbol set of the language), the more conventions will need to be assumed in order to get effective and accurate communication between modeler and reader, and the more cases there will be that are, formally speaking, not documentable because the language's symbols set doesn't allow expressing some nitty gritty detail that "deviates from the usual conventions".

(Still to be continued...)

Friday, June 20, 2014

Conceptual vs. logical modeling, part II


In the previous post, a somewhat detailed inspection was made of possible approaches for specifying, in a modeling language, some database structure, highlighting differences in informational value between those modeling languages, as well as pointing out some pieces of relevant information the specification of which is typically left completely unsupported by any of them.

But a full formal spec of a database is not only about its structure, it is also about any additional rules that apply to the constituent components of that structure.  This observation holds, regardless of whether the database is a relational one (and its constituent components are what TTM calls "relation variables", "tables" in SQL) or a graph-based or hierarchical one (and its constituent components are nodes and edges).  I'll be speaking of relvars (TTM abbreviation for relation variables) in what follows, but keep in mind that the same should apply as well, mutatis mutandis, to hierarchical and "graph-ical" models.

While the aspect of 'structure' can reasonably well be modeled in "graphical" languages (such as the various ER dialects and UML), that is much less the case with the aspect of the integrity constraints between the components of that structure.  How come ?

The essential reason is that the nature of an integrity rule/constraint can really be just anything at all.  Its "structure" is constrained only by the fact that it must be expressed exclusively in terms of the relvars that make up the database structure.  At the logical level, where all the formal details of the relvars have been fully specced out in some given language, this is achieved "easily" enough using some language based on/inspired by mathematics.  Just spell out the predicate that makes a violation a violation.  But how to devise a language that supports expressing "anything at all" at the conceptual level ?  The answer is you can't.  The only thing you can do is try to taxonomize the set of all possible constraints in certain well-defined "classes" that might indeed be expressible.  That is (sort of) exactly what has happened in database modeling land (*).  From ER modeling over IDEF1X to Halpin ORM : the set of all possible constraints is subsetted according to certain chosen criteria, and for each "identifiable" subset, a notation is devised to facilitate documenting constraints belonging to that subset.  A modeling language such as ER leaves it at that, Halpin's "Big Brown Book" explicitly adds a (fourteenth, I believe) category "others", the leftovers that still aren't expressible using the modeling language's available symbols.
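
"Just spell out the predicate that makes a violation a violation" can be illustrated in a few lines ; here a relvar is mocked as a Python set of tuples, and the rule is an invented one :

```python
# A relvar mocked as a set of tuples : (employee, salary, department).
employees = {("ann", 1200, "sales"),
             ("bob", 900, "sales"),
             ("eve", 2000, "hr")}

# An arbitrary, invented rule : no salary in 'sales' may exceed 1500.
def violation(row):
    _name, salary, dept = row
    return dept == "sales" and salary > 1500

def constraint_holds(relvar):
    """The constraint holds iff no row satisfies the violation predicate."""
    return not any(violation(r) for r in relvar)

print(constraint_holds(employees))  # -> True
```

Note that no graphical symbol set anticipates this particular rule ; the predicate language trivially does.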

Anyway.  The fact alone that a powerful modeling approach such as ORM still has this category of "leftovers" in the constraints realm, should already suffice to show that _complete_ and fully formal specifications are in fact unachievable without a mathematical language.  For the typical categories of constraints other than those "leftovers", however, the belief seems to be fairly widespread, and firm, that current mainstream notations suffice to document all the stuff we need to know/convey about our databases.  That belief is not entirely warranted, imo, and in this post I'll be illustrating a couple of issues relating to the (common) class of uniqueness constraints.  A subsequent post will do the same for referential integrity/foreign key constraints.

(*) the sad byproduct of this state of affairs is that if one uses the word "constraint" in a database discussion, SQL practitioners will often think you must be talking either of a UNIQUE constraint or a foreign key, overlooking the fact that "there are more types of constraint between heaven and earth than are expressible in ER, and supported by SQL".


The example case from the previous post is not very well suited to illustrate issues with documenting uniqueness rules.  None of the entities in that example are directly suitable for illustrating the points I want to make, so for the sake of this discussion, I'm somewhat forced to resort to a totally different example, which is likely to look ridiculous in the eyes of business modelers, but I won't mind that for the time being.

Let's say we want to model the operation of numeric addition - if you really can't bear the thought, imagine you are Euclid or Pythagoras, that arithmetic has not been invented yet and you are in the process of doing exactly that, using the latest database design technology (and with apologies upfront for my ascii modeling) :

! ADDITION      !
! N1     number !
! N2     number !
! SUM    number !

(Yes, and the table is indeed like

! N1 ! N2 ! SUM !
!  1 !  1 !   2 !
!  1 !  2 !   3 !
!  ...          !
!  2 !  1 !   3 !
!  2 !  2 !   4 !
!  ...          !


I'm pretty sure if I'd ask you what the key is here, you'd reply with "N1 and N2 combined, of course".  You get 33% from me for that answer.  There are three keys here : {N1 N2}, {N1 SUM}, and {N2 SUM}.  Granted, of course, there is also the matter of which ones you ACTUALLY WANT ENFORCED (that's what you're referring to if you wanted to argue that those latter two "do not identify an addition").  If we wanted to "enforce" the obvious consistencies/equivalences between expressions of addition and expressions of subtraction, we would indeed need to model and enforce all three (hehe.  settled that.).
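
Don't take my word for it ; the claim is mechanically checkable.  A sketch that builds ADDITION over a small finite domain and tests attribute combinations for uniqueness :

```python
from itertools import product

# Build the ADDITION "table" over a small finite domain.
addition = [(n1, n2, n1 + n2) for n1, n2 in product(range(1, 6), repeat=2)]

def is_key(rows, positions):
    """A combination of attribute positions is (at least) a superkey iff
    its projection over the rows contains no duplicates."""
    proj = [tuple(row[p] for p in positions) for row in rows]
    return len(proj) == len(set(proj))

# Positions : 0 = N1, 1 = N2, 2 = SUM.
print(is_key(addition, (0, 1)))  # True  : {N1, N2}
print(is_key(addition, (0, 2)))  # True  : {N1, SUM}
print(is_key(addition, (1, 2)))  # True  : {N2, SUM}
print(is_key(addition, (2,)))    # False : 1+2 and 2+1 share the same SUM
```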

Now how would you document all three of them in your model of "annotated rectangles" ?  You're in trouble !  (In fact, I think it is precisely _because_ of this notational problem to document multiple composite keys inside one single rectangle, that the notion of "primary" key (as distinct from "secondary / ternary / auxiliary / ..." key ????) has become so widespread as it has, and furthermore that the practice of ID-ifying really just everything has become as popular and widespread as it has.  I leave it for you to ponder whether that's a case of putting the cart before the horse or not - or one of redefining/retrofitting the problem such as to suit the most desirable solution.)

Anyway.  Supposing we do want to enforce all three keys.  How can we document this in our drawing ?  Observe in particular that each key consists of >1 attribute and each attribute effectively participates in >1 key.  The only way I can imagine to convey all the key information in our rectangle is like this :

! ADDITION      !
! N1     number ! K1,K2 !
! N2     number ! K1,K3 !
! SUM    number ! K2,K3 !

Very much like the approach of putting a P in front of the attributes, but it takes attentive and careful deciphering to read the drawing and capture the keys information correctly !  (And the sad byproduct of omitting the full keys information, e.g. for readability sake, in diagrams such as these, is indeed that typically not all keys are properly identified, let alone enforced.)

In fact, the most readable way to convey all of this information about uniqueness rules, seems to be exactly by just using syntax very similar to declarative DDL, or the declarative portion of a D language :

UNIQUE { {N1,N2} {N1,SUM} {N2,SUM} }

And here again we are seemingly headed toward a similar conclusion : if you want to be precise _AND_ complete in what you are stating about the nature of the database that you are documenting, then of necessity you MUST resort to a language that has a much higher expressiveness than the ones you typically have available when modeling at a "higher" level of abstraction.
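
In SQL, that one declaration maps onto three composite UNIQUE constraints on the one table, sketched here in SQLite from Python (SUM being an SQL function name, the column is renamed to total) :

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The declarative equivalent of UNIQUE { {N1,N2} {N1,SUM} {N2,SUM} } :
# three composite uniqueness constraints on one table.
conn.execute("""
    CREATE TABLE addition (
        n1    INTEGER NOT NULL,
        n2    INTEGER NOT NULL,
        total INTEGER NOT NULL,
        UNIQUE (n1, n2),
        UNIQUE (n1, total),
        UNIQUE (n2, total)
    )""")
conn.execute("INSERT INTO addition VALUES (1, 1, 2)")
conn.execute("INSERT INTO addition VALUES (1, 2, 3)")
try:
    # 1 + 2 can't equal both 3 and 4 : rejected by UNIQUE (n1, n2).
    conn.execute("INSERT INTO addition VALUES (1, 2, 4)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```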

Incidentally, in notations such as Data Vault (a "hub" to represent the entity and separate rectangles connected to the "hub" for each attribute), the problem of documenting keys is even worse.  The only graphical way(s) I can imagine to document the existence of some meaningful "grouping" of attributes, such as their belonging to the same key, will invariably make the diagram equally unreadable because of extraneous line bloat.  Whether you try to do it by surrounding them with dotted lines or so, or by creating a new symbol for documenting the existence of a key (three extra symbols for the ADDITION Data vault) and connecting each attribute with them as appropriate (six extra connections on the diagram), it's always going to turn your beautiful neatly organized DV diagram into more of a spider web.  Fortunately, Data Vault diagrams are typically used only in DW contexts, not to document keys in the operational source systems they're concerned with, but it still goes to show that whatever conceptual notation you use, it only goes as far as it goes and _something_ will always be "missing" from it.

Uniqueness bis

Managing temporal data is somewhat of a long-standing problem in database land.  A thorough analysis of the _nature_ of the problem (and what is needed to address it) can be found in "Temporal Data & the Relational Model", pages 1-857, so I'm not going to re-iterate all of that here, but one particular problem dealt with is "temporal uniqueness".  (Aside : if you haven't yet read the book but are interested to do so, don't go order or search it now.  Updated and revised edition is to appear within a couple of months.)

Say you have

! ASSET_ID     ID !
! FROM       date !
! TO         date !
! VALUE    number !


! MARRIED_TO      !
! FROM       date !
! TO         date !

and you want to enforce a constraint "no single date giving >1 distinct values for same ASSET_ID", or "no one married to >1 other person on same date".

The "traditional" interpretation of what a key is will not allow you to express this.  No "traditional", equality-based, key will ever prevent overlaps between various FROM-TO combinations for the same person/asset/...  So perhaps you might be inclined to conclude "that is not a real key".  Interestingly, the "Temporal Data" book proposes to rearrange matters a bit so that expressing the constraint does become possible, and indeed in the form of "specifying a key" at that :

! ASSET_ID       ID !
! DURING date_range !
! VALUE      number !


! MARRIED_TO        !
! PERSON1_ID     ID !
! PERSON2_ID     ID !
! DURING date_range !
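
The rule such a DURING-based key is meant to enforce can be sketched operationally : unpack each range into the individual dates it covers, and require classic uniqueness on the unpacked form.  A minimal Python sketch of that idea (all names hypothetical, closed [from, to] ranges assumed) :

```python
from datetime import date, timedelta

def unpack(key, during):
    """Yield one (key, d) pair for every individual date d covered by the
    closed [start, end] range -- the 'unpacked' reading of DURING."""
    start, end = during
    d = start
    while d <= end:
        yield (key, d)
        d += timedelta(days=1)

def temporal_key_holds(rows):
    """True iff no single date is covered twice for the same key value,
    i.e. the classic uniqueness rule holds on the unpacked form."""
    seen = set()
    for key, during, _value in rows:
        for pair in unpack(key, during):
            if pair in seen:
                return False
            seen.add(pair)
    return True
```

Note that this is a check over the whole set of rows at once, which is precisely why no row-at-a-time, equality-based key declaration can capture it.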


The graphical languages such as ER that we typically use for information modeling, still let us down, somewhat, in the case we'd want to specify that level of detail.  In addition to the composite nature of the key, we'd also need to express the semantics of the "WHEN UNPACKED ON" part : namely that the values for the DURING attribute must be interpreted in a "for each individual date value covered by the range" kind of way.  The closest we could come to denoting that might be something like this :

! MARRIED_TO        !
! PERSON1_ID     ID ! K1        !
! PERSON2_ID     ID ! K2        !
! DURING date_range ! K1_T,K2_T !

Of course, the notational problem of denoting multiple keys on the relvar has not disappeared, nor has the possible participation of a single attribute in >1 key ; just an extra little bit of codification has been added (suffixing _T to a key name on the lines for the attributes where it applies) to denote the extra bit of semantics covered.  It will be clear that while such tricks are indeed possible, and potentially helpful in denoting modeled solutions to a problem that is indeed very common, such solutions once again only go as far as they do, and taking things further will ultimately result only in making the models we draw unreadable.  Specifically in the context of temporal data management and temporal keys, observe for example that it is not necessarily the case that all range-valued attributes will _always_ have the _T "interpretation" for _all_ the keys in which they participate :

! PRESIDENTIAL_TERM     year_range ! K1   !
! DURING                date_range ! K1_T !
! PRESIDENT_NAME               ... !      !

( think   70-74 : 70-73 : NIXON   &&   70-74 : 74-74 : FORD )

Once again the conclusion seems warranted that extending expressiveness/notational support beyond current common practices, will quickly result in making the models more unreadable and thus less informative, rather than more informative.

Uniqueness ter

Another variation on the theme of uniqueness rules, is the problem of enforcing uniqueness on only a (proper) subset of all occurrences of an entity type.  Say you have

! CAR_LICENSE_PLATE                !
! CAR_CHASSIS_ID               ... ! K1   !
! TAX_LICENSE_PLATE            ... ! K2   !
! CAR_STILL_IN_ACTIVE_USE     bool !      !

and for the purpose of re-using license plate numbers, you want to enforce license plate uniqueness (key K2) only for those cars that are still in active use (there are admittedly better solutions to this problem than the one modeled here, which I sometimes call the ultra-poor man's historical database, but it does serve to illustrate my point).

The aspect of the problem that makes the "key" "not documentable", is precisely the subsetting rule, i.e. the fact that the "key" is not to be enforced on the whole CAR_LICENSE_PLATE entity, but just on the subset of it that, once the database is implemented in SQL, could be found by issuing SELECT ... FROM CAR_LICENSE_PLATE WHERE CAR_STILL_IN_ACTIVE_USE = TRUE;

If we absolutely wanted to be able to document the existence of this key, using the available means of adding a "Kn" annotation in the rectangles, we'd have to add a separate rectangle for the subsetted CAR_LICENSE_PLATE entity, and then we'd have shifted the problem to documenting the definitional connect/dependency of this new rectangle with/on the "original" one, the "full" entity.  That is, we've transformed the problem into one of conceptually documenting "view definitions" (and that very idea is probably seriously questionable in itself already because including the "definitional connect" smacks quite a bit of conflating conceptual/logical).  Once again, our modeling language will let us go only as far as it goes.
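
At the implementation level, mind you, some SQL products can enforce exactly this subset key via a partial unique index.  A sketch using SQLite (table simplified, names hypothetical) :

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE CAR_LICENSE_PLATE (
        CAR_CHASSIS_ID          TEXT PRIMARY KEY,
        TAX_LICENSE_PLATE       TEXT NOT NULL,
        CAR_STILL_IN_ACTIVE_USE INTEGER NOT NULL
    )""")
# The 'subset key' : plate numbers must be unique, but only among the
# cars still in active use.
con.execute("""
    CREATE UNIQUE INDEX plate_unique_while_active
    ON CAR_LICENSE_PLATE (TAX_LICENSE_PLATE)
    WHERE CAR_STILL_IN_ACTIVE_USE = 1""")

con.execute("INSERT INTO CAR_LICENSE_PLATE VALUES ('chassis1', 'ABC-123', 0)")
con.execute("INSERT INTO CAR_LICENSE_PLATE VALUES ('chassis2', 'ABC-123', 1)")  # ok : chassis1 inactive
try:
    con.execute("INSERT INTO CAR_LICENSE_PLATE VALUES ('chassis3', 'ABC-123', 1)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

Which only underscores the point : the constraint is perfectly expressible at the logical level, it is the conceptual diagramming notations that have no place to put it.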

(still to be continued)

Friday, June 6, 2014

Conceptual vs. Logical modeling (once again)

An information model was presented to me as an example case in a discussion on [data] modeling (group membership is required to view the original thread), and in particular as a case for fleshing out the distinctions between what I call 'conceptual' and 'logical' modeling.  That distinction being that "conceptual is informal, logical is formal", or "conceptual is typically incomplete, logical is always complete".  "Complete" in the sense of "full disclosure of all relevant information".  This post intends to clarify further what I mean by that, exactly.

One model being incomplete and another one being complete means there are differences in "informational value".  What & where are those differences ?  The discussion will be split covering two distinct aspects : one of structure and one of integrity (and the integrity part will be kept for a later post).


Take a look at a single rectangle in the example model, say "Assets".  Disregard the mentions of PK/FK for the time being, they're related to integrity, not to structure per se.

What information is present in the rectangle ?
  • a rectangle heading telling us that the rest of the rectangle informs us further of the nature of a concept called "Assets".
  • a rectangle body telling us that the concept "Assets" is defined to have properties named "Asset_ID", "Asset_Name", ...
What information is _not_ present in the rectangle ?
  • Most notably, the type information for each property.  Now, I've seen many similar models that _did_ include this information.  Usually limited to the set of type names that were known to be supported by the DBMS that the system was known to be going to be implemented on (so much for DBMS-agnostic modeling !).  Sometimes the set of type names used would include things such as "COORDINATE".  Indicating that some domain/type of that name is supposed to exist, and supposed to be known and understood correctly by the reader.  And it's the double "supposed" in there that makes such models informal/incomplete still.
  • A very nasty one, and very hideous : optionality !!!  Take a look at the Date_Disposed_of property.  Is that property going to have an "assigned value" for each and every occurrence/instance of an "Assets" type ???  Presumably not.  While it is not invalid per se to introduce some kind of concept of "nullability" at the conceptual level (of entity attributes), the thing is : the logical level's "full disclosure" requirement implies that the diagrams must then show that information !!!  (I've seen at least one dialect that used '#'/'*' symbols on the left side in the rectangles to achieve this.)  And the second thing is : diagramming languages such as E/R and UML already have a notion of optionality (albeit not for attributes but between entities).  And adding attribute-level optionality as well will thus, of necessity, introduce multiple ways of saying the same thing.  Singling out the 'disposed_of' attribute in its own "Assets_Disposed" entity (with the obvious one-to-at-most-one relationship) will do the job, but is often considered "poor practice" because of the "entity bloat" it creates (and the corresponding reduction of opportunity to "inspect just once and get the whole picture").  Otoh, it is precisely what would _need_ to be done to achieve a relational version of the logical information model, since the relational model does not allow [values for] attributes to be missing.
  • Also not incorporated in the model shown by this example, but the notion does exist : the distinction between "weak" E/R entities and "strong" E/R entities.  There are modeling dialects that would _NOT_ include the "Asset_ID" attribute in the "Asset_Valuations" entity, and the reader is supposed to infer, somehow, that "Asset_Valuation" is indeed a "weak" entity and additionally requires its "parent entity"'s primary key before "an occurrence of it can come into existence".  This particular approach induces interpretation ambiguities in two cases : (a) the parent entity has >1 identifying key (solvable only by introducing yet other artefacts such as distinctions between "primary" and "secondary" keys), and (b) the child entity has two relationships (think bill-of-material structures) to the same parent entity (there will have to be two distinctly named attributes, so you can't assume "same name as the primary key attributes in the parent", but the modeling approach of leaving them unmentioned means you can't specify which will be which ...).  This actually belongs more in the discussion on constraint specification, so I'll pick the subject up there again if I don't forget it.
  • Also quite hideous as well as nasty : the meaning of the damn thing !!!  Will drawing a rectangle and labeling it "Assets" ensure that the understanding derived from it by some individual, will be _exactly_ the same as that derived by some other individual ?  _sufficiently_ the same ?  And even if so, how will you know any differences in understanding ?  Posing these questions is answering them.  Looking at the drawing, all readers will just be nodding yes because with sufficient details abstracted away, your beautiful drawing simply matches their perception of reality.  I used to have a catchphrase "After you've abstracted away all the differences, whatever remains is identical."
Conclusion : while it is OK for conceptual models to leave out information such as the things mentioned (note that I do not claim completeness for the list of things I mentioned), a fully formal logical model will always have to include _all_ the pieces of the "puzzle" :
  • All the type information.  To begin, that's a complete and fully specced inventory of all the data types that will be used in the rest of the model.  And "fully specced" really means "fully specced" here, e.g., just saying "INTEGER" will _not_ be sufficient if there is any risk that any reader might be misinterpreting the range of numbers "covered" by this name.  Sometimes it _is_ interesting for a user to know that 100000 is not an INTEGER (because >32767), or to know that -1 is not an INTEGER (because only positive numbers were anticipated).  For a central bank deciding to introduce negative interest rates, it might be interesting to know that some of their IT systems had not anticipated this and defined the domain for interest rates something like the range from 0.0000 to 99.9999 ...  And for types such as "coordinate", there is nothing in the name to suggest whether these are 2D or 3D coordinates ( (x,y) pairs vs (x,y,z) triples ).  Formal completeness requires one to state something like :

    COORDINATE : {(x,y,z) | x IN FLOAT && y IN FLOAT && z IN FLOAT}

    This definition itself depends on a definition for a thing called FLOAT.  This one in turn could be defined as

    FLOAT : { (m,e) | m IN NN && e in NN && e>=-128 && e<=127 && m>=... && m<=...}

    Now we depend on a definition for NN.  It will be clear that somewhere somehow, something inevitably has "got to be given".  Fortunately, that something can be as simple as "the set of Natural Numbers" that everyone should know from 2nd grade math classes, or thereabouts.  And if misunderstandings and/or communication problems boil down to a lack of agreement/common understanding of what the set of natural numbers is, well then there will be very little any modeling language/methodology could possibly do to address that.
  • Assuming we are defining a fully formal logical structure according to the relational model (as distinct from "the graph-based model", which may be conceivable/imaginable, but has unfortunately never been elaborated/formally spelled out the same way the RM has been), _all_ the attributes of the relational structures, plus the type they're of (those types having been formally defined in the previous step).

    concrete example :

    ASSETS : { (Asset_ID, Asset_Category_Code, Asset_Type_Code, Asset_Name, Asset_Description, Date_Acquired, Date_Disposed_of, Other_Details) |
            Asset_ID in ... &&
            Asset_Category_Code IN ... &&
            Asset_Type_Code IN ... &&
            Asset_Name IN ... &&
            Asset_Description IN ... &&
            Date_Acquired IN GREG_CAL_DATE &&
            Date_Disposed_of IN GREG_CAL_DATE &&
            Other_Details IN ... }
  • Still assuming we are defining a fully formal logical structure according to the relational model (such that attributes cannot ever be null), the relational structures will be split out in separate parts whenever some attributes of a conceptual entity are optional.

    concrete example :

    ASSETS : { (Asset_ID, Asset_Category_Code, Asset_Type_Code, Asset_Name,
                        Asset_Description, Date_Acquired, Date_Disposed_of, Other_Details) |
                Asset_ID in ... &&
                Asset_Category_Code IN ... &&
                Asset_Type_Code IN ... &&
                Asset_Name IN ... &&
                Asset_Description IN ... &&
                Date_Acquired IN GREG_CAL_DATE &&
                Other_Details IN ... }
    ASSETS_DISPOSED : { (Asset_ID, Date_Disposed_of) |
                 Asset_ID in ... &&
                 Date_Disposed_of IN GREG_CAL_DATE }
  • And it will also have to include a precise statement of the so-called "external predicate" for each relational structure so defined.  Don't underestimate the importance of this.  An SQL table has a very precise intended meaning, and too often I've seen maintenance developers think "oh I can reuse this table for my current purpose", looking exclusively at its structure and ignoring/denying/disregarding the precise, current intended meaning completely.  It is in fact because of this associated intended meaning that "re-using existing database tables" is, in principle, the WORST POSSIBLE IDEA a db designer can have.  Except if he is 100% certain that the current external predicate matches _exactly_ with the "external predicate" he has to "introduce" into the database for achieving "his current purpose".  This is most unlikely to be the case, except if the table is an EAV table, and I've already dealt with why that approach sucks (in 99.9999% of the cases).

    (Incidentally, if you were struck by a remarkable resemblance between the way the data type definitions were stated in the foregoing point, and the way the relational structures were stated in the last, it is exactly this aspect of their being related or not to such an "external predicate" that makes the difference.  Data type definitions are just that, formal ways to define _values_ that are usable in the _relational structures that will make up the database_ to represent _meaning_.  Values in data types do not carry meaning, the relational structures that make up the database do.  E.g. "The asset identified by §Asset_ID§ has been disposed of on §Date_Disposed_of§".  Note the placeholders in between §§ marks, and that the placeholders correspond 1-1 with the attribute names.  Such a phrase could be the "external predicate" for the ASSETS_DISPOSED relational structure, and since it defines the logical meaning of the content that will be held in the concerned relational structure, it should always be an integral part of the logical model defining the database.)
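The way such an external predicate acts as a sentence template can be sketched in a few lines of Python (the helper name is mine, not any established API) :

```python
# The external predicate for ASSETS_DISPOSED, with {} as placeholder marks
# standing in for the §§ marks used in the text.
PREDICATE = "The asset identified by {Asset_ID} has been disposed of on {Date_Disposed_of}"

def proposition(row):
    """Instantiate the external predicate with a row's attribute values :
    the resulting sentence is the proposition that the row's presence
    in the relational structure asserts to be true."""
    return PREDICATE.format(**row)
```

The placeholders correspond 1-1 with the attribute names, so every row in the structure denotes exactly one such instantiated proposition.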
Next question : and what kind of modeling language supports all of this ?  There are several.  Or none at all.  Depending on what you're willing to call a "modeling language".

To begin with, a language of pure maths.  One such as used in "Applied Mathematics for Database Professionals".  Very commendable reading, BTW, even if I have to add the footnote that the book doesn't really bother with formal type definitions, contenting itself to rely on a type system such as that offered by Oracle (this had mainly to do with the 9 different DBMS's the authors had been using the most throughout their careers : Oracle 4, Oracle 5, Oracle 6, Oracle 7, Oracle 8, Oracle 9, Oracle 10, Oracle 11 & Oracle 12 - end of jocular sidenote).

Anyways.  Math-like language offers everything we need to express precise type definitions and precise definitions of relational structures such as :

FLOAT : { (m,e) | m IN NN && e in NN && e>=-128 && e<=127 && m>=... && m<=...}
COORDINATE : {(x,y,z) | x IN FLOAT && y IN FLOAT && z IN FLOAT}

ASSETS_DISPOSED : { (Asset_ID, Date_Disposed_of) | ... }

But we're left in a bit of trouble when wanting to express external predicates for our relational constructs in such a language.  Of necessity so, of course : the thing is termed "external" for good reason (external = external to the system of mathematical computation that is the DBMS), hence it's a bit contradictory to expect such predicates to be expressible in math language !

And those not well versed in using math formulae, will of course quibble that they don't see themselves manipulating models expressed in such a language, and that they want an alternative.  Such an alternative exists in the form of a subset of the statements of a programming language such as Tutorial D, in particular, the set of statements in that language that most developers will be inclined to label "declarative" statements (following examples are only loosely inspired by, and not 100% valid, Tutorial D) :

TYPE FLOAT (M INT, E INT) CONSTRAINT E>=-128 && E<=127 && M>=... && M<=... ;
TYPE GREG_CAL_DATE (D INT, M INT, Y INT) CONSTRAINT ........................................... ;


Documenting the external predicate for all the VARs (= the relational structures to make up the database, "tables" in SQL) is a matter of adding comments, or (better), some kind of PREDICATE subclause in syntaxes such as the one used here as an example.

The nice thing about such an approach is that these kinds of formal specs are parseable.  In a language such as Tutorial D, it means the logical definition of the database structure could be made known to any program by a mere "import logical_model" directive.  In environments using other languages, it means that stuff such as Hibernate classes can be generated 100% automagically.
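
The same runtime-checked type discipline can also be mimicked directly in a general-purpose language.  A minimal Python sketch of the FLOAT/COORDINATE definitions above (class names and ranges are illustrative assumptions, not part of any standard) :

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Float:
    """Counterpart of TYPE FLOAT (M INT, E INT) : the declared constraint
    is checked on every construction of a value, so no value of the type
    can ever exist outside its declared range."""
    m: int
    e: int
    def __post_init__(self):
        if not (-128 <= self.e <= 127):
            raise ValueError(f"e={self.e} outside the declared range")

@dataclass(frozen=True)
class Coordinate:
    """Declared as (x, y, z) triples over Float -- 3D by declaration,
    not by the reader's guesswork."""
    x: Float
    y: Float
    z: Float
```

The point is not the particular syntax, but that the full type spec travels with the name, instead of being "supposed to be known" by the reader.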

And just to show that the syntax (the "how") of the language is, actually, completely irrelevant, and the only thing that matters is the meaning of the information it conveys (the "what"), here's an example of how the same information could be expressed in a (hypothetical) language that is much more OO-like :

immutable class FLOAT {
  int m;
  int e;
  constraint e>=-128 && e<=127 && m>=... && m<=... ;
}
immutable class COORDINATE {
  FLOAT x;
  FLOAT y;
  FLOAT z;
}
database class ASSETS_DISPOSED {
  ... Asset_ID;
  GREG_CAL_DATE Date_Disposed_of;
  predicate "The asset identified by §Asset_ID§ has been disposed of on §Date_Disposed_of§";
}

(If you don't recognize this immediately as an OO-style syntax, imagine "public" in front of every text line in there.)  Some of the "weird-looking" stuff ("immutable class"/"database class"/"constraint"/"predicate") in this example is deliberately intended to illustrate the kind of things that currently existing OO languages are typically still lacking in order to make them more suitable for management of data that resides in a DB, or at least for cleaner and better integration between the programming side of things and the data side of things.

To conclude on the matter of "structure" :

Any language can be used to express an information model.  The degree to which such a language allows one to express EVERY POSSIBLE RELEVANT ASPECT of [the content of] an information model, is what determines its degree of suitability for expressing information models that can legitimately be regarded as fully formal logical models.  That scale of suitability is a continuum, but no existing modeling language/dialect actually achieves the full 100% of the expressiveness that is needed.

I'll be leaving anything to do with integrity for a next post (hopefully).

Tuesday, January 29, 2013

"Converting SQL to Relational Algebra", part II

When I wrote the first posts in this blog, it seemed to generate traffic from places I found rather curious, until it occurred to me that search engines are not entirely unlikely to present "Relational Model" as a useful search result to people who are searching for a Relation with a Model ...

Likewise, the former post on the same subject as this one, gave quite some traffic, and it seems not unreasonable to assume that most of that traffic was from students who were looking for info on how to solve "Convert SQL to RA" problems, and who were not looking for my philosophical agonies on why these problems are even being taught ...

As somewhat of an apology to all those disappointed students, and to prevent further disappointments in the future, a concise set of guidelines, in the form of SQL_KEYWORD to RA_OPERATOR mappings, and whatever additional comments I think might be useful.

SELECT maps to (any combination of) PROJECT, RENAME and EXTEND.

If the select clause involves exclusively column names of the table defined in the FROM clause, then PROJECT is all that is involved.

If the select clause involves constructs such as <columnname> AS <othercolumnname>, then a RENAME is involved as well.

If the select clause involves scalar expressions (quantity+1, HOUR(<colname>), ...), then an EXTEND is involved.  Note that if such scalar expressions are not followed by an AS <colname> construct, then there is no relational equivalent for this SQL, because the SQL yields an unnamed column, and the relational model and/or the relational algebra do not allow unnamed attributes.  Note also that if the expression "quantity + 1" is used in a SELECT clause (and quantity is a column of the table in the FROM clause), then the EXTEND operation by itself will not "project away" the quantity attribute.  Don't forget to explicitly PROJECT away such attributes in the RA formulation.
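
These three mappings can be sketched concretely.  In the toy representation below (my own, purely illustrative : a relation is a set of rows, each row a frozenset of (attribute, value) pairs, so duplicate rows simply cannot exist) :

```python
def project(rel, attrs):
    """PROJECT : keep only the named attributes.  Being a set, the result
    silently removes duplicate rows -- where SQL and the algebra part ways."""
    return {frozenset((a, v) for a, v in row if a in attrs) for row in rel}

def rename(rel, old, new):
    """RENAME : give attribute `old` the new name `new`."""
    return {frozenset((new if a == old else a, v) for a, v in row) for row in rel}

def extend(rel, name, f):
    """EXTEND : add attribute `name`, computed by f from the row.  Note the
    source attributes stay until explicitly projected away."""
    return {row | {(name, f(dict(row)))} for row in rel}
```

For example, SQL's `SELECT quantity + 1 AS q1 FROM SP` would become a PROJECT over an EXTEND : `project(extend(SP, "q1", lambda t: t["quantity"] + 1), {"q1"})`.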

In all cases, observe that Relational Algebra does not allow duplicate rows, but SQL does.  If the SQL query could possibly give rise to the same quantity appearing multiple times in the result table, then there simply isn't any relational-algebra equivalent of this query.  In such cases, for the query to even have a relational-algebra equivalent, it is required that the SELECT is a SELECT DISTINCT.

FROM maps to CARTESIAN PRODUCT, NATURAL JOIN, or just nothing at all.

It maps to nothing at all if (and I think only if) its comma-separated list of table expressions has cardinality one.  In that case, the table expression simply defines the input argument to the "surrounding" [combination of] PROJECT/RENAME/EXTEND.

It maps to NATURAL JOIN between pairs of table expressions (say, T1 and T2), if and only if, for all the columns that have the same name in T1 and T2, there is an equality condition in the WHERE clause (WHERE T1.<colx> = T2.<colx>).

In all other cases, it maps to CARTESIAN PRODUCT.  Note that in CARTESIAN PRODUCT, the tables involved are not allowed to have any column names in common.  You might need to introduce additional RENAMEs in order to resolve/do away with any such column name overlappings.  Also note that whatever you write in front of a dot, does not magically become part of the column name itself.  Not in SQL, and, well, relying on it in some RA notation isn't a very attractive idea either.


WHERE maps to RESTRICT, SEMIJOIN or ANTIJOIN.

It maps to RESTRICT if the WHERE clause involves boolean expressions comparing columns of the table against each other, or against literal values.  E.g. WHERE <colname1> != <colname2>, WHERE HOUR(<colname>) BETWEEN 8 AND 18, ...

It maps to SEMIJOIN if the WHERE clause involves an EXISTS(...) construct.  The nature of the inner SELECT embedded in that clause will give you the second argument for the SEMIJOIN.  Note that this inner SELECT might contain a WHERE clause itself, and that the nature of this WHERE clause may dictate that additional RENAMEs might have to be applied to this second argument.  For example, a query of the form SELECT * FROM T1 WHERE EXISTS (SELECT * FROM T2 WHERE T2.A = T1.B) requires a RENAME to be applied to T2, renaming A to B, in order for RA's SEMIJOIN to expose the same behaviour as the SQL EXISTS(...).

And finally, WHERE maps to an ANTIJOIN if the WHERE clause involves a NOT EXISTS(...) construct.  All remarks for SEMIJOIN apply here as well, mutatis mutandis.
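
Continuing the toy set-of-frozensets representation from above (again purely illustrative, not any standard API), both operators are one-liners :

```python
# A row is a frozenset of (attribute, value) pairs.

def matches(r1, r2):
    """True iff the two rows agree on all attributes they have in common."""
    d1, d2 = dict(r1), dict(r2)
    return all(d1[a] == d2[a] for a in d1.keys() & d2.keys())

def semijoin(rel1, rel2):
    """SEMIJOIN : the rows of rel1 that match at least one row of rel2 --
    the RA counterpart of SQL's EXISTS(...)."""
    return {r1 for r1 in rel1 if any(matches(r1, r2) for r2 in rel2)}

def antijoin(rel1, rel2):
    """ANTIJOIN : the rows of rel1 that match no row of rel2 --
    the counterpart of NOT EXISTS(...)."""
    return {r1 for r1 in rel1 if not any(matches(r1, r2) for r2 in rel2)}
```

Note that the matching is on common attribute names, which is exactly why the RENAMEs discussed above may first be needed on the second argument.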

If >1 of these constructs are involved in the WHERE clause, then these constructs will have a logical connective between them.  Split the overall query in as many parts as are needed (one for the scalar conditions, one for each EXISTS and one for each NOT EXISTS).  Each logical connective so eliminated becomes a NATURAL JOIN or INTERSECTION if the connective was AND, and becomes a UNION if the connective was OR.  The nesting of these INTERSECTIONs and UNIONs is as dictated by the parentheses and/or the precedence rules between the AND/ORed expressions in the SQL.

GROUP BY ...  HAVING ... maps to nothing at all.  Or more precisely, if you are studying for an exam, do as your course tells you, and try to forget it as soon as possible after you've passed the exam, because whatever you were told in your course/classes was probably about the biggest crap one could imagine.

The same applies if you don't see any GROUP BY ..., but your SELECT clause has an aggregate function in it, like, say, SELECT SUM(QUANTITY) FROM MYTABLE.  That's logically the same as having an "empty" GROUP BY clause.

There do exist definitions/versions of Relational Algebra that can properly handle all this kind of aggregation-like functionality, but it is unlikely that you have been taught these.  So, do as your course tells you and forget it ever after.

UNION maps, unsurprisingly, to UNION.  But do note that basically the same remark applies as with SELECT/projection : Relational Algebra does not allow duplicate rows.  So if your SQL UNION could possibly give rise to duplicate rows, then this UNION query simply does not have any relational-algebra equivalent.

Modern SQL also has operators such as EXCEPT, INTERSECT, NATURAL JOIN, JOIN ...ON ...  These map to relational difference, relational intersection, natural join, or some combination of natural joins and cartesian products, mostly in obvious ways.

FROM clause missing ...  That's not standard SQL, but some products allow it (and as a consequence, some courses may even teach it).  In principle, SELECT statements without a FROM clause are used to denote table literals.  Use whatever notation your RA course used for denoting relation literals.

Tuesday, January 8, 2013

A letter by Carl Hewitt

Recently, an article by Erik Meijer entitled "All Your Database Are Belong To Us" created a bit of commotion among Relationlanders.  One Carl Hewitt subsequently wrote a letter in support of Meijer's article.  A few observations and personal thoughts about Hewitt's letter.

Hewitt writes :

"Relational databases have been very useful in practice but are increasingly an obstacle to progress due to the following limitations:"

This clearly implies that mr. Hewitt believes that at least once upon some time, a relational database has actually ever existed somewhere, and he has been able to observe, directly or indirectly, that that database was "useful in practice".  Considering the overwhelming evidence that SQL is not relational, I wonder what "relational database" mr. Hewitt has so observed to be "useful in practice".  Anyway.  Is mr. Hewitt unaware of the difference between "relational technology" and "SQL technology" ?  Or does he consider the difference (and its consequences) between the two so meaningless and futile that it does no harm at all to gloss over that difference and speak of SQL as if it were indeed relational ?

Besides.  Looking at his arguments in support of his claim (inexpressiveness of the RA, for example), one cannot help but wonder if mr. Hewitt is even aware of the difference between a "database" (that's the word he used) and that thing that most knowledgeable people usually use the term "DBMS" for ...  Can "inexpressiveness of the RA" possibly be a shortcoming of "an organized collection of data" (if so, how ?), or could it only possibly be a shortcoming of "the software system, the foundations of which are in RA, used to manage the collection of data" ?

What does that tell you about how thorough, accurate and meticulously precise mr. Hewitt tries/bothers to be in his published writings ?

Hewitt writes :

"Inexpressiveness. Relational algebra cannot conveniently express negation or disjunction, much less the generalization/specialization connective required for ontologies;"

Codd's Relational Algebra was proven equivalent to predicate calculus, no ?  So that means that Codd's RA can express both negation and disjunction, right ?  And subsequent definitions of RA that emerged over time (think, for example, of the addition of a transitive closure operation) did not exactly remove the MINUS operator from the existing algebra, right ?  So that indicates that the key operative word in the claim is that vague qualification "conveniently", right ?  Using such a word without being precise about its intended meaning, is just cheap handwaving.

Anyway, the RA has MINUS (and its nephew SEMIMINUS, aka antijoin), and Relationlanders have known for over 40 years that this operator perfectly suits the purpose of expressing any predicate like "... and it is not the case that ...".  It remains an open question what mr. Hewitt thinks is "inconvenient" about writing things such as "<xpr1> MINUS <xpr2>" in, say, Tutorial D code.

Also, there is nothing to stop anyone from defining an algebra that has a "complement" operation (well, so long as all the domains/types are finite, presumably).  This algebraic operation by itself is the exact relational counterpart of the logical operation of negation, taken by itself.  Having to actually compute complements will be contraindicated in most circumstances, as it will typically involve actual computation of what is known in relationland as the "universal relation" for a given heading.  All of that is probably exactly the reason why Codd did not want to include such a "complement" operation in his algebra.

At any rate, I'm still left wondering what mr. Hewitt's problem is here.

Hewitt writes :

"Inconsistency non-robustness. Inconsistency robustness is information-system performance ..."

Note very carefully that Hewitt's complaint here is that the RM lacks "inconsistency robustness", which he then defines to be a performance characteristic.  Performance is not a characteristic of the model.  Anyway.  Once writers start going down this alley, readers can already suspect the kind of, eurhm, "talk" that is about to follow ...

"... in the face of continually pervasive inconsistencies, a shift from the once-dominant paradigms of inconsistency denial and inconsistency elimination attempting to sweep inconsistencies under the rug. In practice, it is impossible to meet the requirement of the Relational Model that all information must be consistent, but the Relational Model does not process inconsistent information correctly. ..."

"Inconsistent" in the context of data[base] management, means the presence of information that is in violation of some stated rule that is supposed to apply to the database.  Or iow, that accepting the "inconsistent" information in the database, makes the proposition that states that the violated rule holds, a false one.  Or iow, the proposition that states that the rule holds in the database, is in contradiction with the "inconsistent" information.  Or iow, accepting inconsistent information in the database is tantamount to accepting contradictions.

And I've been told that it is a proven property (the principle of explosion, or ex falso quodlibet) that you can prove really just anything from a contradiction.

In RM, it is "possible" to consider "inconsistent information", just like in 2-valued propositional and/or predicate logic, it is "possible" to consider contradictory propositions/predicates.  Querying an RM system that holds "inconsistent information" is like applying the rules of logical reasoning to a set of contradictory axioms/premisses.  And blaming the RM for "not processing inconsistent information correctly", is like blaming logic for "not dealing with contradictions correctly" (where by 'correctly', it is implied that it should be something other than just 'false', of course).
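For propositional logic, that "anything follows from a contradiction" property can even be checked mechanically.  A one-liner, just to make the point concrete:

```python
from itertools import product

# (P and not P) -> Q is a tautology: whatever Q is, a contradictory
# premise makes the implication true.  "A -> B" is written here as
# "(not A) or B".
explosion = all((not (p and not p)) or q
                for p, q in product([False, True], repeat=2))
print(explosion)  # True
```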

"... Attempting to use transactions to remove contradictions from, say, relational medical information is tantamount to a distributed-denial-of-service attack due to the locking required to prevent introduction of new inconsistencies even as contradictions are being removed in the presence of numerous interdependencies;"

Anyone who is familiar with the RM, and also with how it is typically criticized, will recognize this immediately as criticizing the model on grounds of implementation issues (locking and transactions), which are orthogonal to the model.  Typical.

Hewitt continues :

"Information loss. Once information is known, it should be known thereafter;"

Should it really ?  I dispute that.  A requirement to never ever ERASE or DELETE or REMOVE just anything, will inevitably bring us to the point where planet earth does not have enough atoms to store all our stuff.

At any rate, there is absolutely nothing in the RM that prevents any database designer from defining structures that keep a record of "which information was known to the database owner during which period of time" and/or "at which point in time the database owner regarded this particular piece of information as no longer relevant and removed it from his operational system".  Even SQL has included features to support such stuff (system-versioned tables and application-time period tables) in the SQL:2011 standard.
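As a sketch of how unremarkable such a design is, here is a hand-rolled "versioned" record (all names hypothetical; SQL:2011 system versioning automates this pattern for tables).  A logical delete closes the row's period instead of erasing the row:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VersionedFact:
    key: str
    value: str
    known_from: date
    known_to: Optional[date] = None   # None means "still current"

def logical_delete(rows, key, when):
    """Retire the current version of a fact without erasing it."""
    for row in rows:
        if row.key == key and row.known_to is None:
            row.known_to = when

history = [VersionedFact("patient-42", "allergic to penicillin",
                         known_from=date(2015, 3, 1))]
logical_delete(history, "patient-42", date(2018, 1, 12))
# The fact is no longer current, but when it was known is still on record.
```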

And in the end, when to DELETE a piece of information should be at the user's discretion (if regulatory bounds apply, the user should of course stay within them, if that wasn't obvious), not at the model's discretion.

Hewitt still hasn't finished :

"Lack of provenance. All information stored or derived should have provenance;"

There is absolutely nothing in the RM to stop a DBMS user from defining structures that record exactly the kind of "provenance" information that Hewitt is talking about (whatever that may be); there is nothing in the RM to stop a DBMS designer from building facilities to automagically populate such structures; and there is nothing in the RM to stop a DBMS user from using such facilities.

Nor should there be any such thing in the RM.
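To illustrate just how little "special support" is needed, a minimal sketch (all names hypothetical) in which provenance is simply more data, keyed to the facts it describes:

```python
facts = {}
provenance = {}   # fact_id -> where the fact came from, and who recorded it

def insert_with_provenance(fact_id, fact, source, recorded_by):
    """Store a fact together with a record of its origin."""
    facts[fact_id] = fact
    provenance[fact_id] = {"source": source, "recorded_by": recorded_by}

insert_with_provenance(1, ("Smith", "blood type O+"),
                       source="lab report LR-1001", recorded_by="dr_jones")
print(provenance[1]["source"])  # lab report LR-1001
```

A DBMS could of course populate such a structure "automagically"; the point is merely that nothing in the model forbids it, and nothing in the model needs to mandate it.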

And on and on it goes :

"Inadequate performance and modularity. SQL lacks performance ..."

So once again he unambiguously states that he thinks that "The RM is obsolete" (that's what the title says) because "SQL lacks performance".  OMFG.

"... because it has parallelism but no concurrency abstraction. Needed therefore are languages based on the Actor Model ( to achieve performance, operational expressiveness, and inconsistency robustness. To promote modularity, a programming language type should be an interface that does not name its implementations contra to SQL, which requires taking dependencies on internals."

Is the term "programming language type" used here with the same meaning as the term "data type" in relationland ?  If so, then the last sentence seems to demand nothing else than that which relational advocates have been demanding for decades already : that the relational model should and must be "orthogonal to type", that is, that it is not up to the model to prescribe which data types should exist/be supported, and which shouldn't.

Of course, relationlanders have known for a long time already that SQL basically flouts that idea, and that its attempts at supporting, in full, the notion of abstract data types are quite crippled and half-baked.  But apparently it does indeed seem to be the case that Mr. Hewitt mistakenly equates "the relational model" with SQL.

And Hewitt concludes :

"There is no practical way to repair the Relational Model to remove these limitations."

Well, this is the first time I can agree with something Hewitt says.  Sadly, as far as I can tell, "these limitations" are not limitations that derive from the Relational Model, rather they seem to derive from his limited understanding thereof.  And indeed there is no repairing a piano when the problem is in its player.