I've noticed a theme lately in a few books and current business periodicals about models - primarily financial and risk-management models - that fail to account for scenarios with little or no chance of occurring, which at some point do happen. Of course, since the models didn't account for such events, the organizations relying on them promptly lost their shirts in the markets, went bust, got bailed out by the government, laid off thousands, ad nauseam.
This isn't exactly news. In the mid-1990s, the hedge fund Long-Term Capital Management (LTCM) made a fortune - for a while - using computerized trading models to exploit pricing differences in various securities and derivatives, and it compounded those activities with enormous financial leverage. Then in 1998, events that could 'never happen' conspired to bring the fund down like a house of cards: Russia defaulted on its debt, markets turned turbulent, and credit seized up. The Federal Reserve hastily brokered a bailout by a consortium of banks and eased rates to stabilize the markets - which responded positively and helped fuel the dot-com boom and the subsequent bust of 2000-2001.
Sound familiar? Are we always doomed to repeat the past?
This is yet another lesson that Black Swan scenarios (see "The Black Swan: The Impact of the Highly Improbable") must be thought through and accounted for in models. That isn't an easy task by any means, but it is necessary. The reason it's hard is that we must leave the comfort of our workaday, modestly predictable existence and play "Chicken Little" - assume the sky is falling, then develop scenarios, and responses to them, that protect (to the extent possible) the assertions and positions the main model espouses. Another conundrum is developing and selecting the "right" outliers and scenarios to incorporate - this can get maddening, but if we don't do it, we're doomed to repeat the past.
What this means for enterprise, data, and software architects is that risk scenarios must be acknowledged and, to the extent possible, accounted for as models are developed and validated. It also means that models must be critically reviewed by peers and stakeholders, top to bottom - including vetting them against realistic scenarios that probably won't happen but would be cataclysmic if they did. A case in point that happens often enough to mention: data architects develop logical and physical models that, while well thought out and designed, fail to take into account data volumetrics, data flows, and the capacity of the production systems and databases to handle the load the models directly or indirectly impose. Then, in test or even in production, with the systems, network, and databases down or at best crawling, it's a really bad time to think, "Damn, we should have simulated data flows and volumetrics to validate the data models!" Next!
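Even a back-of-the-envelope simulation goes a long way here. The sketch below (Python, with entirely hypothetical table names, row volumes, and capacity limits - none of this comes from any particular system) shows the kind of crude volumetric check an architect could run against a logical model long before it reaches a test environment:

```python
# Minimal volumetric sanity check for a proposed data model.
# All figures below are illustrative assumptions, not measurements.

from dataclasses import dataclass


@dataclass
class TableEstimate:
    name: str
    rows_per_day: int      # expected insert volume
    avg_row_bytes: int     # average row width, including index overhead
    retention_days: int    # how long rows are kept before purge/archive


# Hypothetical capacity limits for the target production database.
MAX_STORAGE_GB = 2_000        # total storage budget
MAX_WRITES_PER_SEC = 5_000    # sustained write throughput the platform can absorb
PEAK_FACTOR = 10              # assume peak load is 10x the daily average

tables = [
    TableEstimate("orders",       rows_per_day=2_000_000,  avg_row_bytes=350, retention_days=730),
    TableEstimate("order_events", rows_per_day=20_000_000, avg_row_bytes=120, retention_days=90),
    TableEstimate("customer_dim", rows_per_day=50_000,     avg_row_bytes=800, retention_days=3650),
]

total_gb = 0.0
total_avg_writes = 0.0

for t in tables:
    # Steady-state size: daily volume held for the full retention window.
    steady_state_rows = t.rows_per_day * t.retention_days
    gb = steady_state_rows * t.avg_row_bytes / 1024**3
    writes_per_sec = t.rows_per_day / 86_400
    total_gb += gb
    total_avg_writes += writes_per_sec
    print(f"{t.name:15s} ~{gb:8.1f} GB at steady state, ~{writes_per_sec:8.1f} writes/sec avg")

peak_writes = total_avg_writes * PEAK_FACTOR
print(f"\nTotal storage: ~{total_gb:.1f} GB (budget {MAX_STORAGE_GB} GB)")
print(f"Peak writes:   ~{peak_writes:.0f}/sec (limit {MAX_WRITES_PER_SEC}/sec)")

if total_gb > MAX_STORAGE_GB or peak_writes > MAX_WRITES_PER_SEC:
    print("WARNING: the proposed model exceeds assumed capacity; revisit before it hits production.")
```

The exact numbers matter far less than the exercise itself: forcing the model to answer "how much, how fast, for how long" surfaces the outlier scenario while it's still cheap to fix.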
The ultimate question in running hypotheticals like this is how much risk you and your organization are willing to absorb. Ignoring the minute chance that a Black Swan will arrive works most of the time, but when it does arrive - and it eventually will - the results are never pretty. For anyone.