Does complexity guarantee “system failure”?


According to one journalist, whose speciality is deconstructing accidents, it does (see below). Naturally, we at Ontonix would like to respond to that claim:

When complexity reaches the point of “critical complexity”, system functionality is lost and failure can ensue.

System complexity can be managed; that is what we do! See More Complexity Facts from Ontonix.


Nevertheless, this is an interesting and worrying observation, one that, when taken in the context of Global Financial Services, raises an obvious question:

Will Basel III (or II for that matter) make things better or worse?

While you ponder this (as if it really needs much thought), you may wish to read an extract from the interesting article “Oil, complexity and the inevitability of ‘system accidents’”, which deals with THE spill in the Gulf of Mexico and cites three fundamental truths:

First, the American economy runs on oil and will continue to do so for the foreseeable future, as much as we might wish otherwise.

Second, most oil in more accessible locations has been or is being pumped. There isn’t enough to satisfy our future needs.

Third, large new oil fields have been discovered in deep water in the Gulf and the Atlantic Ocean. Technology has advanced to the point that drilling those deep wells is both technically possible and economically feasible.

But doing so is complex, so complex that mistakes are inevitable. Unfortunately, there’s precious little margin of error when they occur.

Journalist William Langewiesche, who specializes in deconstructing accidents, says that there are three kinds of accidents.

• “Procedural” accidents, in which someone makes a mistake — as when pilot error causes a plane crash.

• “Engineered” accidents, in which materials or structures fail in ways that should have been foreseen by designers and engineers.

• “Systems accidents,” such as the Gulf oil spill, which occur because “the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur.”

Among those “riskiest technologies” are the air transportation system, nuclear power plants, aircraft carriers and, as we now know, deep-water oil drilling. We accept the risks they entail because we like the rewards they provide.

Systems accidents don’t occur because the system failed, they occur because the system exists — and because it is so complicated that inevitably something will go wrong.

One of the implications of “systems accidents” is that when we try to address what went wrong we add even more complexity to an overburdened system. And that increases the risk of accidents.

This is not to say that we should abandon regulatory oversight, mandatory safety reviews or environmental assessments, as some people have claimed. Those are all important safety checks that can help prevent disaster.

It is to say that as long as we continue drilling for oil in deep water, another accident is inevitable. The cast of characters probably will be different, as will be the proximate causes. But — initially, at least — the result will be the same: oil in the water.

It’s a sobering and incredibly inconvenient truth.
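
To make “so complex that mistakes are inevitable” a little more concrete, here is a rough sketch of our own (the numbers are assumed purely for illustration, not taken from the article): if a system contains n independent ways to fail, each with a small probability p, the chance that at least one of them occurs is 1 - (1 - p)^n, which climbs towards certainty as n grows.

# Illustrative sketch only: n independent failure paths, each with an
# assumed per-period failure probability p; the chance that at least one
# path fails in that period is 1 - (1 - p)**n.

def prob_any_failure(n_paths, p_single):
    """Probability that at least one of n independent paths fails."""
    return 1.0 - (1.0 - p_single) ** n_paths

p = 0.001  # assumed: a 0.1% chance that any single path fails in a given period
for n in (10, 100, 1000, 10000):
    print(f"{n:>6} paths -> P(at least one failure) = {prob_any_failure(n, p):.3f}")

With these assumed numbers, ten paths give roughly a 1% chance of some failure, a thousand paths about 63%, and ten thousand paths make some failure all but certain. That is Langewiesche’s “systems accident” in miniature: nothing in particular has to be badly built for the whole to be at risk.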

Interestingly, Kenneth Rogoff (Professor of Economics and Public Policy at Harvard University, and formerly chief economist at the IMF) drew a similar conclusion, which was the subject of a previous blog item: Kenneth Rogoff: The BP Oil Spill’s Lessons for Regulation.


3 Responses to Does complexity guarantee “system failure”?

  1. nick gogerty says:

    Complexity increases the number of hidden paths in the system. The more hidden paths, the greater the potential for one of them to lead to failure. The thing that leads to accelerated system exploration of hidden paths is tight coupling or capacity extremes in sections of the system. Just a thought.

  2. Thank you for your post and for your blog. I love your blog.

  3. Nick Gogerty says:

    Correct, the system operates in a perpetual failure mode in that certain elements in a large enough system are always failing, but resilient systems overcome this with design elements of redundancy and resilience.

    An example of this would be a university with its array of students, classrooms, teachers, papers, etc. as components. Teachers get sick, papers are late or lost, meetings run over, and so on. These small element failures occur all the time and in seemingly random fashion, but degrees still get handed out and the education process continues, for the most part.
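
Nick’s point about redundancy can be sketched in the same rough way (again, the probabilities are assumed for illustration only): an individual element may fail quite often, but if a critical function is backed by several independent copies, the function itself is lost only when every copy fails at once.

# Illustrative sketch only: a function backed by independent copies is lost
# only if all copies fail at the same time.

def prob_function_lost(p_element, copies):
    """Probability that a function is lost because every copy of it fails."""
    return p_element ** copies

p = 0.05  # assumed: a 5% chance that any single element (a teacher, a paper) fails today
for copies in (1, 2, 3):
    print(f"{copies} copies -> P(function lost) = {prob_function_lost(p, copies):.6f}")

With an assumed 5% element failure rate, two copies cut the chance of losing the function to 0.25% and three copies to about 0.01%, which is why the university keeps handing out degrees despite constant small failures.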
