Complexity and Operational Risk


[Image: a collage of Systems Engineering applications… (via Wikipedia)]

An interesting thing about complex systems is how easily "stories" from one domain can be understood and applied, to good effect, in others! "Thinking in systems" may be the first step in the process from exaptation to adaptation…and survival.

Prevention will always be better (and more cost-effective!) than cure. Hence Ontonix have a specific product: OntoTest.

The reason bars place bouncers at the door is that it’s easier and less risky to prevent entry than to root out trouble later.

No one ever said choosing a career in IT was going to be easy, but no one said it had to be so hard you’d be banging your head on the desk, either. One of the reasons IT practitioners end up with large, red welts on their foreheads is that data centres tend to become more, not less, complex, and along with complexity comes operational risk. Security, performance, availability: these three inseparable issues often stem not from vulnerabilities or poorly written applications but merely from the complexity of the data centre network architectures needed to support the varying needs of both the business and IT.

Unfortunately, it is often the case that as emerging technologies creep (and sometimes run headfirst) into the data centre, the network is overlooked as a potential source of risk in supporting them. Traditionally, network readiness has entailed some load testing to ensure adequate bandwidth for a new application, but rarely do we take a look at the actual architecture of the network and its services to determine whether it can support new applications and initiatives. It’s the old “this is the way we do this” mantra that often ends up being the source of operational failure.
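As a minimal sketch of what "looking at the actual architecture" could mean in practice (the topology and device names below are invented for illustration, not taken from any real data centre), model the network as a graph and ask which devices are single points of failure, something no amount of bandwidth load testing will reveal:

```python
# A hypothetical data-centre topology modelled as an undirected graph.
# Requires: pip install networkx
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("core1", "agg1"), ("core1", "agg2"),    # core to aggregation layer
    ("agg1", "access1"), ("agg1", "access2"),
    ("agg2", "access3"),
    ("access3", "app-server"),               # the application under test
])

# Articulation points are nodes whose removal disconnects the graph:
# every one of them is a single point of failure.
spofs = sorted(nx.articulation_points(G))
print("Single points of failure:", spofs)
# -> ['access3', 'agg1', 'agg2', 'core1'] for this toy topology
```

In this toy tree every internal switch is a single point of failure; each redundant link added removes nodes from that list, at the cost of more paths (i.e. more complexity) to reason about.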

COMPLEXITY MEANS MULTIPLE POINTS OF FAILURE…

via BFF: Complexity and Operational Risk.

As recycling is generally good for everyone (!?), I will seize this opportunity to link to a previous article (from November 2010):

The basic problem is the larger and more expensive an IT project is, the more likely it is to fail. You can do a lot of analysis as to why that is. You can say maybe we’re not using the right methodology, or communications is failing, or any number of things. But ultimately the only variable that appears to correlate closely with failure is complexity.
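That claim is ultimately a statistical one. As an illustration only (the figures below are invented, not data from the article), it amounts to asking whether a crude complexity score correlates with a binary failed/succeeded outcome across projects:

```python
# Toy illustration with invented numbers: does a complexity score track
# project failure? (Pearson's r on a binary outcome is the point-biserial
# correlation.) Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

complexity = [12, 35, 18, 60, 25, 80, 45, 95, 15, 70]  # hypothetical scores
failed     = [ 0,  0,  0,  1,  0,  1,  1,  1,  0,  1]  # 1 = project failed

r = correlation(complexity, failed)
print(f"complexity vs. failure: r = {r:.2f}")  # strongly positive here,
                                               # by design (r ≈ 0.88)
```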
