This is how IT departments can survive data overload

Data. The very word conjures up thoughts of responsibility in the typical data center. For most IT departments, it's all about processing the data, providing the data, storing the data, and protecting the data - all in a quest to achieve business goals.

Yet IT departments are drowning in another type of data, one that is invaluable to operations but that many wrongly assume offers little return for the business.

That data comes in the form of logs, alerts, transactions, and packets, and it is generated automatically by the system-wide infrastructure and associated devices. Simply put, every layer of the stack creates some type of data that has value - data that assorted management consoles and tools use to keep IT workers informed of infrastructure and application health. It sounds simple enough, at least in theory, but the truth is that IT is drowning in this data, which is often parceled off to different management silos.

How to fix the problem

If IT departments could get a better handle on that data and unify it into a single management platform, the rewards could be significant, according to John Gentry, VP of marketing and alliances at the IT performance management company Virtual Instruments.

In Gentry's 20 years of working in IT, he has observed numerous occasions in which data that could fuel solutions and improve operations is ignored, simply because no one is looking at it on a large scale. "Taking control of the data and creating context around it is the key to improving the efficiency of the data center," he says.

The payback can be substantial because IT departments can mine meaningful insights out of the data. "There are a host of possibilities that analytics can deliver," Gentry says, "ranging from improved infrastructure performance to speeding up troubleshooting, to ultimately driving better business performance at a lower overall cost."

He adds that there are obstacles standing in the way of tying context to data and then effectively analyzing the data for actionable insights. Overcoming those obstacles takes a careful process to maximize the value, velocity, and veracity of the data.

1. Tear down the silos of isolation.

Many monitoring tools use their own proprietary systems to capture data. The trick is to unify the data gathering and process it in real time into a platform that can digest the multiple feeds and provide context.
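
To make that concrete, here is a minimal sketch in Python of what normalizing two different monitoring feeds into one shared schema might look like. The feed names and fields are entirely hypothetical, not any vendor's actual format.

```python
# Illustrative only: hypothetical feeds and field names, not any vendor's format.
def normalize(record: dict, source: str) -> dict:
    """Map a tool-specific record onto a common event schema."""
    if source == "san_probe":          # wire data, e.g. a fabric/network probe
        return {
            "timestamp": record["ts"],
            "metric": "io_latency_ms",
            "value": record["lat_ms"],
            "entity": record["lun"],
            "source": source,
        }
    if source == "hypervisor_api":     # machine data, e.g. host-level metrics
        return {
            "timestamp": record["time"],
            "metric": "cpu_util_pct",
            "value": record["cpu"],
            "entity": record["vm_name"],
            "source": source,
        }
    raise ValueError(f"unknown feed: {source}")

# Two feeds with different shapes collapse into one stream of comparable events.
events = [
    normalize({"ts": "2015-06-01T12:00:00Z", "lat_ms": 4.2, "lun": "lun-17"}, "san_probe"),
    normalize({"time": "2015-06-01T12:00:00Z", "cpu": 63.0, "vm_name": "db-01"}, "hypervisor_api"),
]
print(events)
```

Once every feed lands in the same shape, a single platform can digest them together instead of each tool keeping its own private view.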

2. Bring context into the picture.

Data without associated context offers very little value when it comes to simulation, planning, and automation. It is critical to combine machine and wire data so traffic, performance, location, usage, and applications are bound together to bring context to the actual data being monitored. In other words, it pays to know not only that packets are traversing the network, but also how they relate to the workloads being managed and delivered.
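
As a rough illustration, the sketch below joins hypothetical wire data (observed flows) with hypothetical machine data (an inventory of which workload owns which address) so each flow carries application context. All names and addresses are made up for the example.

```python
# Illustrative only: the inventory and flow records are made-up examples.
inventory = {
    "10.0.0.12": {"app": "order-service", "tier": "web", "site": "nyc-dc1"},
    "10.0.0.45": {"app": "orders-db", "tier": "database", "site": "nyc-dc1"},
}

flows = [
    {"src": "10.0.0.12", "dst": "10.0.0.45", "bytes": 48000, "latency_ms": 3.1},
]

def add_context(flow: dict) -> dict:
    """Attach application, tier, and site metadata to a raw flow record."""
    enriched = dict(flow)
    enriched["src_context"] = inventory.get(flow["src"], {"app": "unknown"})
    enriched["dst_context"] = inventory.get(flow["dst"], {"app": "unknown"})
    return enriched

for flow in flows:
    # Now the packet-level observation is tied to the workloads it serves.
    print(add_context(flow))
```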

3. Correlate the data.

With context as part of the picture, it's much easier to accurately understand correlations between demand and availability, and how, why, and where performance degradations are occurring. Correlating how data moves is crucial to determining scale, efficiency, and availability - and then using that information to create models that offer predictions based upon needs.
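
A toy example of that kind of correlation, using invented numbers and Python's standard library (statistics.correlation requires Python 3.10 or later), might look like this:

```python
# Illustrative only: made-up demand and latency series for the same time windows.
from statistics import correlation  # Pearson correlation, Python 3.10+

requests_per_sec = [120, 180, 240, 310, 400, 470]
latency_ms       = [4.1, 4.3, 5.0, 6.2, 8.9, 12.4]

# A strong positive coefficient suggests demand is driving the degradation,
# which points capacity planning at the right tier.
print(f"demand/latency correlation: {correlation(requests_per_sec, latency_ms):.2f}")
```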

4. Establish baselines.

With wire and machine data gathered and correlated, it becomes easier to create baselines of network traffic flow, which can then be used to determine what traffic is normal and what isn't. Baselines are key to detecting anomalies caused by security issues, equipment failures, unauthorized changes, rogue IT implementations, or almost any "unexpected" change to the system. What's more, baselines can also be used to measure the value of changes or improvements.
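
A bare-bones illustration of the idea, with made-up throughput figures and an arbitrary three-sigma threshold:

```python
# Illustrative only: made-up throughput history and an arbitrary 3-sigma band.
from statistics import mean, stdev

history = [410, 395, 402, 420, 398, 405, 415, 400]   # e.g. MB/s on a storage link
baseline_mean = mean(history)
baseline_dev = stdev(history)

def is_anomaly(sample: float, sigmas: float = 3.0) -> bool:
    """Flag samples that fall outside the expected band around the baseline."""
    return abs(sample - baseline_mean) > sigmas * baseline_dev

print(is_anomaly(407))   # False: within normal variation
print(is_anomaly(720))   # True: worth investigating (failure, rogue change, attack)
```

The same baseline also gives a before-and-after yardstick when a change or upgrade is rolled out.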

5. Strive for automation.

With the proper management tools in place, the data can be used by scripted policies to drive automated events, such as automatic scale-out or rerouting of traffic. Policies can also trigger other events or warnings to keep IT management aware of what is happening with systems operations. But all of this becomes possible only if machine and wire data is gathered correctly, with context and analysis added to it.
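
The sketch below shows the general shape of such a scripted policy. The scale_out and reroute functions are hypothetical placeholders for whatever orchestration hooks a given environment actually exposes; they are not part of any particular product.

```python
# Illustrative only: a scripted policy evaluated against contextualized metrics.
def scale_out(service: str) -> None:
    print(f"scaling out {service}")          # placeholder for a real orchestration call

def reroute(link: str) -> None:
    print(f"rerouting traffic away from {link}")  # placeholder for a real network action

POLICIES = [
    # (metric name, threshold, action to take when the threshold is exceeded)
    ("order-service.cpu_util_pct", 85.0, lambda: scale_out("order-service")),
    ("isl-7.utilization_pct",      90.0, lambda: reroute("isl-7")),
]

def evaluate(metrics: dict) -> None:
    """Run every policy against the latest correlated readings."""
    for metric, threshold, action in POLICIES:
        if metrics.get(metric, 0.0) > threshold:
            action()

evaluate({"order-service.cpu_util_pct": 91.2, "isl-7.utilization_pct": 40.0})
```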

A solution for authoritative data analysis

It's clear that wire and machine data can offer significant value to IT operations, and that value extends well beyond just increasing efficiencies. Virtual Instruments offers VirtualWisdom4, a product that can analyze heterogeneous data captured in real time across the IT stack.

VirtualWisdom4 is used by many large organizations to monitor and analyze mission-critical workload data to make sure service-level agreements (SLAs) are met. It also fuels the movement toward infrastructure performance management (IPM), an approach that ties end-to-end computing performance to the metrics generated by the heterogeneous environments in use today across multiple data centers, regardless of the vendor technology installed.

With some basic effort, enterprises today can increase the value of machine and wire data by simply applying the same analytics to it that businesses have used since the dawn of business intelligence.

This post is sponsored by Virtual Instruments.
