Imin’s Blog

Elasticity and Scalability

Posted on: January 10th, 2011 by Imin Lee No Comments

For data center operators, cloud operators, and service providers, being able to grow their business and scale up is one of the major topics of discussion. Large enterprises, when managing large IT infrastructures and multiple business units, share the same requirements. Elasticity is another topic of interest in recent cloud computing discussions, since it is what enables IT as a utility, or a pay-per-use model.

In order to be elastic, and to pay only for true usage, management software needs to be able to scale up and down. Management and monitoring capability needs to change according to the data center and cloud environment’s demands: network traffic, flows, packets, logs/events, metrics, and devices. Therefore, being able to scale is a prerequisite for being elastic.

A recent visit to a global Fortune 100 enterprise highlighted the strengths of AccelOps versus our major competition. Simply having multiple copies of a SIEM appliance, whether virtual or otherwise, is not synonymous with being able to scale out event cross-correlation capabilities. True elasticity requires clustered cross-correlation capabilities, dynamic deployment of virtual appliances for more processing power, usage accounting, and a flexible licensing model. This is Elastic EPS, and AccelOps provides this unique capability in our Integrated Datacenter and Cloud Monitoring solution.
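
As a rough illustration of the usage-accounting piece, here is a minimal Python sketch of per-tenant EPS (events-per-second) metering. The class name, billing window, and numbers are hypothetical assumptions for illustration, not AccelOps’ actual licensing mechanism.

# Minimal sketch of per-tenant EPS usage accounting (illustrative only).
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    """Accumulates per-tenant event counts so usage can be reported per billing window."""
    window_seconds: int = 3600
    counts: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, tenant: str, events: int = 1) -> None:
        # Attribute every ingested event to the tenant that produced it.
        self.counts[tenant] += events

    def close_window(self) -> dict:
        # Average EPS per tenant over the window; reset for the next period.
        report = {t: c / self.window_seconds for t, c in self.counts.items()}
        self.counts.clear()
        return report

meter = UsageMeter(window_seconds=3600)
meter.record("tenant-a", 7_200_000)   # roughly 2000 EPS sustained for an hour
meter.record("tenant-b", 360_000)     # roughly 100 EPS
print(meter.close_window())           # {'tenant-a': 2000.0, 'tenant-b': 100.0}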


Cloud Computing and Service Provider Focused Management

Posted on: January 4th, 2011 by Imin Lee No Comments

A year ago, I was reading a cloud computing report from Lazard Capital and I could not agree more with them: cloud computing is not about technology, it is all about the business model; it is about how IT is consumed as a utility. So the data center is still made up of servers, storage, networking and security equipment. But how the data center is used or offered as a service is the key topic of interest.

One year later, at Gartner’s Datacenter Conference, cloud computing was greatly demystified. It is no longer something as fuzzy and untouchable as a cloud in the sky. Instead, more concrete definitions of cloud computing, its characteristics, and initial real-world success stories have been shared among analysts, vendors, and industry practitioners.

Management Software, whether it is provisioning, monitoring, or the help-desk, needs to meet the following requirements:

  • Service-result focus: whether it can quickly and easily allow service providers to bring up new services, add new clients, etc.
  • Shared/Multi-tenancy: an environment where multiple organizations or customers are hosted and managed. An efficient way to view, mark, and control them is key, while at the same time reducing the operational overhead of multiple solutions and their corresponding investment costs.
  • Utility-based and usage-chargeable: can the management software support the IT-as-utility model by understanding how each organization and each customer uses resources, and provide the option of charging customers accordingly?
  • Scalable and Elastic: the goal of the service provider is to efficiently service many different customers. Can they easily scale to hundreds of customers? Service providers employ a recurring revenue and fee-based model, so their licensing framework needs to be flexible and allow for unpredictable growth.


If you look at these characteristics, you will see that they are the common practices and requirements seen day-in and day-out in the service provider world, whether it is the traditional telephone service provider or the power and electricity provider. In other words, the data center, whether public or private, needs to run and operate like a service provider. If so, the management software is the key layer that ties together all of the moving parts in the data center to deliver ‘IT as a utility’.

Up to this point, management software of the last 10 – 20 years was focused on the traditional enterprise model. Many vendors’ products were built with one single organization in mind. Although they have been used in many enterprises and service providers, they are in fact serving the management of the data center, not the cloud (again, a difference in the business model). Therefore, in order to provide the ‘IT as a utility’ business model, one cannot just say ‘we have been serving the datacenters and service providers, therefore we are serving cloud computing’.

A 10-20 year old tool cannot all of a sudden become a cloud computing software platform. As a matter of fact, many private and public data centers, and service providers, in the current transition phase to cloud computing or a utility computing business model, are feeling the pain of trying to put a square peg into a round hole. As a result, they are giving up on the legacy solutions and looking elsewhere for innovative solutions that realistically address the requirements and characteristics of cloud computing.

We encourage data center operators, cloud operators, and service providers to investigate these characteristics in management software. You will find exactly that with AccelOps’ Integrated Datacenter and Cloud Monitoring Platform.


Integrated Management Framework and Best of Breed Mini-Product Suites

Posted on: December 17th, 2010 by AccelOps No Comments

Another interesting topic at Gartner’s Datacenter Conference 2010 is the trend of more and more vendor tools providing functional-level integration and data integration between 2010 and 2015. The end goal is to move toward a management framework.

However, there is the challenge of vendor lock-in. So an alternative to having one management vendor’s framework is to purchase a set of best-of-breed mini management product suites in some sub-areas and fill in the gaps with best-of-breed point products. The live audience poll greatly favored this approach.

I am delighted to hear the term ‘mini-management suite’. In a way, we can think of our Integrated Datacenter and Cloud Monitoring solution as a mini-suite in the monitoring area within the datacenter operations management umbrella.

Our approach of integrating availability, performance, security, and change monitoring, coupled with auto-discovery and a CMDB, allows us to truly integrate at the data level. The cross-correlation of all of these multi-sourced data points with powerful analytics capabilities, whether it is logical analytics for relationships and patterns or trending analysis for anomalies and best practices, provides datacenter operations the intelligence and proactive capabilities that they require.

These capabilities allow us (AccelOps) to provide our users (datacenter operators) a best-of-breed, integrated framework or mini-suite in the monitoring area.

Interestingly, when the live audience was polled at the event, 27% responded that they currently use solutions other than the Big 4’s, and 25% plan to use solutions other than the Big 4’s in 2011. When asked about their confidence in the Big 4’s offerings, 34% of the audience responded that they do NOT have confidence in the Big 4 or in infrastructure vendors’ (e.g. Cisco, VMware, Oracle, Microsoft) solutions.

I think that is the reason why our solution is so well received and welcomed by the market: there is a lot of room for new innovations like ours to address issues in the increasingly complex but increasingly important datacenter and its operations.

According to Gartner, there are two other opportunities for new players, besides managing across multi-sourced environments (which is the integration point already mentioned):

  • Alternative delivery methods (e.g. SaaS, subscription model) — AccelOps has this already.
  • Penetrating customers outside the Global 2000 — AccelOps was built for this market with ease of use, ease of deployment, and with the right TCO.

I am glad that we are hitting the mark! I have good feelings coming back from the Gartner Conference — a lot of reassurance and confirmation!


Key Things to Look for in IT Operations Management Tools

Posted on: December 9th, 2010 by Imin Lee No Comments

I just came back from the Gartner Datacenter Conference 2010 in Las Vegas. It was a high-quality conference with all of the relevant attendees: datacenter executives, mid-level managers and directors, and datacenter architects. Of course, being Gartner, it was packed with multi-track sessions, discussions, and interesting keynotes, along with lots of metrics and interesting facts. One of the things I liked was the live polling of the audience, which came back with data very much matching the metrics Gartner had gathered in the marketplace. Bingo!

There are a lot of take-aways from this conference. Here, I would like to focus on one thing: what are the key things to look for in an IT Operations Management tool?

In one of the town hall meetings dedicated to the topic of IT Operations, all of the Gartner analysts in this area, such as David Williams, Debra Curtis, and Ronni Colville, gave their thoughts. One audience member stood up and asked: “How can I tell the difference between all the IT operations management tools as they all sound alike?” The answer is: yes, the majority of them do sound the same. Unfortunately, according to Gartner, today’s tools are still very much fragmented in functionality. They are not tying together multi-sourced data, and not tying that with live discovery information in the CMDB. This is something that is very difficult to get right.

So the key things to look for in an IT management tool are:

  • Can the CMDB be auto-populated by auto-discovered data? Auto-discovery is the key.
  • Can all of the data (events, logs, metrics) be cross-correlated and is the up-to-date CMDB data referenced for the understanding of availability, performance, security and change?
  • Is the GUI very intuitive, and is information presented in an easy to understand and easy to analyze fashion?

Being an audience member in this session, I could not be happier to hear these key points from the analysts. For us, it is a confirmation and reassurance of our product direction, and on how we see the IT operations market challenges and how we address them.

The above-mentioned items for an IT management tool are the key differentiators for AccelOps, where our customers appreciate the capability of our auto-discovery and auto-population of the CMDB. The cross-correlation engine and analytics capabilities are highly regarded, and they are core to our DNA. AccelOps is designed to collect and cross-correlate all sources of data for multiple functions: availability, performance, security and change monitoring, and proactive alerting. And the flexibility and the presentation of the data via our GUI are a generation ahead of the tools existing in the market (as quoted by some of our customers and industry analysts).


AccelOps comments on ArcSight Acquisition by HP and SIEM market

Posted on: September 14th, 2010 by Imin Lee No Comments

Today’s acquisition of ArcSight by HP not only validates that the Security Information and Event Management (SIEM) market is a proven market with widespread adoption by organizations globally, but also that a new and improved approach to managing enterprise IT environments is needed.
According to Bill Veghte, EVP of HP Software & Solutions,

the market needs a better approach to managing the escalating complexity, escalating threats, and escalating regulation, and an approach where security and IT operation are converged, not siloed, an approach that is holistic and proactive, an approach that breaks down the traditional silos between IT operations and security to provide broader visibility, better context, and better continuity. So that is where we will focus.

These fundamental points and sound bites are so familiar to us – they are something that we have been preaching since we started the company almost three years ago. This is also the exact reason and philosophy behind this, our second startup, after our first SIEM startup (Protego Networks) was acquired by Cisco in 2005.

Some of these key points and philosophies can be found in my earlier blogs (why now? and why accelops?) and in our SOC/NOC convergence whitepaper.

So in a nutshell, we are happy to see that these fundamental points on SOC/NOC convergence, on the holistic approach vs. the siloed approach to managing the datacenter and IT environment, are validated by today’s acquisition, and these same principles are emphasized over and over again in Bill Veghte’s press conference.

We, AccelOps, as an Integrated Datacenter & Cloud Management solution company, have integrated the management of availability, performance, security and compliance in a holistic way. The key is to allow enterprise organizations and cloud providers to be more Intelligent, Proactive and Secure in delivering IT services, without the heavy integration costs and high SW investment costs as with HP and ArcSight. The concept of convergence is a reality for us, as opposed to just wishful thinking in IT managers and executives’ heads.  AccelOps provides best-of-breed SOC/NOC convergence today, while HP and ArcSight will start thinking about convergence in the days, months and years to follow.

In the last year, since we launched our company, we have received an overwhelming response from the market and from our customers. The awards from industry experts, the reviews from the analysts, and the growth of our sales pipeline prove that we are on the right track for what the market needs. The HP/ArcSight acquisition is just another positive validation of our story.

Other analysts’ assessments concur. See Mike Rothman’s comments on HP’s acquisition at
http://securosis.com/blog/hp-sets-its-arcsights-on-security

Datacenter & Network Management: AccelOps – Jack of All Trades and Master of None?

Posted on: May 13th, 2010 by Imin Lee No Comments

It has been a while since my last blog, where I promised to write about whether the AccelOps monitoring solution is a ‘Jack of all trades, and master of none’.

So here I am, on Mother’s Day, sitting in front of the computer writing a geek’s blog. A workaholic mom and entrepreneur? I guess both are true. To me, building a startup is like raising kids: it requires 120% attention, hard work, and commitment; there is no other choice.

Before I answer the ‘Jack of all trades and master of none’ question, let me start with the requirements for datacenter and network management in 2010, by quoting Evelyn Hubbert of Forrester Research:

The element-based network management era is over. Today, network management teams need to manage and understand network-related issues across silos such as servers, storage, security, databases, and applications. They need to manage complex and dynamic IP networks to connect customers, vendors, and employees. Forrester sees the traditional network management space becoming service-oriented: The attention is on the service that is being delivered to the business. Innovations such as IT automation and Web services management techniques, and best practices such as ITIL, have changed the network management market and will continue to shape it in the years to come.

How true! Data center and network management cannot be at the element level anymore. That used to be the case in the ’80s when I worked at AT&T Bell Labs, and in the ’90s when I was working in the Network Management Business Unit at Cisco; but not in 2010. Times have changed, the datacenter infrastructure has evolved, and so have its requirements. Managing the business services that the datacenter infrastructure and elements are delivering is now the key.

In order to meet these requirements, two fundamental pieces in a management solution have to be done well first: CMDB and mapping infrastructure impact to business services.

The CMDB is a great concept and a cornerstone of the new management paradigm: managing by services. But we often see mid-size and large enterprises embark on the process and fail to make much progress. This is because they often start with a top-down approach: cross-functional teams with Excel files trying to map out the organizations, the ownership of the infrastructure, and the dependencies of the business services. The process is too heavy; its cost outweighs the benefits and defeats the original intent.

As Evelyn Hubbert from Forrester Research sees it:

A CMDB is a fundamental component of an ITIL framework. The CMDB records Configuration Items (CIs) and the details about the important relationship between CIs. A CI is an instance of an entity that has configurable attributes – for example, a computer, a process, or an employee. A key success factor in implementing a CMDB is the ability to automatically discover information about the CIs – Autodiscovery

In complete agreement with her, we believe that the bottom-up approach via auto-discovery is the right way to go: automatically discover what is in the datacenter, including the network, map the applications to the infrastructure, and map out their relationships. Gaining visibility is key. Once the IT organization has the map, it can then easily define the business services’ relationships to the infrastructure via that map. This discovery-driven CMDB approach not only makes it easy to populate the CMDB for the first time, it also helps to keep the CMDB up to date. Periodically rediscover, or rediscover upon changes, and you are done!

Many years of experience working in the network and security management field have taught me that the discovery process has to be very easy for the user; otherwise it defeats the purpose again.

With that philosophy, what we have built is something that requires very little from the user to quickly get to the final goal: simply define the credentials for the devices and applications, define the appropriate network range(s), and the tool takes over from there. The AccelOps discovery engine discovers all the pieces in the datacenter infrastructure, their attributes and inter-relationships, and how they relate to and impact the critical business services and applications. It discovers the configuration inside the devices, the installed and running software, the patches… It discovers L2/L3 relationships, guest OS to ESX relationships, wireless AP to controller relationships, switch module to switch relationships… It understands the changes: differences between saved and running configurations, between saved configurations, ports going up and down, applications going up and down… It categorizes devices and applications and presents them in a very logical and easy-to-understand graphical way. To do all of the above requires an understanding of network, systems, applications, and storage. In a complex and large datacenter environment, this is a non-trivial job, as there are so many network scenarios and so many combinations of network configurations.
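
To make the idea concrete, here is a minimal Python sketch of a discovery-driven CMDB under stated assumptions: the probe() stub stands in for real SNMP/WMI/SSH/API collection, and the CI structure and function names are illustrative only, not the AccelOps discovery engine.

# Minimal sketch: credentials + network range in, auto-populated CMDB out.
import ipaddress
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CI:
    """A configuration item: a device, application, or other managed entity."""
    name: str
    ci_type: str                                     # e.g. "switch", "server", "application"
    attributes: dict = field(default_factory=dict)   # config, installed software, patches, ...
    relations: list = field(default_factory=list)    # e.g. ("runs-on", "esx-host-3")

def probe(ip: str, credentials: dict) -> Optional[CI]:
    # Placeholder for real interrogation over SNMP, WMI, SSH, VMware APIs, etc.
    return None

def discover(network: str, credentials: dict, cmdb: dict) -> dict:
    """Walk a network range, auto-populate the CMDB, and flag changes on rediscovery."""
    for ip in ipaddress.ip_network(network).hosts():
        ci = probe(str(ip), credentials)
        if ci is None:
            continue
        previous = cmdb.get(ci.name)
        if previous and previous.attributes != ci.attributes:
            print(f"change detected on {ci.name}")   # feeds change monitoring
        cmdb[ci.name] = ci
    return cmdb

# The user supplies only credentials and ranges; rerunning discover() keeps the CMDB current.
cmdb = {}
discover("10.0.0.0/28", {"snmp_community": "public"}, cmdb)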

The undertaking of the above tasks does not sound like something a ‘jack of all trades and master of none’ would be able to pull off, does it? It requires deep understanding of, and domain knowledge in, network, systems, and application management.

Now let’s get to the second fundamental in today’s datacenter and network management: mapping infrastructure impacts to business services. Here I would like to use examples to show how the requirement of managing by business services cannot tolerate a ‘jack of all trades and master of none’.

In order to be able to manage by business services and map the infrastructure’s impact, a solution must be able to do the following, as a minimum:

(1)  Define a problem, an exception or a vulnerability involving any datacenter infrastructure component and detect the issue in real-time. Here are a few datacenter scenarios:

Example 1:  Service health critical

For the same hostIP, if

average cpu utilization >90% or (average memory utilization >98% and paging rate > ) or (disk I/O utilization > ) or max interface utilization >50% from 3 consecutive samples within 10 minutes

then generate an incident (alert)

Example 2:  Excessive vMotion migration

For the same VMName, if

3 or more VM-Hot-Migration events or VM-Migration events in a 15 minute window,

then generate an incident (alert)

Example 3: Excessive End user DNS queries to unauthorized DNS servers

For the same srcIP, if

TCP/UDP port = 53, destination IP is not in internal DNS server group, source IP not in management applications group and not in internal DNS server group, and source IP is from the inside, and if this happens 10 times in a 5 minutes window,

then generate an incident (alert).

Note that internal DNS server group and internal management application group are populated from auto-discovery.

Example 4: User added as admin in the accounting application. Provide the identity of the user.

If a VPN login event is followed by a windows server login event, followed by a user added to global admin group event, within a 15 minute window, and the following conditions are met:

VPN login source IP = windows server login source IP and

windows server login user = user add event’s user and

windows server login id  = user add event’s logon id and

reporting IP in accounting server group

then generate an incident (alert)
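
As a rough sketch of how one of these first-level rules could be evaluated, here is Example 2 expressed in Python as a sliding-window count keyed on VMName. The event field names and the shape of the rule engine are hypothetical assumptions, not AccelOps’ actual rule language.

# Minimal sketch of Example 2: "3 or more migration events for the same VM in 15 minutes".
from collections import defaultdict, deque

WINDOW_SECONDS = 15 * 60   # the rule's 15-minute window
THRESHOLD = 3              # 3 or more migration events for the same VM

recent = defaultdict(deque)   # VMName -> timestamps of recent migration events

def on_event(event: dict) -> None:
    """Feed every parsed event through the rule; raise an incident when it matches."""
    if event.get("type") not in ("VM-Hot-Migration", "VM-Migration"):
        return
    name, ts = event["vm_name"], event["timestamp"]
    window = recent[name]
    window.append(ts)
    while window and ts - window[0] > WINDOW_SECONDS:   # expire samples outside the window
        window.popleft()
    if len(window) >= THRESHOLD:
        print(f"incident: excessive vMotion for {name} ({len(window)} migrations in 15 min)")

for t in (0, 200, 500):       # three migrations of the same VM within the window
    on_event({"type": "VM-Migration", "vm_name": "web-01", "timestamp": t})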

(2)  Define what makes up a business service and how any of the problems defined in step (1) relate to that business service. Here is an example of one such scenario:

Example. If there are any incidents for the objects/components in a business service (devices, applications, users, etc.), generate an incident for that business service. (This requires nested rule support, i.e. a second-level rule fires based on first-level rules.)
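
Continuing the same hypothetical sketch, a nested (second-level) rule might look like the following; in practice the service membership would come from the CMDB, and the object and service names below are illustrative only.

# Minimal sketch of a second-level rule: component incidents roll up to a business service.
business_services = {
    "online-billing": {"web-01", "db-02", "accounting-app"},   # members discovered/defined via the CMDB
}

def on_component_incident(incident: dict) -> None:
    # First-level rules (like the examples above) emit incidents naming the affected object.
    obj = incident["object"]
    for service, members in business_services.items():
        if obj in members:
            print(f"incident: business service '{service}' impacted via {obj}")

on_component_incident({"object": "db-02", "rule": "Service health critical"})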

(3)  All the definitions can be easily entered by the user via the GUI, so users can define these scenarios and behaviors based on their own IT knowledge without waiting for the software vendor to come up with a new upgrade to support them.

So now you can see that without a good understanding of network, systems, applications, VMs, storage, and security, without the capability to describe that understanding, and without the capability to monitor and detect exceptions, anomalies, and problems based on it, there is no way to even meet the basic requirement of managing by business service.

So today’s datacenter and network management sets a much higher bar for management solutions. The existing silo-ed management solutions cannot cut it simply because they do not have a common analytical framework to handle all the data from disparate parts of the datacenter infrastructure. This, however, is what AccelOps does via a deep-and-wide understanding of the datacenter.

Why You? Why AccelOps?

Posted on: April 14th, 2010 by Imin Lee 2 Comments

In the last blog, I wrote about ‘Why Now?’ In this one, I will attempt to answer the ‘Why You?’ and ‘Why AccelOps?’ questions.

To better answer this question, let’s look at some of the very similar characteristics in the security space.

Security is famous for being dynamic: attack and vulnerability scenarios are constantly changing, and changing fast. On top of that, the security infrastructure is also very dynamic: frequent IPS/IDS signature updates, frequent FW rule/policy updates, … User identity can also change fast – once a user logs out of the VPN, that same IP is immediately assigned to another user. Security is now also built into the infrastructure in most cases, becoming part of the infrastructure as a security service.

To address the challenges in security, problem identification/resolution and incident handling need to be very fast, if not real-time, due to the nature of the problem. So it is time sensitive.

Now, you can see that today’s datacenter availability/performance management has a strong similarity to security management – a dynamic environment, dynamic scenarios/behaviors, and time sensitivity.

Let’s look at how security technology has advanced in the last few years:
• Take in all data from the infrastructure: Firewalls, IPS/IDS, Anti-virus, Web proxies, mail gateways, VPN, Wireless Controllers, Servers, Applications, Switches, Routers, …
• Take in netflow data to understand the traffic patterns
• Take in the user/identity data to understand the who and the whom
• Take in the infrastructure data to understand configurations, policies, topological relationships, locations, …
• Cross-correlate all of this data to come to a quick conclusion in near real time: (1) is it an attack? (2) is some vulnerability being exploited? (3) who did it, who is impacted, and where is the attacker located? (4) what else could be impacted?

So in the last 5-7 years, cross-correlation technology for security has advanced greatly in algorithms and techniques: scenario-based analysis, statistical profiling, cross-domain false positive detection, etc. These advances broke the original silo-ed way of managing security: FW-only management, IPS-only management, AV-only management, etc. As a result, security management became much more effective, and the field of SIEM – Security Information and Event Management – was born. By aggregating all the data, SIEM vendors also (almost accidentally) helped to solve the big compliance management problem, and as a result SIEM/Log Mgmt vendors became household names in the IT space.
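
To illustrate just one of those techniques, here is a minimal Python sketch of statistical profiling: learn a per-source baseline and flag samples that deviate far from it. The metric, sample values, and three-sigma threshold are illustrative assumptions, not any particular vendor’s algorithm.

# Minimal sketch of statistical profiling for anomaly detection.
import statistics

def is_anomalous(history, current, sigmas=3.0):
    """Flag the current sample if it sits more than `sigmas` standard deviations above the baseline."""
    if len(history) < 10:                 # need enough history to form a baseline
        return False
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return stdev > 0 and current > mean + sigmas * stdev

baseline = [120, 130, 118, 125, 122, 131, 119, 127, 124, 126]   # e.g. logins per hour
print(is_anomalous(baseline, 129))   # False: within the learned profile
print(is_anomalous(baseline, 450))   # True: large deviation, candidate incident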

If you look at how the security space tackles the attack detection/identification and incident response problems, you can probably guess what I am about to write…

You can see that availability and performance management for the data center can learn a few things from security, as their characteristics in today’s environment are very similar. What is needed is to take what we have learned in the security space and push cross-correlation to the next level to holistically address the challenges in the availability, performance, security, and log/compliance management space.

In order to do so, a lot of innovation and know-how is needed. Basically, the cross-correlation techniques need to be enhanced to accommodate a much larger and more diverse set of parameters; the system needs to understand a much larger set of devices, different types of datacenter technologies, and their inter-relationships and inter-dependencies. The analytics engine also needs advanced built-in cross-correlation logic that administrators can use to describe complex scenarios and behaviors based on their data center operations knowledge.

With that, the end result is a bottom-up integrated platform capable of doing all the above and breaking all the traditional product silos in the IT and datacenter management space. The current datacenter product sets are technology based and very fragmented: one tool for network management, one tool for system management, one tool for application performance management, one tool for security/compliance management, etc. The integrated platform eliminates blind spots in IT management and the ‘not my problem’ scenes in the workplace; it is also efficient at root-cause detection and proactive in alerting on future problems. It has all the analytic capabilities to provide executive decision support based on KPIs and trends… It is a single pane of glass that facilitates better collaboration and cooperation, while at the same time allowing individual focus for different IT functions.

The innovation does not stop here. It also needs to be very easy to use:
• easy to collect all the data, by auto-detection of data source and format, or with a minimum of human intervention to specify how to collect them
• built-in intelligence and know-how so that administrators do not need advanced IT knowledge to debug and spot problems; instead they can enjoy the benefit of knowledge gained by domain experts over the years, shipped with the product
• easy for the user to describe new IT problems and security threat scenarios
• easy to deploy, easy to provision, and with no requirement for new hardware in the datacenter beyond the common VM and server architecture already there
• capable of scaling in computation and storage for large datasets and complex analytics

Last, but not least, the product needs to be flexible in distribution to meet various needs. In today’s enterprise IT world, a product not only needs technology innovation, it also needs flexible distribution channels, e.g. multi-tenancy for SaaS, MSP, and cloud providers, and for everything from small enterprises to emerging enterprises.

Nirvana, isn’t it? If you think of what James Cameron could do with Avatar that he could not have done 10 years ago, you can see that what I have described here can be done, with enough know-how and smart execution.

What AccelOps has done is exactly that – a platform that allows for end-to-end datacenter and cloud monitoring and management, with the capability of not only the 7 functions that ScienceLogic EM7 or Nimsoft are capable of (namely Net Mgmt, APM, Sys Mgmt, Asset Mgmt, Event Mgmt, Ticket System, SLA), but also SIEM (security mgmt), Log/Compliance Management, Change Management/CMDB, VM Mgmt, and Identity/Location Management.

However, the breadth of the product’s capability should not lead people to think it is a mere bundling of functions, although that is exactly what happened as the Big 4 IT management companies (IBM, HP, CA, BMC) and even emerging vendors like Nimsoft bought companies and created a patchwork of products. Instead, AccelOps is a true bottom-up integration using the technology innovations that I described above. Certainly, a secondary benefit is IT management tool consolidation. Sound familiar? Datacenter infrastructure is converging, so why not the management space?

In the next blog, I will try to answer the question of ‘Yes, that is wonderful. But are you a jack of all trades and master of none?’

Stay tuned…

Imin

Why Now: The recent acquisition of Nimsoft by Computer Associates

Posted on: April 9th, 2010 by Imin Lee No Comments

The recent acquisition of Nimsoft by Computer Associates, and leading venture firm New Enterprise Associates’ investment in ScienceLogic (both companies in the availability and performance management of datacenters), validate the point: the datacenter and cloud computing markets are growing, and a new way of managing these environments is urgently needed.

There are many write-ups on the event:

“The current market landscape consists of both products that were designed 15-20 years ago to address a different set of problems than those that end-users are experiencing today…The mid-market …underserved. Large vendors… not focusing on this market segment, and they were not able to offer solutions that were flexible and easy to use and manage, but still include all technology capabilities that end-users needed” – Bojan Simic, Trac Research http://www.trac-research.com/CANimsoft_ME.pdf

“With Nimsoft, CA can bring a tailored IT management solution to a new set of customers — emerging enterprises ($300M-$2B Revenue), emerging national markets, and the MSPs and cloud service providers that serve those markets.
But to serve these emerging enterprises, you have to do things differently
… software needs to be straightforward (and even easy) to install, use, and maintain. It needs to have a very broad but focused set of functional capabilities, stretching across a significant amount of the environments, components, and devices that an organization will want to manage, inside and outside its firewalls”, Jay Fry, CA Inc.
http://datacenterdialog.blogspot.com/2010/03/ca-and-nimsoft-because-smaller.html

I would also like to point out that these datacenter management companies becoming hot all of a sudden after 7 – 10 years in business (Nimsoft – 10 years, ScienceLogic – 7 years) shows that the market timing is now – there is pent-up demand for better management solutions in the high-growth datacenter market.

So what has all this got to do with AccelOps? In building the Integrated Datacenter and Cloud Service Management business for the last two years, many people have asked me these questions: “Why Now? Wasn’t the problem there before? Why you? Why is AccelOps better?” After answering the same questions in many emails and calls to customers, market analysts, investors, and employees, I thought it would be better to just put down my thoughts here…

So let’s get to the ‘Why Now?’ question.

In order to answer that, let’s look at what is normally called ‘the compelling event’ in sales practice. A compelling event consists of two things: what is the pain point, and is it time sensitive?

Driven by outsourcing, consolidation, and cost optimization, datacenters in recent years have been going through a lot of changes: the proliferating use of VMs, VMware’s reference storage architecture, and the unification of storage and networking.

Today’s datacenter is definitely more complex and there are many moving parts – it is not the same datacenter that we used to know 10 or 20 years ago. Things are more fluid now: resource optimization is causing virtual machines to move between physical machines and dynamically connect to different virtual switch ports at different times, applications are dynamically load balanced across different virtual machines in a cluster, and users access the network from different entry points (mobile, VPN, wired ports).

In the old days, server environments used to be static. The presentation layer (GUI), the computation layer (servers), and the storage layer were relatively independent of each other. But that is not the case anymore: VM images are stored in a remote shared store and moved around by the hypervisor, data is computed on the servers and shipped across high-speed networks to remote stores, and networking and storage traffic share the same switch. The storage layer is becoming a very integral part of the computation layer. Needless to say, the same is true for the network layer…

The lines between applications, servers, network, and storage are becoming blurred. Cisco’s attempt to unify computing, networking, and storage by entering the server business (UCS), HP getting into the high-end enterprise switching market (via the 3Com acquisition), and Oracle buying Sun signify just that.

So that is the pain point: a complex and dynamic environment – and how do you manage it?

Now, let’s get to the second attribute of the ‘Compelling Event’: time sensitivity.

This complex data center environment happens to support business-critical applications and services. This is especially true these days for the service providers and cloud providers, whose top line (revenue) comes directly from the datacenter operations. There is an SLA attached to business applications. When I was working at AT&T Bell Labs, 99.999% was the only thing for the telephone network. Now 99.999% is the requirement for the business applications and the data network, and it is the critical differentiator for service providers! With that level of SLA attached to the complex datacenter, problems need to be resolved fast, and problems need to be predicted and avoided.

By the way, and not as an afterthought, what about the security threats to the datacenter and cloud? How do you keep it secure and the data protected? Do you have a situation where you are debugging a network slowness issue while wondering whether it could be the result of an access control violation, an unauthorized change, an unpatched server in the test network generating lots of network traffic, or a server performance problem? How do you quickly find out who has access to the information, when you may not have the permission to do so? What about compliance? It is part of the service delivery too! No one is going to argue that when a security threat brings down a datacenter network, it has nothing to do with the SLA…

So how do we solve this new problem? In the next blog, I will write about the ‘how’ part, which is ‘why you?’, ‘Why AccelOps?’…

Thanks,

Imin