Wednesday, September 30, 2015

Software defined networking tutorial

The Coming Of Virtualized Storage. Today there is a race to virtualize every aspect of the data center. Nicknamed the software-defined data center (SDDC), or sometimes the software-defined network (SDN), this is a market IDC projects will exceed $3.7 billion by 2016. It is a hot market, too: just this week, Cisco, IBM, VMware, Red Hat and others came together under a consortium organized by the Linux Foundation called OpenDaylight. But while this is a significant step toward virtualizing the data center's network layer, it may simply be a prelude to the next phase of virtualization: storage.

VMware led the way in server virtualization in the data center, creating enormous value for its shareholders over the last decade. Originally acquired by EMC for $635 million in 2003, VMware is now an independent company with a market capitalization of more than $30 billion. Last year it acquired an emerging SDN leader, Nicira, for about US $1.3 billion. That move scared a number of data center vendors, Cisco chief among them, who do not want to see VMware come to dominate network virtualization the way it came to own server virtualization.

Too often overlooked amid the billions of dollars sloshing around the server and network sides of the SDDC competition is its most backward layer: storage. Traditional storage is a $10 billion annual business, but until recently it had made little progress toward virtualization.

That may be about to change.

To better understand the trends shaping the rise of software-defined storage, I recently sat down with Dr. Kieran Harty, CEO of Tintri, maker of storage systems for the software-defined data center and one of the pioneers of virtualization. Harty ran engineering at VMware from 1999 to 2006, where his teams created the software products that virtualized the server side of the SDDC equation.

ReadWrite: Remind us what VMware was trying to do a dozen years ago, when the teams you led focused on server virtualization.

Harty: Virtualization solved the basic problems then called server consolidation and over-provisioning. Companies wanted to move computing workloads off large, expensive, single-purpose servers (usually Sun servers) that each ran one application, often at only 10% capacity, onto pools of cheap commodity Linux servers. VMware pioneered a technology called the hypervisor that made this possible on the server.

ReadWrite: Today VMware enjoys something like 90% market share in server virtualization. That spectacular success raises the big question of what comes next. Can the same benefits of server virtualization apply to the rest of the data center?

Harty: This is what gives rise to the concept of the software-defined data center (SDDC): a data center whose infrastructure is fundamentally more flexible, automated and cost-effective, in which workloads and applications can be assigned automatically to efficiently pooled resources that match application demand. Instead of building out over-provisioned data centers and siloed resources, the SDDC uses and shares all aspects of the infrastructure more efficiently: servers, networks and storage.

While servers, and to a lesser extent networks, have embraced the SDDC, storage remains well behind and continues to cause great pain in the data center today. Fortunately, some of the key technologies that led to radical changes in servers and networks are now taking shape for storage.

ReadWrite: What kind of changes?

Harty: A quick look at some of the most successful disruptive technologies reveals that many of them "crossed the chasm" with the help of a few common key ingredients: standardization, hardware innovation and abstraction. In the case of server virtualization, standardization on the Intel x86 platform and the proliferation of the open source Linux operating system massively disrupted the server market. Armed with a new generation of multi-core processors and VMware's hypervisor technology, server virtualization conquered the data center.

Networks followed a similar path, beginning with standardization on the TCP/IP network protocol. Gigabit Ethernet increased transmission speeds by an order of magnitude. OpenFlow then laid the foundation for open, standards-based software-defined networks, paving the way for the most significant changes networking has seen in decades.
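
As an aside for readers new to OpenFlow, the core idea it standardized is a match/action flow table that a separate controller programs. The sketch below is a minimal, self-contained Python model of that idea only; it does not use a real OpenFlow library, and the ports, priorities and header fields are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class FlowEntry:
    """One row of an OpenFlow-style flow table: match fields plus actions."""
    match: dict       # e.g. {"tcp_port": 80}; an empty dict matches every packet
    actions: list     # e.g. ["output:2"] or ["drop"]
    priority: int = 0

class FlowTable:
    """The highest-priority entry whose match fields all agree with the packet wins."""

    def __init__(self):
        self.entries = []

    def install(self, entry: FlowEntry):
        # In SDN, a controller (the "software") pushes these rules down to the switch.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet: dict) -> list:
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["drop"]   # table-miss behavior in this toy model

# A controller steering web traffic out port 2 and everything else out port 1.
table = FlowTable()
table.install(FlowEntry({"tcp_port": 80}, ["output:2"], priority=10))
table.install(FlowEntry({}, ["output:1"], priority=1))
print(table.lookup({"dst_ip": "10.0.0.5", "tcp_port": 80}))   # ['output:2']
```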

ReadWrite: What kinds of changes in standardization, hardware innovation and abstraction are leading to disruption in the storage market?

Harty: For 20 years, little has changed in the world of legacy storage designed for physical environments. As data centers become more virtualized, a gap keeps growing between how storage systems were designed and the demands of virtual environments. It's a bit like two people who don't speak the same language trying to understand each other: storage speaks LUNs and volumes; servers see virtual machines.

As a result, they do not understand each other very well. Allocating storage, managing it, and resolving performance problems in a virtualized infrastructure are difficult, if not impossible, with legacy storage. Companies have tried to work around this obstacle by over-provisioning storage, which is very expensive and increases complexity.
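
To make that language gap concrete, here is a toy sketch in Python (a hypothetical illustration with made-up VM names, LUN names and IOPS figures, not any vendor's software). A legacy array that only speaks LUNs sees a single aggregate number for a shared LUN, so it cannot say which virtual machine is causing a performance problem; keeping the same samples at the VM boundary makes the noisy neighbor obvious.

```python
from collections import defaultdict

# Hypothetical IO samples as a hypervisor would see them: (vm_name, lun, iops).
io_samples = [
    ("web-01",  "LUN-7", 1200),
    ("db-01",   "LUN-7", 9500),   # the noisy neighbor
    ("mail-01", "LUN-7", 300),
]

# Legacy view: the array only speaks LUNs, so VM identity is lost.
per_lun = defaultdict(int)
for vm, lun, iops in io_samples:
    per_lun[lun] += iops
print(dict(per_lun))                 # {'LUN-7': 11000} -- which VM is the problem?

# VM-aware view: keep the statistics at the virtual machine boundary.
per_vm = defaultdict(int)
for vm, lun, iops in io_samples:
    per_vm[vm] += iops
print(max(per_vm, key=per_vm.get))   # 'db-01' -- the offender is immediately visible
```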

ReadWrite: Is this where flash storage technology comes in and disrupts things? Can these challenges be solved simply by giving legacy storage performance improvements that are an order of magnitude beyond traditional spinning disk?

Harty: Storage has always been about performance and data management. Flash eliminates performance bottlenecks and levels the competitive playing field for storage vendors. Flash enables very dense storage systems that can hold thousands of virtual machines in just a few rack units of space. But flash by itself, without intelligence, only gets us so far.

And while some industry players are trying to adapt legacy storage products to virtualization through APIs, or to retrofit legacy storage into something "virtualization-ready," these approaches do not go far enough to close the gap between two mismatched technologies: you can put lipstick on a pig, but it's still a pig. What is needed to solve this problem is storage that has been redesigned from the ground up to operate in the virtual environment and to use virtualization constructs. In short, VM-aware storage.

ReadWrite: What do you mean by VM-aware?

Harty: Virtualized environments require storage designed for virtualization. Companies that expect to get the full benefit of the software-defined data center need storage that is simple and flexible to manage while delivering the performance modern applications require. They need storage that understands the IO patterns of virtual environments and automatically manages quality of service (QoS) for each virtual machine.
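
One way to picture per-VM QoS is a rate limiter kept for each virtual machine, so that one VM's burst cannot starve its neighbors. The sketch below is a minimal, hypothetical illustration in Python using a simple token-bucket scheme; the VM names and IOPS limits are made up, and this is not any vendor's actual implementation.

```python
import time

class VmIopsLimiter:
    """Token-bucket limiter: each VM gets its own refill rate (its IOPS ceiling)."""

    def __init__(self, iops_limit: float):
        self.rate = iops_limit        # tokens (IOs) added per second
        self.tokens = iops_limit      # start with a full one-second bucket
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at one second's worth.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                  # over budget: the IO would be queued, not admitted

# Hypothetical per-VM limits: each VM is policed independently, so a burst
# from "db-01" cannot consume the budget that belongs to "web-01".
limits = {"web-01": VmIopsLimiter(1000), "db-01": VmIopsLimiter(5000)}
print(limits["web-01"].allow_io())    # True: web-01 spends a token from its own bucket
```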

We eliminate a whole layer of unnecessary complexity if we stop talking about LUNs and volumes. The widespread adoption of the virtual machine as the lingua franca of the data center gives us the de facto standardization for software-defined storage. The rapid growth and falling cost of flash technology provides the hardware innovation.

That leaves the last essential missing piece: abstraction between storage and virtual machines. A virtual machine-level abstraction that pools and abstracts the underlying storage resources delivers the benefits of simple, high-performance and cost-effective storage. We call it VM-aware storage.
