You can live migrate VMs between Intel processor-based servers, but migration in a mixed-CPU environment requires downtime and administrative hassle.
A study commissioned by Intel Corp.
One of the greatest advantages of adopting a public, private, or hybrid cloud environment is being able to easily migrate the virtual machines that run your critical business applications—within the data center, across data centers, and between clouds. Routine hardware maintenance, data center expansion, server hardware upgrades, VM consolidation, and other events all require your IT staff to migrate VMs. For years, one powerful tool in your arsenal has been VMware vSphere® vMotion®, which can live migrate VMs from one host to another with zero downtime, provided the servers share the same underlying architecture. The Enhanced vMotion Compatibility (EVC) feature of vMotion makes it possible to live migrate virtual machines even between different generations of CPUs within a given architecture.
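The idea behind EVC can be illustrated in miniature: the cluster presents every VM with only the CPU features common to all hosts, so a VM started on a newer host never depends on an instruction the older host lacks. The sketch below is conceptual only, not VMware's actual implementation; the feature names are illustrative examples, not a real EVC baseline definition.

```python
# Conceptual sketch of an EVC-style CPU baseline: the cluster exposes only
# the intersection of all hosts' feature sets, so VMs can migrate freely.
# Feature names are illustrative, not an actual EVC baseline.

def evc_baseline(host_feature_sets):
    """Return the CPU features common to every host in the cluster."""
    sets = [set(f) for f in host_feature_sets]
    return set.intersection(*sets) if sets else set()

older_host = {"sse4.2", "aes", "avx"}
newer_host = {"sse4.2", "aes", "avx", "avx2", "avx512f"}

# VMs see only the shared baseline, so live migration works in both
# directions even across CPU generations.
print(sorted(evc_baseline([older_host, newer_host])))  # ['aes', 'avx', 'sse4.2']
```

The trade-off EVC makes is visible here: the newer host's extra instructions (avx2, avx512f in this toy example) are hidden from VMs in exchange for migration compatibility.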
This IDC study provides an evaluation of 10 vendors that sell all-flash arrays (AFAs) for dense mixed enterprise workload consolidation that includes at least some mission-critical applications.
"All-flash arrays are dominating primary storage spend in the enterprise, driving over 80% of that revenue in 2017," said Eric Burgener, research director, Storage. "Today's leading AFAs offer all the performance, capacity scalability, enterprise-class functionality, and datacenter integration capabilities needed to support dense mixed enterprise workload consolidation. More and more IT shops are recognizing this and committing to 'all flash for primary storage' strategies."
Datacenter improvements have thus far focused on cost reduction and point solutions. Server consolidation, cloud computing, virtualization, and the implementation of flash storage capabilities have all helped reduce server sprawl, along with associated staffing and facilities costs. Converged systems — which combine compute, storage, and networking into a single system — are particularly effective in enabling organizations to reduce operational and staff expenses. These software-defined systems require only limited human intervention. Code embedded in the software configures hardware and automates many previously manual processes, thereby dramatically reducing instances of human error. Concurrently, these technologies have enabled businesses to make incremental improvements to customer engagement and service delivery processes and strategies.
How hyperconverged infrastructure can reduce costs and help align enterprise IT with business needs. Includes chapters on hyperconvergence and cloud, datacenter consolidation, ROBO deployment, test and development environments, and disaster recovery.
The reference guide lays out a step-by-step approach to data center consolidation for data center managers. By breaking the process down into clear, identifiable actions – all of which are covered in the sections below – a data center consolidation becomes much more manageable, and the odds of its success much higher.
Published By: Fujitsu
Published Date: Feb 06, 2017
Data center infrastructure complexity must be tamed, as mobility, cloud networking and social media demand fast and agile approaches to data delivery. You can overcome these obstacles and improve your data center operations by consolidating your systems and deploying virtualization, using the Fujitsu PRIMEFLEX vShape reference architecture. Get the e-Book.
Published By: Commvault
Published Date: Jul 06, 2016
It’s no secret that today’s unprecedented data growth, data center consolidation and server virtualization are wreaking havoc with conventional approaches to backup and recovery. Here are five strategies for modern data protection that will not only help solve your current data management challenges but also ensure that you’re poised to meet future demands.
Published By: CyrusOne
Published Date: Jul 05, 2016
Data centers help state and federal agencies reduce costs and improve operations. Every day, government agencies struggle to meet strict cost controls and lower operational expenses while fulfilling the Federal Data Center Consolidation Initiative's (FDCCI) goal. All too often, they find themselves constrained by legacy in-house data centers and connectivity solutions that fail to deliver exceptional data center reliability and uptime.
IT virtualization, the engine behind cloud computing, can have significant consequences on the data center physical infrastructure (DCPI). Higher power densities that often result can challenge the cooling capabilities of an existing system. Reduced overall energy consumption that typically results from physical server consolidation may actually worsen the data center’s power usage effectiveness (PUE). Dynamic loads that vary in time and location may heighten the risk of downtime if rack-level power and cooling health are not understood and considered. Finally, the fault-tolerant nature of a highly virtualized environment could raise questions about the level of redundancy required in the physical infrastructure. These particular effects of virtualization are discussed and possible solutions or methods for dealing with them are offered.
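The PUE effect described above is simple arithmetic worth making explicit: PUE is total facility power divided by IT equipment power, so if consolidation shrinks the IT load faster than the facility's fixed overhead (cooling, UPS losses, lighting), PUE gets worse even though total energy consumption falls. The figures below are hypothetical, chosen only to illustrate the mechanism.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT power (>= 1.0)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical consolidation scenario: virtualization cuts the IT load from
# 625 kW to 400 kW, but fixed overhead only drops from 375 kW to 400 kW of
# the remaining facility power, so efficiency as measured by PUE worsens.
before = pue(total_facility_kw=1000, it_equipment_kw=625)  # 1.6
after = pue(total_facility_kw=800, it_equipment_kw=400)    # 2.0

print(f"PUE before consolidation: {before}, after: {after}")
```

Total draw fell from 1,000 kW to 800 kW, yet PUE rose from 1.6 to 2.0 — exactly the counterintuitive outcome the paragraph warns about.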
The data center is central to IT strategy and houses the computational power, storage resources, and applications necessary to support an enterprise business. A flexible data center infrastructure that can support and quickly deploy new applications can result in significant competitive advantage, but designing such a data center requires solid initial planning and thoughtful consideration of port density, access-layer uplink bandwidth, true server capacity, oversubscription, mobility, and other details.
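Of the design factors listed above, oversubscription is the most readily quantified: it is the ratio of server-facing bandwidth at the access layer to uplink bandwidth toward the aggregation layer. A minimal sketch, using a hypothetical top-of-rack switch configuration (the port counts and speeds are illustrative assumptions, not a recommendation):

```python
def oversubscription_ratio(server_ports, server_port_gbps,
                           uplink_ports, uplink_gbps):
    """Access-layer oversubscription: server-facing bandwidth / uplink bandwidth."""
    downstream = server_ports * server_port_gbps
    upstream = uplink_ports * uplink_gbps
    return downstream / upstream

# Hypothetical top-of-rack switch: 48 x 10GbE server ports, 4 x 40GbE uplinks.
# 480 Gbps of server-facing capacity over 160 Gbps of uplink capacity.
ratio = oversubscription_ratio(48, 10, 4, 40)

print(f"{ratio}:1 oversubscription")  # 3.0:1
```

A 1:1 ratio means the uplinks can carry every server port at line rate; higher ratios trade peak throughput for cost, which is acceptable only if measured traffic patterns support it.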
Interest in cloud computing over the last several years has been phenomenal. For cloud providers, public or private, it will transform business and operational processes, streamlining customer on-ramping and time to market, facilitating innovation, providing cost efficiencies, and enabling the ability to scale resources on demand.
The data center has gone through many major evolutionary changes over the past several decades, and each change has been defined by major shifts in architectures. The industry moved from the mainframe era to client/server computing and then to Internet computing. In 2011, another major shift began: the shift to a virtual data center. This has been the primary driver in enabling customers to transition to the cloud and ultimately IT as a service. The shift to a virtual data center will be the single biggest transition in the history of computing. It will reshape all the major data center tiers: applications, storage, servers, and networking.
More than ever, the data center is a gateway to opportunity, responsible for bringing together the data, applications, and IT resources needed to support growth and innovation. At Cisco we have the opportunity to observe how data center infrastructure is evolving in response to market and technology trends. In no particular order, here are five important strategies we see companies using to overcome the limitations of traditional IT infrastructure and transform their data centers to support innovation and growth.
Published By: Red Hat
Published Date: Sep 25, 2014
Today’s mega IT trends – cloud computing, big data, mobile, and social media – have dramatically altered how enterprises work, requiring datacenters to find new, more flexible, and cost-effective ways to meet computing demands.
For most datacenters, the path toward tomorrow's compute paradigm mandates an investment in standardization and consolidation as well as a more robust adoption of enterprise virtualization software, along with cloud system software to extend that virtualized infrastructure into a true private cloud environment.
Linux has emerged as one of the key elements to a modernization program for a datacenter.
Published By: Symantec
Published Date: Jul 11, 2014
Today's datacenters face a gauntlet of challenges, including protection of physical and virtual environments, fast recovery of data, reduced backup times and storage requirements, server consolidation, and disaster recovery. How are savvy CIOs conquering these challenges? Find out in this white paper by expert David Davis.
Discover, map, and optimize assets to streamline global data centers, speed audits, slash costs: Using HP’s Business Services Management portfolio, GE Capital has realized an $8 million annual savings through data center consolidation. Find out how they addressed their key IT challenges of data center simplification, asset management and capacity management. GE Capital is the financial services unit of the American conglomerate General Electric. GE Capital provides commercial lending and leasing, as well as a range of financial services.
As enterprises focus on consolidation of data centers, they continue to expand the roles and numbers of branch offices, often in locations that are difficult to support and protect. Learn to extend the virtual edge of the data center to branch offices, enabling complete consolidation of servers and data, improving security, and providing LAN performance at the edge via the WAN.
As organizations consolidate backup and disaster recovery operations, WAN optimization plays a key role in mitigating risk without sacrificing performance. Discover a new architectural approach that extends the virtual edge of the data center to the branch for complete server and data consolidation without a performance compromise.
Published By: Riverbed
Published Date: Jan 28, 2014
In this ESG analyst white paper, learn how Riverbed introduces an integrated approach to ensure performance of your network and applications by leveraging Cascade network performance management and Steelhead WAN optimization solutions.
Most enterprise datacenters are so cluttered with individually constructed servers, including database servers, that staff time is largely taken up with just the maintenance of these systems. As a result, IT service to business users suffers, and the agility of the enterprise suffers. Integrated systems represent an antidote to this problem. By dramatically reducing the amount of setup and maintenance time for the applications and databases they support, integrated systems enable the technical staff to spend more of their time supporting users and enabling the enterprise to thrive.
The Enterprise Strategy Group discusses how data center consolidation, virtualization, and cloud architectures are on the rise; however, IT budgets are not increasing. This poses a unique challenge: How do you create a flexible and agile environment without increasing cost? See ESG’s analysis of WAN optimization benefits and how your peers are increasing their ROI and lowering their TCO.
Continual and timely upgrades of UNIX systems can enable growth and lower costs. But for some Sun/Oracle and HP customers, expected upgrade plans have hit major hurdles. Read this Clipper Group white paper to see how only IBM continues to provide a plan, and a predictable drumbeat, for the future.
Read it now.