In this White Paper, IDC sees static x86 server configurations quickly becoming an outdated concept with the introduction of modern solutions based on blade architectures, which offer both intelligent configuration and management and the ability to perform physical-to-virtual migration to promote uptime and efficient resource usage. When combined with the quickly maturing x86 hypervisor technologies available from a variety of solution providers, the synergy of blade architectures and virtualization offers customers the ability to dramatically increase utilization of their server investments, boost uptime, provide a more resilient and available infrastructure, and roll out new infrastructure and services more quickly.
To accommodate increasingly dense technology environments, increasingly critical business applications, and increasingly stringent service level demands, data centers are typically engineered to deliver the highest affordable availability levels facility-wide. Within this monolithic design approach, the same levels of mechanical, electrical, and IT infrastructure are installed to support systems and applications regardless of their criticality or the business risk if unplanned downtime occurs. Typically, high-redundancy designs are deployed in order to provide for all eventualities. The result, in many instances, is to unnecessarily drive up both upfront construction or retrofitting costs and ongoing operating expenses.
When Alcatel merged with Lucent at the end of 2006, the two companies had already begun planning data center consolidations of their own, but the merger changed all that. The merged company created a plan to consolidate 25 data centers and 125 server rooms down to six data centers and just a few server rooms. This change has presented challenges, especially in terms of arranging downtime and dealing with employees' attachment to their servers and applications, but the company is on pace to meet its goal of reducing IT operational cost by 25% over three years.
Virtualization is a revolutionary technology that has emerged as one of the hottest trends in the IT segment. If you are not already leveraging the server virtualization wave as a means to deploy cost-effective disaster recovery, then use this paper as a catalyst to transform your business today: virtualization enables greater efficiencies, faster application and data recovery, and overall cost savings.
Data centers are large, important investments that, when properly designed, built, and operated, are an integral part of the business strategy driving the success of any enterprise. Yet the central focus of organizations is often the acquisition and deployment of the IT architecture equipment and systems with little thought given to the structure and space in which it is to be housed, serviced, and maintained. This invariably leads to facility infrastructure problems such as thermal “hot spots”, lack of UPS (uninterruptible power supply) rack power, lack of redundancy, system overloading and other issues that threaten or prevent the realization of the return on the investment in the IT systems.
Today's IT executives are not only expected to create and maintain high-availability IT environments, but they are also expected to implement green initiatives to satisfy customers, analysts, and government agencies that are worried about the impact of modern, energy-thirsty data centers on the environment. Is such a dual mandate reasonable? Can companies be expected to maintain service levels and reduce their carbon footprints at the same time? This White Paper offers a description of the different types of services available to improve energy-efficient data center design and a prescription for successful implementation.
The recent release of the Environmental Protection Agency (EPA) study on data center energy efficiency is adding fuel to the fire in the research and development of new ways to reduce energy use in data centers. The findings, summarized on the EPA website, are staggering:
• Data centers consumed about 60 billion kilowatt-hours (kWh) in 2006, roughly 1.5 percent of total US electricity consumption.
• Energy consumption of servers and data centers has doubled in the past five years and is expected to almost double again in the next five years to more than 100 billion kWh, costing about $7.4 billion annually.
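A quick back-of-envelope check makes these figures concrete. The snippet below derives the total US electricity consumption implied by the 1.5 percent share, and the average electricity rate implied by the projected cost; it is a sketch using only the numbers quoted above, assuming linear cost scaling.

```python
# Figures as quoted from the EPA study summary above.
dc_kwh_2006 = 60e9       # data center consumption in 2006, kWh
share_of_us = 0.015      # ~1.5% of total US electricity consumption
projected_kwh = 100e9    # projected consumption five years out, kWh
projected_cost = 7.4e9   # projected annual cost, USD

# Implied total US electricity consumption (~4 trillion kWh).
implied_us_total = dc_kwh_2006 / share_of_us

# Implied average electricity rate (~$0.074/kWh).
implied_rate = projected_cost / projected_kwh

print(f"Implied total US consumption: {implied_us_total / 1e12:.1f} trillion kWh")
print(f"Implied electricity rate: ${implied_rate:.3f}/kWh")
```

The implied rate of roughly 7.4 cents per kWh is consistent with mid-2000s US commercial electricity prices, which suggests the EPA's cost projection is a straightforward rate-times-volume estimate.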
With server virtualization now well established in the world's datacenters, IT leaders are turning their attention to virtualizing and centralizing desktop PCs to address some of their toughest computing challenges:
• Managing complex end-user hardware and software infrastructure
• Securing networks and information assets
• Supporting a global workforce in an environment of constant change
Most long-standing data warehouses are designed to support a relatively small number of users who access information to support strategic decisions, financial planning, and the production of standard reports that track performance. Today, many more users need to access information in context and on demand so that critical functions are optimized to run efficiently. Learn how to create a roadmap for a truly dynamic warehousing infrastructure, and move ahead of your competition with your business intelligence system.
Does your business need to safeguard information, keep auditors and regulators satisfied, and improve data quality? Data governance is the answer. This informative video outlines the latest challenges and best practices in data governance. IBM data governance solutions help businesses with:
• Audit and reporting
• Data architecture/infrastructure
• Data quality
• Information lifecycle management
• Metadata/business glossaries
• Organizational design/development
• Policy/risk management
• Security/privacy/compliance
• Stewardship/value creation
This comprehensive white paper applies automation and ITIL best practices to the data center and reviews current industry trends, server automation energy usage issues and a variety of optimization strategies for data center improvement. The effects of virtualization are explored in-depth. Includes detailed sections on increasing operational efficiency using workflow analysis, automating and optimizing server change management, reducing infrastructure complexity and developing security, disaster recovery and business continuity procedures. Step by step instructions for developing metrics and a business case to justify data center and server automation are included.
One of the pillars of the Virtual Data Center is virtual platform infrastructure, or the virtual machine; however, virtual platforms are dependent on many other, oft-forgotten components of both the physical and virtual data center.
The importance of data processing in today's business environment is increasing, and it is clear that business operations must be secured against every possible contingency to provide continuous uptime. Business continuity is not just concerned with IT infrastructure and data processing; it also includes manual and automated IT systems that involve human interaction. This white paper explains how HP StorageWorks XP for Business Continuity Manager (BCM) Software ensures that all processes within the data-handling procedures are automated in such a way that, in the event of a failure or unplanned disruption, business operations can continue with minimal interruption.
As a growing law firm, Sills Cummis found itself presented with several classic IT challenges. It had to contend with server proliferation within its data center, implement a cost-effective disaster recovery system, and ensure high availability of key applications and services for its attorneys. Through its purchase of the VMware Infrastructure 3 suite, Sills Cummis was able to tackle all of these problems.
Designed for CIOs, IT managers, data center managers, and grid computing architects seeking to improve performance, SAS Grid Computing on the HP BladeSystem c-Class helps accelerate growth and mitigate risks with a simplified, consolidated infrastructure that's agile enough to efficiently handle change. SAS Grid Manager on HP BladeSystem can lower costs through automation, virtualization, and improved IT efficiency. Download HP's "Quick Sizing Guide for SAS Grid running on HP BladeSystem and EVA Storage" to learn more about the equipment needed to deploy SAS Grid Manager on HP BladeSystem.
BI and data warehousing have undergone significant changes in the past decade. With the advent of operational BI came major pressures on the BI environment, involving the need to support operational decision-making. This white paper explores these pressures and how to deal with them by creating a dynamic, future-proof infrastructure.
As the total costs of small and mid-sized businesses increase, growing companies look ever more closely at how their IT needs affect energy costs. The IBM case studies and survey results herein discuss opportunities to reduce IT energy costs, save money, and devise IT investment strategies. Read them and learn how you can expand your business while helping the environment.
Increasing infrastructure complexity has led to unprecedented growth in enterprise systems management data on all systems, including the mainframe, making SMF data ever harder to manage. Download this Technology Brief to learn how to address the complexity of managing SMF data.
Datacenters experience constant change under pressure from technology and business drivers. IT systems are becoming more, not less, complex as new technologies are introduced, and IT staffs are not growing to keep pace with the growth in size and complexity of the typical datacenter. Windows administrators need to radically simplify both complex and routine tasks if they are going to effectively respond to the constant pressure to deliver increasing value with limited resources. The most effective thing Windows administrators can do to address this issue is to minimize the duties associated with maintaining the existing infrastructure.
This workbook helps you assess current needs and usage and determine where deduplication can positively affect your IT consolidation initiatives, help reduce costs, and create a more "green" environment.
HP is increasingly demonstrating that its vision includes positioning itself as an innovator at the high end, especially with respect to managing heterogeneous storage assets. Rather than aspiring to own the entire data center storage infrastructure, HP's aim appears to be adding customer value by recognizing and accommodating the diversity of installed storage assets at customer sites.
This paper provides a comprehensive set of test-proven best practices for properly configuring, deploying, and operating an Oracle 10g database on an HP StorageWorks XP12000 Disk Array (XP12000) in an HP-UX environment using HP StorageWorks XP Continuous Access Software as the remote copy infrastructure.
End-to-end storage means that your data is well-managed, protected, and available when you and your customers need it most. IBM System Storage products simplify your infrastructure with servers and storage in one, allowing you to grow, and lowering the total cost of ownership through advanced energy management. Discover the IBM System Storage family of products, services and solutions, and see how they can bring value to your bottom line.