A Power Plan Supports Virtualization & Cloud Services

September 16th, 2016

Recent reports from Data Center Knowledge (datacenterknowledge.com), an IT industry journal highlighting thought leadership in the data center arena, address the challenges that disruptive trends, specifically Virtualization and Cloud Services, pose for Data Center Management. Decidedly innovative, and promising significant process advancements, these technologies nonetheless require adaptive and agile development and deployment measures in order to maintain, and ultimately improve, availability in increasingly dense computing environments.

Featured writer Peter Panfil, Vice President & General Manager at Emerson Network Power’s Liebert AC Power business, has over 30 years of experience in embedded controls and power and currently leads the company’s global market and product development division. In his most recent Industry Perspectives report, Mr. Panfil reiterates the now widely held view that cloud computing offers benefits that can’t be ignored, such as the delivery of infrastructure as a service, support for massive sharing, flexibility, and productivity savings, to name just a few. However, Mr. Panfil identifies what he considers a presiding problem in the current cloud computing space: the weekly media headlines reporting high-profile outages at the data centers that host these services. The regularity of such incidents illustrates a general truth: power was, is, and will likely remain a central component of data center management, and its control, maintenance, and handling must be given top priority. In the cloud, however, such issues are largely outside the customer’s control, at least until an enterprise has developed the requisite understanding, strategy, deployment practices, and management policies for cloud-specific and virtualized environments. For now, in most cases, cloud users are bound to the provider’s infrastructure, agility, and data management practices, and they are completely dependent on the provider’s availability.

The report also identifies Virtualization as a valuable tool that allows an enterprise to run multiple virtual machines on a single physical piece of equipment, sharing that computer’s resources across multiple environments. Virtualization consistently improves the efficiency and availability of organizational resources and applications in a quantifiable, and in some cases dramatic, fashion, Mr. Panfil observes. However, virtualization also increases server utilization rates, especially in blade server architectures, and the impact on power delivery systems is notable. With virtualization applied, resources can move quickly from a low-density to a high-density power application. Virtualizing an application fed by a single-phase circuit at 15-20 amperes, for instance, raises its utilization rate and, with it, the potential need to deliver more power to that application, a condition that may well demand a move to a high-density power design. Before adopting virtualization, servers typically operated at a 10-20 percent utilization rate, reports show; post-virtualization, they run at 60, 70, and even 80 percent. Driving servers to those rates consumes all of the available compute power, which is the most cost-effective way to use it, but there are infrastructure impacts. For example, if you have been using only 200-300 watts of a 500-watt server and then ramp it up to capacity, along with the rest of the 500-watt servers in the rack, the rack gets hotter because of the increased power draw. It then becomes necessary to revisit the cooling strategy to head off server problems and keep power under control.
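To make the arithmetic concrete, the short sketch below estimates how per-rack power draw, and therefore the heat that must be removed, changes when consolidation pushes utilization from the 10-20 percent range up to 60-80 percent. The 500-watt server matches the example above; the idle fraction, servers-per-rack count, and linear power model are illustrative assumptions, not figures from Mr. Panfil’s report.

```python
# Illustrative sketch: how consolidating lightly loaded servers onto fewer,
# heavily utilized hosts changes per-rack power draw (and heat load).
# All figures below are assumptions for illustration, not from the report.

RATED_WATTS = 500        # nameplate power of one server (matches the example above)
IDLE_FRACTION = 0.5      # assumed draw at idle, as a fraction of nameplate
SERVERS_PER_RACK = 10    # assumed rack population

def server_watts(utilization: float) -> float:
    """Simple linear power model: idle draw plus utilization-proportional draw."""
    return RATED_WATTS * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)

def rack_watts(utilization: float, servers: int = SERVERS_PER_RACK) -> float:
    """Total draw for a rack of identical servers at the same utilization."""
    return servers * server_watts(utilization)

before = rack_watts(0.15)   # ~10-20% utilization, pre-virtualization
after = rack_watts(0.70)    # ~60-80% utilization, post-virtualization

print(f"Rack draw before virtualization: {before:,.0f} W")
print(f"Rack draw after virtualization:  {after:,.0f} W")
print(f"Additional heat to be removed:   {after - before:,.0f} W")
```

With these assumed numbers, each server climbs from roughly 290 W to over 400 W, and the rack as a whole gains more than a kilowatt of heat that the cooling plan must now absorb.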

While IT industry professionals assess site-specific cloud services and virtualization initiatives aimed at driving up utilization rates and gaining efficiency and availability, it is precisely availability that is most frequently cited as a major concern. At large companies, Technical Directors and their support teams are typically used to having responsibility for all of the servers the business uses. The idea of putting parts of their business technology on rented, “black box”-style cloud services, or of reducing their number of assets by virtualizing, makes them uneasy, Mr. Panfil reports. There is broad agreement about the risks involved, including the potentially considerable cost of a power outage. A large percentage of outages are triggered by electrical issues that can be minimized or eliminated when adequate power solutions are effectively implemented. The challenge is to balance the efficiency gains available in power approaches against IT criticality and the need for availability.

Industry Perspectives contributing writer Daniel Kennedy, Data Center Product Manager for Tate Access Floors, submitted that Cloud Computing, both public and private, is a game-changer for Data Center Management, bringing with it difficulties that may not have existed previously in the data center environment. “In the past, the load profile per rack was typically considered to be relatively stable, in terms of energy consumption and heat production when viewed on a daily, weekly or monthly basis. This was primarily a result of low hardware utilization due to dedicated tasking of each individual server. The only changes that the data center operator needed to handle in terms of electrical distribution or airflow distribution came during moves, adds or changes,” Mr. Kennedy said.

Cloud computing, while disruptive to this stable load model, can create an increase in efficiency by enabling IT hardware to be utilized at a much higher level as a component of the cloud. The scalable cloud allows computing resources to be brought online as demand requires, driving complete, optimally efficient utilization of the hardware. “This progress, however, may complicate the power and airflow provisioning aspect of the data center environment,” Mr. Kennedy reports. Power provisioning is mostly automatic: server demand for electrical power scales relatively evenly from very low utilization to peak utilization, with no intervention from the user. The article notes that electrical losses in transformers and in UPS systems occur daily and have been addressed by those hardware manufacturers, either through a reduction in transformer steps (e.g. 415V distribution, DC distribution) or optimization of the internal designs (e.g. rotary UPS systems, online/hybrid conventional UPS units).
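As a rough illustration of why reducing transformer steps helps, the sketch below multiplies the efficiencies of each stage in a hypothetical power chain; since losses compound at every conversion, removing a stage lets more of the purchased power reach the IT load. The efficiency figures are assumptions for illustration, not values from the article.

```python
# Minimal sketch (assumed efficiencies, not figures from the article) of why
# removing a conversion step from the power chain reduces losses: stage
# efficiencies multiply, so fewer stages means less power lost before the racks.

def delivered_kw(source_kw, stage_efficiencies):
    """Power reaching the IT load after passing through each conversion stage."""
    power = source_kw
    for eff in stage_efficiencies:
        power *= eff
    return power

SOURCE_KW = 100.0
legacy_chain = [0.94, 0.985, 0.985]   # assumed: UPS plus two transformer steps
shorter_chain = [0.94, 0.985]         # assumed: UPS plus one step, e.g. 415V distribution

print(f"Legacy chain delivers:  {delivered_kw(SOURCE_KW, legacy_chain):.1f} kW of {SOURCE_KW:.0f} kW")
print(f"Shorter chain delivers: {delivered_kw(SOURCE_KW, shorter_chain):.1f} kW of {SOURCE_KW:.0f} kW")
```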

In considering efficient airflow distribution in the Data Center, Mr. Kennedy notes that standard raised-floor cooling designs allocate airflow based on panel design. “Increasing panel open areas, the use of manual dampers and other products do allow the data center operator some individual, rack level control, but this is only adequate for handling load diversity (different rack loads within the same cooling infrastructure). These products are incapable of handling continuous load profile changes on a per rack basis. The use of variable-speed fans at the air handling equipment level is a common approach as well, but typically this approach falls short of the granularity of control that cloud computing requires for efficient airflow distribution.”

There are different design approaches, any of which could be incompatible with the specifications of an individual Data Center site (e.g. available floor space, hot aisle temperatures, lack of drop ceilings, fire suppression code). For the data center operator with multiple facilities worldwide, different design approaches from data center to data center would certainly increase management complexity. “The key to success in this environment is the ability to control the amount of air and directly match it to the air flow requirements of the IT hardware that exists in the data center at any given moment in time,” Mr. Kennedy states. Directional airflow to the equipment provides superior cooling compared to vertical airflow distribution. In this approach, the amount of air supplied to each IT rack on the cloud is closely controlled to ensure adequate airflow for the current demand, allowing the data center operator to ensure that the cooling infrastructure deployed in the data center is used efficiently. Such an approach removes the requirement for aisle/air containment at the row or rack level by creating a virtual aisle containment solution that significantly reduces bypass air, constantly adjusting the amount of airflow on a per-rack basis according to demand. This configuration also allows the data center to realize the efficiencies gained by using larger, more cost-effective central cooling systems, eliminating the need to adopt rack- or row-level cooling solutions and the potential risks and inefficiencies associated with them.
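A minimal sketch of this demand-based idea is shown below: per-rack power readings drive a per-rack airflow target, which in a real deployment would set a variable floor damper or grate. The rule-of-thumb conversion (CFM ≈ 3.16 × watts / ΔT°F), the design temperature rise, and the rack figures are all assumptions for illustration, not values from Mr. Kennedy’s article.

```python
# Minimal sketch of demand-based airflow: derive the airflow each rack needs
# from its measured power draw, so the damper serving that rack can be matched
# to the load moment by moment. Figures and the rule of thumb are assumptions.

DELTA_T_F = 25.0  # assumed design temperature rise across the IT equipment, in °F

def required_cfm(rack_watts: float, delta_t_f: float = DELTA_T_F) -> float:
    """Airflow (CFM) needed to carry away rack_watts at the given temperature rise."""
    return 3.16 * rack_watts / delta_t_f

# Hypothetical per-rack power readings (watts) from branch-circuit metering.
rack_power = {"rack-01": 2_800, "rack-02": 6_500, "rack-03": 4_200}

for rack, watts in rack_power.items():
    cfm = required_cfm(watts)
    # In practice this setpoint would drive the variable-airflow floor damper
    # or grate serving the rack, rather than being printed.
    print(f"{rack}: {watts:>5} W -> target airflow {cfm:,.0f} CFM")
```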

Thus, meeting the evolving needs of the Cloud is the current and ongoing challenge. Indeed, the cloud computing model is clearly leading the way to more efficient and highly available computing environments. Data center operators should expect new challenges as cloud computing becomes part of day-to-day operations; prepared with a detailed understanding of the data center cooling infrastructure, for example, an efficiency-minded data center operator will be well equipped to ensure that the cooling infrastructure has been designed with these advances in the IT market in mind. Reports suggest that developers in a given IT department are likely to share a healthy enthusiasm for immediate access to thousands of virtual servers made available by a cloud provider, as well as the ability to code against an effectively infinitely scalable platform such as Google’s App Engine (the platform-as-a-service tier). However, reports consistently direct IT management to strive for balance between rapidly developing applications at Internet scale and planning for the impact that any cloud-related downtime will inevitably have on the business. Also of note, analyses indicate that the higher up the cloud stack an enterprise procures services, the greater its exposure to downtime, because it becomes incrementally locked into a particular provider’s solution.

Users also need to understand and negotiate an appropriate Service Level Agreement with providers to ensure the proper measure of availability and service, to establish how risk will be mitigated, and to make sure that user expectations are met. It is therefore advisable, before engaging a cloud computing vendor or instituting a virtualization strategy, to secure a third party to conduct a data center audit and complete a thorough proposed growth plan, assessing the variables of availability, sustainability, and efficiency. Data center managers are being challenged to maintain or improve availability in increasingly dense computing environments while reducing costs and improving efficiency, and some companies are looking to cloud computing and virtualization for help. Both strategies present certain advantages and opportunities, but supporting them requires a dedication to power, and to the rest of the infrastructure for that matter, so as not to compromise availability.
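As a quick reference when weighing the availability commitment in an SLA, the short sketch below converts availability percentages into the downtime they permit over a year; the levels shown are illustrative examples, not any provider’s terms.

```python
# Minimal sketch: translate an SLA availability percentage into the amount of
# downtime it still allows per year. The listed levels are illustrative only.

HOURS_PER_YEAR = 365.25 * 24

for availability in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability allows "
          f"{downtime_hours:6.2f} hours (~{downtime_hours * 60:7.1f} minutes) "
          f"of downtime per year")
```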

By all accounts, the cloud is already firmly positioned as a key component of IT, commanding the innovative strategy required to drive enterprise redistribution, expansion, and tactical development of resources. Virtualization is the first step many are taking toward better energy savings within a data center. Due diligence applied early in cloud and virtualization initiatives, addressing power issues in particular but relevant across processes, is an effective way to obviate problems before deployment and iterative implementation, reducing the threat of outages, managing variables such as airflow, and avoiding the pitfalls identified above.

With over two decades of proven superior service in IT, including an integrated approach to service management and an ongoing commitment to advanced innovation, LTI remains a trusted leader and best-in-class provider of IT services worldwide.