Cloud Computing by Dr. Kumar Saurabh






















Private Cloud services offer greater control over the infrastructure, improving security and service resilience, because access is restricted to one or a few organizations. Such private deployments, however, pose an inherent limitation for end-user applications: an organization can buy more machines as its users' needs expand, but this cannot be done as quickly and seamlessly as with public Clouds.

This resulted in the emergence of hybrid Cloud deployments, in which the advantages of both private and public Clouds are made available to the organization. Organizations can use their existing IT infrastructure to keep sensitive information on premises and, whenever required, auto-scale their resources using public Clouds. These resources or services are leased temporarily during peak load and then released.

The hybrid Cloud, in general, applies to services related to IT infrastructure rather than software services (Figure 4: Deployment Models for Clouds).

3. Cloud Computing and Energy Usage Model: A Typical Example

In this section, we analyze the various elements of Clouds and their energy efficiency through a typical Cloud usage scenario.

Within datacenters, data travels through a local area network and is processed on virtual machines hosting Cloud services, which may access storage servers. Each of the computing and network devices directly accessed to serve Cloud users contributes to energy consumption.

In addition, within a Cloud datacenter there are many other devices, such as cooling and electrical equipment, that consume power. In the following sections, we discuss in detail the energy consumption of these devices and applications.

Cloud Usage Model

Cloud computing can be used for running applications owned by an individual user or offered by the Cloud provider through SaaS.

In both cases, the energy consumption depends on the application itself. If an application is long-running with high CPU and memory requirements, its execution will result in high energy consumption.

Allocating resources based on the maximum level of CPU and memory usage results in much higher energy consumption than is actually required. Energy inefficiency in the execution of an application also emanates from inaccurate design and implementation. Application inefficiencies, such as suboptimal algorithms and inefficient usage of shared resources causing contention, lead to higher CPU usage and, therefore, higher energy consumption. However, energy efficiency is rarely considered during application design in most domains, with exceptions such as embedded devices like mobile phones.

For instance, it is well known that a physical server has higher performance efficiency than a virtual machine, yet IaaS providers generally offer their end users access to virtual machines [13].

In addition, management processes such as accounting and monitoring require some CPU power. SLAs may take the form of a time commitment for a task to be completed. For instance, to avoid failures, enable fast recovery, and reduce response time, providers have to maintain several storage replicas across many datacenters.

Since workflows in Web applications require several sites to give better response times to end users, their data is replicated on many servers across the world.

Therefore, it is important to explore the relationships among Cloud components and the trade-offs between QoS and energy consumption.

Network Devices

The network system is another area of concern; it consumes a non-negligible fraction of the total power.

In Cloud computing, since resources are accessed through the Internet, both applications and data need to be transferred to the compute node. In some cases, if the data set is very large, it may turn out to be cheaper and more carbon-efficient to send the data by physical mail than to transfer it through the Internet.
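As a rough back-of-the-envelope check on this point, the sketch below compares online transfer time against a courier. The data size, link speed, and 24-hour delivery figure are illustrative assumptions, not values from the text:

```python
def transfer_hours(data_tb: float, link_mbps: float) -> float:
    """Hours needed to move data_tb terabytes over a sustained link_mbps link."""
    bits = data_tb * 8e12              # 1 TB = 8e12 bits (decimal terabytes)
    seconds = bits / (link_mbps * 1e6)
    return seconds / 3600.0

# Hypothetical comparison: 50 TB over a 100 Mbps uplink vs. a ~24 h courier.
hours_online = transfer_hours(50, 100)   # ~1111 hours, i.e. over six weeks
courier_wins = hours_online > 24
```

Even before counting the energy of the routers along the path, the sheer transfer time makes shipping competitive for bulk data at these assumed rates.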

In Cloud computing, user data travels through many devices before it reaches a datacenter. Broadband Network Gateway (BNG) routers perform traffic management and authentication functions on the packets received from Ethernet switches.

These BNG routers connect to other Internet routers through the provider's edge routers. The core network further comprises many large routers. Each of these devices consumes power according to its traffic volume.

According to the study conducted by Tucker [15], power consumption in transport represents a significant proportion of the total power consumed by Cloud storage services at medium and high usage rates. Even typical network usage can result in three to four times more energy consumption for public Cloud storage than for one's own storage infrastructure. Therefore, as Cloud computing usage grows, the energy efficiency of switches and routers is expected to play a very significant role, since they need to provide hundreds of terabits of bandwidth capacity.

In the network infrastructure, energy consumption [16] depends especially on the power efficiency and awareness of the wired network, namely the network equipment, system design, topology design, and network protocol design. Most of the energy in network devices is wasted because they are designed to handle the worst-case scenario.

Therefore, the energy consumption of these devices remains almost the same at peak load and when idle. Many improvements are required to achieve high energy efficiency in these devices. For example, during low-utilization periods, Ethernet links can be turned off and packets routed around them. Further energy savings are possible at the hardware level of the routers through appropriate selection and optimization of the layout of internal router components.

Due to the large amount of equipment, datacenters can consume massive amounts of energy and emit large amounts of carbon. This high usage also translates to very high carbon emissions, which were estimated to be on the order of metric megatons each year.

Table 3 lists the equipment typically used in datacenters with its contribution to energy consumption. It can be clearly observed that servers and storage systems are not the only infrastructure consuming energy in the datacenter. In reality, the cooling equipment consumes an amount of energy equivalent to the IT systems themselves. Ranganathan [17] suggests that for every dollar spent on electricity in large-scale datacenters, another dollar is spent on cooling.

Table 2. In other words, the majority of power usage within a datacenter serves purposes other than the actual IT services. Thus, to achieve maximum efficiency in power consumption and CO2 emissions, each of these devices needs to be designed and used efficiently while ensuring that its carbon footprint is reduced.

A key factor in reducing the power consumption of a datacenter is calculating how much energy is consumed in cooling and other overheads. Standard metrics are emerging, such as Power Usage Effectiveness (PUE) [19], which can be used to benchmark how much energy is being usefully deployed versus how much is spent on overhead. The PUE of a datacenter is defined as the ratio of the total power consumption of a facility (data or switching center) to the power consumption of the IT equipment (servers, storage, routers, etc.).

PUE varies between datacenters depending on where the datacenter is located and the devices used in its construction. The PUE of a datacenter is useful for measuring its power efficiency and thus provides a motivation to improve it.
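The PUE ratio defined above is straightforward to compute; the following sketch uses hypothetical facility and IT power figures purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt reaches IT equipment; real
    datacenters are higher because of cooling and power-distribution overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1200 kW total draw, of which 600 kW reaches IT gear.
example_pue = pue(1200.0, 600.0)          # 2.0: one overhead watt per IT watt
overhead_fraction = 1 - 1 / example_pue   # share of power not doing IT work
```

With these assumed numbers the facility matches the "dollar on cooling per dollar on electricity" rule of thumb cited above, since half the power goes to overhead.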

Features of Clouds enabling Green computing

Even though there is great concern in the community that Cloud computing can result in higher energy usage by datacenters, Cloud computing has a green lining. Cloud providers employ several technologies and concepts to achieve better utilization and efficiency than traditional computing.

Therefore, comparatively lower carbon emissions are expected in Cloud computing, due to its highly energy-efficient infrastructure and the reduction of the IT infrastructure itself through multi-tenancy. Virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration [20]. By consolidating underutilized servers as multiple virtual machines sharing the same physical server at higher utilization, companies can achieve high savings in space, management, and energy.

According to the Accenture Report [7], the following four key factors have enabled Cloud computing to lower energy usage and carbon emissions from ICT. These savings are driven by the high efficiency of large-scale Cloud datacenters. Dynamic Provisioning: In the traditional setting, datacenters and private infrastructure used to be maintained to fulfill worst-case demand.

Thus, IT companies end up deploying far more infrastructure than needed. There are various reasons for such over-provisioning: (a) it is very difficult to predict demand in advance, particularly for Web applications; and (b) providers must guarantee the availability of services and maintain a certain level of service quality for end users. The Australian Open website, for example, receives a significant spike in traffic each year during the tournament period.

The increase in traffic can amount to many times the typical volume, with 22 million visits in a couple of weeks [21]. Running hundreds of servers throughout the year to handle such a short peak period is not energy efficient. Thus, infrastructure provisioned with this conservative approach results in unutilized resources. Such scenarios can be readily managed by Cloud infrastructure.

The virtual machines in a Cloud infrastructure can be live-migrated to another host if a user application requires more resources. Cloud providers monitor and predict demand, and thus allocate resources according to it.

Applications that require fewer resources can be consolidated on the same server. Thus, datacenters maintain only as many active servers as current demand requires, which results in lower energy consumption than the conservative over-provisioning approach.
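One simple way to realize this kind of demand-driven consolidation is greedy bin packing of VM loads onto as few servers as possible. The sketch below uses first-fit decreasing with illustrative load figures; real placement policies also weigh memory, I/O, and SLA constraints:

```python
def consolidate(vm_loads, server_capacity):
    """Greedy first-fit-decreasing placement of VM loads onto servers.

    Returns a list of servers, each a list of VM loads, so that only as
    many servers as needed stay active; the rest can be powered down.
    """
    servers = []
    for load in sorted(vm_loads, reverse=True):   # place biggest VMs first
        for server in servers:
            if sum(server) + load <= server_capacity:
                server.append(load)
                break
        else:
            servers.append([load])                # power on a new server
    return servers

# Hypothetical CPU demands, each a fraction of one server's capacity.
placement = consolidate([0.5, 0.2, 0.4, 0.1, 0.6], server_capacity=1.0)
active_servers = len(placement)   # 2 servers instead of 5 lightly loaded ones
```

Bin packing is NP-hard in general, but such greedy heuristics are the standard practical approximation for this placement step.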

Multi-tenancy: Through multi-tenancy, Cloud computing infrastructure reduces overall energy usage and the associated carbon emissions. SaaS providers serve multiple companies on the same infrastructure and software.

This approach is obviously more energy efficient than running multiple copies of the software on separate infrastructure. Furthermore, businesses in general have highly variable demand patterns, so multi-tenancy on the same server flattens the overall peak demand, which can minimize the need for extra infrastructure.
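The peak-flattening effect can be illustrated numerically: provisioning two tenants separately requires capacity for the sum of their individual peaks, while sharing requires capacity only for the peak of their summed demand. The demand figures below are hypothetical:

```python
def peak(series):
    """Maximum of a demand time series."""
    return max(series)

# Hypothetical normalized hourly demand for two tenants whose peaks
# fall at different hours of the day.
tenant_a = [0.9, 0.6, 0.2, 0.3]
tenant_b = [0.2, 0.3, 0.8, 0.7]

separate_capacity = peak(tenant_a) + peak(tenant_b)                   # 1.7
shared_capacity = peak([a + b for a, b in zip(tenant_a, tenant_b)])   # 1.1
```

Because the two assumed peaks do not coincide, the shared infrastructure needs roughly a third less provisioned capacity than two separate deployments.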

Smaller fluctuations in demand allow better prediction and result in greater energy savings. Server Utilization: In general, on-premise infrastructure runs at very low utilization, sometimes as low as 5 to 10 percent on average.

Even though high utilization results in more power consumption, a server running at higher utilization can process more workload with similar power usage. Datacenter Efficiency: As already discussed, the power efficiency of datacenters has a major impact on the total energy usage of Cloud computing. By using the most energy-efficient technologies, Cloud providers can significantly improve the PUE of their datacenters.
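This utilization argument follows from the commonly used linear server power model, in which power is an idle baseline plus a utilization-proportional term. The wattages below are assumed values for illustration:

```python
def server_power_watts(utilization, p_idle=100.0, p_max=250.0):
    """Common linear approximation: power grows linearly with utilization.

    p_idle and p_max are hypothetical figures; real servers vary.
    """
    return p_idle + (p_max - p_idle) * utilization

# Energy per unit of work is far better at high utilization, because the
# idle baseline is amortized over more workload.
watts_per_unit_low = server_power_watts(0.10) / 0.10    # ~1150 W per unit
watts_per_unit_high = server_power_watts(0.80) / 0.80   # ~275 W per unit
```

Under these assumed figures, a server at 80 percent utilization is roughly four times more energy efficient per unit of work than one idling at 10 percent, which is why consolidation onto fewer, busier servers saves energy.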

Server designs in the form of modular containers, water- or air-based cooling, and advanced power management through power supply optimization are all approaches that have significantly improved PUE in datacenters.

In addition, Cloud computing allows services to be moved between multiple datacenters running at better PUE values. This is achieved using high-speed networks, virtualized services, and the measurement, monitoring, and accounting of datacenters.

Thus, it has become very important to address energy efficiency at the application level itself. However, this layer has received very little attention, since many applications are already in use and most new applications are upgraded versions of, or developed using, previously implemented tools.

Some of the efforts in this direction target MPI applications [22], which are designed to run directly on physical machines; their performance on virtual machines therefore remains undetermined. Various power-efficient techniques for software design have been proposed in the literature [24][25], but these are mostly for embedded devices. In the development of commercial and enterprise applications designed for the PC environment, energy efficiency is generally neglected.

Mayo et al. studied the energy consumption of common tasks across different devices. As these tasks have the same purpose on each device, their results show that the implementation of a task, and the system upon which it is performed, can have a dramatic impact on efficiency. Therefore, to achieve energy efficiency at the application level, SaaS providers should pay attention to deploying software on the kind of infrastructure that can execute it most efficiently.

This necessitates research into, and analysis of, the trade-off between performance and energy consumption when software executes on different platforms and hardware. In addition, software developers should consider energy consumption at the compiler and code levels when designing future applications, using the various energy-efficient techniques proposed in the literature.

Consolidation of VMs, VM migration, scheduling, demand projection, heat management, temperature-aware allocation, and load balancing are used as basic techniques for minimizing power consumption. As discussed in the previous section, virtualization plays an important role in these techniques through features such as consolidation, live migration, and performance isolation.

Consolidation helps in managing the trade-off between performance, resource utilization, and energy consumption [26]. Similarly, VM migration [48] allows flexible and dynamic resource management while facilitating fault management and lowering maintenance costs. Additionally, advances in virtualization technology have significantly reduced VM overhead, further improving the energy efficiency of Cloud infrastructure.
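A minimal sketch of threshold-based VM migration, in the spirit of the techniques surveyed here: when a host exceeds an upper utilization threshold, move its smallest VM to the least-loaded host that can take it. The host names, the 0.85 threshold, and the policy itself are illustrative assumptions, not any specific system's algorithm:

```python
def plan_migrations(hosts, upper=0.85):
    """For each overloaded host, propose moving its smallest VM to the
    least-loaded other host, provided the destination stays below the
    threshold.

    hosts maps a host name to a list of VM loads, each a fraction of
    one host's capacity.
    """
    load = {h: sum(vms) for h, vms in hosts.items()}
    plans = []
    for src, vms in hosts.items():
        if load[src] > upper and vms:
            vm = min(vms)  # the smallest VM is usually cheapest to migrate
            dest = min((d for d in hosts if d != src), key=lambda d: load[d])
            if load[dest] + vm <= upper:
                plans.append((vm, src, dest))
                load[src] -= vm
                load[dest] += vm
    return plans

# Hypothetical two-host cluster: host_a is overloaded, host_b nearly idle.
migrations = plan_migrations({"host_a": [0.5, 0.4, 0.2], "host_b": [0.1]})
```

Production schedulers layer migration cost, memory footprint, and SLA penalties on top of such a load check, but the threshold structure is the common core.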

Abdelsalam et al. formulated the management problem as an optimization model aiming to minimize the total energy consumption of the Cloud, taking SLAs into account. The issue of under-utilization and over-provisioning of servers was highlighted by Ranganathan et al. Several other research works focus on minimizing over-provisioning using consolidation of virtualized servers [27].

The majority of these works monitor and estimate the resource utilization of applications based on the arrival rate of requests.

However, due to the multiple levels of abstraction, it is very hard to maintain deployment data for each virtual machine within a Cloud datacenter. Thus, various indirect load estimation techniques are used for the consolidation of VMs.

Although the above consolidation methods can reduce the overall number of resources used to serve user applications, the migration and relocation of VMs to match application demand can impact the QoS requirements of the user.

Since Cloud providers need to satisfy a certain level of service, some work has focused on minimizing energy consumption while reducing the number of SLA violations.

One of the first works dealing with the performance and energy trade-off was by Chase et al. They proposed a bidding system to deliver the required performance level while switching off unused servers. Kephart et al.

Song et al. At the operating system level, Nathuji et al. proposed VirtualPower, which allows isolated and independent power management of virtual machines to reduce energy consumption. Soft power states are intercepted by the Xen hypervisor and mapped to changes in the underlying hardware, such as CPU frequency scaling, according to virtual power management rules.

In addition, there are works on improving the energy efficiency of storage systems. Kaushik et al. proposed the Lightning file system, which divides storage servers into Cold and Hot logical zones using data classification.
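A minimal sketch of such hot/cold data classification, assuming a simple last-access-time rule; the 30-day threshold and the file names are hypothetical, not taken from the Lightning design:

```python
from datetime import datetime, timedelta

def classify_zones(files, now, cold_after_days=30):
    """Split files into Hot and Cold zones by last-access time.

    files maps a file name to its last-access datetime. Servers holding
    only Cold data can then be moved to a low-power state.
    """
    hot, cold = {}, {}
    for name, last_access in files.items():
        if now - last_access > timedelta(days=cold_after_days):
            cold[name] = last_access
        else:
            hot[name] = last_access
    return hot, cold

# Illustrative access log: one recently used file, one stale archive.
now = datetime(2024, 6, 1)
hot, cold = classify_zones(
    {"daily_report.csv": datetime(2024, 5, 30),
     "archive_2019.tar": datetime(2019, 1, 1)},
    now,
)
```

Real systems classify on richer signals (access frequency, file type, creation time), but an age cutoff like this captures the core idea of zoning data so that Cold-zone servers can sleep.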

These servers are then switched to inactive states for energy saving. Gurumurthi et al. Soror et al. Since power is dissipated in Cloud datacenters due to the heat generated by the servers, several works have also been proposed for dynamic scheduling of VMs and applications that takes into account the thermal state or heat dissipation in a datacenter.

Description: This book fulfills an important need in the cloud computing space. It reflects the core insights of cloud models, service offerings, cloud architectures, and other benefits. The widespread acceptance and deployment of cloud computing describes the means of delivering any and all IT, from computer applications, software, business processes, messaging, and collaboration, to end users as a service wherever and whenever they need it.


Cloud Computing is one of the emerging topics in Information Technology. It is also included in the syllabus of many universities. We, therefore, decided to work on a book on this subject for the benefit of students and teachers.

Some topics in this book are unique and based on published information that is current and timely. Others are intended for readers who have no prior knowledge of the subject. We therefore believe the book will be helpful to anyone who wants to learn cloud computing. The book is organized into eight chapters, an appendix, and a glossary. Chapter 1 provides the basics of cloud computing, such as the working principles of Cluster, Grid, and Mobile Computing.

Chapter 2 focuses on what Cloud Computing is, the services it provides, and the different deployment models of Cloud Computing. Chapter 3 describes the framework for Cloud Computing. Chapter 4 provides an overview of virtualization techniques, the virtualization model, and how virtualization relates to cloud computing. Chapter 5 presents different aspects of the virtualization procedure and the interrelationships among them.

This chapter discusses issues in scheduling, load distribution, energy efficiency, distribution patterns, and transactional approaches. Lastly, Chapter 8 introduces ways to maintain the privacy of sensitive data and resources using an auditing concept within the third-party provider, ensuring data privacy and data integrity checks.

Our families have been a constant influence, sacrificing a lot of their time and attention to ensure that we stayed motivated to complete this crucial project.


