Faculty of Engineering and Built Environment
Permanent URI for this community: http://ir-dev.dut.ac.za/handle/10321/9
Item: Energy-efficient resource management framework for cloud data centers (2023-05) Sibiya, Khulekani; Nleya, Bakhe

The continuing global surge in various cloud services, IoT, and Edge (Fog) computing has led to a sudden increase in the demand for data centers. By definition, a data center is a physical facility that corporations and organizations use to house their critical applications and data. A data center's design is based on a network of computing and storage resources that enables the delivery of shared applications and data. Notable advantages of data centers include, but are not limited to, their ability to provide services to end users at affordable rates under various plans as per contractual agreements. They also offer a robust hardware and software ecosystem. In operational terms, data centers offer reliable and enhanced system performance by carefully distributing traffic loads uniformly across the cluster nodes. In that way, end users are relieved of maintenance responsibilities. Data centers also afford instant scalability in response to users' changing capacity demands. To enhance their fail-safe abilities, backup systems are incorporated.

A notable drawback of data centers is their high power consumption, which drives up both CAPEX and OPEX costs. For example, it is prohibitively costly to erect robust cooling systems for a large-scale data center, and the same cooling system ought to be scalable to accommodate future expansion of the data center in terms of new services that may require new hardware to be incorporated. Thus, scaling the energy supply capacity is quite a challenge, and maximizing power utilization while optimizing performance per power budget is critical for data centers to deliver sufficient computational capability. Overall, the operational costs of data centers are directly linked to the resource management algorithms implemented to assign virtual machines (VMs) to actual hardware servers, and to the degree of flexibility to relocate them elsewhere in emergencies, usually associated with power losses or excessive heating of system elements.

The main contribution of this thesis is in proposing and analyzing a hierarchical SLA-based distributed resource allocation and optimization scheme that considers constraints such as energy consumption and cooling-related energy consumption, in addition to the scalability issue. We also incorporate a load-balancing algorithm to minimize the operational costs of the proposed scheme. We utilize CloudSim, a customizable tool that supports the modeling and creation of several VMs (as well as the mapping of tasks to appropriate VMs), for the scheme's performance evaluation. The obtained results show that the scheme significantly reduces the operational costs of the overall cloud data center system while at the same time ensuring energy efficiency.
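To make the role of CloudSim in such an evaluation concrete, the following is a minimal sketch (not code from the thesis) of a CloudSim 3.x "classic" scenario: one host, one VM, and one cloudlet (task) bound to that VM. All capacity and cost figures are illustrative placeholders, and the class name MinimalCloudSimScenario is hypothetical; the thesis's energy- and cooling-aware allocation scheme would plug in as a custom VmAllocationPolicy in place of the simple policy used here.

```java
import java.util.*;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

public class MinimalCloudSimScenario {
    public static void main(String[] args) throws Exception {
        // Initialise the simulation with one cloud user (broker) and no event tracing.
        CloudSim.init(1, Calendar.getInstance(), false);

        // One physical host: 4 cores of 1000 MIPS, 8 GB RAM, 10 Gbit/s bandwidth, 1 TB storage.
        List<Pe> peList = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            peList.add(new Pe(i, new PeProvisionerSimple(1000)));
        }
        Host host = new Host(0,
                new RamProvisionerSimple(8192),
                new BwProvisionerSimple(10000),
                1_000_000,
                peList,
                new VmSchedulerTimeShared(peList));
        List<Host> hostList = new ArrayList<>(Collections.singletonList(host));

        // Datacenter characteristics: architecture, OS, VMM, and illustrative cost figures.
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

        // An energy- and cooling-aware scheme would replace VmAllocationPolicySimple
        // with a custom VmAllocationPolicy subclass.
        Datacenter datacenter = new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // A broker submits VMs and cloudlets (tasks) on behalf of the user.
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 1024, 1000, 10_000, "Xen",
                new CloudletSchedulerTimeShared());
        broker.submitVmList(Collections.singletonList(vm));

        UtilizationModel full = new UtilizationModelFull();
        Cloudlet task = new Cloudlet(0, 400_000, 1, 300, 300, full, full, full);
        task.setUserId(broker.getId());
        broker.submitCloudletList(Collections.singletonList(task));
        broker.bindCloudletToVm(task.getCloudletId(), vm.getId()); // map the task to the VM

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        // Report when each task completed and on which VM.
        for (Cloudlet c : broker.getCloudletReceivedList()) {
            System.out.printf("Cloudlet %d finished on VM %d at t=%.2f%n",
                    c.getCloudletId(), c.getVmId(), c.getFinishTime());
        }
    }
}
```

In a full evaluation of this kind, the single host and VM above would be scaled out to a cluster, and metrics such as energy consumed and SLA violations would be collected from the allocation policy rather than from the cloudlet completion times alone.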