Theses and dissertations (Engineering and Built Environment)
Permanent URI for this collection: http://ir-dev.dut.ac.za/handle/10321/10
Item: An adaptive quotation system for web-based manufacturing (2005). Li, Qingxue; Walker, Mark.
Increased global competition is challenging manufacturing industries to bring competitively priced, well-designed and well-manufactured products into the marketplace as quickly as possible. Manufacturing companies are responding to these challenges by extending current internet trends to create virtual marketplaces where factories, suppliers and customers are part of the solution. The pressing demand to reduce lead time by quickly providing a suitable manufacturing price for a product has become an important step in this competitive age. This thesis presents an approach for providing a quotation for a product via the web, automatically and autonomously.

Item: Advanced reliability analysis of road-slope stability in soft rock geological terrain (2023-05). Sengani, Fhatuwani.
Most of the national, regional and local roads in Limpopo Province have been developed through rugged topography, and artificial slopes have been created with loose rocks scattered across them; as a result, road-slope instability is a common challenge. The objective of this research study is to conduct an advanced reliability analysis of road-slope stability in soft rock geological terrain using the national road (N1) and its tributary (R71) as case studies. Limit analysis, limit equilibrium, finite element methods, finite difference methods, machine learning and GIS-based tools have been used for this purpose. In addition, an accuracy classification chart for limit equilibrium methods in homogeneous slopes and a new method for predicting the stability of multiple-faulted slopes were developed. The failure evolution of slope instability was also reproduced, followed by reliability analysis of the slope based on probabilistic analysis. Lastly, an integrated approach to slope stability assessment based on machine learning, geographic information system (GIS)-based tools and geotechnical methods was presented. To achieve the above, field observations and measurements, structural mapping, limit equilibrium, limit analysis, Monte Carlo simulation, fuzzy inference analysis, and GIS digitisation and analysis were performed. Software packages such as SLIDE, FLACslope, Optimum 2G, DIPS, RocLab and ArcGIS were used. The accuracy classification chart for limit equilibrium methods (LEMs), the new method for performing stability analysis in multiple-faulted slopes, and the reproduction of the failure evolution of a slope were developed. Monte Carlo simulation was established as the most reliable and effective technique to analyse slope stability. The steepness of the slope, rock and soil properties, extreme rainfall and geological features were demonstrated to influence slope instability based on the integrated approach stated above. From these major findings, it was concluded that the developed accuracy classification chart of LEMs and the new method of slope stability analysis in multi-faulted slopes are useful. Although the failure evolution of the slope was successfully reproduced, for material to flow over a longer distance, high kinetic energy and more shearing of the material are expected to take place during this process.
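Monte Carlo simulation is singled out above as the most reliable technique for slope reliability analysis. The sketch below is a minimal, generic illustration of the idea, not the thesis's own model or data: assumed distributions of cohesion and friction angle are propagated through the classical dry infinite-slope factor-of-safety expression and a probability of failure is estimated. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo realisations

# Hypothetical soil/rock parameters for a dry infinite slope (illustrative only)
cohesion = rng.normal(18.0, 4.0, n)          # effective cohesion c' (kPa)
phi = np.radians(rng.normal(28.0, 3.0, n))   # friction angle (degrees -> rad)
gamma, depth, beta = 20.0, 3.0, np.radians(35.0)  # unit weight (kN/m3), slip depth (m), slope angle

# Infinite-slope factor of safety: FS = c'/(gamma*z*sin(b)*cos(b)) + tan(phi)/tan(b)
fs = cohesion / (gamma * depth * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)

prob_failure = np.mean(fs < 1.0)             # fraction of realisations with FS below unity
print(f"mean FS = {fs.mean():.2f}, P(failure) = {prob_failure:.3f}")
```

In practice the same sampling loop is wrapped around whichever limit equilibrium or numerical slope model is being assessed.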
It is recommended that other sophisticated methods be utilised to expand the results.

Item: Anaerobic co-digestion of agricultural biomass with industrial wastewater for biogas production (2021-03-26). Armah, Edward Kwaku; Chetty, Maggie; Deenadayalu, Nirmala.
With the increasing demand for clean and affordable energy which is environmentally friendly, the use of renewable energy sources is a way forward for future energy generation. South Africa, like most countries in the world, is over-dependent on fossil fuels, prompting many current researchers to seek an affordable and reliable source of energy, which is also a focal point of United Nations Sustainable Development Goal 7. In past decades, the process of anaerobic digestion (AD), also referred to as mono-digestion, has proven to be efficient, with positive environmental benefits, for producing biogas for the purpose of generating electricity and combined heat and power. However, due to regional shortages, process instability and lower biogas yields, the concept of anaerobic co-digestion (AcoD) emerged to account for these drawbacks. Given that industrial wastewater (WW) could provide considerable nutrients in anaerobic biodigesters, the results of this study could apprise decision-makers and the government to further implement biogas installations as an alternative energy source. The study aims at optimising biogas production through AcoD of the agricultural biomasses sugarcane bagasse (SCB) and corn silage (CS) with industrial WW sourced from Durban, KwaZulu-Natal, South Africa. The study commenced with characterisation of the biomasses using proximate and ultimate analysis together with Fourier transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA), scanning electron microscopy (SEM) and differential scanning calorimetry (DSC). The untreated biomass was subjected to biochemical methane potential (BMP) tests to optimise and predict the biogas potential of the selected biomass. A preliminary run was carried out with the agricultural biomass to determine which of the WW streams would yield the most biogas. Among the four WW streams sourced at this stage, two streams, sugar WW (SWW) and dairy WW (DWW), produced the highest volumes of biogas, in the order SWW > DWW > brewery WW > municipal WW. Therefore, both SWW and DWW were selected for further process optimisation with each biomass. Using the response surface methodology (RSM), the factors considered were temperature (25-55 °C) and organic loading rate (0.5-1.5 gVS/100 mL), and the response was the biogas yield (m³/kgVS). The maximum biogas yield and methane (CH4) content were found to be 5.0 m³/kgVS and 79%, respectively, for the AcoD of CS with SWW. This established the association between the set temperature of the digestion process and the corresponding organic loading rate (OLR) of the AcoD process operating in batch mode. Both CS and SCB are classified as lignocellulosic, and thus ionic liquid (IL) pretreatment was adopted in this study to ascertain its effect on the biogas yield. Results showed that the maximum biogas yield and CH4 content were 3.9 m³/kgVS and 87%, respectively, after IL pretreatment using 1-ethyl-3-methylimidazolium acetate ([Emim][OAc]) for CS with DWW at 55 °C and 1.0 gVS/100 mL. The IL pretreatment yielded less biogas, but with a higher CH4 purity, than the untreated biomass.
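The response surface optimisation described above varies two factors, digestion temperature and organic loading rate, against biogas yield. A minimal sketch of fitting such a two-factor quadratic response surface is shown below; the design points and yields are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical 3x3 design: temperature (deg C), OLR (gVS/100 mL) -> biogas yield (m3/kgVS)
T   = np.array([25, 25, 25, 40, 40, 40, 55, 55, 55], float)
olr = np.array([0.5, 1.0, 1.5, 0.5, 1.0, 1.5, 0.5, 1.0, 1.5])
y   = np.array([2.1, 2.9, 2.6, 3.0, 4.8, 3.9, 3.4, 4.6, 4.2])

# Full quadratic response-surface model: y = b0 + b1*T + b2*OLR + b3*T*OLR + b4*T^2 + b5*OLR^2
X = np.column_stack([np.ones_like(T), T, olr, T * olr, T**2, olr**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum
Tg, Og = np.meshgrid(np.linspace(25, 55, 61), np.linspace(0.5, 1.5, 41))
Xg = np.column_stack([np.ones(Tg.size), Tg.ravel(), Og.ravel(),
                      (Tg * Og).ravel(), (Tg**2).ravel(), (Og**2).ravel()])
yg = Xg @ b
i = int(np.argmax(yg))
print(f"predicted optimum: T = {Tg.ravel()[i]:.0f} degC, "
      f"OLR = {Og.ravel()[i]:.2f} gVS/100 mL, yield = {yg[i]:.2f} m3/kgVS")
```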
Data obtained from the BMP tests for the untreated and pretreated biomasses were tested against existing kinetic models: first order, dual-pooled first order, Chen and Hashimoto, and the modified Gompertz. The results showed that, for both untreated and pretreated biomass, the modified Gompertz model had the best fit amongst the four models tested, with coefficient of correlation (R²) values of 0.997 and 0.979, respectively. Comparatively, the modified Gompertz model could be the preferred model for the study of industrial WW used as a co-substrate during AcoD for biogas production. The study showed that higher biogas production and CH4 content were observed when CS was employed as a reliable feedstock, with maximum volumes for the untreated and pretreated feedstock reported at 31 L and 20 L, respectively.

Item: Application of kaolin-based synthesized zeolite membrane systems in water desalination (2021-12-01). Aliyu, Usman Mohammed; Isa, Yusuf Makarfi; Rathilal, Sudesh.
Accessibility to potable water worldwide is threatened, despite 71% of the earth's surface being covered with water; however, 97% of that water is too saline for consumption. A usual way of treating salinity is by membrane desalination using reverse osmosis. The disadvantages of this approach are its high cost and the short life span of the polymeric membranes used. Creating a new, robust, high-quality water treatment system using a ceramic membrane will address these challenges owing to its robust mechanical properties. In this work, we synthesized different zeolites from South African kaolin under varying conditions such as crystallization time, ageing time and temperature, and the effects of these conditions on the properties of the synthesized zeolites were investigated. Sample characterization confirmed the successful synthesis of ZSM-5 and zeolite A. In the synthesis procedure, metakaolin served as the alternative source of silica and alumina and was used to synthesize different types of zeolites under varying synthesis conditions. Synthesized samples were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy and Brunauer–Emmett–Teller (BET) surface area analysis. The properties of the synthesized ZSM-5 were influenced by the synthesis parameters, typically crystallization temperature, ageing time and crystallization time. Crystalline ZSM-5 zeolite was produced at an ageing time of 24 hours, a crystallization time of 48 hours and a crystallization temperature of 180°C, with a Si/Al ratio of 43 and a BET surface area of 282 m²/g. After a 12-hour ageing period, zeolite A was produced at a crystallization time of 20 hours and a crystallization temperature of 100°C, with a Si/Al ratio of 1.3 and a BET surface area of 143.88 m²/g. The findings indicate that ageing influences the synthesis of zeolite A, as a relatively crystalline material formed at an ageing time of 12 hours, and crystallinity decreased as the ageing time was increased. We do not exclude the possibility of Ostwald ripening playing a role in this relationship. Subsequently, the efficiency of zeolite A and ZSM-5 zeolite in removing the salt ions Ca2+, K+, Mg2+ and Na+ from synthetic seawater was investigated at room temperature using a batch adsorption system. The effects of adsorbent dosage, agitation speed and contact time were considered. Dosages varied from 2.5 to 6.0 g/100 ml while the contact time varied from 30 to 180 minutes.
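For the batch adsorption tests described above, the standard performance measure is the removal efficiency R = (C0 − Ce)/C0 × 100 for each ion, where C0 and Ce are the initial and equilibrium concentrations. A minimal sketch is given below; the concentrations are illustrative placeholders rather than the study's data.

```python
# Removal efficiency R = (C0 - Ce) / C0 * 100 per ion; concentrations are hypothetical (mg/L).
initial = {"Ca2+": 410.0, "K+": 390.0, "Mg2+": 1290.0, "Na+": 10750.0}
equilibrium = {"Ca2+": 42.0, "K+": 160.0, "Mg2+": 1265.0, "Na+": 9700.0}

removal = {ion: (c0 - equilibrium[ion]) / c0 * 100.0 for ion, c0 in initial.items()}
for ion, r in sorted(removal.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ion}: removal efficiency = {r:.1f} %")   # sorted output mirrors a selectivity order
```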
The results obtained showed that a zeolite dosage of 6.0 g/100 ml and an agitation speed of 140 revolutions per minute (rpm) yielded a maximum removal efficiency of 89.7% for Ca2+ and a minimum removal efficiency of 1.8% for Mg2+ at contact times of 30 and 120 minutes, respectively. Ion exchange of Na+ by Ca2+, K+ and Mg2+ in the zeolite framework was established. The overall ion-exchange selectivity of both zeolite A and ZSM-5 is in the order Ca2+ > K+ > Na+ > Mg2+. Zeolite A showed a higher removal efficiency compared to ZSM-5 zeolite. The results indicate that the synthesized zeolite was able to reduce the salt ions in synthetic seawater to below the World Health Organization (WHO) recommended values. Consequently, zeolite synthesized from kaolin offers a cost-effective technology for the desalination of seawater. The desalination and material characterization results were used in selecting a potential zeolite for use in reverse osmosis (RO). The material was successfully deposited on an etched alpha-alumina support to produce a zeolite membrane by a hydrothermal technique using a modified in-situ method. Zeolite A and ZSM-5 membranes were produced and applied in the RO unit for desalination. The RO membrane experimental results show potential for the desalination of synthetic seawater. A machine-learning tool was used to predict the properties of the synthesized ZSM-5 as a function of the hydrothermal parameters. Finally, a techno-economic analysis of synthesizing zeolite from locally available kaolin at a capacity of 5 × 10⁵ kg/yr showed that the plant is economically viable, with a rapid break-even point and a payback period of less than 4 years.

Item: Assessment of control loop performance for nonlinear process (2017). Pillay, Nelendran; Govender, Poobalan.
Controller performance assessment (CPA) is concerned with the design of analytical tools that are utilized to evaluate the performance of process control loops. The objective of CPA is to ensure that control systems operate at their full potential, and also to indicate when a controller design is performing unsatisfactorily under current closed-loop conditions. Such monitoring efforts are imperative to minimize product variability, improve production rates and reduce wastage. Various studies conducted on process control loop performance indicate that as many as 60% of control loops often suffer from some kind of performance problem. It is therefore an important task to detect unsatisfactory control loop behavior and suggest remedial action. Such a monitoring system must be integrated into the control system life span as plant changes and hardware issues become apparent. CPA is well established for linear systems; however, not much research has been conducted on CPA for nonlinear systems. Traditional CPA analytical tools depend on the theoretical minimum variance control law that is derived from models of linear systems. In systems exhibiting dominant nonlinear behavior, the accuracy of linear-based CPA is compromised. In light of this, there is a need to broaden the existing CPA knowledge base with comprehensive benchmarking indices for the performance analysis of nonlinear process control systems. The research efforts presented in this thesis focus on the development and analysis of such CPA tools for univariate nonlinear process control loops experiencing the negative effects of dominant nonlinearities emanating from the process.
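Traditional linear CPA, as noted above, benchmarks a loop against minimum variance control. The sketch below illustrates that conventional linear baseline (a Harris-type index estimated from routine closed-loop data): a time-series model is fitted to the output, and the variance theoretically achievable under minimum variance control over the process delay is compared with the observed variance. The data, model order and delay are hypothetical; this is the linear baseline that the thesis extends, not the thesis's own nonlinear indices.

```python
import numpy as np

def harris_index(y, delay, ar_order=20):
    """Minimum-variance (Harris-type) performance index from routine closed-loop
    output data: eta = sigma_mv^2 / sigma_y^2 (values near 1 mean the loop is
    close to the minimum-variance benchmark)."""
    y = np.asarray(y, float) - np.mean(y)
    p = ar_order
    # Fit an AR(p) model y[t] = a1*y[t-1] + ... + ap*y[t-p] + e[t] by least squares
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    e = y[p:] - X @ a                      # innovation (residual) sequence
    sigma_e2 = np.var(e)
    # First `delay` impulse-response coefficients of the closed loop to the innovations
    psi = np.zeros(delay)
    psi[0] = 1.0
    for k in range(1, delay):
        psi[k] = sum(a[i] * psi[k - 1 - i] for i in range(min(k, p)))
    sigma_mv2 = sigma_e2 * np.sum(psi**2)  # variance achievable under minimum-variance control
    return sigma_mv2 / np.var(y)

# Hypothetical routine operating data from a sluggish loop (AR(1) disturbance for illustration)
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.9 * y[t - 1] + e[t]
print(f"Harris index ~ {harris_index(y, delay=3):.2f}")  # below 1 -> potential for improvement
```

An index close to 1 indicates performance near the minimum-variance benchmark; the thesis's contribution is to replace this linear baseline with nonlinear model-based and data-driven alternatives.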
Two novel CPA frameworks are proposed. First, a model-based nonlinear assessment index is developed using an open-loop model of the plant in an artificial neural network NARMAX (NNARMAX) representation. The nonlinear control loop is optimized offline using a proposed Nelder-Mead Particle Swarm Optimization (NM-PSO) hybrid search to determine globally optimal control parameters for a gain-scheduled PID controller. Application of the benchmark in real time utilizes a synthetic process output derived from the NNARMAX system, which is compared to the actual closed-loop performance. For the case where no process model is available, a second method is presented: an autonomous data-driven approach based on Multi-Class Support Vector Machines (MC-SVMs) is developed and analyzed. Unlike the model-based method, the closed-loop performance is classified into five distinct class groups. The MC-SVM classifier requires minimal process loop information other than routine closed-loop operating data. Several simulation case studies conducted using the MATLAB™ software package demonstrate the effectiveness of the proposed performance indices. Furthermore, the methodologies presented in this work were tested on real-world systems using control loop data sets from a computer-interfaced, full-scale pilot pH neutralization plant and from the pulp and paper industry.

Item: An assessment of the impact of selected construction materials on the life cycle energy performance and thermal comfort in buildings (2021). Haripersad, Rajesh; Lazarus, Ian Joseph; Singh, Ramkishore; Aiyetan, Olatunji Ayodeji.
South Africa is a developing country with various construction projects being undertaken both by government and by the private sector. The requirements for the construction of energy-efficient buildings, as well as the selection methods for providing construction materials, have hence become important. Energy efficiency improvements need to be implemented in the construction of these buildings in order to decrease energy usage and costs and provide more comfortable conditions for their occupants. Previous studies revealed that most of the focus for improving energy efficiency in buildings has been on their operational emissions. It is estimated that about 30% of all energy consumed throughout the lifetime of a building is utilized as embodied energy (this percentage varies based on factors such as the age of the building, climate and materials). In the past this percentage was much lower, but with increased emphasis placed on reducing operational emissions (such as energy efficiency improvements in heating and cooling systems), the embodied energy contribution has become more significant. Hence, it is important to employ a life-cycle carbon framework in analysing the carbon emissions of buildings. The study aims to augment energy efficiency initiatives by showcasing energy reduction strategies for buildings. The study assessed the thermal performance of selected construction materials by analysing different buildings using the energy modelling programs EnergyPlus and TRNSYS. The parametric study was set in the central plateau region of South Africa and was performed to determine appropriate energy efficiency improvements that can be implemented for maximum savings. A life cycle cost analysis was performed on the selected improvements. The models created are representative of the actual buildings, as shown when simulated data are compared to recorded data from these buildings.
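The life-cycle framing above splits a building's total energy into embodied and operational components. The sketch below illustrates that bookkeeping for two hypothetical wall constructions over an assumed 50-year service life; none of the figures are from the study.

```python
# Life-cycle energy = embodied energy + annual operational energy * service life.
# All numbers below are illustrative assumptions, not results from the study.
service_life_years = 50

options = {
    # name: (embodied energy [GJ], annual heating/cooling energy [GJ/yr])
    "conventional wall construction": (350.0, 48.0),
    "insulated alternative":          (420.0, 39.0),
}

for name, (embodied, operational_per_year) in options.items():
    life_cycle = embodied + operational_per_year * service_life_years
    share_embodied = 100.0 * embodied / life_cycle
    print(f"{name}: life-cycle energy = {life_cycle:.0f} GJ "
          f"(embodied share = {share_embodied:.0f} %)")
```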
Results showed a significant variation in energy and construction costs with varying construction materials over the buildings' life cycle. Findings suggest that there is a significant reduction in energy usage when simple efficiency measures are implemented. The study recommends the use of energy-efficient building materials and the implementation of passive interventions in the construction of buildings; that the thermal performance of a building be optimized to ensure thermal comfort; and that the developed model be adopted for use in the engineering and construction industry to reduce energy consumption.

Item: Basic mathematical modelling for polymer woven fabric performance suitable for low energy filtration systems (2019). Mncube, Blessing Thokozani; Rathilal, Sudesh; Pillay, Visvanathan Lingamurti.
Water is one of the most important and essential resources, yet people usually misuse it and take it for granted until it is either gone or unsuitable for domestic, industrial or agricultural purposes. The need to explore affordable purification technologies is therefore pressing. Filtration processes are innovative technologies that can be employed in water treatment or water purification systems. However, filtration technologies have one prime limiting factor: fouling and the biofilm formed on the membrane surface, and sometimes internally. Recent advancements in polymer science and textiles have led to the development of fabric materials that can be used as membranes suitable for emerging economies. For years, people, especially women from rural areas, have used fabric to purify river water. Yet industry uses non-woven materials, rather than woven fabrics, as membranes. However, most non-woven fabrics are easily damaged when cleaned with a polymer brush and require periodic replacement. Tapeline and filter manufacturers use a woven fabric as a backer before casting or placing a filter on the woven fabric. This shows that any woven fabric can be modified for optimal use. On the other hand, most engineers and scientists have not given much attention to woven fabrics; as a result, woven fabrics are not employed as membranes. Some scientists and engineers believe that woven fabrics are not suitable for treating water for domestic use. Others believe that some woven fabrics can be used as membranes provided they are capable of removing unwanted materials like bacteria and pathogens. The aim of this study is to create a full understanding of the factors that affect fabrics when used as membranes, especially when polymer woven fabrics are used as filters to treat water and wastewater. It is essential to develop standardized procedures or models that accurately describe the behaviour of textile woven fabrics when used as filters. Such standardized models or procedures will assist engineers and scientists when developing filtration systems using woven fabrics. The first objective was to evaluate and compare the fabric types that can be used as filters or membranes in water and wastewater treatment processes. The second objective was to identify the applications for woven fabric membranes and evaluate the factors that play a critical role during the filtration process and the relationships between those factors.
The experimental investigations were conducted to evaluate the main objectives: (1) the effect of membrane orientation; (2) the effect of feed quality on membrane performance; (3) the effect on stable flux quality and quantity of the selected fabrics; (4) the effect of fabric type on filtration or microfiltration processes; (5) the effect of membrane fouling on membrane performance; and (6) the development of a basic model suitable for identifying the right fabric for any filtration system operating at low energy. The experiments evaluated selected woven fabrics that were manufactured in South Africa and are easy to clean with a polymer brush. These woven fabrics were tested using South African river water and wastewater from treatment plants. When evaluating different feed solutions, bio-fouling was considered to be the major limiting factor for woven fabrics, but feed with many biological impurities can be modified for process optimization. Laboratory and field apparatus were developed to analyse and evaluate fabric performance and behaviour, and the cake formed on the fabrics. The results clearly show that a solution or wastewater with many biological organisms produces a lower flux, and also a lower critical/stable flux, than a solution with more incompressible solids or impurities. The results also show that all selected fabrics can be used as filters; however, the polyester fabric was the only fabric that could be used for microfiltration processes suitable for cleaning water for domestic use. This polyester fabric removes 99.995% of impurities from the polluted waters. The permeate water quality from this polyester fabric was less than 1 NTU, before and after stable flux. The other fabrics can be used as filters but not for microfiltration; these three fabrics are not capable of removing micro-impurities (smaller than 20 micrometres). The basic mathematical modelling equation developed showed that membrane pore size, driving force, the size of impurities in the polluted water, the nature of the impurities and the concentration of the impurities play major roles in the filtration process, especially in stable flux formation. The simple equation F = A·e^(−Bt) + C was found to be suitable for evaluating fabric performance, where C is the constant (stable) flux value, A is the maximum flux value and B governs the rate of change toward the critical region. The equation can be applied to most fabrics that are used as filters. Testing the maximum flux value was critical and was achievable when using pure, clean water, especially distilled water. The results show that most solutions with highly compressible impurities will take less time to reach a critical or stable flux. A solution, effluent or river water with more biological impurities and more bacteria will have a lower flux than a solution with more incompressible impurities. Most polymer woven fabrics do not require any sophisticated technologies or additional chemicals for cleaning; they can be easily brushed with a polymer brush. Brushing the surface of a fabric with balanced tensile strengths in both warp and weft yarns will not rearrange, damage or affect the pore size. Only sharp objects can damage the polymer fabrics. The knowledge in this report will assist in optimizing low-energy filtration system operation when using woven polymer fabrics as membranes. The basic mathematical model can be useful to engineers and scientists willing to use woven fabrics as membranes.
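The flux model F = A·e^(−Bt) + C above can be fitted to measured flux-decline data by nonlinear least squares. The sketch below does this for synthetic data; the data and the fitted values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_model(t, A, B, C):
    """Flux-decline model F(t) = A*exp(-B*t) + C, with C the stable flux."""
    return A * np.exp(-B * t) + C

# Synthetic flux-decline data (time in minutes, flux in L/m2.h); illustrative only
t = np.linspace(0, 120, 25)
rng = np.random.default_rng(1)
flux = flux_model(t, 80.0, 0.05, 25.0) + rng.normal(0, 2.0, t.size)

# Fit A, B and C to the data; p0 gives rough starting guesses
popt, pcov = curve_fit(flux_model, t, flux, p0=(50.0, 0.1, 10.0))
A, B, C = popt
print(f"fitted A = {A:.1f}, B = {B:.3f} 1/min, stable flux C = {C:.1f} L/m2.h")
```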
Hence, mathematical modelling is one of the important tools of engineering optimization and design. This study focuses on low-energy (gravity-driven) systems that treat water and wastewater, such as household point-of-use (POU) systems. Other POU systems were tested and compared to POU systems made of the polymer woven fabric. Based on the results, it can be concluded that POU systems that use polyester membranes (PWF-POU) are good prospects for areas without sophisticated water or wastewater treatment systems, since they remove almost all bacteria and impurities. Polyester woven fabrics can be used as a microfiltration membrane not only to process water or wastewater but also to process chemicals, oils, etc. The other selected fabrics, which were made of polypropylene filaments, need to be modified in order to operate optimally when cleaning water for domestic and tertiary use. When these polypropylene fabrics are modified, the quality does improve.

Item: Characterisation of concrete with expanded polystyrene, eggshell powder and non-potable water: a case study (2023-05). Mncwango, Bonke; Allopi, Dhiren.
Urbanisation has brought many benefits, but it has also highlighted the global lack of housing alongside global natural resource scarcity. On the surface, lack of housing appears to be a singular problem; in reality it represents a number of society's biggest challenges, such as crime, pollution (as a result of inadequate waste disposal strategies), unhygienic living conditions and numerous health problems. Governments across the world have made various attempts at addressing the lack of housing, including embarking on large-scale social and public housing initiatives, building smaller homes for the homeless, and removing certain regulatory barriers to allow more houses to be built in a reduced timeframe. These advances have assisted many individuals and families globally; however, there are still many individuals and families that government housing aid or housing initiatives have not yet reached. These individuals and families are faced with solving their housing crisis on their own, with their own resources. Globally, concrete remains a supreme building material in the construction industry and therefore is a primary factor of consideration for solving the housing crisis, especially for those who have no financial assistance or aid from government. Concrete's composition is simple: cement, fine aggregate, coarse aggregate and water. The intricate interaction between all four components is meant to stand the test of time. Unfortunately, it is not only the earth's diminishing natural resource reserves which are causing a decline in the popularity of conventionally produced concrete, but also the irreparable harm that it is causing to the environment. The process of concrete production requires large volumes of cement, and cement remains one of the biggest producers of carbon dioxide. Carbon dioxide is a greenhouse gas which in excessive amounts creates a cover that traps the sun's heat energy in the atmosphere. Another major criticism of conventional concrete is the requirement that it be produced with clean water of a drinkable standard. This criticism is justified when considering the extreme water shortages experienced by many low- to middle-income countries around the world. The amount of financial and human resources that local authorities invest in cleansing water to bring it to a drinkable standard is often overlooked.
It is obvious that it is less expensive to use water directly from a river in its natural state than using it after it has undergone numerous cleansing processes by local authorities. There have been a notable number of advances in making concrete more resource-efficient and environmentally friendly. These include the advent of lightweight concretes such as expanded polystyrene concrete. Expanded polystyrene concrete not only saves the amount of aggregate that would normally be required in conventional concrete, it also has excellent acoustic and thermal properties, thereby reducing energy consumption which in turn saves money. However, even with such excellent properties, expanded polystyrene concrete still fails to address two of concrete’s major criticisms which are related to the amount of cement used as well as the amount of clean potable water required for mixing. Therefore, by building on the qualities of expanded polystyrene concrete, this research investigates the potential of lowering the amount of cement required in a concrete mix through the use of eggshell powder. Eggshells are a waste product found everywhere in the world and are readily available in almost limitless quantities. The use of eggshells in concrete to lower the amount of cement required will not only achieve a reduction in the amount of carbon dioxide that is produced in the process of producing concrete, it will also assist in contributing toward solving the escalating waste disposal crisis that currently exists for many waste types such as eggshells. It is common for communities to reside close to a river or a natural flowing watercourse, so this research included river water as a variable. Four different concrete mix scenarios were tested to ascertain through experimentation whether the strength properties of concrete that contains expanded polystyrene, eggshell powder and natural river water in various proportions could in any way compare to a conventionally produced concrete mix. In order to comprehensively study material behaviour in this case, sieve analysis, bulk density, fineness modulus, moisture content as well as specific gravity tests were performed on all aggregates used. Furthermore, in order to achieve the required analytical depth for the materials being studied, x-ray diffraction and energy dispersive spectroscopy tests were conducted. As a means of conducting further trend analysis on the different experimental mixes, logarithmic regression models were developed. Through analysis of the output attained from the aforementioned strategies, this research study found that when cement was substituted by eggshell powder at a percentage of 5 % and simultaneously when coarse aggregate was also substituted by expanded polystyrene at a percentage of 5 %, all mixed with non-potable water, the compressive and flexural strength outcomes marginally differed from the strength outcomes of conventionally produced concrete. Furthermore, the substitution of stone by EPS at a percentage of 10 % when mixed with river water was comparable to the substitution of stone by EPS at a percentage of 10 % when mixed with potable water. The results showed that there was a difference of not more than 1.4 MPa and 0.3 MPa in compressive and flexural strength respectively amongst the averages obtained at each age tested. Study results show that the substitution of potable water by non-potable water reduced both the compressive and flexural strength of the concrete when the mix did not contain eggshell powder. 
However, when eggshell powder was included in the mix, the compressive and flexural strength outcomes of the concrete mix were comparable to those of conventionally produced concrete. There may be many reasons why it is important not to deviate from convention in the production of numerous products such as concrete; nevertheless, the value of experimentation, as demonstrated in this research, is that it can give rise to a variety of innovations accompanied by a wealth of solutions to the environmental and socio-economic issues the world currently faces.

Item: Combined tropospheric attenuation along satellite path at SHF and EHF bands in subtropical region (2019). Olurotimi, Elijah Olusayo; Sokoya, A. O.; Ojo, J. S.; Owolawi, P. A.
The traffic flow of information across the globe is crucial in today's communication systems, where about 88% of the population is connected via several smart devices, resulting in constraints on the limited available radio resources. The limitations of terrestrial connectivity, in terms of geographical coverage area and system capacity, have become serious issues globally; therefore, there is a need for communication industries to embrace the use of satellite systems. Satellite services have many advantages, some of which include availability, wide coverage area and the ability to overcome most of the limitations of terrestrial systems. However, Earth-to-satellite systems, especially those operating at frequencies above 7 GHz, usually suffer from degradation due to hydrometeors, which are mainly produced in the troposphere. Hydrometeors include rainfall, hail, gases, clouds and snow, among others; of these, rainfall is the principal factor contributing the highest impairment along the propagation paths, simply termed rain attenuation. Moreover, the effect is more pronounced in tropical and subtropical regions owing to the higher occurrence of precipitation compared to temperate regions. Another significant factor that usually affects signal propagation is attenuation by scattering and absorption due to rain, water vapour, cloud and other gases in the atmosphere. Thus, in order to estimate accurate rain attenuation for a location, there is a need for accurate measurements of rain attenuation components such as rain height, rain rate, altitude and slant-path length, among others; of these, rain height plays a significant role in the case of satellite links. However, attenuation due to other tropospheric components cannot be neglected at higher frequencies over any location if solutions are to be provided for impairments that may arise from atmospheric perturbations in a satellite communications system. The significance of rain height in estimating rain attenuation along the satellite path is crucial, and this important component has been extensively dealt with in temperate regions and partially in tropical regions, with no record in subtropical regions. This study, therefore, focuses on the measurement of rain height to assess the degree of attenuation due to precipitation over several locations across South Africa, a subtropical region.
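For context, the chain from rain rate and rain height to slant-path rain attenuation described in this abstract is commonly expressed, in its simplest ITU-R style form, by the relations below (a simplified sketch: k and α are frequency- and polarisation-dependent coefficients tabulated in ITU-R P.838, and the thesis's own prediction models may differ in detail):

```latex
\gamma_R = k\,R_{0.01}^{\alpha}\ \ [\mathrm{dB/km}], \qquad
L_s = \frac{h_R - h_s}{\sin\theta}\ \ [\mathrm{km}], \qquad
A_{0.01} = \gamma_R\, L_s\, r_{0.01}\ \ [\mathrm{dB}]
```

Here R_{0.01} is the rain rate exceeded for 0.01% of an average year, h_R the rain height, h_s the station altitude, θ the elevation angle and r_{0.01} a path reduction factor; locally measured ZDIH-based rain heights, as reported in this work, feed directly into the slant-path length term.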
In spite of the extensive work that has been carried out on the prediction of rain attenuation based on the rain height recommended by the International Telecommunication Union Radiocommunication Sector (ITU-R) over some of the studied locations, the use of local rain height data for rain attenuation prediction will enable better results, and this is the focus of this study. Hence, this thesis presents 5-year rain height measurements based on the zero-degree isotherm height (ZDIH) obtained from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM-PR) over a subtropical region, South Africa. The components of this work encompass rain height cumulative distributions, percentages of exceedance, development of contour maps of rain height for South Africa, modelling of rain height, prediction of tropospheric attenuation due to gases, cloud and scintillation, application of rain height to rain attenuation prediction, estimation of total attenuation, and prediction of quality of service based on signal-to-noise ratio. Findings from this work show that the ZDIH distribution is location dependent. Rain height values range from about 4.305 km in the southern region to 5.105 km in the northern region of South Africa. The parameters of the ZDIH distribution models, developed using the maximum likelihood estimation technique, show a wide variation over some of the selected locations observed. Finally, attenuation due to rain, gases, cloud and scintillation was estimated. In addition, the total attenuation and the quality of service based on the propagation signals at SHF and EHF over some selected stations were evaluated and presented in this work.

Item: Comparative study of anammox-mediated nitrogen removal in three reactor configurations (2021-05-27). Kosgey, Kiprotich Eric; Pillai, Sheena Kumari Kuttan; Kiambi, Sammy Lewis; Bux, Faizal; Chandran, Kartik.
Anaerobic ammonium oxidation (ANAMMOX) is an efficient and cost-effective process developed for biological nitrogen removal from wastewater. However, widespread application of the ANAMMOX process for wastewater treatment remains constrained due to the slow growth of ANAMMOX bacteria, their propensity for out-competition by fast-growing microbes, and their sensitivity to environmental and operational conditions. Consequently, understanding the influence of mixing conditions in different reactor configurations on this process is paramount to its improvement. This study focused on the comparative analysis of ANAMMOX-mediated nitrogen removal in a hybrid up-flow anaerobic sludge blanket reactor (H-UASB), a moving bed biofilm reactor (MBBR) and a gas-lift reactor (GLR). The study involved experimental investigation of nitrogen removal, bacterial population dynamics and the physical properties of the bacterial biomass within the reactors, as well as the description of process performance and the growth of nitrifying and ANAMMOX bacteria in the reactors using a calibrated mechanistic model. All the reactors were operated for 535 days using the same synthetic feed under anaerobic conditions. K1-type carrier materials were added to each reactor for biofilm development. The concentrations of ammonium (NH4+), nitrite (NO2-) and nitrate (NO3-) in the effluent from the reactors were determined colorimetrically. Among the three reactors, the MBBR displayed the highest nitrogen removal efficiency (NRE) during the study (66±36%), and contained the lowest concentrations of free ammonia (FA) (19±22 mg-N/L) and free nitrous acid (FNA) (0.001±0.001 mg-N/L).
In comparison, the NRE and the concentrations of FA and FNA in the H-UASB during the study were 63±28%, 91±41 mg-N/L and 0.006±0.004 mg-N/L, respectively, while in the GLR they were 54±39%, 28±29 mg-N/L and 0.002±0.002 mg-N/L, respectively. Based on the ratios of NO2- consumed to NH4+ consumed, and the ratios of NO3- produced to NH4+ consumed, the start-up of the ANAMMOX process was faster in the MBBR (144 days) compared to the H-UASB (193 days) and GLR (272 days). The MBBR also displayed fewer fluctuations in the NREs and nitrogen removal rates (NRRs) during the study compared to the H-UASB and GLR. The microbial communities in the suspended biomass in the reactors were characterised using high-throughput sequencing on an Illumina MiSeq platform on days 125, 192, 260, 309 and 535, while the microbial communities in the biofilms were only characterised on day 535 (the last day) due to slow biofilm development. Gradual increases in the relative abundance of ANAMMOX bacteria were observed in the suspended biomass in all the reactors between days 125 and 309, which corroborated the observed increases in the NREs. The relative abundance of ANAMMOX bacteria remained consistently higher in the H-UASB during the study than in the MBBR and GLR. On the contrary, the highest relative abundance of ammonia oxidising bacteria (AOB) was observed in the suspended biomass in the MBBR on day 125 at approximately 38%, while the highest relative abundance of nitrite oxidising bacteria (NOB) and complete ammonia oxidising (COMAMMOX) bacteria was recorded in the suspended biomass in the MBBR at approximately 30% and 5%, respectively. In all the reactors, the relative abundance of AOB in the biofilms and the suspended biomass was comparable on day 535. In addition, on day 535, a higher relative abundance of NOB was observed in the biofilms in both the GLR and H-UASB, at approximately 7%, compared to the suspended biomass, while their abundance in the suspended biomass in the MBBR was comparable to that recorded in the biofilms. Furthermore, in both the H-UASB and MBBR, a higher relative abundance of ANAMMOX bacteria was observed in the suspended biomass compared to the biofilms on day 535, while comparable abundance was observed in the GLR. The highest total microbial diversity (Shannon and Simpson indices) and evenness (Pielou's evenness) were observed in the suspended biomass in the MBBR. Granulation of the suspended biomass was observed in both the GLR and H-UASB, while the suspended biomass in the MBBR was flocculent. In the MBBR, the colour of the biomass had turned brown on day 125, while the biomass in the H-UASB and GLR on this day was tawny and dark tawny, respectively. However, by day 309, the biomass in all the reactors had turned red, corroborating the highest relative abundance of ANAMMOX bacteria observed during the study. Faster attachment of biomass on the carrier materials was observed in the MBBR in the course of the study compared to the H-UASB and GLR. On the last day, the concentration of biomass on the carrier materials was also higher in the MBBR (12 mg/carrier) than in the H-UASB (8 mg/carrier) and GLR (10 mg/carrier). Activated sludge model 1 (ASM 1), which was modified by separating the activities of Nitrospira spp. from those of Nitrobacter spp. as well as by adding both ANAMMOX and COMAMMOX bacterial activities, was used to describe process performance in the reactors. The modified ASM 1 was able to predict the trends in the effluent concentrations of NH4+, NO2- and NO3- in all the reactors.
In addition, the correlation between the actual relative abundance of nitrifying and ANAMMOX bacteria and the model-predicted relative abundance was positive. The model also indicated higher heterotrophic activities in both the GLR and MBBR compared to the H-UASB, an indication that continuous mixing in the MBBR and the alternation of plug-flow conditions with internal gas circulation in the GLR favoured heterotrophic bacterial growth. However, the model was limited in predicting the fluctuations in bacterial abundance and the fluctuations in the effluent concentrations of NH4+, NO2- and NO3- in the reactors. The obtained results indicate that better-mixed conditions in the MBBR led to a comparable relative abundance of nitrifying bacteria between the biofilms and the suspended biomass, while plug-flow conditions in the H-UASB favoured ANAMMOX bacterial growth in the suspended biomass and nitrifying bacterial growth in the biofilms. The alternation of internal gas circulation with plug-flow conditions in the GLR also favoured the growth of nitrifying bacteria in the biofilms. Overall, nitrogen removal in the H-UASB was likely dominated by the ANAMMOX process, while nitrogen removal in the MBBR and GLR was a result of combined ANAMMOX and sequential nitrification-denitrification processes. The novelty of this study stems from the assessment of the impact of mixing conditions on process performance and the microbial ecology of ANAMMOX-mediated systems.

Item: Crashworthiness analysis of a composite light fixed-wing aircraft including occupants using numerical modelling (2017). Evans, Wade Robert; Jonson, Jon David; Walker, Mark.
The development and validation of reliable numerical modelling approaches is important for achieving higher levels of aircraft crashworthiness performance to meet the increasing demand for occupant safety. With the use of finite element analysis (FEA), development costs and certification tests may be reduced, whilst satisfying aircraft safety requirements. The primary aim of this study was the development and implementation of an explicit nonlinear dynamic finite element based methodology for investigating the crashworthiness of a small lightweight fibre-reinforced composite aircraft with occupants. The aircraft was analysed as it crashed into soft soil, and the FEA software MSC Dytran was selected for this purpose. The aircraft considered for the purposes of this study was based on a typical four-seater, single-engine, fibre-reinforced plastic composite aircraft. The definition of a survivable accident is given by Coltman [1] as: "an accident in which the forces transmitted to the occupant through his seat and restraint system do not exceed the limits of human tolerance to abrupt accelerations and in which the structure in the occupant's immediate environment remains substantially intact to the extent that a liveable volume is provided for the occupants throughout the crash sequence". From this definition, it was determined that the FEA models must primarily provide an assessment of the crashworthiness of the aircraft in terms of the structural integrity of the airframe, to ensure a minimum safe occupant volume, and the tolerance of humans to abrupt (de)accelerations. Other crashworthiness factors have been ignored in this study, such as post-crash hazards (e.g. fire) and safe egress for the occupants. Stockwell [2] performed a dynamic crash analysis of an all-composite Lear Fan aircraft impacting into concrete with the explicit nonlinear dynamic finite element code MSC Dytran.
The structural response of components was qualitatively verified by comparison to experimental data such as video and still camera images. The composite fuselage materials were represented with the use of simplified isotropic elastic-plastic material models, and therefore did not account for the anisotropic properties of composite materials and the associated failure mechanisms. The occupants were represented as lumped masses; therefore occupant response could not be investigated. Malis and Splichal [3] performed a dynamic crash analysis of a composite glider impacting into a rigid surface with MSC Dytran; however further model verification was required. The 50th percentile adult male (occupant of average height and mass) Hybrid III anthropomorphic test device (ATD), also referred to as a crash test dummy, was represented in the analyses with the Articulated Total Body (ATB) model integrated within MSC Dytran. Various injury criteria of the ATB model were evaluated to determine the crashworthiness of the glider. Bossak and Kaczkowski [4] performed global dynamic crash analyses of a composite light aircraft crash landing. Representative wet soil, concrete and rigid impact terrains were modelled using Lagrangian-based finite element techniques and only the vertical velocity component of the aircraft was considered to simplify analyses. It was assumed that the previous use of only a downward vertical velocity component was a result of possible numerical instabilities which commonly occur with the use of Lagrangian solvers when considering problems with large deformations, which is a characteristic of crash analyses (i.e. the addition of a horizontal velocity component may result in severe element deformation of the soft soil terrain, resulting in premature analysis termination). Analyses of the occupant were performed in separate local models, using accelerations derived from the global analyses results. The real-time interactions between the occupant and aircraft therefore could not be investigated, which is considered a major disadvantage. Impact analyses of helicopters into water were performed by Clarke and Shen [5], and Wittlin et al. [6]. Both these papers showed promising results with the use of Eulerian-based finite element techniques to model the water. Additionally, combined horizontal and forward velocity components were assigned to the fuselages with success. It must be noted that the fuselages were modelled as rigid bodies; therefore the effect of structural failure on analyses could not be investigated. Fasanella et al. [7] performed drop tests of a composite energy absorbing fuselage section into water using Eulerian, Arbitrary Lagrange Eulerian (ALE) and Smooth Particle Hydrodynamics (SPH) meshless Lagrangian-based finite element techniques to represent water. Successful correlation between experimental and numerical data was achieved; however, structural failure could not be modelled with the Eulerian-based finite element technique due to analysis code limitations at the time. A “building block” approach was used in this study to develop accurate numerical modelling techniques prior to the implementation of the full-scale crash analyses. Once the blocks produced satisfactory results in themselves, they were then integrated in order to achieve the abovementioned primary aim of this study. The sub-components (or blocks) were the occupant (viz, FEA of the human bodies’ response to impact), (FEA of) soft soil impact and (FEA of) fibre-reinforced plastic composite structures. 
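The injury criteria and human tolerance to abrupt accelerations referred to above are typically quantified from the occupant's acceleration history. As one widely used example (not necessarily among the specific criteria evaluated in this thesis), the sketch below computes the Head Injury Criterion (HIC) from a resultant head acceleration trace; the crash pulse is hypothetical.

```python
import numpy as np

def hic(time_s, accel_g, max_window_s=0.036):
    """Head Injury Criterion from a resultant head acceleration trace (accel in g,
    time in seconds): HIC = max over windows of (t2 - t1) * (mean accel)^2.5."""
    # Cumulative trapezoidal integral of acceleration for fast window averages
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (accel_g[1:] + accel_g[:-1]) * np.diff(time_s))))
    best = 0.0
    n = len(time_s)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = time_s[j] - time_s[i]
            if dt > max_window_s:
                break
            avg = (cum[j] - cum[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Hypothetical half-sine crash pulse: 60 g peak over 20 ms, sampled at 10 kHz
t = np.arange(0.0, 0.05, 1e-4)
a = np.where(t < 0.02, 60.0 * np.sin(np.pi * t / 0.02), 0.0)
print(f"HIC36 ~ {hic(t, a):.0f}")   # compared against a survivability limit such as HIC < 1000
```

Such criteria are evaluated within the occupant sub-component of the building-block approach described above.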
This approach is intuitive and provides key understanding of how each sub-component contributes to the full-scale crash analyses. Published literature was reviewed, where possible, as a basis for the development and validation of the techniques employed for each sub-component. The technique required to examine the dynamic response of an occupant with MSC Dytran, integrated with the ATB model, was demonstrated through the analysis of a sled test. The numerical results were found to be comparable to experimental results found in the literature. An Eulerian-based finite element technique was implemented for soft soil impact analyses, and its effectiveness was determined through correlation of experimental penetrometer drop test results found in the literature. An investigation into the performance of the Tsai-Wu failure criterion to capture the onset and progression of failure through the layers of fibre reinforced composite laminates was conducted for an impulsively loaded unidirectional laminate strip model. Based on the results obtained, the techniques implemented for each sub-component were deemed valid for crashworthiness applications (viz. to achieve the project aim). Full-scale crash analyses of impacts into rigid and soft soil terrains with varying aircraft impact and pitch angles were investigated. Typical limitations encountered in previously published works were overcome with the techniques presented in this study. The aircrafts’ laminate layup schedule was explicitly defined in MSC Dytran, thereby eliminating the inherent inaccuracies of using isotropic models to approximate laminated composite materials. The aircraft was assigned both horizontal and vertical velocity components instead of only a vertical component, which increased the model accuracy. Numerical instabilities, due to element distortion of the terrain when using a Lagrangian approach, were eliminated with the use of an Eulerian soft soil model (Eulerian techniques are typically used to model fluids where large deformations occur, which is a characteristic of crash analyses). Structural failure was successfully implemented by coupling Lagrangian and Eulerian solvers. The ATB model allowed for the real-time interactions between the occupant and aircraft to be investigated, unlike previously where analyses of the occupant were performed in separate local models using accelerations derived from the global analyses results. The results obtained from the crash analyses provide an indication of the forces transmitted to the occupant through the seat and restraint system, and the aircraft’s ability to provide a survivable volume throughout the crash event. The explicit nonlinear dynamic finite element based methodology was successfully implemented for investigating the crashworthiness of small lightweight composite aircraft, satisfying the primary aim of this study. Chapter 1 provides a review of fibre reinforced composite materials, the finite element method (FEM), ATDs and associated analysis codes, human tolerance limits to abrupt (de)accelerations, and crash dynamics and environment. The review of the FEM initially focuses on the fundamentals of FEA and then on the features specific to MSC Dytran as it is used throughout this study. Chapter 2 discusses the development of suitable numerical modelling techniques at the sub-component level and the implementation of these techniques within the full-scale crash analyses. 
Chapter 3 presents and discusses the full-scale crash analyses results for three impacts into rigid terrain and three impacts into soft soil terrain with varying aircraft pitch and impact angles. The results obtained from the crash analyses provide an indication of the forces transmitted to the occupant through the seat and restraint system, and the aircraft's ability to provide a survivable volume throughout the crash event. Chapter 4 provides a conclusion of the work performed in this study and highlights various areas for future work.

Item: Development of a hybrid fuzzy-mathematical cleaner production evaluation tool for surface finishing (2007). Telukdarie, Arnesh.
The metal finishing industry has been rated among the most polluting industries worldwide. This industry has traditionally been responsible for the release of heavy metals such as chrome, nickel, tin, copper, etc. into the environment. The application of cleaner production systems to a range of industries, including the metal finishing industry, has provided significant financial and environmental benefits. An example of a successful application of cleaner production in the metal finishing industry is the reduction in typical water consumption from 400 l/m² to less than 10 l/m² of plated product. The successful application of cleaner production to the metal finishing industry has encountered many barriers. These barriers include the need for a highly skilled cleaner production auditor and the need for rigorous plant data to effectively quantify the cleaner production potential of the company under consideration. This study focuses on providing an alternative, user-friendly audit system for the implementation of cleaner production in the metal finishing industry. The audit system proposed eliminates the need for both a technical auditor and rigid plant data. The proposed system functions solely on plant operator inputs. The operator's knowledge is harnessed and used to conduct an efficient and effective cleaner production audit. The research is based on expert knowledge, which was gained by conducting audits on some 25 companies using traditional auditing tools. These company audits were used to construct a database that was used in the verification of the models developed in this study. The audit is separated into different focus components. The first system developed was based on fuzzy logic multi-variable decision-making. For this system the plant was categorized into different sections and appropriate fuzzy ratings were allocated based on experience. Once the allocations were completed, multi-variable decision analysis was used to determine the impact of the individual variables. The output was compared and regressed to the database equivalent. Operator inputs can then be used to determine the individual category outputs for the cleaner production rating of the company under consideration. The second part of this study entails the development of mathematical models for the quantification of chemical and water consumption. This was based on the present and ideal (cleaner production) plant configurations. Cleaner production operations are compared to present operations and potential savings are quantified. Mathematical models were developed based on pilot-scale experiments for the acid, degreaser and zinc plating processes. The pilot experiments were carried out on a PLC-controlled pilot plant. These models were developed from factorial experimentation on the variables of each of the plating processes.
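The factorial experimentation mentioned above can be summarised by fitting a simple linear model with an interaction term to a two-level, two-factor design. The sketch below is illustrative only: the factors, levels and responses are hypothetical and are not the thesis's pilot-plant data.

```python
import numpy as np

# Hypothetical 2^2 factorial design for a plating bath, in coded factors:
# x1 = bath temperature (-1 low, +1 high), x2 = dwell time (-1 low, +1 high)
x1 = np.array([-1, +1, -1, +1], float)
x2 = np.array([-1, -1, +1, +1], float)
y  = np.array([3.2, 4.1, 3.8, 5.6])   # response, e.g. chemical consumption (illustrative units)

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2, solved exactly for a full factorial
X = np.column_stack([np.ones(4), x1, x2, x1 * x2])
b = np.linalg.solve(X, y)
print(f"intercept={b[0]:.2f}, effect(x1)={b[1]:.2f}, "
      f"effect(x2)={b[2]:.2f}, interaction={b[3]:.2f}")

# Predict consumption at an intermediate operating point (x1 = 0.5, x2 = -0.5 in coded units)
print("prediction:", b @ np.array([1.0, 0.5, -0.5, 0.5 * -0.5]))
```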
The models developed aid in the prediction of the relevant optimum consumptions. The key challenge in traditional evaluation systems has been the quantification of plant production. The most effective measure of production is the surface area plated. In this study, a novel approach using the modelled acid consumption is proposed. It was assumed that the operator inputs for the above models would not be precise; the models developed therefore allowed for input variations. These variations were incorporated into the model using the Monte Carlo technique. The entire cleaner production evaluation system proposed is based on an operator questionnaire, which is completed in Visual Basic. The mathematical model was incorporated into the Visual Basic model. For the purpose of model verification, the mathematical models were programmed and tested using the engineering mathematical software MATLAB. The combined fuzzy logic and mathematical models prove to be a highly effective means of completing the cleaner production evaluation in minimal time and with minimal resources. A comparative case study was conducted at a local metal finishing company. The case study compares the input requirements and outputs of the traditional systems with those of the system proposed in this study. The traditional model requires 245 inputs, whilst the model proposed in this study is based on 56 inputs. The data requirements for the model proposed in this study are obtained from a plant operator in less than one hour, whilst previous models required high-level expertise over a period of up to two weeks. The quality of outputs from the proposed model is found to be very comparable to that of previous models. The model is in fact found to be superior to previous models with regard to predicting operational variations, water usage, chemical usage and bath chemical evolution. The research has highlighted the potential to apply fuzzy-mathematical hybrid systems for cleaner production evaluation. The two limitations of the research were found to be the use of a linear experimental design for model development and the availability of MATLAB software for future application. These issues can be addressed as future work. It is recommended that a non-linear model be developed for the individual processes so as to obtain more detailed process models.

Item: Development of a multi-criteria decision-support tool for improving water quality to assist with engineering infrastructure and catchment management (2024-05). Ngubane, Zesizwe; Sokolova, Ekaterina; Stenström, Thor-Axel; Dzwairo, Bloodless.
Research combining water quality modelling, quantitative chemical/microbial risk assessment and stakeholder engagement to prioritise catchment areas facing water pollution problems and to devise effective pollution mitigation strategies is limited. This research therefore aimed to address this gap by providing a practical and comprehensive framework that supports well-informed decision-making processes in water pollution alleviation. By integrating multiple criteria and catchment aspects, this framework can assist infrastructure, operational and ecological managers within a catchment in prioritising best management practices (BMPs) to reduce pollution and mitigate potential resultant impacts. Given this context, the uMsunduzi catchment in KwaZulu-Natal, South Africa, was chosen as the study site. The uMsunduzi River is a major tributary of the uMngeni River, which is used for water supply to the cities of Pietermaritzburg and Durban.
The study begins with the synthesis of data from diverse scientific sources to identify chemical and microbial hazards, utilising a water quality modelling tool to map point and non-point source pollution in the catchment. The assessment encompasses the presence of pathogens such as Cryptosporidium and Escherichia coli (E. coli) in the catchment, with rural areas showing a greater contribution from animal sources, while urban areas are affected by impaired wastewater infrastructure. Quantitative microbial risk assessment (QMRA) was conducted, assuming no water treatment within the catchment. The investigation considered multiple exposure routes, including domestic drinking and recreational activities, for both adults and children. The results indicate that the probability of infection from Cryptosporidium and E. coli exceeds acceptable levels set by South African water quality guidelines and the World Health Organization. The assessment further included a chemical risk assessment of various chemical groups, including organochlorinated pesticides (OCPs), pharmaceuticals and personal care products (PPCPs), heavy metals, nitrates, and phosphates. Elevated carcinogenic risks were observed for most OCPs, while non-carcinogenic pesticide effects pose long-term risks. Heavy metals and PPCPs are within sub-risk levels, but phosphates have notable ecological and health impacts, particularly in Inanda Dam, a key source of potable water for Durban. In this study, a unique contribution is made by incorporating both chemical and microbial risk assessment. Furthermore, the risk assessment methodology not only encompasses various chemical pollutants and exposure pathways but also addresses the nuanced issue of water consumption variability between children and adults. To address the identified risks, a multi-criteria decision analysis methodology is employed to engage stakeholders in the risk management process. Affected, involved, and interested stakeholders, along with economic, environmental, and social criteria, contribute to the selection of BMPs. The Simple Multi-Attribute Rating Technique for Enhanced Stakeholder Take-up (SMARTEST) is utilised to identify suitable interventions. The study culminates in the recommendation of BMPs that aim to change behaviour, including public education on livestock grazing management, safe medication disposal, and responsible fertiliser and pesticide use. Pollution management measures, such as solid waste control and river cleanup, are suggested, along with infrastructure management improvements, such as sewer system maintenance. This research strived to bridge the gap in water pollution alleviation by presenting a practical and comprehensive framework designed to support well-informed decision-making processes. This framework, with its integration of multiple criteria and considerations, stands poised to aid infrastructure, operational, and ecological managers within a catchment in prioritising BMPs aimed at reducing pollution and mitigating resultant health impacts.Item Development of multi-objective optimization model for project portfolio selection using a hybrid method(2024-05) Mogbojuri, Akinlo Olorunju; Olanrewaju, Oludolapo AkanniSelecting inappropriate projects and project portfolios can result in irreversibly wasted economic opportunities, reduced manpower value, and missed prospects, as well as the loss of other resources for the organization.
As a result, to achieve the best possible outcome, all criteria that enable the best possible choices should be considered. Choosing projects wisely and managing the project portfolio can assist organizations in gaining a better understanding of their projects, together with their risks and advantages. When faced with budget and other constraints, the ability to select an optimal mix of projects is a significant advantage in the project selection process. Project selection by means of an effective method is uncommon, because many methods are deemed ineffective due to limitations on the number of projects that can be chosen, along with their failure to select economical projects. Project selection is a complex, multi-criteria decision-making procedure involving numerous and frequently competing goals. The complexity of project selection problems stems primarily from the large number of candidate projects from which an appropriate collection of investment projects must be selected. The study identified several research gaps, such as limited studies on social sustainability benefits, criteria for public project selection not being considered or mentioned, and the decision-making committee or expert assigning weights to the deviational variables instead of using weighting techniques. The aim of this study is to employ an integrated approach to establish a multi-objective optimization model for public project portfolio selection. The specific research objectives are to develop an integrated method of Analytic Hierarchy Process, Goal Programming and Genetic Algorithm (AHP-GP-GA), to establish a relationship between the developed models to correct the bias of each model, and to apply the integrated method in a selected community with a set of projects. Data were collected by compiling a well-structured questionnaire for decision-makers and were analysed by applying the AHP and GP methods. The integrated approach combines an exact decision-support tool with meta-heuristic modelling, namely the Analytic Hierarchy Process, Goal Programming and Genetic Algorithm (AHP-GP-GA), for solving public project portfolio selection problems. The Analytic Hierarchy Process model was used to develop project selection criteria, assign relative priority weights of decision-makers, and determine the overall weight of project alternatives. The GP constructed the mathematical model to handle large numbers of objectives and constraints. The GA is the solution algorithm that makes the optimization model effective and flexible in producing optimal solutions. The AHP and GA employed the Spice Logic and MATLAB software packages to analyse, validate and enhance the research. The AHP model highlighted sub-criteria and project criteria attributes that are significant to project selection. These criteria are economic development, job creation, community acceptance, structure aligned with company goals, employment record of the project manager, locality of the project, finish period of the selected project, project threats and political impact. Meanwhile, empirical research on public agencies was undertaken with the AHP-GP-GA, AHP-GP and GP separately to address the problem. The GP and AHP-GP used the LINGO 18.0 software package, while the developed integrated AHP-GP-GA method was solved using the MATLAB software package to exhibit the competence of the model and the research.
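As a concrete illustration of the AHP step in this pipeline, the sketch below derives priority weights from a small pairwise comparison matrix via the principal eigenvector and checks the consistency ratio. The matrix entries and criteria labels are hypothetical; in an AHP-GP-GA scheme of the kind described above, such weights would typically be attached to the deviational variables of the goal-programming model before the genetic algorithm searches for a portfolio.

# Illustrative AHP step (not the thesis implementation): derive priority weights
# from a hypothetical 3x3 pairwise comparison matrix via the principal eigenvector,
# and check consistency (CR < 0.1 is the usual acceptance threshold).
import numpy as np

A = np.array([[1.0, 3.0, 5.0],   # e.g. economic development vs job creation vs political impact
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = 0.58                              # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))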
The high point of the empirical research was that the AHP-GP-GA model can solve large-scale or complex problems with a large number of decision variables. It selected more projects than the standalone AHP-GP and GP models and provided better solutions, which makes the approach robust and flexible for solving decision-making problems. The theoretical and practical contributions of the study are that the research will improve the knowledge and understanding of researchers and academia in public project portfolio selection and add to the literature, enhancing existing integrated approaches. Stakeholders in project management practice, such as organizational management, top executives, senior and junior supervisors, and personnel connected to the projects, will also benefit from the research in selecting optimal projects from the various solution options, saving costs, and learning how to handle and select more complex projects in large-scale, real-life situations. This study recommends further research on the integration of stochastic models, evolutionary algorithms or other computational techniques with AHP and GP for the Public Project Portfolio Selection Problem.Item The development of SCADA control and remote access for the Indlebe Radio Telescope(2016) Dhaniram, Ajith Deoduth; Janse van vuuren, Gary Peter; Govender, P.The proposed supervisory control and data acquisition solution is intended to gather data from all sub-systems and provide control commands related to the Indlebe Radio Telescope. Currently the control commands are executed from the command-line prompt of the Skypipe software. These control commands are used to change the elevation angle of the antenna. The supervisory control and data acquisition system will be interfaced to sub-systems, namely a programmable logic controller, a weather station, an uninterruptible power supply and a camera. It will be used to manually or automatically control the elevation angle of the antenna; it includes a menu structure that allows for easy navigation to the sub-systems and allows for trending, alarming, logging and monitoring of all system parameters. The proposed system will mitigate the lack of information on the existing system. A Global System for Mobile Communications (GSM) unit has also been installed to monitor the temperature within the Indlebe control room, detect a power failure and communicate this information to supervisors using its short message service option. Implementing a solution of this nature means that all data from the various sub-systems are brought together, giving a single platform to monitor data and provide manual and automatic control functionality. Problem solving, understanding and maintenance of the system will also become easier.Item Diesel engine performance modelling using neural networks(2005) Rawlins, Mark SteveThe aim of this study is to develop, using neural networks, a model to aid the performance monitoring of operational diesel engines in industrial settings. Feed-forward and modular neural network-based models are created for the prediction of the specific fuel consumption of any normally aspirated, direct-injection, four-stroke diesel engine. The predictive capability of each model is compared to that of a published quadratic method. Since engine performance maps are difficult and time-consuming to develop, there is a general scarcity of these maps, thereby limiting the effectiveness of any engine monitoring program that aims to manage the fuel consumption of an operational engine.
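A minimal sketch of the kind of feed-forward predictor described here is given below, assuming the seven inputs detailed in the continuation of this abstract (engine speed, torque, rated power, rated and minimum specific fuel consumption, bore and stroke). The training data are synthetic placeholders and the single-hidden-layer architecture is illustrative; the thesis tuned its network architecture with a genetic algorithm and also evaluated a modular variant.

# Sketch of a feed-forward predictor of the kind described: seven inputs mapped to
# specific fuel consumption. All data here are synthetic; in practice the target
# values come from measured engine performance maps.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
# columns: speed [rpm], torque [Nm], rated power [kW], rated SFC [g/kWh],
#          minimum SFC [g/kWh], bore [mm], stroke [mm]
X = np.column_stack([
    rng.uniform(800, 2400, n),
    rng.uniform(100, 900, n),
    rng.uniform(50, 250, n),
    rng.uniform(200, 230, n),
    rng.uniform(190, 215, n),
    rng.uniform(90, 140, n),
    rng.uniform(100, 160, n),
])
# Placeholder target shaped like a typical fuel map (purely for demonstration).
y = X[:, 4] + 0.00002 * (X[:, 0] - 1500) ** 2 + 0.01 * np.abs(X[:, 1] - 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
model.fit(X, y)
print("predicted SFC [g/kWh]:", model.predict(X[:1]))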
Current methods applied to engine consumption prediction are either too complex or fail to account for specific engine characteristics that could make engine fuel consumption monitoring simple and general in application. This study addresses these issues by providing a neural network-based predictive model that requires two measured operational parameters, the engine speed and torque, and five known engine parameters. The five parameters are: rated power, rated and minimum specific fuel consumption, bore and stroke. The neural networks are trained using the performance maps of eight commercially available diesel engines, with one entire map held out of sample for assessment of model generalisation performance and application validation. The model inputs are defined using the domain expertise approach to neural network input specification. This approach requires a thorough review of the operational and design parameters affecting engine fuel consumption performance and the development of specific parameters that both scale and normalize engine performance for comparative purposes. Network architecture and learning rate parameters are optimized using a genetic algorithm-based global search method together with a locally adaptive learning algorithm for weight optimization. Network training errors are statistically verified and the neural network test responses are validation tested using both white-box and black-box validation principles. The validation tests are constructed to enable assessment of the confidence that can be associated with the model for its intended purpose. Comparison of the modular network with the feed-forward network indicates that they learn the underlying function differently, with the modular network displaying improved generalisation on the test data set. Both networks demonstrate improved predictive performance over the published quadratic method. The modular network is the only model accepted as verified and validated for application implementation. The significance of this work is that fuel consumption monitoring can be effectively applied to operational diesel engines using a neural network-based model, the consequence of which is improved long-term energy efficiency. Further, a methodology is demonstrated for the development and validation testing of modular neural networks for diesel engine performance prediction.Item Distributed generation optimization in future smart grids(2022-09-29) Chidzonga, Richard Foya; Nleya, BakheEver-surging global power (energy) demands, coupled with the need to supply it in a reliable and efficient manner, have led to the modernization of legacy and current power system grids into Smart Grid (SG) equivalents. This is mostly achieved by blending the existing systems with an information subsystem that facilitates duplex communication, i.e., electrical power flows towards the end users while information characterising the grid's performance is relayed, mostly in the reverse direction. Thus, the information subsystem interconnects the core entities, such as generation, distribution, transmission, and end-user terminals, allowing them to interact in real time and, in the process, achieving a reliable, robust and efficiently managed SG power system. As such, in the emerging distributed power systems of the future, Demand Side Management (DSM) will play an important role in dealing with stochastic renewable power sources and loads.
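A toy illustration of the peak-to-average ratio (PAR) and load-factor metrics that the DSM discussion below targets is sketched here, using an invented 24-hour load profile and a naive shift of part of the evening peak into the early-morning valley; it is not drawn from the thesis.

# Toy DSM metrics: peak-to-average ratio and load factor for a hypothetical
# 24-hour load profile, before and after shifting part of the evening peak.
import numpy as np

load = np.array([3, 3, 3, 3, 4, 5, 7, 9, 8, 7, 7, 8,
                 8, 7, 7, 8, 10, 12, 13, 12, 9, 6, 4, 3], dtype=float)  # kW per hour

def par(profile):          # peak-to-average ratio
    return profile.max() / profile.mean()

def load_factor(profile):  # inverse of PAR; unity means a perfectly flat profile
    return profile.mean() / profile.max()

shifted = load.copy()
shifted[17:20] -= 2.0      # curtail part of the evening peak (e.g. deferrable loads)
shifted[1:4] += 2.0        # serve that energy in the early-morning valley instead

print(f"PAR before {par(load):.2f}, after {par(shifted):.2f}; "
      f"load factor before {load_factor(load):.2f}, after {load_factor(shifted):.2f}")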
A near-unity load factor can be secured by employing Demand Response (DR) methods with storage systems as well as regulatory control mechanisms. Increasing deployment of renewable energy generation and other forms of unconventional loads, such as plug-in electric vehicles, will aid DR implementation, with attendant better results for both prosumers and the utilities. The central objective of DSM is to minimize the peak-to-average ratio (PAR) and energy costs by switching to cheaper renewable energy sources (RES), as well as to reduce CO2 emissions. This work focused on emergent techniques and microgrid optimization, with special attention to load scheduling. Techniques for DSM, mathematical models of DSM, and optimization methods have been reviewed. State-of-the-art methodologies entering the DSM mainstream are data science, advanced metering infrastructure, and blockchain technologies. An improved atom search optimization technique is applied for DSM to substantially reduce power and energy costs in typical standalone or grid-tied microgrids. Further, the day-ahead dispatch problem of microgrids (MGs) with DEGs subject to a non-convex cost function is solved and simulated using quadratic particle swarm optimization. In the latter case, the objective function includes the DEGs' 'valve-point' loading effect in the 'fuel-cost' curve. The impact of DSM on convex and non-convex energy management problems with different load participation levels is investigated. Ultimately, it is demonstrated that the quadratic particle swarm optimization algorithm efficiently solves the non-convex energy management system (EMS) problem. In addition, we propose a hierarchical optimal dispatch framework that relies on several objectives to achieve the overall design goal of a reliable and stable power supply, coupled with economic benefits to prosumers who elect to participate in power trading. Evaluation of the proposed framework is carried out analytically and by way of simulation. Overall, it is deduced from the obtained analytical and simulation results that the combination of appropriately sized battery energy storage systems (BESS) and renewable-type generators such as PVs and WTs will help achieve a stable and reliable power supply to all users in the SG (or MG) while at the same time affording resilience. Finally, in our closing chapter, we also spell out possible future research directions.Item Energy assessment and scheduling for energy optimisation of a hot dip galvanising process(2021-12-01) Dewa, Mendon; Nleya, Bakhe; Dzwairo, BloodlessThe dearth of energy sustainability is posing major challenges both locally and globally. Galvanising furnaces are categorised as dominant consumers of electricity in the overall galvanising industry. Relatively little research has been carried out concerning energy optimisation through sequencing or scheduling algorithms as a way of enhancing the performance of galvanising lines. In this regard, the research centres on evaluating overall energy performance in this industry. The research sought to introduce an optimal energy optimisation-scheduling algorithm for a hot dip galvanising process. A DMAIC-based methodology was presented for the provisioning of a structured problem-solving process for improving energy efficiency in a galvanising process. Its framework embraces an energy sustainability assessment of four batch hot-dip galvanising plants.
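The regression baselining and cumulative-sum tracking that the continuation of this abstract describes can be sketched as follows; the drivers (zinc used and ambient temperature), coefficients and monthly data are synthetic assumptions rather than figures from the plants studied.

# Sketch of a regression energy baseline with CUSUM tracking, with synthetic data.
import numpy as np

rng = np.random.default_rng(2)
months = 24
zinc = rng.uniform(20, 60, months)          # tonnes of zinc used per month
ambient = rng.uniform(12, 30, months)       # mean ambient temperature, deg C
actual = 150 + 9.0 * zinc - 2.0 * ambient + rng.normal(0, 15, months)  # MWh

# Fit expected consumption = b0 + b1*zinc + b2*ambient by ordinary least squares.
X = np.column_stack([np.ones(months), zinc, ambient])
coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
expected = X @ coef

cusum = np.cumsum(actual - expected)        # drifts upward if performance worsens
sec = actual / zinc                         # specific energy consumption, MWh per tonne
print("baseline coefficients:", np.round(coef, 2))
print("final CUSUM [MWh]:", round(cusum[-1], 1), "mean SEC:", round(sec.mean(), 2))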
Four energy minimisation opportunities were identified, and quantifiable energy and cost savings, as well as avoided carbon dioxide emissions, were derived from the analysis of one of the plants. Production, or zinc used, was identified as the main driver of electricity consumption for Plant 1, while the number of dips per month, the amount of zinc used, and ambient temperature conditions were identified as the relevant variables for developing a regression model for Plant 2. The amount of zinc used and ambient temperature conditions were found to be the relevant variables for Plant 3. The derived regression model for Plant 4 was based on the amount of zinc used and ambient temperature conditions. The energy performance indicators for a galvanising plant were established through a comparison of actual and expected consumption, the energy intensity index, the cumulative sum, and specific energy consumption. A bi-objective GECOS algorithm was further introduced to reduce both the total energy consumption and the makespan. The simulation results revealed that the GECOS algorithm outperforms McNaughton's algorithm, the Shortest Processing Time algorithm, and Integer Linear Programming algorithms in minimising makespan on parallel processing machines. The key contributions to the body of knowledge from the study include a unique evaluation of electrical energy consumption by a hot-dip galvanising plant, the development of an energy consumption baseline and performance indices, and the novel bi-objective GECOS algorithm that considers reducing both the total energy consumption of the process tanks and the makespan. Future research work may focus on hybrid genetic algorithm-artificial immune system scheduling tools that would derive synergy from the advantages of both algorithms to improve energy performance.Item Energy-efficient PLIA-RWA algorithms for transparent optical networks(2017) Mutsvangwa, Andrew; Nleya, BakheThe tremendous growth in the volume of telecommunication traffic has undoubtedly triggered an unprecedented information revolution. The emergence of high-speed and bandwidth-hungry applications and services such as high-definition television (HDTV), the internet and online interactive media has forced the telecommunication industry to come up with ingenious and innovative ideas to match these challenges. With the coming of age of purposeful advances in Wavelength Division Multiplexing (WDM) technology, it is practically possible to deploy ultra-high-speed all-optical networks to meet the ever-increasing demand for modern telecommunication services. All-optical networks are capable of transmitting data signals entirely in the optical domain from source to destination, and thus eliminate the often bulky and energy-intensive optical-to-electrical-to-optical (OEO) converters at intermediate nodes. Predictably, all-optical networks consume appreciably less energy than their opaque and translucent counterparts. This low energy consumption results in a lower carbon footprint for these networks, and thus a significant reduction in greenhouse gas (GHG) emissions. In addition, transparent optical networks bring additional favourable rewards such as high bit-rates and overall protocol transparency. Bearing in mind the aforementioned benefits of transparent optical networks, it is vital to point out that significant setbacks accompany these otherwise glamorous rewards.
Since OEO conversions are eliminated at intermediate nodes in all-optical networks, the quality of the transmitted signal from source to destination may be severely degraded, mainly due to the cumulative effect of physical-layer impairments induced by the passage through the optical fibres and associated network components. It is therefore essential to come up with routing schemes that effectively take into consideration the signal-degrading effects of physical-layer impairments so as to safeguard the integrity and health of transmitted signals, and eventually lower blocking probabilities. Furthermore, innovative approaches need to be put in place so as to strike a delicate balance between reduced energy consumption in transparent networks and the quality of transmitted signals. In addition, the incorporation of renewable energy sources in the powering of network devices appears to be gaining prominence in the design and operation of next-generation optical networks. The work presented in this dissertation broadly focuses on physical-layer impairment aware routing and wavelength assignment (PLIA-RWA) algorithms that attempt to: (i) achieve a sufficiently high quality of transmission by lowering the blocking probability, and (ii) reduce the energy consumption in optical networks. The key contributions of this study may be summarized as follows: (i) the design and development of a Q-factor estimation tool; (ii) the formulation, evaluation and validation of a QoT-based analytical model that computes blocking probabilities; (iii) the proposal and development of IA-RWA algorithms and their comparison with established ones; and (iv) the design and development of energy-efficient RWA schemes for dynamic optical networks.Item Evaluating the model of multi : Global Navigation Satellite Systems (GNSS) constellation to mitigate the multipath signals(2023-05) Madonsela, Bhekinkosi Pheneas; Davidson, Innocent Ewaen; Mukubwa, E.; Moloi, K.The Global Navigation Satellite Systems (GNSS) are evolving continuously and are being used in many applications across the world. The GNSS is used in electrical industries, the banking sector, the agricultural sector, the transportation and logistics sectors, etc. The architecture and operation of the GNSS comprise three segments: the space segment, the control segment, and the user segment. The space segment consists of the satellite constellation that generates and emits the GNSS code phase and carrier phase signals. The space segment further stores and broadcasts the navigation data that is uploaded to the system by the GNSS controllers. For accurate Position, Velocity and Time (PVT), the constellation must have at least four satellites visible to the GNSS receiver. The control segment, also known as the ground segment, is responsible for the complete operation of the GNSS. The ground segment further controls and preserves the configuration files of the satellite constellation, updates the navigation data to all satellites, controls the atomic clocks of the GNSS, and predicts the satellite ephemerides. The user segment is made up of GNSS receivers; their purpose is to receive the GNSS signal, which contains the code phase and carrier phase, in order to determine the pseudorange and other observables. There are numerous issues that obstruct the application of the GNSS across all the sectors mentioned above, namely signal attenuation in the satellite channels, signal diffraction, and signal multipath.
Hence, this thesis focused on mitigating the GNSS multipath signal by investigating the concept of Combined Signal Detection (CSD). The purpose was to reduce the impact of signal degradation and to enable GNSS receivers to withstand signal degradation in deep rural areas. There are numerous existing methodologies to mitigate multipath signals and improve positioning, velocity and timing in urban areas. However, the proposed CSD approach provided better performance by using vector detection of all visible satellites to improve Direct Positioning (DP), High Sensitivity (HS) and clock bias estimation. Furthermore, the capabilities of the Global Positioning System (GPS) and Galileo satellites are integrated to accommodate the adoption of the CSD concept. The CSD concept requires a GNSS receiver that is capable of processing multiple frequencies. A multi-constellation GNSS receiver uses the numerous satellites in space, such as GPS satellites, BeiDou satellites, Galileo satellites, and GLONASS satellites. However, the similarities between the constellations are investigated before the system is integrated for multi-constellation operation. The concept of CSD proved capable of mitigating signal multipath without introducing an external device or circuit. This thesis further provides a comprehensive analysis of the sources that contribute to signal degradation.
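To make the multi-constellation idea concrete, the sketch below shows a standard single-epoch least-squares position and clock-bias solution from pseudoranges of satellites drawn from more than one constellation. The geometry and measurements are synthetic, a single shared receiver clock bias is assumed, and inter-system time offsets, atmospheric delays and multipath error modelling are ignored; it illustrates the redundancy gained by combining GPS and Galileo observations rather than the thesis's CSD algorithm.

# Simplified single-epoch position fix from multi-constellation pseudoranges
# (synthetic geometry, shared receiver clock bias, no inter-system time offset,
# ionosphere, troposphere or multipath terms).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Iterative least squares for receiver position (m) and clock bias (s)."""
    x = np.zeros(3)
    dt = 0.0
    for _ in range(iters):
        rho_hat = np.linalg.norm(sat_pos - x, axis=1) + C * dt
        resid = pseudoranges - rho_hat
        unit = (sat_pos - x) / np.linalg.norm(sat_pos - x, axis=1, keepdims=True)
        G = np.hstack([-unit, np.ones((len(sat_pos), 1))])   # geometry matrix
        dx, *_ = np.linalg.lstsq(G, resid, rcond=None)
        x += dx[:3]
        dt += dx[3] / C
    return x, dt

rng = np.random.default_rng(3)
truth = np.array([5_000_000.0, 3_000_000.0, 2_000_000.0])
true_dt = 1e-4
# Eight satellites, e.g. four GPS and four Galileo, at roughly MEO distances.
sats = truth + rng.uniform(-1, 1, (8, 3)) * 1.5e7 + np.array([0, 0, 2.0e7])
ranges = np.linalg.norm(sats - truth, axis=1) + C * true_dt + rng.normal(0, 2.0, 8)

est_pos, est_dt = solve_position(sats, ranges)
print("position error [m]:", np.round(est_pos - truth, 2),
      "clock bias error [ns]:", round((est_dt - true_dt) * 1e9, 2))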