In an article published in the journal Scientific Reports, researchers proposed a new metaheuristic algorithm called hippopotamus optimization (HO). This technique can solve optimization problems across various domains including science, engineering, and technology. Moreover, it was tested on several benchmark functions and engineering problems and showed superior performance compared to other well-known metaheuristic algorithms.
Background
Metaheuristic optimization algorithms are stochastic methods that can solve complex, nonlinear, and high-dimensional problems by mimicking natural phenomena such as physical laws, biological evolution, animal behavior, and human intelligence. These algorithms are widely used in science, engineering, and technology. One of the challenges in designing metaheuristic algorithms is to balance exploration and exploitation, which are two essential aspects of the search process.
Exploration refers to the global search for promising regions of the search space, while exploitation refers to the local search that refines solutions within a region. A good optimization algorithm balances the two, avoiding premature convergence to local optima while maintaining population diversity and still converging to high-quality solutions.
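To make the distinction concrete, the minimal Python sketch below (not taken from the paper) contrasts an exploratory move, which samples anywhere in the search space, with an exploitative move, which locally perturbs the current best solution; the bounds, dimensionality, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
lower, upper = -5.0, 5.0                      # hypothetical search bounds
best = rng.uniform(lower, upper, size=10)     # current best candidate (10-D)

# Exploration: sample a completely new point anywhere in the search space,
# which helps discover promising regions far from the current best.
explore_move = rng.uniform(lower, upper, size=best.shape)

# Exploitation: take a small random step around the current best,
# which refines the solution within its local region.
step_size = 0.1
exploit_move = np.clip(best + step_size * rng.standard_normal(best.shape),
                       lower, upper)
```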
About the Research
In the present paper, the authors have designed an innovative swarm-based HO algorithm, taking inspiration from the behavior of hippopotamuses in their natural environment. Hippopotamuses are semi-aquatic mammals that live in groups known as pods or bloats, comprising several females, males, and young individuals. The dominant male hippopotamus assumes the role of leader within the herd, safeguarding the territory against predators and intruders.
The algorithm presented is structured around a three-phase model that closely mimics the position update, defense, and evasion strategies observed in hippopotamuses. The study identifies three primary behavioral patterns of hippopotamuses that hold potential for optimization purposes:
- Hippopotamus position update in the river or pond: This phase models the movement of hippopotamuses within the herd, influenced by the dominant hippopotamus and the mean position of several randomly selected herd members. It enhances the exploration and diversity of the algorithm.
- Hippopotamus defense against predators: This phase models the defensive behavior of hippopotamuses when they face a predator or an intruder, such as a lion or a hyena. The hippopotamuses turn toward the predator, open their powerful jaws, and emit loud vocalizations to scare it away. This phase improves the exploitation and convergence of the algorithm.
- Hippopotamus escape from the predator: This phase models the evasive behavior of hippopotamuses when they encounter a group of predators or fail to repel them defensively. The hippopotamuses flee and seek the nearest water body, such as a river or a pond. This phase helps the algorithm avoid getting trapped in local optima and maintains the balance between exploration and exploitation.
The researchers mathematically formulated the HO algorithm using a three-phase model that incorporates the above-mentioned behaviors. The algorithm uses a population of hippopotamuses as candidate solutions for the optimization problem and updates their positions according to the specified relationships in each phase. It stops when a predefined criterion is met, such as the maximum number of iterations or the desired accuracy.
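As a rough illustration of this structure, the Python sketch below organizes a population of candidate "hippopotamuses" into the three phases described above and runs until a fixed iteration budget is reached. The function names, parameter values, and phase update rules are simplified assumptions for demonstration; they are not the equations from the paper.

```python
import numpy as np

def sphere(x):
    """Hypothetical test objective: minimize the sum of squares."""
    return float(np.sum(x**2))

def hippo_optimize(obj, dim=10, pop_size=30, max_iters=200,
                   lower=-10.0, upper=10.0, seed=0):
    """Generic swarm loop reflecting the three phases described above.
    The update rules are illustrative placeholders, not the paper's
    exact equations."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fitness = np.array([obj(x) for x in pop])
    best_idx = fitness.argmin()
    best, best_f = pop[best_idx].copy(), fitness[best_idx]

    for t in range(max_iters):
        for i in range(pop_size):
            # Phase 1 (exploration): move toward the dominant (best-so-far)
            # hippo and the mean of a randomly chosen subgroup.
            group = pop[rng.choice(pop_size, size=5, replace=False)]
            cand = pop[i] \
                + rng.random() * (best - pop[i]) \
                + rng.random() * (group.mean(axis=0) - pop[i])

            # Phase 2 (exploitation): react to a "predator", here a random
            # point, with a step that shrinks as iterations progress.
            predator = rng.uniform(lower, upper, size=dim)
            cand += rng.random() * (cand - predator) / (t + 1)

            # Phase 3 (escape): small random move within a shrinking radius
            # to avoid stagnation in local optima.
            radius = (upper - lower) * (1 - t / max_iters)
            cand += rng.uniform(-radius, radius, size=dim) * 0.01

            cand = np.clip(cand, lower, upper)
            f = obj(cand)
            if f < fitness[i]:                 # greedy replacement
                pop[i], fitness[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f

    return best, best_f

best_x, best_f = hippo_optimize(sphere)
print(f"best objective value: {best_f:.6f}")
```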
Furthermore, the study evaluated the performance of the HO algorithm on a set of 161 standard benchmark functions of various types, including unimodal, multimodal (both fixed-dimensional and high-dimensional), and zigzag-pattern functions. The algorithm was also tested on the Congress on Evolutionary Computation (CEC) 2019 and CEC 2014 test suites, which are widely used for assessing the performance of optimization algorithms.
Moreover, it was compared with 12 well-known metaheuristic algorithms: the whale optimization algorithm (WOA), grey wolf optimizer (GWO), salp swarm algorithm (SSA), particle swarm optimization (PSO), sine cosine algorithm (SCA), firefly algorithm (FA), grasshopper optimization algorithm (GOA), teaching–learning-based optimization (TLBO), moth-flame optimization algorithm (MFO), invasive weed optimization algorithm (IWO), arithmetic optimization algorithm (AOA), and the covariance matrix adaptation evolution strategy (CMA-ES).
Research Findings
The outcomes showed that the HO algorithm achieved the best rank on 115 of the 161 benchmark functions and outperformed the other algorithms in terms of solution quality, convergence speed, and robustness; the authors also performed statistical analysis to confirm the significance of these results. They then applied the HO algorithm to four engineering design problems, namely the welded beam, pressure vessel, speed reducer, and spring design problems. These are constrained optimization problems that involve minimizing the cost or weight of a structure while satisfying a set of design constraints.
The study compared the HO algorithm with the same 12 algorithms used for the benchmark functions. The results showed that the HO algorithm obtained the best solutions for all four problems while satisfying the constraints and demonstrating high efficiency and feasibility.
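A common way to hand constrained design problems of this kind to a metaheuristic such as HO is to fold constraint violations into the objective as a penalty. The Python sketch below illustrates that general idea with a hypothetical objective and constraint; it does not reproduce any of the paper's four design problems or its actual constraint-handling scheme.

```python
def penalized(obj, constraints, penalty=1e6):
    """Wrap a constrained minimization problem as an unconstrained one:
    each violated constraint g(x) <= 0 adds a large penalty term."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return obj(x) + penalty * violation
    return wrapped

# Hypothetical example (not one of the paper's design problems):
weight = lambda x: 2.0 * x[0] * x[1]        # stand-in cost/weight objective
stress = lambda x: 1.0 - x[0] * x[1]**2     # illustrative constraint, g(x) <= 0

fitness = penalized(weight, [stress])
# `fitness` can now be minimized by any metaheuristic, for example the
# swarm-loop sketch shown earlier:
# hippo_optimize(fitness, dim=2, lower=0.1, upper=5.0)
```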
The algorithm could solve optimization problems in medical engineering, control and mechanical engineering, telecommunication engineering, energy engineering, civil engineering, economics, and chemical engineering. It could handle complex, nonlinear, and high-dimensional problems with ease and could balance exploration and exploitation effectively. Moreover, it could be extended or modified to suit different types of problems, such as multi-objective, discrete, or constrained optimization problems.
Conclusion
In summary, the novel algorithm proved effective and efficient for solving optimization problems. It mimicked the position update, defensive, and evasive strategies of hippopotamuses to balance the exploration and exploitation of the search space. Moreover, it showed superior performance compared to other well-known algorithms on various benchmark functions and engineering design problems.
The researchers acknowledged the algorithm's limitations and challenges and suggested directions for future work. They believe the HO algorithm could be further improved by incorporating adaptive parameters, hybridization techniques, or parallelization methods.