Introducing Hippopotamus Optimization for Superior Problem Solving

In an article published in the journal Scientific Reports, researchers proposed a new metaheuristic algorithm called hippopotamus optimization (HO). The technique is intended to solve optimization problems across domains such as science, engineering, and technology. Tested on a large set of benchmark functions and engineering design problems, it showed superior performance compared with other well-known metaheuristic algorithms.

Study: Introducing Hippopotamus Optimization for Superior Problem Solving. Image credit: BEST-BACKGROUNDS/Shutterstock

Background

Metaheuristic optimization algorithms are stochastic methods that can solve complex, nonlinear, and high-dimensional problems by mimicking natural phenomena such as physical laws, biological evolution, animal behavior, and human intelligence. These algorithms are widely used in science, engineering, and technology. One of the challenges in designing metaheuristic algorithms is to balance exploration and exploitation, which are two essential aspects of the search process.

Exploration refers to the global search for promising regions of the search space, while exploitation refers to the local search that refines solutions within a region. A good optimization algorithm balances the two, avoiding premature convergence to local optima while maintaining population diversity and steady progress toward the optimum.
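
The short Python sketch below illustrates the trade-off in the abstract: a single weight that decays over the iterations blends a global random move (exploration) with a small move around the best-known solution (exploitation). It is a generic illustration, not part of the HO algorithm itself, and the ranges and step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
max_iter, dim = 100, 5
x = rng.uniform(-5, 5, dim)        # current candidate solution (illustrative)
best = np.zeros(dim)               # best solution found so far (placeholder)

for t in range(max_iter):
    w = 1.0 - t / max_iter                               # exploration weight: high early, low late
    global_move = rng.uniform(-5, 5, dim)                # exploration: sample anywhere in the space
    local_move = best + 0.1 * rng.standard_normal(dim)   # exploitation: small perturbation of the best
    x = w * global_move + (1 - w) * local_move           # blend shifts from global to local over time
```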

About the Research

In the present paper, the authors have designed an innovative swarm-based HO algorithm, taking inspiration from the behavior of hippopotamuses in their natural environment. Hippopotamuses are semi-aquatic mammals that live in groups known as pods or bloats, comprising several females, males, and young individuals. The dominant male hippopotamus assumes the role of leader within the herd, safeguarding the territory against predators and intruders.

The algorithm presented is structured around a three-phase model that closely mimics the position update, defense, and evasion strategies observed in hippopotamuses. The study identifies three primary behavioral patterns that hold potential for optimization purposes (a simplified code sketch of the resulting update step follows the list):

  • Hippopotamus position update in the river or pond: This phase models the movement of hippopotamuses within the herd, influenced by the dominant hippopotamus and the mean position of a randomly selected subset of hippopotamuses. It enhances the exploration and diversity of the algorithm.
  • Hippopotamus defense against predators: This phase models the defensive behavior of hippopotamuses when they face a predator or an intruder, such as a lion or a hyena. The hippopotamuses turn towards the threat, open their powerful jaws, and emit loud vocalizations to scare it away. This phase improves the exploitation and convergence of the algorithm.
  • Hippopotamus escape from the predator: This phase models the evasive behavior of hippopotamuses when they encounter a group of predators or fail to repel an attacker through defensive behavior. The hippopotamuses flee and seek the nearest water body, such as a river or a pond. This phase prevents the algorithm from getting trapped in local optima and helps maintain the balance between exploration and exploitation.
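
To make the structure concrete, the Python sketch below arranges these three phases into a single position-update step. The sub-group size, the random "predator" placement, the shrinking escape radius, and all coefficients are illustrative assumptions for this article; the published paper defines its own mathematical relationships for each phase.

```python
import numpy as np

def ho_step(pop, fitness, objective, bounds, t, max_iter, rng):
    """One simplified iteration covering the three phases described above.
    All coefficients and random choices are illustrative stand-ins, not the
    exact update equations from the published paper."""
    lo, hi = bounds
    n, dim = pop.shape
    leader = pop[fitness.argmin()]                           # dominant hippopotamus (best solution)

    # Phase 1: move toward the leader and the mean of a random sub-group (exploration).
    group_mean = pop[rng.choice(n, size=max(2, n // 3), replace=False)].mean(axis=0)
    cand = (pop
            + rng.random((n, 1)) * (leader - pop)
            + rng.random((n, 1)) * (group_mean - pop))

    # Phase 2: react to a randomly placed "predator" (exploitation around current positions).
    predator = rng.uniform(lo, hi, dim)
    cand += rng.random((n, 1)) * (cand - predator) / (t + 1)

    # Phase 3: escape toward a nearby safe spot within a radius that shrinks over iterations.
    radius = (hi - lo) * (1 - t / max_iter)
    cand += 0.1 * rng.uniform(-radius, radius, (n, dim))

    # Keep candidates inside the search bounds and accept only improving moves (greedy selection).
    cand = np.clip(cand, lo, hi)
    cand_fit = np.apply_along_axis(objective, 1, cand)
    improved = cand_fit < fitness
    pop[improved], fitness[improved] = cand[improved], cand_fit[improved]
    return pop, fitness
```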

The researchers mathematically formulated the HO algorithm around this three-phase model. The algorithm treats a population of hippopotamuses as candidate solutions to the optimization problem and updates their positions according to the relationships defined for each phase. It stops when a predefined criterion is met, such as reaching the maximum number of iterations or a desired accuracy.
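
A minimal driver loop for this process might look like the following, reusing the ho_step sketch above. The population size, search bounds, and stopping thresholds are placeholder values rather than the settings used in the study.

```python
import numpy as np

def ho_optimize(objective, dim, bounds=(-100.0, 100.0), pop_size=30,
                max_iter=500, target=None, seed=0):
    """Initialize a population of candidate hippopotamuses and apply the
    ho_step sketch until max_iter is reached or (optionally) the best
    fitness reaches a desired accuracy."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fitness = np.apply_along_axis(objective, 1, pop)
    for t in range(max_iter):
        pop, fitness = ho_step(pop, fitness, objective, bounds, t, max_iter, rng)
        if target is not None and fitness.min() <= target:   # desired-accuracy stopping criterion
            break
    best_idx = fitness.argmin()
    return pop[best_idx], fitness[best_idx]
```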

Furthermore, the study evaluated the performance of the HO algorithm on a set of 161 standard benchmark functions of various types, including unimodal, multimodal, fixed-dimensional, high-dimensional, and zigzag-pattern functions. The algorithm was also tested on the Congress on Evolutionary Computation (CEC) 2019 and CEC 2014 test suites, which are widely used for assessing the performance of optimization algorithms.
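
As a simple illustration of this kind of benchmarking, the driver above can be run on classical test functions such as the unimodal sphere function and the multimodal Rastrigin function, both with a known global minimum of zero at the origin. The dimensions and settings below are arbitrary choices for the sketch, not those reported in the paper.

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: f(x) = sum(x_i^2)."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal benchmark with many local minima: f(x) = 10n + sum(x_i^2 - 10 cos(2*pi*x_i))."""
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

best_x, best_f = ho_optimize(rastrigin, dim=10, bounds=(-5.12, 5.12),
                             pop_size=50, max_iter=1000)
print(f"Best fitness found on the 10-D Rastrigin function: {best_f:.6f}")
```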

Moreover, it was compared with 12 well-known metaheuristic algorithms, namely the whale optimization algorithm (WOA), grey wolf optimizer (GWO), salp swarm algorithm (SSA), particle swarm optimization (PSO), sine cosine algorithm (SCA), firefly algorithm (FA), grasshopper optimization algorithm (GOA), teaching–learning-based optimization (TLBO), moth-flame optimization algorithm (MFO), invasive weed optimization algorithm (IWO), arithmetic optimization algorithm (AOA), and evolution strategy with covariance matrix adaptation (CMA-ES).

Research Findings

The outcomes showed that the HO algorithm achieved the best rank on 115 of the 161 benchmark functions and outperformed the other algorithms in terms of finding the optimal value, convergence speed, and robustness. The authors also performed statistical analysis to confirm the significance of the results. They then applied the HO algorithm to four engineering design problems: the welded beam, pressure vessel, speed reducer, and spring design problems. These are constrained optimization problems that involve minimizing the cost or weight of a structure while satisfying a set of design constraints.
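
The article does not state which statistical test the authors applied. A common choice for this kind of run-by-run pairwise comparison between two algorithms is the non-parametric Wilcoxon signed-rank test, sketched below with placeholder numbers rather than results from the paper.

```python
from scipy.stats import wilcoxon

# Hypothetical best-fitness values from 10 independent runs of two algorithms
# on the same benchmark function (placeholder numbers, not data from the study).
ho_runs    = [1.2e-8, 3.4e-9, 8.1e-9, 2.2e-8, 5.5e-9, 1.9e-8, 7.3e-9, 4.1e-9, 1.1e-8, 6.6e-9]
rival_runs = [3.1e-4, 2.8e-4, 4.5e-4, 1.9e-4, 3.7e-4, 2.2e-4, 5.0e-4, 2.9e-4, 3.3e-4, 4.1e-4]

stat, p_value = wilcoxon(ho_runs, rival_runs)   # paired, non-parametric significance test
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant difference between the runs.
```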

The study compared the HO algorithm with the same 12 algorithms used for the benchmark functions. The results showed that the HO algorithm obtained the best solutions for all four problems while satisfying the constraints and demonstrating high efficiency and feasibility.
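
Engineering design problems such as these combine a cost objective with inequality constraints. One simple way to let a population-based optimizer handle them (not necessarily the constraint-handling scheme used in the paper) is a static penalty function, sketched here with hypothetical cost and constraint functions.

```python
def penalized(objective, constraints, penalty=1e6):
    """Wrap a constrained design problem so a box-constrained optimizer can minimize it.
    Each constraint g is expected in the form g(x) <= 0; any violation adds a large
    penalty to the objective. This is a generic scheme, not the paper's exact method."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation
    return wrapped

# Hypothetical usage with placeholder cost and constraint functions g1, g2, g3:
# best_x, best_f = ho_optimize(penalized(cost, [g1, g2, g3]), dim=3, bounds=(0.05, 2.0))
```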

The algorithm could solve optimization problems in medical, control, mechanical, telecommunication, energy, civil, and chemical engineering, as well as economics. It could handle complex, nonlinear, and high-dimensional problems with ease and balance exploration and exploitation effectively. Moreover, it could be extended or modified to suit different types of problems, such as multi-objective, discrete, or constrained optimization problems.

Conclusion

In summary, the novel algorithm proved effective and efficient for solving optimization problems. It mimicked the position update, defensive, and evasive strategies of hippopotamuses to balance the exploration and exploitation of the search space. Moreover, it showed superior performance compared to other well-known algorithms on various benchmark functions and engineering design problems.

The researchers acknowledged the algorithm's limitations and challenges and suggested directions for future work. They believe the HO algorithm could be further improved by incorporating adaptive parameters, hybridization techniques, or parallelization methods.

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.
