Power loss model identification in gearboxes: Balancing identifiability and testing time
By: Arash M. Zadeh Fard, Ward Rottiers, Matteo Kirchner, Bert Pluymers, Wim Desmet, Frank Naets

Why did we do this study?
Reliable model identification is essential in many engineering applications, because models are often used to understand system behaviour, support design decisions, and predict performance under different operating conditions. However, obtaining the experimental data needed for accurate model identification can be slow and expensive. In practice, this creates a clear trade-off: enough experiments are needed to reliably identify model parameters, but extensive testing increases time, cost, and practical complexity.
This challenge appears in many settings. For example, in thermal systems, batteries, drivetrains, or industrial processes, experiments may require long stabilisation times before meaningful measurements can be taken. In other cases, changing between operating conditions may itself be costly or time-consuming. As a result, it is often not feasible to test every possible operating point in a dense experimental grid.
This study was motivated by the need to improve model identification efficiency. Rather than performing more tests, the key question is how to perform better ones: which experiments provide the most useful information, and how can they be selected to improve parameter identifiability while keeping testing time under control?
What did we do?
This challenge is addressed through an adaptive design of experiments (DoE) framework that selects testing conditions systematically and efficiently. The aim is to improve parameter identifiability while keeping the overall testing effort manageable. Instead of using a fixed set of experiments from the start, the method updates the test plan as new information becomes available.
At each stage, the framework selects the parameters that are difficult to identify accurately and then chooses the next operating point to provide the most useful information. In this way, each new experiment is selected with a clear purpose.
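The selection step can be illustrated with a classic information-based criterion. The sketch below is purely illustrative, not the published implementation: the sensitivity model and the (speed, load) parameterisation are invented stand-ins, and D-optimality (maximising the determinant of the Fisher information matrix) is just one common way to score how much a candidate experiment helps pin down poorly identified parameters.

```python
import numpy as np

# Illustrative sketch only: greedy selection of the next operating point by
# maximising the determinant of the Fisher information matrix (D-optimality).
# The sensitivity model and the (speed, load) parameterisation are invented
# stand-ins, not the authors' gearbox power-loss model.

def sensitivity(op_point):
    """Hypothetical sensitivities of three model parameters at (speed, load)."""
    speed, load = op_point
    return np.array([speed, load, speed * load])

def next_test_point(candidates, tested):
    """Pick the candidate that most increases det(FIM) over already-tested points."""
    fim = sum(np.outer(sensitivity(p), sensitivity(p)) for p in tested)
    gains = [np.linalg.det(fim + np.outer(sensitivity(c), sensitivity(c)))
             for c in candidates]
    return candidates[int(np.argmax(gains))]

tested = [(1.0, 0.5), (0.5, 1.0)]                  # points already measured
candidates = [(0.6, 0.6), (2.0, 0.2), (1.5, 1.5)]  # possible next experiments
print(next_test_point(candidates, tested))          # -> (1.5, 1.5)
```

With only two tested points, three parameters are not yet identifiable; the greedy rule picks the candidate whose sensitivity direction adds the most new information, which is the spirit of selecting each experiment "with a clear purpose".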
The method then goes one step further by also considering the total testing time as an objective. This allows balancing two goals: obtaining a more identifiable model and reducing the time required to complete the experiments.
This is especially important in cases where testing time depends not only on the number of experiments, but also on the order in which they are performed. For example, when thermal effects are significant, transitioning between different operating conditions can lead to long wait times before the system stabilises. By accounting for this and reducing unnecessary temperature changes between successive tests, the framework helps shorten the overall campaign without compromising model quality.
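The sequencing effect can be made concrete with a toy example. Everything below is an assumption for illustration: the wait-time model (stabilisation time proportional to the temperature change between successive tests) and the greedy nearest-temperature ordering are simple stand-ins for the bi-objective optimisation used in the study.

```python
# Illustrative sketch only: reordering a fixed set of tests so that large
# temperature jumps between consecutive experiments are avoided. The linear
# wait-time model (minutes per degC of change) is an invented assumption.

def total_wait_time(temps_sequence, minutes_per_degC=2.0):
    """Total stabilisation time for a test order, assuming wait time scales
    with the temperature change between successive operating points."""
    return sum(abs(b - a) * minutes_per_degC
               for a, b in zip(temps_sequence, temps_sequence[1:]))

def greedy_order(temps, start):
    """Greedy nearest-temperature sequencing, starting from `start` degC."""
    remaining, order, current = list(temps), [start], start
    while remaining:
        nxt = min(remaining, key=lambda t: abs(t - current))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

planned = [20, 80, 40, 95, 60, 45]          # naive order, starting at 20 degC
optimised = greedy_order(planned[1:], planned[0])
print(total_wait_time(planned), total_wait_time(optimised))  # -> 410.0 150.0
```

Running the same five tests in a temperature-monotone order cuts the modelled stabilisation time from 410 to 150 minutes, which is the intuition behind treating test order, not just test count, as part of the design problem.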
Key findings
In this study, the methodology is demonstrated on gearbox power-loss model identification. The case study shows how adaptive DoE can guide the selection and sequencing of operating points, resulting in a more efficient identification process and substantially shorter testing campaigns. More broadly, the work highlights how adaptive and optimisation-based DoE can support model identification problems in which experimental time and parameter reliability must both be considered.
- The proposed adaptive framework improves parameter identifiability by iteratively selecting the most informative operating points.
- A bi-objective formulation allows balancing identifiability with total testing time, rather than optimising only one of the two.
- Test sequencing is shown to matter: reducing temperature swings between experiments lowers overall testing effort.
- The approach enabled substantially shorter testing campaigns while maintaining reliable model identification.
How did VSC contribute?
The proposed methodology relies on repeated optimisation and identifiability analyses, which can become computationally demanding when many candidate operating points and parameter combinations must be evaluated. VSC provided the computational resources needed to run a large number of independent optimisations efficiently. This made it possible to explore adaptive and bi-objective DoE strategies in a practical way, and to support the development of a framework that balances identifiability with testing time.
Read the full scientific publication on ScienceDirect here
🔍 Your Research Matters — Let’s Share It!
Have you used VSC’s computing power in your research? Did our infrastructure support your simulations, data analysis, or workflow?
We’d love to hear about it!
Take part in our #ShareYourSuccess campaign and show how VSC helped move your research forward. Whether it’s a publication, a project highlight, or a visual from your work, your story can inspire others.
🖥️ Be featured on our website and social media. Show the impact of your work. Help grow our research community.
📬 Submit your story: https://www.vscentrum.be/sys