2024-09-08
Vahid Rafeh

Academic rank: Associate Professor
ORCID: https://orcid.org/0000-0002-2486-7384
Education: PhD.
ScopusId: 14054926800
H-index:
Faculty: Engineering
Address: Arak University
Phone:

Research

Title
An approach based on knowledge exploration for state space management in checking reachability of complex software systems
Type
Journal paper
Keywords
Model checking; state space explosion; reachability property; ensemble classification
Year
2020
Journal
SOFT COMPUT
DOI
Researchers Jafar Partabian, Vahid Rafeh, Hamid Parvin, Samd Nejatian

Abstract

Model checking is one of the most efficient techniques for software system verification. However, state space explosion is a major challenge when using this technique to check properties such as safety. In this situation, one can search the state space for a reachable state in which the safety property is violated; hence, reachability checking can be performed instead of checking the safety property directly. In the worst case, however, reachability checking may cause state space explosion again. To handle this problem, our idea is based on generating a small model consistent with the main model. We explore the state space of this small model entirely to find the goal states. After finding the goal states, we label the paths that start from the initial state and lead to a goal state. Then, using an ensemble classification technique, the necessary knowledge is extracted from these paths to intelligently explore the state space of the bigger model. The ensemble machine learning technique uses the Boosting method along with decision trees and follows sampling with replacement: it generates k predictive models after sampling k times, and finally uses a voting mechanism to predict the labels of the final path. Our proposed approach is implemented in GROOVE, an open-source toolset for designing and model checking graph transformation systems. Our experiments show a significant improvement in both speed and memory usage: on average, our approach consumes nearly 42% less memory than other approaches and generates witnesses nearly 20% shorter.
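The core ensemble step described in the abstract (train k models on samples drawn with replacement, then combine them by majority vote) can be sketched as follows. This is a minimal illustration, not the paper's implementation: decision stumps stand in for the decision trees, the one-dimensional feature vectors are hypothetical stand-ins for whatever encoding of state-space paths the authors use, and the sampling-with-replacement-plus-voting shown here is the bagging flavor of ensembling (the abstract names Boosting, whose sample-reweighting step is omitted in this sketch).

```python
import random
from collections import Counter

def train_stump(data):
    """Train a one-level decision tree (a stand-in for the deeper trees
    the paper uses) on a list of (feature_vector, label) pairs."""
    best = None  # (accuracy, feature, threshold, left_label, right_label)
    for f in range(len(data[0][0])):
        for xs, _ in data:
            t = xs[f]  # candidate threshold taken from the data itself
            left = [y for x, y in data if x[f] <= t]
            right = [y for x, y in data if x[f] > t]
            if not left or not right:
                continue
            # Majority label on each side of the split
            ll = Counter(left).most_common(1)[0][0]
            rl = Counter(right).most_common(1)[0][0]
            acc = sum((x[f] <= t and y == ll) or (x[f] > t and y == rl)
                      for x, y in data) / len(data)
            if best is None or acc > best[0]:
                best = (acc, f, t, ll, rl)
    if best is None:  # no valid split (e.g. all feature values identical)
        maj = Counter(y for _, y in data).most_common(1)[0][0]
        return lambda x: maj
    _, f, t, ll, rl = best
    return lambda x: ll if x[f] <= t else rl

def train_ensemble(data, k=5, seed=0):
    """Build k models, each trained on a bootstrap sample drawn
    with replacement from the labeled path data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(k)]

def vote(models, x):
    """Majority vote over the k models' predictions for a new point."""
    return Counter(m(x) for m in models).most_common(1)[0][0]
```

For example, with toy data where label 1 marks (hypothetically encoded) states on a path to a goal, `train_ensemble(data, k=5)` yields five models whose combined vote labels unseen states, which is the mechanism the approach uses to steer exploration of the bigger model.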