State Distribution Policy for Distributed Model Checking of Actor Models

Authors

  • Ehsan Khamespanah, University of Tehran
  • Marjan Sirjani, Reykjavik University
  • Mohammadreza Mousavi, Halmstad University
  • Zeynab Sabahi Kaviani, University of Tehran
  • Mohamadreza Razzazi, Amirkabir University of Technology

DOI:

https://doi.org/10.14279/tuj.eceasst.72.1022

Abstract

Model checking temporal properties is often reduced to finding accepting cycles in Büchi automata. A key ingredient for an effective distributed model checking technique is a distribution policy that does not split the potential accepting cycles of the corresponding automaton among several nodes. In this paper, we introduce a distribution policy that reduces the number of split cycles. This policy is based on the call dependency graph, obtained from the message-passing skeleton of the model. We prove theoretical results about the correspondence between the cycles of the call dependency graph and the cycles of the concrete state space, and provide empirical data obtained from applying our distribution policy to state space generation and reachability analysis. We take Rebeca, an imperative interpretation of actors, as our modeling language and implement the introduced policy in its distributed state space generator. Our technique can be applied to other message-driven actor-based models where concurrent objects or services are the units of concurrency.
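To illustrate the idea behind such a policy, the following Python sketch partitions states over worker nodes so that states reached while serving messages of one cycle of the call dependency graph (CDG) stay on the same node. It is a minimal sketch under stated assumptions, not the authors' implementation: the names tarjan_scc, make_policy, assign_node, and the "last served message" fingerprint are illustrative assumptions, and cyclic strongly connected components are used as a stand-in for the CDG cycles discussed in the paper.

# A minimal illustrative sketch (not the paper's tool) of a CDG-based
# distribution policy. Names and interfaces here are assumptions.
from itertools import count
import hashlib

def tarjan_scc(edges):
    """Strongly connected components of the CDG; each cyclic SCC
    over-approximates a family of potential cycles in the state space."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = count()

    def strongconnect(v):
        index[v] = low[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def make_policy(cdg_edges, num_nodes):
    """Pin every cyclic SCC of the CDG to one worker node, so that states
    generated while serving messages of one potential cycle stay together."""
    edge_set = set(cdg_edges)
    owner, node_iter = {}, count()
    for comp in tarjan_scc(cdg_edges):
        v = next(iter(comp))
        if len(comp) > 1 or (v, v) in edge_set:   # keep cyclic SCCs only
            node = next(node_iter) % num_nodes
            for server in comp:
                owner[server] = node

    def assign_node(state_fingerprint, last_served_message):
        if last_served_message in owner:          # state lies on a CDG cycle
            return owner[last_served_message]
        # States outside every CDG cycle fall back to plain hash partitioning.
        digest = hashlib.sha1(state_fingerprint.encode()).hexdigest()
        return int(digest, 16) % num_nodes

    return assign_node

# Example: a ping/pong message-passing skeleton whose CDG has one cycle {ping, pong}.
policy = make_policy([("ping", "pong"), ("pong", "ping"), ("ping", "log")], 4)
print(policy("s0", "pong"))   # kept with states last serving "ping"
print(policy("s1", "log"))    # hashed: "log" lies on an acyclic branch

In this sketch, the fallback to hashing only affects states that cannot lie on a CDG cycle, which is consistent with the goal of not splitting potential accepting cycles while still balancing the remaining states.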

Published

2015-11-25

How to Cite

[1] E. Khamespanah, M. Sirjani, M. Mousavi, Z. Sabahi Kaviani, and M. Razzazi, “State Distribution Policy for Distributed Model Checking of Actor Models”, ECEASST, vol. 72, Nov. 2015.