Stochastic Systems Group
Probabilistic models for distributed and concurrent systems: Markov nets
Large distributed systems are pervasive in information technology. For such systems, only local states can be considered, not global ones (what would it mean to consider the state of the Web?); similarly, events can only be ordered locally in time, not globally; finally, many subsystems progress independently, without interacting, for long periods of time (sometimes forever). Probabilistic models that take these features into account have only recently been developed, independently by Varacca and Winskel, and by Abbes and Benveniste.
I shall present Markov nets, an extension of Markov and semi-Markov chains to distributed and concurrent systems. Our starting point is (safe) Petri nets with true-concurrency semantics: executions of the net are modeled by traces, not firing sequences; a trace is the equivalence class of firing sequences modulo the exchange of concurrent consecutive events (transition firings). This way, irrelevant interleavings are discarded and states become genuinely local (the marking graph is never considered). Traces are partially ordered sets of events.
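The identification of firing sequences up to exchange of concurrent events can be made concrete with the classical projection lemma for Mazurkiewicz traces: two sequences are trace-equivalent iff their projections onto every pair of dependent letters coincide. Below is a minimal sketch (not code from the work being presented); the three-transition example, where transitions a and c are assumed concurrent while b depends on both, is invented for illustration.

```python
from itertools import combinations

def projections(word, alphabet, dependent):
    """Project `word` onto every dependent pair of letters.

    By the projection lemma for Mazurkiewicz traces, two firing
    sequences are trace-equivalent iff all these projections coincide.
    """
    projs = {}
    for a, b in combinations(sorted(alphabet), 2):
        if dependent(a, b):
            projs[(a, b)] = [x for x in word if x in (a, b)]
    # Singleton projections record how often each transition fires.
    for a in alphabet:
        projs[(a, a)] = [x for x in word if x == a]
    return projs

def trace_equivalent(u, v, dependent):
    """Check whether firing sequences u and v denote the same trace."""
    alphabet = set(u) | set(v)
    return projections(u, alphabet, dependent) == projections(v, alphabet, dependent)

# Hypothetical net: a and c share no place (concurrent); b depends on both.
dep = lambda x, y: {x, y} != {"a", "c"}
print(trace_equivalent("acb", "cab", dep))  # True: a, c exchanged
print(trace_equivalent("acb", "abc", dep))  # False: b, c are dependent
```

Exchanging the concurrent events a and c gives the same trace, so "acb" and "cab" are one execution; reordering the dependent pair b, c is not allowed.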
We show how to construct probabilities on the set of all maximal traces of a net. A particular construction yields the Markov nets. Markov nets satisfy a strong Markov property, plus a new property stating that concurrent subsystems are probabilistically independent (a property not satisfied by earlier stochastic Petri net models). They also admit a new local renewal theory and a new law of large numbers. This analysis is of interest for concurrency theory in general, as it clarifies the very nature of choice and local state in nondeterministic systems.
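The independence property can be illustrated on a toy example (this is not the general Abbes–Benveniste construction, just a hand-built special case): take a 1-safe net made of two components that share no place, each resolving one local conflict with a local routing distribution. A maximal trace is then an unordered pair of local choices, its probability is the product of the local probabilities, and no interleaving order is distinguished. The net and the numbers below are invented for illustration.

```python
from itertools import product

# Toy 1-safe net with two non-interacting components.
# Component 1: place p1 enables conflicting transitions a, b.
# Component 2: place p2 enables conflicting transitions c, d.
# Hypothetical local routing probabilities at each choice:
local = {
    "p1": {"a": 0.3, "b": 0.7},
    "p2": {"c": 0.6, "d": 0.4},
}

def trace_probability(trace):
    """Probability of a maximal trace as a product of local choices.

    Since the components never interact, a maximal trace is an
    unordered set {x, y} with x in {a, b} and y in {c, d}.
    """
    p = 1.0
    for place, choices in local.items():
        fired = [t for t in trace if t in choices]
        assert len(fired) == 1  # exactly one local choice per component
        p *= choices[fired[0]]
    return p

traces = [frozenset(t) for t in product("ab", "cd")]
total = sum(trace_probability(t) for t in traces)
print(total)  # the four maximal traces carry total probability 1
```

Note the factorization this exhibits: the probability that a fires, summed over all maximal traces containing a, equals its local probability 0.3, independently of what the other component does.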
This study was originally motivated by distributed fault management and alarm correlation in telecom networks. Samy Abbes is now applying this approach to network information theory.