PRM inference using Jaffray & Faÿ’s Local Conditioning

Article ID: iaor20116984
Volume: 71
Issue: 1
Start Page Number: 33
End Page Number: 62
Publication Date: Jul 2011
Journal: Theory and Decision
Authors: ,
Keywords: behaviour
Abstract:

Probabilistic Relational Models (PRMs) are a framework for compactly representing uncertainties (actually probabilities). They result from the combination of Bayesian Networks (BNs), Object-Oriented languages, and relational models. They are specifically designed for efficient construction, maintenance, and exploitation in very large-scale problems, where BNs are known to perform poorly. Indeed, in large-scale problems, it is often the case that BNs result from the combination of patterns (small BN fragments) repeated many times. PRMs exploit this feature by defining these patterns only once (the so-called PRM classes) and using them through multiple instances, as prescribed by the Object-Oriented paradigm. This design induces low construction and maintenance costs. In addition, by exploiting the classes' structures, the PRM state-of-the-art inference algorithm 'Structured Variable Elimination' (SVE) significantly outperforms classical BN inference algorithms (e.g., Variable Elimination, VE; Local Conditioning, LC). SVE is essentially an extension of VE that exploits classes to avoid redundant computations. In this article, we show that SVE can be enhanced using LC. Although LC is often thought to be outperformed by VE-like algorithms in BNs, we believe it should play an important role for PRMs because its features are particularly well suited to exploiting PRM classes. Relying on Faÿ and Jaffray's work, we show how LC can be used in conjunction with VE and derive an extension of SVE that outperforms it on large-scale problems. Numerical experiments highlight the practical efficiency of our algorithm.
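
The core idea the abstract describes, defining a BN fragment once as a PRM class, reusing it across many instances, and running Variable Elimination over the resulting factors, can be illustrated with a small sketch. The code below is an illustrative assumption, not the paper's SVE or LC algorithm: the names Factor, make_fragment, and eliminate, as well as the numerical CPT values, are invented for this example.

```python
# Minimal sketch (assumed names/values): a PRM-style "class" is a small BN
# fragment defined once and instantiated several times; plain Variable
# Elimination is then run on the resulting factors.
from itertools import product

class Factor:
    """Discrete factor over binary variables, stored as {assignment tuple: value}."""
    def __init__(self, variables, table):
        self.variables = list(variables)   # ordered variable names
        self.table = dict(table)           # {tuple of 0/1 values: float}

    def multiply(self, other):
        vars_ = self.variables + [v for v in other.variables if v not in self.variables]
        table = {}
        for assign in product([0, 1], repeat=len(vars_)):
            full = dict(zip(vars_, assign))
            a = tuple(full[v] for v in self.variables)
            b = tuple(full[v] for v in other.variables)
            table[assign] = self.table[a] * other.table[b]
        return Factor(vars_, table)

    def sum_out(self, var):
        keep = [v for v in self.variables if v != var]
        idx = self.variables.index(var)
        table = {}
        for assign, val in self.table.items():
            reduced = tuple(x for i, x in enumerate(assign) if i != idx)
            table[reduced] = table.get(reduced, 0.0) + val
        return Factor(keep, table)

def make_fragment(instance, parent):
    """PRM-style class: the same CPT pattern P(X_i | parent), instantiated per object."""
    x = f"X_{instance}"
    # Illustrative numbers: P(x=1 | parent=0) = 0.2, P(x=1 | parent=1) = 0.7
    cpt = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}
    return Factor([x, parent], cpt)

def eliminate(factors, order):
    """Plain Variable Elimination: multiply factors mentioning var, then sum it out."""
    factors = list(factors)
    for var in order:
        related = [f for f in factors if var in f.variables]
        factors = [f for f in factors if var not in f.variables]
        prod_f = related[0]
        for f in related[1:]:
            prod_f = prod_f.multiply(f)
        factors.append(prod_f.sum_out(var))
    result = factors[0]
    for f in factors[1:]:
        result = result.multiply(f)
    return result

if __name__ == "__main__":
    prior = Factor(["R"], {(0,): 0.6, (1,): 0.4})          # shared parent R
    instances = [make_fragment(i, "R") for i in range(3)]  # the pattern reused 3 times
    marginal = eliminate([prior] + instances, ["R", "X_1", "X_2"])
    print(marginal.variables, marginal.table)              # marginal over X_0
```

As the abstract explains, SVE's advantage over this plain VE comes from exploiting the fact that the three instances share one class: computations performed on the pattern can be done once and reused, instead of being repeated for every instance. The article's contribution is to combine this class-level reuse with Jaffray and Faÿ's Local Conditioning.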
