Constrained counting and sampling are two fundamental problems in Computer Science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions to a given set of constraints. Consequently, constrained counting and sampling have been subject to intense theoretical and empirical investigation over the years. Prior work, however, offered either heuristic techniques with weak accuracy guarantees or approaches with proven guarantees but poor performance in practice.

We introduce collapsed compilation, a novel approximate inference algorithm for discrete probabilistic graphical models. It is a collapsed sampling algorithm that incrementally selects which variable to sample next based on the partial sample obtained so far. This online collapsing, together with knowledge compilation inference on the remaining variables, naturally exploits local structure and context-specific independence in the distribution. These properties are readily exploited in exact inference, but are difficult to harness for approximate inference. Moreover, by having a partially compiled circuit available during sampling, collapsed compilation has access to a highly effective proposal distribution for importance sampling. Our experimental evaluation shows that collapsed compilation performs well on standard benchmarks. In particular, when the amount of exact inference is equally limited, collapsed compilation is competitive with the state of the art and outperforms it on several benchmarks.
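To make the two task definitions concrete, and to show why an informed proposal distribution matters for importance sampling, here is a minimal Python sketch under stated assumptions: the toy CNF, the literal-product weighting function, and every name below are illustrative choices for this sketch, not the API of any of the systems discussed above.

import itertools
import random
from math import prod

# Toy CNF over x1..x3: (x1 or x2) and (not x2 or x3), as signed literals.
CNF = [[+1, +2], [-2, +3]]
N = 3
W = {1: 0.5, 2: 0.3, 3: 0.9}   # weight of literal xi; (1 - W[i]) for not-xi

def satisfies(a):
    """True iff the 0/1 tuple a satisfies every clause of CNF."""
    return all(any((l > 0) == bool(a[abs(l) - 1]) for l in cl) for cl in CNF)

def weight(a):
    """Weight of an assignment: the product of its literal weights."""
    return prod(W[i + 1] if v else 1 - W[i + 1] for i, v in enumerate(a))

# Constrained counting and sampling by brute-force enumeration.
solutions = [a for a in itertools.product((0, 1), repeat=N) if satisfies(a)]
Z = sum(map(weight, solutions))                          # total weight
draw = random.choices(solutions, [weight(a) for a in solutions])[0]

# Importance sampling avoids enumeration: draw from a tractable proposal q
# and average weight(a)/q(a) over the satisfying draws. Here q sets each
# variable independently with probability W[i]; a partially compiled
# circuit, as in collapsed compilation, would supply a far better q.
def propose():
    a = tuple(int(random.random() < W[i + 1]) for i in range(N))
    return a, weight(a)        # under this q, q(a) equals weight(a)

M = 100_000
est = 0.0
for _ in range(M):
    a, q = propose()
    if satisfies(a):
        est += weight(a) / q
Z_hat = est / M                # unbiased estimate of Z
print(Z, Z_hat)

With this particular q the ratio weight(a)/q(a) is identically 1, so the estimator reduces to the satisfying fraction of draws; a proposal that concentrates its mass on solutions, such as one derived from a partially compiled circuit, drives the variance down further.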
Model counting is the problem of computing the number of satisfying assignments of a given propositional formula. Although most knowledge compilation (KC) methods naturally furnish exact model counters, in practice they fail to generate compiled results for certain formulas because the compiled sizes explode. Decision-DNNF is an important KC language that captures most practical compilers. We propose a generalization of Decision-DNNF (referred to as partial Decision-DNNF) obtained by introducing a new class of leaf vertices (called unknown vertices), and an algorithm called PartialKC that generates random partial Decision-DNNF formulas from the given formulas. An unbiased estimate of the model count can be computed from a random partial Decision-DNNF formula. Each call to PartialKC consists of multiple calls to MicroKC, each of which performs importance sampling equipped with KC technologies. The experimental results show that PartialKC is more accurate than both SampleSearch and SearchTreeSampler, scales better than SearchTreeSampler, and that the KC technologies markedly accelerate sampling.
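As a rough illustration of how a partial compilation can still yield an unbiased count, the sketch below evaluates a hand-built partial Decision-DNNF bottom-up, replacing each unknown leaf by a naive Monte Carlo estimate of its residual subformula's model count. The tuple-based node encoding and the estimator are assumptions made for this sketch, not PartialKC's actual data structures or its MicroKC sampler.

import random

def mc_estimate(cnf, variables, samples=10_000):
    """Unbiased Monte Carlo estimate of a residual subformula's model
    count: (satisfying fraction of uniform assignments) * 2^|variables|."""
    variables = list(variables)
    hits = 0
    for _ in range(samples):
        a = {v: random.random() < 0.5 for v in variables}
        if all(any(a[abs(l)] == (l > 0) for l in cl) for cl in cnf):
            hits += 1
    return hits / samples * 2 ** len(variables)

def count(node):
    """Bottom-up (estimated) model count of a partial Decision-DNNF.
    Leaves: ('true', vars), ('false',), ('unknown', cnf, vars).
    Internal: ('decision', low, high) with disjoint branches, and
    ('and', children) with variable-disjoint children, so sums and
    products of independent unbiased estimates stay unbiased."""
    kind = node[0]
    if kind == 'true':
        return 2 ** len(node[1])        # free variables are unconstrained
    if kind == 'false':
        return 0
    if kind == 'unknown':
        return mc_estimate(node[1], node[2])
    if kind == 'decision':
        return count(node[1]) + count(node[2])
    if kind == 'and':
        result = 1
        for child in node[1]:
            result *= count(child)
        return result

# Example: a decision on x1 whose high branch is still uncompiled.
circuit = ('decision',
           ('true', [2]),               # x1 = 0: x2 is free (2 models)
           ('unknown', [[2]], [2]))     # x1 = 1: clause (x2) uncompiled
print(count(circuit))                   # about 3 in expectation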
Exact inference in Bayesian networks is intractable and has an exponential dependence on the size of the largest clique in the corresponding clique tree, necessitating approximations. Techniques for approximate inference typically use iterative belief propagation (BP) in graphs with bounded cluster sizes. We propose an alternative approach to approximate inference based on an incremental build-infer-approximate (IBIA) paradigm. In the build stage of this approach, bounded-clique-size partitions are obtained by building the clique tree (CT) incrementally: nodes are added to the CT as long as the clique sizes remain within a user-specified constraint. Once the clique size constraint is reached, the infer-and-approximate stage finds an approximate CT with smaller cliques to which new nodes can be added. This step involves exact inference to calibrate the CT and a combination of exact and approximate marginalization for the approximation. The approximate CT serves as the starting point for the construction of the CT for the next partition. The algorithm returns a forest of calibrated clique trees corresponding to all partitions. We show that our algorithm for incremental construction of clique trees always generates a valid CT and that our approximation technique automatically maintains consistency of within-clique beliefs. The queries of interest are prior and posterior singleton marginals and the partition function. More than 500 benchmarks were used to test the method, and the results show a significant reduction in error compared with other approximate methods, with competitive runtimes.
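The control flow of this paradigm can be sketched compactly. The toy below is a deliberately crude but runnable illustration: a "clique tree" is reduced to a list of cliques (variable sets), calibration is elided to a comment, and the approximation step is a placeholder that merely truncates cliques below the bound. Every helper here is hypothetical and mirrors only the build-infer-approximate loop, not the authors' implementation.

def add_factor(cliques, scope):
    """Merge a new factor's scope with every overlapping clique (a strong
    simplification of adding a node to a clique tree)."""
    merged = set(scope)
    rest = []
    for clique in cliques:
        if clique & merged:
            merged |= clique
        else:
            rest.append(clique)
    return rest + [merged]

def ibia(factor_scopes, size_bound):
    """Returns one clique list per partition (the 'forest' of the text)."""
    forest, cliques = [], []
    for scope in factor_scopes:
        candidate = add_factor(cliques, scope)
        if max(len(c) for c in candidate) <= size_bound:
            cliques = candidate                  # build stage
        else:
            # infer: exact calibration of the completed partition goes here
            forest.append(cliques)
            # approximate: shrink cliques so new nodes fit again; the real
            # step marginalizes beliefs, this placeholder keeps only the
            # last (size_bound - 1) variables of each clique
            cliques = [set(sorted(c)[-(size_bound - 1):]) for c in cliques]
            cliques = add_factor(cliques, scope)
    forest.append(cliques)
    return forest

# Example: a chain of pairwise factors over x0..x5, clique size bound 3.
print(ibia([{i, i + 1} for i in range(5)], size_bound=3))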