# counting-benchmarks

A collection of model counting (#SAT) benchmarks. If you have some interesting instances, please consider submitting them! See below for details.

This repository lives at http://tinyurl.com/countingbenchmarks.

## Organization

All the benchmarks are contained in the `benchmarks` folder, which has three subfolders for different types of problems:

- `basic`: #SAT problems (a minimal worked example follows this list)
- `projection`: projection counting (#∃SAT) problems
- `weighted`: weighted #SAT problems
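
As a quick illustration of the three tasks, consider the formula x₁ ∨ x₂. Its model count is 3, since every assignment except x₁ = x₂ = false satisfies it. Its projected count over {x₁} is 2, since both values of x₁ can be extended to a model. A weighted count instead sums the weights of the three models, where the weight of a model is typically the product of the weights of its literals.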

Each of these problem types is in turn divided into several categories:

- `application`: instances arising from applications
- `quasi-application`: instances derived from SAT instances that arise from applications (e.g., taking an industrial SAT instance and treating it as a #SAT instance even if the model count isn't necessarily meaningful)
- `crafted`: specially constructed instances not necessarily having an application
- `random`: randomly generated instances
- `misc`: instances that have not been categorized

Within these categories are folders for each set of benchmarks. Each set has a README file explaining how the benchmarks were generated and citing any relevant papers. The `misc` benchmarks may include sets lacking a description, e.g. if they derive from work that has not yet been published.

For convenience in running experiments on all benchmarks of a given type, every folder contains a `benchmarkList.txt` file listing paths to every benchmark in the folder. For example, one can use the list in `projection/application/pmc-symbolic-markov` to run all the benchmarks from that particular set, or the list in `projection` to run all projection counting benchmarks from every set. Occasionally a benchmark set includes the same benchmark in multiple subfolders; the `benchmarkList.txt` files list each such benchmark only once.
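
A minimal driver script, as a sketch: the counter command (`sharpSAT` here) is a stand-in for whichever tool you are evaluating, and the paths in `benchmarkList.txt` are assumed to be relative to the directory containing the list.

```python
#!/usr/bin/env python3
"""Run a model counter on every benchmark listed in a benchmarkList.txt file."""
import subprocess
import sys
from pathlib import Path

list_file = Path(sys.argv[1])  # e.g. benchmarks/projection/benchmarkList.txt
base_dir = list_file.parent    # assumption: listed paths are relative to this folder

for entry in list_file.read_text().splitlines():
    entry = entry.strip()
    if not entry:
        continue
    benchmark = base_dir / entry
    # "sharpSAT" is a placeholder; substitute the counter you want to run.
    subprocess.run(["sharpSAT", str(benchmark)], check=False)
```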

## Encodings

All three types of benchmarks are encoded in extensions of the standard DIMACS format for CNF formulas. The README files in the `basic`, `projection`, and `weighted` folders explain the extension used for the corresponding type of benchmark. All benchmarks have been compressed with gzip.
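
Because the files are gzip-compressed, they can be read without unpacking them first. A minimal sketch in Python, assuming only the standard `p cnf` problem line of the DIMACS format; the type-specific extensions described in the per-folder READMEs appear in additional lines that this snippet does not interpret:

```python
import gzip

def read_dimacs_header(path):
    """Return (num_vars, num_clauses) from a gzip-compressed DIMACS CNF file."""
    with gzip.open(path, "rt") as f:
        for line in f:
            # The problem line has the form "p cnf <num_vars> <num_clauses>".
            if line.startswith("p cnf"):
                _, _, num_vars, num_clauses = line.split()
                return int(num_vars), int(num_clauses)
    raise ValueError(f"no 'p cnf' header found in {path}")

# Hypothetical path, for illustration only:
print(read_dimacs_header("benchmarks/basic/some-set/instance.cnf.gz"))
```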

## Submitting Benchmarks

We are trying to put together a wide variety of benchmarks from as many sources as possible. If you would like to submit some new instances, contact me (but please don't attach large benchmarks to an email). Remember to include a description of how the instances were generated. Thank you for contributing!