Miroslav Velev's SAT Benchmarks
The following Boolean Satisfiability (SAT) benchmarks were generated as described in papers [C10], [C11], [C13], [C15], [C17], [C20], [C22], [C24], and [C25].
All benchmarks are available in CNF format. (Some suites are also available in SVC, BLIF, ISCAS, and Trace formats.)
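
The CNF files follow the standard DIMACS conventions: comment lines starting with 'c', a header line "p cnf <variables> <clauses>", and clauses given as lists of signed literals each terminated by 0. As a quick orientation only, here is a minimal Python sketch of a reader under those assumptions; the function name, the gzip handling, and the skipping of '%' end markers found in some archives are illustrative choices, not part of the benchmark distribution.

    import gzip

    def read_dimacs_cnf(path):
        """Return (num_vars, clauses) from a DIMACS CNF file (.cnf or .cnf.gz)."""
        opener = gzip.open if path.endswith(".gz") else open
        num_vars, clauses, current = 0, [], []
        with opener(path, "rt") as f:
            for line in f:
                line = line.strip()
                if not line or line[0] in "c%":
                    continue                      # skip comments, blanks, '%' end markers
                if line.startswith("p"):
                    _, _, v, _ = line.split()     # "p cnf <variables> <clauses>"
                    num_vars = int(v)
                    continue
                for lit in map(int, line.split()):
                    if lit == 0:                  # 0 terminates the current clause
                        if current:
                            clauses.append(current)
                        current = []
                    else:
                        current.append(lit)
        return num_vars, clauses
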


B28
VLIW-SAT-4.0: 10 satisfiable CNF formulas from formal verification of VLIW processors with instruction queues and 9-stage pipelines. These formulas are from buggy variants of the most complex model in VLIW-UNSAT-4.0.
Formats: [CNF, README].
Note: The biggest file is over 300 MB.  
B27
VLIW-UNSAT-4.0: 4 unsatisfiable CNF formulas from formal verification of VLIW processors with instruction queues and 9-stage pipelines.
Formats: [CNF, README].
Note: The biggest file is over 180 MB.  
B26
VLIW-SAT-2.1: 10 satisfiable CNF formulas from formal verification of VLIW processors with instruction queues. These formulas are from buggy variants of a more complex VLIW processor than the one used for VLIW-SAT-2.0.
Formats: [CNF, README].
Note: The biggest file is over 390 MB.  
B25
VLIW-SAT-2.0: 10 satisfiable CNF formulas from formal verification of VLIW processors with instruction queues.
Formats: [CNF, README].
Note: The biggest file is over 190 MB.  
B24
VLIW-UNSAT-2.0: 9 unsatisfiable CNF formulas from formal verification of VLIW processors with instruction queues.
Formats: [CNF, README].
Note: The biggest file is over 210 MB.  
B23
PIPE-SAT-1.1: 10 satisfiable CNF formulas from buggy variants of the pipe benchmarks, generated as presented in paper [C20].
Formats: [CNF, README].
Note: The biggest file is over 100 MB.  
B22
LIVENESS-UNSAT-2.0: 9 unsatisfiable CNF formulas from formal verification of liveness of single-issue pipelined and dual-issue superscalar DLX processors. The formulas are generated after applying abstraction techniques.
Formats: [CNF, README].
Note: The biggest file is over 130 MB.  
B21
VLIW-UNSAT-3.0: 2 unsatisfiable CNF formulas from formal verification of VLIW processors with instruction queues. The formulas are generated differently from those in VLIW-UNSAT-2.0.
Formats: [CNF, README].
Note: The biggest file is over 60 MB.  
B20
PIPE-SAT-1.0: 10 satisfiable CNF formulas from buggy variants of the pipe benchmarks.
Formats: [CNF, README].
Note: The biggest file is over 160 MB.  
B19
PIPE-OOO-UNSAT-1.1: 14 CNF formulas expressing the correctness of bigger variants of the pipe_ooo benchmarks, generated as presented in paper [C20].
Formats: [CNF, README].
Note: The biggest file is over 220 MB.  
B18
PIPE-OOO-UNSAT-1.0: 15 CNF formulas expressing the correctness of bigger variants of the pipe_ooo benchmarks.
Formats: [CNF, README].
Note: The biggest file is over 400 MB.  
B17
LIVENESS-SAT-1.0: 10 satisfiable CNF formulas from formal verification of liveness of single-issue pipelined and dual-issue superscalar DLX processors.
Formats: [CNF, README].
Note: The biggest file is over 230 MB.  
B16
LIVENESS-UNSAT-1.0: 12 unsatisfiable CNF formulas from formal verification of liveness of single-issue pipelined and dual-issue superscalar DLX processors.
Formats: [CNF, README].
Note: The biggest file is over 250 MB.  
B15
DLX-IQ-UNSAT-2.0: 32 CNF formulas from formal verification of DLX processors with multicycle functional units, exceptions, branch prediction, and instruction queues. This suite is located at SATLIB.org.
Formats: [CNF, README].
Note: The biggest file is over 300 MB.  
B14
ENGINE-UNSAT-1.0: 10 CNF formulas from formal verification of out-of-order processors with register renaming and a reorder buffer. This suite is located at SATLIB.org.
Formats: [CNF, README].
Note: The biggest file is 11 MB.  
B13
DLX-IQ-UNSAT-1.0: 32 CNF formulas expressing the correctness of DLX processors with instruction queues. This suite is located at SATLIB.org.
Formats: [CNF, README].
Note: The biggest file is over 250 MB.  
B12
PIPE-UNSAT-1.1: 14 CNF formulas expressing the correctness of wide-issue superscalar processors with in-order execution. (These are bigger variants of the pipe benchmarks.) The formulas are generated as explained in paper [C20]. This suite is located at SATLIB.org.
Formats: [CNF, README].
Note: The biggest file is over 200 MB.  
B11
PIPE-UNSAT-1.0: 13 CNF formulas expressing the correctness of wide-issue superscalar processors with in-order execution. (These are bigger variants of the pipe benchmarks.) This suite is located at SATLIB.org.
Formats: [CNF, README].
Note: The biggest file is over 300 MB.  
B10
NPE-1.0: 6 CNF formulas generated in microprocessor formal verification without exploiting the property of Positive Equality.
Formats: [CNF, README].
Note: The two biggest benchmarks are 327 MB when uncompressed with 'gunzip', and have close to 900,000 CNF variables and 15 million clauses. (A sketch for checking such counts directly from a compressed file is given after the benchmark list.)  
B9
FVP-SAT.3.0: 20 satisfiable CNF formulas from formal verification of buggy variants of an out-of-order superscalar processor that has 64 Reorder Buffer entries and can fetch/retire up to 4 instructions per cycle.
Formats: [CNF, README].
Note: Each file is approximately 25 MB.  
B8
FVP-UNSAT.3.0: 6 unsatisfiable CNF formulas expressing the correctness of out-of-order superscalar processors that have 64 Reorder Buffer entries and do not implement register renaming.
Formats: [CNF, README].
Note: The files are between 25 MB and 94 MB.  
B7
VLIW-SAT.1.1: the 100 CNF formulas from VLIW-SAT.1.0, generated in a slightly different way.
Formats: [CNF]; [ISCAS, README]; [ISCAS-CGRASP, README].
Note: The files are between 51 MB and 99 MB.  
B6
FVP-UNSAT.2.0: 21 unsatisfiable formulas and 1 satisfiable formula, generated in the formal verification of correct superscalar microprocessors.
Formats: [CNF, README]; [ISCAS, README]; [ISCAS-CGRASP, README].
Note: The biggest benchmark is 14.8 MB. The hardest benchmarks in this suite are much more challenging than those in FVP-UNSAT.1.0.  
B5
VLIW-SAT.1.0: 100 big satisfiable CNF formulas, generated in the formal verification of buggy variants of a VLIW microprocessor.
Formats: [CNF, README].
Note: The file is 114 MB; each benchmark is up to 6 MB when uncompressed with 'gunzip'.
B4
SSS-SAT.1.0: 100 satisfiable formulas, generated in the formal verification of buggy variants of a dual-issue superscalar microprocessor with exceptions, multicycle functional units, and branch prediction.
Formats: [CNF, README]; [BLIF]; [ISCAS, README]; [ISCAS-CGRASP, README]; [Trace, README].  
B3
FVP-UNSAT.1.0: 4 unsatisfiable formulas, generated in the formal verification of correct superscalar and VLIW microprocessors.
Formats: [CNF, README]; [BLIF]; [ISCAS, README].  
B2
Superscalar Suite 1.0a (SSS.1.0a): 9 satisfiable CNF formulas, generated in the formal verification of dual-issue superscalar microprocessors.
Formats: [CNF, README].  
B1
Superscalar Suite 1.0 (SSS.1.0): 48 Boolean formulas, generated in the formal verification of single-issue pipelined and dual-issue superscalar microprocessors.
Formats: [CNF, README]; [SVC, README]; [Trace, README].  
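
As noted for NPE-1.0 above, the distributed files are gzip-compressed, and some are very large. If only the size of a formula is of interest, the "p cnf" header can be inspected without writing a decompressed copy to disk. The sketch below assumes Python and uses a hypothetical file name; the actual benchmark names are listed in each suite's README.

    import gzip

    # Hypothetical file name; the real benchmark names are given in each suite's README.
    with gzip.open("benchmark.cnf.gz", "rt") as f:
        for line in f:
            if line.startswith("p cnf"):
                _, _, num_vars, num_clauses = line.split()
                print(num_vars, "variables,", num_clauses, "clauses")
                break
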

Last updated on January 6, 2004.