storm/build/bin/storm --prism qcomp/benchmarks/mdp/wlan/wlan.6.prism --prop qcomp/benchmarks/mdp/wlan/wlan.props cost_min --constants COL=0 --engine dd --ddlib sylvan --sylvan:maxmem 4096 --sylvan:threads 4 --timemem
Symbolic engine with Sylvan using 4GB memory
Storm 1.4.1 (dev)
Date: Mon Dec 9 13:22:27 2019
Command line arguments: --prism qcomp/benchmarks/mdp/wlan/wlan.6.prism --prop qcomp/benchmarks/mdp/wlan/wlan.props cost_min --constants COL=0 --engine dd --ddlib sylvan '--sylvan:maxmem' 4096 '--sylvan:threads' 4 --timemem
Current working directory:
Time for model input parsing: 0.023s.
WARN (DdPrismModelBuilder.cpp:1196): The reward model does not assign any non-zero rewards.
Time for model construction: 0.659s.
--------------------------------------------------------------
Model type: MDP (symbolic)
States: 5007548 (3894 nodes)
Transitions: 11475748 (20377 nodes)
Choices: 6350470
Reward Models: cost
Variables: rows: 13 meta variables (47 DD variables), columns: 13 meta variables (47 DD variables), nondeterminism: 7 meta variables (7 DD variables)
Labels: 2
* deadlock -> 0 state(s) (1 nodes)
* init -> 1 state(s) (48 nodes)
--------------------------------------------------------------
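A quick sanity check of the symbolic statistics above (this snippet is not part of the Storm output; the figures are copied from the log). With 47 row DD variables, the BDD can encode at most 2^47 states, so the reported state count must fit in that range, and the states-per-node ratio shows how strongly the symbolic representation compresses the state space:

```python
# Figures copied from the Storm log above; names are illustrative only.
states = 5_007_548    # "States" line
state_nodes = 3_894   # BDD nodes encoding the reachable state set
row_dd_vars = 47      # row DD variables ("Variables" line)

# 47 boolean DD variables encode at most 2^47 distinct states.
assert states <= 2 ** row_dd_vars

# Symbolic compression: how many states each BDD node represents on average.
ratio = states / state_nodes
print(f"{ratio:.0f} states per BDD node")  # roughly 1286
```

The five-million-state MDP fitting into a few thousand decision-diagram nodes is exactly why the `dd` engine is chosen for this benchmark.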
Model checking property "cost_min": R[exp]{"cost"}min=? [F ((s1 = 12) & (s2 = 12))] ...
WARN (SymbolicMinMaxLinearEquationSolver.cpp:44): Selected method is not supported for this solver, switching to value iteration.
Result (for initial states): 7625
Time for model checking: 325.098s.
Performance statistics:
* peak memory usage: 3864MB
* CPU time: 1288.496s
* wallclock time: 325.805s
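A back-of-the-envelope check of the performance statistics (again, not produced by Storm; numbers are copied from the log). The ratio of CPU time to wallclock time approximates the average core utilisation, which here comes out close to the 4 Sylvan worker threads requested on the command line:

```python
# Figures copied from the "Performance statistics" block above.
cpu_time = 1288.496   # seconds, summed across all threads
wall_time = 325.805   # seconds elapsed
threads = 4           # --sylvan:threads 4

# CPU/wallclock ratio ~ average number of busy cores during the run.
utilisation = cpu_time / wall_time
print(f"average parallelism: {utilisation:.2f} of {threads} threads")
```

A ratio near 3.95 suggests the Sylvan workers were kept busy for almost the entire model-checking phase.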