Storm.dd

Benchmark
Model: wlan v.1 (MDP)
Parameter(s): MAX_BACKOFF = 4, COL = 0
Property: cost_min (exp-reward)
Invocation (dd)
storm/build/bin/storm --prism qcomp/benchmarks/mdp/wlan/wlan.4.prism --prop qcomp/benchmarks/mdp/wlan/wlan.props cost_min --constants COL=0 --engine dd --ddlib sylvan --sylvan:maxmem 4096 --sylvan:threads 4 --timemem
Symbolic engine with Sylvan, using 4 GB of memory
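For readability, the same invocation can be written one flag per line; the flags are exactly those of the command above, and the short explanations are the standard meanings of these Storm/Sylvan options:

```shell
#!/bin/sh
# Reproduction sketch of the invocation above. Paths assume the same
# Storm build and QComp benchmark checkout as in this record.
#
#   --engine dd           symbolic (decision-diagram) engine
#   --ddlib sylvan        Sylvan as the DD library
#   --sylvan:maxmem 4096  cap Sylvan's memory at 4096 MB (the 4 GB noted above)
#   --sylvan:threads 4    4 Sylvan worker threads
#   --timemem             print time and memory statistics after checking
storm/build/bin/storm \
  --prism qcomp/benchmarks/mdp/wlan/wlan.4.prism \
  --prop qcomp/benchmarks/mdp/wlan/wlan.props cost_min \
  --constants COL=0 \
  --engine dd \
  --ddlib sylvan \
  --sylvan:maxmem 4096 \
  --sylvan:threads 4 \
  --timemem
```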
Execution
Walltime: 29.78178906440735 s
Return code: 0
Relative error: 0.0
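The relative-error field compares the computed result against a known reference value. A minimal sketch of how such a field is typically derived (the helper name is hypothetical, and the reference value is assumed to equal the reported result, 7625, since the record does not list it):

```python
def relative_error(result: float, reference: float) -> float:
    """Return |result - reference| / |reference|.

    If the reference is zero, fall back to the absolute deviation so a
    zero reference with a matching result still yields 0.0.
    """
    if reference == 0.0:
        return abs(result)
    return abs(result - reference) / abs(reference)

# The result for this run (7625, see the log below) matches the assumed
# reference exactly, giving a relative error of 0.0.
print(relative_error(7625.0, 7625.0))
```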
Log
Storm 1.4.1 (dev)

Date: Mon Dec  9 13:22:27 2019
Command line arguments: --prism qcomp/benchmarks/mdp/wlan/wlan.4.prism --prop qcomp/benchmarks/mdp/wlan/wlan.props cost_min --constants COL=0 --engine dd --ddlib sylvan '--sylvan:maxmem' 4096 '--sylvan:threads' 4 --timemem
Current working directory: 

Time for model input parsing: 0.057s.

 WARN (DdPrismModelBuilder.cpp:1196): The reward model does not assign any non-zero rewards.
Time for model construction: 0.526s.

-------------------------------------------------------------- 
Model type: 	MDP (symbolic)
States: 	345000 (2867 nodes)
Transitions: 	762252 (14365 nodes)
Choices: 	440206
Reward Models:  cost
Variables: 	rows: 13 meta variables (43 DD variables), columns: 13 meta variables (43 DD variables), nondeterminism: 7 meta variables (7 DD variables)
Labels: 	2
   * deadlock -> 0 state(s) (1 nodes)
   * init -> 1 state(s) (44 nodes)
-------------------------------------------------------------- 

Model checking property "cost_min": R[exp]{"cost"}min=? [F ((s1 = 12) & (s2 = 12))] ...
 WARN (SymbolicMinMaxLinearEquationSolver.cpp:44): Selected method is not supported for this solver, switching to value iteration.
 WARN (SymbolicMinMaxLinearEquationSolver.cpp:44): Selected method is not supported for this solver, switching to value iteration.
Result (for initial states): 7625
Time for model checking: 20.471s.

Performance statistics:
  * peak memory usage: 3859MB
  * CPU time: 82.400s
  * wallclock time: 21.090s