May 4, 2011 at 3:45 AM
Edited May 4, 2011 at 3:47 AM

What is the core team's opinion of computationally expensive analytical models that are not simulations?
I've recently replaced the matlabadin simulation code with a low-level model written in C#. As it is not a simulation, it does not suffer from the randomness present in simulations, but due to its low-level nature it is fairly computationally expensive.
A weighted directed graph is generated where each node represents a unique configuration of resources, ability cooldowns and temporary buffs (e.g. [HolyPower=0; abilityA=2s; abilityB=3s; abilityC=0s; buffD=4s]) and each edge represents the probability of transitioning from the source state to the destination state for the chosen ability usage. For example, if we use abilityC in the previous state then we might transition to [HolyPower=1; A=0.5s; B=1.5s; C=3s; D=3.5s] if the ability hits, [HolyPower=0; A=0.5s; B=1.5s; C=3s; D=3.5s] if the ability misses, or some other state if we get a proc that grants a temporary buff.
Once the reachability graph is generated, the steady-state probabilities are calculated using iterative approximation. Once the state probabilities are known, the abilities chosen are aggregated, with the final output being casts/sec for each ability used, from which DPS can be easily calculated. (Technically we output more than that, as we include damage-altering temporary buffs such as Inquisition and Sacred Duty in the output since they affect ability damage.)
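The steady-state step described above can be sketched as follows. This is an illustrative Python sketch of iterative approximation (power iteration) on a toy 3-state chain, not the actual matlabadin C# code; the real state space is many orders of magnitude larger and sparse.

```python
# Illustrative sketch: steady-state probabilities of a Markov chain by
# iterative approximation (power iteration). P[i][j] is the probability of
# transitioning from state i to state j given the rotation's ability choice
# in state i; each row sums to 1.
def steady_state(P, tol=1e-10, max_iter=100_000):
    n = len(P)
    pi = [1.0 / n] * n  # start from a uniform distribution over states
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

# Toy 3-state chain standing in for a (much larger) rotation state space.
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]
pi = steady_state(P)
# With pi known, casts/sec for an ability would be the steady-state
# probability mass of the states in which that ability is cast, divided by
# the time spent per state transition.
```

In practice the iteration count needed depends on how quickly the chain mixes, which is why convergence dominates the runtime for large rotations.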
As you can probably guess, taking such a brute-force approach is not cheap. For ProtPaladin we track Holy Power, 7 ability CDs (WoG, CS, J, AS, Cons, HW, HoW) and 4 temporary buffs (GC, SD, Inq, EG_ICD), which gives a total theoretical state space of ~10^16 states. Fortunately, realistic rotations use between 20,000 and 150,000 states. 150k-state rotations take ~0.5s to generate the graph and 3-4s to converge to steady-state probabilities. For ProtPaladin, the graph only varies with ability rotation, hit, expertise, haste, 3 talents and 1 glyph.
Further info and source code can be found at http://maintankadin.failsafedesign.com/forum/viewtopic.php?p=593320#p593320. The state generation code in matlabadin is C# as I have a .NET background and MATLAB was too slow. Incorporating it into Rawr is feasible; it just might not be desirable.




Sounds similar to what we already do for Mage, which (I believe) is a Markovian state machine. I'm sure Kavan can comment further on it.
I believe some level of optimization is needed to actually have it used in Rawr; I think it follows the full state machine for the main calculation, but assumes the rotation won't change for each item in a chart unless the stats change significantly or any known 'especially volatile' stats change (things like set bonuses, etc.).




I'm currently trying to model a good rotation for the ret module.
For ret it would be Holy Power, 6 ability CDs (TV, CS, J, Cons, HW, HoW) and 5 temporary buffs (DP proc, Exo, Inq, Zeal, AW).
But I don't have the mathematical background to implement this for retribution...




I'm happy to do the back-end state machine coding, and I'm currently in the process of adding ret rotation support alongside the prot support in the C# matlabadin code. I notice you did not include GoAK or WoG (+Eternal Glory+Selfless Healer) in your list of abilities/buffs. I presume (unlike prot) ret ignores WoG as a rotation option. How are you intending to model GoAK?
How do we incorporate this into Rawr? If we precompute state machine results, calculation is reduced to a simple lookup at runtime. Unfortunately, we would need a different calculation for each relevant talent combination (Inquiry of Faith, Divine Purpose, Sanctity of Battle, The Art of War, Sanctified Wrath = 384 combinations; Zealotry/DS can be inferred from the rotation) and each hit/exp/haste combination. Even if we only took 6 data points each for haste/exp/hit and interpolated from there, we would need 15+ MB in lookup tables per rotation (and at the very minimum we need 4 rotations to handle the Zealotry/DS talents).
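A back-of-envelope check of that size estimate, with the per-entry storage details assumed (the post doesn't say how many quantities each table entry holds; ~24 doubles is my guess, chosen to show the claimed figure is plausible):

```python
# Rough size estimate for the proposed precomputed lookup tables.
talent_combos = 384      # from the post: IoF x DP x SoB x AotW x SW combinations
grid_points = 6 ** 3     # 6 sample points each for haste, expertise and hit
entries = talent_combos * grid_points
bytes_per_entry = 24 * 8  # ASSUMED: ~24 stored quantities as 8-byte doubles
total_mb = entries * bytes_per_entry / 2 ** 20
print(f"~{total_mb:.1f} MB per rotation")  # ~15.2 MB, consistent with "15+ MB"
```

Multiplying by the minimum of 4 rotations puts the total over 60 MB, which is why shipping precomputed tables inside Rawr looks unattractive.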
It could be feasible if we precomputed results for standard talent builds, but then we're looking at at least 30s to get a result for the Talents and Glyphs screen. Short of refactoring Rawr to use loading indicators and perform calculations on a background thread, I don't see that being a usable experience.




Yes, ret ignores WoG. The GoAK strength bonus is handled as a flat bonus and the angel will be a pet, so it does not interact with the normal rotation.
It would be nice to have it depend on every talent, but I'd say 99% of all calculations will use a standard set of talents.
Since haste decreases the cooldown of CS and DS for ret, is it even possible to use this method?




That's the inherent problem with Markovian states. While they would be great for general use in small scenarios, once you get into the number of abilities we keep track of in Rawr models you hit supercomputer requirements for processing the chain once (or more than once) per calc, 5 million times in a row (during an optimize). You can use simplifying methods that utilize a cached calc until a major change occurs, which is part of what Kavan does, to alleviate the computational weight on the processor, but it's still not perfect.
Precalculating the state spaces for all possible scenarios and just using a list of ability counts and uptimes out of that on your end, then feeding those numbers into Rawr, wouldn't really be feasible either. That's a big chunk of data to embed, and new releases would have to come out to update/fix it whenever issues or new scenarios are identified.
If you can finagle the settings of your state generator so that it can perform 20,000 iterations of itself in say 0.5 seconds (ah hell, let's shoot for 1 second for now), let's give it a try and see how well it performs in Rawr.




What you're describing is what the Mage model does in the cycle analyzer. I'm not sure how efficient iterative approximation is; I'm using direct solving for steady state via LU decomposition and I have a solver for it using unsafe C# code. You can check CycleGenerator, GenericCycle and for example ArcaneCycleGeneratorBeta in the mage model. I've also branched some of that to Rawr.Base.Algorithms.MarkovProcess, but that doesn't include the improvements made since the branch.
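For contrast with the iterative approach, the direct-solve idea can be sketched like this (illustrative Python with a naive dense elimination, not Kavan's actual unsafe-C# LU solver): the stationarity condition pi(P - I) = 0 plus the normalization sum(pi) = 1 form a linear system that one factorization solves exactly, with no convergence loop.

```python
# Sketch: solve for the steady state directly as a linear system.
# Stationarity pi P = pi gives (P - I)^T pi = 0; that system is rank-
# deficient, so one equation is replaced by the constraint sum(pi) = 1.
def steady_state_direct(P):
    n = len(P)
    # A = (P - I)^T, with the last row replaced by all ones (sum constraint).
    A = [[P[i][j] - (1.0 if i == j else 0.0) for i in range(n)]
         for j in range(n)]
    A[n - 1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting (the same work an LU
    # factorization performs, written out naively for a dense matrix).
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    # Back-substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (b[r] - s) / A[r][r]
    return x

# Same toy 3-state chain as before, standing in for a real rotation.
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]
pi = steady_state_direct(P)
```

The catch, as discussed below in the thread, is memory: a dense (or even sparse) factorization of a 150k-state system is far heavier than holding two probability vectors for power iteration.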
What the mage model does in terms of caching calculations between small changes in gear is not related to cycle solving; that is on the level of cooldown stacking. I'm not using the generic cycles for general calculations. It's possible that it would be workable, but I haven't tried it. What I do instead for the complex cycles is decompose them into simpler models, trying to minimize errors, until they're at a size that can be solved symbolically (or close to it).
What you can do instead with such a complex model is use it for verification of your fast approximation models used for comparisons. Rawr already supports an asynchronous model: whenever you make changes it first computes everything normally, and then if the model supports asynchronous mode it starts a background computation that is displayed after it's finished. You can use that to display more accurate data and also display some error characteristics of the approximation model.




Kavan wrote:
I'm using direct solving for steady state via LU decomposition and I have a solver for it using unsafe C# code.
What you can do instead with such a complex model is use it for verification of your fast approximation models used for comparisons. Rawr already supports an asynchronous model. Whenever you make changes it first computes everything normally, and then if the model supports asynchronous mode it starts a background computation that is displayed after it's finished. You can use that to display more accurate data and also display some error characteristics of the approximation model.
We tried LU decomposition for prot, but with 20-150k states even MATLAB's UMFPACK died due to the memory requirements, which is why we're using brute-force iterative approximation.
As for ret haste, I intend to approximate it by modelling only 'nice' haste points that drop the CD in 0.1s increments (to prevent the state space exploding) and then interpolating those results to other haste values.
It sounds very much like I'll have to do the analytical hard yards and come up with some functions that fit the raw data.




How quickly do the "answers" change is my question. Does a 0.1s change in haste result in significantly different gear being optimal? If so, by a lot or by a tiny margin? A statistically valid margin?
Perhaps one can just solve for accuracy instead of precision and increase the "equivalency" windows further to reduce the state space. Whenever Rawr says there is a ~1000 point difference between two pieces of gear out of the 1.5 million overall points for my character's current setup, those items are really effectively equivalent. Rawr simply isn't that precise, at least not to that degree.




The main problem, I think, is the effect on user experience. The main desired property of Rawr models is that they're deterministic and that under small changes they produce similar results, except where there is an underlying discontinuity in the model.
What you don't want is that increasing crit rating by 1 gives a radically different result just due to noise introduced in the solver. Even worse is getting different results on repeated calculations of the same input.



May 17, 2011 at 2:49 AM
Edited May 17, 2011 at 3:25 AM

khanthal wrote:
A statistically valid margin? .... ~1000 point difference in two pieces of gear out of the 1.5 million ... Rawr simply isn't that precise...at least not to that degree.
Kavan wrote:
The main desired property of Rawr models is that they're deterministic and that under small changes they produce similar results, except if there is an underlying discontinuity in the model.
The Markov chain models themselves are exact* and will return the same result for each invocation. The accuracy depends on how tightly we converge, which for the prot-pally rotations modeled is to within 1e-10. A difference of 2 DPS between different ability priorities is meaningful even though actual RNG variance is much more than that.
Essentially, the calculation I am working on would return casts/s (+relevant short-term self-buffs) for each ability, with the problem being that it takes quite a lot of computation to get a result, so it is not suitable for direct usage by Rawr. From what I've analysed so far, there don't appear to be any significant discontinuities in casts/s when varying hit/exp**, so a simple set of functional approximations (probably exponential or polynomial from what I've seen so far) should give a good data fit and be fast enough for Rawr.
*excluding unmodeled variables such as latency
**the Markov chain casts/s calculation I am working on uses melee hit % and ranged hit % and not hit/exp directly, and as such discontinuities at the hit cap & exp soft/hard caps are not discontinuities in the model itself




Uhh? What? imin, we definitely use hit/exp directly by converting them into the related percentages. Please don't say we do something by not doing something else; that just confuses people.



May 17, 2011 at 3:22 AM
Edited May 17, 2011 at 3:25 AM

Jothay wrote:
Uhh? What? imin, we definately use the hit/exp directly by converting it into the related percentages. Please don't say we do something by not doing something else, that just confuses people.
"the model" I was referring to is the matlabadin markov chain casts/s calculation that I have been working and am discussing for suitability for Rawr, NOT any existing Rawr models. I have updated the wording on my post: apologies if my wording has caused
any confusion. My point was essentially that I don't have to worry about hit/exp cap as Rawr would convert hit/exp into percentages and by using these percentages, the hit/exp hard/soft caps get handled before being input into the markov chain cast/s calculation
thus resulting in a nicer casts/s calculation that one using hit/exp directly.




Ah, ok, that makes more sense. Lord knows if I read it that way, someone else would have.



May 17, 2011 at 3:24 AM
Edited May 17, 2011 at 3:30 AM

Noise on the order of 2 DPS shouldn't be much of an issue, especially if you're planning to fit it with functional interpolation (I'd suggest some form of piecewise polynomial).
EDIT: Never mind, I just realized the 2 DPS was just an example of a meaningful difference even if RNG variance is large.




iminmmnni wrote:
"The model" I was referring to is the matlabadin Markov chain casts/s calculation that I have been working on and am discussing for suitability for Rawr, NOT any existing Rawr models. I have updated the wording of my post; apologies if my wording has caused any confusion. My point was essentially that I don't have to worry about the hit/exp caps: Rawr would convert hit/exp into percentages, and by using these percentages the hit/exp hard/soft caps get handled before being input into the Markov chain casts/s calculation, thus resulting in a nicer casts/s calculation than one using hit/exp directly.
Yes, the ret module has a combat table implemented which will return % values for each attack. (Also, every cap should be in.)
I'd say we give it a try and see if the output is acceptable (if iminmmnni will do the work, since I don't have enough background).
I'd be happy to help you wherever I can. (I do have more time again to work on the module, yay!)

