I just realised something quite important: we've never been able to study variance because of randomness and curve fitting, i.e. different results across different data sets. Cycles, however, can provide us a framework to study it where the results will be the same across all data sets.

The cycles will need to stay finite in order to offer us keyframes that can be used as checkpoints:

First to 1 repeat; 2 repeats; 3 repeats (...) of either the Cycle Length or the Defining Element.

First to 1 repeat of inner cycle; first to 1 repeat of outer cycle; first to 2 repeats of outer cycle...
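As a rough sketch of how those repeat checkpoints could be tracked in practice (the stream values below are hypothetical placeholders, not real data, and `checkpoints` is just an illustrative helper name):

```python
from collections import Counter

def checkpoints(series, max_repeats=3):
    """Yield (index, value, repeat_count) the first time any value
    reaches 1, 2, 3, ... repeats (i.e. its 2nd, 3rd, ... occurrence).
    These are the 'keyframes' a finite cycle gives us to check against."""
    seen = Counter()
    for i, value in enumerate(series):
        seen[value] += 1
        repeats = seen[value] - 1  # occurrences beyond the first
        if 1 <= repeats <= max_repeats:
            yield (i, value, repeats)

# Hypothetical stream of cycle-length observations:
stream = ["CL1", "CL2", "CL1", "CL3", "CL2", "CL1"]
for i, v, r in checkpoints(stream):
    print(f"index {i}: {v} reached {r} repeat(s)")
```

The same function works whether the values are Cycle Lengths or Defining Elements; only the labels in the stream change.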

Returning to some earlier examples, I couldn't actually tell you right now whether each of these outcomes represents a finite series or not, but I am guessing they are random, as there's no defined limit:

s1, d2, d3, s3, s3, d2

CL1s, CL2s, CL3s, CL1d, CL2d

Anyway, with variance, once a series of losses/dispersion kicks in, the first rule is to wait it out because we don't know when it will end. Once the wins start to come back, and we can see at what frequency, we can start to measure the variance, all the while keeping tabs on the law of large numbers. Whatever the stats for variance turn out to be, they ought to hold true across any finite series of outcomes.
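One way to keep tabs on both at once is a running mean and running variance, updated outcome by outcome. A minimal sketch, assuming wins are encoded as 1 and losses as 0 (that encoding, and the sample series, are my assumptions, not from the original notes); it uses Welford's online algorithm so nothing needs to be recomputed from scratch:

```python
def running_stats(outcomes):
    """Welford's online algorithm: after each outcome, record
    (n, running mean, running sample variance). Watching the mean
    settle is the law-of-large-numbers check; the variance column
    is what we'd compare across finite series."""
    n, mean, m2 = 0, 0.0, 0.0
    stats = []
    for x in outcomes:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        variance = m2 / (n - 1) if n > 1 else 0.0
        stats.append((n, mean, variance))
    return stats

# Hypothetical series: a run of losses, then the wins coming back in.
series = [0, 0, 0, 1, 1, 0, 1, 1, 1]
for n, mean, var in running_stats(series):
    print(f"n={n}  mean={mean:.3f}  variance={var:.3f}")
```

If the claim holds, the variance figures from one finite series should line up with those from any other finite series drawn from the same cycle structure.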