ares289, I'll respond to your comments one by one.
The number of pockets on the wheel is not a magical force forcing a particular result, and even if it were, it would only mean that then there would also be no "independence", so the conclusion is that the constant number of pockets DOES NOT in any way contradict the theory about the lack of complete independence of each individual event.
No, it's not a magical force, but if you're choosing your bets based on past numbers alone, and each number is equally likely from one spin to the next, why should past outcomes matter? You could say that the numbers being equally likely is an assumption, but that's why I said 'past numbers alone'. In that case there is physical independence: there is no physical connection between successive spins, and where there is physical independence there is always statistical independence.
While it's true that you can't prove independence, there are lots of cases where it's intuitively obvious. I would argue that roulette is one such case because there is clearly no connection between one spin and the next, assuming normal conditions. But if it isn't intuitively obvious, there are statistical tests such as the Chi-square test for independence which you can use for any events A and B. If you can find a dependence between them and it's strong enough, you have your holy grail.
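To make this concrete, here's a sketch of what such a test could look like in Python (the use of numpy and scipy, and the choice of parity as the event, are my own assumptions for illustration, not anything from the discussion). It classifies each simulated spin as even or odd, tabulates each spin against the next one, and runs the chi-square test for independence on the resulting 2x2 table:

```python
# Sketch: chi-square test for independence between successive spins.
# Assumes a fair European wheel (numbers 0-36, uniformly random).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
spins = rng.integers(0, 37, size=100_000)  # 100k simulated spins

# Classify each spin by parity (0 = even, 1 = odd) and build a 2x2
# contingency table of (this spin's parity, next spin's parity).
parity = spins % 2
table = np.zeros((2, 2), dtype=int)
for a, b in zip(parity[:-1], parity[1:]):
    table[a, b] += 1

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
# A large p-value means no evidence of dependence between successive
# spins, which is what we expect for a fair wheel. A consistently tiny
# p-value on real wheel data would be the 'holy grail' dependence.
```

Run on data from a real wheel instead of a simulation, a strong and repeatable rejection would be evidence of exactly the kind of dependence being discussed.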
the lack of complete independence of each individual event.
Again, this is a poor understanding of 'independent'. An event cannot be independent on its own. You must always specify another event with respect to which the first event is independent, otherwise it has no meaning.
The problem is only in your head because the word INDEPENDENT doesn't change its meaning depending on the context (no matter what you say), so all you do is introduce unnecessary conceptual chaos without discovering anything new.
Actually, I partly agree with you here in that I probably overemphasized the ambiguity of the word. The main fallacy isn't so much that there is a difference between 'independent' as used normally, and 'statistically independent' as used in the context of probability, but that Ellison has ignored (either intentionally or because he was careless) the fact that independence is a relation between two events. Nowhere in his article does he refer to this; he just talks about the single 'event' of outcomes conforming to a pattern or distribution:
If every table game result is an independent event, how can we ever expect any particular number to come up at all? We can't, because there would be nothing to stop the wheel from selecting a different number, every time. And yet, the same people who say that these numerical events are immaculately independent, expect the numbers to conform with the probabilities. But if such events were truly independent, there would never be a moment, or even a sustained period, when any number could be expected to show up.
It doesn't even make sense to say 'If every table game result is an independent event', because independence is a relationship between two events. On the other hand, if he had said 'if events A and B in roulette are independent, how can we expect any particular number to come up at all?', then the first part would be correct but it would make the second part nonsense, because if A and B are independent, of course that doesn't stop any particular number coming up.
What he's actually talking about is the Law of Large Numbers, which he has conflated with independence. Outcomes conform to probabilities because of the LLN.
The Law of Large Numbers:
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.
(Wiki)
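The LLN is easy to see in a simulation. The following Python sketch (the simulation setup is my own illustration, not anything from Ellison's article) tracks the average return of a straight-up bet, which pays 35:1 and has an expected value of (35 - 36)/37 = -1/37, roughly -0.027 per unit staked:

```python
# Sketch: the law of large numbers for a straight-up roulette bet.
# A single-number bet on a fair European wheel pays 35:1, so the
# expected value per 1-unit bet is (35 - 36)/37 = -1/37 ~ -0.027.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
spins = rng.integers(0, 37, size=n)

# Net result of betting 1 unit on number 17 on every spin:
# +35 when 17 hits, -1 otherwise.
results = np.where(spins == 17, 35.0, -1.0)

expected = -1 / 37
for sample in (100, 10_000, 1_000_000):
    avg = results[:sample].mean()
    print(f"after {sample:>9} spins: average = {avg:+.4f} "
          f"(expected {expected:+.4f})")
```

The short-run averages wander all over the place, while the million-spin average sits close to -0.027, even though no individual spin is any more predictable than another.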
So Ellison seems to be making a valid point, but it's only because he has misinterpreted the concept of independence, and as a result has confused it with the law of large numbers. But these concepts don't depend on each other. Events A & B can be independent, meaning that there is no connection between A & B, and outcomes can also conform to probabilities in the long run.
Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other.
EXACTLY, so instead of misinterpreting, just learn the meaning of the word "IF", because as used in this definition it only indicates that everything written after it is only a form of ASSUMPTION and nothing more.
No, 'if' doesn't indicate an assumption here, because the statement is a definition and could be written without using 'if', e.g.:
Two events being independent, statistically independent, or stochastically independent means that the occurrence of one does not affect the probability of occurrence of the other.
Or as:
For two events to be independent, statistically independent, or stochastically independent, the occurrence of one must not affect the probability of occurrence of the other.
The 'if' only separates the two parts of the definition, and the purpose of a definition is to explain the meaning of a term which may not be very clear in terms of more familiar terms which are understood.
It is only your unfounded inference, which you are not able to prove in any way. Besides, if something is "predictable in the long run", it only makes it possible to deduce that it must ALSO be predictable in the short run, just to a lesser degree.
No, not really. There is no law of small numbers.
It is important to remember that the law only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
(Wiki)
Ellison has used the fact that the LLN means you can 'predict' that a certain pattern or distribution will form in the long term to argue that this is really the same as saying that outcomes aren't independent. But it's a fallacious argument. The pattern isn't that you can predict the position of a certain outcome in a sequence (as would be the case if successive outcomes or events were dependent), but that you can predict the distribution of outcomes in a sufficiently large sample.
https://en.wikipedia.org/wiki/Law_of_large_numbers