- Thread starter johnG2011

- #1

- #2

bpet

If p is the common probability then sigma-additivity implies a contradiction both for p=0 and for p>0.
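bpet's dichotomy can be checked with a toy computation. This is only a sketch of the argument; the helper name and the cutoffs are mine, not from the thread:

```python
import math

def partial_sum(p, n):
    """Sum of a constant singleton probability p over the first n outcomes."""
    return p * n

# Case p = 0: every partial sum is 0, so sigma-additivity would force
# P(whole space) = 0 + 0 + ... = 0, contradicting P(whole space) = 1.
assert partial_sum(0.0, 10**9) == 0.0

# Case p > 0: the partial sums exceed 1 after about 1/p terms, so the
# countable sum diverges and again cannot equal 1.
p = 1e-6
assert partial_sum(p, math.ceil(1 / p) + 1) > 1.0
```

Either way, no constant value of p lets the singleton probabilities sum to 1 over a countably infinite set.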

- #3

If p is the common probability then sigma-additivity implies a contradiction both for p=0 and for p>0.

I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.


- #4

Stephen Tashi

Science Advisor

JohnG2011,

You're talking about various examples whose probability measures are different and assuming the conclusions from one apply to the others.

Of the assertions you made, I find these interesting.

The probability of choosing an even number from the infinite countable set of natural numbers is 0.5.

What probability measure are you using to draw this conclusion? I think it contradicts what you are trying to prove. When people say "the probability that a random integer is even is 0.5", the only sensible interpretation I can make of that is as a statement about a limit of a sequence of probability distributions. The probability that an integer chosen at random from a uniform distribution on the integers from 1 to L is even is approximately 1/2 for large L, and it approaches 1/2 as a limit as L approaches infinity. However, there is no probability distribution that is the limit of these distributions.

Any element can be chosen from an infinite countable set with probability zero given a uniform distribution

If you mean something like a uniform distribution on the reals from -1 to 1, then that's true, but irrelevant to the question. In that case, the definition of the probability measure and the meaning of integration say you do a calculus-type integral, not a discrete summation. I think the problem's reference to "singletons" boxes you into using discrete summation. (If you can show it doesn't, then perhaps you've made a notable discovery.)
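The limit reading described above can be illustrated numerically. `prob_even` is a hypothetical helper for this sketch, not anything from the thread:

```python
def prob_even(L):
    """P(even) under the uniform distribution on {1, ..., L}."""
    return (L // 2) / L

for L in (10, 11, 1001, 10**6):
    print(L, prob_even(L))

# The proportion tends to 1/2 as L grows, while each singleton's
# probability 1/L tends to 0 -- so the "limit" would assign every
# point probability 0 and cannot itself be a probability distribution.
```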

- #5

disregardthat

Science Advisor

I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.

Nope, as bpet explained, sigma-additivity contradicts this. One will need the set to be uncountable or finite if all singletons are part of the sigma algebra.

- #6

Nope, as bpet explained sigma-additivity contradicts this. One will need the set to be uncountable or finite if all singletons are part of the sigma algebra.

OK. If I take every natural number divisible by three, why can't I say that the probability of selecting such a number from the set of natural numbers is 1/3?

- #7

HallsofIvy

Science Advisor

Homework Helper

I don't think there's a contradiction. Any element can be chosen from an infinite countable set with probability zero given a uniform distribution. The probability can never be greater than zero. The probability of choosing an even number from the infinite countable set of natural numbers is 0.5. The probabilities of either an odd number or an even number sum to unity. Perhaps I'm misunderstanding your argument.

The sum of all probabilities must be 1. If the probability of each possibility is 0, even an infinite sum of all "0"s is 0.

In your example, prob of even numbers being 1/2, prob of odd numbers 1/2, you have only two outcomes, "even" and "odd". You are NOT assigning a probability to each integer, so you do NOT have a probability over a countable number of outcomes.

- #8

You are NOT assigning a probability to each integer so you do NOT have a probability over a countable number of outcomes.

What I'm saying is that the natural numbers can be divided into an arbitrary number of infinite subsets: say, every number divisible by 3 and every number divisible by 7. The probabilities can be combined by inclusion-exclusion: (1/3) + (1/7) - (1/3)(1/7) = 9/21.

My point is that we can talk about probabilities over an infinite countable set despite the obvious fact that, given a uniform distribution, the probability of any given number is 0. I can still say that the selected number will have P=1/3 of being divisible by 3, P = 1/7 of being divisible by 7, and P=9/21 of being divisible by either.
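The 1/3, 1/7, and 9/21 figures can be checked as proportions over a large finite initial segment of the naturals. This is a sketch; the cutoff N and the `density` helper are my choices:

```python
def density(pred, N=10**5):
    """Proportion of {1, ..., N} satisfying pred -- a finite stand-in
    for the limiting frequency."""
    return sum(1 for k in range(1, N + 1) if pred(k)) / N

d3 = density(lambda k: k % 3 == 0)                      # ~ 1/3
d7 = density(lambda k: k % 7 == 0)                      # ~ 1/7
d_either = density(lambda k: k % 3 == 0 or k % 7 == 0)

# Inclusion-exclusion: 1/3 + 1/7 - (1/3)(1/7) = 9/21 ~ 0.4286
assert abs(d_either - (d3 + d7 - d3 * d7)) < 1e-3
assert abs(d_either - 9 / 21) < 1e-3
```

Note these are limits of finite proportions, not a countably additive probability measure, which is exactly the tension discussed in this thread.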


- #9

disregardthat

SW, it doesn't work like that.

The probability measure which assigns each singleton the probability 0 will force the measure of each subset (even the infinite ones) to be 0.

http://en.wikipedia.org/wiki/Measure_(probability)

See countable additivity.

If a probability measure on the integers assigned each singleton set probability 0, then

P({even integers}) = P({0} U {2} U {-2} U {4} U {-4} U ...) = P({0}) + P({2}) + P({-2}) + P({4}) + P({-4}) + ... = 0 + 0 + 0 + 0 + ... = 0.

If the measure of singleton sets were a positive constant p, then the measure of the even integers would be an infinite sum of p's, which diverges (but must equal 1 for P to be a probability measure).


- #10

I think this really emphasizes the role the sigma algebra plays in probability. It is viable to say the probability of selecting a multiple of 3 is 1/3, so long as you restrict the measurable events appropriately. You could restrict your measurable events to the sigma algebra generated by the events [itex]\{n|n \mod 3 = 0 \}[/itex], [itex]\{n|n \mod 3 = 1 \}[/itex], and [itex]\{n|n \mod 3 = 2 \}[/itex] and the above statement would make sense. The problem is then that we couldn't say anything about probabilities of subsets of those events, because those probabilities aren't even defined for this sigma algebra.

For this case, the infinite sample space doesn't matter; we may as well work with [itex]\Omega = \{1,2,3\} [/itex] and the uniform distribution.
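The coarse sigma algebra described above is small enough to write out explicitly. In this sketch I abbreviate each residue class mod 3 by its residue, which is my shorthand, not the thread's:

```python
from itertools import combinations

# Atoms of the sigma algebra: the residue classes mod 3,
# abbreviated by their residues 0, 1, 2.
atoms = [frozenset({0}), frozenset({1}), frozenset({2})]

# The generated sigma algebra is all unions of atoms: 2**3 = 8 events.
sigma_algebra = {
    frozenset().union(*combo)
    for r in range(len(atoms) + 1)
    for combo in combinations(atoms, r)
}
assert len(sigma_algebra) == 8

# Uniform measure: each atom has probability 1/3, so P(E) = |E| / 3.
def P(event):
    assert event in sigma_algebra, "not a measurable event here"
    return len(event) / 3

assert P(frozenset({0})) == 1 / 3       # "divisible by 3"
assert P(frozenset({0, 1, 2})) == 1.0   # whole space
# A proper subset of an atom (say, the single number 6 among the
# multiples of 3) is not in the sigma algebra, so its probability
# is simply undefined, as the post above notes.
```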

- #11

SW, it doesn't work like that.

The probability measure which assigns each singleton the probability 0 will force the measure of each subset (even the infinite ones) to be 0.

http://en.wikipedia.org/wiki/Measure_(probability)

Well, I have to admit that I've answered questions in this forum to the effect that all infinite subsets of the natural numbers must have the same cardinality as the set of all natural numbers, that is [itex]\aleph _0[/itex]. I never considered the impact on probability theory. Sampling theory involves finite samples from theoretically infinite sets. Clearly any finite interval of the positive real line will contain more natural numbers divisible by small natural numbers than by large ones. I'm going to think about this for a while.


- #12

Stephen Tashi

A slight digression: a popularized description of Erdős's work in number theory is that he introduced proofs involving probability. Does anyone know what probability spaces and measures were involved in that?

- #13

bpet

Might have been a variant of Natural Density which (I think) uses finite additivity instead of sigma-additivity.
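For reference, the natural density of a set [itex]A \subseteq \mathbb{N}[/itex] is

[itex]d(A) = \lim_{n\rightarrow\infty} \frac{|A \cap \{1,\dots,n\}|}{n}[/itex]

when the limit exists. Every singleton has density 0 while [itex]\mathbb{N}[/itex] has density 1, so density is finitely additive but not sigma-additive, which is why it escapes the contradiction discussed earlier in the thread.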

- #14

Stephen Tashi

I looked at "Natural density" on Wikipedia and it's an intuitively pleasing idea. I wonder if there is a relation between it and the "improper priors" that some people use in Bayesian statistics. Can we express most useful "improper priors" as limits of a sequence of probability distributions, each of which has support on a proper subset of the domain of the random variable?

I think one way to get a Bayesian interpretation of frequentist confidence intervals for the mean of a normal distribution is to compute an answer (e.g. the probability that the mean is in (5.0, 7.0) , a specific numerical interval) based on a uniform prior on (-L,L) and then look at the limit of the answer as L approaches infinity. However, that sequence of distributions doesn't approach a distribution. I suppose one could define the limit as a "generalized distribution" by analogy to a Dirac delta function being a generalized function.
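That limiting computation can be sketched numerically for a normal likelihood with known variance. `Phi`, `post_prob`, and the numbers are illustrative assumptions, not anything from the thread:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def post_prob(a, b, xbar, s, L):
    """P(mean in (a, b) | data) under a uniform prior on (-L, L),
    i.e. a normal posterior truncated to (-L, L)."""
    num = Phi((b - xbar) / s) - Phi((a - xbar) / s)
    den = Phi((L - xbar) / s) - Phi((-L - xbar) / s)
    return num / den

xbar, s = 6.0, 1.0  # sample mean and standard error (made-up numbers)
for L in (10.0, 100.0, 1000.0):
    print(L, post_prob(5.0, 7.0, xbar, s, L))

# As L -> infinity the truncation factor tends to 1 and the answer
# approaches Phi(1) - Phi(-1) ~ 0.68, yet the uniform priors on (-L, L)
# converge to no proper distribution.
```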

- #15

I think one way to get a Bayesian interpretation of frequentist confidence intervals for the mean of a normal distribution is to compute an answer (e.g. the probability that the mean is in (5.0, 7.0) , a specific numerical interval) based on a uniform prior on (-L,L) and then look at the limit of the answer as L approaches infinity. However, that sequence of distributions doesn't approach a distribution. I suppose one could define the limit as a "generalized distribution" by analogy to a Dirac delta function being a generalized function.

I'm not following. The Dirac delta is a distribution representing the limit of normal distributions with mean 0 as the variance goes to 0. I'm not sure the Dirac delta can even properly be called a function as such. The integral is still 1.


- #16

Stephen Tashi

Both the Dirac delta and a (fictitious) uniform distribution on minus infinity to infinity are zero at most places but still integrate to 1. I visualize the Dirac delta function [itex] \delta_m [/itex] as the limit of a sequence of normal distributions with mean m and standard deviations approaching 0. I visualize a "uniform distribution on minus infinity to infinity" as a limit of a sequence of uniform distributions from -L to L as L approaches infinity. So I think there is an analogy between the two concepts, without them being the same concept.

- #17

Both the Dirac delta and a (fictitious) uniform distribution on minus infinity to infinity are zero at most places but still integrate to 1. I visualize the Dirac delta function [itex] \delta_m [/itex] as the limit of a sequence of normal distributions with mean m and standard deviations approaching 0. I visualize a "uniform distribution on minus infinity to infinity" as a limit of a sequence of uniform distributions from -L to L as L approaches infinity. So I think there is an analogy between the two concepts, without them being the same concept.

I don't see how that gets us to sigma-additivity. It seems we could just define a function on subsets [itex]A[/itex] of the natural numbers such that [itex]f(A)=\lim_{n\rightarrow\infty} n_A/n [/itex], where [itex]n_A[/itex] is the number of elements of [itex]A[/itex] among the first [itex]n[/itex] natural numbers, so that [itex]0\leq f(A) \leq 1[/itex].


- #18

Stephen Tashi

I'm not claiming that things like the Dirac delta function are actual functions or that they define actual probability measures. Likewise, the jargon "improper prior" doesn't refer to an actual probability distribution.
