Universality

Comments

  • Eliot Rosewater


    Thanks Fremen!

    Near the start he says "Roughly speaking, this law [of large numbers] asserts that if one is making a set of samples via some random method, then as one makes the sample size larger and larger, the average outcome of those samples will almost always converge to a single number, known as the expected outcome of that random method."

    I used to think that the rate of convergence was pretty fast, or at least fast enough to be useful in most, if not all, situations. However, I ran an experiment that showed the rate of convergence, in one particular case, to be terrible.

    A friend of mine told me that one of the old classical (Greek or Roman) mathematicians devised (what we would now call) a statistical method for finding the value of pi. The mathematician drew a large square and inscribed a circle in it, so that the circle fitted perfectly inside the square. He then waited until it rained, went out and counted the number of raindrops in each region, and made some calculations.

    If a raindrop is equally likely to land anywhere, then the probability that it lands in the circle, given that it landed in the square, is (area of circle)/(area of square) = pi/4.

    And so, as per expected values,
    (# of raindrops in circle)/(# of raindrops in square) -> pi/4
    as the sample size increases.

    I implemented this as a C++ program and simulated 10 million raindrops (a rough sketch of this kind of program is at the end of this post). And the result was atrocious! It got pi to a stunning 2 decimal places of accuracy, and took a long enough time to do it, too - about 1.3 seconds.

    Food for thought!
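
    A minimal sketch of this sort of raindrop (Monte Carlo) estimator in C++ is below. It is illustrative rather than the original program: the sample count matches the 10 million mentioned above, but the mt19937_64 seed and the quarter-circle formulation are assumptions.

        #include <cstdint>
        #include <iostream>
        #include <random>

        int main() {
            // Number of simulated "raindrops" (matches the 10 million in the post).
            const std::uint64_t N = 10000000;

            std::mt19937_64 rng(12345);  // fixed seed so the run is reproducible
            std::uniform_real_distribution<double> coord(0.0, 1.0);

            std::uint64_t inside = 0;
            for (std::uint64_t i = 0; i < N; ++i) {
                // Drop a raindrop uniformly on the unit square [0,1] x [0,1].
                const double x = coord(rng);
                const double y = coord(rng);
                // Count it if it lands inside the quarter circle of radius 1;
                // the area ratio is the same pi/4 as in the inscribed-circle picture.
                if (x * x + y * y <= 1.0) {
                    ++inside;
                }
            }

            // (# in circle) / (# in square) is roughly pi/4, so multiply by 4.
            const double pi_estimate = 4.0 * static_cast<double>(inside) / static_cast<double>(N);
            std::cout << "pi estimate: " << pi_estimate << '\n';
            return 0;
        }

    Using mt19937_64 here is just one choice of generator; any decent uniform random number generator would do for this purpose.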


  • Fremen


    Yes, that's known as the Monte Carlo method. I think the story about the Romans is probably apocryphal - the earliest historical example of the method is the thought experiment in Buffon's needle problem. It only really came into its own in the 20th century, with the invention of computers.

    I'm surprised you got such lousy convergence though. Monte Carlo methods have error of order 1/sqrt(N), where N is the number of samples you generate. This means that in order to get one extra decimal place of accuracy, you need a hundred times the number of samples. Still, I would have expected three or four decimal places with ten million samples.

    That rate sounds rubbish, but it turns out to be independent of the dimension of the space you're working in. This is in contrast to, say, Riemann integration, where the computational effort needed for a fixed error grows exponentially with the dimension of the underlying space. To calculate difficult integrals in dimension higher than three or so, it turns out that Monte Carlo methods are the only game in town.
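
    A rough back-of-the-envelope version of that estimate: each raindrop is a Bernoulli trial with success probability p = pi/4, so the standard error of the pi estimate is roughly

        \sigma_{\hat{\pi}} \approx 4\sqrt{\frac{p(1-p)}{N}},
        \qquad p = \frac{\pi}{4} \approx 0.785,
        \qquad N = 10^7 \;\Rightarrow\; \sigma_{\hat{\pi}} \approx 4\sqrt{\frac{0.169}{10^7}} \approx 5 \times 10^{-4},

    which is consistent with getting roughly three correct decimal places from ten million samples.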


  • Eliot Rosewater


    I realised in the last hour that the original program I wrote was stupid. The raindrops are uniformly distributed across the square, yet for some reason I used a random number generator to generate them. :confused: So I've implemented it again, this time just iterating through points (a rough sketch of that version is at the end of this post). The results are significantly better - for 16 million raindrops I get 4 digits correct, the approximation being about 0.000008 out. I'll optimise the program now to see the 1/sqrt(N) thing in action!

    Thanks for the Wiki links - some very interesting stuff there!
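
    For comparison, a minimal sketch of the "iterate through points" version is below. The 4000 x 4000 grid (16 million points) is an assumption chosen to match the count mentioned above; placing each point at the centre of its grid cell is also just one reasonable choice.

        #include <iostream>

        int main() {
            // Deterministic version: instead of random raindrops, visit every point
            // of a regular grid on the unit square. 4000 x 4000 = 16 million points.
            const int side = 4000;
            long long inside = 0;

            for (int i = 0; i < side; ++i) {
                for (int j = 0; j < side; ++j) {
                    // Place the point at the centre of its grid cell.
                    const double x = (i + 0.5) / side;
                    const double y = (j + 0.5) / side;
                    // Same quarter-circle test as the random version.
                    if (x * x + y * y <= 1.0) {
                        ++inside;
                    }
                }
            }

            const double pi_estimate =
                4.0 * static_cast<double>(inside) / (static_cast<double>(side) * side);
            std::cout << "pi estimate: " << pi_estimate << '\n';
            return 0;
        }

    Strictly speaking this is a deterministic quadrature rather than a Monte Carlo method, so the probabilistic 1/sqrt(N) error bound above doesn't directly apply to it.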


  • Enkidu


    "I used to think that the rate of convergence was pretty fast"
    In some ways, you could view the central limit theorem as a first-order estimate of the rate of convergence.
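
    One way to make that precise: if X_1, ..., X_N are independent samples with mean \mu and variance \sigma^2, the law of large numbers says the sample mean converges to \mu, and the central limit theorem quantifies the rate:

        \frac{1}{N}\sum_{i=1}^{N} X_i - \mu \;\approx\; \mathcal{N}\!\left(0, \frac{\sigma^2}{N}\right),

    so the typical fluctuation around \mu is of size \sigma/\sqrt{N} - the same 1/sqrt(N) rate discussed above.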

