Velocity Conference 2011 Workshop on Decisions in the Face of Uncertainty, or Just Enough Statistics to be Dangerous

John Rauser from Amazon (Principal Quantitative Engineer) is giving this session on statistics. He’s a totally self-trained statistician. He knows enough to be dangerous.

  • Estimation and Uncertainty
  • Statistical Models: What should I expect?
  • Statistical Inference: Is the difference just due to chance?
  • Decisions in the Face of Uncertainty

An Exercise in Estimation

He started with an example: “How Old is Jeff Bezos?” He says it is impossible to guess to an accuracy of a day, or even a minute; he calls out the human error involved in recording the minute someone is born. When the question is asked in English, the normal response is a single number like 50. Instead, always give two numbers, a lower bound and an upper bound, chosen so that you are 90% sure the true value is between them. The true answer to the question was 47, btw…

What’s he getting at? Confidence Intervals (booyah!). Makes me feel special, since I’m a big fan of CIs and have been touting them for quite a while now.

Next he had us fill out a sheet of paper for an exercise in estimation. There were 10 questions, ranging from the number of people who flew into San Jose airport in 2009 to how many fans Lady Gaga had on Twitter on May 31, 2011. None of the questions were all that meaningful; the point Rauser was making was about the need for two numbers. He also noted that folks in the Czech Republic drink about a pint of beer a day.

Note to Geoff: Probably should discuss estimation in the presentation at BbWorld. What do you think?

Measuring reduces uncertainty, but you can never get to zero uncertainty. The one example where an interval doesn’t work is the bank example: if your bank told you that you had between $900 and $10,000, you would probably be upset with the bank. Giving a single number is called “Making a Decision”.

Statistical Models: What should I expect?

With 90% intervals, we should each have expected to get about 9 out of 10 right. In our actual session, no one got 10 out of 10 or 9 out of 10, and practically no one got even 8 out of 10 right.

What is the chance of getting 10 right out of 10 questions if the chance of getting each one right is 0.9?

Rewrite as: what is the chance of getting k “successes” out of n questions if the chance of getting each one right is p?

One other rewrite: what is the chance of k successes in n success/fail trials where the chance of success for each trial is p?

B(k;n,p) = ???

This is the binomial distribution…the problem of the points…
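For the record, the binomial probability mass function that fills in the ??? is:

$$B(k; n, p) = \binom{n}{k}\, p^{k} (1 - p)^{n - k}$$

where $\binom{n}{k}$ counts the ways to choose which k of the n trials are the successes.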

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli distribution. In other words, the binomial distribution describes an n-times-repeated Bernoulli trial. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution is a good approximation, and widely used.
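Here is my own back-of-the-envelope sketch of the quiz arithmetic in Python (Rauser used R in the session; the numbers come out the same):

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """B(k; n, p): probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.9  # 10 questions; each interval supposedly traps the truth 90% of the time
for k in range(n + 1):
    print(f"P({k:2d} right) = {binom_pmf(k, n, p):.4f}")

# If our intervals were honest 90% intervals:
#   P(10 right) ~ 0.349, P(9 right) ~ 0.387, P(8 right) ~ 0.194
# so roughly 93% of the room should have scored 8 or better. Almost no one did.
```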

He showed an interesting chart in which he created a tree structure starting at 1-2: if I win, move to the node on the left; if you win, move to the right. It’s Pascal’s triangle, from which you can even pull out the Fibonacci numbers (by summing its shallow diagonals).
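A little sketch of that triangle trick in Python, building Pascal’s triangle and summing its shallow diagonals to recover the Fibonacci numbers:

```python
def pascal_rows(n: int) -> list[list[int]]:
    """The first n rows of Pascal's triangle."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

rows = pascal_rows(10)
for row in rows:
    print(row)

# Summing the "shallow diagonals" of the triangle gives the Fibonacci numbers:
# F(d) = C(d, 0) + C(d - 1, 1) + C(d - 2, 2) + ...
fibs = [sum(rows[d - k][k] for k in range(d // 2 + 1)) for d in range(len(rows))]
print(fibs)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```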

This is great…but the real question is why we would want to use this for performance statistics. Do we or don’t we? I could see us using it for probability modeling of performance scenarios. What else would we use it for?

Rauser went back to his example of the 10 questions from earlier. We didn’t get anything like what the model predicts, so maybe our model is wrong? That is, maybe our intervals weren’t really 90% intervals, and we were all overconfident.

Statistical Inference

…talked about frequentists vs. Bayesians

Frequentist inference is one of a number of possible ways of formulating generally applicable schemes for making statistical inferences: that is, for drawing conclusions from statistical samples. An alternative name is frequentist statistics. This is the inference framework in which the well-established methodologies of statistical hypothesis testing and confidence intervals are based. Other than frequentist inference, the main alternative approach to statistical inference is Bayesian inference, while another is fiducial inference. While “Bayesian inference” is sometimes held to include the approach to inference leading to optimal decisions, a more restricted view is taken here for simplicity.

Two approaches…

1) Canned Tests: old-school (hypothesis-testing) statistics
  • Student’s t-test

2) Direct Simulation

Check out a 2005 paper by George Cobb that talks about why the computer should be used for statistical modeling. Rauser used R as his statistical analysis tool, creating a simple histogram of the example from the presentation to figure out the p-values.
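A rough Python version of the direct-simulation idea (the session used R, and the observed score of 5 below is purely hypothetical): simulate the quiz many times under the “honest 90% intervals” model and count how often a score at least as bad as the observed one shows up.

```python
import random

random.seed(42)

def quiz_score(n_questions: int = 10, p: float = 0.9) -> int:
    """One simulated quiz: each interval traps the truth with probability p."""
    return sum(random.random() < p for _ in range(n_questions))

trials = 100_000
scores = [quiz_score() for _ in range(trials)]

observed = 5  # hypothetical score -- substitute what you actually got
p_value = sum(s <= observed for s in scores) / trials
print(f"P(score <= {observed} if intervals are truly 90%) ~ {p_value:.5f}")
# A tiny p-value says the "we are well calibrated" model is hard to believe.
```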

He referenced a web site worth checking out called Stats with Cats.

  • Normal Distribution
  • Exponential Distribution
  • Gamma Distribution: time for a sequence of tasks (plotting web site latency); see the sketch after this list
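A minimal sketch of the gamma-as-latency idea, assuming a made-up page load composed of 5 sequential backend calls averaging 40 ms each (the sum of independent exponential stages is gamma-distributed):

```python
import random

random.seed(0)

# Hypothetical model: one page load = 5 sequential backend calls, each
# exponentially distributed with a 40 ms mean. The sum of independent
# exponential stages is Gamma(shape=5, scale=40 ms).
shape, scale_ms = 5, 40.0
samples = sorted(random.gammavariate(shape, scale_ms) for _ in range(100_000))

for q in (0.50, 0.90, 0.99):
    print(f"p{int(q * 100):02d} latency: {samples[int(q * len(samples))]:.0f} ms")
```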

Decisions in the Face of Uncertainty

Started with a problem: how many business cards are in this stack?

Souders guessed 202… then changed it to 150 to 250, giving a 90% confidence interval. On a normal curve, 90% of the area lies between the lower and upper bounds, with 5% to the left of the lower bound and 5% to the right of the upper. A normal is defined by its mean and standard deviation: the mean is (upper + lower) / 2, and the standard deviation comes from mean - lower = 1.64 × stdev (1.64 being the z-score that cuts off the 5% tail from earlier), so stdev = (mean - lower) / 1.64.

In Steve’s head: Mean = 200 and STDEV = 30
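The same conversion as a tiny Python helper (using the talk’s rounded 1.64; 1.645 is the more precise 5%-tail value):

```python
Z_90 = 1.64  # z-score from the talk; 1.645 is the more precise 5% tail value

def normal_from_interval(lower: float, upper: float) -> tuple[float, float]:
    """Turn a 90% confidence interval into a normal's (mean, stdev)."""
    mean = (lower + upper) / 2
    stdev = (mean - lower) / Z_90
    return mean, stdev

print(normal_from_interval(150, 250))  # (200.0, ~30.5) -- Steve's numbers
```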

What’s Steve’s guess? It’s 202. The natural answer is 200. Why? It’s the value most likely to be correct. But what are we trying to do: maximize our chance of being correct, or maximize something else? Guess right and you win… guess more and you win more.

He was trying to get us to realize that you want to maximize the expected value, not just the probability of being right.
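A sketch of that expected-value logic, under a payoff rule my notes don’t spell out exactly: assume a correct guess of g pays g (guess more, win more). Then you maximize g times the probability of g, and the best guess drifts a bit above the most likely value:

```python
import math

MEAN, STDEV = 200.0, 30.0  # Steve's belief about the stack, from above

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Assumed payoff rule (not verbatim from the talk): a correct guess of g pays g.
# Expected value of guessing g is then roughly g * P(count = g), using the
# normal density as a stand-in for that probability.
best = max(range(100, 301), key=lambda g: g * normal_pdf(g, MEAN, STDEV))
print(best)  # ~204: a little above the most likely value of 200
```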

Use Sage to help refresh your calculus skills. Wish this had been around when I was in high school.
