# Crash Course in A/B testing for Beginners


Published on March 7, 2014

Author: WayneLee9

Source: slideshare.net

## Description

This is a crash course in A/B testing from a statistical perspective. The focus is on the overall idea and framework, assuming very little prior experience or knowledge in statistics.

## Crash Course in A/B Testing: A Statistical Perspective

Wayne Tai Lee

## Roadmap

- What is A/B testing?
- Good experiments and the role of statistics
- Similar to proof by contradiction: "tests"
- Big data meets classic asymptotics
- Complaints with classical hypothesis testing
- Alternatives?

## What Is A/B Testing?

- An industry term for a controlled, randomized experiment comparing a treatment group to a control group.
- An age-old problem, especially with human subjects.

## What Most People Know

Gather samples, assign treatments (A or B), apply the treatments, measure the outcome, and compare. Ideally the only difference between the two groups is the treatment.

## Reality

Variability enters from three places: the samples/inputs, the treatment itself, and the measurement. How do we account for all of that?

## Confounding

If there is variability in addition to the treatment effect, how can we identify and isolate the effect of the treatment?

## Three Types of Variability

- **Controlled variability**: systematic and desired, i.e. our treatment.
- **Bias**: systematic but not desired; anything that can confound our study.
- **Noise**: random error, not desired; it won't confound the study, but it makes it hard to make a decision.

## How Do We Handle Each Source?

- Measurement variability: good instrumentation.
- Sample/input variability: randomize assignment, which converts bias into noise. Your population can still be skewed or biased, but that only restricts the generalizability of the results.
- Metric variability: think about what you want to measure and how, and minimize the noise level in the metric.

## A Good Experiment in General

- Use good design and implementation to avoid bias.
- For unavoidable biases, use randomization to turn bias into noise.
- Plan well to minimize noise in the data.

## How Do We Deal with Noise?

This is the bread and butter of statisticians:

- Quantify the magnitude of the treatment effect.
- Quantify the magnitude of the noise.
- Then just compare (most of the time).

## Formalizing the Comparison

The logic is similar to proof by contradiction:

- Assume the observed difference is due to chance (noise).
- See how strongly the data contradicts that assumption.
- If the surprise surpasses a threshold, reject the assumption.
- Nothing is ever "100%".

## Difference Due to Chance?

Red = treatment, black = control:

| ID | PV | Group |
|---|---|---|
| Person 1 | 39 | treatment |
| Person 2 | 209 | treatment |
| Person 3 | 31 | treatment |
| Person 4 | 98 | control |
| Person 5 | 9 | treatment |
| Person 6 | 151 | control |

Let's measure the difference in means: the treatment mean is 72, the control mean is 124.5, so Diff = -52.5. ...So what?

If there was no difference from the treatment, shuffling the treatment labels can emulate the randomization of the samples. Two example shuffles:

- Diff = 122.25 - 24 = 98.25
- Diff = 107.5 - 53.5 = 54

50,000 shuffles later, we have a whole distribution of differences to compare against. 46.5% of the permutations yielded a difference at least as large in magnitude as our original -52.5. Are you surprised by the initial results?

## "Tests"

Congratulations: you just learned the permutation test! The 46.5% is the p-value under the permutation test.

Problems:

- Permuting the labels can be computationally costly; this wasn't possible before computers.
- Statistical theory says there are many other tests out there.
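The shuffle-and-compare procedure above can be sketched in a few lines of Python. The red/black group labels are reconstructed here from the reported means (72 and 124.5), since the slide colors don't survive in text. With only six people there are just C(6, 4) = 15 possible relabelings, so we can enumerate every shuffle exactly instead of sampling 50,000 of them; the exact answer, 7/15, is about 46.7%, matching the slide's 46.5% up to sampling error.

```python
from itertools import combinations
from statistics import mean

# Page views from the slide; group labels reconstructed from the
# reported group means of 72 (treatment) and 124.5 (control).
pv = {"Person 1": 39, "Person 2": 209, "Person 3": 31,
      "Person 4": 98, "Person 5": 9, "Person 6": 151}
treated = {"Person 1", "Person 2", "Person 3", "Person 5"}

def diff_in_means(treat_ids):
    control_ids = set(pv) - set(treat_ids)
    return (mean(pv[i] for i in treat_ids)
            - mean(pv[i] for i in control_ids))

observed = diff_in_means(treated)   # 72 - 124.5 = -52.5

# With six people there are only C(6, 4) = 15 relabelings, so we can
# enumerate all of them instead of sampling 50,000 random shuffles.
shuffled = [diff_in_means(t) for t in combinations(pv, 4)]
p_value = sum(abs(d) >= abs(observed) for d in shuffled) / len(shuffled)
# p_value = 7/15, roughly the 46.5% on the slide
```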

## The Standard t-test

1. Calculate the difference: delta = mean_treatment - mean_control.
2. Assume delta follows a Normal distribution centered at 0, then calculate the p-value: the sum of the tail areas beyond the observed delta.
3. If the p-value < 0.05, reject the assumption that there is no difference between treatment and control.

## Big Data Meets Classic Stats

Wait, our metrics may not be Normal! But we care about the *mean of the metric*, not the actual metric distribution. The Central Limit Theorem saves us: the mean of the metric will be approximately Normal if the sample size is LARGE.
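A quick simulation illustrates the claim, using a hypothetical, heavily skewed page-view metric (exponentially distributed with mean 100, chosen here purely for illustration): even though individual draws are far from Normal, the sample mean behaves like a Normal with the predicted standard error.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical, very non-Normal metric: page views drawn from an
# exponential distribution with mean 100 (heavily right-skewed).
def metric_mean(n):
    return mean(random.expovariate(1 / 100) for _ in range(n))

# The CLT says the *mean* of n draws is approximately Normal with
# mean 100 and standard error 100 / sqrt(n), despite the skew.
means = [metric_mean(500) for _ in range(2000)]
se = 100 / 500 ** 0.5
# Fraction of sample means within 1.96 standard errors of 100;
# the Normal prediction is about 95%.
coverage = sum(abs(m - 100) <= 1.96 * se for m in means) / len(means)
```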

## Assumptions of the t-test

- Normality of the %delta: guaranteed with large sample sizes.
- Independent samples.
- Not too many 0's.

That's it! Easy to automate, simple, and general.
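Under these large-sample assumptions, the t-test reduces to a simple Normal (z) test, which is easy to sketch with the standard library alone. This is a minimal sketch with made-up simulated data; for small samples you would use the t distribution proper (e.g. via a statistics library).

```python
import math
import random
from statistics import mean, pvariance

def large_sample_test(treatment, control, alpha=0.05):
    """Two-sample test of 'no difference in means', using the Normal
    approximation that the CLT justifies for large samples."""
    delta = mean(treatment) - mean(control)
    # standard error of the difference in means
    se = math.sqrt(pvariance(treatment) / len(treatment)
                   + pvariance(control) / len(control))
    z = delta / se
    # two-sided p-value: total Normal tail area beyond |z|
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return delta, p_value, p_value < alpha

# Hypothetical simulated example: a true lift of 1.0 on a noisy metric.
random.seed(0)
treatment = [random.gauss(11.0, 2.0) for _ in range(1000)]
control = [random.gauss(10.0, 2.0) for _ in range(1000)]
delta, p_value, reject = large_sample_test(treatment, control)
```

With 1,000 users per group the true lift of 1.0 dwarfs the standard error (about 0.09), so the test rejects decisively.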

## What Are "Tests"?

- Statistical tests are just procedures that depend on data to make a decision.
- In engineering terms: statistical tests are functions that take in data and treatment labels, and return a boolean.

Guarantees:

- By comparing the p-value to a 5% threshold, we control P( test says difference exists | in reality NO difference ) <= 5%.
- By setting the power of the test to be 80%, we control P( test says difference exists | in reality difference exists ) >= 80%. Increasing power often requires more data.
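The "more data" point can be made concrete with the standard sample-size formula for a two-sided test under the Normal approximation. This is a planning sketch, not an exact calculation: `delta` is the smallest true difference you want to detect and `sigma` is the metric's standard deviation, both of which you must assume up front.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Rough n per group to detect a true difference `delta` on a
    metric with standard deviation `sigma` (Normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

n = sample_size_per_group(delta=0.5, sigma=2.0)
```

Note the inverse-square dependence on `delta`: halving the effect you want to detect quadruples the required sample size.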

## Meaning

Guarantees through conventional thresholds:

| Reality | Test says "no difference" | Test says "difference exists" |
|---|---|---|
| Useless treatments (no real difference) | > 95% | <= 5% (significance level) |
| Impactful treatments (real difference) | < 20% | >= 80% (power) |

These guarantees are most appropriate over repeated decision making (e.g. deciding, account by account, spammer or not).

- Not seeing a difference could mean there is no difference, or that there wasn't enough power.
- Seeing a difference could mean there is a difference, or that you got lucky/unlucky.
- Your specific treatment is either impactful or not (100% or 0%); the guarantees describe the procedure over many experiments, not your one experiment. Not what most people want to hear...

## Complaints with Hypothesis Testing

- People get really stuck on p-values and tests: confusing, boring, and formulaic.
- Statistical significance != scientific significance. You could detect a 0.000001 difference; so what?
- Multiple hypothesis testing: a 5% false-positive rate is 1 out of 20, which is quite high (see http://xkcd.com/882/). Most published results are still false (Ioannidis 2005).
- What is it answering? Nothing specific about your particular test: the probabilities are over repeated trials.

## Abuse: The Prosecutor's Fallacy

Both children of a British mother died within a short period of time, and the mother was convicted of murder because the p-value was low: if she were innocent, the chance of both children dying would be low, i.e.

p-value = P( two deaths | innocent )

In fact, we should be looking at P( innocent | two deaths ). Confusing the two is the prosecutor's fallacy.

## Example: The Baseline Matters!

Among all mothers, guilty mothers are a tiny sliver and innocent mothers are the vast majority; a small fraction of each group suffers two deaths. The p-value can be small, but the baseline of innocent mothers can be huge, so most mothers with two deaths may still be innocent.

## Any Alternatives?

P( innocent | two deaths ) is what we want... but does it make sense? Bayesian methodology computes P( difference exists | data ), but this requires knowing P( difference exists ), i.e. the prior.

- Philosophical debate: "What is a probability?"
- Easy to cheat the numbers.
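Bayes' rule makes the baseline argument concrete. Every number below is invented purely for illustration (none are from the actual case): even with a tiny "p-value" P(two deaths | innocent), the posterior P(innocent | two deaths) can be overwhelming when innocent mothers vastly outnumber guilty ones.

```python
# All numbers here are made up purely to illustrate the base-rate
# effect; they are not taken from the actual case.
p_guilty = 1e-6                   # prior: guilty mothers are very rare
p_deaths_given_innocent = 1e-5    # small -- this is the "p-value"
p_deaths_given_guilty = 0.9

p_innocent = 1 - p_guilty
# total probability of observing two deaths
p_deaths = (p_deaths_given_guilty * p_guilty
            + p_deaths_given_innocent * p_innocent)

# Bayes' rule: the quantity the court should have looked at
p_innocent_given_deaths = p_deaths_given_innocent * p_innocent / p_deaths
# roughly 0.92: likely innocent despite the tiny "p-value"
```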

## Questions?

- How do we deal with multiple hypothesis testing?
- What are we doing in the company?
- Rumor has it that "multi-armed bandit > A/B testing"?

