Thread: 23andMe update
View Single Post
Old 12-09-2013, 06:05 PM
johnt johnt is offline
Senior Member
 
Join Date: Apr 2009
Location: Stafford, UK
Posts: 1,059
15 yr Member

Tupelo3 asks:

"Why can't a customer, like me, be given a reasonable answer as to why one genetic testing company tells me I have a high risk for a disease and a competitor, using the same raw data, informs me I have lower than average risk?"

A possible answer can be found in the article pointed to by Debbi, to whom my thanks. It is by John Wilbanks. He writes [1]:

"[A] “traditional” submission to the FDA would be of a very specific kind of analysis based on randomized controlled trials. It is designed to keep bad things from happening to people, not to make sure good things happen to people... Modern tech culture doesn’t work that way. Bayes’ rule is about probability. It’s a different way of knowing that you know something, and it’s one in which there is far more tolerance for uncertainty than the FDA is accustomed to."

I suspect the differences stem from whether frequentist inference or Bayesian inference is used and, if the latter, what prior distribution is used.

As the sample size increases the estimates produced by the two approaches converge, but for small samples the different methods can give very different results.
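To sketch this convergence numerically (the setup and names here are mine, not anything the testing companies actually do): estimate the heads probability of a coin from n tosses, once as a raw frequency and once with prior experience mixed in as pseudocounts.

```python
import random

def estimates(tosses, prior_heads=0.5, prior_tails=0.5):
    """Frequentist and Bayesian-style estimates of P(heads).

    tosses: list of 0/1 outcomes (1 = head).
    The Bayesian estimate adds pseudocounts, i.e. treats past
    experience as worth one extra (half-head, half-tail) toss.
    """
    heads = sum(tosses)
    n = len(tosses)
    freq = heads / n
    bayes = (heads + prior_heads) / (n + prior_heads + prior_tails)
    return freq, bayes

random.seed(1)
for n in (1, 10, 100, 10000):
    tosses = [random.randint(0, 1) for _ in range(n)]
    freq, bayes = estimates(tosses)
    # the gap between the two estimates shrinks roughly like 1/n
    print(n, round(freq, 3), round(bayes, 3), round(abs(freq - bayes), 4))
```

At n = 1 the two numbers can disagree badly (0 versus 0.25 after a single tail), while by n = 10000 they are indistinguishable to three decimal places.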

Normally in mathematics there is a clear consensus as to what is right and what is wrong, but this is an area where there is much disagreement.

To get a flavour of the problem, let's look at the calculation of the probability of getting a head in a coin toss, where the coin is possibly biased. To keep things simple, we will toss the coin just once (i.e. a sample size of 1). Note: I'm simplifying both approaches.

Suppose we get a tail.

The frequentist would tally the results (heads 0, tails 1) and conclude that the probability of a head = 0/1 = 0.

The Bayesian says: we don't know anything specific about this coin, but we know from past experience that coins are not usually biased, so we start with an estimate of 0.5 for the probability of a head. After the coin is tossed we adjust this value. We could do this in many ways. For instance, we could give our past experience the same weight as one toss of the coin. This leads to an updated estimate of 0.5*0.5 + 0.5*0 = 0.25.
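Both calculations fit in a few lines (a sketch; the variable names are mine):

```python
# Outcome of the single toss: one tail, no heads
heads, tosses = 0, 1

# Frequentist: relative frequency of heads in the data
freq = heads / tosses                 # 0/1 = 0.0

# Bayesian: average the prior estimate (0.5) with the observed
# frequency, giving past experience the same weight as one toss
prior = 0.5
bayes = 0.5 * prior + 0.5 * freq      # 0.5*0.5 + 0.5*0 = 0.25

print(freq, bayes)
```

This equal-weight average is, I believe, the same as adding half a pseudo-head and half a pseudo-tail to the tally before dividing: (0 + 0.5)/(1 + 1) = 0.25. A different prior, or a different weighting, gives a different answer, which is exactly the kind of disagreement the original question asks about.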

Reference

[1] http://www.xconomy.com/national/2013...iness-model/2/

John
__________________
Born 1955. Diagnosed PD 2005.
Meds 2010-Nov 2016: Stalevo(75 mg) x 4, ropinirole xl 16 mg, rasagiline 1 mg
Current meds: Stalevo(75 mg) x 5, ropinirole xl 8 mg, rasagiline 1 mg