Put together, a couple of stories in the NYTimes today show that while preventive medicine is theoretically the way of the future, it’s going to be a cultural challenge getting the public to sync up with the program.
First, there’s Tara Parker-Pope’s column about the American Academy of Pediatrics recommendation to prescribe statins to children as a long-term preventive measure against heart disease. The idea is to identify those children at a higher lifetime risk for heart disease as early as possible - as early as eight years old - and take preventive measures to ward off the disease.
The backlash comes from pediatricians, who flag that there is scant evidence that it’s safe to take statins over several years, let alone decades, let alone 40 or 50 years.
The second story kicks in at the opposite end of life: It concerns recommendations for elderly women to undergo multiple mammograms every year to screen for breast cancer. In this case, there’s somewhat more sound evidence for the intervention.
The mammography study, published in May in The Journal of Clinical Oncology, looked at the records of more than 12,000 patients aged 80 and older who were given diagnoses of breast cancer from 1996 to 2002. It found that among those who had a mammogram every year or two before their diagnosis, 68 percent found the cancer at an early stage, compared with 33 percent of those who skipped mammograms altogether.
Five years after the breast cancer diagnosis, 75 percent of the frequent screeners were alive, compared with only 48 percent of those who had not been screened for at least five years before their cancer was found.
So what’s the controversy? Basically, it boils down to the fact that by the time women hit 85 or 90, the odds that they’ll die of breast cancer are fairly low. They’re so old, in other words, that they’re more likely to die of something else first.
In both cases, the issue seems to pivot on one question: What constitutes sufficient evidence to recommend screening for large populations? Is one study enough? And when you’re talking about the very old or the very young, screening measures risk bumping up against a “yuck factor” - the idea that we are medicalizing populations, or forcing people into medical interventions, when they should “just be living.”
My sense is that these are simply the first bumps along a pretty clear path towards a whole arsenal of screening panels. In a year or two, pretty much every demographic - be it the very young, the very old, or some slice in between - will have a handful of screening tests that they fall under. The sense of outrage that comes with these recommendations will fall away, because it’ll be up to us, as individuals, to decide whether or not to plug into these panels. But better to have the option of engaging early, and face our risks, rather than wait for the worst to happen.
Published by: tgoetz on July 8th, 2008 | Filed under Epidemiology, screening
Nearly 30 years after smallpox was eradicated from the face of the earth, it still stands alone as the only pathogen to have been deliberately eliminated (though efforts on guinea worm and polio are getting close). Catching up on back issues of Science, I was surprised to learn that, at long last, there is another virus very close to eradication: rinderpest. The only catch: it doesn’t affect humans. But that doesn’t make the prospect of rinderpest eradication any less stunning.
Quick background: Rinderpest is a viral disease that afflicts livestock, mainly cattle. It is brutal, often killing a third of a herd. A century ago it spread throughout Asia, Africa, and Europe, but various efforts, culminating in a sustained international campaign begun in 1994, have driven it to isolated patches in Africa. Lately it’s been confined to Kenya, and now it may even be gone from there.
Just because it’s a bovine disease, though, doesn’t mean it doesn’t have human impact. Consider this passage from the Science story (subscription required), describing an outbreak in southern Africa in 1897 that killed about 90% of the cattle population, as well as other livestock and local game:
With herding, farming, and hunting all but gone, mass starvation set in. An estimated one-third of the population of Ethiopia and two-thirds of the Maasai people of Tanzania died of starvation. The rinderpest epizootic also altered the continent’s ecological balance by reducing the number of grazing animals, which had kept grasslands from turning into the thickets that provide breeding grounds for the tsetse fly. Human sleeping sickness mortality surged.
More recent outbreaks have likewise proven devastating to cattle and human populations alike; a particularly virulent outbreak in Sudan in the late 1980s killed 80% of calves, and with cow milk unavailable, human children began to starve, resulting in a horrible famine.
Another example of how we’re just a part of one big ecosystem.
Eradication is on target for 2010. Can’t wait to toast this one.
Published by: tgoetz on June 27th, 2008 | Filed under Disease, global health
OK - A couple more thoughts on this move by health departments in California and New York to regulate personal genomics. I’ve made my quasi-libertarian case that this is my information and shouldn’t be mediated by an under-informed (and possibly antagonistic) physician gatekeeper. And I’ll leave the companies to make their own case on the issue of lab oversight.
But now let me make an argument on public health grounds - the home turf, after all, of these state agencies. To my mind, their actions will directly contravene their own mandate, and will have the result of reducing the public’s health.
The California DPH says it’s acting to “protect consumers.” As Wired Science’s Alexis Madrigal ferreted out, in a nifty bit of reporting, CDPH’s Karen Nickel said in a June 13 meeting that the state’s primary concern is that personal genomics companies are creating the “worried well” - citizens who stumbled into a level of knowledge about their genome that they were unprepared for, and who may now be fretting in a way detrimental to their health and well-being. Put aside the fact that, as a public health matter, the “worried well” is a supremely thin basis for action (what, pray tell, is the prevalence of the “worried well” in California? The incidence? The relative risk of learning one’s genome? What sort of epidemiological studies have been performed to measure this population?). And put aside the fact that, as others have noted, the customers of 23andMe and Navigenics and other personal genomics companies are, in demographic terms, probably the *least* likely to be categorized as “uninformed” or naive. These are early adopters; they’re paying lots of money (opting in), and they are probably far more prepared to reckon with genomic information than the typical citizen. But put all that aside.
My argument is simply that by restricting personal genomics to a physician-vetted service, these state public health departments would be eviscerating the actual public-health utility from genomics. The whole *point* of learning one’s genomic predispositions is as a predictive and preventative tool. Learn early, so as to change our behaviors, intervene early, and either skirt or reduce the prospects of disease. This is a *long term* tool. But by regulating the service, these state health departments would severely impinge the opportunity to make the largest public health impact, in two specific regards:
1) Public health, by definition, is about populations, not individuals, and Nickel makes a quasi-population argument when she identifies this group of “worried well.” OK, let’s take a stab at quantifying this. The worried well would be some fraction of personal-genomics customers; let’s give Nickel a big gimme and say that 20% of customers would somehow overreact in a way that’s detrimental to their health (I’m of course making that up, and it’s almost absurdly high, but let’s go with it). Stress would be the most obvious detriment, but it could be something like taking unnecessary medication or supplements.
Now consider what percentage of personal-genomics customers actually engage with their genomic information in a way that’s *beneficial* to their health, as intended. These people pay their money, get their results, spot their risks, and change their lives - often in small ways, occasionally in big ones. Let’s low-ball it and say that 60% of all customers act on their results (paying $1,000 or $2,500 is actually a significant motivator, but let’s assume 40% just ignore the results altogether). And of course not all results would be positive; some would be null. So let’s take another slice and say some fraction of that 60% actually measurably benefit, either in peace of mind or in some slight improvement in diet, exercise, or doctor’s visits. Let’s say half of the 60% - so 30%. Even at that low number, that is very clearly a net positive - the public’s health at large has been improved.
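For what it’s worth, the back-of-the-envelope math above is simple enough to sketch out. This is just my own hypothetical figures run through a toy tally - every number here is made up, as stated above:

```python
# Toy tally of the hypothetical figures above - none of these
# fractions are measured data; they're the post's own made-up numbers.

def net_health_effect(customers, frac_harmed, frac_act, frac_benefit):
    """Return (harmed, benefited, net) counts for a customer population."""
    harmed = customers * frac_harmed                 # the "worried well"
    benefited = customers * frac_act * frac_benefit  # actors who benefit
    return harmed, benefited, benefited - harmed

# 1,000 customers; 20% harmed; 60% act on results; half of those benefit
harmed, benefited, net = net_health_effect(1000, 0.20, 0.60, 0.50)
print(harmed, benefited, net)  # 200.0 300.0 100.0
```

Even with the “worried well” fraction set absurdly high, the benefited group outnumbers the harmed one.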
Of course, I’m making up these figures, and really it’s probably impossible to measure (though I bet 23andMe and Navigenics are crafting customer surveys to help fill in this picture). But by any informed assessment, the net potential for improving the public’s health far outweighs the possible detrimental effects of the “worried well.” Vaccines and even exercise have their detrimental side effects, sure, and so will personal genomics. But if you’re tasked with improving the public’s health, as these agencies are, why not consider the benefits as well as the risks?
2) So genomics is useful as a predictive tool: it gives us a peek into our long-term health prospects and an opportunity to intervene and improve those prospects. The fact that consumers in California can, for the moment, engage with that information at their own behest means they are getting the information when they want it - which is, by definition, as early as possible.
So what’s the logical consequence of forcing a physician into the picture as a middleman? Well, it’s a pretty good guess that it’ll delay people from getting the information. Put physician phobias, reluctance to schedule a visit, and all sorts of other procrastinations together, and I think this would result in less and later genotyping: a significant delay in when this information reaches people, as well as a significant reduction in the number of people who bother to jump through the extra hoops. The net result, again: a squandered opportunity to improve the public’s health.
I would be the first to acknowledge that the actual science is fairly raw here; we’re in the early days of using our genomes for actual health decisions. But that’s the point: better to get familiar with the information now, when it’s fairly low-impact, and work out the kinks than to wait for the science to somehow emerge fully-formed and neatly packaged. Because if we’re waiting on the physician community for that day, it’ll never come.
But assuming the public health department acknowledges that genetics *does* have some utility for our health, then I’d remind them that a fundamental principle of public health is awareness - give citizens information early so that they can avoid putting themselves at risk. That principle drives public health’s actions against smoking, infectious disease, sexually transmitted disease, natural disasters, and so many other threats. Likewise, it drives their actions on positive behaviors like proper nutrition and exercise. So why, in the case of genomics, should the same principle not apply? Why, in this case, do state health departments think the public should be prevented from learning about their risks?
Published by: tgoetz on June 27th, 2008 | Filed under Epidemiology, Technology, Policy, Genetics
…but this op-ed in the New York Times, by Gary Hart, really strikes me as a profound framing of what the future bodes.
Regardless of your politics, you have to consider his list of challenges that the next president - whoever he may be - will face:
They include globalized markets; the expansion of the information revolution into places like China; the emergence of new world powers including India and China; climate deterioration; failing states; the changing nature of war; mass migrations; the proliferation of weapons of mass destruction; viral pandemics; and many more.
As a framing device, this is a brilliant list. We really are at a point in the nation and in the world where our politics have fallen far out of step with reality. Just consider, for instance, the stuff that this blog typically traffics in - genomics, public health, infectious disease, science in general. To my mind, these are the things that will force our future, yet how often do any of them come up in political dialogue?
Viral pandemics, for instance, are a perfect example of what our politicians should be protecting us against - a perfect role for government, really, where the free market will be unlikely to react - yet how often does the topic come up?
Anyway, as I say, forgive the politics, but definitely worth a read.
Published by: tgoetz on June 25th, 2008 | Filed under Policy, history
Though much attention has been paid - here at Epidemix and elsewhere - to the power of genomics as a predictive tool for disease, there are other approaches to forecasting risk that are potentially more helpful, equally bold, if somewhat less sexy. I had the chance a couple weeks ago to learn about one: a new predictive test for diabetes risk developed by Tethys Bioscience. It is a cool tool, and I think it represents a new breed of diagnostics and predictive testing.
The idea behind the Tethys test, called PreDx, is to create a tool that can accurately identify those at increased risk of developing type II diabetes. Diabetes, we know, is one of the fastest-growing diseases in the country (and the world), accelerated by the upsurge in obesity. Some 24 million Americans have diabetes, with another 2 million cases diagnosed annually. Another 60 million Americans are at high risk of developing diabetes, many of them obese or overweight.
The traditional tool for diagnosing the disease, as well as for diagnosing a *risk* of developing the disease, is a blood glucose test. In the so-called “gold standard” fasting test, a blood sugar value of 140 mg/dl or above constitutes diabetes, while normal levels run between 70 and 110 mg/dl. You can see the issue here: What does a value between 110 and 139 mean? This is the problem with these firm cut-offs - their you-have-it-or-you-don’t nature means that you’re failing to capture people until they have a disease. We’re missing the opportunity to get ahead of illness and maintain health.
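To make the cutoff problem concrete, here’s a minimal sketch. The thresholds are the conventional fasting-glucose cutoffs quoted above; the “gray zone” label is mine, not a clinical classification:

```python
# Minimal sketch of the you-have-it-or-you-don't problem.
# Thresholds are the conventional fasting cutoffs quoted above;
# "gray zone" is my label, not a clinical term.

def classify_fasting_glucose(mg_dl: float) -> str:
    """Bucket a fasting blood glucose reading (mg/dl) by hard cutoffs."""
    if mg_dl >= 140:
        return "diabetes"
    if mg_dl <= 110:
        return "normal"
    return "gray zone"  # readings the hard cutoffs can't speak to

print(classify_fasting_glucose(129))  # gray zone
```

A reading of 129 - clearly elevated, clearly not yet “diabetes” - falls through the cracks, which is exactly the gap a risk score tries to fill.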
In the last couple years several genome-wide association studies have linked certain genetic variants with diabetes, to great fanfare. But the problem with these associations is that the rest of the puzzle is, to mix metaphors, blank. We don’t yet know what context these associations exist in, so the seven or eight markers that have been identified may be the complete span of genetic influence, or they may be seven or eight of 1,000 markers out there. In other words, there’s lots of work still to do there.
A more traditional attempt at early detection has been the diagnosis of metabolic syndrome, which I’ve written about lots. In a nutshell, metabolic syndrome is an attempt to establish some cutoffs - from glucose, blood pressure, waist circumference - to define a disease that’s a precursor to other disease (namely diabetes and heart disease). It’s an ambitious extrapolation of our ability to quantify certain biological markers, but it’s inexact and, the argument goes, hasn’t proven any better at actually identifying those at risk than blood glucose alone. In other words, it has defined a pre-disease state without actually changing the outcome (at least, that’s the argument; it is a subject of great debate).
OK, so that’s the backdrop: a single conventional test that identifies disease better than risks, an emerging but incomplete measure of potential risk, and a measure of pre-disease that has ambiguous impact on the disease. So how about something that identifies risk accurately enough and early enough and strongly enough that it actually impacts the progression towards disease?
That’s the idea behind PreDx. The test itself is an ELISA test, which for you microarray junkies may seem disappointing. ELISAs are nothing fancy; they’ve been used for nearly 40 years to detect proteins. The cool part, though, is what goes into that test. Tethys scanned through thousands of potential biomarkers that have been associated with components of diabetes - obesity, metabolic disorder, inflammation, heart disease - and settled on a handful that all closely correlate with diabetes. That’s the ELISA part: testing for levels of those five or so biomarkers. The second stage of the test is the algorithm: a statistical crunching of the levels and presence of those markers to arrive at a Diabetes Risk Score.
The DRS is a number between 1 and 10, shown to the tenth of a point, that corresponds to the risk of developing type II diabetes over the next five years. A 7.5 equals a 30% risk of developing diabetes within five years; a 9 equals a 60% risk. (The risk for the general population is about 12%, equal to a 5.5 on the PreDx scale.)
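Tethys hasn’t published the model, so the score-to-risk mapping is a black box from the outside. Just to illustrate the shape of the scale, though, here’s a toy piecewise-linear interpolation through the three (score, risk) points quoted above - an assumption for illustration, not the actual PreDx algorithm:

```python
# Toy illustration only - the actual PreDx model is proprietary.
# We interpolate linearly between the three (score, risk) anchor
# points quoted above: 5.5 -> 12%, 7.5 -> 30%, 9.0 -> 60%.

ANCHORS = [(5.5, 0.12), (7.5, 0.30), (9.0, 0.60)]

def five_year_risk(drs: float) -> float:
    """Map a Diabetes Risk Score to a 5-year risk, piecewise-linearly."""
    (x0, y0), *rest = ANCHORS
    if drs <= x0:
        return y0
    for x1, y1 in rest:
        if drs <= x1:
            # linear interpolation between neighboring anchors
            return y0 + (y1 - y0) * (drs - x0) / (x1 - x0)
        x0, y0 = x1, y1
    return y0  # cap at the highest quoted risk

print(round(five_year_risk(8.1), 2))  # 0.42
```

On this toy curve, a score of 8.1 comes out to roughly a 42% five-year risk - in the ballpark of the “about 40%” that such a chart is meant to convey.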
So what’s cool here is the algorithm. Unlike many new diagnostic tests, the smarts aren’t necessarily in the chemistry or the complexity of the technology (it’s not quantum dots or microfluidics or stuff like that). The smarts are in the algorithm, the number crunching. Basically, the test lets the numbers do the work, not the chemistry.
At $750 a pop, the PreDx test is too expensive to be used as a general screening test - it’s best used by physicians who’ve already determined their patient is at an increased risk, through conventional means. Tethys says that’ll save $10,000 in healthcare costs on the other end. In other words, it’s a way to pull people out of that pre-diabetes pool, spot their trajectory towards disease, intervene, and avoid onset. In other other words, it’s a tool to change fate. Which is kinda impressive.
Another interesting thing here is the simplicity of the 1-to-10 scale. Obviously, this is the work of the algorithm; the actual data doesn’t neatly drop into a 4.5 or a 7.3 figure, it must be converted into those terms. That in and of itself is a complicated bit of biostatistics, and it’s beyond me to assess how they do it. But the fact that an individual will be presented with a Diabetes Risk Score of, say, 8.1, and then shown a chart that very clearly puts this at about a 40% risk of developing diabetes - well, that’s a lot easier for a layperson to make sense of than a blood glucose level of 129 milligrams per deciliter. Heck, it’s a lot easier for a *physician* to make sense of.
What’s more, this is a quantification of *risk*, not a straight read of a biological level. That’s a very different thing, much closer to what we want to know. We don’t want to know our blood glucose level, we want to know what our blood glucose level says about our health and our risk for disease. The closest physicians can usually get us to *that* number is to go to general population figures - in the case of diabetes, that 12% risk figure for the general population.
What PreDx represents, then, is how we are moving from a general risk to a personalized number. This isn’t the abstract application of population studies to an individual, it’s the distillation of *your* markers, using statistical analysis to arrive at an individual risk factor.
So I find this pretty compelling. Tethys is developing other predictive diagnostic tests for cardiovascular disease and bone diseases, but as far as I know, there are not many similar predictive diagnostic tests out there for any disease. As mentioned, we have the genomic associations, which are coming along.
Anything else somebody can clue me in on? Lemme know…
Published by: tgoetz on June 25th, 2008 | Filed under Disease, Technology, obesity, algorithms
Can Virgin Trick You Into Better Health?
Behavior change is hard.
That’s well established in public health research. Despite millions of dollars and thousands of studies trying to get people to change their habits and improve their health, and despite plenty of evidence showing that when people in fact *do* change their behavior their health improves, well, most people keep acting pretty much the same way. I’ve written about this before, but it’s one of the big conundrums behind personalized medicine or, more specifically, personal genomics. It’s one thing to know that we have an elevated risk, and it’s a second thing to know that by changing our diet or exercise habits we can ameliorate that risk. But it’s quite a different thing to actually go out and do that. And this fact puts tremendous strain on our healthcare system, and it means that, despite the promise of predictive medicine, we will always be dealing with late-stage conditions that could’ve been avoided.
Here I’ll quote from an email sent by a secret physician source:
My opinion about why medical care focuses so much on treatment of advanced disease is that people really do not want to participate in preventive health care. Take screening for colon cancer. It’s been around for decades and there is hard data to show that the mortality for colon cancer has decreased significantly in those who are screened. (Of course, ‘those who are screened’ are primarily upper middle class white folk, but that’s an entirely different topic) I am currently seeing 50-60 patients a week, 70% of whom come into my office for rectal bleeding, a sign of colon cancer. I ask everyone of them if they have had a colonoscopy yet. Of those who are in a high-risk category - family history or age over 50 - at least half have not. (Approx. 50-60% of the 50-60 pts are high risk) When I ask if they know of the association between bleeding and colon cancer and whether they knew colonoscopy could prevent colon cancer, they will say ‘Yeah my doctor told me to get one, but I don’t want to go through that test’. It is this (relatively) small percentage of people who end up neglecting medical advice and seek treatment only for advanced disease who end up costing our health care system so much money.
Liver transplants are another example. I’m not sure of the exact figures on this but I’m pretty sure the majority of liver transplants are in people who are alcoholics, albeit reformed. And the cost of a single liver transplant would probably pay for intensive alcohol rehab for dozens of people.
In essence, patient’s actions are a much larger driving force in terms of health care costs than society wants to admit.
It’s especially frustrating, because so much of what we mean by “behavior change” boils down to two things: Eat better. Get more exercise. (there’s a third: Quit smoking. But that has all sorts of separate implications, which I hope to deal with later.)
All this is prelude to mentioning one novel approach to behavior change: Virgin HealthMiles.
Part of the Richard Branson Virgin empire, Virgin HealthMiles is a web-based tool to help people exercise more, and more regularly. It is a fancied-up exercise diary: you join, enter some exercise goals, and track your progress. There are a growing number of tools like this out there (Nike+ is a cool one for runners that allows open sharing of routes and stats). But Virgin HealthMiles is different because it adopts the “miles” concept from frequent-flier programs as a reward system. The more exercise you do, and the closer you track to your goals, the more miles you get; miles can be redeemed for hot fudge sunda… oops, redeemed for HealthCash, which can be converted into gift certificates at various stores.
Right now, HealthMiles is open only through employer health benefits programs; it’s not open to individuals.
And the million-dollar question: Does it work? Well, there’s no evidence on the site besides some anecdotal information. But my guess is it’s likely more effective than just trying to remember that you should be exercising. The traditional way of advocating behavior change - the way the NIH does it - uses similar principles (goals, targets, etc.). Especially for the data-inclined, life-hacker crowd, this sort of stuff is like catnip - you keep coming back to check your stats. And the thing is, as Kevin Kelly has noted, more and more of us are finding utility in tracking our lives. This isn’t a fringe thing, it’s an early-adopter thing, and the masses are catching on (see the popularity of Nike+ and all the spin-off mapping sites).
Published by: tgoetz on June 19th, 2008 | Filed under Uncategorized, Epidemiology, Trends, databases
About the news that California health regulators have sent cease-and-desist letters to a baker’s dozen genetic testing firms, forbidding them from selling tests without a doctor’s order:
I have two observations. First, I know that the direct-to-consumer personal genomics twosome, 23andMe and Navigenics, have been diligent in working with the FDA to make sure that their tests line up with current testing regulations and efficacy rules. So on some level, this may be a turf battle between state health departments (NY state has sent a previous notice) and the Feds.
Second, this to me reflects as much a cultural disagreement as a legal or regulatory one. That is, there is an assumption in the states’ letters that, because genetic information has medical implications, the dissemination of this information must fall under their jurisdiction. But there are, in fact, all sorts of areas of life that have medical implications that we don’t consider the province of government - a pregnancy test, most obviously. We neither want nor assume that doctors should have a gatekeeper role in establishing whether we are or are not pregnant, nor do we look to the state to protect us from that information. Pregnancy is a part of life, and it has all sorts of implications and ramifications. So too with DNA.
To my mind, genetic information is a new sort of personal information that the state and even the physician community are terribly slow and old-fashioned in reckoning with. Even those with knowledge in genetics, such as the Gene Sherpa blogger, assume a paternalistic tone: “I am just shocked and awed that some in the public think that they can do this on their own without professional help. Do you build your own home? What about fight your own court cases? Some do their own taxes…but only when it isn’t complicated. Trust me, this IS COMPLICATED!”
Now, I agree with Steve Murphy about a lot of things, but I totally diverge from him on this. Having been tested by both 23andMe and Navigenics, I can say that, yes, it’s complicated. But frankly I don’t need a doctor, and I don’t want a doctor, to facilitate my understanding of what my DNA means. Yes, there are some medical implications, but these are hardly live-or-die moments. What’s more, when I have shared my results with physicians, they’re largely greeted with a shrug. I don’t want to “trust” a doctor, no matter how skilled or well trained; I want access to my genetic data just as I want to know, without government approval or physician filtering, all sorts of information about myself. The assumption that there must be a layer of “professional help” is exactly what the new age of medicine upends, with the automation of expertise, the liberation of knowledge, and the democratization of the tools to interpret and put to use fundamental information about who we are as people. Not as patients, but as individuals. This is not a dark art, province of the select few, as many physicians would have it. This is data. This is who I am. Frankly, it’s insulting and a curtailment of my rights to put a gatekeeper between me and my DNA.
This is *my* data, not a doctor’s. Please, send in your regulators when a doctor needs to cut me open, or even draw my blood. Regulation should protect me from bodily harm and injury, not from information that’s mine to begin with.
Published by: tgoetz on June 17th, 2008 | Filed under Law, Policy, Genetics, Health Care Industry
Why Normal Matters More Than Sick
Here’s the gist:
For all sorts of conditions, there’s often no definition of normal. In heart disease, for example, CT screening tests can spot abnormalities in arterial plaque — but no research exists on whether that information is actually predictive of heart disease or stroke. “We need to know normal variation,” says Pat Brown, a professor of biochemistry at Stanford University School of Medicine. “It’s really underappreciated as a part of science.”
This essay came out of several conversations with Brown, a biochemist at Stanford who’s one of those scientists whose knowledge seems to span every possible corner - and who can articulate its relevance in clear, compelling terms. He has been ranting (in the best possible way) about the need to know normal for a while, and when I couldn’t get him to write this essay himself, I did the next best thing: pulling as much of the idea out of his brain as I could muster.
And it’s something that many scientists, particularly those working at the molecular and DNA level, are saying: We need to know more about the baseline. I spent the past couple days at the Institute for Systems Biology symposium (where Bill Gates gave the closing keynote), and many of the presentations touched on this theme: there’s too much variation at the cellular level to be able to clearly say (in the plainest terms) this cell is bad and this one is good. Only when we understand the variations at a cellular or molecular level will we know when to intervene on disease, or whether treatments are working, and so forth. As the story makes clear, this is especially important for cancer right now - are we overtreating or undertreating? - but it will become a relevant issue for all intervention soon enough.
And push it forward: Should we know what’s normal not just in the human population, but in our own body? For instance, a fellow from the ISB told me about an experiment that the institute conducted one day: They had several staff in and took blood samples, then went out and had lunch. After the meal, they took another blood draw. Sure enough, the before/after blood makeup was totally different - not just in terms of stuff like glucose levels but down to the protein level. Long/short: even in one person, it’s difficult to know what’s “normal” on a protein level. But the more we can discern these baselines, the more we’ll know when to treat, and how, and even whether.
More on the symposium later, if I can manage.
Published by: tgoetz on April 22nd, 2008 | Filed under Disease, cancer, Self Promotion
Today marks another entrant in the personal genomics game: Navigenics, the much-anticipated startup out of Redwood Shores, CA, is open for business.
The company arrives as direct competition to 23andMe and DeCodeMe, both of which began offering direct-to-consumer genotyping last year. Navigenics was originally planning to launch around the same time as the competition, but ended up taking several months longer to fine-tune its product. As planned, Navigenics is taking a more clinical approach to personal genomics, with a more overt pitch toward the medical implications.
I had the opportunity to visit the company last week and get a preview of the service. Here are a few standout observations.
1) The Results: Navigenics launches offering results on 18 diseases, from glaucoma to colon cancer to Alzheimer’s. This is about the same number as 23andMe had at its launch, though that company is now up to 58 different conditions.
One big distinction is that 23andMe lets users peruse the entire results of their genotype run – more than 500,000 different SNPs. Navigenics, even though they’re using a 1 million SNP chip (as does DecodeMe), is more circumspect with its results, only letting customers see the results for those conditions they’ve vetted.
2) The Business Model: As has been anticipated, Navigenics will charge an initial fee of $2,500 for a one year membership – and then an annual fee of $250. This compares with about $1000 for permanent access at 23andMe and DecodeMe.
That’s been criticized as a bad deal, especially since you can’t look at the 1 million results. But Navigenics offers an intriguing twist: It will freeze your spit sample, allowing the company to re-test your DNA as more associations with different SNPs are discovered (and deemed scientifically valid). Mari Baker, Navigenics’ CEO, says they expect to go back two or three times a year to extract more data points.
That means there’s a clear trade-off: with 23andMe, you’re buying into today’s technology, and they promise to show you everything they have. With Navigenics, they’re not going to show you everything, but they promise to keep you up to date as the technology and the science improve.
3) The Calls: One thing you notice when you get your 23andMe results is how subtle the differences are between the average person’s risk for disease and your own. For colorectal cancer, for instance, my 23andMe results tell me that I have a .21 out of 100 chance of developing the disease, compared to a .26 out of 100 average risk. That may be a scientifically valid distinction, but as a consumer it’s so slight as to make no difference.
Navigenics uses a different method of calculating your genetic risk. I won’t get into the details here, but basically they make a “Lifetime Risk analysis” that results in what they believe are stronger calls. Certainly the numbers are more emphatic; examples I saw showed, for instance, a person with a 51 percent risk of heart disease, compared to an average 42 percent risk. That’s a striking difference.
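The contrast between the two styles of risk reporting comes down to simple arithmetic. Here’s a quick sketch using the figures quoted above (just the raw numbers from this post; the companies’ actual risk models are, of course, far more involved):

```python
# Compare the two risk presentations described above, using the
# figures quoted in this post (not the companies' actual models).

def risk_gap(personal, average):
    """Return (absolute, relative) difference between two risks, in percent."""
    absolute = personal - average
    relative = (personal - average) / average * 100
    return absolute, relative

# 23andMe-style call: colorectal cancer, 0.21% personal vs 0.26% average
abs_23, rel_23 = risk_gap(0.21, 0.26)

# Navigenics-style call: heart disease, 51% personal vs 42% average
abs_nav, rel_nav = risk_gap(51, 42)

print(f"23andMe example:    {abs_23:+.2f} points ({rel_23:+.1f}% vs. average)")
print(f"Navigenics example: {abs_nav:+.2f} points ({rel_nav:+.1f}% vs. average)")
```

Interestingly, the *relative* gaps are in the same ballpark (roughly 19 and 21 percent); it’s the absolute percentage-point difference - a vanishing 0.05 points versus a full 9 points - that makes the Navigenics numbers read as so much more emphatic to a consumer.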
4) The Physician: Navigenics puts a great deal of emphasis on the utility of genotype data for useful medical insights. It’s clearly one of their main selling points. To that end, “we’re putting education towards the top of our agenda,” says Mari Baker, and they’ve bankrolled an online continuing medical education course on Genomic and Personalized Medicine with Medscape. What’s more, they suggest customers bring doctors their Health Compass Report, a primer for personal physicians explaining what the company does, how it calculates risk, and what their patients’ results might predispose them to. To me, this is a bold and aggressive endorsement of the power of genomic data for real-time medical insights, much stronger than anything 23andMe has done. It’ll be interesting to see how the medical community responds.
So it’s a more high-fidelity environment, with less flexibility for the user. And though 23andMe gets lots of attention because of their founders’ Google connection, make no mistake: Navigenics is as blue-chip as they come, with top management from T-Gen and Kleiner Perkins.
Long and short: Yesterday personal genomics was an oddity. Today, it’s an industry.
This is a cross-post from WIRED SCIENCE
Published by: tgoetz on April 7th, 2008 | Filed under Genome
Practicing Patients: The Response
Overall, I’ve been fairly blown away by the response – I’ve gotten dozens of emails and the story has been blogged mightily. In stories like this, where I’m writing about one company or person, it’s important to keep in mind that the enthusiasm is more for the ideas and portent of PatientsLikeMe and not my story, per se. But I’m going to assume that the humble messenger - me - did a fair job conveying the import of the message, and that that’s worth something. Anyway, the reactions fall along a few lines:
1) Patients are thrilled. I’ve gotten some striking messages from people with a chronic disease, or from people related to someone with a disease, for whom a resource like PatientsLikeMe is a godsend. Many patients with chronic disease are, surprisingly, simply left confused by their interactions with the medical world - doctors and specialists all telling them slightly different things, giving them slightly different courses of action and treatments. The ability to connect with others like themselves and then, moreover, to take *their* advice on treatments seems remarkably clarifying for these patients. “I went to a conference in Tampa,” one wrote me, “and different doctors believe in different treatments with different patients. Is this because of where they were trained or evidence-based medicine?”
It’s a fascinating problem - one I hadn’t anticipated. I knew patients would be eager to get resources other than their doctors; I knew that they were frustrated with the time they get from their doctors; but I didn’t realize that they were so frustrated with the *information* they’re getting from their doctors.
2) Doctors are leery. I knew this was out there, and I tried to make sure the piece reflected the skepticism, and in some cases antagonism, that physicians might have towards a resource like PatientsLikeMe. This isn’t just for competitive reasons (though there is that element). I think doctors are legitimately concerned about their patients embarking on treatments that may hurt them rather than help them, and that the power of PatientsLikeMe’s data - it all looks so convincing - might compel some patients to disregard advice or treatment to their detriment. But another aspect to this is that doctors aren’t comfortable, themselves, with the idea of their world being transduced into data.
Medicine, from a physician’s POV, is still largely empirical. Diagnosis and treatment are often guesswork - educated guesswork, but still guesswork. They are trained in science - educated in science - but that doesn’t mean doctors are all comfortable with the idea of data-driven decision making. (Read Jerome Groopman’s How Doctors Think for a startling - to my eyes - reminder of how much variability there is in physicians’ diagnoses.) Since PatientsLikeMe reduces all this to data - treatments and results are right there to be correlated - it’s an unnerving glimpse into the future.
3) The field is wide open. Among general readers, many have fastened onto the story’s penultimate paragraph:
Really, when you start looking, information can be found everywhere. If we could gather in structured communities and create databanks to inform our approach to life decisions, not just health decisions but also gardening or parenting or car-buying decisions, we could do everything in a more informed manner. Were we all to avow a philosophy of openness and churn our experiences into hard numbers, we could presumably improve our odds in all sorts of decisions. Why not a PregnantLikeMe or a ParentsLikeMe or even, really, an all-encompassing PeopleLikeMe?
First off, as a writer it’s terribly gratifying to know that folks actually made it to the end of a 5,000+ word article. So that’s nice. But really, the idea here is one that I’ve been mulling over for a few months - the power of data to be collected and deployed in unanticipated places for unexpected results. I’m especially interested, obviously, in the potential of data to affect health-care, both in terms of predicting and averting illness as well as to treat it (more - lots more - on this later). So it’s doubly gratifying that this idea has such resonance among readers - that ordinary folks get this idea, that there is power in ordinary life, in unharnessed information, to guide our actions. It’s the idea, really, that some sort of meta-analysis of daily life could go on (should go on?) that would let us *really* learn from our mistakes. Obviously, as the next graph in the story made clear, I’m not the first one to have this realization. So it’ll be fascinating to explore where this idea goes, and how innovators may respond to both take advantage of the opportunities for data collection and aggregation as well as to respond to the demand individuals have to make better decisions.
4) Some metrics. So since this was such a data-intensive story, I thought some metrics might be illuminating.
6 - Highest rank of story on NYTimes’ Most Emailed list. [Corrected from 7! Someone caught it at #6, & passed along the JPEG to prove it]
30 percent - Amount by which enrollment at PatientsLikeMe has increased in seven days.
10 - Factor by which enrollment in PatientsLikeMe’s mood community increased (from about 100 members to somewhere just over 1,000) in that time.
14 - Number of companies who’ve written to tell me they loved the story - and wouldn’t I like to speak to their CEO? (Honestly, I appreciate these emails & have learned of some interesting endeavors because of them)