The Gene Collector: George Church & the Personal Genome Project


My latest story in Wired, a profile of geneticist George Church, is in the August issue, now on the stands (and online here).

In some ways, it’s a follow-up to my previous story on personal genomics. But it is really my effort to shine a light on one person who’s doing so much to propel us toward the future of genomics. Church is frighteningly intelligent, yet notably calm and kind (and generous with his time, explaining the principles of synthetic biology to me, for instance, again and again until some of it got through).

It was great fun talking with him and reporting the story. My hope is that it helps people understand the ambitions and potential of personal genomics when pursued on a massive scientific scale.

Published by: tgoetz on July 22nd, 2008 | Filed under Genome, Self Promotion
Comment now »




How the Taliban is Bringing Back Polio

On a NYTimes blog, a harrowing tale of the conflict between religious extremism and the WHO in Pakistan. Fearful that the polio vaccine causes impotence (it doesn’t), local clerics in northern Pakistan waged a campaign against the vaccine, and Unicef called off its immunization effort. The result: The first case of polio in the area since 2003.

Some fascinating overlaps with the war against the Taliban in the area. Worth a read - and a longer exploration by someone.

Published by: tgoetz on July 17th, 2008 | Filed under Disease, WHO, Eradication
Comment now »




America: Now Fatter than Ever


We are too fat. The CDC is reporting today that the United States is now officially more than one-quarter obese. In the latest issue of the MMWR, the numbers are staggering: 25.6 percent of American adults are clinically obese, according to a body-mass-index assessment, up from 23.9 percent in 2005 and way up from 15.3 percent in 1995.

And note that’s obese, not overweight: when you include the merely overweight, defined as a BMI greater than 25, the percentage approaches two-thirds of all American adults.
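For the record, the BMI arithmetic behind these categories is dead simple: weight in kilograms divided by height in meters, squared. Here’s a minimal sketch in Python, using the standard CDC cutoffs of 25 (overweight) and 30 (obese):

```python
# BMI = weight (kg) / height (m)^2, with the standard CDC cutoffs.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    if b >= 30:
        return "obese"
    if b >= 25:
        return "overweight"
    return "normal or under"

b = bmi(95, 1.75)                   # ~31.0
print(f"{b:.1f} -> {category(b)}")  # 31.0 -> obese
```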

The map itself paints a dramatic picture. I’ve stuck the new 2007 map below the previous version from 2005, and you can see the slow creep of fat across the nation.

So much for Healthy People 2010, an effort by the CDC, started back in 2000, to get the country toward a 15 percent adult obesity rate. I’ve been skeptical in the past of efforts to treat obesity as a disease; it seems like just the sort of lifestyle condition that creeping medicalization doesn’t need to sweep up. But these stats are changing my mind: we Americans just can’t seem to stop eating. Whatever’s going on - genetically, environmentally, metabolically - it’s not something finger-wagging and brochures can take care of. What’s more, this is going to be *extremely* expensive.

I think it’s high time somebody came up with some innovative behavior-modification strategies, short of pharmaceuticals, that might start to turn this tide. More on that later.

Charts via CDC
Seat photo from DrBaloney via Flickr

Published by: tgoetz on July 17th, 2008 | Filed under CDC's MMWR, obesity
1 Comment »




What Medicine Owes the Beatles

A factoid I just came across: If not for the Beatles, we wouldn’t have CT scans, aka CAT scans, the advanced medical scanning technology that lets your doctor see how badly your bones are broken or whether your aunt really has emphysema.

Here’s the story: in the 1960s, a middle-aged engineer named Godfrey Hounsfield was working at Electric and Musical Industries Ltd., where he had begun as a radar researcher in 1951. The company, known as EMI for short, was a typical industrial scientific firm at the time, working on military technology and the burgeoning field of electronics. Hounsfield was a skilled but unexceptional scientist, leading the team that built Britain’s first all-transistor computer in 1958. Through its radar work the company moved into broadcasting equipment, which complemented its ownership of several recording studios in London - specifically, at Abbey Road. In the ’50s, the company began releasing LPs, and by the end of that decade, thanks to its acquisition of Capitol Records, it had become a powerhouse in popular music.

Then, in 1962, at the urging of EMI producer George Martin, the company signed the Beatles to a recording contract.

That was the bang - over the next decade (and for years thereafter) the company earned millions of dollars from the Fab Four. So much money that the company almost didn’t know what to do with it.

Meanwhile, Hounsfield’s success with computers had earned him good standing on the science side of the company. Flush with money broken out of teenagers’ piggy banks worldwide, EMI gave Hounsfield the freedom to pursue independent research. His breakthrough was combining his work with computers and an interest in X-rays. Discovered in 1895, X-rays were still pretty much used to image bodies in two dimensions, from a fixed position. Hounsfield’s idea was to measure in three dimensions, by scanning an object - most dramatically, a human head - from many directions. The result was a cross-sectional, interior image that he called computed tomography, or CT. As the Nobel committee put it in awarding him the Nobel Prize in medicine in 1979, before the CT scanner, “ordinary X-ray examinations of the head had shown the skull bones, but the brain had remained a gray, undifferentiated fog. Now, suddenly, the fog had cleared.”

First released as a prototype by EMI in 1971 - the year after the Beatles broke up - CT scanners started to appear in hospitals in the mid-1970s; today there are about 30,000 in use worldwide.

UPDATE: Since this post has generated lots of interest from Beatles fans, I should add some context on EMI’s Beatles revenues: in 1963 alone, the company made $2.2 million (about $15 million today) off George Martin’s productions, which one assumes was largely due to the Beatles. And that was a relatively quiet year for the group, considering they didn’t debut on Ed Sullivan until February 1964. (From the crazy site Beatlemoney.com.)

A historical take from somebody who worked with Hounsfield is here.

Hounsfield’s Nobel lecture is here.

Published by: tgoetz on July 16th, 2008 | Filed under Technology, Misc.
4 Comments »




A Rundown of iPhone Health Apps

Out of the bounty of new apps for the iPhone, I was pleased to see a couple dozen focused on health & fitness. To me, the potential here is this: combine a device that’s easy to use and portable with the growing trend of life-logging. The result, I hope, will be apps that let us track our health quantitatively, logging progress and data (basically, the idea would be not unlike the Virgin HealthMiles product I blogged about recently).

So here’s my cursory rundown of the new iPhone apps that look most promising.

Diet & Nutrition

  • Lots of calorie-counting apps here, from Lupi’s Diet, a reference for the calorie content of 7,000+ foods ($4.99), to free BMI calculators.
  • My favorites include Nutrition, a log of official nutritional information for a dozen or more fast-food outlets, from McDonald’s to Chili’s, that links with maps, too (free!).
  • iCalorie tracks calories consumed by meal, as well as calories expended by exercise. It also logs weight, creating a basic diet tool. ($4.99)
  • Calculate Points ($4.99) is a Weight Watchers-compatible points tracker. Useless unless you’re in WW - just the sort of closed system that iPhone-style tracking should free us from, no?
  • Absolute Fitness ($14.99) is a souped-up diet tracker and reference. It has nutritional content for 7,000+ foods, and automatically calculates dietary goals based on your profile and targets. It also works the data-nerd angle pretty well, offering graphs and charts of your progress.
  • (Unfortunately, the promising-looking Fit, which lets you track daily calorie intake versus calories burned, shipped as a buggy product.)

Fitness

  • Most of the apps here come from the GoLearn brand, by Whagaa software.
  • The GoLearn apps are all about training, and though they offer something in the way of tracking, they’re mostly advice videos. They start at $9.99 and seem aimed pretty squarely at the novice.
  • The Athlete’s Calculator computes time, distance, and pace for a variety of sports, from cycling to running to swimming (but don’t wear it in the pool, I assume). It can also calculate splits. Pretty rudimentary for $6.99.
  • Steps is just the sort of simple, widgety app the iPhone is made for: it uses the iPhone’s motion sensor to act as a pedometer, calculating your strides based on your height and weight. It tracks distance, speed, and calories burned. And it’s only $1.99. (A rough sketch of the idea follows below.)
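For the curious, here’s my guess at how a pedometer app like Steps might work under the hood: count peaks in the accelerometer signal, then derive distance and calories from height and weight. To be clear, this is a sketch of the general approach, not the app’s actual code, and every constant in it is my own assumption:

```python
import math

# Count rising edges through a threshold in accelerometer magnitude,
# then turn steps into distance and calories. All constants are guesses.

STEP_THRESHOLD = 11.0  # m/s^2; gravity alone reads ~9.8

def count_steps(samples):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > STEP_THRESHOLD and not above:
            steps += 1  # rising edge through the threshold = one step
            above = True
        elif magnitude < STEP_THRESHOLD:
            above = False
    return steps

def distance_m(steps, height_m, stride_factor=0.415):
    # Rule of thumb: walking stride length is roughly 0.415 x height.
    return steps * height_m * stride_factor

def calories_kcal(dist_m, weight_kg, kcal_per_kg_km=0.5):
    # Very rough energy cost of walking, per kg per km - an assumption.
    return dist_m / 1000 * weight_kg * kcal_per_kg_km
```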

Medicine/Medical Records

In addition to a couple of pregnancy calculators (the Wheel for $14.99 and Birth Buddy for $4.99), these mostly fall into two camps: health-records tools - LifeRecord - and electronic emergency cards - EmergenKey ($1.99), ICE ($0.99), and Emergency Card ($2.99). The personal health record tools look cool - you can store MRIs and other image files - but I wonder how compatible they are with Google Health or any other PHR out there (I imagine not at all). The emergency cards seem like a good idea, but fatally flawed: you really want to count on an EMT to find your iPhone, then click on the right app to find your health info? Seems unlikely to me.

But two nifty apps here:

  • Quitter is a free app that simply tracks how many days it’s been since your last cigarette and calculates how much money you’ve saved by not smoking. Nothing fancy, but every little reminder helps… (the math fits in a few lines; see the sketch after this list).
  • Kenkou takes its name from the Japanese word for “health” (they say), and it’s a nifty app that will appeal to all the life-loggers out there. It tracks your weight, blood pressure, blood sugar, mood, cholesterol, and other health metrics. No, it doesn’t actually measure these data points - but simply by logging them, it lets users think about their health as a portfolio of numbers. At $4.99, it’s definitely something I’m gonna be playing around with.
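As promised, the Quitter arithmetic really does fit in a few lines. A sketch - my reconstruction, not the app’s source:

```python
from datetime import date

# Days since the last cigarette, and money saved at your pack habit.

def quitter(quit_date: date, packs_per_day: float, price_per_pack: float):
    days = (date.today() - quit_date).days
    saved = days * packs_per_day * price_per_pack
    return days, saved

days, saved = quitter(date(2008, 1, 1), packs_per_day=1.0, price_per_pack=4.50)
print(f"{days} days smoke-free, ${saved:.2f} saved")
```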

Published by: tgoetz on July 15th, 2008 | Filed under Technology, gadgets
Comment now »




The Backlash Against Screening & Prevention

Taken together, a couple of stories in the NYTimes today show that while preventive medicine is theoretically the way of the future, it’s going to be a cultural challenge getting the public to sync up with the program.

First, there’s Tara Parker-Pope’s column about the American Academy of Pediatrics’ recommendation to prescribe statins to children as a long-term preventive measure against heart disease. The idea is to identify children at a higher lifetime risk for heart disease as early as possible - as early as eight years old - and take preventive measures to ward off the disease.

The backlash comes from pediatricians, who flag that there is scant evidence that it’s safe to take statins over several years, let alone decades, let alone 40 or 50 years.

The second story kicks in at the opposite end of life: it concerns recommendations for elderly women to undergo regular mammograms to screen for breast cancer. In this case, there’s somewhat sounder evidence for the intervention.

The mammography study, published in May in The Journal of Clinical Oncology, looked at the records of more than 12,000 patients aged 80 and older who were given diagnoses of breast cancer from 1996 to 2002. It found that among those who had a mammogram every year or two before their diagnosis, 68 percent found the cancer at an early stage, compared with 33 percent of those who skipped mammograms altogether.

Five years after the breast cancer diagnosis, 75 percent of the frequent screeners were alive, compared with only 48 percent of those who had not been screened for at least five years before their cancer was found.

So what’s the controversy? It boils down to the idea that by the time women hit 85 or 90, the odds are fairly low that they’ll die of breast cancer. They’re so old, in other words, that they’re more likely to die of something else first.

In both cases, the issue seems to pivot on one question: what constitutes sufficient evidence to recommend screening for large populations? Is one study enough? And when you’re talking about the very old or the very young, screening measures risk bumping up against a ‘yuck factor’ - the sense that we are medicalizing populations, or forcing people into medical interventions, when they should “just be living.”

My sense is that these are simply the first bumps along a pretty clear path toward a whole arsenal of screening panels. In a year or two, pretty much every demographic - be it the very young, the very old, or some slice in between - will have a handful of screening tests it falls under. The sense of outrage that comes with these recommendations will fall away, because it’ll be up to us, as individuals, to decide whether or not to plug into these panels. Better to have the option of engaging early and facing our risks than to wait for the worst to happen.

Published by: tgoetz on July 8th, 2008 | Filed under Epidemiology, screening
1 Comment »




Another Disease (almost) Eradicated

Nearly 30 years after smallpox was eradicated from the face of the earth, it still stands alone as the only pathogen to have been deliberately eliminated (though efforts against guinea worm and polio are getting close). Catching up on back issues of Science, I was surprised to learn that, at long last, another virus is very close to eradication: rinderpest. The only catch: it doesn’t affect humans. But that doesn’t make the prospect of rinderpest eradication any less stunning.

Quick background: rinderpest is a viral disease that afflicts livestock, mainly cattle. It is brutal, often killing a third of a herd. A century ago it spread throughout Asia, Africa, and Europe, but various efforts, culminating in a sustained international campaign begun in 1994, have driven it into isolated patches of Africa. Lately it’s been confined to Kenya, and it may now be gone even from there.

Just because it’s a bovine disease, though, doesn’t mean it doesn’t have a human impact. Consider this passage from the Science story (subscription required), describing an outbreak in South Africa in 1897 that killed about 90% of the cattle population, as well as other livestock and local game:

“With herding, farming, and hunting all but gone, mass starvation set in. An estimated one-third of the population of Ethiopia and two-thirds of the Maasai people of Tanzania died of starvation. The rinderpest epizootic also altered the continent’s ecological balance by reducing the number of grazing animals, which had kept grasslands from turning into the thickets that provide breeding grounds for the tsetse fly. Human sleeping sickness mortality surged.”

More recent outbreaks have likewise proven devastating to cattle and human populations alike; a particularly virulent outbreak in Sudan in the late 1980s killed 80% of calves, and with cow’s milk unavailable, children began to starve, resulting in a horrible famine.

Another example of how we’re just a part of one big ecosystem.

Eradication is on target for 2010. Can’t wait to toast this one.

Published by: tgoetz on June 27th, 2008 | Filed under Disease, global health
1 Comment »




The Public Health Case for Direct-to-Consumer Personal Genomics

OK - A couple more thoughts on this move by health departments in California and New York to regulate personal genomics. I’ve made my quasi-libertarian case that this is my information and shouldn’t be mediated by an under-informed (and possibly antagonistic) physician gatekeeper. And I’ll leave the companies to make their own case on the issue of lab oversight.

But now let me make an argument on public health grounds - the home turf, after all, of these state agencies. To my mind, their actions will directly contravene their own mandate, and will have the effect of reducing the public’s health.

The California DPH says it’s acting to “protect consumers.” As Wired Science’s Alexis Madrigal ferreted out, in a nifty bit of reporting, CDPH’s Karen Nickel said in a June 13 meeting that the state’s primary concern is that personal genomics companies are creating the “worried well” - citizens who have stumbled into a level of knowledge about their genomes that they were unprepared for, and who may now be fretting in a way detrimental to their health and well-being. Put aside the fact that, as a public health matter, the “worried well” is a supremely thin basis for action (what, pray tell, is the prevalence of the “worried well” in California? The incidence? The relative risk of learning one’s genome? What sort of epidemiological studies have been performed to measure this population?). And put aside the fact that, as others have noted, the customers of 23andMe and Navigenics and other personal genomics companies are, in demographic terms, probably the *least* likely to be categorized as “uninformed” or naive: these are early adopters, they’re paying lots of money (opting in), and they are probably far more prepared to reckon with genomic information than the typical citizen. But put all that aside.

My argument is simply that by restricting personal genomics to a physician-vetted service, these state public health departments would eviscerate the actual public-health utility of genomics. The whole *point* of learning one’s genomic predispositions is to use them as a predictive and preventive tool: learn early, so as to change our behaviors, intervene early, and either skirt or reduce the prospects of disease. This is a *long-term* tool. By regulating the service, these state health departments would severely impinge on the opportunity to make the largest public health impact, in two specific regards:

1) Public health, by definition, is about populations, not individuals, and Nickel makes a quasi-population argument when she identifies this group of “worried well.” OK, let’s take a stab at quantifying this. The worried well would be some fraction of personal-genomics customers; let’s give Nickel a big gimme and say that 20% of customers would somehow overreact in a way that’s detrimental to their health (I’m of course making that up, and it’s almost absurdly high, but let’s go with it). Stress would be the most obvious detriment, but it could be something like taking unnecessary medications or supplements.

Now consider what percentage of personal-genomics customers actually engage with their genomic information in a way that’s *beneficial* to their health, as intended. These people pay their money, get their results, spot their risks, and change their lives, often in small ways, occasionally in big ones. Let’s low-ball it and say that 60% of all customers act on their results (paying $1,000 or $2,500 is actually a significant motivator, but let’s assume 40% just ignore the results altogether). And of course not all results would be positive; some would be null. So let’s take another slice and say some fraction of that 60% measurably benefit, either in peace of mind or in some slight improvement in diet, exercise, or doctor’s visits. Say half of the 60% - so 30% of all customers. Even at that low number, the math comes out very clearly net positive - the public’s health at large has been improved.
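To make that arithmetic explicit, here’s the back-of-envelope calculation in code, using my made-up rates from above:

```python
# All rates here are the made-up figures from this post, not measured data.

customers = 10_000             # hypothetical customer base
worried_well_rate = 0.20       # deliberately generous harm estimate
act_on_results_rate = 0.60     # fraction who act on their results at all
benefit_given_action = 0.50    # fraction of actors who measurably benefit

harmed = customers * worried_well_rate                              # 2,000
benefited = customers * act_on_results_rate * benefit_given_action  # 3,000

print(f"harmed: {harmed:.0f}, benefited: {benefited:.0f}, "
      f"net: {benefited - harmed:+.0f}")  # net: +1000
```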

Of course, I’m making up these figures, and really this is probably impossible to measure (though I bet 23andMe and Navigenics are crafting customer surveys to help fill in the picture). But by any informed assessment, the net potential for improving the public’s health far outweighs the possible harm to the “worried well.” Just as vaccines and even exercise have their detrimental side effects, sure, so will personal genomics. But if you’re tasked with improving the public’s health, as these agencies are, why not weigh the benefits as well as the risks?

2) Genomics is useful as a predictive tool: it gives us a peek into our long-term health prospects and an opportunity to intervene and improve those prospects. The fact that consumers in California can, for the moment, engage with that information at their own behest means they are getting it when they want it - which is, by definition, as early as possible.

So what’s the logical consequence of forcing a physician into the picture as a middleman? It’s a pretty good guess that it’ll delay people from getting the information. Put physician-phobias, reluctance to schedule a visit, and all sorts of other procrastinations together, and I think the result would be less and later genotyping: a significant delay in when this information reaches people, as well as a significant reduction in the number of people who bother to jump through the extra hoops at all. The net result, again: a squandered opportunity to improve the public’s health.

I would be the first to acknowledge that the actual science is fairly raw here; we’re in the early days of using our genomes for actual health decisions. But that’s the point: better to get familiar with the information now, when it’s fairly low-impact, and work out the kinks than to wait for the science to somehow emerge fully-formed and neatly packaged. Because if we’re waiting on the physician community for that day, it’ll never come.

But assuming the public health departments acknowledge that genetics *does* have some utility for our health, I’d remind them that a fundamental principle of public health is awareness: give citizens information early so that they can avoid putting themselves at risk. That principle drives public health’s actions against smoking, infectious disease, sexually transmitted disease, natural disasters, and so many other threats. Likewise, it drives its actions on positive behaviors like proper nutrition and exercise. So why, in the case of genomics, should the same principle not apply? Why, in this case, do state health departments think the public should be prevented from learning about its risks?

Published by: tgoetz on June 27th, 2008 | Filed under Epidemiology, Technology, Policy, Genetics
12 Comments »




forgive the politics…

…but this op-ed in the New York Times, by Gary Hart, really strikes me as a profound framing of what the future holds.

Regardless of your politics, you have to consider his list of challenges that the next president - whoever he may be - will face:

They include globalized markets; the expansion of the information revolution into places like China; the emergence of new world powers including India and China; climate deterioration; failing states; the changing nature of war; mass migrations; the proliferation of weapons of mass destruction; viral pandemics; and many more.

As a framing device, this is a brilliant list. We really are at a point in the nation and the world where our politics have fallen far out of step with reality. Just consider, for instance, the stuff this blog typically traffics in - genomics, public health, infectious disease, science in general. To my mind, these are the things that will shape our future, yet how often do any of them come up in political dialogue?

Viral pandemics, for instance, are a perfect example of what our politicians should be protecting us against - a perfect role for government, really, where the free market is unlikely to act - yet how often does the topic come up?

Anyway, as I say, forgive the politics, but definitely worth a read.

Published by: tgoetz on June 25th, 2008 | Filed under Policy, history
Comment now »




A Predictive Tool for Diabetes


Though much attention has been paid - here at Epidemix and elsewhere - to the power of genomics as a predictive tool for disease, there are other approaches to forecasting risk that are potentially more helpful and equally bold, if somewhat less sexy. I had the chance a couple of weeks ago to learn about one: a new predictive test for diabetes risk developed by Tethys Bioscience. It’s a cool tool, and I think it represents a new breed of diagnostics and predictive testing.

The idea behind the Tethys test, called PreDx, is to create a tool that can accurately identify those at increased risk of developing type II diabetes. Diabetes, we know, is one of the fastest-growing diseases in the country (and the world), accelerated by the upsurge in obesity. Some 24 million Americans have diabetes, with another 2 million cases diagnosed annually; 60 million more are at high risk of developing the disease, many of them obese or overweight.

The traditional tool for diagnosing the disease, as well as for gauging the *risk* of developing it, is a blood glucose test. By the so-called “gold standard,” a fasting blood sugar of 140 mg/dl or higher constitutes diabetes, while normal levels run between 70 and 110 mg/dl. You can see the issue here: what does a value between 110 and 140 mean? This is the problem with firm cutoffs - their you-have-it-or-you-don’t nature means you fail to capture people until they actually have the disease. We’re missing the opportunity to get ahead of illness and maintain health.
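To see the problem concretely, here’s the cutoff logic reduced to code, using the thresholds above:

```python
# The cutoff problem in code: the thresholds above, applied literally.

def glucose_label(fasting_mg_dl: float) -> str:
    if fasting_mg_dl >= 140:
        return "diabetes"
    if 70 <= fasting_mg_dl <= 110:
        return "normal"
    if fasting_mg_dl < 70:
        return "low"
    return "gray zone: above normal, below the diabetes cutoff"

for value in (85, 125, 139, 140):
    print(value, "->", glucose_label(value))
# 125 and 139 land in the gray zone - exactly the people a binary
# cutoff fails to flag until they cross over into disease.
```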

In the last couple of years, several genome-wide association studies have linked certain genetic variants with diabetes, to great fanfare. But the problem with these associations is that the rest of the puzzle is, to mix metaphors, blank. We don’t yet know the context these associations exist in, so the seven or eight markers identified so far may represent the complete span of genetic influence, or they may be seven or eight of 1,000 markers out there. In other words, there’s still lots of work to do.

A more traditional attempt at early detection has been the diagnosis of metabolic syndrome, which I’ve written about lots. In a nutshell, metabolic syndrome is an attempt to establish cutoffs - for glucose, blood pressure, waist circumference - that define a disease that’s a precursor to other diseases (namely diabetes and heart disease). It’s an ambitious extrapolation of our ability to quantify certain biological markers, but it’s inexact and, the argument goes, hasn’t proven any better at actually identifying those at risk than blood glucose alone. In other words, it has defined a pre-disease state without actually changing the outcome (at least, that’s the argument; it’s a subject of great debate).

OK, so that’s the backdrop: a single conventional test that identifies disease better than risk, an emerging but incomplete measure of genetic risk, and a measure of pre-disease with ambiguous impact on the disease itself. So how about something that identifies risk accurately enough, early enough, and strongly enough that it actually changes the progression toward disease?

That’s the idea behind PreDx. The test itself is an ELISA, which for you microarray junkies may seem disappointing; ELISAs are nothing fancy, having been used for nearly 40 years to detect proteins. The cool part, though, is what goes into the test. Tethys scanned thousands of potential biomarkers associated with components of diabetes - obesity, metabolic disorder, inflammation, heart disease - and settled on a handful that closely correlate with the disease. That’s the ELISA part: testing for the levels of those five or so biomarkers. The second stage is the algorithm: a statistical crunching of the levels and presence of those markers to arrive at a Diabetes Risk Score.

The DRS is a number between 1 and 10, reported to the tenth of a point, that corresponds to the risk of developing type II diabetes over the next five years: a 7.5 equals a 30% risk within five years, a 9 equals a 60% risk. (The risk for the general population is about 12%, equal to a 5.5 on the PreDx scale.)
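Tethys hasn’t published the PreDx algorithm, so here is only a generic sketch of how such a score *could* work: a logistic-regression-style weighted combination of biomarker levels, mapped onto the 1-to-10 scale. The marker names and coefficients below are purely illustrative, and the scale mapping is fit by eye to the anchor points above (7.5 for 30%, 9 for 60%) - none of it is Tethys’s actual math:

```python
import math

# Hypothetical coefficients on standardized biomarker levels; the real
# PreDx markers and weights are proprietary.
WEIGHTS = {"glucose": 0.9, "hba1c": 1.1, "crp": 0.4,
           "insulin": 0.6, "adiponectin": -0.7}  # adiponectin is protective
INTERCEPT = -2.0

def five_year_risk(markers: dict) -> float:
    """Logistic combination of marker levels -> probability of diabetes."""
    z = INTERCEPT + sum(w * markers[name] for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def diabetes_risk_score(risk: float) -> float:
    """Map a probability onto the 1-10 scale, linear in log-odds.
    Constants fit to the post's anchors: 30% -> 7.5, 60% -> 9.0."""
    logit = math.log(risk / (1 - risk))
    return round(min(10.0, max(1.0, 8.5 + 1.2 * logit)), 1)

# A moderately elevated (and entirely invented) biomarker profile:
risk = five_year_risk({"glucose": 0.5, "hba1c": 0.3, "crp": 0.2,
                       "insulin": 0.4, "adiponectin": -0.1})
print(f"risk: {risk:.0%}, DRS: {diabetes_risk_score(risk)}")  # risk: 30%, DRS: 7.5
```

That sketch reproduces the general shape of what PreDx reports - a marker panel in, a single tenth-of-a-point score out - though surely not its actual numbers.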

So what’s cool here is the algorithm. Unlike many new diagnostic tests, the smarts aren’t in the chemistry or the complexity of the technology (it’s not quantum dots or microfluidics or stuff like that). The smarts are in the number-crunching. Basically, the test lets the numbers do the work, not the chemistry.

At $750 a pop, the PreDx test is too expensive to be used as a general screening test - it’s best used by physicians who’ve already determined, through conventional means, that a patient is at increased risk. Tethys says it’ll save $10,000 in healthcare costs on the other end. In other words, it’s a way to pull people out of that pre-diabetes pool, spot their trajectory toward disease, intervene, and avoid onset. In other other words, it’s a tool to change fate. Which is kinda impressive.

Another interesting thing here is the simplicity of the 1-to-10 scale. Obviously, this is the work of the algorithm; the actual data doesn’t neatly drop into a 4.5 or a 7.3 - it must be converted into those terms. That in itself is a complicated bit of biostatistics, and it’s beyond me to assess how they do it. But the fact that an individual can be presented with a Diabetes Risk Score of, say, 8.1, and then shown a chart that very clearly puts that at about a 40% risk of developing diabetes - well, that’s a lot easier for a layperson to make sense of than a blood glucose level of 129 milligrams per deciliter. Heck, it’s a lot easier for a *physician* to make sense of.

What’s more, this is a quantification of *risk*, not a straight read of a biological level. That’s a very different thing, and much closer to what we actually want to know. We don’t want to know our blood glucose level; we want to know what our blood glucose level says about our health and our risk of disease. The closest physicians can usually get us to *that* number is to point to general-population figures - in the case of diabetes, that 12% risk for the population at large.

What PreDx represents, then, is the move from a general risk to a personalized number. This isn’t the abstract application of population studies to an individual; it’s the distillation of *your* markers, using statistical analysis to arrive at an individual risk factor.

So I find this pretty compelling. Tethys is developing other predictive diagnostics, for cardiovascular disease and bone disease, but as far as I know there aren’t many similar predictive tests out there, for any disease. As mentioned, we have the genomic associations, which are coming along.

Anything else somebody can clue me in on? Lemme know…

Published by: tgoetz on June 25th, 2008 | Filed under Disease, Technology, obesity, algorithms
2 Comments »