Tuesday, December 18, 2012

Should Human Genes Be Patented?


Recently, there has been much controversy over whether it is legal for human genes to be patented. Although genes have been patented in the past (roughly 20% of all human genes have been patented over the past 30 years), the case concerning Myriad Genetics’ patents on the BRCA1 and BRCA2 genes has given the Supreme Court a landmark opportunity to rule on whether any patent on any human gene is legal. The Yale Student Science Diplomats discussed this case, now known as Association for Molecular Pathology (AMP) v. U.S. Patent and Trademark Office (USPTO), and its potential implications with Prof. Daniel Kevles of the History Department and the Law School. The discussion was titled “Human Genes and Human Rights.”
During the discussion, Prof. Kevles provided the Diplomats with a detailed history of gene patenting, as well as the specifics of the case against BRCA1/2. These genes have been linked to hereditary breast and ovarian cancer: up to 8% of women with breast or ovarian cancer carry mutations in BRCA1/2. The story began in 1990, when Mary-Claire King mapped the BRCA1 gene to chromosome 17. A race quickly ensued to pinpoint and sequence the gene, which Myriad Genetics won in 1994, and again in 1995 for BRCA2. Myriad applied for seven patents on these two genes in 1997 and 1998 and received them in 2001. Just a few weeks ago, the Supreme Court accepted claims against these patents for review. However, the legal history of this case dates back to 2009, when the American Civil Liberties Union (ACLU) and the Public Patent Foundation filed suit against the USPTO and Myriad Genetics. This was the ACLU’s first patent case, and it drew enormous interest from various groups: the plaintiffs were patients, physicians and medical researchers who claimed to be disadvantaged by these patents, while the defendants were backed by biotech and trade associations who claimed that the patents were necessary to stimulate progress in biomedical research.
It is important to note that Myriad does not hold patents on the naturally occurring genes in the body, as only a product that is “markedly different” from a product of nature can be patented; this principle was established by a 1911 ruling that upheld a patent on adrenaline isolated from the body in crystallized form, and by a 1980 ruling that upheld a patent on a genetically modified bacterium. Rather, Myriad’s BRCA1/2 patents cover (1) the isolated DNA of the genes, (2) fragments of the genes to be used as probes for sequence identity, and (3) a diagnostic test for comparing an individual’s genetic sequence with known mutations/variants associated with breast and ovarian cancer, in which the holder of the gene patent receives a royalty for each administered test. These patents give Myriad the right to exclude all others from using its “invention”; only Myriad can conduct the BRCA1/2 diagnostic test and disclose the results to a patient. Because of this monopoly, Myriad charges $3500 for the diagnostic test, which some health insurance plans will not cover. Furthermore, a patient cannot ask for a second opinion because Myriad claims that its diagnostic test is the “gold standard,” and clinicians and researchers cannot develop new diagnostic tests or even evaluate the accuracy of Myriad’s test.
For these reasons, the ACLU claimed standing to sue based on the technicalities of the test, as well as on a violation of human rights. Regarding the diagnostic test itself, Title 35, Section 101 of the U.S. Code states that a patent can be awarded for a new and useful machine or manufacturing process, an improvement on such a process, or a new composition of matter. Myriad claims that its patent on the isolated DNA is in fact for a new composition of matter because the ends of the DNA are altered slightly upon extraction. The counterargument is that this does not matter because the base pair identities are still the same in the isolated form, and this base pair information is what is important for the diagnostic test. Regarding the human rights claim, the ACLU argues that holding a monopoly on this diagnostic test denies patients fundamental information and violates the 1st Amendment. Furthermore, the patent restricts progress in research on these genes.
In March 2010, Judge Robert Sweet ruled in favor of the plaintiffs, reasoning that there was no actual process involved in the diagnostic test; rather, it was simply a “mental act” of comparing an individual’s BRCA1/2 sequence with other DNA sequences known to be associated with breast and ovarian cancer. The patent therefore did not cover a new composition of matter and was thus invalid. Myriad appealed this ruling, and in 2011 a three-judge panel of the Court of Appeals for the Federal Circuit (CAFC) ruled again: they also said that the diagnostic test was not patentable; however, they overturned Sweet 2 to 1 on the patentability of the isolated DNA as a new composition of matter, and thus this aspect of the patent was upheld. The ACLU then appealed to the Supreme Court in early 2012; at the time, the Supreme Court did not take the case but instead asked the three judges to reconsider their ruling in light of another recent case, Mayo v. Prometheus, which disallowed a patent on the process of administering a drug and then measuring changes in a metabolite; that case concluded that anything that retards the progress of science cannot be patented.
Prof. Kevles explained to the Diplomats the importance of understanding the backgrounds of the two CAFC judges who ruled against Judge Sweet and of the one judge who upheld Sweet’s ruling. The first judge who ruled against Sweet, Judge Alan Lourie, is a former chemist (I’d never heard of a scientist turned judge, so this was interesting for me to hear!). He determined that the “expansive issues” (i.e., the human rights issues) should be excluded from consideration, and that the patentability of DNA should be treated like that of any other chemical molecule. The second judge, Judge Kimberly Moore, is a former electrical engineer (!) and also said that the isolated DNA was patentable because it has such an obvious use for the biotech industry. Lastly, the third judge, Judge William Bryson, who upheld Sweet’s ruling, used to work in the Department of Justice and stressed the importance of the human rights issues associated with the case, as well as the restriction of the progress of science.
Now that the Supreme Court has agreed to examine this case, how should they rule? The main issue is whether isolated DNA is considered a new composition of matter and can be patented. The patent prevents anyone besides Myriad Genetics from making, using or selling information concerning the isolated DNA of the BRCA1/2 genes and any mutations, variations or rearrangements of this DNA.  There are many stakeholders in this case: on the one hand, competition in the biotech industry can be strengthened with the security that research findings can be patented (and more competition should fuel better research); on the other hand, patients do not have proper ownership over their own medical information, and other medical researchers who may be studying BRCA1/2 may be forced to halt their research due to issues with violating Myriad’s patents.
Prof. Kevles explained that this case boils down to property rights vs. human rights, and that these patents have so far only benefitted the biotech industry rather than the greater good of cancer research and diagnosis. He explained that this case has much more at stake than a patent for a new pharmaceutical because you can always develop another drug; DNA, however, is by nature “unsubstitutable,” and you cannot “invent around it.” It is also interesting to note that Myriad has had difficulties obtaining patents in Europe, as EU law states that a patent cannot be awarded if it is “contrary to public order and morality.” Prof. Kevles also mentioned that many biotech companies have ownership over other genes, but these companies issue licenses for others to research these genes and have not experienced the problem that Myriad now faces. However, I would be curious to know if these genes are simply “less interesting” or “less controversial” than Myriad’s BRCA1/2. Or is it truly just as profitable to accrue licensing fees as to hold a patent monopoly on a gene?
It is also worth noting that whole-genome sequencing is now actually cheaper than Myriad’s diagnostic test (and the price keeps decreasing, although sequencing cost more before this patent battle started), so any trained scientist could hypothetically sequence BRCA1/2 (and every other gene) in an individual’s DNA and compare it to the published sequences readily available online. The problem, however, is that only Myriad Genetics knows which variants of these sequences are associated with disease (without other researchers confirming that the research on these variants is scientifically sound). The nature of scientific research is to have a transparent, peer-reviewed evaluation of your work, and the patents get in the way of this entire process and undermine the foundation of how research is conducted and validated. Scientific research, especially critical research on cancer diagnostics, is for the betterment of society as a whole, and no company or other entity should have a monopoly on this process. In addition, the civil rights arguments of this case are extremely relevant and should not be ignored; in today’s society, there should be no question that a patient has the right to all of his/her medical information obtained using the best diagnostic tools available.
Still, it seems that some kind of decision is needed that will not allow a similar case to be brought to the Supreme Court in the future. As Prof. Kevles said, Myriad does not hold these patents just to be “evil”; the company has a reason for doing so that it feels is valid. Every biotech company has the right to make a profit from its research, and patents may seem like a secure way to protect those investments for 20 years. However, this case has become so notorious because the genes in question have been linked to breast and ovarian cancer (I’m sure this would not be an issue if Myriad were studying plant genes, for example). I believe that the Supreme Court should decide that different rules need to apply in situations where human health is at risk, and thus genes that can be used as cancer diagnostic tools should not be patented; this is the only way to allow for the progress of scientific research and of our society as a whole. However, along with this ruling comes another Pandora’s box regarding healthcare and insurance coverage for the information associated with an individual’s personal genetic sequence.
This landmark case will be addressed in June 2013, so stay tuned for the Supreme Court’s ruling!

Thalyana Smith-Vikos

Wednesday, December 12, 2012

What Does "Falling Off the Fiscal Cliff" Mean for Research?


Scientists are always worried about their funding. It’s the nature of the job; while most scientists would love to spend all of their time on experiments, reality dictates that they have to spend a significant amount of time writing grant and fellowship proposals. Faculty compete with their peers for large grants, and at the same time postdocs and grad students are competing for fellowship awards. For many if not most scientific disciplines, the primary source of these funds is the federal government. Thus, when news comes from Washington that the pool of money could shrink precipitously - as it has in late 2012 - the stress and worry become amplified.
How did we get here? Congress has not been able to work out a budget the way it has in the past. Part of the political difficulty comes from members of Congress having different electoral incentives to vote for or against budgetary measures (repeal of the Bush tax cuts, spending cuts, etc.). This led to the debt ceiling crisis and subsequently to the Budget Control Act of 2011, which contains provisions for automatic, across-the-board spending cuts called sequestration. These cuts were designed as a “Sword of Damocles” to hang over Congress’s head during the 2012 session, giving members of Congress impetus to act on a budget either through traditional legislative negotiations or through the “super committee.” Despite their intent, both avenues failed to produce budget legislation, and the sequestration cuts are slated to go into effect January 2, 2013, if this month’s lame-duck Congress does not come to an agreement with President Obama (and if both parties decide not to punt to a later date). Currently, Obama and Speaker of the House Boehner are conducting negotiations on budget measures that could be put to a vote before sequestration takes effect. Much of the media coverage has focused on the broad economic consequences of sequestration or on the fight over taxes and entitlement programs, but I would like to focus on what is at stake for the scientific enterprise.
Sequestration calls for 8.2% cuts to be distributed across both defense and non-defense discretionary spending, with only a few programs spared, such as Medicare and Social Security. Federal funding for science-related research across all agencies would face a $3.9 billion cut in 2013 alone. Two of the primary federal funding agencies for universities, the National Institutes of Health (NIH) and the National Science Foundation (NSF), would face cuts of $2.5 billion and $586 million, respectively. The director of the NIH, Francis Collins, has said that his agency would be unable to award about 2,300 grants in 2013 that it otherwise would have granted. A report by Research!America puts the economic toll of the NIH cuts in human terms: 33,000 jobs and $4.5 billion in economic activity lost. Cuts to the NSF would result in 19,300 researchers, students and technicians no longer being funded.
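As a rough sanity check (a back-of-the-envelope sketch of my own; the base budgets below are approximate figures I am assuming, not numbers from the agencies), applying the flat 8.2% rate to the NIH and NSF budgets lands close to the reported cuts:

# Back-of-the-envelope check of the sequestration figures.
# The base budgets are assumed approximations, in billions of dollars.
SEQUESTRATION_RATE = 0.082

budgets = {"NIH": 30.7, "NSF": 7.0}

for agency, budget in budgets.items():
    cut = budget * SEQUESTRATION_RATE
    print(f"{agency}: ~${cut:.2f} billion cut from a ~${budget:.1f} billion budget")

# Output:
# NIH: ~$2.52 billion cut from a ~$30.7 billion budget  (reported: $2.5 billion)
# NSF: ~$0.57 billion cut from a ~$7.0 billion budget   (reported: $586 million)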
The anxiety in the scientific community is palpable. The funding climate is already tense after the one-time infusion of funds from the 2009 stimulus dried up. Here in New Haven, the Yale Daily News recently took the temperature of Yale faculty who are facing the effects of a potential fiscal cliff:
“I think we are all terrified,” said Chris Cotsapas, assistant professor of neurology and genetics at the Yale School of Medicine. “If I don’t bring money in, then I can’t pay the people in my lab, and I can’t pay my salary. It’s kind of that simple.”

83% of Yale’s federal research funding comes through the NIH, and even though Yale is an elite research institution, nobody will be immune to the effects of significant budget cuts. MIT projects a loss of $40 million in research revenue. Undoubtedly, a prolonged sequestration would have dire effects on graduate and undergraduate education.
But beyond the economic impact on universities, their researchers, and the local coffee shops and retail stores where their salaries are spent, there is also the loss of innovation and new knowledge that comes from the research enterprise. Basic and translational science funded by the NIH provides avenues for drug development by the pharmaceutical and biotech industries. NSF-funded research enables new technologies for clean energy. Researchers create new inventions that can be patented by universities and brought into incubator startups or acquired by larger companies. If sequestration takes effect and Congress does not restore the funds, research-fueled innovation and invention will inevitably slow and sputter across many industries. New life-saving therapies that otherwise would be developed in startups and brought to clinical trials over the coming decade could be lost. Throw in cuts to the Centers for Disease Control (CDC, $490 million) and the Food and Drug Administration (FDA, $319 million), and the health and well-being of the nation becomes an even bigger concern.
The good news is it doesn’t have to be this way. The scientific community is organizing to make its voice heard on Capitol Hill. Groups such as Research!America, the Coalition for Life Sciences (CLS) and the Federation of American Societies for Experimental Biology (FASEB) provide opportunities for scientists to learn more about the legislative process, email their members of Congress, or even meet them on Capitol Hill. Perhaps the research community is a few years behind the business community and other constituencies in developing these relationships. It’s time to catch up. If you are a faculty member, postdoc, grad student or technician funded by a federal research award, call your member of Congress and explain what sequestration means for your career and your livelihood.


Kenneth Buck 
PhD, Dept. Molecular, Cell and Developmental Biology

Friday, December 7, 2012

Evaluating Obama’s commitment to science


Election promises are a crucial part of a candidate’s platform in the race to the presidency. Due to the high-stakes nature of presidential elections, candidates often overstate their promises in order to stand out and gain support. This often breeds public doubt, increasing apathy towards voting and ultimately decreasing voter turnout. One way to gauge a leader’s commitment to their promises is to look at their track record and evaluate their relevant past performance. In this light, the members of the Yale Science Diplomats set out to evaluate the science-related promises made by President Barack Obama during his second presidential campaign by reviewing his contributions to science during his first term.

From the beginning of his presidency, it was clear that science was of high importance to Obama’s administration. Immediately, Obama appointed esteemed scientists, such as Nobel laureate Steven Chu, to lead the agencies overseeing US research and development. A major effort was made to protect scientific findings from political manipulation, and an early-term promise that “political officials should not suppress or alter scientific or technological findings and conclusions” was largely kept, although with some exceptions.

Where science lay on Obama’s list of priorities was made even clearer when the stimulus bill was signed in 2009. A surprisingly large proportion of the bill was devoted to science, such that individual institutions received significant boosts in funding. Furthermore, Obama’s administration launched many innovative projects, such as the Advanced Research Projects Agency-Energy (ARPA-E), which funds high-risk energy research, and the National Center for Advancing Translational Sciences (NCATS), which aims to expedite drug development. Obama also set out to establish the first greenhouse-gas regulations, and even overturned federal restrictions on funding stem cell research.

Although Obama’s commitment to science has proven to be strong, there are aspects of his agenda that could still use improvement. For instance, Obama has shown opposition to many of NASA’s space programs, notably eliminating the Constellation program, which aimed to return astronauts to the Moon. Moreover, the administration stumbled in its response to the Deepwater Horizon oil rig disaster by severely underestimating the quantity of oil that spilled into the Gulf of Mexico, leading many to question the administration’s earlier promise that scientific findings and political agendas would remain separate.

Overall, while not flawless, Obama’s dedication to science and scientific integrity, as demonstrated by his performance during his first term, is reassuring. Witnessing his advocacy for science convinces us that Obama is sincere in his promises to further prioritize science, and it will be exciting to see how his science policies evolve over time.


Yevgeniy Serebrenik
2nd year, MCDB (Molecular, Cellular, and Developmental Biology)

Monday, November 26, 2012

When n=1: How can we encourage scientific reproducibility?


High-profile reports of bacteria that incorporated arsenic instead of phosphorus and of a particle traveling faster than the speed of light have not fully stood up to scientific scrutiny. The inability of other scientists to replicate these widely publicized experiments has brought increased attention to the reproducibility of scientific experiments.
Adding fuel to the fire, Amgen scientists reported that they could not repeat 89% of the published findings they investigated on promising targets for cancer therapeutics. These events have led to outrage that public dollars are being spent on such poor research, and there have been a number of proposals for ways the scientific community can maximize the reproducibility of published results. These include additional ethics training for students and young investigators and the standardization of guidelines for publication. Unfortunately, the lack of reproducibility too often gets conflated with the more serious issues of carelessness and fraud. While both poor science and (in rare cases) outright fraud contribute to the publication of work that cannot be reproduced, there are other issues to consider.

It is important to be clear that the failure of one lab to replicate the work of another is not the same as proving the original work false. Many bench scientists struggle to reproduce results, both published ones and those from within their own labs. When dialogue is open between the two scientists performing the experiments, it is usually easy to see where miscommunication or a lack of detail in a protocol has led to a different result. In my opinion, a vital step in reducing issues with reproducibility is to encourage the publication of detailed protocols. Far too often, Materials and Methods sections are short and among the first areas to be cut when conforming to a journal’s word limit. Instead, we should expect each published article to clarify important details, including the temperature at which experiments were performed, the concentrations of all reactants, and the equipment used for each step of a procedure. Only when replicate experiments have been performed precisely under the same conditions should the original be regarded with skepticism.
New and interesting ways of providing detailed experimental procedures have proliferated in recent years with the publication of Nature Protocols and JoVE, two repositories for highly detailed methods. Providing thorough explanations of techniques and procedures will become common practice if high-profile labs lead the way by sharing their novel methods. The NIH can encourage the use of these repositories by making procedural transparency a component of the score that determines whether a grant is funded.
There are creative attempts under way to identify irreproducible results (or, conversely, to identify reproducible ones) and so minimize the time and effort other labs spend following up on poor data. The blog Retraction Watch wants to make sure the scientific community is aware of papers that have been withdrawn or retracted. While this project aids reproducibility less directly, it helps with the larger goal of preventing researchers from wasting time trying to replicate false or incomplete experiments. The authors of the blog note in their FAQ section that there is no comprehensive database of retractions from scientific journals. While the retraction of an article may be noted in place of the original manuscript on the publisher’s website, little publicity is given to these notices. Indeed, it is hardly in the publisher’s best interest to do so.
A particularly bold group has approached this problem by founding the Reproducibility Initiative, a new resource that aims to help scientists by adding to the impact of experiments that have been reproduced. For a fee, the group will match interested investigators with researchers who will then attempt to repeat their experiments. As part of the service, the investigator has the option to publish the results of the validating experiments in an online journal (the publication is optional, so investigators may choose not to publish experiments that conflict with their original results). Validation of the initial results qualifies the original manuscript--if published in participating journals--for recognition of that reproduction. Presumably, this validation adds to the impact of the results.
How well this initiative succeeds will depend entirely on the quality of the scientists performing the follow-up experiments and on the communication between the original and replicating labs. If the follow-up lab is able to quickly replicate a result, the situation will be beneficial to all. However, if the follow-up lab cannot replicate the published result, the amount of benefit will depend on the two labs working together to determine why the replication failed. An inexplicable discrepancy helps no one, for the reasons discussed above. As a scientist, I would certainly be glad to know others could reproduce my results, but if they failed to do so, I would not necessarily trust their results over my own, given my greater knowledge of my own methods. Inexplicable discrepancies could lead to time-consuming and costly searches to reconcile the results of the two labs, which may deter scientists from using the service after even one bad experience.
How do you think the scientific community can improve the reproducibility of publicly funded research? Leave your ideas in the comments!


Irene Reynolds Tebbs 
6th year, Molecular Biophysics and Biochemistry 

Thursday, October 25, 2012

Emphasizing New Teaching Methodologies for Better Student Learning: The MORE Model and Beyond


“I can’t think of a purpose for this lab exercise.” This was a comment I received from a student this week regarding his most recent lab writeup. He’s not alone in feeling this way; the purpose of lab classes seems confined to understanding basic textbook concepts with little real-world applicability. Most students, and indeed most people, would find these exercises to be tedious and boring, yet little has been done to change the current teaching paradigm.

Now, a handful of professors have begun to rethink the concept of lab classes and to design them to teach scientific knowledge within the framework of a real-world problem. At Colorado State University, graduate student Colin Blair and professors Dawn Rickey and Ellen Fisher have designed a lab class that teaches chemistry principles through the properties and applications of gold nanoparticles. Titled “Exploring Gold Nanoparticles,” the class is based upon the MORE Framework -- Model, Observe, Reflect, and Explain -- a teaching model developed by Rickey when she was a graduate student at UC Berkeley. In the “Exploring Gold Nanoparticles” class, students are given only basic knowledge of the lab principles beforehand and are encouraged to formulate a hypothesis of what will happen based on this and any outside knowledge they have. They then perform the experiment, reflect on the results, and revise their hypotheses before progressing to the next stage of experiments, as if they were discovering principles in the field for the first time. Specifically, students learn how to synthesize gold nanoparticles, how to analyze them by atomic force microscopy, and then how to use gold nanoparticles to develop a pregnancy test that can differentiate hormone levels in different human serum samples. The professors found that students taught with the MORE model retained information better than those taught in standard lab courses. An essay describing the format and outcomes of the class recently won the Inquiry-Based Instruction (IBI) Prize from Science magazine, a monthly essay contest that highlights unconventional science classes for high school and college students in a variety of scientific disciplines.

Although “Exploring Gold Nanoparticles” is not the first class of its kind, classes like it remain rare, seemingly due to the time required to develop them. Going through past IBI winners, a major driving force in the development of these classes seems to be a single professor or a small handful of extremely dedicated professors or graduate students. We at the YSSD wondered whether this was due to some fundamental difficulty in applying the methodology. In a group session of only 10 minutes, we found that we could come up with several labs based on real-world applications, including teaching students about genetics by observing olfaction behavior in Drosophila (fruit flies), teaching students about neuroscience by looking at how electrical currents affect nerves in Aplysia (sea slugs), and teaching students about geological processes through the examination of different rocks and minerals. While not possible in all subjects, our session demonstrated that the applicability of inquiry-based instruction and the MORE framework is vast, and that the limits on their development may be a lack of awareness of this novel methodology or the time required to develop a new class.

Research has demonstrated that students learn better when taught with the MORE model and other inquiry-based instruction methodologies, yet lab classes based on these are few and far between. Nevertheless, about 10% of YSSD members reported that they had personally participated in inquiry-based laboratory classes, and 100% of YSSD members said that they would have taken an inquiry-based class had one been offered at their undergraduate institutions. Because such classes have the potential to greatly improve student learning and the general population’s understanding of science, the development of new classes should be a priority for both professors and educational institutions.

Researchers, here is something you can do: sit down with a colleague or labmate and brainstorm for 10 minutes about how some aspect of your research could be taught using the MORE Model. Then get in touch with your department’s curriculum committee or your institution’s teaching center and see if there are others with similar interests. If you distribute the development of a new class across several professors in this manner, the time requirement drops precipitously. As awareness of the MORE methodology and inquiry-based instruction in general increases, we will hopefully see more classes like “Exploring Gold Nanoparticles.”


Natalie Ma
2nd year, MCDB (Molecular, Cellular, and Developmental Biology)

Nancy Tao
3rd year, Chemistry

Monday, September 24, 2012

The dilemma of federal funding for science research


Recently I attended Capitol Hill Day, which is organized twice a year by the Coalition for the Life Sciences, an alliance of several organizations focused on science policy. I encourage all scientists (grad students, postdocs, professors) to participate in this event! During Capitol Hill Day, our group of scientists met with staff members of various senators and representatives to discuss the importance of long-term, sustainable federal funding of biomedical research. The National Institutes of Health (NIH) and the National Science Foundation (NSF) are primarily responsible for funding research at universities, but there is a threat that this funding will be reduced significantly in the next fiscal year. In a dire economy, it is difficult to make decisions about what takes precedence in terms of receiving funding from the government; however, we scientists urged Congress to understand that the situation is already very bad, and we fear that if funds are cut even further, this will result in a complete standstill of science research across the country.

The NIH and NSF fund grants for science research at institutions nationwide; without these grants, it would be impossible for a laboratory group to continue conducting research. New professors are especially desperate for these grants; the lack of funding is part of the reason why, as has been shown, only a meager 5% of science PhDs end up becoming tenured professors. In this new age of technological advancement, there are many sophisticated techniques that scientists can use to conduct their research in a thorough and comprehensive manner; however, these technologies can be very expensive, and most laboratories require multiple federal grants to cover the costs. We should not deny scientists the opportunity to conduct the best research possible, as the biomedical discoveries being made in labs across the country directly affect the well-being of us all, now and in the future. For example, I am studying the genetics of aging, i.e., factors separate from your surrounding environment that are already "encoded" within you and help determine how long you may live. As the American population continues to live longer, it is becoming increasingly critical to understand how aging actually occurs, so that we can work towards developing therapies that promote healthy aging in all individuals.

I will admit that some scientists are better than others at explaining to the general public the importance and relevance of their research, which I feel is a real misfortune because the research of biomedical scientists is directly related to improving the public's quality of life! This disconnect could be responsible for the stereotype of scientists as elitist or unapproachable (which is of course not true), and improving communication between scientists and the general public (especially those responsible for funding research) could alleviate confusion and increase awareness of the importance of our research.

Besides funding new or continuing research grant proposals, the NIH and NSF also provide institutions with training grants for PhD students. I was surprised to learn that most of the people I met with on Capitol Hill did not realize that training new science PhDs is one of the critical uses of federal funding! I have witnessed a generation of young, intelligent individuals committed to conducting science research and helping our country be a leader in biomedical discoveries; we have received federal support along this journey, for which we are extremely thankful. However, the problem we now face is that our government needs to follow up on its investments - all of these new PhDs who wish to continue conducting research and start their own labs cannot do so because of the lack of funding for new research grants. I personally feel that all this potential in current and future generations of scientists is quickly fading away.

Another example of how federal funding is used is the development of science education outreach programs. Scientists including myself have volunteered with an outreach program called Family Science Nights, an after-school program in which scientists set up demo lab experiments that elementary and middle school students can do; parents are encouraged to work through the demos with their children as well. These programs promote scientific curiosity, teach students how to apply the scientific method, and use hands-on experiments to get both students and parents interested and excited about science. The Family Science Nights also encourage students to design their own science fair project by the end of the school year and participate in the city-wide Science Fair. These science outreach programs are critical because we as lab scientists have access to materials and equipment that are simply too costly for the average public school. Additionally, the volunteers can be mentors and role models for the students, acting as real-life examples of what you could become if you study and enjoy doing science. I believe there should be many more such relationships between public schools and university scientists across the country. Nationwide, students are performing very poorly in science compared to other subjects. According to the College Readiness Benchmarks set by the standardized-test maker ACT, only 30% of high school graduates met the "benchmark" of being likely to pass a first-year college course in science without remedial classwork. It is clear that we need to act now to improve science education, starting with younger children and continuing through high school.

Lastly, I will mention that without these grants from the NIH and NSF, we will not only lose future generations of science PhDs, see many university labs shut down and cease research, and watch outreach programs in the schools dissolve, but local economies will also be disrupted on a large scale. Every science lab indirectly employs many other workers, through the marketing, production and distribution of all the products and equipment we use in the lab, through start-up companies founded on research done in the lab, and so on. Scientists do, in fact, make a large contribution to the economy.

Overall, I had a very positive impression of my meetings with the congressmen's staff; we engaged in fruitful discussions about where scientists stand regarding the importance of federal funding of science research, and where the government stands regarding how to allocate said funding. It seems that there are still many decisions left to be made before the budget for the next fiscal year is complete, so I remain cautiously optimistic that funding for biomedical research will be maintained at the highest level possible.

During the Capitol Hill Day that I attended, there were about 20 graduate students, postdocs and professors representing states from all over the country; I enjoyed meeting all these scientists with a shared interest in advocating for sustainable federal funding of biomedical research. We all had our own personal stories to explain to the congressmen's staff exactly how this funding is critical for the work we do on a daily basis, as well as how our research directly impacts the economy. A staff member from the Coalition for the Life Sciences was also present at all of these meetings to help us get our points across. For example, one of the main goals for this Capitol Hill Day was to ask Congress to protect the NIH and NSF from sequestration, which will go into effect in January unless Congress votes beforehand. Sequestration would result in a 22% cut for the NIH and a 29% cut for the NSF over 9 years, which would have an unrecoverable effect on each of our labs in particular and on the biomedical enterprise as a whole. Additionally, we stressed to Congress that federal funding for biomedical research has not increased above inflation since 2003, so we have already witnessed the impact of constricted funding. We had the opportunity to meet with congressmen's staff from our state as well as from a few neighboring states; although there was limited time to get our points across, I really enjoyed my discussions with the staff members, who all seemed very receptive to our cause and interested to hear our personal accounts. It was very clear to me, though, that without the Coalition for the Life Sciences organizing and facilitating all of these meetings, it would have been extremely difficult for me to have these discussions at all.

Scientists today need to include advocacy in their job description; we all need to spend more time being involved with programs like Capitol Hill Day or serving as grassroots advocates for the Coalition for the Life Sciences, where we can make our voices heard and ensure that the funding for our research will still be here for years to come. During Capitol Hill Day, I had the pleasure of listening to a briefing by Dr. Siddhartha Mukherjee, who presented a historical perspective on how cancer research has changed over the years, which is also described in his book, "The Emperor of All Maladies". At the conclusion of his talk, Dr. Mukherjee stressed to the audience the importance of funding for research to help address issues like the costs of personalized therapies, the development of better clinical trials, and the training of young scientists. He made it clear that all of the research he described, as well as any future prospects of continued biomedical research, would not have been possible without scientists advocating for NIH and NSF funding. Following in Dr. Mukherjee's footsteps, I have invited one of the senators from my state to visit our university's laboratories and see the research we are conducting; I hope this can serve as an example of how to solidify the relationship between scientists and Congress now and in the future.

Thalyana Smith-Vikos
4th year, Molecular, Cellular and Developmental Biology
Check out her personal blog at thebiologyblogger.blogspot.com

Tuesday, September 4, 2012

Join the "We Are Research" Campaign


It’s no secret that NIH funding for the upcoming fiscal year is far from secure. As graduate students, we are shielded from much of the stress and effort required to fund the labs we work in, but as future scientists, we know that responsibility will be on our shoulders soon enough. 

If you want to ensure funding for biomedical research continues even as the United States faces difficult budget decisions, this is the time to get involved. 

The American Society for Cell Biology (ASCB) is sponsoring a campaign called “We Are Research” to give Congress a glimpse into the lives of the people performing the research they fund. To participate, all you have to do is take a photo of your lab with a sign proudly proclaiming “We are research!” and submit it to the ASCB website. Once you’re there, take a look at the other suggestions for additional ways to get involved.

To learn more about the current debate over the budget for the coming fiscal year, take a look at the updates on the Coalition for the Life Sciences (CLS) website. Consider joining the CLS to get timely updates on science policy issues and to learn about the outreach opportunities they sponsor as well.

Irene Reynolds Tebbs
6th year, Molecular Biophysics and Biochemistry 

Wednesday, July 25, 2012

Book Review: “How Economics Shapes Science”



As I traveled up and down the East Coast interviewing at graduate schools, the professors interviewing me at each university repeated the same bit of advice on how to succeed in graduate school: “You’ve just got to really, really, love your work.”

While I have no doubt this advice was offered up with the utmost sincerity, my experience thus far has convinced me that succeeding in science requires a bit more than pure passion. It requires luck, and also quite a few expensive resources.

Paula Stephan agrees wholeheartedly in her 2012 book, “How Economics Shapes Science.” The book clearly lays out how money influences every step of the research process—and, by extension, every stage of a scientist’s career. Money influences who attends graduate school, who takes a post-doctoral fellowship, who tries and who succeeds in obtaining a tenure-track professorship, and who will have a productive, long-term research career.

Stephan is an economist by training, and writes in the Preface that her book is intended for a large audience—including other economists and policy makers, but also scientists. She makes good on her claim by writing in clear, jargon-free language. Presumably for the benefit of the economists and policy makers, she tries hard to paint a realistic picture of lab life across disciplines, from mathematics to biology—and she does rather well.

For the laboratory-trained scientist, it is tempting to assume that the chapters describing our everyday life will not hold many surprises, but the details of the importance and distribution of these resources are quite interesting. I was fascinated to learn, for example, that Johns Hopkins University has a core facility that allows its researchers to order any mouse model they need—even one that must be made on demand. Scientists at lower-tier universities working on similar research questions would have a difficult time competing with labs at Johns Hopkins that take advantage of this service! At the very least, these chapters are worth skimming for the thoughtful sections at the end of each on how public policy should maximize the utility of each resource.

The remainder of the book describes who is performing research in the United States and how the money that funds research is distributed. Stephan returns to several questions in multiple chapters of this book, but I found two particularly interesting.

The first is: should the majority of publicly funded research be performed in a university setting? While it is clear Stephan values research universities highly, she makes a number of arguments for research also occurring at non-degree-granting institutions. The most obvious of these is that university labs graduate far more students each year than there are available faculty positions. Stephan argues that using tax dollars to train students—particularly in the early years, when little time is spent performing research—is inefficient if those students are forced by a scarcity of professorships into careers as high school teachers or technical writers. One can certainly make the counterargument that graduate training prepares students to excel in alternative careers, but it is difficult to determine whether this justifies the heavy investment by taxpayers.

A second, and even more central, question running through this book is whether we are funding research properly. Here again, there are more questions than answers, but after reading this book I was convinced that our current situation is far from optimal. I was previously of the over-simplistic mindset that scientists should always lobby for additional research funds: more money should mean more—and higher quality—research.

But Stephan challenges her readers to think about additional funding concerns. In particular, Stephan refers frequently to the doubling of the NIH budget between 1998 and 2003. The rapid influx of money led universities into building and hiring trajectories they couldn’t sustain as funding levels flattened out. To prevent these and other unintended consequences from fluctuations in funding, Stephan argues strongly for long-term consistency in research support from year to year. 
           
These tough questions, and the others raised throughout the book, are particularly meaningful as the United States continues to face a difficult financial situation. As the author points out, a country with less money to spend on research must be even more careful that each dollar is spent optimally. Anyone interested in the future of science in America will find it worth his or her time to read and learn from this book.  

Irene Reynolds Tebbs
5th year, Molecular Biophysics and Biochemistry

Wednesday, June 13, 2012

Communicating Science Effectively: The “Flame Challenge” is an example to follow!

I commend the participants in Alan Alda’s recent competition to explain what a flame is to an 11-year-old (with the small caveat that the explanations should apply to adults just as much as to children!). At the World Science Festival held in New York City last weekend, the winner of the competition was announced out of over 800 submissions from around the world. The winning video, created by graduate student Ben Ames of the University of Innsbruck in Austria, does an excellent job explaining complicated theories from chemistry and physics to a lay audience. Mr. Ames clearly worked very, very hard on this video, but I hope you’ll agree that this level of effort is worth it.

 

As a member of the Yale Student Science Diplomats, I strongly believe that scientists have an obligation to broadly disseminate their work, and explain it carefully and clearly to others. There’s a common phrase we use for this: “Communicating Science to Non-Scientists.” But I’ve always felt a bit uncomfortable with this phrase. It unintentionally implies some sort of fundamental difference between the intellectual capabilities of scientists and non-scientists. 

The reason a lay person might not understand a scientific article is usually the same reason a scientist might not understand their cell-phone contract: they don’t “speak the language” or know the relevant laws. While jargon is helpful when experts in a field communicate amongst themselves, scientists often overuse complicated terms, especially with lay audiences. We also usually forget to provide enough background information before getting to the main point. Without the right context, even things that are explained precisely and slowly are incomprehensible. It is not a matter of intelligence, but of explanation.

Making scientific concepts more accessible helps with two additional roadblocks to communicating science: boredom and apathy. If you don’t get lost in jargon, you have a better chance of recognizing why something is interesting and important (and science is usually both!). If you think something is cool and relevant, you’ll be more likely to absorb--and even use--the information you’re given. So scientists, teachers, parents, and journalists will be most effective if they speak about science in an engaging and relatable way.

Some people might reasonably argue that you just can’t make everything clear and exciting--and that attempts to do so could distract from, or even distort, the truth. Proponents of this argument are the ones who draw a clear line between the abilities of scientists and non-scientists. The scientist who perpetuates this myth might say, “Why should I put a lot of effort into explaining something when it’s just too hard for them to understand?” Meanwhile, the average adult might react, and further perpetuate the myth, by saying, “I’m not a science person, so I shouldn’t bother trying to understand this.” I concede that there are cases where it is impractical to thoroughly explain a scientific result or theory, but I think we need to embrace the hard job of balancing clarity, potency, and accuracy. In fact, it is possible to strike this balance even with something as complicated as fire.

Becky van den Honert
Psychology PhD Student

Tuesday, May 29, 2012

April Science in the News: Our Friends, Our Foes, Our Machines


“What is the weather like?”

Any person you meet can likely answer this question without batting an eyelash. But what about a machine? Although seemingly simple, understanding and responding to such an inquiry requires a certain amount of intelligence. For instance, a refrigerator might not be the best thing to consult when deciding what to wear in the morning. But Emmett Sprecher began April’s Science in the News (SITN) presentation by asking his phone this very question. A few seconds later, Siri, the personality behind the iPhone 4S, spat out New Haven’s weather forecast for the evening. How can your smartphone do what your refrigerator cannot? This was the topic discussed in the basement of the New Haven Public Library: artificial intelligence (AI).


Siri, the iPhone’s AI voice-recognition system, can answer questions about the weather, compose texts for you, play music, and even search the Internet.
Photo: Ankit Disa.

The three SITN presenters, Emmett, ThaiBinh Luong, and Christopher Bolen -- all current students or recent graduates of Yale’s computational biology program -- used an array of futuristic and real-world examples of AI to discuss what AI actually is, how it works, and what the future of AI may hold. Here, I will try to break down their answers for you (spoiler alert: no, robots probably won’t take over the world… according to Christopher, at least).

The April SITN presenters answering questions following their talk at the New Haven Public Library. 
From left to right, Christopher Bolen, ThaiBinh Luong, and Emmett Sprecher. 
Photo: Ankit Disa

Before we can answer how AI operates or how it might affect us in the future, we must answer the first question: what is AI? In the true circular fashion of academia, many computer scientists define artificial intelligence as “the science and engineering of making intelligent machines.” Emmett clarified this definition by considering examples that we encounter on a regular basis, whether or not we are aware of them. Siri, our iPhone friend, is a recognizable example, perhaps in part due to its voice. However, consider the computer opponent in video games, the Roomba (the robotic vacuum cleaner), and Watson (the computer that recently won a game of Jeopardy!); these are all machines that utilize artificial intelligence in some capacity. As Emmett explained, they all have the ability to perceive their environments and take actions that they determine will give the best chance of success. Success in this case is broadly defined; it could mean winning a video game or sucking up a crumb. In contrast, non-intelligent machines, like assembly-line robots, always take prescribed actions and do not modify their behavior based on their interactions with their environment.
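To make this distinction concrete, here is a minimal sketch in Python (my own toy illustration, not code from the presentation; the four-cell “world” and all the names are invented). A Roomba-like agent perceives its surroundings, acts on what it senses, and adapts its route based on where it has already been:

# A toy "intelligent" vacuum: it perceives a tiny world, acts, and adapts
# by remembering which cells it has already visited. All details invented.
world = {0: "dirty", 1: "clean", 2: "dirty", 3: "dirty"}

def perceive(position):
    """Stand-in for a sensor reading: the state of the current cell."""
    return world[position]

def act(position, percept, visited):
    """Choose the action that best advances the goal of a clean world."""
    if percept == "dirty":
        world[position] = "clean"  # act on the environment
        return position            # stay put and clean
    # Otherwise, prefer cells not yet visited -- behavior shaped by experience.
    unvisited = [p for p in world if p not in visited]
    return unvisited[0] if unvisited else position

position, visited = 0, set()
while any(state == "dirty" for state in world.values()):
    visited.add(position)
    position = act(position, perceive(position), visited)

print(world)  # every cell ends up "clean"

A fixed assembly-line robot, by contrast, would repeat the same motions whether or not any cell was dirty; the difference lies in sensing the environment and letting what is sensed (and remembered) shape the next action.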

Algorithm vs. Al Gore rhythm.
An algorithm is a set of rules to follow -- for instance, a flow chart (left). Al Gore rhythm (right) is probably something our former vice president doesn't have much of (just a guess!).



So, how does AI work? As ThaiBinh explained, the basic concept is not as complicated as one might think. Often, as in many video game systems, the decision-making process follows algorithms (not to be confused with any Al Gore rhythms) that are coded into the AI computer. You can think of an algorithm as a recipe or procedure that tells the machine to take action B when it encounters situation A. The success of such an AI, however, depends on the quality of these rules. Instead of fixed rules, one could also build an AI to find patterns in incoming data and make predictions about future outcomes. This is how Netflix is able to suggest movies you may like -- it finds other people with the same tastes as you and recommends the movies they enjoyed.
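The two approaches can be contrasted in a few lines of code. Here is a minimal sketch in Python (again my own invention, including all the ratings data, not anything shown at SITN): the first function is a fixed “situation A, take action B” rule, while the second imitates the Netflix-style approach by finding the most similar user and recommending what that user liked.

# A fixed algorithm: the rule "in situation A, take action B" is hard-coded.
def thermostat(temperature_f):
    """Turn the heat on below 65 F, off otherwise."""
    return "heat on" if temperature_f < 65 else "heat off"

# A pattern-based approach: no movie-specific rules are written down.
# Invented data: each user rates movies from 1 (hated) to 5 (loved).
ratings = {
    "alice": {"Alien": 5, "Up": 1, "Heat": 4},
    "bob":   {"Alien": 4, "Up": 2, "Heat": 5, "Brazil": 5},
    "carol": {"Alien": 1, "Up": 5, "Heat": 2, "Frozen": 5},
}

def similarity(a, b):
    """Score two users by how closely their ratings agree on shared movies."""
    shared = set(a) & set(b)
    if not shared:
        return float("-inf")
    return -sum(abs(a[m] - b[m]) for m in shared) / len(shared)

def recommend(user):
    """Suggest the top movie the most similar user has seen but we haven't."""
    nearest = max((u for u in ratings if u != user),
                  key=lambda u: similarity(ratings[user], ratings[u]))
    unseen = {m: r for m, r in ratings[nearest].items() if m not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(thermostat(60))      # -> heat on
print(recommend("alice"))  # -> Brazil (bob rates movies most like alice does)

Real recommendation systems are far more sophisticated, but the core idea is the same: the behavior comes from patterns in the data rather than from rules a programmer wrote for each movie.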

AI can help us clean our carpets or figure out the weather, but how far can it go? Google’s self-driving car, which uses Google Street View information and visual sensors to navigate, has already logged over 200,000 miles, and driverless car licenses have been approved by the State of Nevada. In the medical field, in addition to the growth of robot-assisted surgeries, IBM has agreed to use Watson to help doctors make medical diagnoses.

Truly, it seems AI is revolutionizing, and will continue to revolutionize, our everyday lives for the better. But we are often presented with futuristic scenarios in movies and sci-fi books of robots taking over the world and revolting against their creators (us). Don’t worry, Christopher assures us: this scenario is very unlikely. Why? Because AI is, in the end, a set of rules programmed in by a person, so if robots were to take over the world… we’d have to tell them to!





Looks like we won't have to worry about a Terminator doomsday scenario ... unless we decide we want to!




- Ankit Disa
3rd year, Applied Physics PhD