Beware the creeping cracks of bias
Alarming cracks are starting to penetrate deep into the scientific edifice. They threaten the status of science and its value to society. And they cannot be blamed on the usual suspects: inadequate funding, misconduct, political interference, an illiterate public. Their cause is bias, and the threat they pose goes to the heart of research.
Bias is an inescapable element of research, especially in fields such as biomedicine that strive to isolate cause-and-effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized. Yet if biases were random, then multiple studies ought to converge on truth. Evidence is mounting that biases are not random. A Comment in Nature in March reported that researchers at Amgen were able to confirm the results of only six of 53 'landmark studies' in preclinical cancer research (C. G. Begley & L. M. Ellis Nature 483, 531–533; 2012). For more than a decade, and with increasing frequency, scientists and journalists have pointed out similar problems.
Early signs of trouble were appearing by the mid-1990s, when researchers began to document systematic positive bias in clinical trials funded by the pharmaceutical industry. Initially these biases seemed easy to address, and in some ways they offered psychological comfort. The problem, after all, was not with science, but with the poison of the profit motive. It could be countered with strict requirements to disclose conflicts of interest and to report all clinical trials.
Yet closer examination showed that the trouble ran deeper. Science's internal controls on bias were failing, and bias and error were trending in the same direction: towards the pervasive over-selection and over-reporting of false-positive results. The problem was most provocatively asserted in a now-famous 2005 paper by John Ioannidis, currently at Stanford University in California: 'Why Most Published Research Findings Are False' (J. P. A. Ioannidis PLoS Med. 2, e124; 2005). Evidence of systematic positive bias was turning up in research ranging from basic to clinical, and on subjects ranging from genetic disease markers to testing of traditional Chinese medical practices.
How can we explain such pervasive bias? Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated, but the necessary cultural change is incredibly difficult to achieve.
Researchers seek to reduce bias through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems. The consequences of this strategy have become acutely apparent in mouse-model research. The technology to produce unlimited numbers of identical transgenic mice attracts legions of researchers and abundant funding because it allows for controlled, replicable experiments and rigorous hypothesis testing: the canonical tenets of 'scientific excellence'. But the findings of such research often turn out to be invalid when applied to humans.
A biased scientific result is no different from a useless one. Neither can be turned into a real-world application. So it is not surprising that the cracks in the edifice are showing up first in the biomedical realm, because research results are constantly put to the practical test of improving human health. Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry's profits depend on the results of basic biomedical science to help guide drug-development choices.
Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results.
It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems: economics, ecology, environmental science, epidemiology and so on. The cracks will be there; they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).
Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves. Useful steps to deal with this threat may range from reducing the hype from universities and journals about specific projects, to strengthening collaborations between those involved in fundamental research and those who will put the results to use in the real world. There are no easy solutions. The first step is to face up to the problem before the cracks undermine the very foundations of science.
2012-05-15 02:41 PM
Systematic bias may also be compounded in large, complex projects in which the main body of work is done by graduate students and postdoctoral researchers. Mistakes made by junior researchers early in the project cycle may not be detected until much later (if at all), and in some cases not until after the original investigator has left the research group. It is therefore important that senior investigators work hard to manage complex group dynamics, build mechanisms to ensure continuity and provide strong support for junior researchers.
2012-05-14 01:30 PM
I have read Daniel Sarewitz's column on the creeping cracks of bias with great interest. Especially important, in my view, are his words about the "complexity in which scientific results must be applied to solve problems". We are not able to translate our knowledge from the lab to the bedside. It reminds me of a comment by Francis Collins, the Director of the NIH, published in the June issue of Nature Reviews Drug Discovery ('Mining for therapeutic gold'). Collins criticizes the current translational process, which is, according to him, fraught with frustration, and offers a new direction called 'repurposing'. Simply put, if you have a drug and a patient taking that drug for her illness A, the drug may also accidentally cure another illness (illness B) of the same patient. By building a new type of pharmacovigilance (cf. Boguski et al., Science, 2009) we can monitor the adverse effects of all currently used drugs. This kind of clinical evidence represents the real-world complexity Sarewitz writes about. Basically, we don't know the precise mechanism of action of those repurposed drugs in the new diseases they would be used for. But we can test their efficacy in clinical trials even without repatenting them and, in the case of success, dramatically lower the cost of our health-care systems (cf. Cvek, Drug Discovery Today, 2012).
2012-05-14 11:36 AM
While I would applaud Dr. Sarewitz for addressing this issue, I'm a little dismayed that the most obvious source of this creeping bias is completely ignored. The problem here isn't with science; it is with politics. Our funding is controlled politically. It is increasingly subject to the whims of any innuendo-slinging lunatic who may wish to vault to prominence on the political stage.
We are supposed to be "objective" scientists, yet we don't hesitate, when search committees are formed, to rely upon completely arbitrary criteria to winnow the field. We all bemoan how ridiculous the impact factor is, yet it is, without doubt, the overriding criterion for all scientific funding. The history of science is replete with stories of how discoveries that would lead to Nobel prizes were dismissed by the "top" journals for their "lack of novelty", or any number of equally ridiculous excuses for why legitimate, groundbreaking science should be relegated to less prominence.
There are a number of simple solutions here. While it would be impractical to expect to divorce scientific funding from politics completely, we need to eliminate the pretense that once the money has been allocated, it is free of such influence. It is unscientific, unobjective political criteria that drive science funding throughout the grant review process. In the life sciences, if you can't convince the reviewer that you may be able to provide direct insight into the components of a disease process, you might as well save the time you would spend writing. Perhaps there is no greater example of the inanity of the current system than the likely universally acceptable statement that many of the orphan GPCRs are of undoubted biological importance, yet the ability to get a grant funded to study one would depend on first linking it to a "biologically relevant" problem.
Any objective scientist can, I think, immediately recognize that this is not the way science should work. We cannot predict what will be most important in the future based on what we already know. If we could, science would find the end point of all potential human knowledge pretty quickly. We might arguably have no human genome sequence at this point in history if the scientists of the 1960s and 1970s had had their funding tied to the same ridiculous, arbitrary political criteria that exist today.
Finally, I would like to point out that diversity in the approach to scientific questions is achieved by the employment of a diversity of researchers. The problems outlined above create a snowball effect by continually winnowing the field of scientists only to those who manage to fit the increasingly stringent, increasingly arbitrary constraints imposed by the politicization of scientific funding. If you want to discover the culprit for the problem of creeping bias, perhaps no more obvious source could be named.
The solution is to enshrine into law a requirement that once funding is allocated to the duly appointed agencies, it will be the scientific community as a whole that will be the sole arbiter of how best to allocate the available funds. Furthermore, I would argue that a significant majority of the available funding should be allocated first to funding people. The number of permanent, tenured positions should be dramatically increased. Each tenured position should be able to rely upon an adequate, if modest, level of funding allowing the researcher to pursue any course of research they consider worthy. The remainder of available funding could then be apportioned, in a grant review process similar to the current regime, for projects requiring excess funds that are found suitably worthy by a scientific consensus. A scientific consensus free of the whims of the political elite, and of those for whose votes they feel the need to pander.
2012-05-13 07:01 PM
The majority of scientific discoveries made to date came prior to the 1950s, and what is presently being done is finding their applications (usually roaming around their peripheries): this is an established fact and claim. Thereby, lingering around established facts is safer, while those with the power of networks and positions tend to talk up their discoveries and stories (media, press, podcasts and so on!). It is easy to find that a few PIs (and their labs) tend to go on a publicity rampage while many tend to hold back or hide their research. It is not uncommon to find students and PIs calculating and citing the impact factor of their so-called publications to several decimal places and boasting about being the greater researcher. Even a science-career starter (for example, myself) would notice that there are institutions, both at home and around the world, which work day and night to satisfy the needs of funding agencies rather than making a conscious effort to honour the taxpayers' hard-earned money. A majority of PIs have, in due course of time, changed their field of research, are not aware of developments in the new area, do not work at the bench anymore and depend on the system in place to guide them through (basically unaware of the bench-level goof-ups by a doctorate/MS/postdoc). Collaborative efforts and convincing abilities within laboratories (and among lab members) are non-existent, and lack of communication between laboratories and departments working on similar areas is rife. Retractions based on falsification of data, plagiarism and manipulation are seldom pursued, and when they are, we ignore them, especially the whole network of people involved who have already cited this work in their publications, or those who climbed the academic ladder on the basis of these scientific discoveries and publications. Bureaucracy, hierarchical systems, an unaware public, top-tier personnel in funding agencies, perks based on patents leading to promotions, jobs and positions!
No wonder business houses, firms and industries tend to be addressing more worthy research. Further relevant questions also apply to the allocation of funding to appropriate and worthy institutions. For example, with the mushrooming of research institutions, universities and researchers, where is the QC, or a panel, to decide which projects to fund (the more realistic ones) and which not to fund at all? We have all come across graduate students who insist that when they treat with substance M, the effect will be there, and then all positive results lead towards the validation of claims, and negative results towards the framing of a new hypothesis altogether. How many times do we see funding agencies sponsoring grant proposals on the effect of hexavalent Mo ions on the root hairs of primary roots of Japonica rice under N-, P- and K-depleted conditions in a rainy season, or, for that matter, on biodiversity in the tropical rain forest of Borneo? How much potential do these two studies hold in terms of deliverables? No wonder they deliver facts based on reclamation of previously established findings. Are such questions related to a lack of guidance and mentorship from intellectual, knowledgeable mentors, or a reflection of fraudulence in the younger generation of researchers? Of course my opinion reflects wider issues than just the point of discussion in the above comments, but these are intricately related as some of the potential reasons leading to this misery in R, D & T. Moreover, though we cannot tar everyone with the same brush, the tendency towards this bias and negative or useless results is growing alarmingly, and is a bigger social stigma that we all have to suffer through at some point in our lives.

@Banhatti: Your point about astrophysicists assuming that the mass is concentrated in one point in galaxies is completely ridiculous. The mass distribution is indeed calculated from the observed star distribution. And this distribution happens to be strongly peaked at the core, but it is not point-like. Nevertheless, dark matter is also observed in elliptical galaxies, galaxy clusters and large-scale structures. Using this example as an indication of scientific bias is silly, to say the least.
2012-05-10 09:21 PM
I appreciate the forceful discussion of bias, but the notion that "useless" research is "no different" from biased research because neither can be applied is a serious mistake. One might as well say that a newborn baby is no different from an ill and bedridden adult, because neither can hold a full-time job. Or that applied research is no different from biased research because neither improves our fundamental understanding of nature. He should stick to offering good arguments for such research, as he has in the past, rather than spuriously conflating basic research with "biased" research.