World's first medical networking and resource portal

Articles
Category: All; Cycle: March 2010
Medical Articles
Mar31
Sujok - Hand and Foot Acupressure
I am happy to know that my article on the 'quick remedy for common headache' has helped a lot of people find an easy solution for headache. Thank you to the doctors and other people who tried it out successfully. Some of my friends asked me to explain how it works. According to Sujok, the acupressure points related to the head are located on our thumb. When we apply a tight rubber band to the thumb, it restricts the blood circulation to the fingertip, and this relieves the headache, much as a remote control lets us operate a television or air conditioner from a distance. There are many pressure points on our hands and feet that correspond to different organs, and stimulating these points helps to relieve problems related to those organs.


Category (General Medicine) | Views (22178)


Mar23
SPOT FAT REDUCTION USING ELECTRO-ACUPUNCTURE
An effective electroacupuncture procedure for SPOT FAT REDUCTION was reported by Steven Aung of the University of Alberta at the ICMART congress.

He describes the technique of inserting 2-3 inch needles into fatty areas such as the abdomen and hip and reducing the volume of fat by strong electrical stimulation connected longitudinally, which he calls the 'Aung Liposuction Acupuncture' technique. He reports that this method of treating localized accumulations of excess fat is very effective, and that it is similar to the mesotherapy procedure postulated by Dr. Michel Pistor of France.

SPECIAL MEDICAL ACUPUNCTURE TECHNIQUES:
THE SECRETS OF ACHIEVING
THE OPTIMAL QI RESPONSE

Steven K.H. Aung
University of Alberta, Edmonton, Alberta, Canada

In medical acupuncture, philosophy and special techniques of therapy are very important. They enhance the optimal Qi response, which increases the efficacy of treatment for patients. The response also depends on the therapist's Qi: when the therapist has good, purified, harmonious Qi energy, the response is tremendous. That is why the response to acupuncture varies between acupuncturists and between the techniques that enhance the quality of therapy. Some ancient techniques practiced by many eminent healers have been found to increase the response immensely. Combinations of points selected according to the flow of the meridians, together with an understanding of the mind-body-spirit connection, give the most optimal effect of the therapy. These points are used in combination with other points such as the area at the top of the head, GV.20 and HN.EX.1 (x 4), and also points in the neck area such as BL.11 and BL.12 together with GV.14. Many combinations of points are used in other areas such as the wrist, ankle, abdomen, upper and lower back, etc.

In addition, there is the technique of inserting 2-3 inch needles into fatty areas such as the abdomen and hip and reducing the volume of fat by strong electrical stimulation connected longitudinally, which I call the 'Aung Liposuction Acupuncture' technique. This method is very effective in treating localized accumulations of excess fat.

ICMART 2008 Research Paper.

Note: Patient safety and sterile conditions are absolutely important. This should not be attempted by lay readers as a self-treatment procedure; it must be done by a qualified doctor or acupuncturist.


Category (Cosmetic Surgery) | Views (16763)


Mar21
Most Published Medical Research Findings Are False
Why Most Published Research Findings Are False

Published research findings are sometimes refuted by subsequent evidence, says Ioannidis, with ensuing confusion and disappointment.

John P. A. Ioannidis

Summary

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

John P. A. Ioannidis is in the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece, and Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts-New England Medical Center, Tufts University School of Medicine, Boston, Massachusetts, United States of America.

_______________________________________
Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof.


Modeling the Framework for False Positive Findings

Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.


It can be proven that most claimed research findings are false


As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 - β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 - β)R/(R - βR + α). A research finding is thus more likely true than false if (1 - β)R > α. Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 - β)R > 0.05.

Table 1. Research Findings and True Relationships
doi:10.1371/journal.pmed.0020124.t001
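To make the arithmetic concrete, here is a minimal sketch (not part of the original paper) of the PPV formula PPV = (1 - β)R/(R - βR + α) in Python; the parameter names R, power and alpha mirror the symbols defined above.

```python
def ppv(R, power, alpha=0.05):
    """Post-study probability that a claimed finding is true (single study, no bias).

    R     : pre-study odds of a true relationship; R/(R + 1) is the prior probability
    power : 1 - beta, the probability of detecting a true relationship
    alpha : Type I error rate (significance threshold)
    """
    beta = 1.0 - power
    return (1.0 - beta) * R / (R - beta * R + alpha)

# With even pre-study odds (R = 1) and 80% power, a positive finding is likely true:
print(ppv(R=1.0, power=0.80))   # ~0.94
# In an exploratory setting with R = 1:100 and 20% power, it is most likely false:
print(ppv(R=0.01, power=0.20))  # ~0.04
```

As the text states, the crossover between "more likely true" and "more likely false" occurs exactly where (1 - β)R = α.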

What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables.
Bias

First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been “research findings,” but nevertheless end up presented and reported as such, because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design, data, analysis, and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption, since typically it is impossible to know which relationships are indeed true. In the presence of bias (Table 2), one gets PPV = ([1 - β]R + uβR)/(R + α − βR + u − uα + uβR), and PPV decreases with increasing u, unless 1 − β ≤ α, i.e., 1 − β ≤ 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1. Conversely, true research findings may occasionally be annulled because of reverse bias. For example, with large measurement errors relationships are lost in noise [12], or investigators use data inefficiently or fail to notice statistically significant relationships, or there may be conflicts of interest that tend to “bury” significant findings [13]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However, it is probably fair to say that reverse bias is not as common. Moreover measurement errors and inefficient use of data are probably becoming less frequent problems, since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless, reverse bias may be modeled in the same way as bias above. Also reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance.

Figure 1. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Levels of Bias, u

Panels correspond to power of 0.20, 0.50, and 0.80.
doi:10.1371/journal.pmed.0020124.g001

Table 2. Research Findings and True Relationships in the Presence of Bias
doi:10.1371/journal.pmed.0020124.t002
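A hedged extension of the same sketch adds the bias term u from the Table 2 expression. The u = 0.10 example is an assumption chosen here for illustration; it happens to match the "true about 85% of the time" figure quoted later for a well-powered trial with 1:1 pre-study odds.

```python
def ppv_with_bias(R, power, u, alpha=0.05):
    """PPV in the presence of bias u (the proportion of analyses reported as
    'findings' only because of bias), per the Table 2 expression:
    PPV = ((1-b)R + u*b*R) / (R + a - b*R + u - u*a + u*b*R)."""
    beta = 1.0 - power
    numerator = (1.0 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# 1:1 pre-study odds, 80% power:
print(ppv_with_bias(R=1.0, power=0.80, u=0.00))  # ~0.94 with no bias
print(ppv_with_bias(R=1.0, power=0.80, u=0.10))  # ~0.85 with modest bias
print(ppv_with_bias(R=1.0, power=0.80, u=0.30))  # ~0.72 with substantial bias
```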
Testing by Several Independent Teams

Several independent teams may be addressing the same sets of research questions. As research efforts are globalized, it is practically the rule that several research teams, often dozens of them, may probe the same or similar questions. Unfortunately, in some areas, the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding, and this receives unilateral attention. The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 − β^n)/(R + 1 − [1 − α]^n − Rβ^n) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 − β < α, i.e., typically 1 − β < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term β^n is replaced by the product of the terms β_i for i = 1 to n, but inferences are similar.

Figure 2. PPV (Probability That a Research Finding Is True) as a Function of the Pre-Study Odds for Various Numbers of Conducted Studies, n

Panels correspond to power of 0.20, 0.50, and 0.80.
doi:10.1371/journal.pmed.0020124.g002

Table 3. Research Findings and True Relationships in the Presence of Multiple Studies
doi:10.1371/journal.pmed.0020124.t003
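A similar sketch covers the multiple-teams case, assuming n independent studies of equal power as in Table 3 (again an illustration, not code from the paper).

```python
def ppv_multiple_teams(R, power, n, alpha=0.05):
    """PPV when n independent teams of equal power probe the same question:
    PPV = R(1 - beta^n) / (R + 1 - (1 - alpha)^n - R * beta^n), ignoring bias."""
    beta = 1.0 - power
    numerator = R * (1.0 - beta ** n)
    denominator = R + 1.0 - (1.0 - alpha) ** n - R * beta ** n
    return numerator / denominator

# With 1:1 pre-study odds and 80% power, the PPV of an isolated positive finding
# erodes as more teams chase the same question:
for n in (1, 5, 10):
    print(n, round(ppv_multiple_teams(R=1.0, power=0.80, n=n), 2))
# 1 -> ~0.94, 5 -> ~0.82, 10 -> ~0.71
```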
Corollaries

A practical example is shown in Box 1. Based on the above considerations, one may deduce several interesting corollaries about the probability that a research finding is indeed true.
Box 1. An Example: Science at Low Pre-Study Odds

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10^-4, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10^-4. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10^-4.

Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10^-4. Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10^-4, hardly any higher than the probability we had before any of this extensive research was undertaken!
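As a numerical sanity check (not from the paper), plugging Box 1's assumptions into the ppv() and ppv_with_bias() sketches given earlier reproduces the single-study and bias figures quoted above.

```python
# Box 1 assumptions: 10 true among 100,000 tested polymorphisms, 60% power, alpha = 0.05.
R = 10 / 100_000                                 # pre-study odds = 10^-4
print(ppv(R, power=0.60))                        # ~1.2e-3, i.e. about 12 x 10^-4
print(ppv_with_bias(R, power=0.60, u=0.10))      # ~4.4e-4
```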

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true. Small sample size means smaller power and, for all functions above, the PPV for a true research finding decreases as power decreases towards 1 − β = 0.05. Thus, other factors being equal, research findings are more likely true in scientific fields that undertake large studies, such as randomized controlled trials in cardiology (several thousand subjects randomized) [14] than in scientific fields with small studies, such as most research of molecular predictors (sample sizes 100-fold smaller) [15].

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects, such as the impact of smoking on cancer or cardiovascular disease (relative risks 3–20), than in scientific fields where postulated effects are small, such as genetic risk factors for multigenetic diseases (relative risks 1.1–1.5) [7]. Modern epidemiology is increasingly obliged to target smaller effect sizes [16]. Consequently, the proportion of true research findings is expected to decrease. In the same line of thinking, if the true effect sizes are very small in a scientific field, this field is likely to be plagued by almost ubiquitous false positive claims. For example, if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true. As shown above, the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R). Thus, research findings are more likely true in confirmatory designs, such as large phase III randomized controlled trials, or meta-analyses thereof, than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information, such as microarrays and other high-throughput discovery-oriented research [4,8,17], should have extremely low PPV.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be “negative” results into “positive” results, i.e., bias, u. For several research designs, e.g., randomized controlled trials [18–20] or meta-analyses [21,22], there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes) [23]. Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) [24] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported. Regardless, even in the most stringent research designs, bias seems to be a major problem. For example, there is strong evidence that selective outcome reporting, with manipulation of the outcomes and analyses reported, is a common problem even for randomized trials [25]. Simply abolishing selective publication would not make this problem go away.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u. Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28].

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. This seemingly paradoxical corollary follows because, as stated above, the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [29]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [29].

These corollaries consider each factor separately, but these factors often influence each other. For example, investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field, further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely, the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research, enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation.
Most Research Findings Are False for Most Research Designs and for Most Fields

In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to “correct” the low power of single studies, is probably false if R ≤ 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance being true, if R = 1:10. Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30,31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias.

Table 4. PPV of Research Findings for Various Combinations of Power (1 - β), Ratio of True to Not-True Relationships (R), and Bias (u)
doi:10.1371/journal.pmed.0020124.t004
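Table 4 itself is not reproduced here, but two of the scenarios described above can be approximated with the same sketches; the bias values below (u = 0.10 for the trial, u = 0.30 for the exploratory epidemiological study) are assumptions for illustration only.

```python
# Well-conducted, adequately powered RCT: 1:1 pre-study odds, 80% power, little bias.
print(ppv_with_bias(R=1.0, power=0.80, u=0.10))  # ~0.85, "true about 85% of the time"

# Well-powered exploratory epidemiological study: R = 1:10, 80% power, more bias.
print(ppv_with_bias(R=0.1, power=0.80, u=0.30))  # ~0.20, roughly a one in five chance
```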
Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias

As shown, the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information, at least based on our current understanding. In such a “null field,” one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias.

For example, let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between “null fields,” the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases.

For fields with very low PPV, the few true relationships would not distort this overall picture much. Even if a few relationships are true, the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally, investigators have viewed large and highly significant effects with excitement, as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data, analyses, and results.

Of course, investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a “null field.” However, other lines of evidence, or advances in technology and experimentation, may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods, technologies, and conflicts may be operating.
How Can We Improve the Situation?

Is it unavoidable that most research findings are false, or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

Better powered evidence, e.g., large studies or low-bias meta-analyses, may help, as it comes closer to the unknown “gold” standard. However, large studies may still have biases and these should be acknowledged and avoided. Moreover, large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high, so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow, specific questions. A negative finding can then refute not only a specific proposed claim, but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria, such as the marketing promotion of a specific drug, is largely wasted research. Moreover, one should be cautious that extremely large studies may be more likely to find a formally statistically significant difference for a trivial effect that is not really meaningfully different from the null [32–34].

Second, most research questions are addressed by many teams, and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However, this may require a change in scientific mentality that might be difficult to achieve. In some research designs, efforts may also be more successful with upfront registration of studies, e.g., randomized trials [35]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless, even if we do not see a great deal of progress with registration of studies in other fields, the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials.

Finally, instead of chasing statistical significance, we should improve our understanding of the range of R values—the pre-study odds—where research efforts operate [10]. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above, whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test [36].

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [37], usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective, they would still be very useful in interpreting research claims and putting them in context.
References

1. Ioannidis JP, Haidich AB, Lau J (2001) Any casualties in the clash of randomised and observational evidence? BMJ 322: 879–880.
2. Lawlor DA, Davey Smith G, Kundu D, Bruckdorfer KR, Ebrahim S (2004) Those confounded vitamins: What can we learn from the differences between observational versus randomised trial evidence? Lancet 363: 1724–1727.
3. Vandenbroucke JP (2004) When are observational studies as credible as randomised trials? Lancet 363: 1728–1731.
4. Michiels S, Koscielny S, Hill C (2005) Prediction of cancer outcome with microarrays: A multiple random validation strategy. Lancet 365: 488–492.
5. Ioannidis JPA, Ntzani EE, Trikalinos TA, Contopoulos-Ioannidis DG (2001) Replication validity of genetic association studies. Nat Genet 29: 306–309.
6. Colhoun HM, McKeigue PM, Davey Smith G (2003) Problems of reporting genetic associations with complex outcomes. Lancet 361: 865–872.
7. Ioannidis JP (2003) Genetic associations: False or true? Trends Mol Med 9: 135–138.
8. Ioannidis JPA (2005) Microarrays and molecular research: Noise discovery? Lancet 365: 454–455.
9. Sterne JA, Davey Smith G (2001) Sifting the evidence—What's wrong with significance tests. BMJ 322: 226–231.
10. Wacholder S, Chanock S, Garcia-Closas M, Elghormli L, Rothman N (2004) Assessing the probability that a positive report is false: An approach for molecular epidemiology studies. J Natl Cancer Inst 96: 434–442.
11. Risch NJ (2000) Searching for genetic determinants in the new millennium. Nature 405: 847–856.
12. Kelsey JL, Whittemore AS, Evans AS, Thompson WD (1996) Methods in observational epidemiology, 2nd ed. New York: Oxford U Press. 432 p.
13. Topol EJ (2004) Failing the public health—Rofecoxib, Merck, and the FDA. N Engl J Med 351: 1707–1709.
14. Yusuf S, Collins R, Peto R (1984) Why do we need some large, simple randomized trials? Stat Med 3: 409–422.
15. Altman DG, Royston P (2000) What do we mean by validating a prognostic model? Stat Med 19: 453–473.
16. Taubes G (1995) Epidemiology faces its limits. Science 269: 164–169.
17. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, et al. (1999) Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286: 531–537.
18. Moher D, Schulz KF, Altman DG (2001) The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 357: 1191–1194.
19. Ioannidis JP, Evans SJ, Gotzsche PC, O'Neill RT, Altman DG, et al. (2004) Better reporting of harms in randomized trials: An extension of the CONSORT statement. Ann Intern Med 141: 781–788.
20. International Conference on Harmonisation E9 Expert Working Group (1999) ICH Harmonised Tripartite Guideline. Statistical principles for clinical trials. Stat Med 18: 1905–1942.
21. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, et al. (1999) Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 354: 1896–1900.
22. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, et al. (2000) Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis of Observational Studies in Epidemiology (MOOSE) group. JAMA 283: 2008–2012.
23. Marshall M, Lockwood A, Bradley C, Adams C, Joy C, et al. (2000) Unpublished rating scales: A major source of bias in randomised controlled trials of treatments for schizophrenia. Br J Psychiatry 176: 249–252.
24. Altman DG, Goodman SN (1994) Transfer of technology from statistical journals to the biomedical literature. Past trends and future predictions. JAMA 272: 129–132.
25. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG (2004) Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457–2465.
26. Krimsky S, Rothenberg LS, Stott P, Kyle G (1998) Scientific journals and their authors' financial interests: A pilot study. Psychother Psychosom 67: 194–201.
27. Papanikolaou GN, Baltogianni MS, Contopoulos-Ioannidis DG, Haidich AB, Giannakakis IA, et al. (2001) Reporting of conflicts of interest in guidelines of preventive and therapeutic interventions. BMC Med Res Methodol 1: 3.
28. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992) A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268: 240–248.
29. Ioannidis JP, Trikalinos TA (2005) Early extreme contradictory estimates may appear in published research: The Proteus phenomenon in molecular genetics research and randomized trials. J Clin Epidemiol 58: 543–549.
30. Ntzani EE, Ioannidis JP (2003) Predictive ability of DNA microarrays for cancer outcomes and correlates: An empirical assessment. Lancet 362: 1439–1444.
31. Ransohoff DF (2004) Rules of evidence for cancer molecular-marker discovery and validation. Nat Rev Cancer 4: 309–314.
32. Lindley DV (1957) A statistical paradox. Biometrika 44: 187–192.
33. Bartlett MS (1957) A comment on D.V. Lindley's statistical paradox. Biometrika 44: 533–534.
34. Senn SJ (2001) Two cheers for P-values. J Epidemiol Biostat 6: 193–204.
35. De Angelis C, Drazen JM, Frizelle FA, Haug C, Hoey J, et al. (2004) Clinical trial registration: A statement from the International Committee of Medical Journal Editors. N Engl J Med 351: 1250–1251.
36. Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218–228.
37. Hsueh HM, Chen JJ, Kodell RL (2003) Comparison of methods for estimating the number of true null hypotheses in multiplicity testing. J Biopharm Stat 13: 675–689.

Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

Published: August 30, 2005

Copyright: © 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Competing interests: The author has declared that no competing interests exist.

Abbreviation: PPV, positive predictive value


Category (General Medicine) | Views (16851)


Mar20
Robot Surgeon, coming soon at MPUH Nadiad
Business Standard
Sohini Das | 2010-03-16 01:50:00

Robot surgeons: Coming soon at Nadiad

Kidney patients have a reason to cheer: Nadiad, a sleepy town lying between Ahmedabad and Vadodara, will soon have a robot conducting urology surgeries on them that will not only offer superfine precision but will also reduce post-operation recovery time to less than half.

A non-profit 170-bed charitable hospital run by the Muljibhai Patel Society for Research in Nephro-Urology, the Muljibhai Patel Urological Hospital (MPUH), popularly known as Nadiad Kidney Hospital, is planning to buy the Rs 10 crore da Vinci robot, a four-armed US made model for intricate prostate and kidney surgeries. "We have always tried to adopt new technology, and would not like to deprive our patients of the new-age surgical experience that definitely comes with numerous advantages," said Mahesh Desai, managing trustee and chairman, department of urology at the Nadiad Kidney Hospital.

So, what are the advantages? As Desai points out: "In a computer assisted surgery, a doctor can sit at a remote place and conduct live surgery. Only recently, a doctor in New York did a surgery on a patient in Paris. In a hospital set up, an experienced doctor can preside over two or three simultaneous operations from his chamber while young doctors could man the operation table together with the robot".

On top of it, while it takes around two months to recover from an open surgery, a patient will be up on his feet in 10 days after a robotic surgery, as the scars and incisions are kept to a minimum. However, he will have to shell out around 25 times more for a robotic surgery, which can cost around Rs 5 lakh per surgery.

Nadiad Kidney Hospital is planning to extend the services of the robot to other surgical fields apart from urology, like paediatrics, gynaecology and oncology, to make the service cost-effective.

"We already conduct uro-oncological paediatric urological surgeries, we can now think of offering the service to some gynaecological clinics, or can start a uro-gynaeocology department ourselves", Desai said. He also pointed out that by 2012, similar systems on the lines of the da Vinci surgical systems would be open for other equipment makers to manufacture as the system goes off-patent. Intuitive Surgical has the monopoly over da Vinci robots now.

When asked if the hospital would consider waiting till the robotic surgery setup was available at more competitive prices, Desai said that they wanted to acquire it much sooner than 2012. "We will take up this issue at the next budgetary meeting. The management or promoters are likely to make the capital expenditure while the cost of running and maintaining the system will lie with the hospital," he explained.

Being a charitable hospital, around 68 per cent of all operative cases at the Muljibhai Patel Urological Hospital are free or subsidised, the remaining are either paid or partly paid cases.

While the cost of acquiring the system, which comes with the basic machine and around 100 robotic arms, is around Rs 10 crore, the hospital will later incur a recurring cost of replenishing the robotic arms, which cost $6,000 each and last 10 operations. The hospital is considering attracting patients from neighbouring states who will pay for availing the technology, which will not only ensure 100 per cent accuracy and precision but also reduce recovery time. This could take care of the recurring maintenance cost of Rs 10-12 lakh per annum, and the service could come free for needy patients.

MPUH had organised a live transmission of a surgery from the USC Institute of Urology, US at the hospital auditorium recently for 70 people and is upbeat on training personnel to equip them for conducting such surgeries. Nadiad Kidney hospital was one of the five international locations along with Venezuela, UK, China and Kuwait which saw a live demonstration of the da Vinci Robot at work.

Robotic surgeries, particularly in urology and gynecologic-oncology, are increasingly being promoted as the new standard of care. Hospitals are marketing it, patients are asking for it, and young physicians are expecting to be trained on it. The number of da Vinci robotic surgical systems installed worldwide has ballooned from 210 in 2003 to 1,395 last year, with 1,028 of those in the United States, according to records from Intuitive Surgical, which makes the da Vinci robot.

In India only the All India Institute of Medical Sciences (AIIMS) owns a da Vinci robot.


Category (Kidney & Urine) | Views (19854)


Mar20
Andrology Training Workshop at Nadiad Kidney Hospital
PRESS NOTE
MULJIBHAI PATEL UROLOGICAL HOSPITAL, NADIAD

ANDROLOGY TRAINING WORKSHOP

Jayaramdas Patel Academic Centre (JPAC) at the Muljibhai Patel Urological Hospital (MPUH), Nadiad, organised a 3-day Andrology Training Workshop for postgraduate doctors during March 18-20, 2010. The programme provided a comprehensive overview of all practical aspects of andrology and included live surgical demonstrations, lectures and case discussions.

Dr. Rupin Shah was the Course Director. Other faculty included Drs Sanjay Kalra, Manish Bankar, SS Vasan, Vijay Kulkarni and Ravindra Sabnis. More than 60 delegates from all over India and abroad participated in the workshop.

There were 24 sessions on topics such as, Male infertility, Assisted Reproductive techniques, Azoospermia (total absence of sperm in the ejaculate), Ejaculatory Disorders, Undescended Testis, Vasectomy & VVA, Medical management of OAS (oligoasthenozoospermia), Surgical management of OAS: Varicocele, Hypogonadotropic hypogonadism, MSD (Male sexual dysfunction), Penile prosthesis – choosing the right implant, operative techniques, complications, Sperm retrieval – non-surgical and surgical methods, Comparison of PDE5 inhibitors, Intra-cavernosal injections, Vacuum erection devices, and surgery for erectile dysfunction.

Live surgical/video demonstrations covered microsurgical varicocelectomy, needle biopsy and testicular mapping, microsurgical VEA and VVA, TURED, penile curvature correction and penile prosthesis implantation, etc.


P A Joseph
Officer on Special Duty
MPUH, Nadiad


Category (Kidney & Urine) | Views (19810)


Mar19
Spinal HuaTo JiaJi - the Magic Spinal Points
Without question, some of the most dynamic acupuncture points on the human body are the Hua Tuo Jia Ji points. These points, philosophically and clinically, may effectively treat every condition of the human system. My teachers, Master Kiiko Matsumoto and David Euler, start back treatments using the Hua Tuo Jia Ji points.

They are extremely easy to locate and use. They respond not only to the acupuncture needle but also to any type of percussion or stimulation, such as a neurological reflex hammer, Wartenberg pin wheel, tuning fork, green or red laser, percussive instrument, gua sha, tei shein (noninvasive needle) or firm digital pressure. Any form of stimulation works absolute wonders in clinical practice.

These classic points are located just half a cun, or human inch (the distance across the widest part of the patient's thumb), bilateral to the Du Mai (GV) line (the midline over the vertebral spinous processes) from T1 through L5. Classically there are 17 pairs of points (34 points total) attributed to Hua Tuo. Within the last 2,000 years, these points have been extended both upward through the cervical spine and downward across the sacrum. The points in the cervical spine and sacrum are simply known as jia (lining) ji (spine) points.

The points were discovered by the legendary physician Hua Tuo, who was born in 110 A.D. and lived to the almost unprecedented age of 97. He was reputedly put to death by the ruler of the Wei Dynasty, who suspected an assassination attempt when Hua Tuo suggested brain surgery for his severe headaches. Hua Tuo was rumored to have found the secrets of exceptional health and longevity.

Only the 17 bilateral points attributed to Hua Tuo carry his name as Hua Tuo Jia Ji. These points are located at the level of the spinous process of each vertebra, 0.5 cun from the midline. The shu (associated) points of the 12 primary meridians, beginning at T3, are located 1.5 cun from the Du Mai midline. These points work in a manner startlingly similar to the meric system of chiropractic. Chiropractic's explanation of its exceptional clinical response in most health conditions is that of an insulted nerve at the level of the intervertebral foramina due to displacement of the vertebrae (a vertebral fixation). These spinal fixations cause both hypertonicity and hypotonicity of the paravertebral musculature, resulting in neurothlipsis, or a so-called "pinched nerve." The involved nerve will affect the corresponding organ and tissue. It is well known in chiropractic that the third thoracic vertebra has a direct effect on anything at the level of the lung, including the bronchi, pleura, chest, breast, etc. This same explanation extends up and down the spine to affect all organs, muscles, bones and structures of the body.

Hua Tuo, on the other hand, developed a system of healing that appears remarkably similar to chiropractic, but 2,000 years prior to the discovery of both osteopathy and chiropractic. By stimulating these specific points at the precise vertebral level, virtually any condition can be positively affected. That is not to say these points will cure everything; however, the success rate in using these points at the precise locations is nothing short of miraculous. The key is understanding the exact level of the vertebra in relation to the organ and tissue it controls. For example, Thoracic 6 is specific to the Stomach, whereas Thoracic 7 and 8 are specific to the Spleen/Pancreas. The jia ji point at C7 is specific to the thyroid, as well as the shoulders and elbows.

The classic acupuncture point known as BL10 is 1.3 cun bilateral to DU15, which is just below the pseudo spinous process of the first cervical vertebra. However, the jia ji point located 0.5 cun lateral to DU15 will affect the pituitary gland, scalp, brain, inner and middle ear, and sympathetic nervous system. An "energetic subluxation" at this point will manifest itself in neurasthenia, insomnia, hypertension, migraine, chronic tiredness, vertigo, headaches and susceptibility to colds.

These reflexes are very specific and have been a vital part of certain specialty practices of chiropractic for well over a century. Stimulation of the Hua Tuo Jia Ji points is not routine or well known in the chiropractic profession. However, the reflex levels are classic and extremely well established. Chiropractic physicians will routinely adjust the vertebrae by hand or cause hyperstimulation through automatic or manual adjusting devices.

I always advise practitioners to stimulate not only the Hua Tuo Jia Ji point but the GV and, if appropriate, the shu point during treatment. Gua sha is an exceptional way to stimulate these points as it is quick, easy and effective.

Inasmuch as the specific reflex areas of the spine are generally used only by specialty chiropractic practices, it is assumed that most acupuncture practitioners are unaware of these specific locations. Electro-meridian imaging (EMI) will direct the practitioner to the precise vertebral level, since any meridian involvement will always reflex directly to the spine. This is an exceptional way to determine the general levels to be treated. However, if one knows the specific areas affected, it is a simple matter to select the proper level or combination of levels. Contact me directly with your request for a specific meric reflex chart and begin using this absolutely incredible system of acupuncture. You will be amazed at the response.


Category (Back & Neck) | Views (19845)


Mar19
BIOMEDICAL ACUPUNCTURE - AN OVERVIEW
Acupuncture is a proven and effective adjunct treatment alongside Western biomedicine. Hundreds of RCTs (randomized controlled clinical trials) and hundreds of rigorous publications in Western journals have revealed the scientific and causal mechanisms behind many acupuncture effects, such as the release of endorphins, serotonin and cortisol - a blitz of neurochemical cascades.

The World Health Organization (WHO), the US National Institutes of Health (NIH), the US Food and Drug Administration (FDA) and the UK National Health Service (NHS) have concluded that acupuncture works and is a great adjunct therapy to Western biomedicine in some areas of treatment.

The US NIH Consensus Conference found that in conditions such as chronic pain, asthma, nausea, drug dependence, stroke rehabilitation, headache, menstrual cramps, tennis elbow, fibromyalgia, myofascial pain, osteoarthritis, low back pain and carpal tunnel syndrome, acupuncture may be useful as an adjunct treatment or an acceptable alternative.

Acupuncture works by a neuroanatomical mechanism. Acupuncture analgesia (AA) occurs through stimulation of small-diameter nerves (PANs - peripheral afferent nerves/nociceptors), which send impulses via the spinal cord to the midbrain and pituitary. This triggers the release of endorphins and monoamines, which block the pain message. It acts on the HPA (hypothalamic-pituitary-adrenal) axis, resulting in multiple neurochemical releases, and thus produces astounding results in treating a wide range of ailments - according to Dr. Joseph Audette MD of Harvard Medical School and Dr. Willard PhD of New England College, US.

Acupuncturists look at fascia as a Mobius strip in the body: a complex web of ligaments, fibroelastic connective tissues, tendons, peritoneum and pleura, periosteum and dura, interdigitating to connect and envelop the whole body and hold the body structure - viscera, bones, the neural network, etc. Thus a disruption in one area can affect not only surrounding structures but also distal ones.

Acupuncture adjusts somatic dysfunctions - altered or impaired function of related components of the somatic (body framework) system: skeletal, arthrodial and myofascial structures as well as related vascular, lymphatic and neural elements. The application of TART (texture, asymmetry, range of motion, tenderness) on palpation is unique to acupuncture treatments, which utilize the mind-body connection to find a cure from within.

Scientific research findings state:

Acupuncture regulates the secretion of various growth factors such as vascular endothelial growth factor, bFGF, NGF, TGF-beta and IL-6, and the expression of various growth-control genes such as c-fos, Bcl, Bax, Fas and FasL. Acupuncture regulates apoptosis, regeneration, differentiation and cell proliferation of various tissues. The 'classic' neuro-humoral factors induced by acupuncture, such as endorphins and serotonin, also have growth-control effects, according to Dr. Helene Langevin MD.

According to Pomeranz, acupuncture points have high electrical conductance, high current density and a high density of gap junctions. Acupoints can be activated by nonspecific stimuli, causing long-lasting systemic effects.

According to German research -

Acupuncture has been used for over 2000 years for a wide variety of complaints with minimal side effects. Based on the experience in Chinese medicine and the anticipated positive effects, acupuncture has been widely accepted in Western medicine as well. Some clinical evidence supports the efficacy of acupuncture treatment, but randomized controlled trials have been conducted for a few of all possible locomotive disorder indications, and the results have been equivocal. ..... One of the outcomes on which consensus appears to exist is that 10-20 sessions are generally necessary, and that initial improvement can be expected to occur by the 10th treatment. Rigorous trials should be conducted to improve clinical validity and provide scientific proof of the efficacy of acupuncture. Clinical trials like the German Acupuncture Trials (GERAC), funded by the German health insurance companies, have been launched with the aim of furthering knowledge in this area.

PMID: 11956897 [PubMed - indexed for MEDLINE]

Schmerz. 2002 Apr;16(2):121-8.
[Acupuncture in the treatment of locomotive disorders - status of research and situation regarding clinical application]

Molsberger A, Böwing G, Haake M, Meier U, Winkler J, Molsberger F.

Forschungsgruppe Akupunktur und traditionelle chinesische Medizin, Düsseldorf, Germany.

[Article in German] http://www.ncbi.nlm.nih.gov/pubmed/11956897


Randolph M. Nesse, M.D., Professor of Psychiatry and Director of the ISR Evolution and Human Adaptation Program at The University of Michigan, says: "I am open-minded about acupuncture, but I would rather put my faith in an explanation based on nerve impulses than mysterious energy flows that have never been demonstrated to have physical reality" (http://www.edge.org/documents/archive/edge64.html).


So even skeptics are open to acupuncture working - through nerve impulses rather than mysterious energy flows!


Category (Brain & Nerves)  |   Views (15445)  |  User Rating
Rate It


Mar16
Acupuncture May Relieve Joint Pain Caused by Some Breast Cancer Treatments

A new study, led by researchers at the Herbert Irving Comprehensive Cancer Center at NewYork-Presbyterian Hospital/Columbia University Medical Center, demonstrates that acupuncture may be an effective therapy for joint pain and stiffness in breast cancer patients who are being treated with commonly used hormonal therapies.

Results were published in the Journal of Clinical Oncology.

Joint pain and stiffness are common side effects of aromatase inhibitor therapy, in which the synthesis of estrogen is blocked. The therapy, which is a common and effective treatment for early-stage, hormone-receptor-positive breast cancer in post-menopausal women, has been shown in previous research to cause some joint pain and stiffness in half of women being treated.

"Since aromatase inhibitors have become an increasingly popular treatment option for some breast cancer patients, we aimed to find a non-drug option to manage the joint issues they often create, thereby improving quality of life and reducing the likelihood that patients would discontinue this potentially life-saving treatment," said Dawn Hershman, M.D, M.S., senior author of the paper, and co-director of the breast cancer program at the Herbert Irving Comprehensive Cancer Center at NewYork-Presbyterian Hospital/Columbia University Medical Center, and an assistant professor of medicine (hematology/oncology) and epidemiology at Columbia University Medical Center.

To explore the effects of acupuncture on aromatase inhibitor-associated joint pain, the research team randomly assigned 43 women to receive either true acupuncture or sham acupuncture twice a week for six weeks. Sham acupuncture, which was used to control for a potential placebo effect, involved superficial needle insertion at body points not recognized as true acupuncture points. All participants were receiving an aromatase inhibitor for early breast cancer, and all had reported musculoskeletal pain.

Findings demonstrated that the women treated with true acupuncture experienced significant improvement in joint pain and stiffness over the course of the study. Pain severity declined, and overall physical well-being improved. Additionally, 20 percent of the patients who had reported taking pain relief medications reported that they no longer needed to take these medications following acupuncture treatment. No such improvements were reported by the women who were treated with the sham acupuncture.

"This study suggests that acupuncture may help women manage the joint pain and stiffness that can accompany aromatase inhibitor treatment," said Katherine D. Crew, M.D., M.S., first author of the paper, and the Florence Irving Assistant Professor of Medicine (hematology/oncology) and Epidemiology at Columbia University Medical Center and a hematological oncologist at NewYork-Presbyterian Hospital/Columbia University Medical Center. "To our knowledge, this is the first randomized, placebo-controlled trial establishing that acupuncture may be an effective method to relieve joint problems caused by these medications. However, results still need to be confirmed in larger, multicenter studies."

Source: ScienceDaily (Mar. 5, 2010)

Adapted from materials provided by Columbia University Medical Center, via EurekAlert!, a service of AAAS.

Journal Reference:

Katherine D. Crew, Jillian L. Capodice, Heather Greenlee, Lois Brafman, Deborah Fuentes, Danielle Awad, Wei Yann Tsai, and Dawn L. Hershman. Randomized, Blinded, Sham-Controlled Trial of Acupuncture for the Management of Aromatase Inhibitor-Associated Joint Symptoms in Women With Early-Stage Breast Cancer.

Journal of Clinical Oncology, 2010; 28 (7): 1154 DOI: 10.1200/JCO.2009.23.4708


Category (Women’s Health)  |   Views (17651)  |  User Rating
Rate It


Mar16
Stanford University Research Trial finds Acupuncture gives remarkable results
Pregnancy: Depression Relief, Without Drugs


Up to a quarter of all women suffer from depression during pregnancy, and many are reluctant to take antidepressants. Now a new study suggests that acupuncture may provide some relief during pregnancy, even though it has not been found to be an effective treatment against depression in general.

The Stanford University study recruited 150 depressed women who were 12 to 30 weeks pregnant, and randomly assigned 52 to receive acupuncture specifically designed for depressive symptoms, 49 to regular acupuncture and 49 to Swedish massage.


Each woman received 12 sessions of 25 minutes each; those given acupuncture did not know which type they were getting. (In the depression-specific treatment, needles are inserted at body points that are said to correspond to symptoms like anxiety, withdrawal and apathy.)


After eight weeks, almost two-thirds of the women who had depression-specific acupuncture experienced at least a 50 percent reduction in their symptoms, compared with just under half of the women treated with either massage or regular acupuncture.


There was no significant difference in the rates of complete remission — about a third in each group. The findings appear in the March issue of Obstetrics & Gynecology.
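
For readers curious how response rates like these are compared statistically, the short Python sketch below works through a standard two-proportion z-test. The responder counts used (33 of 52 in the depression-specific acupuncture group, 46 of 98 in the two control groups combined) are assumptions reconstructed from the reported fractions "almost two-thirds" and "just under half"; they are not figures taken from the paper, so the output is purely illustrative.

from math import sqrt
from statistics import NormalDist

# Assumed, illustrative responder counts reconstructed from the reported
# fractions ("almost two-thirds" of the 52 women given depression-specific
# acupuncture; "just under half" of the 98 women in the control groups).
resp_acu, n_acu = 33, 52      # depression-specific acupuncture arm
resp_ctl, n_ctl = 46, 98      # regular acupuncture + massage arms combined

p_acu = resp_acu / n_acu
p_ctl = resp_ctl / n_ctl
p_pool = (resp_acu + resp_ctl) / (n_acu + n_ctl)    # pooled rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_acu + 1 / n_ctl))
z = (p_acu - p_ctl) / se                            # two-proportion z statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value

print(f"response rates: {p_acu:.0%} vs {p_ctl:.0%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")

With these assumed counts the rates come out at roughly 63% versus 47% and the p-value sits near 0.05; the exact counts and analysis model used in the paper determine the definitive result, so the published article should be consulted for the actual statistics.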


The lead author, Rachel Manber, a professor of psychiatry and behavioral sciences at Stanford, said the results suggested that some symptoms of depression during pregnancy might be related to physical discomfort that is alleviated by acupuncture.

Still, the results were striking, she said, adding, “It’s quite remarkable, especially since the prevalence of depression is highest in the third trimester of pregnancy, so it goes against the course of how you would expect depression to go.”

Credits: By Roni Caryn Rabin, The New York Times, published February 24, 2010

You can access this article at - http://www.nytimes.com/2010/03/02/health/research/02preg.html?ref=health

Free Research Paper at:
http://journals.lww.com/greenjournal/Fulltext/2010/03000/Acupuncture_for_Depression_During_Pregnancy__A.7.aspx


Category (Fertility, Pregnancy & Birth)  |   Views (17354)  |  User Rating
Rate It


Mar15
Robotics Series on Kidney and Prostate at JPAC, MPUH Nadiad
ORGAN-SPECIFIC STEP-BY-STEP ROBOTIC SYMPOSIA ON KIDNEY AND PROSTATE

LIVE TRANSMISSION FROM
USC INSTITUTE OF UROLOGY, USA
AT JPAC, NADIAD KIDNEY HOSPITAL

The USC Institute of Urology, USA organised a concept series of organ-specific 'step-by-step' Robotic Symposia focused on live surgical demonstrations of various urologic procedures. Part I, held during March 12-13, 2010, focused on Kidney and Prostate. The dominant feature of this unique symposium series was its emphasis on the 'nuts and bolts' of the practical technical aspects of robotic technique, for both the beginner and the expert. World-class faculty provided in-depth discussions on each step of the actual operative procedure, with live transmission to 5 international locations, including JPAC (Jayaramdas Patel Academic Centre) at Nadiad Kidney Hospital. There was intense audience participation through audience response systems.

Dr. Inderbir S Gill was the Course Chairman, and the Course Directors were Drs Mihir M Desai and Monish Aron of the USC Institute of Urology, USA. The international faculty included Dr. Mahesh Desai, Chairman, Department of Urology, MPUH, Nadiad, and President-Elect of the Société Internationale d'Urologie.

The symposium covered an introduction to robotic-assisted kidney surgery and an introduction to robotic-assisted radical prostatectomy, followed by moderated panel discussions and live transmission from the operation theatre.

There were instructional video sessions as well.

P A JOSEPH
OSD, MPUH NADIAD


Category (Kidney & Urine)  |   Views (20274)  |  User Rating
Rate It

