Publish or perish: Improving your H-factor made easy through PleaseCiteMe.com

This post has been re-blogged now that the San Francisco Declaration, which has been endorsed by Science, has been released.

The San Francisco Declaration seeks to challenge some of the strategic games being played in academia to increase productivity, get tenure, and climb the rankings. These games could easily lead both to a decline in scientific quality and to an erosion of trust in science altogether. Below is an excerpt from the editorial in Science (see also: http://www.aaas.org/news/releases/2013/0516_impact-factors.shtml#fb). The San Francisco Declaration itself can be found here: The San Francisco Declaration.

Science Endorses New Limits on Journal Impact Factors

A measure developed to assess the quality of scientific journals has distorted how research is evaluated and should not be used to judge an individual’s work, Science Editor-in-Chief Bruce Alberts writes in the 17 May issue of the journal.

The editorial coincides with the release of the San Francisco Declaration on Research Assessment (DORA), which grew out of a gathering of scientists at the December 2012 meeting of the American Society for Cell Biology (ASCB). More than 150 scientists and 75 scientific organizations, including Science’s publisher AAAS, have endorsed DORA, which recommends specific changes to the way scientific journal rankings are used in hiring scientists, funding research, and publishing papers.

One of the most popular ranking measures, the Journal Impact Factor (JIF), ranks research journals by the average number of times their papers are cited by other papers. (The higher a journal’s JIF score, the more often its research papers are cited by others.) The JIF was devised to rank journals, but it is now often used to evaluate an individual’s research, by looking at whether he or she has published in high-scoring journals.
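To make that averaging concrete, here is a minimal sketch of the conventional two-year calculation. This is not from the editorial; the function name and the citation counts are invented for illustration.

```python
def two_year_impact_factor(citations_per_item):
    """Average citations received this census year by each citable
    item the journal published in the two preceding years -- the
    conventional two-year JIF window. All numbers here are made up.
    """
    return sum(citations_per_item) / len(citations_per_item)

# Hypothetical journal: four items from the two prior years,
# cited 9, 5, 3, and 1 times this year -> JIF of 4.5.
print(two_year_impact_factor([9, 5, 3, 1]))  # -> 4.5
```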

This misuse of the JIF score encourages far too much “me-too science,” Alberts writes. “Any evaluation system in which the mere number of a researcher’s publications increases his or her score creates a strong disincentive to pursue risky and potentially groundbreaking work, because it takes years to create a new approach in a new experimental context, during which no publications should be expected.”

Alberts notes that an unhealthy obsession with journal ranking scores may also make journals reluctant to publish papers in fields that are cited less often, such as the social sciences, compared to papers from highly cited fields such as biomedicine.

The DORA guidelines offer 18 specific recommendations for discontinuing the use of JIF in scientists’ hiring, tenure, and promotion, along with ways to assess research on its own merits apart from its place of publication.

From the blog Transformative learning:

“What’s your h-factor?” is a question that is increasingly asked at gatherings of scientists or during interviews for academic positions. Scientific careers depend on h-factors these days. What am I talking about?

The h-index attempts to measure both the productivity and the impact of the published work of a scientist or scholar. It is based on the set of the scientist’s most cited papers and the number of citations those papers have received in other publications. The index can also be applied to the productivity and impact of a group of scientists, such as a department, university, or country.

A scientist has index h if h of his or her N_p papers have at least h citations each, and the other (N_p − h) papers have no more than h citations…
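As a concrete illustration, the definition above translates directly into code. This is a minimal sketch, not from the original post; the function name and the citation counts are invented.

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    Per the definition above: the largest h such that h papers
    have at least h citations each.
    """
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 2, 1] times give h = 3: three papers
# have at least 3 citations each, and the rest have no more than 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```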
