Movement for fair open access at Cognition

A worthy request.

With you as cosignatory I would like to write the Cognition Editorial Board, asking them to request the 5 components of Fair Open Access publishing listed below. No demands or ultimatums will be issued at this point - only a request. If you agree to this I'll send you exactly one email (no more) that contains a final draft letter and will ask you to give a thumbs up or thumbs down to your participation. In the meantime, I'm encouraging everyone to continue supporting and contributing to Cognition: It's the creation of our collective research dollars, time, and efforts.

Dorothy Bishop on her PrePrint experiences at PeerJ

Short but informative. Preprints seem like a good idea, generally, for scientific discussion.

What are the benefits to you personally of publishing your work as a PeerJ PrePrint prior to any formal peer review process?

It is very useful to get feedback from experts in the field before finalising a manuscript; ultimately, it should save time, because the paper is more likely to have a smooth passage through a formal review process if you have anticipated and responded to major criticisms, and also been pointed to other relevant literature. Having said that, I don’t yet know if our paper will be accepted for publication! However, even if it is not, it has been useful to have the debate about the p-curve method out in the open, and our pre-print allowed us to put our views in the public domain in a permanent, citeable format.

Negotiating with publishers for immediate self-archiving

I'm surprised this was so easy.

After that everything was simple. I logged in to my zenodo.org account and uploaded the author’s copy of the manuscript. As a result, anyone searching for the article on Google Scholar will find the publisher’s version requesting 34.95 EUR for pdf access, and right next to it a link to exactly the same article freely available via Zenodo. That’s it! Nice and clean!
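For the curious: the same deposit can also be scripted against Zenodo's REST API rather than clicked through the web interface. Here's a minimal Python sketch; the access token, filename, and metadata below are all placeholders, not details from the post.

```python
import requests

# Minimal sketch of self-archiving via the Zenodo REST API.
# Assumes a personal access token; all metadata values are placeholders.
ZENODO = "https://zenodo.org/api"
TOKEN = "YOUR-ACCESS-TOKEN"

# 1. Create an empty deposition.
r = requests.post(f"{ZENODO}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload the author's copy to the deposition's file bucket.
bucket = deposition["links"]["bucket"]
with open("authors_copy.pdf", "rb") as fh:
    requests.put(f"{bucket}/authors_copy.pdf",
                 data=fh, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach minimal metadata, then publish.
metadata = {"metadata": {
    "title": "Author's accepted manuscript",      # placeholder title
    "upload_type": "publication",
    "publication_type": "article",
    "description": "Self-archived author's copy.",
    "creators": [{"name": "Doe, Jane"}],          # placeholder author
}}
requests.put(deposition["links"]["self"],
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
requests.post(deposition["links"]["publish"],
              params={"access_token": TOKEN}).raise_for_status()
```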

Dorothy Bishop on what a new face of scientific publishing might look like

No traditional journals; it's all "consensual communication". Open science built in (emphasis added):

  1. The study is then completed, written up in full, and reviewed by the editor. Provided the authors have followed the protocol, no further review is required. The final version is deposited with the original preprint, *together with the data, materials and analysis scripts*.

Scientific publishers' confidentiality clauses keep journal costs secret (!)

I did not know this. Crazy.

Controversially (and maybe illegally), when negotiating contracts with libraries, publishers often insist on confidentiality clauses — so that librarians are not allowed to disclose how much they are paying. The result is an opaque market with no downward pressure on prices, hence the current outrageously high prices, which are rising much more quickly than inflation even as publishers’ costs shrink due to the transition to electronic publishing.

Scientists share their stories of sexism in publishing

Following up on #AddMaleAuthorGate, some of the many tales of sexism still rampant in academia.

One of Sang’s bad experiences came from a paper in which she tracked patterns of co-authorship in a leading journal in her field over the course of 10 years. She found that white men frequently publish together, whereas female and minority scientists are more often at the periphery of these networks.

“One of the reviewers argued that the reason there are so few women and black academics in the social networks is because the research they produced just isn’t good enough to get into the top journals — and the editor agreed,” Sang said.
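For anyone curious about the method behind "periphery of these networks": this is co-authorship network analysis. A toy illustration in Python with networkx (made-up author lists, and not Sang's actual pipeline):

```python
import itertools
import networkx as nx

# Illustrative co-authorship network: each paper contributes an edge
# between every pair of its authors. Author lists here are placeholders.
papers = [
    ["Smith", "Jones", "Brown"],
    ["Smith", "Jones"],
    ["Garcia", "Brown"],
    ["Okafor"],
]

G = nx.Graph()
for authors in papers:
    G.add_nodes_from(authors)
    G.add_edges_from(itertools.combinations(authors, 2))

# Degree centrality as a crude measure of who sits at the core of the
# network versus its periphery.
centrality = nx.degree_centrality(G)
for author, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{author}: {c:.2f}")
```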

What should science look like?

A reimagining of what science could be like, from Björn Brembs. Lots of interesting ideas here. I love the idea of being able to explore published data:

I would also be able to double-click on any figure to have a look at other aspects of the data, e.g. different intervals, different intersections, different sub-plots. I’d be able to use the raw data associated with the publication to plot virtually any graph from the data, not just those the authors offer me as a static image, as today.

...which requires authors to share their data and link it to the publication:

As an author, I want my data to be taken care of by my institution: I want to install their client to make sure every piece of data I put on my ‘data’ drive will automatically be placed in a data repository with unique identifiers.

Bonus points for being posted/published on The Winnower.
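The auto-depositing "data" drive client from the second quote is technically modest, too. A toy sketch in Python (stdlib only; a local manifest and content hash stand in for the repository and the identifiers it would mint):

```python
import hashlib
import json
import time
from pathlib import Path

# Toy version of an auto-depositing "data drive" client. A real client
# would push each file to an institutional repository and receive a
# persistent identifier (e.g. a DOI) back; here a local manifest and a
# content hash stand in for both.
DATA_DRIVE = Path("data")
MANIFEST = Path("deposits.json")

def deposit(path: Path, manifest: dict) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if manifest.get(str(path)) != digest:  # new or changed file
        manifest[str(path)] = digest
        print(f"deposited {path} as data:{digest[:12]}")

def watch(interval: float = 5.0) -> None:
    DATA_DRIVE.mkdir(exist_ok=True)
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    while True:
        for path in DATA_DRIVE.rglob("*"):
            if path.is_file():
                deposit(path, manifest)
        MANIFEST.write_text(json.dumps(manifest, indent=2, sort_keys=True))
        time.sleep(interval)

if __name__ == "__main__":
    watch()
```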

Rethinking scientific publishing

Guardian article summarizing a recent debate on the future of scientific publishing. I like the idea of taking a step back and thinking about the best way to communicate and evaluate science, rather than simply keeping the status quo.

Stuart Taylor, the Publishing Director at the Royal Society, raised a more fundamental question about what we expect scientific authors to do. “Authors still create journals in prose-style — do we really need to produce all that text?” Taylor wondered if the traditional formats were still appropriate for presenting scientific results in the internet age. Taylor’s suggestion that the standard structure of a scientific article might be out-of-date met with some approval — and some scepticism. Could researchers sustain a coherent argument without prose?

Dorothy Bishop on data sharing

But the one thing I've learned as I wiped the egg off my face is that error is inevitable and unavoidable, however careful you try to be. The best way to flush out these errors is to make the data public. This will inevitably lead to some embarrassment when mistakes are found, but at the end of the day, our goal must be to find out what is the case, rather than to save face.

Guardian article summarizing kerfuffle over shoddy editorial practices

In case you missed it, last month Dorothy Bishop wrote a blog post and a follow-up highlighting what might charitably be described as "surprising" editorial practices. This Guardian article summarizes the sad state of affairs.

Matson isn’t the only academic to benefit from what might be generously referred to as an “extremely efficient” review process. Bishop’s analysis also identified other researchers who have published frequently in RIDD and RASD, including Jeff Sigafoos, Mark O’Reilly and Giuliano Lancioni. Bishop has provided data showing that for 73 papers appearing in RASD and RIDD co-authored by these researchers between 2010 and 2014, 17 were accepted the same day that they were received, 13 within one day, and 13 within two days. We contacted Sigafoos and Lancioni with this data, and they responded:

The figures you state for 73 papers is routine practice for papers published in RIDD and RASD. A large percentage of all papers published in any given issue of RIDD and RASD appear to have received a rapid rate of review as indicated would happen in the official editorial policy of these journals.

Kudos to Prof. Bishop and others for pointing out such shockingly appalling behavior.
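Bishop's turnaround check is easy to reproduce for any journal that prints received and accepted dates. A minimal Python sketch with made-up records:

```python
from datetime import date

# Count papers by received-to-accepted lag, in the spirit of Bishop's
# analysis. The records below are invented; real dates can be taken
# from a journal's article pages.
records = [
    {"title": "Paper A", "received": date(2013, 4, 2), "accepted": date(2013, 4, 2)},
    {"title": "Paper B", "received": date(2013, 6, 1), "accepted": date(2013, 6, 2)},
    {"title": "Paper C", "received": date(2014, 1, 10), "accepted": date(2014, 2, 20)},
]

lags = [(r["accepted"] - r["received"]).days for r in records]
for cutoff in (0, 1, 2):
    n = sum(lag == cutoff for lag in lags)
    print(f"accepted after {cutoff} day(s): {n} of {len(lags)}")
```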

John Ioannidis on scientific accuracy

Interesting interview from Vox with the author of "Why Most Published Research Findings Are False" (and many other articles), including personal tidbits:

He even has a mythical origin story. He was raised in Greece, the home of Pythagoras and Euclid, by physician-researchers who instilled in him a love of mathematics. By seven, he quantified his affection for family members with a "love numbers" system. ("My mother was getting 1,024.42," he said. "My grandmother, 173.73.")

and thoughts on how to improve science:

Recently there’s increasing emphasis on trying to have post-publication review. Once a paper is published, you can comment on it, raise questions or concerns. But most of these efforts don’t have an incentive structure in place that would help them take off. There’s also no incentive for scientists or other stakeholders to make a very thorough and critical review of a study, to try to reproduce it, or to probe systematically and spend real effort on re-analysis. We need to find ways people would be rewarded for this type of reproducibility or bias checks.

eNeuro encourages submission of negative results

Negative Results: Research papers from authors who tried to test important hypotheses but did not get the outcome they expected. Failed preclinical tests are particularly welcome. These manuscripts must include testing the hypothesis by multiple experimental approaches, rigorously reproducing the experimental models of others that you claim to refute, and meticulous use of both positive and negative controls.

Failure to Replicate: Research papers from authors who could not reproduce someone else’s work of significant importance despite using the same methodology (explaining why the reproduction failed is not mandatory).

Refreshing.

(I'm not a big fan of media-specific journal titles like eLife or eNeuro—it's not like we have a print-Journal of Neuroscience—but I do like some of the changes they are instituting.)

UPDATE: Greg Hickok confirms that Psychonomic Bulletin & Review also encourages negative results (although this isn't spelled out on their website). Is there a central list of "journals that are happy to publish your negative results"?