Marcus Munafò on his lab's experience with open science

We’ve certainly found it a healthy and worthwhile thing to do. It shines a light on our work, makes us more accountable, and encourages us to slow down slightly and check everything one extra time. There’s been a positive reaction to it within my group. I think once you achieve a cultural change like this then it becomes self-sustaining.

"The median Turker...had completed 300 total academic studies"

Workers on Mechanical Turk are experienced participants, which may be a problem for some studies.

“If you’re running social psychology studies on Turk, watch out, because [the subjects] have gotten experienced, and that can change effects,” Rand said. “So if you run my experiment on Turk right now, you won’t get any effect. Which sucks for me.”

To be clear, extreme experience isn’t always a problem, Rand said. Some psychological tests are so robust that no amount of experience will override the effect, said Jesse Chandler, a researcher at the University of Michigan’s Institute for Social Research. The Stroop effect, for example, involves naming the ink color of a word when it doesn’t match the color the word spells out: when the word “red” is printed in green, it takes longer to override the automatic reading of the text and choose green.
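
For readers unfamiliar with the task, here's a minimal sketch of a Stroop trial list in Python (an illustrative toy, not any lab's actual protocol): congruent trials pair a color word with its own ink color, incongruent trials mismatch them, and responses are expected to be slower on the incongruent ones.

```python
import itertools
import random

COLORS = ["red", "green", "blue"]

def make_stroop_trials(n_repeats=2, seed=0):
    """Build a shuffled Stroop trial list: each trial pairs a color word
    with an ink color; the correct response is always the ink color."""
    trials = [
        {"word": word, "ink": ink, "congruent": word == ink}
        for word, ink in itertools.product(COLORS, COLORS)
    ] * n_repeats
    random.Random(seed).shuffle(trials)
    return trials

trials = make_stroop_trials()
# Slower responses are expected on the incongruent trials,
# e.g. the word "red" printed in green ink.
incongruent = [t for t in trials if not t["congruent"]]
```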

Intuition and big data in neuroscience

Thoughtful essay from Eve Marder on how science progresses.

A deeper cause for my trepidation comes from the existence of parallel pathways and multiple solutions in large brains. Detailed connectomes will assuredly show the extent to which neurons and brain regions are connected by multiple parallel pathways. While we can speculate that parallel pathways can provide additional robustness to circuits, or increase their dynamic range and flexibility, they make it far more difficult to predict the outcome of a specific pattern of activity or understand the results of a perturbation. As the number of parallel pathways and brain circuit loops increases, deducing how information flows through large and multiply interconnected circuits, and why and in which contexts, will be a challenge for many, many years to come.

Challenges of being a whistleblower, especially at early career stage

Many hurdles and risks, little (obvious) reward.

“As someone in your early career stage, you don’t want to do this,” he told Broockman. “You don’t want to go public with this. Even if you have uncontroversial proof, you still shouldn’t do it. Because there’s just not much reward to someone doing this.”

Not a great incentive structure for self-correcting science.

Dorothy Bishop on what a new face of scientific publishing might look like

No traditional journals; it's all "consensual communication". Open science built in (emphasis added):

  1. The study is then completed, written up in full, and reviewed by the editor. Provided the authors have followed the protocol, no further review is required. The final version is deposited with the original preprint, together with the data, materials and analysis scripts.

Scientific publishers' confidentiality clauses keep journal costs secret (!)

I did not know this. Crazy.

Controversially (and maybe illegally), when negotiating contracts with libraries, publishers often insist on confidentiality clauses — so that librarians are not allowed to disclose how much they are paying. The result is an opaque market with no downward pressure on prices, hence the current outrageously high prices, which are rising much more quickly than inflation even as publishers’ costs shrink due to the transition to electronic publishing.

Nikos Logothetis quits primate research

Huge news. Logothetis' lab has been at the center of numerous aspects of nonhuman primate research, including understanding the neurophysiological underpinnings of the BOLD fMRI signal.

In a letter last week to fellow primate researchers, Logothetis cites a lack of support from colleagues and the wider scientific community as key factors in his decision. In particular, he says the Max Planck Society—and other organizations—should pursue criminal charges against the activists who target researchers.

Logothetis did not seem to feel supported by his colleagues:

Logothetis’s letter also faults his scientific colleagues in Tübingen for distancing themselves from the controversy. The neighboring Max Planck Institute for Developmental Biology posted a disclaimer on its website emphasizing that there are no monkeys at the institute, he notes, and colleagues at the nearby Hertie Institute for Clinical Brain Research refused to issue a declaration of support.

Logothetis is moving to rodents.

What should science look like?

Reimagining of what science should be like from Björn Brembs. Lots of interesting ideas here. I love the idea of being able to explore published data:

I would also be able to double-click on any figure to have a look at other aspects of the data, e.g. different intervals, different intersections, different sub-plots. I’d be able to use the raw data associated with the publication to plot virtually any graph from the data, not just those the others offer me as a static image, as today.

...which requires authors to share their data and link it to publication:

As an author, I want my data to be taken care of by my institution: I want to install their client to make sure every piece of data I put on my ‘data’ drive will automatically be placed in a data repository with unique identifiers.

Bonus points for being posted/published on The Winnower.

Rethinking scientific publishing

Guardian article summarizing a recent debate on the future of scientific publishing. I like the idea of taking a step back and thinking about the best way to communicate and evaluate science, rather than simply keeping the status quo.

Stuart Taylor, the Publishing Director at the Royal Society, raised a more fundamental question about what we expect scientific authors to do. “Authors still create journals in prose-style — do we really need to produce all that text?” Taylor wondered if the traditional formats were still appropriate for presenting scientific results in the internet age. Taylor’s suggestion that the standard structure of a scientific article might be out-of-date met with some approval — and some scepticism. Could researchers sustain a coherent argument without prose?

figshare on open data plans in research

It is no longer efficient or sustainable for humans to be the gatekeepers of academic content. Principles must be put in place so that content can still be reused long after the data creators are dead. Content must adhere to ethical and commercial sensitivities where necessary, but with the dawn of research data management plans, the funders of research ultimately decide what level of access society has.

and this snippet from the NIH public access plan:

“NIH intends to make public access to digital scientific data the standard for all NIH-funded research, as well as exploring ways to advance data as a legitimate form of scholarship through data citation and other means.”

Improving reproducibility in fMRI

Nice recommendations and overview from Cyril Pernet and Jean-Baptiste Poline.

Making neuroimaging reproducible will not make studies better; it will make scientific conclusions more verifiable, by accumulating evidence through replication, and ultimately make those conclusions more valid and research more efficient. Two of the main obstacles on this road are the lack of programming expertise in many neuroscience or clinical research laboratories, and the absence of widespread acknowledgement that neuroimaging is (also) a computational science.

Introduction to open science and data management

Great overview of some concrete ways to practice open science.

8. Let everyone watch.

Consider going open. That is, do all of your science out in the public eye, so that others can see what you’re up to. One way to do this is by keeping an open notebook. This concept throws out the idea that you should be a hoarder, not telling others of your results until the Big Reveal in the form of a publication. Instead, you keep your lab notebook (you do have one, right?) out in a public place, for anyone to peruse. Most often an open notebook takes the form of a blog or a wiki, and the researcher updates their notebook daily, weekly, or whatever is most appropriate. There are links to data, code, relevant publications, or other content that helps readers, and the researcher themselves, understand the research workflow.

Making your lab a "highly reliable organization"

Must-read from Jeff Rouder on applying characteristics of "highly reliable organizations" in a lab setting.

[M]istakes and other adverse events not only cost us time, they affect the trust other people may place in our data and analyses. We need to keep them at a minimum and understand them when they occur. So I began adapting the characteristics of highly reliable organizations for my lab. Making changes does take time and effort, but I suspect the rewards are well worth it.

We're talking about this in lab meeting this week, and starting to implement some of these ideas immediately.

Dorothy Bishop on data sharing

But the one thing I've learned as I wiped the egg off my face is that error is inevitable and unavoidable, however careful you try to be. The best way to flush out these errors is to make the data public. This will inevitably lead to some embarrassment when mistakes are found, but at the end of the day, our goal must be to find out what is the case, rather than to save face.

Guardian article summarizing kerfuffle over shoddy editorial practices

In case you missed it, last month Dorothy Bishop wrote a blog post and a follow up highlighting what might charitably be described as "surprising" editorial practices. This Guardian article summarizes the sad state of affairs.

Matson isn’t the only academic to benefit from what might be generously referred to as an “extremely efficient” review process. Bishop’s analysis also identified other researchers who have published frequently in RIDD and RASD, including Jeff Sigafoos, Mark O’Reilly and Giuliano Lancioni. Bishop has provided data showing that for 73 papers appearing in RASD and RIDD co-authored by these researchers between 2010 and 2014, 17 were accepted the same day that they were received, 13 within one day, and 13 within two days. We contacted Sigafoos and Lancioni with this data, and they responded:

The figures you state for 73 papers is routine practice for papers published in RIDD and RASD. A large percentage of all papers published in any given issue of RIDD and RASD appear to have received a rapid rate of review as indicated would happen in the official editorial policy of these journals.

Kudos to Prof. Bishop and others for pointing out such shockingly appalling behavior.
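
As a quick sanity check on the counts quoted above from Bishop's analysis, a couple of lines of arithmetic show that the majority of those 73 papers were accepted within two days of receipt:

```python
total = 73
accepted = {"same day": 17, "within 1 day": 13, "within 2 days": 13}

fast = sum(accepted.values())  # papers accepted in two days or fewer
share = fast / total
print(f"{fast}/{total} papers ({share:.0%}) accepted within two days")
# 43/73 papers (59%) accepted within two days
```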

John Ioannidis on scientific accuracy

Interesting interview from Vox with the author of "Why most published research findings are false" (and many other articles), including personal tidbits:

He even has a mythical origin story. He was raised in Greece, the home of Pythagoras and Euclid, by physician-researchers who instilled in him a love of mathematics. By seven, he quantified his affection for family members with a "love numbers" system. ("My mother was getting 1,024.42," he said. "My grandmother, 173.73.")

and thoughts on how to improve science:

Recently there’s increasing emphasis on trying to have post-publication review. Once a paper is published, you can comment on it, raise questions or concerns. But most of these efforts don’t have an incentive structure in place that would help them take off. There’s also no incentive for scientists or other stakeholders to make a very thorough and critical review of a study, to try to reproduce it, or to probe systematically and spend real effort on re-analysis. We need to find ways people would be rewarded for this type of reproducibility or bias checks.