Some things I've found help reduce my stress around science

Good advice all around!

Redefine success. I’ve found that if I recalibrate what success means to include accomplishing tasks like peer reviewing papers, getting letters of recommendation sent at the right times, providing support to people I mentor, and the submission (rather than the success) of papers/grants, then I’m much less stressed out.

Great memo on family-friendly scheduling from Brown University's Provost

Well said.

Family-friendly scheduling does not mean that all events after 5:30 should be prohibited. Rather, it means that those engaged in programming should be conscious of the exclusions created by after-hours events and should take proactive steps to accommodate faculty unable to stay on campus into the evening. It requires chairs and directors to recognize the baseline pressures created by the scheduling grid and the fact that many faculty with children must teach courses that extend beyond the time of the university's daycare provision. It forces an acknowledgement that there is no perfect time for a lecture on campus; a 5:30 lecture excludes some faculty just as a lecture at 12:00, 2:00, or any other time typically associated with classroom teaching excludes others. Too often we hear that "5:30 is the only time that everyone can make," but this is patently not true.

Six things your research mentor wants you to know (but probably won't think to tell you)

Most of these hold true for research mentors at all levels.

1. If I don’t hang out and chat at the lab, it doesn’t mean that I don’t like you.
It probably means that I’m overextended or don’t have much spare time each day. I might be in the lab more hours per day than you are in an entire week, and I still might not have enough time to accomplish my goals. Alternatively, your lab schedule might overlap with my busiest time of the day, or I might need to leave lab at a specific time each day, leaving me no extra time to socialize. Therefore, I might focus on conversations that teach you how to interpret results or gain a new research skill, because I want our limited time together to make the greatest impact on your research experience. That might mean sticking to conversations about research and science.

The principle of assumed error

Nice post from Russ Poldrack.

The principle is that whenever one finds something using a computational analysis that fits with one’s predictions or seems like a “cool” finding, one should assume that it’s due to an error in the code rather than reflecting reality. Having made this assumption, one should then do everything possible to find out what kind of error could have resulted in the effect. This is really no different from the strategy that experimental scientists use (in theory), in which upon finding an effect they test every conceivable confound in order to rule them out as a cause of the effect. However, I find that this kind of thinking is much less common in computational analyses. Instead, when something “works” (i.e. gives us an answer we like) we run with it, whereas when the code doesn’t give us a good answer we dig around for different ways to do the analysis that give a more satisfying answer. Because, thanks to confirmation bias, we will be more likely to accept errors that fit our hypotheses than those that do not, this procedure is guaranteed to increase the overall error rate of our research. If this sounds a lot like p-hacking, that’s because it is.
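One concrete way to put the principle into practice (my sketch, not a recipe from Poldrack's post) is to rerun the same pipeline on data in which the effect cannot exist, e.g. with shuffled labels, and confirm that the finding disappears. The variable names and the toy correlation "pipeline" below are illustrative assumptions:

```python
# Hypothetical sketch of the "assume it's a bug" habit: feed the same
# pipeline data where the effect cannot exist (shuffled labels) and
# confirm that the finding vanishes.
import numpy as np

rng = np.random.default_rng(0)

def pipeline(x, y):
    """Stand-in for the real computational analysis."""
    return np.corrcoef(x, y)[0, 1]

x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)   # toy data with a real effect
observed = pipeline(x, y)

# Rerun the identical pipeline many times on shuffled labels.
null = np.array([pipeline(x, rng.permutation(y)) for _ in range(1000)])

# The "cool" finding should comfortably exceed anything seen under
# shuffling; if the null runs also produce large effects, go hunting
# for the bug before celebrating.
p_shuffled = np.mean(np.abs(null) >= np.abs(observed))
print(f"observed r = {observed:.3f}, shuffled-label p = {p_shuffled:.3f}")
```

A stricter version runs every downstream step on the shuffled data too, since errors often live in the glue code rather than in the statistics.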


You don’t need more free time

As I discovered in a study that I published with my colleague Chaeyoon Lim in the journal Sociological Science, it’s not just that we have a shortage of free time; it’s also that our free time, in order to be satisfying, often must align with that of our friends and loved ones. We face a problem, in other words, of coordination. Work-life balance is not something that you can solve on your own.


Eve Marder on owning your mistakes

In another example, one of my PhD students recently failed to replicate some of our previously published observations, for reasons we still don't fully understand. This was obviously painful, but we published the most complete description of the new data and our best assessment of what could be responsible for the discrepancy (including the possibility that our wild-caught crab populations are affected by climate change). Ironically, the first version of the manuscript was rejected (not by eLife) because a reviewer said we shouldn't be allowed to publish something that disagreed with our own prior observations!

Reproducible analysis in the MyConnectome project

We have released code and data with papers in the past, but this is the first paper I have ever published that attempts to include a fully reproducible snapshot of the statistical analyses. I learned a number of lessons in the process of doing this:

  1. The development of a reproducible workflow saved me from publishing a paper with demonstrably irreproducible results, due to the OS-specific software bug mentioned above. This in itself makes the entire process worthwhile from my standpoint.

  2. Converting a standard workflow to a fully reproducible workflow is difficult. It took many hours of work beyond the standard analyses in order to develop a working VM with all of the analyses automatically run; that doesn’t even count the time that went into developing the browser. Had I started the work within a virtual machine from the beginning, it would have been much easier, but it still would have required extra work beyond that needed for the basic analyses.

  3. Ensuring longevity of a working pipeline is even harder. The week before the paper was set to be published I tried a fresh install of the VM to make sure it was still working. It wasn’t. The problem was simple (miniconda had changed the name of its installation directory), and it highlighted a significant flaw in our strategy: we had not specified software versions in our VM provisioning. I hope that we can add that in the future, but for now, we have to keep our eyes out for the disruptive effects of software updates.
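That last lesson generalizes: if provisioning can drift, have the analysis itself refuse to run in an unexpected environment. Here is a minimal sketch of such a runtime guard; the package names and version pins are hypothetical, not the MyConnectome project's actual dependencies:

```python
# Hypothetical sketch: fail loudly if installed packages drift from the
# versions the analysis was developed against (pins below are made up).
from importlib.metadata import PackageNotFoundError, version

PINNED = {
    "numpy": "1.26.4",
    "scipy": "1.11.4",
    "pandas": "2.1.4",
}

def check_environment(pins=PINNED):
    """Raise if any package is missing or differs from its pinned version."""
    problems = []
    for pkg, expected in pins.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            problems.append(f"{pkg}: not installed (expected {expected})")
            continue
        if installed != expected:
            problems.append(f"{pkg}: found {installed}, pinned {expected}")
    if problems:
        raise RuntimeError("environment drift:\n" + "\n".join(problems))

check_environment()  # call before any analysis runs
```

The same pins belong in the provisioning script itself, of course; the runtime check is just a cheap tripwire for the silent-upgrade failure described above.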

Writing by Omission

Fantastic piece in the New Yorker about cutting down text. Good advice for succinct, clear writing in general, and for page-limited scientific articles and grant applications in particular!

Green 4 does not mean lop off four lines at the bottom, I tell them. The idea is to remove words in such a manner that no one would notice that anything has been removed. Easier with some writers than with others. It’s as if you were removing freight cars here and there in order to shorten a train—or pruning bits and pieces of a plant for reasons of aesthetics or plant pathology, not to mention size. Do not do violence to the author’s tone, manner, nature, style, thumbprint.

Excellent advice for how to write a grant

Excellent advice: slightly tailored to British/European grant systems, but applicable to all of us.

Keep your proposal simple. Introducers with their pile of a dozen or more grants to present will, at the panel meeting, only be able to keep in the forefront of their mind one main thing about your proposal. Make that the deliverable (i.e. the answer to the burning question), and make it a cool one. That deliverable should appear right at the start of your proposal, and also at the end, with the middle simply there to explain it.

Interview with Jonathan Eisen on gender balance in science

In addition, there are a large number of ways that implicit biases (i.e., those that do not necessarily involve purposeful bias against women) affect women’s careers in science. For example, since women on average tend to be more responsible for child care in families with children, lack of support for childcare in various venues has a disproportionate effect on women. One classic example of implicit bias is in the discussion and recognition of scientists in the media, popular press, and in various related activities. For various reasons, the work of male scientists is overrepresented in such promotional actions.

Great post from Michael Frank on improving reproducibility in science

Hard to argue with any of these suggestions.

2. Everything open by default. There is a huge psychological effect to doing all your work knowing that everyone will see all your data, your experimental stimuli, and your analyses. When you're tempted to write sloppy, uncommented code, you think twice. Unprincipled exclusions look even more unprincipled when you have to justify each line of your analysis. And there are incredible benefits of releasing raw stimuli and data – reanalysis, reuse, and error checking. It can make you feel very exposed to have all your experimental data subject to reanalysis by reviewers or random trolls on the internet. But if there is an obvious, justifiable reanalysis that A) you didn't catch and B) provides evidence against your interpretation, you should be very grateful if someone finds it (and even more so if it's before publication).