Resting state nuisance regressors remove variance with network structure

Really interesting work from Bright and Murphy. The "highlights":

  • Data variance removed by nuisance regressors contains network structure.
  • Simulated regressors unrelated to noise also extract data with network structure.
  • Random sampling of original data (as few as 10% of volumes) reveals robust networks.
  • After optimal number, motion regressors remove similar variance as simulated ones.
  • Excessive nuisance regressors extract random signal variance with network structure.

Calling attention to meetings with skewed speaker gender ratios, even when it hurts, part 2

Great post from Jonathan Eisen.

Why is this important? Well, speaking at a meeting is important for people's careers. It helps in merit and promotion and tenure cases. It helps get their work recognized and known. Speaking at a meeting is also good practice for speaking at other meetings. Having diverse speakers also is important in terms of providing role models to attendees. And having diverse speakers helps a meeting not just be about the same old, white, men talking about their ideas. Or, in other words, it makes a meeting more, well, diverse. And almost certainly more interesting. And so on.

Early-late life trade-offs and the evolution of ageing in the wild

Here, we compiled 26 studies of free-ranging vertebrate populations that explicitly tested for a trade-off between performance in early and late life. Our review brings overall support for the presence of early-late life trade-offs, suggesting that the limitation of available resources leads individuals to trade somatic maintenance later in life for high allocation to reproduction early in life. We discuss our results in the light of two closely related theories of ageing—the disposable soma and the antagonistic pleiotropy theories—and propose that the principle of energy allocation roots the ageing process in the evolution of life-history strategies.

Friederici & Singer review: Grounding language processing on basic neurophysiological principles ($)

I haven't read this, but it looks interesting. Listed highlights:

  • Single neurons react similarly to persons and their names.
  • Nested oscillations can explain parallel processes.
  • Syntactic processes have a confined neural representation.
  • Different structural networks underlie different language aspects.

Modulation-frequency-specific adaptation in awake auditory cortex ($)

By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels.

Differential roles for attention subnetworks during reading

From the Schlaggar lab:

One set included bilateral Cingulo-opercular regions and mostly right-lateralized Dorsal Attention regions (CO/DA+). This CO/DA+ region set showed response properties consistent with a role in reporting which processing pathway (phonological or lexical) was biased for a particular trial. A second set was composed primarily of left-lateralized Frontal-parietal (FP) regions. Its signal properties were consistent with a role in response checking.

Representation of women in the journal Cognition (PDF)

Great letter from Klatzky, Holt, and Behrmann (Carnegie Mellon).

Perusing the table of contents, we were struck by the fact that among the 19 authors listed for the 12 articles, only one female author was present. While the substantive content of the issue may persuade us that the face of cognition is changing, it appears that changes in gender distribution are not to be expected. The face of cognitive science will remain unequivocally male.

How many participants should you collect? An alternative to the N * 2.5 rule

Another great post from Daniel Lakens.

  1. Determine the maximum sample size you are willing to collect (e.g., N = 400)
  2. Plan equally spaced analyses (e.g., four looks at the data, after 50, 100, 150, and 200 participants per condition in a two-sample t-test).
  3. Use alpha levels for each of the four looks at your data that control the Type 1 error rate (e.g., for four looks: 0.018, 0.019, 0.020, and 0.021; for three looks: 0.023, 0.023, and 0.024; for two looks: 0.031 and 0.030).
  4. Calculate one-sided p-values and JZS Bayes Factors (with a scale r on the effect size of 0.5) at every analysis. Stop when the effect is statistically significant and/or JZS Bayes Factors > 3. Stop when there is support for the null hypothesis based on a JZS Bayes Factor < 0.3. If the results are inconclusive, continue. In small samples (e.g., 50 participants per condition) the risk of Type 1 errors when accepting the null using Bayes Factors is relatively high, so always interpret results from small samples with caution.
  5. When the maximum sample size is reached without providing convincing evidence for the null or alternative hypothesis, interpret the Bayes Factor while acknowledging the Bayes Factor provides weak support for either the null or the alternative hypothesis. Conclude that based on the power you had to observe a small effect size (e.g., 91% power to observe a d = 0.3) the true effect size is most likely either zero or small.
  6. Report the effect size and its 95% confidence interval, and interpret it in relation to other findings in the literature or to theoretical predictions about the size of the effect.
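The stopping rule in the steps above can be sketched in Python. This is a rough illustration, not Lakens's own code: the JZS Bayes factor follows the Rouder et al. (2009) integral formula, the looks and alpha levels are taken from the four-look example above, and the simulated data (true d = 0.3) are purely illustrative.

```python
import numpy as np
from scipy import stats, integrate

def jzs_bf10(t, n1, n2, r=0.5):
    """JZS Bayes factor for a two-sample t-test (Rouder et al., 2009),
    with a Cauchy prior of scale r on the effect size."""
    nu = n1 + n2 - 2                 # degrees of freedom
    n_eff = n1 * n2 / (n1 + n2)      # effective sample size for two groups
    def integrand(g):
        return ((1 + n_eff * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n_eff * g * r**2) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    numerator, _ = integrate.quad(integrand, 0, np.inf)
    denominator = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    return numerator / denominator

looks = [50, 100, 150, 200]            # per-condition n at each look (step 2)
alphas = [0.018, 0.019, 0.020, 0.021]  # per-look alpha levels (step 3)

rng = np.random.default_rng(1)
a_full = rng.normal(0.3, 1, 200)       # condition A, true effect d = 0.3
b_full = rng.normal(0.0, 1, 200)       # condition B

for n, alpha in zip(looks, alphas):
    a, b = a_full[:n], b_full[:n]
    t, p_two = stats.ttest_ind(a, b)
    p_one = p_two / 2 if t > 0 else 1 - p_two / 2   # one-sided p-value (step 4)
    bf = jzs_bf10(t, n, n, r=0.5)
    if p_one < alpha or bf > 3:
        print(f"n={n}: stop, evidence for the effect (p={p_one:.3f}, BF={bf:.2f})")
        break
    if bf < 1 / 3:
        print(f"n={n}: stop, support for the null (BF={bf:.2f})")
        break
else:
    print("Maximum n reached: interpret the Bayes factor and the effect size CI")
```

The per-look alpha levels control the overall Type 1 error rate across the planned analyses; only the Bayes factor can stop data collection in favor of the null.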

Divided attention disrupts perceptual encoding during speech recognition

Adding cognitive load increased the likelihood that listeners would select a word acoustically similar to the target even though its frequency was lower than that of the target. Thus, there was no evidence that cognitive load led to a high-frequency response bias. Rather, cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.

Oscillatory visual activity modulates spatial attention to rhythmic inputs

Using high-density electroencephalography in humans, a bilateral entrainment response to the rhythm (1.3 or 1.5 Hz) of an attended stimulation stream was observed, concurrent with a considerably weaker contralateral entrainment to a competing rhythm. That ipsilateral visual areas strongly entrained to the attended stimulus is notable because competitive inputs to these regions were being driven at an entirely different rhythm. Strong modulations of phase locking and weak modulations of single-trial power suggest that entrainment was primarily driven by phase-alignment of ongoing oscillatory activity.

Why big data is in trouble: they forgot about applied statistics

Turns out doing a proper statistical analysis actually takes some thought and knowledge.

One reason is that when you actually take the time to do an analysis right, with careful attention to all the sources of variation in the data, it is almost a law that you will have to make smaller claims than you could if you just shoved your data in a machine learning algorithm and reported whatever came out the other side.

Rethinking scientific publishing

Guardian article summarizing a recent debate on the future of scientific publishing. I like the idea of taking a step back and thinking about the best way to communicate and evaluate science, rather than simply keeping the status quo.

Stuart Taylor, the Publishing Director at the Royal Society, raised a more fundamental question about what we expect scientific authors to do. “Authors still create journals in prose-style — do we really need to produce all that text?” Taylor wondered if the traditional formats were still appropriate for presenting scientific results in the internet age. Taylor’s suggestion that the standard structure of a scientific article might be out-of-date met with some approval — and some scepticism. Could researchers sustain a coherent argument without prose?

figshare on open data plans in research

It is no longer efficient or sustainable for humans to be the gatekeepers of academic content. Principles must be put in place so that content can still be reused long after the data creators are dead. Content must adhere to ethical and commercial sensitivities where necessary, but with the dawn of research data management plans, the funders of research ultimately decide what level of access society has.

and this snippet from the NIH public access plan:

“NIH intends to make public access to digital scientific data the standard for all NIH-funded research, as well as exploring ways to advance data as a legitimate form of scholarship through data citation and other means.”

ICA methods for motion correction in resting state fMRI

From the abstract:

Results demonstrated that ICA-AROMA, spike regression, scrubbing, and ICA-FIX similarly minimized the impact of motion on functional connectivity metrics. However, both ICA-AROMA and ICA-FIX resulted in significantly improved resting-state network reproducibility and decreased loss in tDoF compared to spike regression and scrubbing. In comparison to ICA-FIX, ICA-AROMA yielded improved preservation of signal of interest across all datasets.

Adaptive thresholding for reliable inference in single subject fMRI analysis

Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations we show that by adapting to the signal and noise properties, the new method performs well in terms of total number of errors but also in terms of the trade-off between false negative and positive cluster error rates.
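As a loose illustration of the mixture-modeling idea only: the paper fits a Gamma-Gaussian mixture with topological thresholding, whereas this sketch substitutes a plain two-component Gaussian mixture from scikit-learn on an invented statistic map, and derives a data-adaptive threshold from the posterior probabilities.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated statistic map: mostly noise (z ~ N(0, 1)) plus some signal (z ~ N(4, 1))
z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(4, 1, 1000)])

# Fit a two-component mixture to the distribution of voxelwise statistics
gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
signal = int(np.argmax(gm.means_.ravel()))        # component with larger mean = signal
post = gm.predict_proba(z.reshape(-1, 1))[:, signal]

# Adaptive threshold: smallest statistic more likely signal than noise
thresh = z[post > 0.5].min()
print(f"adaptive z threshold: {thresh:.2f}")
```

The point of adapting the threshold to the fitted noise and signal components, rather than using a fixed cutoff, is the improved balance between false positive and false negative cluster errors the abstract describes.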

Improving reproducibility in fMRI

Nice recommendations and overview from Cyril Pernet and Jean-Baptiste Poline.

Making neuroimaging reproducible will not make studies better; it will make scientific conclusions more verifiable, by accumulating evidence through replication, and ultimately make those conclusions more valid and research more efficient. Two of the main obstacles on this road are the lack of programming expertise in many neuroscience or clinical research laboratories, and the absence of widespread acknowledgement that neuroimaging is (also) a computational science.

Always use Welch's t-test instead of Student's t-test

Helpful post from Daniel Lakens with simulations and explanations.

Take home message of this post: We should use Welch’s t-test by default, instead of Student’s t-test, because Welch's t-test performs better than Student's t-test whenever sample sizes and variances are unequal between groups, and gives the same result when sample sizes and variances are equal.
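The advice is easy to follow in practice: in SciPy, Welch's test is one argument away from Student's. A minimal sketch (the group sizes and variances are made up to show the unequal-variance case where the two tests diverge):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
small_noisy = rng.normal(0, 3, 20)    # small group, large variance
large_quiet = rng.normal(0, 1, 100)   # large group, small variance

# Student's t-test assumes equal variances (scipy's default)
t_student, p_student = stats.ttest_ind(small_noisy, large_quiet)
# Welch's t-test drops that assumption
t_welch, p_welch = stats.ttest_ind(small_noisy, large_quiet, equal_var=False)

print(f"Student: p = {p_student:.3f}")
print(f"Welch:   p = {p_welch:.3f}")
```

When sample sizes and variances are equal the two tests return the same result, so defaulting to `equal_var=False` costs nothing.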