Predicting the timecourse of object processing with MEG (vision + semantics)

Here, we test whether a model of individual objects—combining the HMax computational model of vision with semantic-feature information—can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations than HMax alone, in terms of both model fit and classification performance. Our results show that modeling and classifying individual objects are significantly improved by adding semantic-feature information beyond ∼200 ms.
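The core comparison described above—whether appending semantic features to visual (HMax-style) features improves the fit to neural responses—can be sketched with a simple encoding-model analysis. The sketch below uses synthetic data and least-squares regression; all dimensions, feature matrices, and the hypothetical sensor signal are illustrative assumptions, not the paper's actual pipeline or parameters.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

# Hypothetical sizes: 60 objects, 20 visual features, 10 semantic features
n_obj, d_vis, d_sem = 60, 20, 10
hmax = rng.standard_normal((n_obj, d_vis))   # stand-in for HMax outputs
sem = rng.standard_normal((n_obj, d_sem))    # stand-in for semantic features

# Simulated sensor response driven by both feature sets plus noise
w_vis = rng.standard_normal(d_vis)
w_sem = rng.standard_normal(d_sem)
meg = hmax @ w_vis + sem @ w_sem + 0.5 * rng.standard_normal(n_obj)

def r2(X, y):
    """Least-squares fit; return the coefficient of determination."""
    beta, *_ = lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_vis = r2(hmax, meg)                      # visual model alone
r2_both = r2(np.hstack([hmax, sem]), meg)   # visual + semantic model
print(f"HMax only R^2:       {r2_vis:.2f}")
print(f"HMax + semantic R^2: {r2_both:.2f}")
```

In practice this comparison would be run per timepoint with cross-validation, so that the improvement from semantic features can be localized in time (e.g., beyond ∼200 ms, as reported).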