  • Poster presentation
  • Open access

Neural representation in F5: cross-decoding from observation to execution

Mirror neurons fire both during action execution and during observation of a similar action performed by another individual [1]. This definition, however, does not by itself imply representational equivalence between execution and observation. To investigate this issue, we recorded 68 neurons from area F5 of a macaque monkey trained either to execute reaching-to-grasp actions towards objects or to observe the experimenter performing the same actions [2]. We adopted a decoding framework to determine (1) whether the object/grip type can be decoded from the neural activity in the execution condition, (2) whether it can be decoded in the observation condition, and, most critically, (3) whether transfer between execution and observation decoders (i.e., cross-decoding) is possible. By 'transfer' we mean applying the decoder parameters estimated from the neural discharge in observation to the neural firing recorded in execution, and vice versa. The success rate of such a transferred decoder indicates the degree of representational equivalence between the two conditions.

Our analysis indicates that, at the level of single neurons, object/grip-specific decoders can be constructed: the type of object/grip employed in either execution or observation can be decoded (success rate: 80%-100%; chance level: 25%). However, in only 10% of the cases (corresponding to the congruent type of mirror neurons [1]) was the decoder based on the execution discharge effective when transferred to the observation discharge; the same was true for the reverse transfer. To extend this analysis to the population level, we examined the performance of all pairs drawn from a 10-neuron set consisting of the 4 neurons with the best decoding performance and 6 randomly selected neurons. Of the 45 possible pairs, 7 displayed high success rates in cross-decoding (80% on average). Remarkably, high-performing pairs were formed only when one of the neurons with reliable solo decoding performance was paired with a randomly selected neuron that was a poor solo decoder, which acted as a "helper". These results strongly point to a population-based representation in which good and poor decoders cooperate to form a robust recognition system.
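The pair enumeration above can be sketched as follows. This is a minimal illustration, not the analysis code: the neuron histograms here are random Poisson stand-ins for the F5 spike data, and all variable names are hypothetical. It shows only the combinatorics (C(10, 2) = 45 pairs) and how two 7-bin histograms are concatenated into the 14-tuple trial vectors described in the Methods.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data: 10 neurons, each trial summarized as a
# 7-bin spike-count histogram per neuron.
n_neurons, n_trials, n_bins = 10, 10, 7
histograms = rng.poisson(4.0, size=(n_neurons, n_trials, n_bins))

# All unordered pairs from the 10-neuron set: C(10, 2) = 45.
pairs = list(combinations(range(n_neurons), 2))
print(len(pairs))  # 45

# For one pair, concatenate the two 7-bin histograms trial by trial
# into the 14-tuple vectors fed to the pair decoder.
i, j = pairs[0]
X_pair = np.concatenate([histograms[i], histograms[j]], axis=1)
print(X_pair.shape)  # (10, 14)
```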

Methods

Neuronal discharges during each condition were trimmed and represented as 14-bin histogram vectors. In the two-neuron analysis, each neuron was reduced to a 7-bin histogram, and the concatenation of the two histograms yielded a 14-tuple vector, ensuring similar decoder complexity (a constant number of adjustable parameters). Thus, for each condition, ten 14-tuple neural firing vectors (one per trial) made up the rows of the input matrix X, and the corresponding object ids (1-4) made up the output vector Y. We assumed a linear relation between input and output, XW = Y, and solved for the weights (the decoder parameters) using the pseudo-inverse solution W = X†Y. Then, given a 14-tuple vector representation z of a discharge, the predicted object id is y_pred = argmin_{i=0..5} |i − zᵀW|, where predictions of 0 or 5 indicate a definitely wrong prediction. For the execution-only and observation-only analyses, leave-one-out cross-validation was applied to obtain the decoding success rates. For the cross-decoding analysis, the weight vector W obtained in one condition was used to predict the object type from the data of the other condition.
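The pipeline above can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' code: the trial data are random Poisson stand-ins for the recorded spike histograms, and the function names (`fit_decoder`, `predict`, `loo_success_rate`, `cross_decode`) are invented for illustration. It does follow the Methods: pseudo-inverse weights W = X†Y, nearest-integer read-out over ids 0-5, leave-one-out cross-validation within a condition, and decoder transfer across conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 10 trials x 14 bins per condition,
# with object ids 1-4 repeated across trials.
n_trials, n_bins = 10, 14
Y = np.tile(np.arange(1, 5), 3)[:n_trials].astype(float)
X_exec = rng.poisson(5.0, size=(n_trials, n_bins)).astype(float)
X_obs = rng.poisson(5.0, size=(n_trials, n_bins)).astype(float)

def fit_decoder(X, Y):
    """Least-squares weights solving XW ≈ Y via the pseudo-inverse: W = X†Y."""
    return np.linalg.pinv(X) @ Y

def predict(z, W):
    """Nearest integer in 0..5 to the scalar read-out zᵀW
    (predictions of 0 or 5 mark a definitely wrong prediction)."""
    candidates = np.arange(6)
    return int(candidates[np.argmin(np.abs(candidates - z @ W))])

def loo_success_rate(X, Y):
    """Leave-one-out cross-validated decoding success rate within a condition."""
    hits = 0
    for k in range(len(Y)):
        keep = np.arange(len(Y)) != k
        W = fit_decoder(X[keep], Y[keep])
        hits += predict(X[k], W) == int(Y[k])
    return hits / len(Y)

def cross_decode(X_train, Y_train, X_test, Y_test):
    """Transfer: fit W in one condition, evaluate it on the other condition."""
    W = fit_decoder(X_train, Y_train)
    return np.mean([predict(z, W) == int(y) for z, y in zip(X_test, Y_test)])

print(loo_success_rate(X_exec, Y))       # within-condition success rate
print(cross_decode(X_exec, Y, X_obs, Y)) # execution-to-observation transfer
```

With the real F5 data, `loo_success_rate` would be run separately on the execution and observation conditions, and `cross_decode` in both transfer directions; with the random stand-in data here the rates carry no meaning beyond exercising the code path.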

References

  1. Fadiga L, Fogassi L, Rizzolatti G: Action recognition in the premotor cortex. Brain. 1996, 119: 593-609.


  2. Papadourakis V, Raos V: Cue-dependent action-observation elicited responses in the ventral premotor cortex (area F5) of the macaque monkey. Soc Neurosci Abstr. 2013, Program No. 263.08



Acknowledgements

This work was supported by the grant "OBSERVENEMO" within the framework of the bilateral S&T Cooperation Program between the Republic of Turkey and the Hellenic Republic: grant no. 113S391 funded by TUBITAK, and grant ΓΓΕΤ 14ΤUR OBSERVENEMO co-financed by the European Union and the Greek State (MCERA/GSRT).

Author information

Corresponding author

Correspondence to Erhan Oztop.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Kirtay, M., Papadourakis, V., Raos, V. et al. Neural representation in F5: cross-decoding from observation to execution. BMC Neurosci 16 (Suppl 1), P190 (2015). https://doi.org/10.1186/1471-2202-16-S1-P190
