Perception Decoder

The latest issue of Nature will include an article detailing the fMRI work by Gallant and his team at the University of California, Berkeley.

In the experiment, the brain activity of two subjects (two of Gallant’s team members, Kendrick Kay and Thomas Naselaris) was monitored with fMRI while they viewed 1,750 different pictures. The team then selected 120 novel images the subjects had not seen before and used a model built from the earlier responses to predict the brain activity each new image would evoke. When a subject was shown one of these images, the team matched the measured brain response against the predictions to pick out which picture had been shown, as sketched below. They were correct 72% of the time with one participant and 92% of the time with the other; by chance alone they would have been right only 0.8% of the time (1 in 120).
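
The identification step boils down to comparing one measured response pattern against a set of model-predicted patterns and picking the best match. Here is a minimal illustrative sketch in Python; the correlation-based matching, the array sizes, and the toy data are assumptions for the sake of the example, not the paper's actual model.

```python
import numpy as np

def identify_image(measured, predicted):
    """Return the index of the candidate image whose predicted response
    best matches the measured fMRI response.

    measured : (n_voxels,) observed response to the unknown image
    predicted: (n_images, n_voxels) model-predicted responses, one row per candidate

    Matching by Pearson correlation is an assumption made here for
    illustration; the study's actual matching metric may differ.
    """
    scores = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

# Toy usage with made-up sizes: 120 candidate images, 500 voxels.
rng = np.random.default_rng(0)
predicted = rng.standard_normal((120, 500))
true_index = 42
measured = predicted[true_index] + 0.5 * rng.standard_normal(500)  # noisy observation
print(identify_image(measured, predicted))  # ideally prints 42
# Guessing at random would succeed only 1/120, roughly 0.8% of the time.
```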

Cool stuff. The researchers would eventually like to build a similar model for intentions. If they succeed, there will be some pressing philosophical implications to address.

–Edit–

The study is catching a lot of press:

SciAm

Neuroscientifically Challenged

Not Exactly Rocket Science
