UC Berkeley Researchers Reconstruct YouTube Videos from Brain Activity

by Radford Castro of lazytechguys.com

“Think Minority Report, because this one is a trip. The video you’re about to see is somewhat unsettling, because what the university has accomplished is nothing short of astounding. While the quality of the recreated videos isn’t good at all, it’s still remarkable that any image can be displayed from brain activity.

How do they do this?

The study had participants lie in an fMRI scanner for hours at a time while watching YouTube videos. The data gathered by the scanner were used to build a computer model that matched video features, such as colors, shapes, and movements, to the recorded brain activity.
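For readers who want a concrete picture of that modeling step, here is a minimal sketch in Python. It assumes each stimulus window has already been summarized as a feature vector (for example, motion-energy, color, or shape descriptors) and that each voxel's response is one number per time point. The function names, the ridge penalty, and the closed-form ridge solution are illustrative assumptions, not the authors' actual code.

```python
# Sketch of an "encoding model": one regularized linear model per voxel that
# predicts fMRI responses from video features. Variable names and the
# closed-form ridge solution are assumptions for illustration only.
import numpy as np

def fit_encoding_model(features, voxel_responses, ridge_penalty=1.0):
    """Fit a linear encoding model for every voxel with ridge regression.

    features:        (n_timepoints, n_features) stimulus feature matrix
    voxel_responses: (n_timepoints, n_voxels) measured fMRI responses
    returns:         (n_features, n_voxels) weights W so that
                     features @ W approximates voxel_responses
    """
    n_features = features.shape[1]
    # Closed-form ridge solution: W = (X^T X + lambda * I)^-1 X^T Y
    gram = features.T @ features + ridge_penalty * np.eye(n_features)
    return np.linalg.solve(gram, features.T @ voxel_responses)

def predict_responses(features, weights):
    """Predict voxel responses for new video clips from the fitted weights."""
    return features @ weights
```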

The video was then recreated by using slight changes in blood flow to the visual areas of the brain to predict what was on the screen at the time. The team thinks that one day the technology could be used to broadcast imagery that plays in the mind independently of vision. You know what? Just watch it.” Source: LTG
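That decoding step can be sketched the same way: score a large library of candidate clips by how closely their predicted brain responses match the observed response, then blend the best matches into a rough composite frame. The correlation score and top-k averaging below are simplifications standing in for the Bayesian posterior used in the paper, and the inputs (candidate_features, candidate_frames, observed_response) are assumed to be prepared elsewhere.

```python
# Sketch of the decoding/reconstruction step: rank candidate clips by how well
# the encoding model's predicted response matches the observed fMRI pattern,
# then average the top-scoring frames into a blurry reconstruction.
import numpy as np

def reconstruct_frame(observed_response, candidate_features, candidate_frames,
                      weights, top_k=100):
    """Reconstruct one frame from a single (n_voxels,) fMRI response pattern.

    candidate_features: (n_clips, n_features) features of the clip library
    candidate_frames:   (n_clips, height, width, 3) representative frames
    weights:            (n_features, n_voxels) fitted encoding-model weights
    """
    predicted = candidate_features @ weights          # (n_clips, n_voxels)
    # Correlate each clip's predicted response with the observed response.
    pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
    obs_z = (observed_response - observed_response.mean()) / observed_response.std()
    scores = pred_z @ obs_z / observed_response.size  # (n_clips,) correlations
    # Blend the frames of the best-matching clips into a composite image.
    best = np.argsort(scores)[-top_k:]
    return candidate_frames[best].mean(axis=0)
```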

Read More:
Lazy Tech Guys: http://www.lazytechguys.com/news/uc-berkeley-has-found-a-way-to-pull-videos-from-your-brain/

For more information about this work: http://gallantlab.org

For the paper (Nishimoto et al., 2011, Current Biology), go to: http://dx.doi.org/10.1016/j.cub.2011.08.031

Reconstructing visual experiences from brain activity evoked by natural movies (Current Biology 2011). This paper presents the first successful approach for reconstructing natural movies from brain activity.

Encoding and decoding in fMRI (Neuroimage 2010, PDF 841KB). This paper reviews the current state of “brain decoding” research, and advocates one particularly powerful approach.

Bayesian reconstruction of natural images from human brain activity (Neuron 2009, PDF 3.7MB). This paper presents the first successful approach for reconstructing natural images from brain activity.

Identifying natural images from human brain activity (Nature 2008, PDF 5.4MB). This landmark paper shows that far more information can be recovered from brain activity than was thought previously.

Modeling low-frequency fluctuation and hemodynamic response timecourse in event-related fMRI (Human Brain Mapping 2008, PDF 717KB). This technical paper focuses on optimal quantitative modeling of fMRI data.

Topographic organization in and near human visual areas V4 (Journal of Neuroscience 2007, PDF 4.5MB). This paper provides a detailed retinotopic mapping study of early human visual areas.
