September 24, 2011

simultaneously excited and terrified


Check this out:

...according to Professor Jack Gallant, UC Berkeley neuroscientist and coauthor of the research published today in the journal Current Biology, "this is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds."

Indeed, it's mind-blowing. I'm simultaneously excited and terrified. This is how it works:

They used three different subjects for the experiments (incidentally, all members of the research team, since the procedure requires lying inside a functional Magnetic Resonance Imaging system for hours at a time). The subjects were shown two different groups of Hollywood movie trailers while the fMRI system recorded blood flow through their visual cortex.

The readings were fed into a computer program that divided them into three-dimensional pixel units called voxels (volumetric pixels). This process effectively decodes the brain signals generated by moving pictures, connecting the shape and motion information in the movies to specific patterns of brain activity. As the sessions progressed, the computer learned more and more about how the visual material presented on the screen corresponded to the recorded brain activity.
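To get a feel for what that learning step involves, here is a minimal sketch in Python. It fits a simple linear encoding model per voxel using ridge regression; the shapes, variable names, and synthetic data are all illustrative, and the actual study used a much richer motion-energy model of the stimulus:

    import numpy as np

    # Hypothetical sizes: T fMRI time points, F stimulus features per
    # time point, V voxels in visual cortex. All data here is synthetic.
    rng = np.random.default_rng(0)
    T, F, V = 1000, 50, 200
    features = rng.standard_normal((T, F))            # movie features over time
    responses = (features @ rng.standard_normal((F, V))
                 + 0.1 * rng.standard_normal((T, V))) # toy voxel responses

    # Closed-form ridge regression, one weight column per voxel:
    # W = (X'X + lam*I)^-1 X'Y
    lam = 1.0
    W = np.linalg.solve(features.T @ features + lam * np.eye(F),
                        features.T @ responses)

    def predict_voxels(clip_features):
        """Predict the voxel activity a new clip should evoke."""
        return clip_features @ W

Once W is fit, the model can run "forward": given any new clip's features, it predicts the brain activity that clip should produce, which is exactly what the reconstruction step below exploits.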

An 18-million-second picture palette

After this training phase, a second group of clips was used to test reconstruction of the videos shown to the subjects. The computer analyzed 18 million seconds of random YouTube video, building a database of predicted brain activity for each clip. From all these videos, the software picked the one hundred clips whose predicted brain activity was most similar to the activity evoked by the clip the subject actually watched, and combined them into one final movie. Although the resulting video is low-resolution and blurry, it clearly matches the actual clips watched by the subjects.
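That matching-and-averaging step is simple enough to sketch. Assuming the encoding model above has already been run over the whole YouTube library, reconstruction reduces to a similarity search plus an average; the published method weights the candidate clips within a Bayesian framework, so the plain mean here is a simplification:

    import numpy as np

    def reconstruct(observed, predicted_library, library_frames, k=100):
        """Average the frames of the k library clips whose predicted
        voxel activity best correlates with the observed activity."""
        obs = (observed - observed.mean()) / observed.std()
        pred = predicted_library - predicted_library.mean(axis=1, keepdims=True)
        pred /= predicted_library.std(axis=1, keepdims=True)
        scores = pred @ obs / obs.size           # one correlation per clip
        top = np.argsort(scores)[-k:]            # the 100 best-matching clips
        return library_frames[top].mean(axis=0)  # blurry average = reconstruction

Averaging a hundred only-roughly-matching clips is also why the output looks the way it does: recognizable shapes and motion, but smeared, like a hundred transparencies stacked on top of each other.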

Think of those 18 million seconds of random videos as a painter's color palette. A painter sees a red rose in real life and tries to reproduce its color using the different reds available on his palette, combining them to match what he's seeing. The software is the painter, and the 18 million seconds of random video are its color palette. It analyzes how the brain reacts to a given stimulus, compares that reaction to the brain reactions predicted for the 18-million-second palette, and picks whatever most closely matches. Then it combines those clips into a new one that approximates what the subject was seeing. Notice that the 18 million seconds of video are not what the subject is seeing; they are random bits used only to compose the reconstructed image.

Given a big enough database of video material and enough computing power, the system would be able to re-create any images in your brain.

(Emphasis Mine. Source.)

Think about it: three- or four-dimensional reality, the reflected light from which is captured and flattened into two dimensions by cameras, converted into bytes, transmitted via cable or wireless signals that fluoresce on a flat screen, whose light is projected onto our retinas and cascades into three-dimensional brain signals, which are detected and mapped into voxels, which are correlated into a two-dimensional representation of the referent of the perceiving brain, which is converted into bytes, transmitted via cable or wireless signals that fluoresce on another flat screen, whose light is again projected onto our retinas and again cascades into three-dimensional brain signals...

Posted by Dennis at September 24, 2011 6:09 AM
