This is the first time that an AI algorithm known as Stable Diffusion has been used to recreate images from brain scans. (Image: Pixabay)
From generating human-like conversations through chatbots to automating everyday tasks, AI is making new strides across the globe. While anxiety about AI replacing humans in many jobs refuses to abate, its capabilities are also giving rise to renewed optimism about ways it could aid humanity.
Dreams are integral to the human experience. They not only inspire but occasionally surprise us with their rawness. Yet not everyone can describe their dreams vividly. Much is lost in translation, leaving many to wonder whether that succession of images, ideas, and sensations could ever be captured in a physical form.
While neuroscientists from around the world have been grappling with the mammoth task of converting mental images into something tangible, AI seems to have paved the way. A recent study has demonstrated that AI can read brain scans and offer realistic interpretations of mental images.
Researchers Shinji Nishimoto and Yu Takagi from Japan’s Osaka University recreated high-resolution images from scans of brain activity. The technology, according to the duo, has numerous potential applications, including exploring how animals perceive the world around them, recording human dreams, and even aiding communication with people living with paralysis.
Dream interpretations
This is not the first time something of this scale has been attempted. Earlier studies have reported using AI to read brain scans and create images of landscapes and faces. It is, however, the first time the algorithm known as Stable Diffusion has been used. As part of the study, the researchers gave the default Stable Diffusion system additional training, essentially linking text descriptions of thousands of photos to the brain activity patterns recorded while participants in brain-scan studies viewed those same images.
While earlier AI algorithms used to decode brain scans relied on large data sets, Stable Diffusion achieved the feat with less training, essentially by incorporating image captions into its algorithm. Ariel Goldstein, a cognitive neuroscientist at Princeton University who was not involved in the study, called it a novel approach that combines textual and visual information to decipher the brain.
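To give a rough sense of how such a decoder might work, here is a minimal, purely illustrative sketch. It assumes the core step can be approximated as a linear (ridge) regression from fMRI voxel patterns to a caption-embedding vector; the data, dimensions, and regression choice below are placeholder assumptions, not the researchers' actual code.

```python
# Conceptual sketch only (not the study's code): a ridge regression is assumed
# to map fMRI voxel patterns to a caption-embedding vector. All data below is
# synthetic placeholder data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_images  = 1000   # images viewed in the scanner (the study used 10,000 per person)
n_voxels  = 5000   # assumed number of fMRI voxels recorded per image
embed_dim = 768    # assumed size of a text embedding of the image's caption

# Placeholder "brain activity" and "caption embeddings" for the training images.
brain_activity     = rng.normal(size=(n_images, n_voxels))
caption_embeddings = rng.normal(size=(n_images, embed_dim))

# Learn a linear mapping: brain pattern -> caption embedding.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity, caption_embeddings)

# A new, unseen scan can then be turned into a predicted embedding, which
# would serve as the text conditioning for an image generator such as
# Stable Diffusion.
new_scan            = rng.normal(size=(1, n_voxels))
predicted_embedding = decoder.predict(new_scan)
print(predicted_embedding.shape)   # (1, 768)
```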
The study suggests that the AI algorithm processed information gathered from different regions of the brain, such as the occipital and temporal lobes, which are involved in perceiving images. The system read this information from functional magnetic resonance imaging (fMRI) scans of the brain.
The researchers said that when people look at an image, the temporal lobes register information about its contents, while the occipital lobes record layout and perspective. All of this is captured by fMRI, which detects changes in blood flow to active regions of the brain. The recorded information, according to the researchers, can then be converted into an imitation of the image with the help of AI.
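The two-pathway idea described above can also be sketched in code. The snippet below is an assumption-laden illustration, not the paper's pipeline: occipital-lobe voxels are mapped to a "layout" representation and temporal-lobe voxels to a "contents" embedding, with all region sizes and dimensions chosen arbitrarily.

```python
# Illustrative sketch (assumed, not the study's code): separate linear decoders
# for the two brain regions mentioned above. Occipital voxels predict a spatial
# "layout" latent; temporal voxels predict a semantic "contents" embedding.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_images = 1000

# Placeholder voxel recordings from the two regions for each viewed image.
occipital = rng.normal(size=(n_images, 3000))   # layout / perspective signal
temporal  = rng.normal(size=(n_images, 2000))   # object / content signal

# Placeholder targets: a compressed image latent and a caption embedding.
image_latents      = rng.normal(size=(n_images, 1024))
caption_embeddings = rng.normal(size=(n_images, 768))

layout_decoder  = Ridge(alpha=1.0).fit(occipital, image_latents)
content_decoder = Ridge(alpha=1.0).fit(temporal, caption_embeddings)

def reconstruct_conditioning(occipital_scan, temporal_scan):
    """Predict the two representations for a single held-out scan. In a
    pipeline like the one described in the article, these predictions would
    then condition an image generator such as Stable Diffusion."""
    latent    = layout_decoder.predict(occipital_scan[None, :])
    embedding = content_decoder.predict(temporal_scan[None, :])
    return latent, embedding

latent, embedding = reconstruct_conditioning(occipital[0], temporal[0])
print(latent.shape, embedding.shape)   # (1, 1024) (1, 768)
```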
The additional training given to the Stable Diffusion algorithm was based on an online data set provided by the University of Minnesota, consisting of brain scans from four participants who each viewed 10,000 pictures. A portion of these scans was withheld from training and later used to test the AI system.
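The held-out test portion works much like any train/test split. The sketch below only illustrates the idea; the size of the held-out set is an assumption, as the article does not specify it.

```python
# Minimal sketch of the hold-out setup described above. The split size is an
# assumed placeholder, not a figure from the study.
import numpy as np

rng = np.random.default_rng(2)
n_images_per_subject = 10_000            # each participant viewed 10,000 pictures

indices   = rng.permutation(n_images_per_subject)
n_holdout = 1_000                        # assumed size of the held-out test set

test_idx  = indices[:n_holdout]          # scans never seen during training
train_idx = indices[n_holdout:]          # scans used to train the decoders

print(len(train_idx), "training scans,", len(test_idx), "held-out test scans")
```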