‘Mind-reading’ AI: Japan study sparks ethical debate

Tokyo, Japan – Yu Takagi was astounded by what he saw. On a Saturday in September, he was working alone at his desk when he watched in awe as artificial intelligence created images of what the subject was seeing on a screen by decoding the subject’s brain activity.

“I still remember when I saw the first [AI-generated] images,” Takagi, a neuroscientist and assistant professor at Osaka University, told Al Jazeera.


“When I went into the bathroom and saw my face in the mirror, I said, ‘Okay, that’s normal. I might not be crazy after all.’”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyze the brain scans of test subjects who were shown up to 10,000 images while inside an MRI machine.

Stable Diffusion was able to produce high-fidelity images that were eerily similar to the originals after Takagi and his research partner Shinji Nishimoto developed a straightforward model to “translate” brain activity into a readable format.
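Conceptually, that “translation” step can be thought of as a simple linear mapping from fMRI voxel responses into the latent feature space that a generative model such as Stable Diffusion renders images from. The following is a minimal, hypothetical sketch of that idea using synthetic data and closed-form ridge regression; all dimensions and data here are invented for illustration, and this is not the authors’ actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_voxels, n_latent = 200, 500, 64

# Purely synthetic "ground truth" linking brain activity to latent
# image features (no real neural or latent data involved).
W_true = rng.normal(size=(n_voxels, n_latent))
X = rng.normal(size=(n_train, n_voxels))                      # fMRI voxel responses
Z = X @ W_true + 0.1 * rng.normal(size=(n_train, n_latent))   # latent targets

# Ridge regression, closed form: W = (X^T X + alpha*I)^-1 X^T Z
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# "Decode" a new scan into a predicted latent vector, which a
# generative model would then render as an image.
x_new = rng.normal(size=(1, n_voxels))
z_pred = x_new @ W

print(z_pred.shape)  # (1, 64)
```

The appeal of a linear model in this setting is that it is straightforward to fit per subject and makes the decoding step interpretable; the heavy lifting of turning the predicted features into a picture is left to the pretrained generative model.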

Despite not having seen the images beforehand or receiving any training to manipulate the results, the AI was still able to accomplish this.

Takagi said, “We really didn’t expect this kind of result.”

Takagi emphasized that the development does not, as of right now, entail mind-reading because the AI can only produce pictures that a person has already seen.

“This is not mind-reading,” Takagi said. “Unfortunately, there are a lot of misconceptions about our research.”

“We believe this is overly optimistic; we are unable to decode dreams or imagination. Of course, there is potential for the future.”

Although there is a larger discussion about the dangers of AI in general, this development has nonetheless raised questions about how such technology might be used in the future.

Tesla chief executive Elon Musk and Apple co-founder Steve Wozniak called for a halt to the development of AI in an open letter published last month, citing the “profound risks to society and humanity”.

Despite his enthusiasm, Takagi admits that fears around mind-reading technology are not unfounded, given the possibility of misuse by those with malicious intent, or its use without consent.

“Privacy concerns are the most significant issue for us. If a government or institution can read people’s minds, it’s a very delicate subject,” Takagi said. “There must be high-level discussions to ensure this cannot occur.”

Yu Takagi and a colleague developed a technique for using AI to analyze and visually represent brain activity [Yu Takagi]

Takagi and Nishimoto’s research generated considerable buzz in the tech community, which has been electrified by rapid advancements in AI, including the release of ChatGPT, which produces human-like speech in response to user prompts.

According to the data company Altmetric, their paper summarizing the findings ranks in the top 1 percent for engagement among the more than 23 million research outputs it has tracked to date.

The study has also been accepted to the Conference on Computer Vision and Pattern Recognition (CVPR), scheduled for June 2023, a well-known venue for legitimizing significant advances in neuroscience.

Even so, Takagi and Nishimoto are wary of overstating their discoveries.

According to Takagi, brain-scanning technology and AI itself are the two main barriers to true mind reading.

Despite advances in neural interfaces, including electroencephalography (EEG) brain-computer interfaces and functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes associated with blood flow, scientists believe we may still be decades away from being able to accurately and reliably decode imagined visual experiences.

Yu Takagi and his colleague used an MRI to scan the brains of test subjects [Yu Takagi]

In Takagi and Nishimoto’s research, subjects had to spend up to 40 hours in an fMRI scanner, which was both expensive and time-consuming.

In a 2021 paper, scientists from the Korea Advanced Institute of Science and Technology noted that conventional neural interfaces “lack chronic recording stability” due to the soft and complex nature of neural tissue, which responds in peculiar ways when in contact with synthetic interfaces.

In addition, the researchers noted: “Current recording techniques generally rely on electrical pathways to transfer the signal, which is susceptible to electrical noises from surroundings. It is still difficult to obtain fine signals from the target region with high sensitivity because electrical noises significantly interfere with it.”

A second bottleneck is the limitations of current AI, though Takagi acknowledges that these capabilities are improving daily.

“I’m hopeful for AI, but I’m not hopeful for brain technology,” Takagi said. “I believe neuroscientists agree on this.”

Takagi and Nishimoto’s framework may be applied to brain-scanning technologies other than MRI, such as EEG, or to highly invasive techniques, such as the brain-computer implants being developed by Neuralink, a company owned by Elon Musk.

Takagi still feels that there aren’t many real-world uses for his AI research at the moment.

For one, the method cannot yet be transferred to new subjects. Because each person’s brain is different, a model developed for one person cannot be directly applied to another.
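That subject-specificity means a separate decoder has to be fitted for each participant. A hypothetical sketch of that constraint, again with synthetic data (the function name and subject labels are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_decoder(X, Z, alpha=1.0):
    """Fit a ridge decoder from voxel responses X to latent targets Z."""
    n_voxels = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# Each subject needs their own training scans and their own model;
# weights fitted on one brain do not transfer to another.
decoders = {}
for subject in ["sub-01", "sub-02"]:
    X = rng.normal(size=(100, 300))   # that subject's voxel responses
    Z = rng.normal(size=(100, 32))    # matching latent targets
    decoders[subject] = fit_decoder(X, Z)

print(sorted(decoders))  # ['sub-01', 'sub-02']
```

In practice this is why each new participant would need many hours of their own scanner time before any decoding could be attempted.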

However, Takagi foresees a future in which the technology could be used for therapeutic, communication or even entertainment purposes.

“It’s hard to predict what a successful clinical application might be at this stage, as it is still very exploratory research,” said Ricardo Silva, a professor of computational neuroscience at University College London and a research fellow at the Alan Turing Institute.

“This may turn out to be one additional way of developing a marker for Alzheimer’s detection and progression evaluation, by examining how one could spot persistent anomalies in images of visual navigation tasks reconstructed from a patient’s brain activity.”

According to some scientists, AI may eventually be used to detect diseases like Alzheimer’s.

Silva is also concerned about the ethical implications of technology that might one day be used to perform real mind reading.

The most urgent concern, according to him, is how much the data collector should be required to fully disclose how the data was used.

“It’s one thing to sign up to capture a moment in time of your younger self for, perhaps, future clinical use… It’s an entirely different thing to have it used in unrelated tasks such as marketing, or worse, used in legal cases against someone’s own interests.”

Takagi and his partner have no intention of slowing down their research, though. Version two of their project is already in the works, and it will focus on improving the technology and applying it to additional modalities.

According to Takagi, the technique for reconstructing images is still being improved, and progress is moving very quickly.
