Tech mogul Elon Musk announced on Twitter/X in the first week of February that the first human implantation of the Neuralink device, called Telepathy, had been achieved, and that the patient's recovery and initial data collection were progressing well. Neuralink is a tech startup whose proprietary chip is a surgically implantable device. It can record a massive amount of data from individual neurons and transmit it to a computer, which in turn can decode the intention contained in that data to execute a particular task.
Neuralink made headlines all over the world a few years ago with a video showing a monkey using the surgically implanted chip to play a video game on a computer. The wow factor kicked in then, as it has now. The first human application is intended to return to quadriplegic individuals the ability to control digital devices through thought and intention. Understandably, this has led to enormous excitement and speculation in the media. This is part of the project's short-term goal, restoring functionality to a clinically and neurologically disabled population, which probably also aided the company's fundraising endeavours. The company's vision statement, however, is more ambitious: to “create a generalised brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.” The second goal, to alter and enhance cognitive abilities in healthy humans, opens up a Pandora's box of ethical and legal concerns.
Even if we stick to the company's current short-term goal of using Telepathy implants to fulfil unmet medical needs, a device that has to be surgically implanted in a human brain demands far more critical scrutiny of its workings. At the risk of sounding like pedantic killjoys, scientists and doctors prefer to err on the side of caution. Professional bodies, associations and regulatory authorities exist to protect the rights of study participants, whose safety and privacy take (as they should) precedence over the efficiency of a new study at all times. The most important prerequisite for ensuring that is the availability and transparency of data. There are also a couple of basic tenets of science and scientific discovery: replicability, and the ability of the raw data to withstand scrutiny from the scientific community. On all of these counts, Musk's company falls short. The development and pre-clinical testing results are shrouded in mystery, barring anecdotal show-and-tell events. There are disturbing reports of monkeys dying after surgeries, and of ethical breaches, that quickly disappeared from public discourse. Competitors have flagged shortcomings in the material used to create the chip, but that, of course, is dismissed as a rival's attempt to undermine a product.
Neuralink has previously refused National Institutes of Health (NIH) funding to keep its patented technology secure, as federally funded research comes with more oversight and mandated requirements for data sharing and transparency. While most academia-sponsored clinical trials conducted through major university hospitals in the US need to be registered (at clinicaltrials.gov), the PRIME study (as christened by the company) is not. The only information available is an online brochure, which is characteristically minimalistic. The device received FDA approval, which ideally should provide ironclad oversight to maximise patient safety and ensure that the company behind a drug or device does its due diligence. However, the FDA has fallen short of that duty many times in the past. Ultimately, the gold standards of replicability and reliability in science tie back to the one thing that has been absent from the beginning: transparent data.
Because the trial is unregistered, it is virtually impossible to find out anything about the conditions under which it will be conducted, commonly referred to as the study protocol. The only clues are the basic details disclosed in the brochure. The inclusion criteria specify adult patients with quadriplegia (paralysis or loss of function in all four limbs) resulting from ALS or spinal cord injury, with little improvement up to a year post-injury, who have a reliable full-time caregiver available. The trial will exclude individuals who have other implants (such as pacemakers or electrodes for deep brain stimulation) as well as people whose conditions require procedures such as regular MRI scans or Transcranial Magnetic Stimulation (TMS). This is likely to ensure that no damage results from the embedded electrodes being subjected to a strong magnetic field.
Also excluded are individuals with a history of seizures. This relates to something that is theoretically plausible to predict: the brain often responds to foreign objects by encapsulating them in a layer of glial cells, and these “glial scars” are notoriously epileptogenic. However minute, the threading of up to 1,024 recording electrodes into the brain will cause some micro-injuries, and Neuralink will need to convince the scientific community that these implants do not increase the likelihood of microbleeds, strokes or any other form of brain injury. Such microscopic injuries are often not picked up by diagnostic modalities, yet they accumulate over time and predispose individuals to serious neurological conditions, such as neurodegeneration, later on. The study also has to prove that the recording capacity of the electrodes is maintained consistently; this is a known problem with existing brain-computer interfaces (BCIs), whose recording efficiency decays over time. The PRIME study calls for 18 months of primary observation followed by periodic follow-ups for up to five years. Hopefully, the study has been designed to collect enough safety data during this period. The lack of registration in a clinical trial repository and the opacity around the study protocol also raise questions about the legitimate publication of the results, since many reputed medical journals now publish only the results of registered trials.
A primary ethical concern is the ownership of the data. There is no clarity on who owns the recorded data, which will be decoded to interpret the “intentions” of a person to control a digital device. The intention to execute a behaviour resides solely within a human being. Embedding proprietary technology that records this data and routes it through a proprietary app raises a crucial question: Will the subjects retain ownership of the recorded data? Will Neuralink own it? If so, what can the company technically do with its ownership of an “intent to carry out an action” on behalf of a third party? In the current situation, it is easy to overlook the ramifications of this issue because the population on which the trial is being conducted is clinically disabled and there is a direct benefit to them. But Neuralink's ambition and vision extend beyond clinical use, to enhancing human cognition and capabilities. Keeping that in mind, the ethics of data ownership is non-trivial and extremely important to settle from the get-go. Any wrong precedent here can endanger human agency and privacy on an unprecedented scale.
In summary, perhaps Neuralink would serve itself better by being more open about its device and the data it generates, rather than letting the world speculate. Secrecy does not instil confidence, and trust is something scientists have learned the hard way not to bestow on corporate entities too generously. Neuralink and Musk still have a long way to go if they are to make the scientific community even cautiously optimistic.
The writer is a professor of psychology at Ashoka University. Views are personal