It’s time for the Australian Law Reform Commission to look at neurotechnology

Allan McCay

At the end of February 2024, the Australian Human Rights Commissioner, Lorraine Finlay, announced that the Commission’s background report on neurotechnology is to be published in March.1 This news came on the back of global media coverage of Elon Musk’s company Neuralink’s first implantation of a brain–computer interface (a form of neurotechnology) into a human being, which has contributed to a nascent awareness of neurotechnology in the public sphere.2

Given some of the coverage of Neuralink’s announcement, one might have thought that this was the first implantation of a brain–computer interface into a human, rather than merely the first by Musk’s company. But such an impression would be incorrect, as it is now over 20 years since the first human brain implant.3 Nonetheless, Musk’s commercial backing of neurotechnology is a sign that the emerging technology is becoming economically significant and poised to become more prevalent in society. This growing commercial significance might explain why the Australian Human Rights Commission (‘AHRC’) is interested.

Thinking more broadly of technology and law, it is notable that since the release of ChatGPT in late 2022 the legal profession has increasingly turned its mind to the implications of technology for law, but thus far, much of the focus has been on how lawyers and their clients might use or misuse forms of generative AI. It has not focused on what might happen if clients or legal professionals were to merge with technologies, including AI.4

While it is undoubtedly a very good thing that, in respect of governmental thinking, the AHRC is leading the way in Australia in considering neurotechnology, the inquiry into merging with technology now needs to expand considerably and other bodies need to consider how to address the issues.

But what is neurotechnology? How does it represent a merging with technology, and why is the AHRC right to be interested? In addressing the question of what the technology is, I will begin with therapeutic neurotechnology.

Therapeutic neurotechnology

It is important to begin with therapeutic neurotechnology because not only does it help explain what the technology is and how and why we might merge with it, but it also sheds light on why the regulatory approach needs to be cautious. It underscores the fact that there is something very worthwhile that might be lost by heavy-handed regulation.

Examples of neurotechnologies can be found in various therapeutic applications and demonstrate their enormous upside. Perhaps the most striking application of the developing scientific and engineering capacity to connect the human nervous system and brain to technology is brain–computer interfaces for people with locked-in syndrome. A small number of people in clinical trials around the world who lack the capacity to effectively control their system of musculature are now able to control external devices by thinking (and Neuralink’s first implanted patient now joins that group of pioneers). An implanted neural device decodes the neural activity, turning it into a command for a cursor or robotic arm.5 The implantation of the device and its connection to the brain might be thought of as a form of merging with technology that is more sophisticated than a filling in a tooth or an inert prosthetic limb.

This kind of brain implant can substantially improve the lives of people with neurological conditions. Whether their impairment prevents them from using their limbs or, in extreme cases, creates an inability to speak, the technology can restore autonomy and the capacity to engage with the world.

Brain-drone murder and the elements of crime

However, moving to the possible downsides of neurotechnology, it is worth noting that this autonomy brings with it the capacity to commit crimes. Most of the population can commit crimes but choose not to, at least in respect of very serious offences. Yet a proportion do, and we should assume that some people will use brain–computer interfaces to offend. Of course, this is certainly not a reason to oppose the technology: the therapeutic upsides massively outweigh the downsides. However, if we do imagine a person using a brain–computer interface to commit an offence, some odd issues for the law emerge. For example, the actus reus component of offending becomes a bit less straightforward.

The criminal law is replete with instances of people using their system of musculature to commit offences, whether by throwing punches, using their mouths and vocal cords to utter untruths, or using their hands to interact with a keyboard and hack into remote computer servers. As well as some omissions, all manner of actions have been criminalised. However, as indicated earlier, the bodily action paradigm now has a rival.

In this connection, we might note that in the USA, some universities now engage in brain–drone racing.6 At these events, participants don external brain-reading headsets and control the flight of a drone by thinking rather than by using the muscular system to move their hands to control a joystick. Let us now rather pessimistically assume that someone intentionally flies a drone into another person and injures or even kills them. What bit of conduct might be constitutive of the actus reus of any offence they might be charged with? They might have intended and achieved certain consequences, but what was the thing they did to bring those consequences about? Could the law say they engaged in a mental act and that mental act was the thing they did that was the conduct constituting the actus reus? Or that they jiggled their motor cortex (let’s say, by imagining waving their hand) to get the drone to dive into the person, and the bit of brain-jiggling was still bodily (the brain being part of the body), and further, that the brain-jiggling was the bit of conduct that was constitutive of the actus reus?

Things seem to be all in the brain/mind, and perhaps the distinction between the mens rea and the conduct constituting the actus reus seems a little less clear than in the more standard assault or murder case where the system of musculature is involved. It doesn’t seem to make any moral difference that they acted in a way that was at least arguably disembodied, but it does seem a bit nonstandard when examined from the perspective of the elements of crime.7 Of course, things start to become even more complicated where the device malfunctions in some way or is hacked, particularly where it is implanted and arguably part of the defendant rather than a tool they are using. Where do defendants end and the devices they use begin?

Prisons of the mind

I will move now to another neurotechnological device – this time one currently used in the treatment of epilepsy – before returning to criminal justice. The NeuroPace8 brain implant is approved for clinical use in the USA and can monitor the brain of a person whose epilepsy has not responded adequately to medication. Using a machine learning approach (a form of AI), it acts to avert an impending epileptic fit by stimulating the brain when it detects the neural precursors to the fit.

Such technology has the capacity to alleviate some of the suffering from epilepsy in situations where other medication attempts have failed. However, Gilbert and Dodds have noted that a similar approach seems in principle possible in respect of identifying the neural precursors to an angry outburst, and I have considered the possibility that an offender who is being sentenced for an anger-fuelled assault and is at the cusp of going to jail might argue for a neurotechnological response rather than a traditional carceral solution.9

Let’s assume the offender has a mental condition that played a role in their offending and that, in between the offending and the sentencing hearing, and following the advice of their forensic psychiatrist, they have a device implanted in their brain that will address the mental condition and help significantly with their anger-management issues. The defence argues that instead of sending the offender to jail, an Intensive Corrections Order (‘ICO’)10 is appropriate, with a condition that the offender keep the device active for the duration of the sentence while on the bond. Electronic monitoring of offenders is already part of the criminal justice response to crime (and permissible under an ICO). Surely what is proposed here is just another form of electronic monitoring, isn’t it?

It is not obvious that there is anything to prevent such an order being imposed, but it’s worth bearing in mind what the judge would be ordering. They would be making an order that said that a person with a mental condition was to have their brain monitored 24/7 and that an automated decision-making process would ‘decide’ when treatment in the form of direct brain stimulation was needed. Neural devices might be hacked or fail in other ways, and this risk would be entailed by the order. It might also be worth considering whether the company that provides the brain surveillance and intervention technology may put the data gleaned to some other use in the future.

It is not hard to appreciate why some human rights thinking is needed when one appreciates that criminal justice authorities – and perhaps security services around the world – may, in time, have monitoring and intervention capacities like these at their disposal.

One might also ask, what if the device fails or is hacked while the offender is on the bond, perhaps rendering it ineffective or even dialling up the offender’s impulsivity rather than dialling it down? Would a failure to comply with a bond condition under those circumstances be the offender’s fault? Perhaps we need new hacking offences in response to the possibility of brain-hacking, as some have suggested.11 These kinds of questions need to be considered.

But are neurotechnologies really going to be used in criminal justice? It seems that the answer is ‘yes’. The US company Brainwave Science already markets its brain-monitoring headset to criminal justice and security agencies that want to enhance their interrogation capacities by monitoring brain waves.12 It is hard to imagine that this is as far as things will go with neurotechnology in criminal justice and security.

Human rights, the broader implications for law, and next steps

It is worth noting that although some of the issues mentioned here relate to human rights, and the AHRC is now engaged in considering the implications of neurotechnology, others – like the question relating to the neurobionic actus reus or the criminalisation of brain-hacking – go some way beyond human rights. And this is just from one area of law (criminal law). What about monitoring employees’ brains as they work, consumer protection issues relating to consumer neural devices, and contracts that result from neural device failure-related impulsivity? What are the implications of neurotechnology for the tort of negligence?

Other parts of government now need to look at neurotechnology, and it seems that, because of the possible wide-ranging implications for the law, engagement by the Australian Law Reform Commission (‘ALRC’) would be a good thing. It’s now time to think more deeply about merging with technology and for Australia to expand its investigation beyond the AHRC. This would require a form of political engagement, as the Attorney-General would need to instruct the ALRC to produce a report.

It is interesting to note that at the end of 2021, Chile changed its constitution, in part to address issues relating to neurotechnology. The country recently had its first litigation based on the constitutional change, which involved a neurotechnology company that began its days in Australia. Chile’s constitutional change and the subsequent litigation were championed by Guido Girardi, who was at that time a senator in their national legislature.13 It seems that Australia may now need a similarly engaged politician or group of politicians as well as further engagement from academia and various government departments if it is to properly consider neurotechnology’s implications for law. BN


1 Lorraine Finlay, ‘Let’s not elevate brain tech over our humanity’, The Australian (online, 23 February 2024)<>.

2 Kelsey Ables, ‘Musk’s Neuralink implants brain chip in its first human subject’, The Washington Post (online, 30 January 2024) <>.

3 Liam Drew, ‘Neuralink brain chip: advance sparks safety and secrecy concerns’, Nature (online, 23 February 2024) <>.

4 For a discussion of legal professionals merging, see Allan McCay, Neurotechnology, law and the legal profession (Report, August 2022) 25–26.

5 For an outline of the operation of the device together with a consideration of some legal implications of it, see Allan McCay, ‘Neurobionic Revenge Porn and the Criminal Law: Brain–Computer Interfaces and Intimate Image Abuse’ in Nicole A Vincent, Thomas Nadelhoffer, and Allan McCay (eds), Neurointerventions and the Law: Regulating Human Mental Capacity (Oxford University Press, 2020) 168, and for a version that does not require subscription, see <>.

6 David Axe, ‘You Can Control These Flying Racing Drones With Your Brain’, The Daily Beast (online, 11 April 2023) <>.

7 For further analysis, see n 5.

8 ‘Main page: There’s a smarter way to treat epilepsy’, NeuroPace (webpage) <>.

9 For a discussion of the human rights implications of this approach to sentencing, see Allan McCay, ‘Neurotechnology and human rights: developments overseas and the challenge for Australia’ (2023) 29(1) Australian Journal of Human Rights 160.

10 Crimes (Sentencing Procedure) Act 1999 (NSW) s 73A.

11 Jan Christoph Bublitz and Reinhard Merkel, ‘Crimes Against Minds: On Mental Manipulations, Harms and a Human Right to Mental Self-Determination’ (2014) 8 Criminal Law and Philosophy 51, 73–75.

12 ‘Main Page: Safety Reimagined by iCognative’, Brainwave Science <>.

13 For an overview of the Chilean events, see Allan McCay, ‘Neurotechnology and human rights in Chile: the Australian implications’, LSJ Online (online, 20 September 2023) <>.

* Acknowledgement: I am very grateful to Kevin Zou for research support.

Allan McCay

Academic Fellow, Sydney Law School and Deputy Director, Sydney Institute of Criminology