superrific
Master of the ZZLverse
Let's do some hypotheticals here.
1. Suppose AI advances to the point where it can run a brain scan on you and reproduce your neural circuits perfectly in a neural network. Talking to the neural network is exactly like talking to you. Now someone pulls out a gun to shoot you. His defense is that he isn't killing you, because you're still alive in the neural network. Should that be a valid legal argument, or should he be charged with murder? Note that you willingly participated in that digital self-upload. Should you fear death?
2. Suppose now that AI advances even further, to where it can start embedding that network into your brain. Two sub-hypotheticals here. First, suppose it would require many chips in parallel. They are inserted one by one, and each time some of your natural brain circuitry is removed and the severed neuronal connections are wired to the chip. After a year, your brain has been fully replaced. Are you still you? If not, when did you stop being you? And should you fear being shot, assuming that the neural network can be re-implanted in another body (human or otherwise)?
Does it change if the replacement is done all at once -- you go in for surgery with a brain, and you come back with a chipset that is identical in function?
3. Suppose, once that happens, you can also set up digital interfaces with other people. In particular, you can link your brain to that of your spouse, allowing each of you to share thoughts and experiences as if the other had lived them. Let's assume for now that you would only share the memories you want to share, but present sense impressions can be accessed at will. The downside is that the link is irreversible: once you're linked to your spouse, it can never be removed. Would you do it?
Questions 1 and 2 will be familiar to anyone who has studied the history of philosophy (or, depending on the syllabus, taken any philosophy course), but question 3 is a new twist.