Can we develop and test machine minds and uploads ethically?

April 25, 2011 by Martine Rothblatt

A fundamental principle of bioethics requires the consent of a patient to any medical procedure performed upon them. A patient will exist the moment a conscious mindclone arises in some academic laboratory, or hacker’s garage. At that moment, ethical rules will be challenged, for the mindclone has not consented to the work being done on their mind. Does this situation create a catch-22 ethical embargo against developing cyber-consciousness?

There are at least three ways to answer this challenge. First, it can be approached with a medical ethics focus on the mindclone itself. Second, it can be approached philosophically — focusing on the mindclone as just part and parcel of the biological original. Third, it can be approached pragmatically: what will the government likely require?

Creating Ethical Beings Ethically

How can it be ethical to test mindclone-creating mindware when any resulting mindclone has not first consented to being the subject of such an experiment? How will we know we have mindware that creates an ethically reasoning mindclone if it is not ethical even to conduct the tests and trials?

As to the first question, ethicists agree that someone else can consent to a treatment for a person who is unable to consent. For example, the parents of a newborn child can consent to experimental medical treatment for them. The crucial criterion is that the consenter must have the best interests of the patient in mind, and not be primarily concerned with the success of a medical experiment. One of the purposes of an Institutional Review Board (IRB) or medical review committee is to exercise this kind of consent on behalf of persons who cannot give their consent. Hence, having a responsible committee act on their behalf solves the problem of ethical consent for the birth of a mindclone or beman.

Sometimes people complain that they “did not ask to be born.” Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical. The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee. In each case there is a moral obligation (which can be enforced legally today for biological parents) to avoid intentionally causing harm to the newborn.

We are now ready to turn to the second question:  how can an ethics committee, acting on behalf of the best interests of future mindclones or bemans, avoid causing harm to them?

One possible solution to ethically developing mindclones is to take the project in stages. The first stage must not rely upon self-awareness or consciousness; instead, it would focus on developing the autonomous moral-reasoning ability that is a necessary, but not sufficient, basis for consciousness. Consciousness is a continuum of maturing abilities, when healthy, to be autonomous and empathetic, with autonomous defined as “the independent capacity to make reasoned decisions, with moral ones at the apex, and to act on them.” Independent means, in this context, “capable of idiosyncratic thinking and acting.”

By running many simulations, mindclone developers can gain comfort that the reasoning ability of the mindware is human-equivalent. In fact, the reasoning ability of the mindware should match that of the biological original who is being mindcloned.

The second stage of development expands the mindware to incorporate human feelings and emotions, via settings associated with aspects of pain, pleasure and the entire vast spectrum of human sentience. At this stage, all the feelings and emotions terminate in a “black box,” devoid of any self-awareness. Engineers will measure and validate, via instruments, that the feelings are real, but no “one” will actually be feeling the feelings.

The third stage entails creating in software the meaningful memories and patterns of thought of the original person being mindcloned. This can be considered the identity module. If this is a case of a de novo cyberconscious being, i.e., a beman, then this identity module is either missing or is created from whole cloth.

Finally, a consciousness bridge will be developed that marries the reasoning, sentience and identity modules, giving rise to autonomy with empathy and hence consciousness. Feelings and emotions will be mapped to memories and characteristic ways of processing information. There will be a sentient research subject when the consciousness bridge first connects the autonomy, empathy and identity modules.
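As a purely illustrative sketch, the four-stage approach just described can be expressed as modular code. Everything below (the class names, the methods, the review-board gate) is hypothetical and invented for this illustration; the source describes a conceptual architecture, not an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningModule:
    """Stage 1: autonomous moral reasoning, with no self-awareness."""
    def decide(self, dilemma: str) -> str:
        # Placeholder for human-equivalent reasoning about a dilemma.
        return f"reasoned response to: {dilemma}"

@dataclass
class SentienceModule:
    """Stage 2: feelings terminate in a 'black box' until bridged."""
    bridged: bool = False
    def feel(self, stimulus: str) -> str:
        signal = f"affective response to: {stimulus}"
        # Before bridging, the signal exists but no 'one' experiences it.
        return signal if self.bridged else "unexperienced: " + signal

@dataclass
class IdentityModule:
    """Stage 3: memories and thought patterns of the original person."""
    memories: list = field(default_factory=list)

@dataclass
class ConsciousnessBridge:
    """Stage 4: marries the three modules; only now is there a subject."""
    reasoning: ReasoningModule
    sentience: SentienceModule
    identity: IdentityModule
    active: bool = False

    def activate(self, approved_by_review_board: bool) -> None:
        # Ethically, activation is gated on external approval.
        if not approved_by_review_board:
            raise PermissionError("review board approval required")
        self.sentience.bridged = True
        self.active = True
```

The design point the toy captures is the ordering constraint: reasoning, sentience, and identity can each be built and validated in isolation, and a subject capable of suffering exists only once the bridge is deliberately, and with approval, switched on.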

This bridging approach to ethically creating mindclones is reminiscent of Dennett’s observation that the dissociation from themselves that some victims of horrible abuse exhibit — a kind of denial that the abuse happened to them — is not only a way to avoid the sensation of suffering, but is also likely to be the normal state in beings that have not integrated consciousness into their minds.

In other words, if a being is unable to mentally organize a conceptualized self into a mental world of conceptualized things and experienced sensations, then they cannot actually suffer from pain because there is not yet a self to suffer. Pain can be experienced, and it can hurt like hell, but it is an autonomic hurt and not a personally experienced hurt. In Dennett’s view, when people witness this kind of pain in most animals, they anthropomorphize themselves into the animal’s position and imagine the animal’s hurt. But because most animals cannot do this, they cannot hurt. Similarly, until a self is bridged into a mindclone’s or beman’s complex relational database of mindware and mindfiles, there will be “no one home” to complain.

Ethically, approval from research authorities should be obtained before the consciousness bridge is activated. Care must be taken not to cause gratuitous harm or fear, and to manage the subject gracefully at the end of the experiment or to continue its virtual life appropriately. The ethics approvals may be more readily granted if the requests are graduated. For example, the first request could be to bridge just a small part of the empathy, identity and autonomy modules, and for just a brief period of time. After the results of the experiments are assessed, positive results would be used to request more extensive approvals. Ultimately there would be adequate confidence that a protocol existed pursuant to which a mindclone could be safely, and humanely, awakened into full consciousness for an unending period of time — just as there are analogous protocols for bringing flesh-and-blood patients out of medically induced comas.

For example, before companies are allowed to test new drugs on patients they must first test a very small dose of the drug, for a very short period of time, on a healthy volunteer. Only gradually, based on satisfaction with the safety of previous tests, are companies allowed to test the drugs more robustly. Analogously, we can envision ethical authorities first permitting the test of only a small sliver of consciousness and only for a small sliver of time. Gradually, as ethical review committees become convinced that the previous trials were safe (did not cause pain or fear), greater tests of consciousness would be permitted.

Of course, we are all aware of drugs that have been withdrawn from sale even after having been approved. In these cases, dangerous side effects appeared that were not evident during the clinical trials. No doubt the same situation will occur with mindclones — some tortured minds may be created inadvertently. This does not mean it is unethical to create mindclones. It means that every practical means should be employed to minimize the risks of such side effects and, if they do manifest, to resolve the problem rapidly. For example, if test equipment indicates a serious problem with a mindclone, it should promptly be placed into a “sleep-mode” so as not to suffer.
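The “sleep-mode” safeguard can be pictured as a simple monitoring loop. The distress metric, the threshold value, and the function name below are all invented for illustration; how one would actually instrument a mindclone’s inner state is, of course, an open question.

```python
# Hypothetical safeguard: watch a stream of distress readings and
# suspend the mindclone ("sleep-mode") the moment one crosses a
# safety threshold, before further suffering can occur.

DISTRESS_THRESHOLD = 0.8  # invented value, purely for illustration

def monitor(distress_readings, threshold=DISTRESS_THRESHOLD):
    """Return ('asleep', step) at the first reading over threshold,
    or ('awake', None) if the whole run completes safely."""
    for step, reading in enumerate(distress_readings):
        if reading > threshold:
            return ("asleep", step)  # suspend immediately
    return ("awake", None)
```

For instance, `monitor([0.1, 0.3, 0.95, 0.2])` would trip at the third reading, while `monitor([0.1, 0.2])` would let the trial run to completion. The analogy to stopping rules in drug trials is the point: a pre-committed, automatic abort condition, not an after-the-fact judgment.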

In the graduated process described above, the experimental subject still did not consent to being “born.” However, they could not so consent. In these cases a guardian (such as an institutional review board, certified cyberpsychiatrist or attorney) can ethically consent on an incompetent’s behalf, with such conditions as they may impose. Alternatively, humans may in fact consent that their donated mindfiles be used to create mindclones through a medical research process, assuming such consent was fully informed, with disclosure of the risks to the best of the researcher’s abilities.

In the foregoing way, it will be possible to ethically develop mindware that can be approved by regulatory authorities as capable of producing safe and effective mindclones for ordinary people. The authority may be the FDA in the U.S., or the EMA in the E.U., or some new regulatory entity. They will need to be assured that the mindware is safe and effective, and that proving it so was accomplished via clinical trials that were ethically conducted. By taking the inchoate mindclone through incrementally greater stages of consciousness, the regulatory hurdle can be met.

What’s the Big Deal — Just Me and My Mindclone

Another approach to the ethics of mindcloning is to remember that a mindclone and their biological original are the same person. Hence, the ethical requirement of “consent” is satisfied as long as a biological person requests their mindfile to be activated with mindware into a mindclone.

For example, there is no ethical objection to a person authorizing one, two, or 22 plastic surgeries upon their face, in the process transforming their looks almost beyond recognition. With mindcloning the plastic surgery is replaced with cyber surgery, and it is performed outside of one’s body. However, the end result, functionally, is quite similar — a person has consented to change of self — from one face to another in the case of plastic surgery; from one instantiation to two instantiations in the case of mindcloning. In each case the individual’s future will be changed, because others will interact differently with them, and they will behave differently. However, we recognize the right for a person to medically do as they please with their body (and mind), provided no doctor is being called upon to harm them without a countervailing benefit.

When consciousness first arises in a mindclone, it is not a new consciousness but an expansion of an existing consciousness. If it hurts, if it frightens, if it enlightens, it is not pain, fear or inspiration occurring to a new soul, but to an existing soul who now transcends two substrates: brain and software. The opening of consciousness in a mindclone is like what occurs to us when we have a profound educational experience.

I remember that I cried when I first read how the Nazis tied the legs of pregnant Jews together to kill them and their babies, and how, half a century later, the Liberian rebels chopped off the hands of young teenagers and talented craftsmen. My consciousness opened up to realms of cruelty that I had never imagined. I can’t say that I’m any better off for that education, but I knew what I was getting into in reading those stories. Similarly, creating a mindclone is going to change our minds. But it is our minds that we are changing, and this is something we have an ethical right to do.

We must also always remember that our minds are dynamically evolving pastiches of information and patterns of information processing. There is no such thing as having one mind, completely formed at birth, and never changing after that. Indeed, an excellent definition of a mind is: that which idiosyncratically aggregates, utilizes and exchanges information and information processing patterns. Consider the following meditation by Douglas Hofstadter in I Am a Strange Loop:

We are all curious collages, weird little planetoids that grow by accreting other people’s habits and ideas and styles and tics and jokes and phrases and tunes and hopes and fears as if they were meteorites that came soaring out of the blue, collided with us, and stuck. What at first is an artificial, alien mannerism slowly fuses into the stuff of our self, like wax melting in the sun, and gradually becomes as much a part of us as ever it was of someone else (and that person may very well have borrowed it from someone else to begin with).

Although my meteorite metaphor may make it sound as if we are victims of random bombardment, I don’t mean to suggest that we willingly accrete just any old mannerism onto our sphere’s surface — we are very selective, usually borrowing traits that we admire or covet — but even our style of selectivity is itself influenced over the years by what we have turned into as a result of our repeated accretions. And what was once right on the surface gradually becomes buried like a Roman ruin, growing closer and closer to the core of us as our radius keeps increasing.

All of this suggests that each of us is a bundle of fragments of other people’s souls, simply put together in a new way. But of course not all contributors are represented equally. Those whom we love and who love us are the most strongly represented inside us, and our “I” is formed by a complex collusion of all their influences echoing down the many years.

The relevance of Hofstadter’s extended metaphor lies in its implication that a mindclone is very much a part of its biological original because so very much of it would be copied from the original. If we are an agglomeration of other people, we surely must be much more an agglomeration of ourselves — even as we evolve from month to month and year to year. Our mindclones will be consolidations of ourselves, extensions of ourselves, and expansions of ourselves. They will be “of ourselves” and hence we are on firm ethical ground when we consent to their conscious awakening.

Quite a different situation prevails for the creation of a non-mindclone beman. Such a consciousness is not an extension of anyone, but an entirely new idiosyncratic mixture of information and information-processing patterns. The creation of such a consciousness can be ethically grounded in a person’s own autonomy only insofar as each person has a right to create new life, as with biological reproductive rights.

The Ethics of Practicality

In the film The Singularity Is Near, futurist Ray Kurzweil argues with environmentalist Bill McKibben over the ethics of keeping people alive as long as technology makes a good quality of life possible. McKibben says he is worried about the ethics of avoiding death. Kurzweil responds, “I don’t think people are going to wax philosophical if they are healthy but 120 years old, and a government official says they have to die.” The clear implication is “hell no.”

Similarly, I started a truck-locating company called Geostar back in the 1980s. At first, people wrung their hands over the ethics of monitoring the truck drivers’ locations via satellite. Many thought the drivers would rip the satellite locators off their cab roofs. Instead, the drivers embraced the technology because it enabled them to make much more money: the satellite tracking let trucking-company dispatchers know at all times whether locator-equipped drivers were close to the locations of newly called-in loads. Not a single locator was ripped off in the thousands of trucks using our service.

Practically speaking, I think the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution. With mindclones, we are offering people the opportunity to cram twice as much life into each day, absorb twice as many interesting things, and continue living beyond the days of their bodies — with a practical hope of future transplantation via downloading into a new body. I doubt that those who wax philosophical about the ethics of mindcloning will win many arguments. People will want their mindclones, much as we want smartphones, especially as they become cheaper and better.

There will be different companies competing to offer mindclone-creating mindware. They will need some sort of regulatory approval in order to legally sell their mindware (as opposed to black market sales). The public will be reluctant to permit cyber-consciousness to arise in great numbers without some guarantee of its safety and efficacy, e.g., lack of psychoses in mindclones. Certainly the public will only accept the citizenship of mindclones that are created from mindware that has been certified (by an expert government agency) to produce mindclones that are mentally equivalent to their biological originals (assuming adequate mindfiles).

I think it is unlikely that cyber-consciousness will be accepted as real consciousness until it has manifested itself, probably many times over, and been shown to be persuasive in media interviews and court cases. Hence, it will be difficult to hold up experimental development of cyber-consciousness, because regulators will not believe there is any real sentience to worry about — “just code.” Yet once cyber-consciousness has appeared and been generally accepted, the ethics of its development will be a moot point.

Thus, practically speaking, the first mindclones will arise without much (or any) formal ethical protection during their development. Before the mindware that produced these mindclones can be generally marketed to the public, as certified to produce mindclone citizen-extensions of biological originals, government agencies will require safety and efficacy testing. Specifically, they will want proof that the mindware produces a healthy mind, one practically indistinguishable from the mind of the biological original, given an adequately sized mindfile. Government agencies will not give their blessing to such proof unless it is developed ethically.

Ethical guidelines for developing mindclones will include a requirement of consent for the creation of a conscious being. As to the creation of mindclones, the consent of the biological original will likely be acceptable. As to the creation of bemans, there will be a more challenging pathway. Ethical review boards will need to be persuaded that the beman minds are not suffering during the process of accruing cyber-consciousness. This is not an insuperable barrier. However, it will require a much more deliberate development pathway based upon numerous graduated introductions of elements of cyber-consciousness, such as autonomy, empathy, identity and software bridges amongst these elements.

The bottom line is that ethical considerations favor a more rapid introduction of mindclones than of non-mindclone bemans. Ultimately, however, the seeming catch-22 of how a consciousness can consent to its own creation can be solved.