Code of Ethics on Human Augmentation: the three ‘Laws’

Why? To protect us, future consumers and adopters, and society from machines of malice, whether eventually by AI superintelligence or right now by corrupt human intelligence. (Steve Mann)
July 5, 2016

By Steve Mann, Brett Leonard, David Brin, Ana Serrano, Robin Ingle, Ken Nickerson, Caitlin Fisher, Samantha Mathews, Ryan Janzen, Mir Adnan Ali, Ken Yang, Pete Scourboutakos, Dan Braverman, Sarang Nerkar, Keram Malicki-Sanchez, Zack P. Harris, Zach A. Harris, Jesse Damiani, Edward Button

The Human Augmentation Code was presented at the VRTO Virtual & Augmented Reality World Conference + Expo, June 25–27, 2016 by Steve Mann, PhD, Chief Scientist of Metavision, Chief Scientist at the Rotman School of Management at the University of Toronto, Inventor of HDR (High Dynamic Range) imaging, Founder of the MIT Media Lab Wearable Computing project, and widely regarded as “the father of wearable computing.”

“Creators of technology tend to focus mainly on their customers without due regard for how their technology will affect non-adopters,” Mann said in a keynote. “For more than 40 years, I’ve lived everyday life in a tetherless free-roaming virtual reality universe of my own making where I could see sound waves, radio waves, and more profoundly, see sight itself. And my most profound discovery was not what was inside that universe, but what was at its boundaries, especially its societal boundaries.

“In the 1970s I could choose to live in a world where the speed of light was exactly equal to zero, so I could see, touch, and hold radio waves as virtual objects sitting perfectly still. In the 1990s this invention formed the basis of my portfolio for successful admission to MIT Media Lab where I founded the MIT wearable computing project as its first member and brought this concept to a global audience [Negroponte 1997].

Seeing radio waves (credit: Steve Mann)

“The long-term risks of artificially intelligent machines are well known and much talked about (‘The Singularity is Near!’). Less understood, but more immediately pressing, are the risks that humanistically intelligent entities pose right now, whether facilitated by “smart buildings”, “smart cities” (a camera in every streetlight), or “cyborgs” with wearable or implantable intelligence. This sensory intelligence augmentation technology is already developed enough to be dangerous in the wrong hands, e.g., as a way for a corrupt government or corporation to further augment its power and use it unjustly.

“The keynote at the World Transhumanist Association’s annual conference in 2004 connected this question with the ethics communities, leading to what Minsky, Kurzweil, and I call the ‘Sensularity’ (sensory singularity) [Minsky, Kurzweil, Mann 2013]. Augmented reality is not just about eyewear; it already affects all of us every hour of every day. Cities, buildings, cars, and now people have augmented sensory intelligence that affects us both virtually and in reality.”

For more info and the current version of this paper, see


The possibility that artificially intelligent machines may some day pose a risk is well known [1].

Less understood, but more immediately pressing, are the risks that humanistically intelligent [5, 7] people or organizations pose, whether facilitated by “smart buildings,” “smart cities” (a camera in every streetlight), or “cyborgs” with wearable or implantable intelligence. As we augment our bodies and our societies with ever more pervasive and possibly invasive sensing, computation, and communication, there comes a point when we ourselves become these technologies (what Minsky, Kurzweil, and Mann refer to as the “Sensory Singularity” [10]).

This sensory intelligence augmentation technology is already developed enough to be dangerous in the wrong hands, e.g., as a way for a corrupt government or corporation to further augment its power and use it unjustly.

Accordingly, we have spent a number of years developing a Code of Ethics on Human Augmentation [9], further developed at IEEE ISTAS 2013 and IEEE GEM 2015 (the “Toronto Code”), resulting in three fundamental “laws.”

Human Augmentation Code: the three “Laws”

These three “Laws” represent a philosophical ideal (like the laws of physics, or like Asimov’s Laws of Robotics [2]), not an enforcement (legal) paradigm:

  • 1. (Metaveillance/Sensory-Auditability) Humans have a basic right to know when and how they’re being surveilled, monitored, or sensed, whether in the real or virtual world.
  • 2. (Equality/Fairness/Justice) Humans must (a) not be forbidden or discouraged from monitoring or sensing people, systems, or entities that are monitoring or sensing them, and (b) have the power to create their own “digital identities” and express themselves (e.g., to document their own lives, or to defend against false accusations), using data about them, whether in the real or virtual world. Humans have a right to defend themselves using information they have collected, and a responsibility not to falsify that information.
  • 3a. (Aletheia/Unconcealedness/Technological-Auditability) With few exceptions, humans have an affirmative right to trace, verify, examine, and understand any information that has been recorded about them, and such information shall be provided immediately: Feedback delayed is feedback denied. In order to carry out the justice requirement of the Second Law, humans must have a right to access and use of information collected about them. Accordingly, we hold that Subjectrights [6] prevail over Copyright, e.g., the subject of a photograph or video recording enjoys some reasonable access to, and use of, it. Similarly, machines that augment the human intellect must be held to the same ethical standard. We accept that old-fashioned, hierarchical institutions (e.g., law enforcement) still have need for occasional asymmetries of veillance*, in order to apply accountability to harmful or dangerous forces, on our behalf. However, such institutions must bear an ongoing and perpetual burden of proof that their functions and services justify secrecy of anything more than minimal duration or scope. Application of accountability upon such elites, even through renewably trusted surrogates, must be paramount, and a trend toward ever-increasing openness not thwarted.
  • 3b. Humans must not design machines of malice. Moreover, all human augmentation technologies shall be developed and used in a spirit of truth, openness, and unconcealedness, providing comprehensibility through immediate feedback. (Again, feedback delayed is feedback denied.) Unconcealedness must also apply to a system’s internal state, i.e., system designers shall design for immediate feedback, minimal latency, and take reasonable precautions to protect users from the negative effects (e.g., nausea and neural pathway overshoot formation) of delayed feedback.
  • 3c. Systems of artificial intelligence and of human augmentation shall be produced as openly as possible and with diversity of implementation, so that mistakes and/or unsavory effects can be caught, not only by other humans but also by diversely competitive and reciprocally critical AI (artificial intelligence) and HI (humanistic intelligence).

A metalaw states that the Code itself will be created in an open and transparent manner, i.e., with instant feedback and not written in secret. In this meta-ethics (ethics of ethics) spirit, continual rough drafts were posted (e.g., on social media such as Twitter #HACode), and members of the community were invited to give their input and even become co-authors.

The First Law

The First Law is well documented in existing literature on metasensing, metaveillance [8], and veillametrics [4]. Interestingly, the City of Hamilton, Ontario, Canada, has passed the following bylaw, relevant to the First Law of Human Augmentation:

“No person shall: Apply, use, cause, permit or maintain … the use of visual surveillance equipment where the exterior lenses are obstructed from view or which are employed so as to prevent observation of the direction in which they are aimed.” [3].

A drawing by Mann’s six-year-old daughter, illustrating surveillance versus sousveillance (credit: Stephanie Mann)

The Second Law

The Second Law asserts that systems that watch us, while forbidding us from watching them, are unfair and often unjust.

2.1 The Veillance* Divide is Justice Denied

In the new, “transhumanistic era,” some machines will acquire human qualities such as AI (artificial intelligence), and some humans will acquire machine-like qualities such as near-perfect sensory and memory capabilities. Irrefutable recorded memories — suitable as evidence, not mere testimony — will challenge many of our old ways, calling for updated ethics that serve the interests of all parties, not just those with power or authority.

Surveillance vs. sousveillance (left-to-right): ceiling dome, Mann 1998, Microsoft 2004, Memoto 2013 (credit: Glogger CC)

Our greatest danger may be a “(sur)Veillance Divide” where things (Internet of Things) and elites may record with perfect memory, while ordinary people are forbidden from seeing or remembering. Therefore, we propose the following pledge, to clarify the need for fairness, equality, and two-way transparency:

  • 2a(i). I pledge to not surveill or record any individual or group while simultaneously forbidding that individual or group from recording or sousveilling me.
  • 2a(ii). I pledge to respect the needs of others for the sanctity of their personal space. I will negotiate any disagreements reasonably and with good will.
  • 2a(iii). If I witness a crime against fellow humans, whether perpetrated by low-level criminals or by elites or by authorities, I will aim to record the event, overtly or covertly (whichever is appropriate). I will aim to make such recordings available to injured parties.
  • 2a(iv). I will maintain that, with few exceptions, being surveilled while simultaneously being forbidden from sousveilling, is itself an injury. Therefore, if I witness any party being recorded, while that party is simultaneously prevented from recording, I will aim to record the incident, and to make the recording available to the injured party.
  • 2a(v). I will make a best effort to be informed of escrow storage (e.g., “videscrow”), so that when recording others, there can be “temporary exclusions” on retroactive recording until disagreements may be adjudicated. Here the burden-of-proof is on the party prohibiting unescrowed recording.
  • 2a(vi). I will try not to be provocative or confrontational, or to assume the worst about others. But the light that I shine and the recordings I take may thwart injustice. It is possible to apologize and make amends for too much light. Too little can be lethal.


We take here an important first step toward the Human Augmentation Code 1.0. This is a “living document” and we are open to contributions from all, as it evolves.

KurzweilAI readers can contribute by:

  1. Signing up (become a signatory) at
  2. Sharing to all their social channels (tweet #HACode, etc.)
  3. If you want to be more active and to help draft the next version of the code, which will be submitted to IEEE in 17 days (on July 22, 2016), email proposed edits or changes to
  4. If you want to dig deeper into human augmentation technologies, go to the following Instructable, click “I made it,” and upload your results.

* Definition of “veillance”:

(credit: Steve Mann)

When humans interact with machines, computers, software, or the like, the machines sense us while giving us information (feedback), as shown in the Humanistic Intelligence (HI) diagram above.

When machines sense us, we call that “surveillance.”  This sensing is not a bad thing, as long as the machines reveal their state back to us (“sousveillance”).  But when machines sense us without immediately revealing themselves, we have a one-sided veillance (sensing) that is out-of-balance and destructive.

Too often, modern machines know everything about us, yet reveal nothing about themselves.  Sluggish, delayed feedback disrupts HI, resulting in frustrating user interfaces that cause people to repeat the same actions over and over angrily.  When people click on something and nothing happens, they instinctively double-click, then triple-, quadruple-, and quintuple-click, repeating the same thing over and over expecting a different (i.e., successful) result.

The definition of insanity is “doing the same thing over and over again and expecting a different result” (Narcotics Anonymous, 1981; often misattributed to Albert Einstein or Benjamin Franklin). We live in a world that’s increasingly difficult to understand, due to the increasingly erratic and unresponsive (delayed-feedback) nature of computing, including things like VR sickness and brain damage from delayed feedback. This is creating a world that causes (or requires) insanity.

Optimum Insanity: There is a certain optimum amount of insanity required to use software, etc., and this might be quantified as the optimum number of times we should retry the same thing (input).
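The paper does not specify how this optimum would be computed; as one illustrative sketch (our assumption, not the authors'), suppose each identical attempt independently registers with probability p, and a user should stop retrying once the chance that no attempt has succeeded falls below some tolerance:

```python
# Hypothetical sketch (not from the paper): quantifying an "optimum
# number of retries" for an input whose feedback is delayed or absent.
# Assumption: each attempt independently succeeds with probability p;
# retrying stops being informative once the probability that every
# attempt failed drops below `tolerance`.

import math

def optimum_retries(p: float, tolerance: float = 0.05) -> int:
    """Smallest n with (1 - p)**n <= tolerance: beyond this point,
    further identical retries are more 'insane' than informative."""
    if not 0 < p < 1:
        raise ValueError("p must be strictly between 0 and 1")
    return math.ceil(math.log(tolerance) / math.log(1 - p))

# A click that registers 70% of the time: about 3 tries suffice.
print(optimum_retries(0.7))   # -> 3
# A flaky interface that registers only 20% of the time invites
# far more repetition.
print(optimum_retries(0.2))   # -> 14
```

On this toy model, the less responsive the interface, the more "insanity" it demands of its users, which is the point the passage is making.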

Amid this world, we see a quest for authenticity, consistency, reliability, transparency, and comprehensibility. This quest has manifested itself as a surge in the old DIY (Do It Yourself) and “Maker” cultures, along with a rebirth of old technologies like record players (turntables), large control knobs, and the like.

The answer to these problems is a code of ethics on human augmentation that requires immediate feedback — among other things — related to how we’re sensed and how we know when and how we’re being sensed.

More info in this open-access paper by Mann: Surveillance (oversight), Sousveillance (undersight), and Metaveillance (seeing sight itself)


[1] N. Bostrom. Ethical issues in advanced artificial intel­ligence. Science Fiction and Philosophy: From Time Travel to Superintelligence, pages 277–284, 2003.

[2] R. Clarke. Asimov’s laws of robotics: Implications for information technology, Part I. Computer, 26(12):53–61, 1993.

[3] M. Fred Eisenberger and C. C. Rose Caterini. City of Hamilton Bylaw No. 10-122. May 26, 2010.

[4] R. Janzen and S. Mann. Sensory flux from the eye: Biological sensing-of-sensing (veillametrics) for 3D augmented-reality environments. In IEEE GEM 2015, pages 1–9.

[5] S. Mann. Humanistic intelligence/humanistic computing: ‘wearcomp’ as a new framework for intelligent signal processing. Proceedings of the IEEE, 86(11):2123–2151 + cover, Nov 1998.

[6] S. Mann. Computer architectures for personal space: Forms-based reasoning in the domain of humanistic intelligence. First Monday, 6(8), 2001.

[7] S. Mann. Wearable computing: Toward humanistic intelligence. IEEE Intelligent Systems, 16(3):10–15, May/June 2001.

[8] S. Mann. The sightfield: Visualizing computer vision, and seeing its capacity to “see.” In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pages 618–623. IEEE, 2014.

[9] S. Mann. Keynote address: Code of ethics for the cyborg transhumanist era. In Second annual conference of the World Transhumanist Association, August 5–8, 2004.

[10] M. Minsky, R. Kurzweil, and S. Mann. The society of intelligent veillance. In IEEE ISTAS 2013.

Additional references:

S. Mann, Brett Leonard, David Brin, Ana Serrano, Robin Ingle, Ken Nickerson, Caitlin Fisher, Samantha Mathews, R. Janzen, M. A. Ali, K. Yang, D. Braverman, S. Nerkar, K. M.-Sanchez, Zack P. Harris, Zach A. Harris, Jesse Damiani, Edward Button. Code of Ethics on Human Augmentation. VRTO Virtual & Augmented Reality World Conference + Expo, June 25-27, 2016. (open access)

Steve Mann. Surveillance (oversight), Sousveillance (undersight), and Metaveillance (seeing sight itself). 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2016. (open access)

Steve Mann. The Sightfield: Visualizing Computer Vision, and seeing its capacity to “see.” IEEE Xplore. 2014 (open access)

Ryan Janzen and Steve Mann. An Information-Bearing Extramissive Formulation of Sensing, to Measure Surveillance and Sousveillance. 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). 2014 (open access)

Mir Adnan Ali and Steve Mann. The inevitability of the transition from a surveillance-society to a veillance-society: Moral and economic grounding for sousveillance. 2013 IEEE International Symposium on Technology and Society (ISTAS). 2013 (open access)