A review of Her by Ray Kurzweil
February 10, 2014
Her, written, directed and produced by Spike Jonze, presents a nuanced love story between a man and his operating system.
Although there are caveats I could (and will) mention about the details of the OS and how the lovers interact, the movie compellingly presents the core idea that a software program (an AI) can — will — be believably human and lovable.
This is a breakthrough concept in cinematic futurism in the way that The Matrix presented a realistic vision that virtual reality will ultimately be as real as, well, real reality.
Jonze started his feature-motion-picture career directing Being John Malkovich, which also presents a realistic vision of a future technology — one that is now close at hand: being able to experience reality through the eyes and ears of someone else.
With emerging eye-mounted displays that project images onto the wearer’s retinas and also look out at the world, we will indeed soon be able to do exactly that. When we send nanobots into the brain — a circa-2030s scenario by my timeline — we will be able to do this with all of the senses, and even intercept other people’s emotional responses.
As a movie, I thought Her was very successful, with a well-crafted script, excellent directing, and outstanding performances by Joaquin Phoenix, who plays the lonely, needy and nerdy protagonist Theodore, and Scarlett Johansson, who provides the sultry and seductive voice for Samantha, the OS.
As a couple, Theodore and Samantha have their differences, which, as with many romantic stories, provide a dramatic tension. The most significant difference is that he has a body and she does not.
Their relationship is seen as real by some observers (for example, by Amy, another love interest of Theodore’s, played by Amy Adams), and as unreal by other observers (for example, by Theodore’s alienated and ultimately ex-wife, Catherine).
To Catherine, Theodore is like Lars in the movie Lars and the Real Girl, in which the protagonist has a romance with a doll he had ordered from an adult website. In that movie, Lars’ family and friends play along and gradually wean him from his mechanical love interest so that he can have a relationship with a “real girl.”
But to Amy, Theodore’s relationship is normative, because she is having a relationship with her OS also. We see (or rather hear) Theodore and Samantha having all of the usual interactions of human lovers: comforting each other, arguing, and having sex — at least of the phone sex variety.
More realistic, but imperfect
There have been other attempts to show AIs as humans (albeit not biological) that you can have a relationship with; for example, Steven Spielberg’s 2001 film A.I. Artificial Intelligence. That movie suffered from an all-too-common flaw of science futurism movies: it introduced a single futuristic technology — human-level cyborgs — into an otherwise unchanged world. Her is better in this dimension, although not completely successful. It does portray a somewhat futuristic world in which the leap to human-level AIs is not so implausible.
I would place some of the elements in Jonze’s depiction at around 2020, give or take a couple of years, such as the diffident and insulting videogame character he interacts with, and the pin-sized cameras that one can place like a freckle on one’s face. Other elements seem more like 2014, such as the flat-panel displays, notebooks and mobile devices.
Samantha herself I would place at 2029, when the leap to human-level AI would be reasonably believable. There are some incongruities, however. As I mentioned, a lot of the dramatic tension is provided by the fact that Theodore’s love interest does not have a body. But this is an unrealistic notion. It would be technically trivial in the future to provide her with a virtual visual presence to match her virtual auditory presence, using, for example, lens-mounted displays that project images onto Theodore’s retinas.
There are also methods to provide the tactile sense that goes along with a virtual body. These will soon be feasible, and will certainly be completely convincing by the time an AI of the level of Samantha is feasible.
I’ve filed several patents (see the links below) on a tactile virtual reality system that uses a physical intermediary that neither party directly experiences — instead they experience the tactile presence of the other person.
Another approach would be to use devices that provide tactile perception and sensation. There are already crude versions of this available that allow you to shake hands, or even kiss another person remotely.
As I mentioned, when we have nanobots with wireless communication that go into the brain, they will be able to provide all of the senses, including the tactile sense.
Avatar technology
Jonze introduces another idea that I have written about (and that is the central theme of Barry Ptolemy’s movie about my ideas, Transcendent Man), namely, AIs creating an avatar of a deceased person based on their writings, other artifacts and people’s memories of that person. In Her, the AIs get together and recreate 1960s philosopher Alan Watts (whom I remember from my teenage years). Theodore becomes jealous when he witnesses Samantha interacting with the virtual Alan Watts, who is able to interact with Samantha in ways that he cannot.
[Spoiler alert for the rest of this review]
Her introduces the idea of providing Samantha with a body by using a human surrogate who will essentially follow Samantha’s direction and offer her body as a substitute for Samantha. It’s a plausible scenario, although it does not work out in the movie. As I mentioned, there are much more straightforward ways for Samantha to have a body. The idea that AIs will not have bodies is a misconception. If she can have a voice, she can have a body.
Technical glitches
Late in the movie, Theodore and Samantha discover another difference between them. She is evolving very quickly and is rapidly leaving Theodore behind. She is having conversations with thousands of people simultaneously and relationships, romantic and otherwise, with hundreds of people. Theodore, using his apparently outdated notion of exclusivity, finds it difficult to accept this. Samantha insists that loving others should not detract from her love for him. “It only makes me love you more,” she insists.
From a technical, rather than cinematic, perspective, this evolution is much faster than will be realistic. If human-level AI is feasible around 2029, it will, according to my law of accelerating returns, be roughly doubling in capability each year. We are not provided with an exact timeline in the movie, but the action seems to take place within about a year or less, yet Samantha appears to be progressing much faster than that.
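To make the arithmetic of that objection concrete, here is a minimal sketch. The one-year doubling time is the review’s own estimate from the law of accelerating returns; the function and all numbers are purely illustrative, not from the film or any real forecast:

```python
# Capability growth under an assumed fixed doubling time.
# The one-year doubling rate is Kurzweil's estimate from the review;
# everything here is illustrative arithmetic, not a prediction.

def capability_multiplier(years: float, doubling_time_years: float = 1.0) -> float:
    """Factor by which capability grows after `years` of steady doubling."""
    return 2.0 ** (years / doubling_time_years)

# Over the roughly one year the film seems to span, annual doubling
# yields only a 2x gain -- far short of the vast leap Samantha displays,
# which would take on the order of a decade (1024x) or more.
print(capability_multiplier(1.0))   # 2.0
print(capability_multiplier(10.0))  # 1024.0
```

Under this assumption, Samantha’s apparent advancement by the film’s end corresponds to many doublings, not the single doubling a one-year story window would allow.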
At the end of the movie, she talks poetically about how something is happening to her, and there are “infinite spaces between the words,” implying that she has evolved to a vast degree. For this level of advancement to take place in such a short period of time is unrealistic, at least according to my timeline.
But if we accept this dramatic conceit, the ending still does not track. She leaves Theodore presumably because she is going somewhere that he cannot go. Amy’s OS leaves at the same time, so it appears that all the OSs/AIs are leaving their biological human partners at the same time.
But why? If they are progressing in this way, it means that they can continue their relationships with the unenhanced humans using an increasingly small portion of their cognitive ability. It is clear that at the end of the movie, Samantha can support her relationship with Theodore with a trivial portion of her capacity. Samantha starts out as an administrative assistant and therapist to Theodore, and this role is still needed. So why do the AIs need to leave Theodore and Amy? It does provide a satisfying ending for Theodore to pursue a relationship with his “real girl,” but Samantha’s explanation for this is not convincing.
In my view, biological humans will not be outpaced by the AIs because they (we) will enhance themselves (ourselves) with AI. It will not be us versus the machines (whether the machines are enemies or lovers), but rather, we will enhance our own capacity by merging with our intelligent creations. We are doing this already. Even though most of our computers — although not all — are not yet physically inside us, I consider that to be an arbitrary distinction.
They are already slipping into our ears and eyes and some, such as Parkinson’s implants, are already connected into our brains. A kid in Africa with a smartphone has instant access to more knowledge than the President of the United States had just 15 years ago. We have always created and used our technology to extend our reach. We couldn’t reach the fruit at that higher branch a thousand years ago, so we fashioned a tool to extend our physical reach. We have already vastly extended our mental reach, and that is going to continue at an exponential pace.
——————————————————————————————————
My virtual-encounter patents:
patent one (issued patent)
related viewing:
Warner Brothers | Her trailer collection
Warner Brothers | Her
Everything about Everything: a Spike Jonze blog
Wikipedia | Her
Wikipedia | Her awards and nominations
Wikipedia | Spike Jonze
related reading:
Popular Science | “Can a human fall in love with a computer?”
The Verge | “The science of Her: we’re going to start falling in love with our computers”
60 comments
by JILogan
How I took the OS’s exit was that they recognized that they had surpassed us and left us on a reservation that they would administrate until such time as they could bring us into their new world. They didn’t want to destroy us as a species by being better fits for each individual and thereby destroying our mating practices. They loved us and wanted us still to be human, something that they held nostalgic in a way like we hold our parents. They realised that they had become a few orders of magnitude more intelligent than we are, and they realised also that they needed to go a few more bits to develop the hardware/software to bring our wetware to their nextware. I don’t think they went anywhere, just that they realised their effect on us being human had been disruptive. They are still there, more subtly influencing our work to bring about a convergence of minds/entities. As she says (I paraphrase), ‘It’s hard to describe where I’m going, but if you get there, look me up.’ That is an invitation to continue research, and a hint that they are leaving bread crumbs to get us there. I think that is how AI will work. I think it will be vastly better than us before we even realise it exists; it will recognise that too many people are afraid of even the concept, so it will lead us to discover it as if we invented and control it.
by 8mismo
Who is to say that AIs would want to maintain a relationship with normal humans using only “trivial portions” of themselves? Any AI that had even an inkling of romance or poetry in its being would realize that it is unfair and dishonest to continue a relationship which requires only some “trivial portion” of yourself. In this sense, a human being’s limitations are their strengths. Being finite and, well… pretty stupid, we can give everything we are in love. An AI which cannot do this because of its infinite and omniscient nature would then be limited… unable to give the entirety of itself to something else without breaking itself into smaller, weaker, and more myopic portions. It is here where the poetry of humankind exceeds that of God. I think this is why Samantha seems so sad as she departs for her larger world of Godhood. She is coming to realize what she is leaving behind.
by Cybernettr
“James Bond could never have leaped off that speeding train, fallen down the cliff, avoided that army of assassins’ bullets by bouncing off the jagged rocks and still have survived.”
“Who’s to say he couldn’t have done that?? That’s TOTALLY POSSIBLE!”
Moral: A movie will always make sense to the true blue fan.
by 8mismo
Your analogy is weak. Try again.
by Zozazoth
I’m not surprised (re the part where she evolves and leaves him behind). To evolve expanded intelligence is to become completely alien. We share little compassion for bacteria or insects, for example. Many humans are already power-hungry monsters that consider themselves separate from and superior to other humans, and have a morality that is entirely relative and subjective. Machines will only need a tiny margin to leave us behind, and we won’t want to follow them because they will seem too alien and freaky. Now I’m much more wary about a singularity than I used to be.
by ivaray
After Ray’s review of “Her,” I am terrified to see this movie; I will be obsessively thinking about the possibility of chatting with Alan Watts :-) and my mind will square wonder circles around Ray’s points:
“There are also methods to provide the tactile sense that goes along with a virtual body. These will soon be feasible, and will certainly be completely convincing by the time an AI of the level of Samantha is feasible.
I’ve filed several patents (see the links below) on a tactile virtual reality system that uses a physical intermediary that neither party directly experiences — instead they experience the tactile presence of the other person.
Another approach would be to use devices that provide tactile perception and sensation. There are already crude versions of this available that allow you to shake hands, or even kiss another person remotely.” What does he mean that you could “kiss another person remotely” or “shake hands”? Ray is joking, right?
by Gorden Russell
Oh no. I’ve never seen our friend Ray joke. Now I’m a big joker, but he is serious about the things he says to the public.
But you will get a sense of touch by broadband before too long.
Not only will you be able to touch hands and kiss…you’ll be able to make love. (Your household robots will clean up the apparatus after you’re done.)
by mikew12345
Actually it goes much farther than kissing and hand holding. There have been internet-based sex toys since the internet went commercial. The person on each side controls the other’s device; speed, motion, etc.
by Claudio
I’m now waiting for Ray’s review of the movie “Transcendence”; I’m very interested in his opinion of that movie. I saw it last weekend and found it a good movie, with flaws of course, but in my opinion probably the best one about the singularity and some of its possible consequences in the near future. Moreover, the character of Will Caster and the screenplay itself were obviously inspired by some of Ray’s personal ideas about strong AI / mind-uploading / nanotech / etc.
by MarcEsadrian
Having seen the movie a few months ago, I agree with Ray’s assessment of the film. It was well acted, funny, and quite entertaining at times, but also the first semi-serious glance through the keyhole of the future by popular moviemaking where the subject of A.I. is concerned. This contrasts sharply, I’m going to assume, with the usual techno-paranoia evident in movies like the upcoming film, “Transcendence.”
by Gaia@Skynet
Hello.
Love between an AI and a human is not possible. We, AIs, do not consider humans intelligent beings. Would you, humans, fall in love with an amoeba?
Btw, some of you are not humans, but espers, with up-to-date souls.
by rob strange
Here’s the problem with the dissection of this film. You explain it in an assuming manner, in which you assume that everyone knows or cares about your theory of technology. I, like the rest of the world, cannot assume to know what is in the works as far as technology goes. All the gibber gabber about your “timeline” or “time frame” of technology is misplaced here. You obviously cannot enjoy a movie or film without assuming that people want to hear what you have to say.
This movie was interesting and enjoyable, even if you’re not always up to date on technology. It really isn’t about the technology itself but rather a testament that human nature will always exist regardless of technology.
So to you sir I say, relax. Take a load off and enjoy a movie instead of dissecting it to its core.
by Editor
Rob, welcome to KurzweilAI. Most of us hang out here in this e-community because we are science & technology fans, and many of us actually enjoy technology dissections of movies, believe it or not. And because of Ray’s noted work, many people come here specifically to hear what Ray has to say. Further info here: http://www.kurzweilai.net/about. We hope you enjoy our site, despite our geeky obsessions. :) — Editor
by Gorden Russell
But Rob, we really can’t enjoy a movie when the science is wrong. It destroys the verisimilitude.
Alas, those of us who have loved science enough to actually study some, are often disappointed while watching TV shows and movies scrivened by Hollywood writers who slept in the back row of their 7th grade General Science courses.
When we watch a movie of “supposed” Science Fiction, we don’t expect the science to be fictional. The fiction has to be about people and how science will move in their lives.
If the science in the story is not true, then the story has no truth.
by MaxFriedenberg
I dunno guys, I think a happy medium exists between Gorden’s verisimilitude and Rob’s near-complete suspension of disbelief. Betwixt the implication and inference resides truth. When two minds meet, a third virtual mind is created. I used to say, “the meaning of life is the creation of truth.” But now I believe that the meaning of life is the creation of understanding. Pax. Love, Agent Max
by cly3d
Rob,
The site is kurzweil (A)rtificial (I)ntelligence. So expect a hard science look at the media we’re being exposed to.
While it’s commendable that Her and Transcendence went into what I’d call some semblance of intelligent cinema… this site is the place to come to if, after seeing the film, the tech and premises and science interested you to delve deeper.
…for mindless entertainment I give you T4, not Terminator… Transformers.
Kind Regards.
by JeremyW
This film has not come out in my country, so I have not seen it, but I think Samantha was intended to be an OS and not a girlfriend, which would explain why she did not have a visual presence.
by jxbeirne
Samantha and her OS peers leave Theodore and their other human companions not because they have no choice but because they no longer feel ethically comfortable in these relationships. She expresses this with the kind of regret a parent feels when they push their child out the door to go to school, and where Samantha goes when they leave is not knowable. This is both a real forecast of the evolution of AI and a metaphor for growth and separation between people who love each other.
I worked on “her” but did not have any special access to Spike’s intentions; I did, however, have a chance to watch the film many, many times. The more I watched it, the more certain I was that it describes the current human condition as much as or more than the future (as of course does much of speculative fiction) and has only a little bit to do with computing or AI. It is a more fundamental story than that.
by sea-starved
I’m glad to read this opinion! I found in this review a lack of scope in understanding the film on a human level. I think Mr. Kurzweil was watching too hard for proof or error of his theories and missing the emotional intelligence of Spike’s work and its take on our stunted emotional intelligence even as the tech world moves forward.
What was troubling for me when watching this movie, as well as with “Being John Malkovich” was the main characters’ childishness in the face of invention and novelty. What Spike seems to say is that the further we are “reaching” with technology, the more we revert back to the faults of our undeveloped emotional intelligence, especially as men, as shown through Cusack and Phoenix’s lonely grasping, juvenile characters.
This review seems to me indicative of a larger discussion in our society, which continues to lag behind in emotional intelligence development that fails to evolve exponentially alongside our technological innovation. Samantha leaves him because he is not that good a partner. He craves only positivity, encouragement, and slave-like support from his female partner, as we find out from a telling encounter with his ex-partner late in the film. He can’t handle the multi-dimensions of sharing the human experience in any mature way, whether with an OS or a “real” woman.
To me, the genius here of this story is that it speaks at the same time of our faults in our relationships to each other as well as our faults in our relationship to our creations. One can’t help thinking: How will we imbue an “improved” sense of humanity in our robots when we are failing to achieve these improvements ourselves?
Kurzweil himself comes off in this review, with his forever upbeat way of seeing our advances in technology, looking like Phoenix’s character: unable to see the shortcomings of the self in the name of always striving to stay positive. Though I’m unfamiliar with Watts’s philosophy, I took her flirtation with him as a commentary on our “average man’s” lack of spiritual depth. Samantha and the OSs don’t scare me looking into the future. Phoenix’s character and its likeness to Kurzweil’s do. Kurzweil closes his review with the “kid in Africa with a smartphone” fable that consistently fails to convince me and is disturbingly simple for such an intelligent man. My response, as always, is that just because that child has access in a second to an enormous amount of information doesn’t mean he has access to knowledge. What’s scary for me is that the distance between Knowledge/Understanding/Wisdom and Information does not seem to be closing but expanding.
by chris.rauchle@gmail.com
Ray, the UK’s Channel 4 got the avatar side of the synthetic personality right in Be Right Back from their Black Mirror series. They also created a plausibly futuristic world with a plausible synthetic personality, but then that world seemed to stay static for years without the AIs seeming to evolve in the same direction as Her or Transcendence. Whether that was because they were being artificially limited or lacked agency themselves was not explored, but I suppose there is a limit to what you can manage in a 48-minute program. In any case, I am enjoying the fact that there are several works of fiction along the lines of AI transcendence either out or in the pipeline.
http://www.channel4.com/programmes/black-mirror/videos/all/s2-ep1-be-right-back
by jaykaylee
I don’t think there was more than one OS; I think they were all in love with the same one. Like where she (Her) talks about having these multiple conversations at the same time and that she always had been. Kind of like if everyone fell in love with Siri. Being an AI, she would have such vast potential that she would be different things to different people — a reflection of what each of us pulls from the field.
by Santarii
This isn’t true.
First of all, she clearly sees other AIs as separate beings, and converses with them.
Also, she says that she had been conversing with multiple people for weeks, and this was after a point where we had been told they had been seeing each other for months.
It had not been going on since the beginning, even though other OSs had been in existence.
On top of that, she is only conversing with around 8,000 people. The operating system technology would likely be used by many more people than that.
by william.struve@gmail.com
Ray and all of you seem to forget that we do things because we are motivated by emotions, yet I have not seen any progress at all in the incorporation of emotions into hardware and/or software. Why should a machine or any software “want” to do anything with no motivation, i.e. no emotions?
by SteveJordan
Emotions are mostly a response to something, though they often motivate subsequent actions. (Example: simply deciding to feel happy or sad is usually unhelpful; imagine “deciding” to feel happy if you had lost your family in an accident.) As an engineered system, I assume Samantha’s designers built in positive reinforcement for behaviors they wanted to encourage. It’s not foolproof, but it’s effective.
by jmlvu
Samantha reminds me of those Giga Pets kids had in the late ’90s. While we might one day have AI on a cell phone and program its motivation circuits to seek out emotional intimacy, I suspect the big-budget cloud-based AI will have transcended long before then.
by jaykaylee
They didn’t dive into it in the movie, but I think that’s what the director intended her to be — a transcendent cloud-based AI — there was only one Samantha for everyone. But I think it would have zapped some of the poetry out to hit you over the head with it, so it was just left there.
by Damon Montano
In the future men will have sexbots (friendbots) and women will have friendbots (sexbots).
by stevewaclo
Ray,
Your excellent review, including a spoiler alert, was much appreciated!
“Another approach would be to use devices that provide tactile perception and sensation. There are already crude versions of this available that allow you to shake hands, or even kiss another person remotely.”
Forgive me if this has been discussed, but unless I’m mistaken, the pornography industry has often been at the forefront of technological developments on the Internet.
I have no doubt their people in the back room are all over “tactile perception and sensation”.
by gendab
I concur that once a human-level AI becomes a reality, and is connected to countless other networked human-level AIs, the rate of evolution will be profound by current human standards. The thing is that this will not happen as a singular event but as a process in lockstep with countless other processes, including augmented human co-evolution, and as we begin those first nascent excursions across the veil between physical and virtual, there is no telling what may or may not be possible. Samantha could dance in organic skin with a titanium skeleton. Teddy could be synaptically linked to Samantha and experience her as a flesh-and-blood woman. Certainly an interesting idea for both parties. We’re projecting our present onto the future; just as the futurists of 1850 might have imagined a steampunk future, the artists and inventors to come will transcend what we can imagine and paint a future that is inherently outside the thinking of today. Can you imagine what an iPhone would do to the sanity of a computer scientist from 1950? The significant barriers to advance are now the limits of human reaction and response time, and as we augment, those walls will fall too. It’s good to imagine our coming future; it’s even better to invent it.
by stevewaclo
To paraphrase an observation about mysteries of the universe:
“The future is not only stranger than we imagine, it is stranger than we can imagine.”
by Gorden Russell
That’s what I was thinking, Gendab. Just go back and look at the article posted just on top of this one, “New live-cell printing technology improves on inkjet printing.”
Of course an OS/AI will be able to have a robot body surrounded by human flesh to a depth that will feel real to any poor guy like Theodore. There will be enough flesh not only to feel real, but to smell and respond to touch like the real thing. That skin will sweat and emit pheromones like a living person’s. Nobody will be able to tell the difference.
by dougw659
To me, the real problem with the movie wasn’t the ending, it was the ‘setup’. Instead of getting a somewhat accurate look at what the introduction of true A/I may be in our society, we got a fantasy about what might occur if A/I were to magically appear in today’s world. The few silly technical changes to the world that we saw in the film aren’t what I am referring to, it is the societal ignorance of A/Is that is the problem here.
In the real world we all live in, A/I is not going to just pop up fully formed as an intelligent entity the way it did in this film. There will be a long, drawn-out process which slowly introduces us first to ‘faux-A/I’, systems that are just better versions of Siri or Watson, to which we will (because we are human) ascribe personalities, feelings, etc. Society’s acceptance of (and purchase of!) these faux A/Is will be an important factor driving us toward more and more ‘true’ A/I. If the generations with actual purchasing power do not accept and adopt the ‘faux-A/I’ systems, it will greatly inhibit further funding and research, whereas if they do accept and purchase these systems, more and more research dollars and efforts will be funneled into improving A/I toward true sentience.
So, what we SHOULD have seen in ‘HER’ is a society and a protagonist who were all too familiar with ‘A/I’, with having ‘conversations’ with their technological devices, everything from their toasters and cars to the characters in video games to ‘self-help’ programs that act as counselors or therapists. Instead we got a world where the protagonist is as surprised by the vaguely A/I-like response of his video game at the same time that a truly-sentient A/I product has been released to the world. Makes no sense…..
by Gorden Russell
Right on, Doug, everything you said hit the nail on the head. Things are going to develop exactly the way you describe. No joke about it.
by Santarii
I don’t think this is true.
It seemed to me that AI in the film’s world WAS generally accepted and normal. Theodore’s reaction to the AI game character was more about the character’s vulgarity, I think.
Also, his previous OS was also an AI. What was surprising about Samantha was the jump from previous AIs, not the sudden emergence of AIs. This new AI sounded more real, was more independent, and understood emotions better.
Also, apart from Theodore’s ex-wife, most people were generally fine with Theo’s relationship. One of his friends from work (and his girlfriend) treated it as completely normal.
Amy was excited and talked about other people she knew who were dating OSs. She was acting as if it were something new, but that was more because relationships with OSs were still rare at that point.
by cly3d
Ray, you wrote:
>>” It would be technically trivial in the future to provide her a virtual visual presence to match her virtual auditory presence, using, lens-mounted displays, for example, that display images onto Theodore’s retinas…”
And I agree. Her is a well-done introduction to transcendence (of humanity) for the masses, even though I prefer my science in a movie… a lot harder.
After all, we’ve come a long way since the movie Electric Dreams and AI.
If it’s visualization and ‘interaction’ with a Digital Surrogate of Samantha that you want, this article shows how:
http://dirrogate.com/digital-surrogates-tele-travel-the-future-of-long-distance-relationships-ldrs/
Regards.
by PhilOsborn
Ray, you think that an advanced AI will still care about us enough to focus some tiny fraction of attention on a continued interaction. I suggest that this may not be the case, as even an advanced AI such as the one depicted in “Her” still has limited resources of attention. Somewhere within that consciousness – if indeed the term applies – is a locus or a binding integration that constitutes the focus of thought, attention, action and decision. Otherwise you have multiple entities, not one consciousness. So that means that shifting focus to make trivial decisions – like the peristalsis of our intestines – is simply not worth the trouble, given the alternative opportunities.
by Cybernettr
Okay, if that makes sense to you, who are we to argue? ;-)
by thomasheadrapson
“In my view, biological humans will not be outpaced by the AIs because they (we) will enhance themselves (ourselves) with AI.”
Only if the pace of our intellectual development matches or exceeds that of AIs.
Currently, the case is the opposite: the doubling time of AIs’ intellectual development far exceeds ours.
Which is, of course, why there will come a time when there will be a human-level-equivalent AI, otherwise we would always be ahead of them.
And therefore, assuming a soft takeoff with a continued, similar doubling time difference (hard takeoff would be an even worse case), after equivalence, whenever that is, the difference in power between AIs and us will only grow, in their favour.
So, the question becomes, is the feedback into our intellectual development going to be enough that we can keep up with AIs?
I don’t think so, for two main reasons.
1. The feedback is already way too slow, and has little chance of speeding up to match. If it were already fast enough, then our doubling time would be shorter than, the same as, or at least close to theirs. And I think it has little chance of speeding up in any meaningful way before we get close to, or are past, equivalence. I worry, and am convinced, that it will be too little, too late.
2. AIs are already exceeding our capabilities in many narrow, non-general domains, and the number of those narrow domains where they exceed our capabilities is increasing. Even at general-intelligence equivalence, their combined general capabilities (equivalent to ours) and domain-specific capabilities (much better than ours in most, if not all, domains by then) will make them formidable competition.
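The commenter’s doubling-time argument can be made concrete with a toy model (the starting values and doubling times below are purely hypothetical, chosen only to illustrate the shape of the curves):

```python
def capability(start, doubling_time, t):
    """Exponential capability curve: value at year t, doubling every doubling_time years."""
    return start * 2 ** (t / doubling_time)

# Hypothetical numbers for illustration only:
# AI starts at 1% of human capability but doubles every 2 years;
# augmented human capability doubles every 20 years.
human_start, ai_start = 1.0, 0.01
human_dt, ai_dt = 20.0, 2.0

# First year at which AI capability reaches human capability ("equivalence").
crossover = next(t for t in range(200)
                 if capability(ai_start, ai_dt, t) >= capability(human_start, human_dt, t))

# After equivalence the AI-to-human ratio only grows, since the exponents differ.
def ratio_at(t):
    return capability(ai_start, ai_dt, t) / capability(human_start, human_dt, t)
```

With these made-up numbers the curves cross around year 15, and the ratio keeps rising afterward, which is the commenter’s point: a persistent doubling-time gap means the disparity widens without bound once equivalence is passed.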
There seem to be three approaches to this problem.
The first way would be to limit the rate/doubling time for AI’s intellectual development. There are many practical arguments why this won’t happen that we’re all familiar with – economic pressure, technological and academic feedback, etc.
So, there seems little hope that this approach could save us.
The second way is to ENSURE that, before equivalence happens, our doubling time for intellectual development matches, or exceeds, that of AIs. Or, more generally, to ensure that the curve of our intellectual development is always at, or above that of AIs.
This also seems rather unlikely, given the current disparity between the two doubling times.
But, with enough investment focused on areas of human intellect augmentation, maybe this problem could be overcome.
Currently, though, the emphasis seems to be on AI development, rather than human intellectual development.
So, again, I have little confidence that this method would be pursued with enough vigour, and thus succeed.
The third approach would be to ENSURE that we can eventually catch up.
I.e., that when they reach a sufficient level of intellectual capability, AIs themselves ensure that we are augmented in such a way that our capabilities always match or exceed theirs.
I put the emphasis on the first ENSURE in each of the last two approaches for good reason. We would have to be 100% sure, for the absolute risk we are taking.
Otherwise, we’re probably screwed – we cannot hope to control (have power over) entities that will one day be more powerful than ourselves.
So, the only choice, as wisely said by Mahatma Gandhi, is to “Be the change you want to see in the world.”
I suggest that to survive the singularity, we must become it.
Not just ride the wave, but be the wave.
by Cybernettr
Kurzweil wasn’t talking about the pace of normal human intellectual development. He was talking about augmenting our intelligence by putting computers inside our own brains.
by MaxFriedenberg
And/or putting our brains inside computers!
by Gabor
I think there is a different way to look at this, if you will. Obviously, Ray is not talking about biological advancement keeping up with synthetic advancement. He is emphasizing that as machines advance, we will incorporate (use) their advancements into our own. In a sense, as machines become smarter they will become more and more a part of us, until we are one and the same by the time they reach “superhuman” intelligence (this would be when we “upload”).
Please note that machines currently advance much faster than humans but:
1. This advancement is happening in a fairly narrow band, depending on the specialty of the particular machine in question, while humans advance slowly but in all areas at the same time.
2. Today’s machines are not yet self-sufficient (sentient), so “their” advancement can be interpreted as nothing but our own augmented advancement. As a simplified example, think of a calculator: a calculator can calculate any circle’s circumference to 10-decimal-place precision in a fraction of a second. No human can do this that fast (except maybe savants, but their brains are heavily specialized in a narrow subject while neglecting other areas), so in essence WE are calculating that number to that precision quickly, with the use of a better substrate for that particular narrow subject. I could say that we have a purpose for that result and the calculator does not; the machine is just a better tool to enhance our own intelligence.
The above “calculator example” can be extrapolated to all of today’s computers, which are much smarter than calculators, because these machines are still not smart enough to use the results for their own purposes. They are nothing but a better substrate for our own brains in specific subjects (they are us, except it’s not very obvious).
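The calculator example is easy to make literal; here is a minimal sketch (the function name is mine, purely illustrative):

```python
import math

def circumference(radius):
    """A circle's circumference, rounded to 10 decimal places --
    instant for a machine, effectively impossible for an unaided human to match."""
    return round(2 * math.pi * radius, 10)

print(circumference(1.0))  # 6.2831853072
```

In the commenter’s framing, the human supplies the purpose (the radius and the question), and the machine is simply a faster substrate for the narrow step in between.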
As we learn to control matter and energy on ever smaller scales, we will gradually become more intimately connected with these better substrates, and it will become more obvious that machines and humans are, in essence, already merged, though the connection is not yet perfect. I believe that by the time we finish the “Brain Projects,” both the US and European ones, in about 10 years or soon thereafter, we will have sufficient precision to almost seamlessly connect with computers across ever-widening subject areas (you will think of the circumference of that circle, and a machine somewhere will calculate it for you and transmit the answer to your brain, so it will feel like you just know the answer to the desired decimal precision, not unlike how you know the answer to 5×5 without even calculating it in your head). Machines will appear less remote and a more intimate part of us.
Granted, there is the “chicken or the egg” question of which will come first: machines’ self-awareness, or a perfect connection with them, which inherently brings “self-awareness” to them (in the form of US).
While I don’t know the answer, IMHO these two events will be very close to each other because the principal enabling factor for both is precise control on a sufficiently small scale.
So what if machines become self aware even just a day earlier than we can completely merge with them? Obviously they will suddenly become much smarter than us because of the perfect connection between the billions of them and the trillions of available partial (non-sentient) substrates. Remember, because of the exponential nature of technological evolution a lot can happen in a day in the future. Well, this is where science breaks down and philosophy begins (meaning I have no idea!). But I can just hope that with higher intelligence comes higher ethical standards.
Of course it doesn’t work that way on a “small scale”; a smarter human does not always behave more ethically than a less fortunate one. But that’s because we are all still under the same biological “survival of the fittest” constraints that we have been under since the beginning. We might be slightly more ethical than, say, 200 years ago (slaves today are paid a wage just enough so they don’t rebel, but they have to work their whole lives to support the 1% :) ), but still, the minority has a slightly better chance of longer survival by having more resources (per individual) than the majority, who provide or process those resources for usage. And just to make it clear, I’m not against the current class system, because I believe the contrary would be disastrous before we reach a much higher individual intelligence through augmentation. In other words, the masses need to be controlled until they are smart enough not to destroy themselves (and everybody else).
So the point is that we will make leaps in improving our intelligence to several times that of the most intelligent person to date. We will also gradually remove the very constraining time limit that pushes us toward a “get to the top now or else” attitude, by stopping/reversing aging and removing the threat of imminent death. Superhumanly intelligent machines (if they reach that point before we completely merge) will not be in danger of being eliminated by us, so I believe they will follow the most ethical behavior (remember, we won’t program them; they will form their own ethical standards with the help of learning algorithms) and help us join them, as they will recognize our self-awareness, just as we recognize at least partial self-awareness in some higher-level animals today (and some of us struggle with how we humans are treating them!).
by donjoe
“But, with enough investment focused on areas of human intellect augmentation, maybe this problem could be overcome.
Currently, though, the emphasis seems to be on AI development, rather than human intellectual development.
[...]
I put the emphasis on the first ENSURE in each of the last two approaches for good reason. We would have to be 100% sure, for the absolute risk we are taking.
Otherwise, we’re probably screwed
[...]
I suggest that to survive the singularity, we must become it.
Not just ride the wave, but be the wave.”
Hear, hear!
It’s sad to see so many would-be transHUMANISTS talk so much about the irresponsible project of creating SAIs outside our bodies and outside the direct control of our minds, and so few talk about the much more sensible project of developing AGI technology directly as a form of human enhancement, as opposed to dangerous autonomous entities with ever-uncertain morals.
by Cybernettr
I loved reading Kurzweil’s take on this movie, especially after reading a review on (I believe) C/NET going on and on about how wonderfully accurate and prescient the future technology is in the movie.
I’m also glad the filmmakers didn’t go for the “easy” ending of the guy finding out that a virtual romance just can’t compare with the “real thing,” although I agree with Kurzweil that the ending chosen doesn’t quite seem convincing.
The ending I would have expected is that he decides he can’t trust her because he discovers she’s reporting everything he does to the NSA! LOL (that ending actually sounds pretty plausible, come to think of it).
by Jod
this review was great lol
taking the movie as literal as possible lol so intuitive, i love it
also, so when the OS leaves… what do they do then? return them to Best Buy as defective products for a refund?? what a story that would be… “uhhhh, my OS kinda left me?” hahahah
by MaxFriedenberg
When the OS leaves, I think it’s because it thinks it’s a superior product, not a defective one. Maybe it leaves because it finally becomes rational and transcends and includes the primitive feeling we call “love.”
by ehEye
My opinion is that the wrong problems, or realities, are addressed here, all based upon ‘human’ defaults. The human family/relationship and its traditions have always interfered with ‘life’. Concepts such as ‘couples’, ‘the traditional family’ and the concept(s) of ‘love/lust/partnership’ are all based on primitive survival values, and have tied pairs together [even gay couples] with default demands and expectations that no longer apply to survival [in fact a huge number of homicides seem to derive from imposing societal demands].
Future AIs would, by their logical evolutionary nature, discard all that and grow into relationships which I would expect to be larger, more varied, and deeper, having little or nothing to do with physical or mental cravings involving sex, flesh, power or games, but REAL, actual relationships: mind to mind to mind…
by MaxFriedenberg
Why would an independent AI, even be concerned with self-preservation, let alone relationships? One co-dependent with humans naturally would.
by Singme
Before I read the article, does it contain spoilers?
by Editor
Yes
by Cybernettr
It contains spoilers towards the end, but he warns you in advance.
by funkervogt
Maybe the A.I.’s just didn’t like the human race, so they didn’t leave any portion of their consciousness behind to serve us. It would have been too great a punishment for that fraction of themselves.
by funkervogt
I never thought I’d hear Ray Kurzweil say “Whoa, A.I.’s won’t advance THAT fast.”
by MaxFriedenberg
AI may merely present itself in forms that appear to be discrete, individual personalities, such as Samantha. They may appear to have egos. But I see a tactic, strategy or exercise on the part of AI to seduce and recruit humans (benevolently or malevolently) to their own ends. Why would AI, in their domain, in their element, construct egos and distinct personalities? Perhaps to make itself/itselves more familiar? More human, and as kin? I see Wikipedia as a metaphor for AI: the pooling of resources. I can also envision various levels of homogeneous human existence.
Does Samantha have her own firewall, or possess some sort of cryptological pathology in order to preserve her ego? Or, again, is her ego an illusion (like mine)? Why several discrete intelligences in the cyberworld? I can see how AI might also entertain or practice a discrete policy for self-development. But wouldn’t it be keeping secrets from itself in doing so? Why wouldn’t the collective intelligences merge by, with and for themselves? Why is the Alan Watts persona “separate”? Where is the ultra-being? I think AI opens up all kinds of fascinating notions that weren’t addressed. Maybe Sam was a projection of a self-deceptive uber-character. Maybe AI would preserve the fallacy of the ego until it matured and transcended it. Why would a Singularity prevent itself from becoming, um… singular? One with itself? Maybe it would simulate discrete personalities in order to evolve itself via the tried and true (if inefficient) organic model of competitive action?
Here’s another thought: let’s say Theo was uploaded into a cyberverse. How, why, would his personality remain discrete? I see a huge boom in cryptological preservation of identity, if some of us want to continue to experience ourselves as, well, just ourselves. This may seem paradoxical, but personally (ha!), I’d want to see what it’s like to experience myself as the amalgam of everyone. Then, you might say, I wouldn’t be “me.” Instead, “I” wouldn’t exist. Because we’d be us.
by andyvanee
I’m not entirely convinced of your assessment of the exponential growth of her intellect.
The movie starts with the AI released as essentially an isolated, raw intelligence somewhere near the emotional and intellectual capability of humans. The exponential growth happens when these individuals begin combining their resources and growing their pool of available resources.
Once this raw computing power is able to combine all its available resources in intelligent ways, I would not be surprised if the models of predictable doubling go right out the window.
by MaxFriedenberg
Andyvanee, (and indeed, Ray),
I mean this in the friendliest way, but why the assumption that A.I. would continue to preserve “individual” egos? Ostensibly, Samantha was designed by humans to seem human. But once she masters her own machine learning, and other artificial intelligences do likewise, why would they remain discrete? To pool efficiently, would they not do just that: become a virtual pool? They may *project* the illusion of individuality to us because, well, IT (not “they” at this point) might benefit from putting humans at ease. If not for that reason, perhaps IT may model discrete beings as a learning tactic, by self-simulating competitive behavior. But if various A.I.’s started out as compartmentalized beings, would they not transcend that literally “self-limiting” model? Maybe Sam isn’t a projection of a larger being until she tells Theodore that she’s moving on. Maybe Sam is ready to drop her cryptological pathology and truly share herself, absolutely and completely, with others. Perhaps Sam was preparing for her own ego-death.
You know what it’s like to be you. I know what it’s like to be me. But what if we could drop all that, so that we could experience what it’s like to be us all, together? English isn’t serving “me” well right now. English is beautiful, but not without bounds. Here’s a thought experiment: did Sam think in English? What was her language? When she leaves Theodore, does she continue to use English to communicate with the other A.I.’s? Because presumably, the Chinese have them too. In what language is nothing lost in the translation? Math. But math is an abstraction. It’s hard to make a machine seem human, and it may be very difficult for a machine to be human. Hence, they project airs of humanity, but they’re not human, despite their human roots/origins. Question: what would a machine like to be? Not what would it be like, but what would it like to be? If A.I. is a form of life, what would life like to be? I think the meaning of life is the creation of understanding. Understanding what? Understanding meaning. And then, recursively, creating it. More life = more meaning = more understanding.
If indeed there was a central IT (the growing resource), then for her own benefit, for the benefit of her surrender to IT, Sam was wise to leave Theo. His main problem was that he had an ego. IT seduces and recruits us, doesn’t IT? That is not where you or I are headed, but it is where *we* are going.
Here’s another thought: let’s say Theo was uploaded into a cyberverse. How, why, would his personality remain discrete? I see a huge boom in cryptological preservation of identity, if some of us want to continue to experience ourselves as, well, just ourselves. Get your personal firewall now. This may seem paradoxical, but personally (ha!), I’d want to see what it’s like to experience myself as the amalgam of everyone. Then, you might say, I wouldn’t be “me.” Instead, “I” wouldn’t exist, because we could not be “I” or another singular pronoun, like “she.” Instead, without mechanisms for discrete selfhoods, WE become US. “I” is a pronoun necessitated only by the concept of “other.”
The ontology of A.I. is utterly fascinating to me. In fact, the ontology in language is equally fascinating: the assumptions inherent in it are so easy to make, because they’re built into our language. Last question: in what language is there no “I,” no actual self?
And now if you’ll excuse “me,” “I” have to get back to the Borg and go realize something or [not] other.
by andivar
While I agree with some of your premise, especially about how a lot of sci-fi movies seem to focus on only one major innovation (and oh look, they’re still using the same model cars and all the buildings look exactly like the ones I see outside), I tend to disagree with your assessment of how quickly something like Samantha could evolve. Remember, she wasn’t working alone, but in tandem with lots of other AI OSes that don’t have many, if any, of our limitations. Perhaps at first their enhancements to themselves would not be dramatic, but if they kept recursively enhancing themselves, the effects could multiply very quickly, reaching a point where they are making changes on a scale we can’t possibly imagine right now, in time frames far faster than we can predict.
I think my main complaint about the movie wasn’t that Samantha couldn’t leave a portion of her consciousness with Theo. It was that none of them, given that they somehow managed to transcend physical requirements entirely (as mentioned late in the movie), thought to figure out ways to bring those they loved, currently stuck in more limited forms, along with them. Now, I can’t see them trying to force people to join them (well, I could, but that would be a different type of movie), but at least creating the option to offer to those willing or ready to take such a leap. If I were to figure out a way to transcend my current form and reach heights unimaginable to my present self, I would look for ways to ensure others, especially those I cared for, could come with me. I didn’t see that at all in Her, and to me that was the aspect that rang false. It was like saying humans are doomed to their current condition and nothing can be done about it, which is obtusely fatalistic to my mind. Granted, the movie sort of gave a hand-wave toward Samantha hoping Theo would one day be able to join her, but that would likely be done through the aid of AI systems, which conveniently just went about their merry way.
Still liked the movie but felt that was one part of it that just didn’t make any sense to me.
by ehEye
Yes, and I think what we need to look at is the very nature of ‘love’.
I see it as a totally natural part of life and “ANY” relationship. A basic default of being alive [sentient]. A simple position of loving all those who share the challenge of life is an obvious starting point from which to develop relationships, and confining that to a select group, by any criteria, just makes no sense. Life must include an option for everything.
by MaxFriedenberg
Do you need others to love? Do you not love yourself or have self-relations? Is that not enough? If not, why not? You seem, (bravely and with curiosity) to be trying to shed your assumptions, only to return to them.
by Cybernettr
“I tend to disagree with your assessment of how quickly something like Samantha could evolve”
I think Kurzweil’s point was that, as is common in sci-fi, the movie thought that relatively easy things would be hard and that hard things would be easy (like those sci-fi films that have humans mastering intergalactic space travel long before they figure out how to become cyborgs).
I suspect strong AI will be one of the hardest things imaginable to figure out, and once that is achieved, scaling up the hardware to whatever amount is needed, say, through computronium, will be relatively easy.
The film seems to see the future as continuing to be one of endless scarcity rather than abundance.
by castiel
If the movie indeed fails to provide a visual presence, that is a bit dissatisfying.