[EXT. Dockside (SUNSET)]

And if Scottie follows Madeleine but refuses to help her when she jumps into the Bay? Or if he has another attack of vertigo at just that moment — perhaps, without meaning to, looking up into the net of guywires and girders below the bridge, thinking to himself how precarious any foothold up there would be, how far one would fall if one fell? What then? Would Judy take her role so seriously as to actually drown? How much could Elster have paid her, to make it worth it to risk her own death in playing out another woman’s life? How much did a salesgirl’s life cost in 1958? What could have been the nature of the payment?

Many more people commit suicide by jumping from the Bay side of the Golden Gate Bridge than the Pacific side. There are barriers on the Pacific side, but even before there were, more people jumped from the Bay side. And whether Maxwell Anderson’s preliminary script had her jumping from that side or not, Judy jumps from that side.

Faked deaths always end in real deaths.

Only one is a wanderer. Two together are always going somewhere.

(00:57:37)

If there is a dark power which treacherously attaches a thread to our heart to drag us along a perilous and ruinous path that we would not otherwise have trod; if there is such a power, it must form inside us, from part of us, must be identical with ourselves; only in this way can we believe in it and give it the opportunity it needs if it is to accomplish its secret work.

(E. T. A. Hoffmann, “The Sandman”)

The characters in [stories of self-imitation] circle back on themselves in a ring with a twist in it — the twist of self-deception or the deception of others, of ambivalence or ambiguity, or of the paradox of married sexual love.

(Wendy Doniger, The Woman Who Pretended to Be Who She Was)

Is “Only one is a wanderer” a line that Judy Barton from Salina, Kansas, could have come up with? Are any of Madeleine’s lines? She only really seems to be “acting” starting at 01:16:00, just before entering the tower.

In a story called “Web Mind #3,” computer scientist Rudy Rucker writes, “To some extent, an author’s collected works comprise an attempt to model his or her mind.” Those writings are like a “personal encyclopedia,” he says; they need structure as much as they need preservation. He thus invented the “lifebox,” a device that “uses hypertext links to hook together everything you tell it.” No writing required. “The lifebox is almost like a simulation of you,” Rucker says, in that “your eventual audience can interact with your stories, interrupting and asking questions.” He isn’t only talking about the traces we leave after we die. He’s repositioning immortality as a question of organization and technology. It’s a technophile’s version of the Turing test — if an interaction with you can be so perfectly simulated that there is no difference between interacting with you and interacting with the lifebox, is the you the lifebox simulates actually simulated, or is it, instead, another you?
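
Rucker’s description is concrete enough to caricature in a few lines of code. The sketch below is purely illustrative, with invented names and a crude word-overlap lookup standing in for whatever a real lifebox would do: stories you tell it become entries hooked together by links, and a question is answered by replaying whichever stored story best matches it, along with the entries it is hooked to.

```python
# A toy caricature of the "lifebox" idea, not any real system: told stories
# become entries hooked together by links, and a question is answered by
# replaying whichever stored story shares the most words with it.

class Lifebox:
    def __init__(self):
        self.stories = {}   # title -> text of a story you told it
        self.links = {}     # title -> titles it is hooked to (the "hypertext links")

    def tell(self, title, text, related=()):
        self.stories[title] = text
        self.links.setdefault(title, set()).update(related)

    def ask(self, question):
        # Crude retrieval: the story with the largest word overlap wins.
        q = set(question.lower().split())
        best = max(self.stories,
                   key=lambda t: len(q & set(self.stories[t].lower().split())),
                   default=None)
        if best is None:
            return "I have no story about that."
        follow_ups = ", ".join(sorted(self.links[best])) or "nothing in particular"
        return f"{self.stories[best]} (You might ask next about: {follow_ups}.)"

box = Lifebox()
box.tell("Kansas", "I grew up in Salina, Kansas.", related=("San Francisco",))
box.tell("San Francisco", "I came out to San Francisco and found work as a salesgirl.",
         related=("Kansas",))
print(box.ask("Did you grow up in Kansas?"))
```

Even in caricature, the Turing-test question is visible: everything the box can ever say back, it was once told.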

While researching an article for The Atlantic, Alexis Madrigal discovered that telemarketing centers have begun using soundboards — recorded speech activated by the press of a button — in place of human response. Rather than answer the person on the other end of the line, the call-center worker selects the appropriate snippet of sound from a computer menu, and the computer plays it to the caller. Even though nearly all such call-center workers were simply repeating a script to begin with, there were problems: some workers had weird or unpleasant voices, some had strong accents, and some couldn’t control their emotions. Humans are human. They have good days and bad days. Even at their best, they might not be as good as the best representatives. The solution? Record the best representatives and then make those recordings available to every worker, at the touch of a button. While Madrigal seems to share the programmers’ enthusiasm for this technology, he ends his article with the thought “When we look around our world at the technologies we have, it’s hard to imagine the series of steps that got us to where we are.”
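
Mechanically, the soundboard Madrigal describes is about as simple as software gets. The sketch below is a made-up stand-in (the file names and menu entries are invented), but it shows the division of labor: the worker contributes only a key press, and the caller only ever hears one of the same few recordings.

```python
# A made-up, minimal stand-in for a telemarketing soundboard: the worker
# listens, picks a button, and the computer supplies the voice. The file
# names and menu entries are invented for illustration.

SNIPPETS = {
    "1": "greeting.wav",      # "Hi, how are you today?"
    "2": "yes_real.wav",      # "Of course I'm a real person!" (with laugh)
    "3": "pitch.wav",         # the health-insurance pitch itself
    "4": "bad_signal.wav",    # "I'm sorry, we seem to have a bad connection."
}

def play(filename):
    # Stand-in for streaming a recording down the phone line.
    print(f"[caller hears {filename}]")

def respond(button_pressed):
    """The worker's entire contribution to the conversation is this key press."""
    play(SNIPPETS.get(button_pressed, "bad_signal.wav"))  # nothing fits -> stall

respond("2")   # the caller asked, "Are you a robot?"
respond("9")   # the caller asked about tomato soup; no button for that
```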

Madrigal began investigating this technology, he tells us, because of an article in Time, “Meet the Robot Telemarketer Who Denies She’s a Robot.” The “robot” of the headline is named Samantha West. This Samantha West called Time’s reporter offering health insurance. When the reporter asked if he was speaking to a robot, Samantha West “replied enthusiastically that she was real, with a charming laugh,” but when asked “What vegetable is found in tomato soup?” she claimed not to understand the question. (Fair enough — the tomato is a fruit.) When asked what day of the week it was the day before, she complained of a bad signal. Other reporters called Samantha West. Each time, her answers, both on-script and off-, were exactly the same. It seemed clear she was a robot.
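
What the reporters were doing amounts to an informal test, sketched below with a canned responder standing in for the phone line: put the same off-script question to the line over several calls and compare the wording. A person paraphrases; a soundboard can only replay the same file.

```python
# An illustrative version of the reporters' test: repeat one off-script
# question across several calls and see whether the wording ever varies.

def looks_like_soundboard(ask, question, calls=3):
    """`ask` places one call and returns the reply; identical wording every time is the tell."""
    replies = {ask(question) for _ in range(calls)}
    return len(replies) == 1

def canned(question):
    # Stand-in for the far end of the line: the same recording, every call.
    return "Ha ha, of course I'm a real person!"

print(looks_like_soundboard(canned, "What day of the week was it yesterday?"))  # True
```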

Reading about these reporters grilling Samantha West, I couldn’t help but think of Philip K. Dick’s Do Androids Dream of Electric Sheep? and his Voigt-Kampff machine. There, because android technology is so advanced, the questions are trickier, based on emotion, something Dick presumed could not be programmed. And maybe he’s right. Even if it could come up with slight variations on the same answer — as we all inevitably do when asked the same question — wouldn’t Rucker’s lifebox always have the same attitude, the same tone?

If Dick is right and the thing that puts us off about these “robot” or “cyborg” telemarketers is their lack of emotion (Madrigal points to this as their main benefit, since both callers and workers find such calls less stressful), then is emotion what makes us us, what separates us from the way we talk about — and talk to — each other and ourselves? Where does that leave Judy Barton, answering from a script that is not only not her own but isn’t even that of the woman she’s been asked to play; not Madeleine’s but Carlotta’s, and not Carlotta’s but Elster’s? It is Elster’s Carlotta’s decision to commit suicide, not Madeleine’s, and certainly not Judy’s. Judy’s emotions are not under her control. “Madeleine’s” emotions are under Elster’s control. But then what does that make Scottie, drawn in by a woman with the affect of a robot, falling in love with a soundboard? Is he just a dupe, or is it something more sinister than that?

In her book The Future of an Illusion: Film, Feminism, and Psychoanalysis, Constance Penley writes, “The bachelor machine is typically a closed, self-sufficient system. Its common themes include frictionless, sometimes perpetual motion, an ideal time and the magical possibility of its reversal (the time machine is an exemplary bachelor machine), electrification, voyeurism and masturbatory eroticism, the dream of the mechanical reproduction of art, and artificial birth or reanimation. But no matter how complicated the machine becomes, the control over the sum of its parts rests with a knowing producer who therefore submits to a fantasy of closure, perfectibility, and mastery.” The concept of the “bachelor machine” originates with Marcel Duchamp and his Large Glass, but, as Penley’s quote demonstrates, it has been adopted by theorists, metonymically used to mean a closed system, a machine that admits no input and delivers no output. Isn’t Judy’s Madeleine then a bachelor machine? To the extent that she can even respond to other people (and it is rare, very rare, that the other person is not Scottie), she must do so as Madeleine, i.e., within the system that Elster has created for her. Worse, the role she has been given is a bachelor machine within a bachelor machine — when she is Madeleine pretending to be Carlotta, she is doubly closed-off. But is she then Elster’s bachelor machine, or Scottie’s?