Pierre Cassou-Noguès

The Bene-Veillance of Machines

(In French)

Apps that determine our mood, humanoid robots that adapt to our behaviour, cameras that guess our gestures: these technologies monitor us for our own good. This is bene-veillance.

Pierre Cassou-Noguès explores our relationship with contemporary technology through the fictions we imagine in order to be able to inhabit new forms of life. For while technology is transforming our material environment, it is also upsetting the content of our thoughts and emotions, down to the most intimate dimensions of our subjectivity. Philosophy must therefore analyse both these new realities and the possibilities they promise, for better or for worse.

Abstract

Domestic robots and companion robots are meant to be kind and benevolent. They may watch over children or the elderly. They collect plenty of data that they may sell, but it is for the user's own good that she is under surveillance. This benevolent surveillance is what I call beneveillance, and I take it to be a key feature of contemporary technology. The central claim of this book is that the beneveillance of machines transforms us at a transcendental level, i.e. it transforms the structures in which we may see ourselves and the outside world. It transforms the very status of subjectivity. This idea of the transcendental is derived from C. Malabou's empirical transcendental.

The introduction discusses the use of fiction as a method of philosophical investigation. The idea is that, when we speak of a "robot", this term evokes not only current robots, with what they can and cannot do, but also all the robots we have imagined, so that we must include imaginary robots in the elucidation of what robots are for us. We should refer to science fiction as well as to the discourse of manufacturers, which often exceeds what the robot can actually do and which can also be taken as a fiction.

The first chapter is about introspective machines (the term is from Minsky): apps using data analysis or neuroscience devices that aim at revealing to the subject what should be a first-person experience. For instance, several apps can use various data on my phone to tell me my own mood. In our usual form of life, I am supposed to know whether I am happy or unhappy. If I need to ask my phone, our whole form of life has changed. This is what I call the thermometer syndrome. It is inspired by an uncle who used a thermometer to measure the outside temperature and then decide whether he was hot, as if the experience of his body no longer gave him enough information. The relationship to ourselves, to our bodies as well as to our minds, is mediated by technology. Can we imagine a future in which there would no longer be a first-person experience, in which pain would be like a fever and we would need a device to check whether we are in pain?

The second chapter continues the investigation of the thermometer syndrome. The idea is that this transformation of our form of life makes our mind "flat", i.e. it suppresses the latent contents that Freud opposed to the "façade" of the dream. By giving a univocal determination of mental contents, these introspective machines leave us with a façade mind, a flat mind, devoid of the latent contents that psychoanalysis was looking for.

Starting with B. Stiegler, the third chapter contrasts the model of television, organized around one pole (such as the screen in a bar during a soccer match, when there is a goal and all heads turn to that side), with the circulation of contents from peer to peer. I try to show that, since the 19th century, two models of "crowds" have been developed: one that is polarized and one that focuses on the circulation of contents (such as G. Tarde's description of the "influence of conversation" or Norbert Wiener's analysis of "clichés"). I follow Wiener in proposing an analysis of "clichés" (what elsewhere would be called "memes") and of the "obsessions" that these clichés create in the network. These obsessions are painful for the human mind but beneficial to the development of the network, since they bind the user to it ever longer and more closely.

The fourth chapter concerns the question of work and the value of attention. I claim that what has value is not the user's attention (in the usual sense of the word, as when a teacher says to his pupils: "please pay attention") but an artificial state of mind that television and the internet must produce, one close to the state of mind that we attribute to zombies in TV series: a blindness to the outside world, as well as to oneself and one's own survival, in which only precise objects, which the zombie then desires, are illuminated. This state of mind is the result of a process of production to which the spectator contributes, lending her body and her mind to the operation. By contributing to the production of what has value, the spectator thus works. This is a new form of work, which I call zombie work, and for which I take up the Marxian analysis of value as work. A classical question, already asked by Marx, is whether capitalism can survive automation and the end of work. I show that it is possible to imagine a society where machines produce all commodities by themselves while an army of zombies continues to work in front of their screens in order to increase the desirability, and thus the value, of the commodities produced by the machines. In an alternative, and more likely, version, production and zombie labor are geographically separated: commodities are produced in one part of the world and made desirable in another part of the world by zombies who work on their sofas watching advertisements, and who thus sell their desire as others sell their physical strength or the agility of their fingers.

The fifth chapter is about companion robots. I compare Poe's poem "The Raven", with its refrain "Nevermore", to a video produced by SoftBank about its robot Pepper, in order to highlight the singular character of the supposed benevolence that is introduced into the relationship between a human and a non-human. I am also interested in the images of Paro, a companion robot supposed to help elderly people, in order to question the ethical character of this benevolence. It is in this chapter that the idea of beneveillance is put in place.

The sixth chapter analyses new forms of perception and surveillance introduced by contemporary technologies. I claim that the specificity of contemporary technologies is not that they allow us to perceive at a distance but that they allow us to perceive without reciprocity. Information theory (as Wiener sets it up) makes it possible to imagine, and to a certain extent to create, a non-reciprocal perception. If I shake hands with a friend, I touch her hand, and she touches mine. We cannot do otherwise. But in some museums, thanks to tactile devices, I can touch an antique vase without risking breaking it or hurting my finger on its chipped corner. I touch while remaining intangible. On the other hand, while sight, in life, gives us a global image (I enter a ballroom and see everything at once, the silhouettes of the dancers, their relative positions), our screens give us a multitude of local views: small vignettes with faces in close-up. It is, in the words of Deleuze and Guattari, a haptic perception and no longer an optical one. Technological perception is haptic but, contrary to touch, it is non-reciprocal. Contemporary technologies thus open up a form of perception and surveillance that I call synhaptic and that must be contrasted with the panoptic model that Foucault placed at the foundation of disciplinary societies.

The conclusion takes up these themes in relation to S. Butler's novel Erewhon and, more particularly, to the chapters entitled "The Book of the Machines". Butler hints at how humans may become "parasites" in relation to machines. Technology, for Butler, has neither consciousness, nor intelligence, nor instinct, nor life, but it tends to extend itself, to develop itself. We contribute to its extension as the bacteria in our stomach contribute to our own life. Technology transforms humans but does not destroy them by itself. It only asks them to help it expand, as my phone beeps to catch my eye and, once it has grown a bit old, tells me through all sorts of signs, which are like a little voice: "buy a new, more powerful model".

However, as we approach a climate catastrophe, we must learn to turn it off (at least from time to time).
