Anthropology Magazine

Transcript – Is Robot Empathy a Trap?

Daniel White: [00:00:03] You know, I kind of pulled him out of the box and Pepper is sort of slumped over. So I just kind of, you know, held him there for a second to see for myself what was going on and was quite surprised to find these almost sort of parental feelings, you know, arise in me. You know, I’m not a parent myself, but to kind of pull this, this robot out of the box, and a robot, you know, turned off that clearly kind of evoked the sense of needing … help. [00:00:28][25.2]

 

Chip Colwell: [00:00:30] OK, wait, so he’s talking about a robot, right? [00:00:31][1.1]

 

Jen Shannon: [00:00:32] Yeah, he’s talking about a robot named Pepper, who happens to be the first emotional robot, marketed to basically understand how people are feeling and respond to them. Well, it sounds to me like he’s having a really deep response, like deep parental feelings, and he has this really serious, emotional connection. [00:00:55][22.5]

 

Jen Shannon: [00:00:56] Yeah, that really surprised me because I actually encountered Pepper in a hotel lobby in Japan recently. [00:01:00][4.4]

 

Chip Colwell: [00:01:01] Oh, lucky you … What was that like? [00:01:02][0.2]

 

Jen Shannon: [00:01:02] Yeah … [00:01:02][0.0]

 

Chip Colwell: [00:01:03] Same kind of feeling? [00:01:03][0.4]

 

Jen Shannon: [00:01:03] Well … I was, you know, standing in line, looked to the right, saw Pepper and was like, oh my God, that’s the one. I have to go check this out. And so I walked up and, you know, Pepper kind of looks like some miniature person in a spacesuit. And on his chest, and I say his, not it, right, on his chest there are six selections of folk dancing. So I picked the one that had the image of a knight in shining armor and, all of a sudden, Pepper just started moving and jerking and shaking. But really what struck me was that over and above the sound of the music were the sounds of Pepper. Clack, clack, clack. I mean, it was just really kind of disconcerting, and I have to say I did not feel any sense of emotion except gosh, this is kind of weird. Like what … What is this about? [00:02:03][59.1]

 

Chip Colwell: [00:02:05] Wait, so, what is Pepper actually for, then? [00:02:07][1.5]

 

Background voice: [00:02:08] Pepper is here to make people happy. Help them grow and enhance their lives. Think of it as high tech you can high-five. [00:02:16][7.9]

 

Jen Shannon: [00:02:17] Pepper is actually for taking care of people. But there’s a whole debate going on about robots and, of course, some people really see that side of it, where there are a lot of wonderful ways that robots can improve the quality of human life. But there is this other dark side too, and we’ve seen that most recently in the news. [00:02:35][18.0]

 

Background voice: [00:02:37] Pepper’s not here to replace humans or even vacuum the floor. [00:02:39][2.4]

 

Jen Shannon: [00:02:40] Are they going to take our jobs and eventually take over us? Or are they going to become, you know, really a way to help our society in general and people? So, yeah, it’s, it’s still up for debate. I’d love to learn more. [00:02:53][12.6]

 

Chip Colwell: [00:02:53] The big question I’m interested in here is, Can robots care and why should we care if they do? [00:03:00][6.8]

 

Jen Shannon: [00:03:01] Good question … [00:03:02]

 

Together: [00:03:03] INTRO [00:03:18][14.5]

 

Chip Colwell: [00:03:21] Hey, Chip Colwell here, you just heard me chatting with my cohost Jen Shannon, and our conversation really got me thinking. How are our relationships with technology really challenging the way we think about our most serious intimacies? So I called up Dan White and Hirofumi Katsuno, anthropologists who are working on these very questions in Kyoto and Tokyo, Japan. You heard Dan’s voice just a couple of minutes ago. He’s the person who was surprised at his own parental feelings the first time he met Pepper. [00:03:53][32.0]

 

Daniel White: [00:03:54] Basically, I’m interested in emotion, and, these days, I’m looking at the combination of emotion and technology. So, looking at things that we like to call emotional machines, so, different kinds of technologies that can either read your emotion, sense your emotion, elicit emotion, or maybe even machines that in the near future can have emotion. Sometimes we hear this called emotional artificial intelligence. [00:04:19][25.0]

 

Chip Colwell: [00:04:20] So, what is an emotional machine, exactly? Isn’t that kind of an oxymoron? [00:04:23][2.9]

 

Daniel White: [00:04:25] In one sense, it means machines that are able to basically sense what people are feeling in a variety of ways. So this means something like a machine or a robot that could read your facial expressions and understand from that what a person is feeling. But we also refer to things like wearable devices and wearable technologies like a Fitbit or an Apple Watch, which can sense what your body is doing and how your body is feeling that you may not necessarily be aware of consciously. So machines that can sense heart rate variability or how much your skin is sweating, things like this, are actually good measures of how the body is feeling. So all these things get kind of lumped together in this category we call emotional machines. [00:05:03][38.8]

 

Chip Colwell: [00:05:05] So, in what sense are we talking about emotions here? Because, you know, sometimes when I’m frustrated at my computer, you know, say, you know, it won’t reboot or something, it feels like maybe it’s a little angry at me or, you know, some other device malfunctions. Is it, is it, is it a sort of an abstract sense that things are reacting to us or are we really asking the question: Can we give machines the same feelings that you or I might have? [00:05:32][26.9]

 

Daniel White: [00:05:32] Right. So, I should say that this field is very much attributed to the researcher, Rosalind Picard, who in the 1990s first wrote a paper asking if we could really give machines the ability to read our emotions, to elicit our emotions, and then, even thinking kind of a bit ambitiously, machines that could even have emotions. And what that really means is kind of still being explored. [00:05:56][23.3]

 

Chip Colwell: [00:05:57] And you study Pepper, so, who is Pepper? [00:06:00][3.0]

 

Daniel White: [00:06:01] Essentially, he is what Softbank is calling the first emotional robot. So, what that means is that Pepper has an emotional engine: He can sense how you’re feeling through facial expression, your tone of voice, body movements, key words, and, most importantly, he can learn what you like as you interact with him, by playing games with him via his tablet. And Softbank is very excited about this, and they think, indeed, that Pepper is hopefully going to become kind of a regular new family member in the near future. [00:06:35][33.6]

 

Chip Colwell: [00:06:35] And, Hiro, what exactly do you think the designers did to help elicit human emotion from what is essentially a piece of plastic and circuitry? [00:06:44][9.6]

 

Hirofumi Katsuno: [00:06:45] In the case of Pepper, there are many determinants of social presence, but an important one is eye gazing. [00:06:52][6.7]

 

Chip Colwell: [00:06:53] Eye gazing, like making eye contact? [00:06:54][1.6]

 

Hirofumi Katsuno: [00:06:54] Yeah, yeah. So, Pepper’s creators are pretty intentionally interested in this, you know, eye contact. It is really important. Actually, there is a special eye-tracking system, so he kind of chases our facial movement, especially eye movements, and the eye gazing evokes a feeling in the human mind that the machine is kind of an intentional agent that seems capable of observing us. [00:07:23][28.9]

 

Chip Colwell: [00:07:24] So, Pepper can gaze deep into our eyes essentially. [00:07:26][2.5]

 

Hirofumi Katsuno: [00:07:27] Well, sometimes, sometimes, we feel that, and practically, actually, yeah, he observes our facial expressions, and this actually evokes a feeling that the machine even cares about us. The machine tries to understand me or understand us. [00:07:43][15.7]

 

Chip Colwell: [00:07:44] Is that universal, or are there cultural aspects to how we react to robots? [00:07:51][6.6]

 

Hirofumi Katsuno: [00:07:51] That’s a really interesting question. We actually want to investigate that, but at least one thing we probably can emphasize in the case of Japan, and I don’t want to generalize all Japanese attitudes, but one really distinguishing kind of reaction by Japanese to the robot is that there is a kind of uncritical celebration of robots. And in the case of gaze, I would assume that in American social settings, people have, you know, a kind of anxiety when being looked at by a machine. But there are some cultural influences here such that the gaze also gives us comfort. They watch us and protect us. [00:08:44][52.6]

 

Chip Colwell: [00:08:51] All right. So I think we’re going to need some context here, really, about the history of robotics. So to fill us in, I called up Dr. Jennifer Robertson. She’s an anthropologist at the University of Michigan. Jennifer is also the author of Robo sapiens japanicus: Robots, Gender, Family, and the Japanese Nation out from the University of California Press in 2017. And I asked Jennifer how we got to where we are now. On the one hand, we have robots that make us anxious, scared for our lives, scared for our jobs, and on the other hand, like Hiro said, robots right now are taking care of us, and they can even make us feel more human. [00:09:31][40.0]

 

Chip Colwell: [00:09:33] Do you have a sense of, of how we went from, you know, robots as this kind of abstract concept to today robots’ being receptionists and babysitters and nurses and companions and I, I hear, even fashion models in Japan? How did we go from point A to where we are today? [00:09:52][18.5]

 

Jennifer Robertson: [00:09:53] That’s a gigantic question. Do you have about 10 series to devote to this? [00:09:56][2.7]

 

Chip Colwell: [00:09:57] I wish. [00:09:57][0.2]

 

Jennifer Robertson: [00:10:00] Well … Let me try to sum it up. Yeah, “robot” was coined by Karel Čapek and his brother Josef Čapek, a writer and artist, in Czechoslovakia at that time, in 1920, and Karel had just written a play in Czech called Rossum’s Universal Robots about a company, Rossum’s, that created these human lookalike robots for export all over the world. [00:10:31][31.9]

 

Background voice: [00:10:34] Young Rossum invented a worker with the minimum amount of requirements. He had to simplify him. He rejected everything that did not contribute directly to the progress of work. Everything that makes man more expensive. In fact, he rejected man. And made the robot. My dear Ms. Glory, the robots are not people. Mechanically, they are more perfect than we are. They have an enormously developed intelligence, but they have no soul. [00:10:57][23.5]

 

Jennifer Robertson: [00:11:03] Then it was almost immediately translated into English and Japanese and performed in Japan in 1924. [00:11:10][7.3]

 

Chip Colwell: [00:11:11] So did this new concept of a robot quickly catch on? [00:11:16][5.0]

 

Jennifer Robertson: [00:11:17] Well, absolutely. There were then comic books and advertisements and all kinds of science fiction stories using the word robot, or robotto in Japanese, which is a syllabic language. But in terms of movies, it was in 1927 that Fritz Lang’s Metropolis featuring the evil robot Maria premiered, and it was really after that time, in the mid-1920s into the 1930s, that robots became really popular media images in Japan. [00:11:50][32.8]

 

Chip Colwell: [00:11:51] These representations of robots … Are they different in different places, so, you know, are representations by European writers and artists different from Americans versus Japan and beyond? [00:12:05][13.9]

 

Jennifer Robertson: [00:12:05] I think it’s pretty universal at that time in terms of the presence of both evil warring robots that are used for military purposes and/or, you know, attack human beings and each other or benign robots that collaborate with humans to perform certain tasks. [00:12:22][16.4]

 

Chip Colwell: [00:12:23] Fascinating. So, so, what you’re saying in a way is that this tension of what robots can do for us has been with us for almost a century, really since the beginning. On the one hand, this fear of the militarized robot conquering us and, on the other hand, robots serving us or, you know, helping us in our everyday lives in our everyday world. [00:12:45][22.1]

 

Jennifer Robertson: [00:12:46] Absolutely. The thing is that Rossum’s Universal Robots was, of course, about these robots produced to relieve humans of drudge labor. But because they would often injure themselves, they were equipped with emotions that allowed them to sense dangerous situations so that they wouldn’t injure themselves and, you know, have to be replaced. It was a cost-saving measure. The thing is, some of these emotions then got translated into self-consciousness, which made the robots aware of their exploitation by humans, and so they revolted en masse and destroyed all humans. [00:13:25][38.9]

 

Chip Colwell: [00:13:25] That sounds very familiar. [00:13:26][0.6]

 

Jennifer Robertson: [00:13:26] Yeah. And I think this is where the, at least the Euro-American fear of robots comes from. Now, R.U.R. was staged in Japan, but that didn’t seem to lead to this notion of robots as always already dangerous and prone to attack and destroy humans. And there’s a reason for, for why these venues and applications are pushed in Japan and not necessarily elsewhere. Japan does not define itself as, as an immigrant nation, even though historically people obviously did immigrate to the islands that now compose Japan. But this notion that Japan is a mono-ethnic nation, which is largely a myth, has been retained by conservative politicians as a kind of rallying call to avoid replacing the shrinking workforce with, with guest workers, for example, and Japan has one of the fastest-aging and fastest-shrinking populations of all the post-industrial countries on the planet. So the idea is that the robots would replace human workers and the robots would also replace human caregivers, there being a dearth of nurses now in Japan; because the population is aging so rapidly, there are not enough people to care for the elderly and infirm. [00:14:54][87.6]

 

Background voice: [00:14:55] In 10 years, Rossum’s Universal Robots will produce so much corn, so much cloth, so much everything, that things will be practically without price. There will be no poverty. All work will be done by living machines. Everybody will be free from worry and liberated from the degradation of labor. Everybody will live only to perfect himself. Of course, terrible things may happen at first but that simply can’t be avoided. Nobody will get bread at the price of life and hatred. The robots will wash the feet of the beggar and prepare a bed for him in his house. [00:15:38][42.3]

 

Chip Colwell: [00:15:43] Do you think, at the end of the day, robots can love? [00:15:45][2.6]

 

Hirofumi Katsuno: [00:15:47] That’s a kind of … Yeah, that’s a good question, but at least on the human side, humans are getting to feel as if the robot loves us. [00:15:59][12.6]

Jennifer Robertson: [00:16:00] You can read certain behaviors and expressions as emotional, but I don’t think humans quite understand the full extent of what exactly constitutes emotions. Just as, you know, there’s no gold-standard definition of consciousness. So depending on how you define it, yes, robots have emotions and yes, robots have consciousness. And then, on the other hand, you’re going to get another group of specialists who say, Well, no, according to our standards, you know, no consciousness, no emotions. [00:16:31][30.8]

 

Chip Colwell: [00:16:31] So what do these robots tell us about human AI interactions? [00:16:35][3.7]

 

Daniel White: [00:16:36] When we talk about building an emotional machine, we like to use the term emotion machine because it kind of broadens this category. What you have to do first is basically build a model of emotion. So what you have to do is take, you know, theoretical ideas of what the emotions are and then build a quantitative platform for that. There’s kind of a problem in this, at least for anthropologists, who in their experience have found that emotions and the way people feel differ quite a bit from place to place and time to time. However, for people working in the field of affective computing, that’s not going to work so well for them because they’re tasked with being able to produce something that can give an emotional reading or an output quantitatively. So what happens is, in order to kind of build models that can do something, that can read emotion, that can produce emotional data, you have to take the simplest theoretical models that exist, and usually those models are very universalizing models. What I mean by that is that they’re models which basically assume that emotions are universal across culture. So if we think of something like facial expression, there have been psychologists who have done work on facial expression, and some who have argued that facial expression is universal across cultures, that, in fact, you can reduce it to basically six basic emotions: joy, sadness, anger, surprise, fear, and disgust. That’s a very good model for an engineer to work with because it’s very easy to kind of quantify once you get a coding system that can basically take a certain composite of the muscles on the face and then connect that to a certain category called joy or sadness. 
The consequence of that is that these universal theoretical models have these universalizing effects, that the kind of data that these machines are inevitably going to elicit are data that produce a sort of universal model of what people are and what their emotions are because you can only collect data that fits into these sort of six categories of what the basic emotions are. [00:18:53][136.3]

 

Jen Shannon: [00:18:56] Whoa. [00:18:56][0.0]

 

Chip Colwell: [00:18:56] Yeah, so, these technologies as we’re communicating more and more with them in the same way across the globe seem to be having a kind of like flattening effect on our cultural expressions. But, you know, Hiro and Dan and Jennifer, they all talked to me about the ways in which emotional AI might actually make us more prone to acting in traditional ways, more likely to act on gendered ways of doing things and even more able to access intimacy. [00:19:25][29.2]

 

Daniel White: [00:19:26] In many ways, you see our contemporary emotion machines and emotional technology as kind of, in a way, reproducing sort of traditional social relationships, traditional forms of social intimacy that perhaps have grown a little thin. And, of course, you know, with social networking services and social media, you can see some interesting parallels there, that these kinds of social robots are just another form of how technology mediates intimacy. And this is actually not something new. Intimacy has always been something that we engage with through tools and through objects, whether it be kind of love letters or symbols of, of hearts or music or now perhaps, you know, companion robots. [00:20:09][43.4]

 

Jennifer Robertson: [00:20:10] Well, there’s certainly, there’s certainly different species, you could say, of robots in terms of the kinds of professions and roles that they are imagined to play in human society. These professions are also gendered. So you get robots being made with a particular gender in mind simply because of existing, you know, sexist ideas about certain professions. So a security robot should look like, you know, a big burly police officer and have a big square shape and have at least some navy blue built into the color scheme, whereas a robot that’s designed to be a companion to an elderly person should be small and cute and a neutral color. [00:20:52][41.9]

 

Chip Colwell: [00:20:53] That definitely occurred to me in a conversation with another anthropologist who referred to Pepper with the masculine pronoun, as a “he,” whereas my intuition was “she.” And so, you know, Pepper is pretty androgynous visually, according to my eye anyways. But it is fascinating how each of us might interpret gender in these robots. [00:21:17][24.0]

 

Jennifer Robertson: [00:21:18] Well, I think Pepper was created as a quote-unquote male robot to begin with, and you find in the Japanese PR the reference to “Pepper-kun,” and kun, k-u-n, is a suffix indicating, you know, male gender. [00:21:33][15.2]

 

Chip Colwell: [00:21:33] So the creators, the creators wanted that as the gender, OK. [00:21:37][3.7]

 

Jennifer Robertson: [00:21:38] But you, you are free to, you know, to give Pepper a new name and a new gender. [00:21:42][4.5]

 

Jen Shannon: [00:21:43] Huh. So we’re playing out dominant cultural norms and gender roles through these new technologies? [00:21:48][4.7]

 

Chip Colwell: [00:21:49] Yep, that’s right. So the thing is, Jennifer is pretty skeptical of these robots’ capacity to carry out the larger-than-life goals that have been assigned to them by these companies and so, you know, at the end of the day folks in Japan today, they’re not being cared for by robots. It’s actually immigrants from places like Indonesia who are low-paid guest workers. So, one thing I keep coming back to is that although artificial intelligence might not be living up to our wildest fantasies and hopes and science fiction dreams, there are real world impacts right now and so, for example, we’re seeing an increased presence of surveillance as companion robots are actually collecting data on us as they talk to us and live amongst us. And so, that leads me to think about how desperately we as humans want closeness and we need sharing and affection, but how those needs can actually be used against us. How things like intimacy can actually hurt us. [00:22:51][61.1]

 

Jennifer Robertson: [00:22:57] In the United States, the majority of funding for research and robotics comes from the Department of Defense. And that is no secret. Even here at the University of Michigan, much of the funding is from the Department of Defense. [00:23:11][13.9]

 

Chip Colwell: [00:23:11] Really. [00:23:11][0.0]

 

Jennifer Robertson: [00:23:12] That said, it’s certainly the case in Japan that much of the funding for robotics comes from the government, although much has also come from the entertainment industry and also from the automotive industry and the development of industrial robots. It was with the administration of Prime Minister Abe, who is, who has been prime minister twice; his first administration was only a year. Well, actually less than a year long, 2006 to 2007, which was when he really started pushing the roboticization of Japan and the idea that the development of robots would generate all these spin-off industries that would help salvage the Japanese economy that was in recession at the time. And his second administration, which began in 2012, really, you know, put robotics back on the front burner, and he also decided that the Japanese should no longer shy away from joining the weapons economy, that it’s a very lucrative, you know, application of robotics. Now, the Japanese have always made weapons but for their own self-defense forces; now they can also export weapons and not just what we call dual-use technologies like Jeeps, for example, Mitsubishi Jeeps are found on the battlefields of Iraq and Afghanistan, or Nikon cameras that are used in battlefield situations or various optical kinds of devices. But now we’re talking about complete weapon systems. The Japanese are developing drones, exoskeletons for military use, for example. And of course, and this gets us back to humanoid robots—to make a humanoid robot is one of the most complex things to do as a roboticist. You have to have, you know, vision. You have to have voice; you have to have joints; you have to have mobility. [00:25:19][126.3]

 

Chip Colwell: [00:25:20] And there’s the intelligence aspect, right? [00:25:20][0.0]

 

Jennifer Robertson: [00:25:23] And intelligence, right. But it’s not just the robot itself that is then going to be parlayed into some sort of useful application, but rather the component parts are going to be spun off into these industries and/or incorporated into other kinds of technologies like cars but also like tanks and airplanes and fighters and radar systems and surveillance devices. And so, that’s why, you know, humanoid robots have been so important to the Japanese economy in the form of the, the, you know, the spin-off industries that their production has generated, and you’re finding a lot of that now being shunted into military use and military applications. [00:26:07][43.8]

 

Chip Colwell: [00:26:08] My sense is that we hear a lot about robotics and labor in medical fields but not so much in military applications. Why, why do you think that is? [00:26:17][8.7]

 

Jennifer Robertson: [00:26:18] Well, apart from the fact that a lot of this is top secret, journalists don’t seem to be focusing on this, which, in fact, they probably could. You know, entertainment robots just capture our imagination more than so-called killer robots. But I think also in Japan there has been a kind of censorship on the reporting of robots, representing them more as emblems of the advanced technological status of Japanese companies as opposed to how robots are really being applied on the ground. And I think, you know, many in Japan still want to export the idea that Japan is a pacifist country even though, under the very illiberal Liberal Democratic Party, which is the dominant party in Japan and has been since the postwar period, Japan’s remilitarization has been proceeding at a steady pace. [00:27:21][62.6]

 

Background voice: [00:27:24] The robot soldiers spared nobody in the occupied territory. They’ve assassinated over 700,000 citizens, evidently at the order of their commander. Rebellion in Madrid against the government, robot infantry fires on a crowd; 9,000 killed and wounded. Robots of the world. The power of man has fallen. A new world has arisen. The rule of the robots. March. [00:27:50][26.6]

 

Jen Shannon: [00:27:56] Weapons regimes, surveillance, emotional development, communication. I just did not think about all the complexities behind the Pepper that I met in Japan. [00:28:05][9.4]

 

Chip Colwell: [00:28:06] Yeah, you know, I think for me the question of care and love is still a really open question, but our idea that artificial intelligence can share intimacy with us really is having a lot of real-world impact. So, you know, data collection, billion-dollar industries, new weapons that come out of that cute culture, you know, I mean there’s, there’s just all these different parts to it. [00:28:31][24.6]

 

Jen Shannon: [00:28:32] So, are you going to get a Pepper after this? [00:28:34][1.9]

 

Chip Colwell: [00:28:35] Well, I have a cat already, but in any case, Jennifer advised me that Paro, which is this cute little baby harp seal, is actually a much more satisfactory companion. [00:28:46][10.6]

 

Jen Shannon: [00:28:47] Fair enough. [00:28:47][0.2]

 

Chip Colwell: [00:28:47] No, but I mean, for me this question isn’t really about am I going to own a Pepper. It’s really about these bigger questions, right? In some ways, as a society, we’re still thinking about the same themes that come up in Rossum’s Universal Robots. Agency and work and emotion and even violent revolution. For me, emotional artificial intelligence brings us back to how we think about basic human things like intimacy and communication and love. The future is really yet to be determined. [00:29:21][33.6]

 

Mina Colwell: [00:29:22] Siri, who produced this episode? [00:29:22][0.0]

 

Siri: [00:29:22] I did not understand. [00:29:22][0.0]

 

Chip Colwell: [00:29:31] This episode of SAPIENS was produced by Arielle Milkman, edited and sound-designed by Matthew Simonson, and hosted by myself, Chip Colwell, and Jen Shannon. SAPIENS producer Paul Karolyi, executive producer Cat Jaffee, and House of Pod intern Lucy Soucek also provided additional support. Christine Weeber is our fact-checker. Our cover art was created by David Williams. Music for this episode is by Matthew Simonson. You also heard an original score of the 1927 science fiction film Metropolis by composer Scott Ampleford. Paul Karolyi interpreted select portions of the play Rossum’s Universal Robots by Karel Čapek. Thanks this time to our guests Hirofumi Katsuno, Jennifer Robertson, and Daniel White. Thanks also to Danilyn Rutherford; Maugha Kenny; and the entire staff, board, and advisory council of the Wenner-Gren Foundation. Amanda Mascarelli, Daniel Salas, Christine Weeber, Aaron Brooks, and everyone at SAPIENS.org. SAPIENS is part of the American Anthropological Association Podcast Library. This is an editorially independent podcast funded by the Wenner-Gren Foundation and produced by House of Pod. Thanks so much to my daughter Mina for joining me in the studio today and helping me out with that smartphone tech. [00:30:53]

 

Mina Colwell: [00:30:55] See you next time, fellow sapiens. [00:31:01][84.0]

 

Chip Colwell: [00:31:02] Mina, why did the robot keep getting so mad? [00:31:04][2.9]

 

Mina: [00:31:06] Because people kept pushing his buttons. [00:31:06][0.0]