On AI and Why We Need Humans (and Tiger King)—Mutale Nkonde, AI for the People

[Image: A woman's face reflected beneath strings of computer code. Photo credit: Gerd Altmann from Pixabay]

An expert on race and technology, Mutale Nkonde is the founding CEO of AI for the People, a nonprofit creative agency. She is currently a fellow at the Berkman Klein Center for Internet & Society at Harvard University and at Stanford University’s Digital Civil Society Lab. She has also been a fellow at the research institute Data & Society, and her work has been covered by MIT Technology Review, WIRED, and PBS NewsHour, among others.

Mutale and host Ted Fox were supposed to get brunch back in mid-March, when she was scheduled to be a panelist at a conference hosted by the Notre Dame Technology Ethics Center, a new center at the University that supports multi- and interdisciplinary research on questions related to the impact of technology on humanity.

However, like pretty much everything else these last couple of months, that event had to be cancelled. Fortunately, Mutale was still up for doing the podcast remotely, so she and Ted traded waffles for Zoom and had a conversation about artificial intelligence that started out by digging into what AI, machine learning, and deep learning even are. They then talked about the ways this seemingly dispassionate tech can exhibit very real bias—not to mention its implications for privacy and the future of work in the age of COVID-19—as well as her work on Capitol Hill and at Harvard.

As for the three minutes they spent on Netflix’s Tiger King? Even that wound its way back to algorithms.

Episode Transcript

*Note: We do our best to make these transcripts as accurate as we can. That said, if you want to quote from one of our episodes, particularly the words of our guests, please listen to the audio whenever possible. Thanks.

Ted Fox  0:00  
(voiceover) From the University of Notre Dame, this is With a Side of Knowledge, the show that invites scholars, makers, and professionals out to brunch for an informal conversation about their work. I'm your host, Ted Fox. And if you'd like to keep up with the show in between episodes, you can find us on Twitter--and now Instagram, too. In both spots, we are @withasideofpod.

An expert on race and technology, Mutale Nkonde is the founding CEO of AI for the People, a nonprofit creative agency. She is currently a fellow at the Berkman Klein Center for Internet & Society at Harvard University and at Stanford University's Digital Civil Society Lab. She has also been a fellow at the research institute Data & Society, and her work has been covered by MIT Technology Review, WIRED, and PBS NewsHour, among others. Mutale and I were supposed to get brunch back in mid-March, when she was scheduled to be a panelist at a conference hosted by the Notre Dame Technology Ethics Center, a new center at the University that supports multi- and interdisciplinary research on questions related to the impact of technology on humanity. However, like pretty much everything else these last couple of months, that event had to be cancelled. Fortunately, Mutale was still up for doing the podcast remotely. So we traded waffles for Zoom and had a conversation about artificial intelligence that started out by digging into what AI, machine learning, and deep learning even are. We then talked about the ways this seemingly dispassionate tech can exhibit very real bias--not to mention its implications for privacy and the future of work in the age of COVID-19--as well as her work on Capitol Hill and at Harvard. As for the three minutes we spent on Netflix's Tiger King? Even that wound its way back to algorithms. (end voiceover)

Mutale Nkonde, welcome to With a Side of Knowledge.

Mutale Nkonde  2:07  
Thank you, I am excited to be here.

Ted Fox  2:09  
So oftentimes, conversations about technology, they're devoted to all the incredible things it can do, the way it can enhance our lives, from health care to transportation, the list goes on. You can find episodes of this podcast where we've talked about the amazing capabilities of tech. But to borrow a sentiment from the Spideyverse: With great power comes great responsibility. And today, we're going to talk about that responsibility piece a little bit. So about a year ago, you gave a talk at the Data & Society research institute titled, "What Do We Know? The Inability to Question Mark Zuckerberg in Congress." And in it, you described how many different answers you've heard just on Capitol Hill when people try to explain what artificial intelligence is. So I wanted to start by asking you, what is it? What is artificial intelligence? I mean, it's a term we all hear, but what is it, what all gets lumped under that term? And why is a lack of clarity on the part of lawmakers, the inability to question someone like Mark Zuckerberg, why is that so problematic?

Mutale Nkonde  3:18  
So, artificial intelligence is basically a marketing term, which is part of what makes it problematic, because we try to apply these technical understandings to what is a term that means nothing and everything at the same time. So to really understand what AI is, we have to first understand the history of computing, slightly. So the idea that machines could take on similar tasks to human beings is what spawned the original paper in 1959 on what would be artificial intelligence. So at that point, in this paper, what they outline is a future in which a machine can hear, a machine can see, a machine can speak, drawing on the senses that are typically attributed to human beings. And that is done through a process called machine learning. So at that point, artificial intelligence was this thing far, far, far away in the future, whatever that future may look like. And between kind of 1959 and the early '80s, processes which we now think of as machine learning started to be developed and take place. And what machine learning is, is when you teach a system or train a system to take on these tasks through feeding it data sets. So data sets are basically statistical patterns that lead to an outcome. So for example, in the facial recognition example, which has got huge amounts of press over the last couple of years, in order to teach a machine to see--that's called computational vision--you would feed it millions and millions of different pictures of faces, and statistical models would then be built from measuring the distance between the eyes, the distance between the bottom lip and the chin, the distance between the ears, the circumference of the eyes, etc. And those statistical models are then labeled by the machines as human faces. The issue with that is really in the training, because much of this work is experimental. It's usually done with people in your lab, or people that you know around your lab.
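To make that training step concrete, here is a minimal sketch in Python. Everything in it is invented for illustration--the landmark measurements, the numbers, and the simple nearest-centroid model--and it is far cruder than a real computer-vision pipeline, but it shows the basic move: a labeled data set becomes a statistical summary, and that summary then assigns labels to new inputs.

```python
# A toy illustration, not any production system: the "machine learning" step
# described above -- turn measurements taken from labeled example images into
# a statistical model, then use that model to label new inputs.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each training example is a vector of landmark distances measured from
# a photo: [eye-to-eye, lip-to-chin, ear-to-ear], in arbitrary units. In a real
# pipeline these would be extracted from millions of labeled images.
faces     = rng.normal(loc=[6.3, 4.0, 14.0], scale=0.4, size=(500, 3))
not_faces = rng.normal(loc=[2.0, 9.0, 5.0],  scale=2.0, size=(500, 3))

X = np.vstack([faces, not_faces])
y = np.array([1] * 500 + [0] * 500)   # 1 = "face", 0 = "not a face"

# "Training" here is just summarizing each class as an average pattern
# (a nearest-centroid model): the statistical model *is* those averages.
centroids = {label: X[y == label].mean(axis=0) for label in (0, 1)}

def predict(measurements):
    """Label new measurements with whichever class pattern they sit closest to."""
    return min(centroids, key=lambda label: np.linalg.norm(measurements - centroids[label]))

print(predict(np.array([6.1, 4.2, 13.7])))  # 1: resembles the training-set faces
print(predict(np.array([1.5, 8.0, 4.0])))   # 0: does not
```

The bias problem she raises next follows directly: if every "face" example comes from one narrow group of people, the learned pattern encodes that group, and measurements that fall outside it get mislabeled.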

Ted Fox  5:45  
Right.

Mutale Nkonde  5:45  
Those people typically look like you, have similar experiences, and you are inadvertently teaching machines that all faces are white if all the people you know are white--and the social science kind of tells us that 70% of white people do not know a Black person, so that is very likely. The way we label is often binary: male face, female face. So if you're nonconforming, if you're trans, the machine will not assign you a face, which is really problematic once it's deployed. And if you're Asian, then you may not be assigned eyes, because the circumference for normative eyes is based on these Western norms. So what we think about as AI are really systems that are trained in this way, and they're machine learning systems.

And then just to confuse us all even more, as the internet and as technical applications became marketed through private equity firms, they were looking to really increase the return on investment. So they were saying, in machine learning, you tend to test a product before it goes to market, you tend to have some type of controls; why don't we just see what the machines can do themselves? And that's a process called deep learning. So deep learning is an extension of this machine learning protocol where the machine is being tested through data sets. But in that case, the machine is allowed to just generate its own determinations around what norms are. And the famous example was Facebook, when they deployed a deep learning algorithm that developed a language that humans couldn't understand, and they lost control of their test platform. And so when they realized that the machines could potentially take over the world, they stopped the experiment. (laughs) But there is no testing in deep learning, and it's still learning through these data sets that are limited because the social conditions around education mean very, very few types of people get into these advanced technical programs from these particular schools that are then feeding Silicon Valley.

Ted Fox  8:06  
Well, I think that really illustrates, then, how ill-equipped lawmakers probably are to question someone like a Mark Zuckerberg, who is clearly a brilliant individual and clearly understands these things at a level that the average person is never going to comprehend. And if the lawmakers are supposed to be making laws to protect us--and frankly, themselves to some extent--if they don't have that kind of understanding of what these things are capable of, then we have problems.

Mutale Nkonde  8:39  
And also, it sounds like nonsense. Like, it sounds like complete nonsense.

Ted Fox  8:44  
Like science fiction almost.

Mutale Nkonde  8:45  
Right, that a machine can see through these processes. And then the other thing is, to understand the history of lawmaker ineptitude in this example, you have to also realize that between 1959 and really the mid-1990s, these systems were being developed to express artificial intelligence--AI, the marketing term that we started with--but the first law to even acknowledge that this was happening as a field was passed by Congress in 2017. So you have ...

Ted Fox  9:23  
That's a big gap. (both laugh)

Mutale Nkonde  9:23  
So you have 69 years of development, you have 69 years of people going to conferences, deciding norms, deciding labeling protocols, experimentation, and Congress is nowhere in this conversation. And then on top of that, many of the products that we see on the market right now started out in science fiction books or scripts. So the most famous example is the cell phone. It was written about in about 1952 in a novel, and then the person that created the first cell phone was a massive science fiction fan and wanted to bring it to life. Lawmakers are not necessarily futurists, so.

Ted Fox  10:06  
(laughs) Right. In that talk that I referenced before, you pointed out that one of the things that's always going to get people's attention when you're trying to talk to lawmakers about these things--and this isn't true just of lawmakers--is the amount of money involved and the impact on the economy. That will primarily take shape in the form of more personalized advertising and improved supply chain efficiency. What does that mean exactly? What are we talking about there? They sound almost kind of euphemistic, like more personalized advertising and improved supply chain efficiency. What are we seeing in terms of where this value is coming from?

Mutale Nkonde  10:42  
So prior to the COVID crisis, McKinsey had published a report in 2008--McKinsey being a firm that makes market predictions--that there was $35.5 billion of additional value that could come through machine learning processes. And what they were arguing was that if we can gather more personal data from people, then we can more effectively, as an economy, track their likes, their dislikes, and sell to them in a way that's more efficient. What they didn't say is that when you're gathering data from people in the way that we have learned to gather data from people on Facebook and YouTube and all of these other platforms that kind of just know that you were looking for UGGs or know that you like Tiger King (both laugh)--that's not magic. What they're actually doing is tracking your behavioral patterns by tracking the clicks between websites. So whenever you go to a website, and it says it has cookies, that's what those cookies are doing. And that has huge privacy implications because we do have a Constitution, and the Fourth Amendment allows us access to privacy. So there is a huge legal question around whether that type of tracking is even--it's not illegal because we don't have laws to safeguard against it, but it's also not desirable; it doesn't go within the ambition of our Founding Fathers.
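To make the click-tracking mechanics concrete, here is a minimal sketch--not any real ad platform's code, with invented site names, cookie IDs, and categories--of how a stable cookie identifier lets clicks from different websites be stitched into one behavioral profile:

```python
# A toy sketch of the tracking described above: a cookie gives a browser a
# stable ID, and clicks reported from many different sites are stitched into
# one behavioral profile. Site names, IDs, and categories are all invented.
from collections import Counter, defaultdict

click_log = [
    # (cookie_id, site, page_category)
    ("abc-123", "shoes-shop.example",  "boots"),
    ("abc-123", "news-site.example",   "true-crime"),
    ("abc-123", "video-site.example",  "tiger-king"),
    ("xyz-789", "recipe-site.example", "baking"),
    ("abc-123", "shoes-shop.example",  "boots"),
]

profiles = defaultdict(Counter)
for cookie_id, site, category in click_log:
    profiles[cookie_id][category] += 1   # the same ID links clicks across sites

# An ad system can now rank what to show each browser -- no magic required.
for cookie_id, interests in profiles.items():
    print(cookie_id, interests.most_common(2))
# abc-123 [('boots', 2), ('true-crime', 1)]
# xyz-789 [('baking', 1)]
```

Real systems layer on third-party cookies and far richer categories, but the underlying bookkeeping is this simple.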

And then on top of that, when we think about supply chain efficiencies, much of the data--which is actually not that egregious, it's the part that could be really amazing--it's just figuring out how to get goods and services from A to B as efficiently as possible. The issue is, that's going to cut out human beings, typically. Because you could write code that could then be used to train systems in the way that most, you know, machine learning, artificial intelligence systems are built, and you could build a visioning system that is meant to recognize when something is a piece of meat and cut it--as opposed to hiring a person, giving them health care, paying them wages, paying all of the associated costs. And in this era of COVID, we're actually, in my opinion, going to see an acceleration of this. If you have meatpacking plants where 1,000 people become infected with the COVID virus because they have to be so close to each other to work, all it's going to take is for a company to say, Well, actually, we can create a system where this can all be automated, and we can still have meat, and we don't need these people. But then the question becomes, where do those people go? The type of people doing those types of jobs are going to be the most vulnerable. So in the Michigan case that I read about in The New York Times, it was recent immigrants, many of them from African countries--those would then be the people who don't have work. And then you have another set of problems. What happens when you have this large group of people who are not working, but we can very efficiently get bacon? And the machine won't revolt, but human history lets us know that people do revolt. So, you know, what are the trade-offs?

Ted Fox  14:18  
Well, I know as you think about these issues that there are three areas in particular that you're really interested in. One was the future of work, and you just described that very well there. Another one--and you talked about this some earlier, and I wanted to go back to it--is bias, and how bias so often, even when it's included as part of the conversation, kind of gets bumped down to the bottom of the list, as like, Oh, yeah, there's that piece of it, too. And one of the things that really stuck with me in the talk that you gave was you were talking about, you know, these algorithms that are being developed, and you referred to it as "the code that incriminates people like me for being me." And you gave the example earlier of kind of the facial recognition technology. What are some other areas where this is particularly problematic or has the potential to be particularly problematic?

Mutale Nkonde  15:13  
So one of the things that we're facing now as we think about COVID tracking technologies is another issue of these biases within technology. And I think one of the most egregious areas where we see this is in technology that is supposed to make predictions about people's future behavior. One of the most egregious ways this came up was in a ProPublica article that came out in 2017. And they looked at a rehabilitation algorithm that was being used in Broward County, Florida. And when people were coming up for parole, they would put all their behavior into--basically data inputted into a program. This would then go to an algorithm to decide whether the person should get parole and for how long. And in every case, Black defendants were given longer parole or denied parole at higher rates than their white equivalents. So these were white defendants with similar criminal histories and similar crimes and crime patterns as their Black counterparts, and what it really came down to was all the other extra elements that act as proxies for race. So for example, one of the inputs in that particular algorithm was zip code. And people think about zip code just as being where you live, but because of our underlying history of redlining, zip code actually becomes a proxy for race. Because what redlining did was keep all the Black people in one area of town, and then divest, and then put all the white people in another area of town, and then invest. When you invest in a community, you get higher educational outputs, higher health outputs, people are not engaging in petty crime. Because in the redlining case, white veterans were being given home loans, so they were owning their own houses, they have a sense of community. Versus Black veterans, who were then relegated to areas in which there weren't good educational opportunities, there are not as many jobs, they can't own homes, there's more likelihood of petty crime. And when you put that into an algorithm and say, Should I release this person back into society, the algorithm is going to say, Well, no, don't put them back into a crime-ridden area if they are a convict, keep them in jail. But release this other person because they're going to be going into a quote-unquote safe neighborhood.

And that becomes extremely problematic when you are a Black person because despite the fact that we don't have redlining anymore, we do still have these historic patterns where Black people live in redlined communities, white people don't. And I may not be somebody committing crimes, but if I am looking for public services, for example, they use similar algorithms to decide whether I should get social services designations. The work of Virginia Eubanks is, you know, very clear on that; they're using these same algorithms to decide where to deploy police, and in the New York City context, they're actually using stop-and-frisk data to train that particular algorithm. And what stop and frisk did was stop Black and brown men when they were using our train system. And in 90% of those cases, those men were completely innocent. There were no drugs, but the arrest data is there. And arrest is not the same as conviction. So when you're saying to your police people, Where should we send people, and you're using arrest data, then you're going to send them to areas like mine, despite the fact that there is the same level of crime as there is in the Upper East Side. I live in Brooklyn, New York. And then when crime happens where there are a lot of police, it's seen as detected. But if there are no police, you don't know the crime that's going on.
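To make the proxy mechanism concrete, here is a toy simulation in Python--invented numbers, emphatically not the Broward County system or any real risk tool--in which race is never an input, yet the score still splits by group because zip code and arrest counts carry the history she describes:

```python
# A toy simulation with invented numbers -- not any real risk model -- of how
# zip code can act as a proxy for race: race is never given to the "model",
# yet its scores still split along racial lines.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Historic segregation: group A mostly lives in divested zip codes, group B
# mostly in invested ones.
group = rng.choice(["A", "B"], size=n)
divested_zip = np.where(group == "A",
                        rng.random(n) < 0.8,    # 80% of group A
                        rng.random(n) < 0.2)    # 20% of group B

# Heavier policing in divested zips produces more recorded arrests even at
# identical underlying behavior.
arrests = rng.poisson(lam=np.where(divested_zip, 2.0, 0.5))

# The "risk score" only ever sees zip status and arrest count -- never race.
risk_score = 0.5 * divested_zip + 0.25 * arrests

for g in ("A", "B"):
    print(g, round(float(risk_score[group == g].mean()), 2))
# Group A's average score comes out far higher than group B's.
```

The same logic applies to the stop-and-frisk point: if arrest data mostly records where police were sent rather than where crime occurred, a model trained on it inherits and repeats that pattern.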

Ted Fox  19:21  
The whole if the tree falls in the woods and no one is there to [hear it], like, Well, okay, it didn't happen because no one got caught; that doesn't mean it didn't happen.

Mutale Nkonde  19:27  
Right! And then we get shocked as a city where we have cells of white supremacists organizing on the Upper East Side because there's no law enforcement to stop it. They're all wondering what my banana is and if it's a gun (laughs).

Ted Fox  19:44  
Right.

Mutale Nkonde  19:44  
You know, instead of these other crimes. And so that's where bias becomes an issue, but people are not thinking about that because they assume that math is a scientific process, that it has no value, and if we look to the work of Cathy O'Neil, we know that the decisions that we make when we create these statistical models, they're influenced by the developer. And COVID trackers, as I started this, are going to use those same patterns. Because they're gonna say, Well, let's look at where all the disease is. And we know that Black and brown people have high rates of infection and death because of other--because of lack of investment in health care, right? They're more likely to be poor. So they're gonna track that, Oh, my God, in these neighborhoods, there's more infection, Black and brown people must somehow be different to white people. And it's like, No, Black and brown people are poorer, so they don't have health care; white people are not as poor in aggregate, so they're less likely to have it.
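That last point--that higher infection rates can reflect access to care rather than anything about the people themselves--can be illustrated with a small made-up simulation. In the toy data below, infection depends only on insurance status, yet the raw rates still differ by group because coverage is unevenly distributed; stratifying by insurance makes the gap disappear.

```python
# A toy illustration with invented numbers: infection here depends ONLY on
# insurance status, never on group, yet the raw rates still differ by group
# because insurance coverage is unevenly distributed.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

group = rng.choice(["Black/brown", "white"], size=n)
uninsured = np.where(group == "Black/brown",
                     rng.random(n) < 0.30,   # invented coverage gap
                     rng.random(n) < 0.10)

infected = rng.random(n) < np.where(uninsured, 0.12, 0.04)

for g in ("Black/brown", "white"):
    print(g, "raw infection rate:", round(float(infected[group == g].mean()), 3))

for status, mask in (("uninsured", uninsured), ("insured", ~uninsured)):
    for g in ("Black/brown", "white"):
        rate = infected[mask & (group == g)].mean()
        print(status, g, round(float(rate), 3))
# Raw rates differ by group; within the same insurance status they are ~equal.
```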

Ted Fox  20:44  
And I mean, it's really kind of this, I don't know what the right adjective would be, but almost kind of this twisted irony of exactly what you were saying there a second ago--of you could look at it like, Well, look, we're creating these algorithms, these formulas, so we're not relying on an individual who might be biased to make this determination. We're letting the math sort it out, basically. But like you said, I mean, that algorithm had to come from somewhere. And even if it's not, going back to your earlier example about the facial recognition, even if it's not an intended thing, if it's just that everyone who is working on this is similar, and therefore there's these unintended, these implicit things that creep into it, it ends up having the exact opposite effect of what hopefully you set out to do, which was, Well, let's take the bias out of these things. And it's just kind of spitting it right back at you.

Mutale Nkonde  21:36  
And the reason that that's always gonna happen is that we're depending on data sets that have been shaped through public policy that is inherently sexist, racist, ableist. And that's a reflection of the society that we are. And algorithms are always built on data from the past. They're never, you know, it's never like, It's 2020, we are no longer going to be racist; well, that's great, but we have 500 years' worth of data. (both laugh)

Ted Fox  22:07  
Right, right. So I want to talk about some of the work that you've done trying to take on these issues. As a senior tech policy advisor for Congresswoman Yvette Clarke, you were on the team that introduced the Algorithmic Accountability Act and the Deep Fakes Accountability Act into the United States House. I'm wondering, what do those acts seek to do, and what's their status at the moment?

Mutale Nkonde  22:34  
So those acts really sought to start a conversation. In Congress, you have acts that are introduced to pass, and that's typically in a situation where the party that you're working for controls both houses, and then you have an executive who is willing to pass nonpartisan or bipartisan legislation. And that was not the situation we were in in the 116th Congress, which is the one that's about to wind up. But we still did feel that it was really important to start that conversation.

So one of the things that we looked for in the Algorithmic Accountability Act was a way of getting at this phenomenon that I've just described of encoded bias. And we wanted to introduce this idea of impact assessment. So putting a federal law in place that would say to Silicon Valley and other development hubs, by all means create these technologies, we want to capture as much of that $35.5 billion as we possibly can in the United States. China has a very aggressive tech sector, we want to make sure that America remains dominant. But before you introduce these acts--to avoid the type of algorithm used in Broward County, which by the way was used across 30 states; so that was one particular example, but that algorithm is used widely across the country--instead of introducing that, we just want you to do this level of testing to see what impact it has, particularly on marginalized groups. And we were looking at protected classes. So we were looking at Black people, women, Native people, people with disabilities, people that were sexual minorities, and if it hurts those people, then we want you to show us proof that you are going to do some fixes around that. And you have to report this to a federal agency. So very similar to having like an FDA-type body that would be created in the hope that then the US could build its brand on having safer, fairer technologies. And that was actually something that was going on in the EU at the time with the GDPR. The EU recognized they don't have US or Chinese dominance, but what they can do is really tap into great computer engineering and create safer systems that would be bought across the democratic world. And for the reasons we introduced it for, it was incredibly successful. We started to see editorials around algorithmic accountability, people were writing op-eds in The New York Times telling us how terrible the bill was, and we were really happy because they were talking about the bill.
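As one concrete example of what such an impact assessment could require, here is a minimal sketch of a disparate-impact screen: compare favorable-outcome rates across protected groups and flag any group that falls below the four-fifths (80%) threshold borrowed from U.S. employment law. The function, data, and choice of metric are illustrative assumptions, not language from the bill.

```python
# A sketch of one check an algorithmic impact assessment might include:
# a disparate-impact (four-fifths rule) screen over favorable outcomes.
# The groups and outcomes below are invented for illustration.
def impact_report(outcomes, threshold=0.8):
    """outcomes: {group_name: list of 0/1 favorable decisions for that group}."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio_to_best": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

example = {
    "group_1": [1, 1, 0, 1, 1, 1, 0, 1],   # 75% favorable
    "group_2": [1, 0, 0, 1, 0, 0, 0, 1],   # 37.5% favorable -> flagged
}
for group, row in impact_report(example).items():
    print(group, row)
```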

Ted Fox  25:27  
They were talking about it, right.

Mutale Nkonde  25:29  
Right. And we started to see this national conversation, it was really exciting. But it was really difficult to get people to sign on. And this goes back to explainability. Because in order to become a co-sponsor, you first had to explain the science. And then we were having so many scandals--you know, the Mueller Report had just come out, we were at war with Iran, people were getting fired from the White House--so we were never really able to do much of that cultivation work that we would have liked. Even though we did see so many panels and briefings across both houses, which was great.

And with the Deep Fakes Accountability Act, we were so naive, we were thinking, We have an election coming up in a year, let's do an act that really presses forward on the work that we did with algorithmic accountability, but makes it much more applied, and looks at the way audio-visual manipulation really impacts women and girls. And we worked with Danielle Citron and Mary Anne Franks and others who had been looking at revenge porn, and really said to them, Look, revenge porn is one thing--it's one thing when you take a picture, a sexually explicit picture, and then your partner releases it. It's quite another thing where we're building algorithms that can take one picture and 15 seconds of your voice and create these pornographic images. And they don't even need to know you; they can get these images from the internet, pretty much. And we were lucky enough to have Scarlett Johansson, who was a victim of this, so she had kind of raised the volume on this. And Mark Zuckerberg actually had one made of himself the day before the bill dropped, so everybody was thinking about this, talking about this; again, we're trying to do this messaging work. And then what ended up happening was a video of Nancy Pelosi--which wasn't a deep fake because it didn't use any of these technical processes, but it was a video that had been slowed down to make it seem that she was drunk--was sitting on the Facebook platform, and Mark Zuckerberg was refusing to take it down because of First Amendment issues.

So there was this other discussion generally in the country around, What is the First Amendment for? And the way that we were interpreting [the] First Amendment, it was meant to protect freedom of speech in the instances where we have goodwill towards one another. But in this particular instance, there isn't goodwill. And, you know, we thought the worst thing that would happen in the election was that there would be a deep fake of our president declaring war, or of whoever the Democratic nominee was engaged in some kind of terrible scandal. Turns out that we have actually more scandals and quite different problems that can't be addressed through that legislation. But what we were looking for then was labeling systems on social media, and we've actually seen those proposals being taken up by Facebook and YouTube. So that's an instance where we've been able to push industry with the threat of regulation, and we didn't need the co-sponsors. But I'm of the opinion that we need the industry to change and take on new norms. And we need to regulate. Because the thing about AI is that it doesn't just exist with Google and Facebook. This is actually being adopted in banking systems and hospital systems and all of these other industries. So we still do need that regulation.

Ted Fox  29:27  
Yeah, that was one other thing that struck me looking at that talk you had given. You talked initially about how we kind of have a mental picture--again, the assumptions that we make--of, Oh, well, this is where I would expect AI or this kind of tech to have an impact, but there are these other industries where you wouldn't even think, Oh, this could be implemented there and have an adverse effect.

Mutale Nkonde  29:51  
And banking is a massive place where this is going forward. And the thing about the banking industry is that it's used to regulation, it's used to working with government. So while you would have these really high-profile meetings with Zuck and others on the Hill, that's such a small part of this $35 billion growth; like, it's not going to come from three companies or four. One famous example was the way AI is being used in office manufacturing because they now have all the data of when people are ordering chairs, and they can do very kind of strategic production of particular chairs that people are using at particular times. That then gets you into antitrust situations because how do you have innovation if we're going to be using this old data, and many of those chairs are being used by prison labor? I mean, it's a mess. Like, we just need humans. We need humans.

Ted Fox  30:50  
People are important.

Mutale Nkonde  30:51  
Yes. (both laugh)

Ted Fox  30:53  
So, as we're getting near the end here, there was one other thing: you've been a fellow at Harvard's Berkman Klein Center for Internet & Society for the past year. And I know your project there was conducting an ethnographic study on how congressional staffers learn about AI policy. Can you share anything from that? Has there been anything that's been, I guess, maybe particularly surprising to you about how they get their information? Or has it kind of confirmed maybe some of what you thought would be how they get their information?

Mutale Nkonde  31:22  
So that was the project that I pitched going in. One of the things about the Hill is that it's driven by very few personalities, and we couldn't get the access that we needed. So I had to pivot really quickly, and what I have been working on over the last year was disinformation for the election, looking at how social media algorithms were promoting disinformation towards Black audiences online. Because one of the key findings from the Mueller Report was that African Americans were the most targeted group by the Internet Research Agency in 2016. And we didn't know what shape that would take in 2020. So we're about to start the computational analysis around that, so I can't speak too much about it. But we did find that there are a number of bad actors who are domestic bad actors. And The New York Times actually did a piece on this back in March, where the new IRA tactic is to identify domestic bad actors and promote their voter suppression lines. And it's been happening. And then one of the most interesting things has been since March, those bad actors that we've been following over the last year have started to use COVID-related information to promote the idea that as Americans, we should not vote because the government doesn't work. And my argument is, as Black Americans, we definitely should vote because we need to make sure that our representatives are going to dismantle the systems that got us into this mess in the first place.

Ted Fox  33:12  
Right. So last question, and it's completely off topic, but I had to because it's, we've been talking about algorithms serving things up to us. So Netflix. Cuz Netflix serves things up to us all the time.

Mutale Nkonde  33:24  
Algorithms.

Ted Fox  33:25  
That's right. So you and I bonded a little over Twitter about Tiger King, which, if those of you listening haven't watched, and Mutale, I'm gonna--(laughing) How would you, someone who hasn't watched it, how would you describe Tiger King on Netflix?

Mutale Nkonde  33:39  
It's like a seven-part miniseries that takes place between Oklahoma and Florida in this subculture around big cats, which should sound really boring. But what you don't realize is that, it's like, in one episode, you learn about a cult, then you learn about polyamorous marriage, then you learn about forced-labor camps, then you learn about how people who get their arms and legs pulled off by tigers don't want health care because they're trying to save the tigers. Then the FBI gets involved, then somebody runs for president, then somebody--and it just goes on and on and on. And the reason that I loved it so much is that so much of my research is really grounded in the histories of race in this country--I'm always focused on the histories of Black people in this country. And Tiger King is this kind of, like, all-white curiosity. But you get really brought in because I think one of the things that COVID has done, at least for me, is made me realize that all the ways that I tried to validate myself through degrees and fellowships and my work and learning really don't mean anything, and that there are these people who really don't care about algorithms and really don't care about science and are really having the most amazing life. (Ted laughs) Like, there's one character in there who's kind of like a supporting character called Doc Antle.

Ted Fox  35:11  
Uh huh. Yes.

Mutale Nkonde  35:11  
And they do this montage where they're like, Well, how many girlfriends does he have? And people are like, well, we think 10, three, nine.

Ted Fox  35:17  
Did you see the thing then, too, he was apparently on the stage with Britney Spears at the VMAs like 15 years ago? I don't know if they had, like, an anaconda or something in the background, and then everyone was like, Oh, wait, so that was the guy, the other guy from South Carolina in Tiger King was on stage at the VMAs with Britney Spears? It's like, your mind is just, like, exploding over and over again with all these things. (laughs)

Mutale Nkonde  35:42  
It's amazing. If you haven't watched it, please do. It's great escapism. The one thing I will tell you is that I binge-watched it in March, and I often have to revisit because there's just so much going on in each frame.

Ted Fox  35:56  
There's a lot packed into those seven episodes. (laughs)

Mutale Nkonde  35:59  
And the other thing, which you haven't asked about but we talked about briefly, Hollywood is another similar Netflix series, except this is fictionalized. It's Ryan Murphy. But it's just a lot going on when you just want to block out what is a lot going on in the real world. And I've been--because obviously I live my life thinking about algorithms--I've been watching my Netflix algorithm because I revisit those queues so much. And it can't really serve up anything similar because I think that was a once-in-a-lifetime experience. (both laugh)

Ted Fox  36:34  
I was gonna say, this was a good podcast: algorithms and TV recs at the end. I don't know how you can go wrong with that. Mutale Nkonde, thanks so much for doing this today. I really appreciate you taking the time to talk to me.

Mutale Nkonde  36:47  
It's so nice to finally meet you. And please keep up with me on the interwebs because your tweets ...

Ted Fox  36:57  
We have an avatar that's a waffle, right? I mean, it only sets the stage right from there, right? (both laugh)

Mutale Nkonde  37:03  
Yeah. Well, it was so nice to meet you. Please stay safe and healthy. And let's hope that we can get through this only worrying about when Amazon is coming.

Ted Fox  37:15  
Exactly. Thanks so much.

Mutale Nkonde  37:17  
Okay, bye bye.

Ted Fox  37:18  
(voiceover) With a Side of Knowledge is a production of the Office of the Provost at the University of Notre Dame. Our website is provost.nd.edu/podcast.