esarley, Author at Harvard Business School AI Institute

The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges.

A new type of AV, a new type of AV company
Tue, 28 Jun 2022

Zoox co-founder and chief technology officer Jesse Levinson shares what makes their autonomous vehicle product and model unique from a technical and business perspective.


"We really wanted to change the way people moved around cities. And in our view that meant doing something more than just retrofitting conventional cars to be self-driving."

Jesse Levinson

Zoox's approach to solving autonomous mobility has always been a little different. In this interview, Zoox co-founder and chief technology officer Jesse Levinson shares what makes the Zoox product and model unique from a technical and business perspective. Jesse gives us a peek into how they're innovating on the hardware, the software, and the integration of the two to create a new type of vehicle. Beyond the tech, Zoox is redesigning the business model: instead of selling everybody their own car, it plans to operate autonomous electric people movers that provide rides all day and all night long.

The vision is compelling and doesn't take much to get people excited, but we had to ask the elephant-in-the-room question: is this really possible, and if so, why hasn't it happened yet?

96%

"Each one of our vehicles would be profoundly more useful and efficient throughout a 24-hour cycle than somebody's personal car, which spends 96% of its time taking up space and depreciating." - Jesse Levinson

10x'ing DEI: a focus on product and communities
Tue, 08 Feb 2022

Drawing from her wide experience across the UK Parliament, British media, YouTube, Google, and Snap Inc., Oona King shares how we can leverage tech's problem-solving mindset to drive DEI progress, her vision for open-sourcing DEI, and the next big leap for DEI: more deeply understanding how product plays into the space.

Technology affects everyone. It’s the infrastructure for civilization at this point. And each decision we make as we build products has a downstream impact on individuals and communities.

We recently sat down with Snap Inc.’s diversity, equity, and inclusion VP, Oona King, to dig into DEI from the perspective of software developers and other engineers. We wanted to know what DEI actually means for those in our community who are looking up from their machine learning models and rethinking their roles within tech spaces today.

"Do no harm is a great mantra as a minimum absolute MVP, but you need to go beyond that now. You need to think about how your product may inadvertently exclude or harm groups and communities."

Oona King

In the interview, Oona shares how we can leverage tech's problem-solving mindset to drive DEI progress, her vision for open-sourcing DEI, and the next big leap for DEI: more deeply understanding how product plays into the space. You'll hear insights from her time working in the UK Parliament, British media, YouTube, Google, and Snap... and what we can learn from her favorite show, The Wire.

10x

"Tech always talks about 10x. How do you increase tenfold, 10 times, 10x what you're doing? And they will do that for everything else in the world, but not think about it for DEI." - Oona King

HBS Gender Initiative Director Colleen Ammerman has a question for you
Wed, 17 Feb 2021

HBS Gender Initiative Director Colleen Ammerman wants to know: when does technology not work for you?


We want to hear your thoughts and experiences around tech inequality. Our Summit listening tour co-host Colleen Ammerman, director of the HBS Gender Initiative, has a question for you. Name a tech product that doesn't seem made for you, or briefly describe an experience with technology that made you feel excluded. Check out Colleen's take on the topic and send back a 10-second reply in video, audio, or text form to add your voice to the listening tour. (Our team is having a lot of fun with this tool, and we hope you do too.)

When does technology not work for you?


We're excited to hear from you, and we'll share some responses at Summit Gathering, our virtual event in March where we'll come together to discuss the problem of inequality in tech and identify opportunities to improve.

Learn more about our great partner, the HBS Gender Initiative, a team that catalyzes and translates research to drive change and eradicate gender, race, and other forms of inequality in business and society.

HBS Digital Initiative Director David Homa has a question for you
Wed, 17 Feb 2021

HBS Digital Initiative Director David Homa wants to know: how would you describe tech inequality?


We want to hear your thoughts and experiences around tech inequality. Our Summit listening tour co-host David Homa, director of the HBS Digital Initiative, has a question for you. What are your thoughts on inequality in tech? Can you name five words that summarize the main issues? Can you share them in 10 seconds or less? Check out Dave's take on the topic and send back a reply in video, audio, or text form to add your voice to the listening tour. (Our team is having a lot of fun with this tool, and we hope you do too.)

How would you describe tech inequality?


We're excited to hear from you, and we'll share some responses at Summit Gathering, our virtual event in March where we'll come together to discuss the problem of inequality in tech and identify opportunities to improve.

James Mickens on why all data science is political
Thu, 14 Jan 2021

In this episode, we speak with James Mickens from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) about the ethical challenges in cybersecurity, the societal implications of data science, and the importance of humor in teaching.

Data science and artificial intelligence have inescapable influence and power in our world. The people who are the most negatively affected are often the ones whose voices are not heard. What does a digital world that works for everyone look like? And who gets a seat at the table?

In this episode, our hosts Colleen Ammerman and David Homa speak with James Mickens about the ethical challenges in cybersecurity, the societal implications of data science, and the importance of humor in teaching. James is a professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), as well as a director of the Berkman Klein Center for Internet and Society at Harvard University.

Watch the episode with James Mickens

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): Today, we are joined by James Mickens. James is the Gordon McKay Professor of Computer Science at Harvard's John Paulson School of Engineering and Applied Sciences, as well as a director at the Berkman Klein Center for Internet and Society at Harvard University. Welcome, James. We're very excited to talk to you today.

James Mickens (Harvard computer science professor): Thank you. Thank you for that introduction.

David Homa (Digital Initiative director): Good to see you again, James. Glad you could join us. 

JM: Glad to be here.

DH: So, let’s get started. I want to talk specifically first about people and technology and actually, the people who create technology. The people who create technology are in a unique position to sort of know when bias might be introduced. Do you have a sense of what proportion of those people are actually aware that they have this big responsibility?

JM: I can't give you a specific number. I know that the number is lower than we would hope. I think that a lot of people come through their technical education, whether it be formal, through university, or whether it be sort of informal, through self-teaching or watching courses on the Internet. But a lot of technologists, and in fact, a lot of tech-centered entrepreneurs, they come through that educational process thinking that, implicitly, technology is value-neutral - that somehow we just create these products. And sure, they can be used for good or bad. But ultimately, that's not for the technologist or for the businessperson to decide. There's some sort of hope that: "If it gets too bad, maybe the government will intervene. Or, social forces will take over. There will be boycotts. It's not our problem." But that's wrong. That's the wrong way to think about it. And so I think that the fraction of people who actively devote thought to this is getting bigger. That's the good news. I think that particularly the younger generation of entrepreneurs and tech folks are starting to think about these things more explicitly. But there's still a large swath of the tech community that would sort of prefer to not get bogged down in these "nontechnical" details.

DH: We reached out on Twitter with a poll before interviewing you and asked: "If a developer sees something that really should be dealt with, or if there's heavy bias in a product, should they always say something? Should they only point it out if they think it's a really big deal? Or, is it not their problem?" And it was pretty universal that they felt people should always say something. So maybe that's a sign that says something about our community, or maybe the trend is heading in the right direction.

JM: Or maybe we can't necessarily trust the polls. I mean, we would all say that we'd help the grandmother try to cross the street. We would certainly do that. But of course, when we actually get to the intersection, you see everyone looking around saying, "Who else is going to help this person across the street? I got somewhere to go." I think you're exactly right that in the abstract, when you ask people these questions, they say, "Oh, of course, of course. We should think about the well-being of other people." But once you get the pressure of deadlines - the product has to ship next week. Once you get the pressure of shareholders - they want to make sure that you're competitive, where "competitive" is oftentimes defined as how many features you have. "How fast is your software?" So on and so forth. When you start looking at these sort of more complex, real-world situations, I think it's easier for people to lose some of their moral centering, if you will.

CA: I wanted to follow up on that theme around the way that we sort of divide technical issues, questions, and problems from social ones. We just don't traverse that boundary a lot. So, it seems to me what you're saying about education is really critical. Can we educate people differently so that these ethical considerations are embedded in how they think from the get-go? That seems ideal. But of course, today there are plenty of people in positions of power making decisions that didn't have that education. You had this great quote in a talk that I was watching in preparation for this interview where you said something like, "Ethical considerations become less important to us when considering them could hurt our revenue." We just have these different incentives. So, I would be curious to hear you talk about where we are today. How do we meaningfully integrate ethical considerations into our decision-making? And how do we even communicate about them when there's so much variation in what people know on both the social and the technical sides?

JM: Those are great questions. I think the first step is always realizing that you have a problem. In other words, the first step is always sort of stepping back and saying, "Look, I can't just silo myself in my narrow domain of expertise. The things that I build, the company or the technology or the people that I train, they interact with the larger society." So, I think just sort of getting those high-level ideas in people's heads, that's sort of the first fight. Then, after you've won that battle, I think the next struggle is to convince people that many of these ethical challenges, they don't just have a very simple, "yes" or "no" answer. And of course, as engineers, that is what we want to hear. As engineers, we want to basically say, "Look, I get it. Ethics. Certainly I want to make sure that I don't go to jail and I get to heaven. So, can you just give me a checklist?" And then whenever something ethical happens to me, I have to make some sort of ethical decision, I just consult ye olde checklist. And then I just go, "yep, OK." Then we're done. Unsurprisingly, perhaps, this is not the way that these situations actually resolve themselves in the real world. You actually have to think about these things and you have to make difficult decisions whereby the decision you end up making may still have some bad effects. It may be the best of a series of difficult decisions to make.

"When you don't think about it explicitly, you end up getting tech that fails in ways that cause a lot of harm."

So, that's where I think it's actually helpful to have people either on staff, or people that you can talk to, who are classically trained in thinking about these difficult issues. You know, many hospitals, for example, have an ethics board. And it's not because the doctors aren't aware of these issues. It's that the doctors were, first and foremost, trained to heal people. They weren't trained to think about, you know, philosophers in caves, or whatever it was that obsessed the Greeks back in the day. So, I think that increasingly, in at least some of the bigger companies, you're starting to see there be some roles inside the company where the job is to think about some of these issues. In the same way that you have a lawyer to think about legal compliance, you might have some philosophers, some ethicists, some sociologists on staff to think about these issues, about what is the right and wrong thing to do here. "Who are the stakeholders? Are we ensuring that we're providing equity along multiple dimensions - gender, race, disability status?" Things like that. So, I think that's ultimately where we want to go. The endgame is that even if you have a company that seemingly is only focused on one thing - like making trades go fast, or providing a social network, or things like that - they still have the capacity to think more holistically about how those products are integrating with the rest of society.

DH: There’s a lot of talk about bias and data. But there’s a lot of steps in data: there’s the data gathering, there’s the collection, the storage, and the parsing of it. Then, there’s the analyzing of it. Then, on the back end, there are systems of AI and ML interpreting them, or making decisions, or projecting the future. I wonder, are there different problems at each stage there? And are some of the problems bigger than others? And where do you see the biggest problem? Or does everyone just have to be aware all along?

JM: Yeah, it's wall-to-wall problems, chock full of problems. [laughter] Christmas has come early, and your gifts are problems! That's the way that I would look at it. I mean, I think that intuitively human nature is to look at problems and try to be as reductionist as possible. It's to say, "I've got this complicated process. But here, here's the problem." I want to point a finger at something and say, "If we fix that, then we're done." I think that particularly when you look at big data pipelines, machine learning, things like that, because these pipelines are sometimes so deep, because the data sets are so complex and so multidimensional, it's oftentimes hard to say, "Yeah, if we just fix this one thing, that will solve 90% of our 'ethics problems' or our 'diversity problems' or whatnot." It's typically a more holistic type of reasoning that you have to apply. And I think that this is somewhat related to our conversation about how there is no checklist, right? In as much as, if you do this, this, and this, then you're scot-free. Instead, what it typically is, I mean, it's really a lifestyle. So, you have to - at every step of design and then implementation and then testing - you have to be thinking about some of these questions. And one pushback that you'll sometimes get from that, particularly from engineers, or from quants (or, [in general], people who view themselves as being hard technical people), they'll say, "This isn't what you hired me for. You didn't hire me to watch these videos about the value of diversity. I believe it. I believe it. Let everyone in. But I don't want to talk about this." And the problem is that there's a lot of research which shows that that type of attitude of, "I understand bias, but I don't want to deal with it in my day-to-day professional life," does not lead to unbiased outcomes. It's a thing that you always have to sort of think about, in addition to the technical side of things, or to the business side of things.
So, it's really - to get back to the original question - you've got some big data pipeline that involves a lot of different players, that involves a lot of different systems. At every step, you have to think about who are the stakeholders, who are the people you're trying to help, what are their interests? How do you make sure those interests are being protected? Who are the set of people that you don't care about, right? Who are the set of people for whom you're not actually targeting their concerns? Being explicit about that stuff is very important. Because when you don't think about it explicitly, you end up getting tech that fails in these ways, that ends up causing a lot of harm, even though, let's say, the developers and the business folks don't have any explicit malice in their heart.

A great example is facial recognition for cameras built into laptops. A lot of the early cameras that came out, they couldn’t track people’s faces of a certain skin tone. And I have no doubt that the people who designed a lot of these early systems, they weren’t explicitly trying to do that. But they didn’t ask these questions about, you know, “Where’s our training data coming from?” for example. So, they get training data that itself is biased. It then results in a biased facial recognition algorithm. They didn’t know that, though, and so they did ship the product saying, “We’ve hit all of our internal metrics for accuracy.” Until you started seeing things on YouTube where you’d have someone come into frame and then the camera would just freak out and just start emitting smoke. You know, it took that to make them understand, “We really need to rethink our process from the ground up, not just the engineering and the algorithms once you have the data.” But where is that data coming from in the first place?
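The failure mode James describes - an aggregate accuracy metric that hides a near-total failure for one group because the data mix is skewed - can be sketched in a few lines. Everything below is invented for illustration (the group names, signal numbers, and 90/10 population mix are assumptions, not data from any real camera system):

```python
# Toy illustration only: every number and name here is invented.
# A "detector" evaluated on a skewed population hits a strong aggregate
# metric while failing one group completely.
import random

random.seed(0)

def detector(signal_strength, threshold=0.5):
    """Toy detector: declares a face present when the signal clears the threshold."""
    return signal_strength >= threshold

def sample_signal(group):
    # Invented assumption: the sensor returns a weaker signal for group B
    # (e.g., poor exposure calibration), so one global threshold misbehaves.
    base = 0.8 if group == "A" else 0.2
    return base + random.uniform(-0.2, 0.2)

population = ["A"] * 90 + ["B"] * 10  # evaluation mix skewed toward group A

hits = {"A": 0, "B": 0}
counts = {"A": 0, "B": 0}
for group in population:
    counts[group] += 1
    if detector(sample_signal(group)):
        hits[group] += 1

overall = sum(hits.values()) / len(population)
per_group = {g: hits[g] / counts[g] for g in counts}
print(f"overall detection rate: {overall:.2f}")  # 0.90 -- "hits our internal metric"
print(f"per-group detection rate: {per_group}")  # group B is never detected
```

The aggregate number alone would pass a "we've hit all of our internal metrics for accuracy" review; only the per-group breakdown exposes the problem, which is why the stakeholder questions have to be asked before the product ships.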

DH: We’re talking a lot about engineers thinking differently. What’s the space or need for people who are not engineers, and what’s their role? And what would they have to be doing differently? Do we just take ethicists and put them in a room with engineers or do they need to learn something first? What’s your view on that?

JM: Exactly. Yeah. My number one recommendation: You take the ethicist, chain them to a radiator, bring in the other set of people, chain them to a radiator, see who makes it out. [laughter] My bet is on the ethicist. They’re kind, but they’re cunning. 

I would say that, if you look at any sort of modern enterprise, there's a very good chance that the set of job titles that you have in that enterprise are pretty varied. Even in a tech company, they have a huge number of lawyers, they have a huge number of business people and economists. These are very diverse companies. And so what that means is that even if you think, "Oh, I work at a widget company, the majority of people who work at this company are directly making widgets," that's oftentimes false. You know, there are the support roles to support the widget-forward workers. [laughter] You can tell I'm not a business person. That's not business talk. [laughter] So, I think that what ends up happening a lot of times is that there are these decisions about products and services that have to be made that don't just involve the people on the ground, the people who are actually making those services or products. The decisions sometimes bubble up to other parts of the company, which are not those front-facing people - the lawyers, the H.R. people, things like that. And so I think that when you talk about things like diversity training, when you talk about things like ethics training, it's not just teaching to the people who are frontline, making the widgets. It's all the way up and down the stack. And to be honest, I think a lot of the training also has to be directed to shareholders as well, because I think another key tension that oftentimes arises is that people - and by "people," I mean shareholders - say things like, "Well, yeah, all these things that you're doing that are not directly profit-focused - great. You should definitely do that... But also, don't hurt profit." They want to have this ambivalence towards these things, and that ambivalence is problematic.

CA: You do a lot of research on cybersecurity and it seems like that’s an area where we’re just beginning to grapple with the implications for diversity, inclusion, justice, and fairness. I wonder if you could talk a bit about the connections between cybersecurity and equity. 

JM: You know, all of these issues that we look at in technology 鈥 and I think increasingly in business, too 鈥 I think just defining these terms is becoming messier and messier, is becoming more and more difficult. So, going back to this example of facial recognition: So, imagine that you have cities 鈥 you don’t have to imagine it, there are cities that have cameras deployed throughout the streets, and those cameras are used, among other things, to help prevent and then later on, unravel what types of criminal activity happen. Well, if the data that was used to train those cameras to identify faces was biased, that puts certain communities at greater risk. And so there’s an interesting cybersecurity angle there too because if someone were to break into those systems and let’s say, change the way that they identified criminals versus not-criminals, that risk would fall disproportionately on certain segments of society. And so, to put a finer point on it, depending on what zip code you live in, you are more or less likely to have, let’s say, police cameras in that zip code. And as a result, the security, or lack thereof, of that camera system deployed by the city, the impact of that system being hacked into will fall disproportionately on people from different zip codes.

"When we talk about these issues of tech..., you really have to take this increasingly broader perspective on things because so many aspects of society are entangled now."

So, I think that, you know, the notion of cybersecurity is evolving. It used to just be, "Can people break into my stuff?" It's become more encompassing as technology has become more pervasive. So now, for example, cybersecurity includes things like "Can people break into my power grid?" And by "my," I mean a county's, state's, or nation's. Cybersecurity includes things like, "Can someone tamper with our elections?" And once again, there are opportunities for disproportionate impact in terms of the way that, you know, let's say foreign states might try to tamper with the votes that have been registered by certain members of certain communities. So once again, I think this all harkens back to this idea that when we talk about these issues of tech, or business, or cybersecurity, or bias, or diversity, or ethics, you really have to take this increasingly broader perspective on things because so many aspects of society are entangled now with so many other aspects of society.

DH: I always wonder about integrating these data sets. You mention certain zip codes have more data or more gathered information on the residents in those areas. And out there, there are companies like, for instance, financial institutions that want to evaluate people for loans. And they’re aggregating data from many, many sources. So, inherently it would seem there’d be more information about people and potential crime, or crime in certain zip codes. And if the financial institutions just take that data at face value, they may make interpretations. Because then that’s another thing someone has to decide 鈥 how do I aggregate all this data together and decide on a profile of who you are? And there’s implications further beyond just the criminal justice system. How many people are looking at that and what are they saying?

JM: It's a real problem, what you just described. And for all the people listening, I want to look right in the camera. Where is it? Oh, I'm stuck in Zoom World, it's right here. [laughter] I want to make this very, very clear. All data science is political. It's impossible to take a dataset and analyze it in a "perfectly objective" way. Because you're always going to be putting on there some type of value judgment about what the dataset represents; whether that dataset covers all the attributes that you care about; and what you are trying to do with the dataset. And I think that, once again, it's very easy for technologists and entrepreneurs to say, "Let the machines handle it, because it's just zeros and ones." But that's not at all the way that this sort of a system works. You look at things, for example, like predicting whether someone's going to default on a loan, maybe to give them a mortgage or something like that. Let's say I give you some dataset which looks at the historical rate of loan defaults for a bunch of different people. First of all, which communities are you looking at? You know, were they already the target of, let's say, previous predatory lending, which put them in a poor position to pay for new loans - things like that? Thinking about those questions and whether those questions are important, that is a political process. And by "political" I don't mean political in the sense that like, you're a capital "R" Republican or a capital "D" Democrat. I mean "political" in the sense that you are making a statement about what you would prefer the world to look like, given that you're going to analyze data in a certain way.

"All data science is political. It's impossible to take a dataset and analyze it in a 'perfectly objective' way. Because you're always going to be putting some type of value judgment on there."

That's what I mean by political. I think it's so important for people to understand that because so often you hear this attitude of, "Yeah, we have these humans making these decisions and of course, humans are biased. But once we feed it into this machine learning thing, once you feed into this algorithm, all of our bias problems go away." And that's just completely false. And what you end up seeing, time and time again, is that if you don't ask these political questions, if you're not honest about that kind of stuff, then you see the old biases that you were supposedly trying to get rid of being replicated in these new systems that you create. Except now you have this sort of facade of like "it's just zeros and ones." So, when I see, or I hear about things like, "oh, we're going to use algorithms to determine the first pass of CV screening." You know, you submit an application, then an algorithm basically says, "Here's the first cut" of things. That concerns me. Because there are many studies that show, for example, if you have two resumes that are exactly the same, you just change the name, and all kinds of bad things happen based on whether you change the name to a woman's or a person-of-color-sounding name, you know, so on and so forth. And so if you say, "Oh, the goal of our algorithm is to be just as accurate as our old system," your old system wasn't accurate. So, I think it's really important for everyone listening, if you're in a company and your CTO or some data scientist says, "Don't worry, we've got an algorithm on the case. We're not going to have any bias problems," fire that person! Arguably make a citizen's arrest. [laughter] Look it up, be knowledgeable of the statutes. Try to do something to them till the authorities can show up. Because that's just a terrible way of thinking about things.
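James's point that matching the old system's decisions simply reproduces the old system's bias can be made concrete with a deliberately tiny sketch. The historical data, group names, and "training" rule below are all invented for illustration; no real screening system is this simple:

```python
# Toy illustration only (all data invented): a screener "trained" to
# reproduce historical decisions also reproduces the historical bias,
# while scoring perfectly against the old system by construction.
from collections import Counter

# Invented history: (years_experience, name_group) -> past decision.
# Identical qualifications, different name group, different outcome.
history = [
    ((5, "group_x"), "interview"), ((5, "group_x"), "interview"),
    ((5, "group_y"), "reject"),    ((5, "group_y"), "reject"),
    ((2, "group_x"), "reject"),    ((2, "group_y"), "reject"),
]

def train(history):
    """'Training' here just memorizes the majority historical decision per profile."""
    votes = {}
    for profile, decision in history:
        votes.setdefault(profile, Counter())[decision] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

model = train(history)

# Same resume, only the name group differs:
print(model[(5, "group_x")])  # interview
print(model[(5, "group_y")])  # reject

# "Accuracy" measured against the old system's decisions is perfect,
# which is exactly the problem: the old system was the biased baseline.
accuracy = sum(model[p] == d for p, d in history) / len(history)
print(f"accuracy vs old system: {accuracy:.0%}")  # 100%
```

The screener is "just as accurate as our old system," and that is precisely the trap: the benchmark itself encoded the name-based bias, so optimizing against it launders the bias through zeros and ones.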

CA: So, this is a question that I was thinking about after watching some presentations you’ve given and then also reading about how you think about teaching. You’re an educator 鈥 you’re not just a researcher 鈥 and you’re really a communicator and somebody who cares a lot about trying to foster these conversations. I think anybody who knows about you knows that you really lean into humor and storytelling and narrative in these public talks, and I imagine maybe in the classroom, too. So, I would just love to hear you talk about why you do that. I imagine that it’s a deliberate choice that you’re making.

JM: You know, heavy is the crown. Sometimes I wake up and just have too many jokes in my mind and it's difficult to find a way to share that gift with humanity. [laughter] And yet, I try. So, I think that one reason I try to incorporate storytelling and humor into my public speaking is that you hear from politicians and leaders all the time saying, "We need more people in tech. Think more about tech. Tech is a great thing to get into: science, math and engineering, blah, blah." And yet, we don't have as many popularizers of STEM subjects as one might expect, given all these exhortations to go into that field. And I think that it's very easy for laypeople to get this impression that, "Oh, you know, STEM stuff is very stodgy and I'm just going to be locked away in a lab all day. And it's not fun." But I think it is, in fact, fun. I think it is, in fact, interesting. And furthermore, it is, in fact, important. You know, many of the issues we've talked about in this conversation are issues that are extremely important to large swaths of society. And I think that because of this latter issue in particular, that there are so many important issues which need to be talked about, but which can be uncomfortable to talk about. That's one reason why I think humor can be very useful - because I know from personal experience teaching engineers, also being an engineer myself, sometimes there's this reaction when someone comes to you and they're like, "Hey, have you thought about this thorny ethical dilemma?" You're just like, "Get off my lawn. I don't want to hear about this kind of stuff. You couldn't come into my chair and write as many lines of bug-free code as I could. So, just get out of here." And that misses the point. You know, it misses the point about what it means to be a member of society where you have to care more about things beyond just your narrow worldview.
But sometimes you have to lead people to that river so they can drink like a horse, or whatever that saying is. [laughter] 

So, what I find is that if you bring up some of these issues using the delivery mechanism of humor, people are more likely to be less defensive when you start talking about the more difficult things. Because, you know, no one likes to be told, for example, that they're biased. I mean, anyone out there who's listening, you know, take an implicit bias test. You will find out that you are basically like one bad day away from living in the 1500s. [laughter] I mean, it is rough! It doesn't matter how open-minded you think you are, you'll take the test and you'll be like, "I've probably got a ninety-five out of one hundred." You'll get like a negative 18 out of 100! I guarantee that. I've never seen a score higher than negative five. [laughter] And so I think that's tough. It's tough for people to hear that message. And so that's why I think it's helpful to use comedy to soften some of those blows, and to tell personal stories. You know, I grew up in the South, and I've had a certain set of experiences there, and some of them were troubling. But it's useful to share some of those stories. Because ultimately, you know, we're all people. I mean, I don't want to tear up on camera. [laughter] But we're all people. There's a set of these universal experiences that we all have. And I think that people realize that quicker through laughing, because when you laugh together… What is comedy? It's very interesting. Not to get too philosophical here, but, you know, when you tell a joke, you've built a worldview and you're asking people to join you in that worldview, to join you in this little universe that you've created. And then the same things that you find funny, you want them to find funny themselves. And in a certain sense, that's what any teaching is. That's what any advocacy is.
You're saying, "I've built this universe, this way of thinking about things, and I'm inviting you to come live in that universe." And so that's why I think it's so important that we get these messages out there, and we deliver them in a way that is both honest but also sort of caring. That makes it clear that we all make mistakes. But if you're trying to constantly learn and you're trying to constantly think about these issues explicitly, then that's the best that we can do, and that's the most you could ever ask of someone.

CA: That's great. What you're saying about humor and storytelling as a way to bring people into a place where they feel like they're part of a community with a shared experience really accords with research on bias training, which has found that simply educating people about bias actually makes them ultimately behave in more biased ways. The only way to avoid that is to frame it around "we're all trying to overcome these biases." And I think that's what you're doing: creating the condition of "we're all trying to learn together and grow and overcome this."

Before we close, is there anything that we haven’t asked you? Or anything that you haven’t had a chance to speak about that you want to leave people with? Or any resources where people can go to learn more? 

JM: I think that, and this is building on something that I mentioned a bit tangentially earlier, one of the most important things anyone can do, regardless of what their job is or what profession they're in, is talk to people. Because I feel that a lot of frictions or issues or problems arise because people just haven't been exposed to certain ideas or certain perspectives. So, one thing I'd really recommend is that people try to talk to their coworkers, talk to their neighbors, talk to their friends, and just listen and see what kinds of issues are top of mind for those people. Because if we look specifically at this problem of tech that's gone awry, tech that didn't serve the population the way that we thought it would, many of the issues that arose were foreseeable. You know, they could have been dealt with early on if only we had talked to and valued the opinions of other people, who in many cases are not far away. It's not like you need to put in a telegram to someone living at the center of the earth. You just need to go talk to the person in the next cubicle or down the street. So that's one thing I'd really encourage people to do.

I'd also say that if you are interested in some more formal training for these types of things, and by "these types of things" I mean ethical reasoning, diversity training, and so on: if you're currently a student, or you know anyone who is at a university, universities often offer resources. If you're particularly interested in issues at the intersection of computer science and ethics, you can check out the course material that's publicly available; you can see some of the readings that we have. Really, my high-level piece of advice is just talk to people, try to think about these issues involving business, tech, and ethics holistically, and I think you'll see better outcomes.

CA: And that’s a perfect note to end on.

DH: That’s a wrap on the interview. But the conversation continues.

CA: And we want to hear from you. Send your comments, questions, and ideas to justdigital@hbs.edu.

The post James Mickens on why all data science is political appeared first on 性视界 Business School AI Institute.

About the project, Pathways to a Just Digital Future (Fri, 01 Jan 2021)

Pathways to a Just Digital Future brings together thinkers, makers, and activists to unpack the roots of inequality in tech and chart a course toward a digital world that works for everyone.

Watch the animated trailer breaking down the project

Technology has the potential to build a better world… but its applications and uses are often biased and can reinforce systems of inequality. What can we do?

We've teamed up with good friends of ours: a team that catalyzes and shares research aimed at eradicating gender, race, and other forms of inequality. Together, we're embarking on a project that brings together thinkers, makers, and activists to unpack the roots of inequality in tech and chart a course toward a digital world that works for everyone. Over the course of the project, we'll host a listening tour, community gatherings, and an exciting opportunity to implement change in your workplace.

  • A 10-episode listening tour with scholars, practitioners, and activists challenging disparities in tech in the areas of data, design, and diversity on teams
  • Virtual gatherings to connect and problem-solve with your community
  • An opportunity for a cohort from the community to implement change in your world

We recognize that this project is ambitious, but we believe that if change is going to come from somewhere, it's going to come from you: as you train your next machine learning model, design your next interface, or build your next team. As a tech creator, a researcher, a maker, an employee of a company that builds or leverages tech, or a general citizen, how will YOU use your power?

Join us for the full journey or jump in along the way. We'll share updates and opportunities to get involved in our newsletter and on social media with #JustDigitalFuture.

We want to hear your comments, questions, ideas, and reactions. Send us a note! You’ll reach the team behind Pathways to a Just Digital Future: Ethiopiah Al-Mahdi, Colleen Ammerman, Tanya Flint, David Homa, Michelle Monti, Liz Sarley, and Jamie Thomas.

The coauthored brand story (Fri, 24 Apr 2020)

The way to mitigate the effects of negativity about a brand is to try to tip the balance towards the voices of more positive brand allies.

Brand managers have long lived with the myth that they are in control of their brand's story, but in fact they are only one of many cultural producers who author and disseminate brand meaning. A brand is a cultural artifact, a vessel of meaning that is filled as the brand circulates through culture, used by various groups of people in day-to-day living.

Brand authors include the brand's owner and its managers, but also the brand's consumers, critics, influencers, and other meaning-makers and gatekeepers who create and contribute brand meaning as they endorse or reject the brand, or use it as a marker to connote meaning. The brand's managers are prominent meaning-makers when they introduce a new brand to the world and establish its intended meaning, known as its "brand identity." But once the brand enters society, other authors begin contributing, and the brand's identity morphs into its "brand image," the cultural meaning of the brand that collectively lives in the minds of consumers.

“A brand is a cultural artifact, a vessel of meaning that is filled as the brand circulates through culture being used by various groups of people in day-to-day living.”

The most that brand managers can hope for is a role in shepherding and curating the brand's meaning through continued marketing communications designed to maintain aspects of the brand's identity alongside the efforts of these other authors. However, powerful voices outside the brand can counteract these efforts. Consumers, who choose or reject the brand based on how well or poorly it supports their identity projects, as well as others endowed with cultural capital and a platform for meaning-making (such as journalists and influencers), can either complement the meaning desired by the brand's managers or actively work to combat it. In today's world, where information spreads quickly and broadly, each individual author of brand stories has the potential for great influence.


So, what is a brand to do? Operating in a shared meaning-making system, brand managers need to recognize and value the meaning-making power of other authors and work to cajole and enlist them into supporting their desired brand identities. Brands do this, first, by ensuring that the meaning they put forth with their brand identity is valuable to consumers because of its cultural relevance and resonance, and second, by clearly and consistently communicating this brand identity over time.

Brand managers cultivate and educate strong, positive brand communities, filled with consumers who stand at the ready to support and defend the brand if it finds itself under attack from other authors. Brands court cultural influencers through outreach, paid support, or free products and services, and entice them to author brand stories that endorse the brand identity. Brand managers, through all of their brand's customer and cultural touchpoints, work diligently to earn the trust of their consumers and of society more broadly by matching the brand's actions to its espoused values. All of these actions are designed to spur others toward positive meaning-making for the brand, dynamically building a well of positive brand equity as the brand's meaning continuously evolves.

“Brand managers will never be able to prevent other authors from contributing to brand meaning.”

Brand equity represents the sum total of the cognitive, emotional, and attitudinal assets and liabilities linked to a brand that add to or subtract from the value the product provides to consumers. As such, it can be a motivator for purchase and an enhancer of consumption. It is a perceptual frame with the capacity to differentially change the way consumers choose, experience, and value a product, and it is the direct result of brand meaning-making.

Brand managers will never be able to prevent other authors from contributing to brand meaning. However, by continuously working with brand allies to collectively fill their brands over time with strong, memorable, and valuable brand stories, brand managers can mitigate the effects of those who fight against them by drowning out their negativity with positive voices, tipping the balance of brand equity away from the negative and toward the positive.

Two powerful ways managers can curb implicit biases (Wed, 20 Feb 2019)

Many managers want to be more inclusive, but they don't know how to get there. They are often not given the right tools to overcome the challenges posed by implicit biases. Research shows there are two small but powerful ways managers can block bias: first, by closely examining and broadening their definitions of success, and second, by asking what each person adds to their teams, what we call their "additive contribution."

When hiring, evaluating, or promoting employees, we often measure people against our implicit assumptions of what talent looks like, our hidden "template of success." These templates can favor one group over others, even when members of each group are equally likely to succeed. We need to challenge the assumptions behind our templates for success, and ask whether the criteria used to evaluate candidates will lead us to choose employees who add to our team's success or simply replicate the status quo.

When online harassment doesn't follow the rules (Sat, 16 Feb 2019)

Is there a "cost" to online harassment? Is it quantifiable? Beyond the toll it takes on human victims, does the rampant toxicity we see across social networks and communities affect those platforms' bottom lines?

Well, in December 2018, Amnesty International released a robust report on online harassment against women politicians and journalists on Twitter, which seemingly caused a drop in Twitter's stock price two days after it was published. What made Amnesty International's "Troll Patrol" report so damaging was that, regardless of political affiliation, women journalists and politicians were targeted by online harassment more than other demographics. And black women were targeted even more so, being 84% more likely than white women to be mentioned in abusive or problematic tweets.

Let's break into the report's background a bit: in April 2018, Milena Marin, the project lead for Amnesty International's "Troll Patrol," reached out to me to help guide aspects of the data labeling. The project was the largest of its kind, with over 600,000 tweets labeled by 6,524 volunteers across the world.

“Online harassment is not black and white. It鈥檚 contextual, it鈥檚 nuanced, and it can seem innocuous.”

This was clearly a large-scale data project. But our biggest challenge was that the report needed to capture the nuances of a gray area: the kinds of harassment that don't technically break the terms of service or content policies, but are still harassing in nature. Women and marginalized groups face this problem on a daily basis.

We need to recognize this gray area. Online harassment is not black and white. It's contextual, it's nuanced, and it can seem innocuous. Online harassment can also be cumulative: receiving misogynistic tweets, for example, over and over and over again. This kind of harassment can have a silencing effect on women and marginalized groups.
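
At the scale described above, each tweet is judged by several volunteers, so the raw labels have to be collapsed into a single verdict that preserves the three-way distinction between clearly abusive tweets, the problematic gray area, and everything else. Here is a minimal sketch of that aggregation step using a simple majority vote; the field names, category labels, and agreement threshold are hypothetical, not Amnesty's actual methodology.

```python
from collections import Counter

# Hypothetical volunteer labels for a handful of tweets. Each tweet is
# shown to several volunteers, who each pick one of three categories:
# "abusive", "problematic" (the gray area), or "none".
raw_labels = {
    "tweet_001": ["abusive", "abusive", "problematic"],
    "tweet_002": ["problematic", "problematic", "none"],
    "tweet_003": ["none", "none", "problematic"],
}

def aggregate(labels, min_agreement=2):
    """Collapse volunteer votes into one verdict per tweet.

    A tweet gets a category only if at least `min_agreement` volunteers
    chose it; anything more contested is flagged for expert review.
    """
    verdicts = {}
    for tweet_id, votes in labels.items():
        category, count = Counter(votes).most_common(1)[0]
        verdicts[tweet_id] = category if count >= min_agreement else "needs_review"
    return verdicts

print(aggregate(raw_labels))
# {'tweet_001': 'abusive', 'tweet_002': 'problematic', 'tweet_003': 'none'}
```

Keeping "problematic" as its own category, rather than folding it into "abusive" or discarding it, is exactly what lets an analysis quantify the gray area instead of erasing it.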

When discussing the gray area, Marin of Amnesty International said in a phone interview: "I'm really happy we made the decision to make this distinction, not just the clear-cut 'yes, abusive' or 'no, not abusive.' It's hard to categorize and a lot of people had issues with it, and that was the number one question when we published the report. People were asking, 'What is the difference? Why did you label it like this?' It also made it extremely relevant. When we talk to women about their daily experiences, and journalists, [they explain]: 'It's not the single tweet that breaks me, but it's the volume. It's every day.' It usually doesn't break the policy, but when I'm on the receiving end, and I get this content day in and day out…." She emphasized, "We [the Amnesty International team] wanted to understand how this affects women, and how it silences women. You have to have that differentiation from abusive tweets like rape threats and death threats, but also the more veiled sexism and regular misogyny, which is not against the rules, but does affect their work and their ability to freely express themselves on Twitter."

If Twitter, and social networks and communities in general, only think of content in terms of "is it abusive" or "does it break this specific rule," we will continue to create systems that harm marginalized groups and women. What's important here is to understand the nuances of harassment: the gray areas that are hard to define in policy but deeply affect harassment victims. The problem is the lack of nuanced policy and responses to harassment; harassing content and behavior should not be viewed only under the lens of content takedowns, but under the lens of response, recidivism, and rehabilitation.

Barriers, not the pipeline, prevent gender equality in tech (Fri, 15 Feb 2019)

The argument that the education pipeline does not produce enough qualified women to take up positions in science and technology-related industries does not hold water. In the US, labour force data show that the number of women working in computing and other digital technology jobs is disproportionately lower than the number of women graduating from relevant academic programs. While globally women and girls are less likely than men and boys to aspire to or pursue technology careers, women who do acquire the necessary qualifications are still marginalized in the industry, much more so than in other scientific fields such as medicine. And research shows that women report high levels of work dissatisfaction and leave science and technology jobs at much higher rates than men.

Although gender-related barriers to science and technology education have been lowered, once women enter the technology workplace, a new obstacle course comes into play, creating barriers to women's professional advancement, whether as employees or entrepreneurs. These barriers typically fall into one or more of three intermingled categories.

Discrimination

Gender-based discrimination emanates from conscious and unconscious biases about women's aptitude and ability in technical fields. These biases affect the entire career experience, from recruitment to performance evaluation, salary levels, mentoring, and career development opportunities. For example, hiring processes often encode assumptions about the gender of the ideal candidate; research from the UK and other countries shows unexplained gender pay gaps for technology workers; and women tend to be promoted into different roles than men.

Safety and security

To make matters worse, tech industry workplaces have developed a reputation for macho-masculine and/or geeky work environments that are hostile towards women. Though concrete research data is hard to come by, indications are that hostile behaviours, including overt and covert forms of harassment, compromise women's sense of safety and comfort in the workplace, leading to high turnover rates or high psychological costs of staying.

Social, cultural and institutional contexts

Finally, the broader social context can affect a woman's ability to be "successful" as a professional. Amongst other things, norms about gender roles in the home, the higher economic value placed on "work devotion" (typically expected of men) compared to "family devotion" (typically expected of women), and perceptions about what is feminine versus masculine behaviour either make it difficult for a woman to be a "good worker" or exact a high personal cost.

The combined effect of these barriers threatens to negate all the effort put into encouraging women and girls to pursue technology careers. To rectify this trend, organizations can start by developing technical, social and policy measures to improve women's experience in the technology workplace.

Technical measures could include:

  • Creative job design to reduce the influence of gender stereotypes.
  • Software to identify gender biases in recruitment processes.
  • Mechanisms to protect victims of harassment.
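
As one illustration of the second technical measure, the sketch below screens a job posting for masculine- and feminine-coded wording, using short stem lists adapted from published research on gendered language in job advertisements. The lists are heavily abridged and the scoring rule is deliberately crude, so treat this as an illustration of the idea rather than a working tool.

```python
import re

# Abridged word stems drawn from research on gendered wording in job ads
# (Gaucher, Friesen & Kay, 2011). Real screening tools use far longer
# lists and account for context.
MASCULINE_STEMS = ["ambitio", "assert", "compet", "decisi", "domina", "lead"]
FEMININE_STEMS = ["collab", "commit", "interperson", "nurtur", "support", "understand"]

def coded_words(text, stems):
    """Return the words in `text` that start with any of the given stems."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if any(w.startswith(s) for s in stems)]

def screen_posting(text):
    """Flag gender-coded words and report which way the posting leans."""
    masc = coded_words(text, MASCULINE_STEMS)
    fem = coded_words(text, FEMININE_STEMS)
    if len(masc) > len(fem):
        leaning = "masculine-coded"
    elif len(fem) > len(masc):
        leaning = "feminine-coded"
    else:
        leaning = "neutral"
    return {"masculine": masc, "feminine": fem, "leaning": leaning}

ad = "We need a dominant, competitive engineer to lead an ambitious team."
print(screen_posting(ad))
```

Running this on the sample ad flags "dominant," "competitive," "lead," and "ambitious," suggesting a rewrite before the posting goes out.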

Social measures include:

  • Promoting a change in organizational culture to be more inclusive and less discriminatory.
  • Encouraging and valuing less masculine-oriented definitions of an ideal worker.
  • Fostering greater work/life balance for all employees.

Policy measures include:

  • Signing on to established guidelines for businesses to analyse their practices and take steps towards gender equality.
  • Establishing diversity goals, increasing transparency of administrative processes, and assigning managerial responsibility to ensure accountability.
  • Collecting, monitoring and openly sharing gender disaggregated data on recruitment and other company trends.
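
The last of these measures, monitoring gender-disaggregated data, can start as simply as computing how each group advances through the hiring funnel. A minimal sketch with invented counts:

```python
# Hypothetical counts at each stage of a hiring funnel, disaggregated by
# gender. The numbers are invented for illustration.
funnel = {
    "applied":     {"women": 200, "men": 600},
    "interviewed": {"women": 40,  "men": 180},
    "hired":       {"women": 8,   "men": 45},
}

def pass_through_rates(funnel, stages):
    """Rate at which each group advances from one stage to the next."""
    rates = {}
    for prev, nxt in zip(stages, stages[1:]):
        rates[f"{prev}->{nxt}"] = {
            group: round(funnel[nxt][group] / funnel[prev][group], 3)
            for group in funnel[prev]
        }
    return rates

for step, by_group in pass_through_rates(funnel, ["applied", "interviewed", "hired"]).items():
    print(step, by_group)
```

In this made-up data, women advance from application to interview at a rate of 0.2 versus 0.3 for men; a gap like that only becomes visible, and accountable, once the data is disaggregated.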

However, care should be taken when deciding to use any of these measures. Some solutions (e.g. diversity training) are relatively easy to implement but may be less effective than more demanding measures (e.g. mentorship programs). Furthermore, well-meant actions can have unintended consequences, such as women being further concentrated in low-level positions after taking advantage of family-friendly policies like paid leave and part-time employment. Diversity initiatives should also be approached in an inclusive manner so as not to alienate men in the workplace or overlook other disadvantaged populations, which can lead to backlash.


These and other recommendations are discussed in detail in the report released by the United Nations University Institute on Computing and Society on March 15, 2019.
