mmonti, Author at Harvard Business School AI Institute
The Harvard Business School AI Institute catalyzes new knowledge to invent a better future by solving ambitious challenges. Thu, 01 Feb 2024 15:28:19 +0000

Laura N Montoya on the global cultural lens of AI
/laura-n-montoya-on-the-global-cultural-lens-of-ai/
Mon, 10 May 2021 14:23:00 +0000
In this episode, we speak with Laura N Montoya from Accel.AI about the inherent correlation between underrepresentation and bias, the need to support communities building tech systems, plus advice for young people looking to get into the tech space.

The post Laura N Montoya on the global cultural lens of AI appeared first on Harvard Business School AI Institute.

There is often a Western cultural lens attributed to modern systems of artificial intelligence. Yet remarkable advancements and innovative solutions have been developed and implemented all over the world, sometimes with limited resources and infrastructure. With advancements in AI only positioned to accelerate, threatening to further compound the historical, economic, and geopolitical systems of inequality and bias, it is critical for underrepresented people, communities, and societies to have not only recognition but also equal access to the resources and tools that will empower them.

In this episode, our hosts Colleen Ammerman and David Homa speak with Laura N Montoya about the inherent correlation between underrepresentation and bias, the need to support communities building tech systems, plus advice for young people looking to get into the tech space. Laura is a scientist and engineer turned serial entrepreneur and startup adviser. She is also the founder and executive director of Accel.AI, a global nonprofit lowering the barriers to entry in engineering artificial intelligence.

Watch the episode with Laura N Montoya

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So, today, we’re talking with Laura Montoya. She is a scientist and engineer turned serial entrepreneur and startup adviser. She’s also the founder and managing partner of Accel Impact Organizations, which includes the Accel AI Institute, LatinX in AI, and the Research Colab network. Welcome, we’re really excited to talk with you today.

Laura Montoya (founder and managing partner of Accel Impact Organizations): Thank you. I’m really excited to be here.

David Homa (Digital Initiative director): Laura, thanks for joining us.

To start with, you work in an interesting space where you have expertise in the technical aspects of AI and ML, but also work in the social aspects around how people interact with it and how it affects society. I wonder, when was your first exposure to AI, and what was that about?

LM: You know, that’s a funny question. I feel like now that I better understand what AI is… I had exposure from a young age. And, being able to utilize the computer and really being able to even just play with little, like, robots and go to science museums 鈥 that’s something I did from [a] very young [age]. But, I didn’t have that conceptualization of what AI was. I didn’t understand its potential power and scalability and the impact that it could have on the world. But, I would say that the idea of a robot, or something that potentially you can interact with that would mimic a person or another entity 鈥 because I was really into animals when I was young 鈥 I think that it was something I was very drawn to as a child. And, it was something that I felt that I could really connect with 鈥 even my toys and the robots in general. So, I would say that as I grew into adulthood, I found that I drew from that experience as a child and my love of these 鈥 you could call them inanimate objects. But when you’re a kid, you breathe life into them through your imagination.

And, as time has gone on, (and obviously the technology has gotten better), now you have these robots that actually are very lifelike, right? One of my favorites, actually, sits on my desk. And, I can show him to you. It’s Cozmo. I’m sure you guys are familiar with him, from Anki. And, they have a new one out now, Vector. But this, I think to me personally, was one of the greatest innovations because of the way that they animated his face, right? And, it allows children really to connect with something that otherwise would just be a toy. And, the way that it breeds excitement, and in the way that it brings joy… And so, for me personally, that’s something that I have taken into my career now and my understanding of AI and how I want AI to really live in the world. I want it to be something that people can connect with, [that] they can appreciate, that can help them, that can solve problems. And, I think that that’s something that’s been missing in a lot of ways from the conversation around artificial intelligence so far.

CA: I would love to hear you talk about this notion of "Demystifying AI." What is this program, what are you doing with it, and why is it important?

LM: Oh, of course. So, the Demystifying AI Symposiums is a program that we started with Accel AI Institute, actually from the beginning, about six years ago. And, the goal was really to help people better understand what AI is and how to apply it in the real world. And, that was a time when all the hype around AI wasn’t really there yet. We didn’t have all of these free courses that are available now through Coursera and Udemy, and even better courses like through Udacity and so on. And so, there had been a winter in AI prior to that. We were at a place where people were just trying to figure it out and say, “Okay, well, what is this technology and how can I use it? And would it actually make my company better? Would it make my products better? And how can I basically ride this wave?” And so, for people like me and more of the underrepresented communities that we work with, I really saw this as an opportunity to help people get into the space. To provide education and ensure that, once a technology is applied, and marginalized populations will have to transition their work, that they would also have that opportunity to reskill and move into that new economy.

The goal really of these symposiums is to ensure that people get that exposure. What we did is we actually hosted sessions, which were weekend long to begin with, and we would invite people from academia, from industry, basically experts in the field, to come in and really provide that exposure, and not necessarily from a theoretical vantage point, but really an applied vantage point. The goal was to get people with hands-on experience to really be able to use scientific packages like Python, and then apply that to the Anaconda packages, and then ensure that they could say, “Okay, if I have one specific project that I want to do, to achieve, how can I take that and actually make it happen using something like TensorFlow or PyTorch, and really utilize computer vision and natural language processing, and then make this a reality?” And so, that was the goal of these symposiums: to ensure that someone could have a very small idea, get exposure to people and a hands-on application, and then take that and turn it into something like a portfolio project. And then hopefully, also, really increase their curiosity and their interest, and then they can continue to learn and grow on their own. So, from there, we would apply and give them different resources so they can continue to learn. We have obviously a full GitHub repository with actually thousands of free resources that are available, especially now in this area of AI and machine learning, which is what we would impart on all of our attendees after they would work with us. So, that was really the goal of our Demystifying AI Symposiums.

CA: That’s great. And, it makes me wonder, is there anything that you learned from doing that work about how to do it well? You know, that you can offer to others? I mean, lots of organizations and companies are also adding more and more thinking about these issues. How do we make sure that we do create that access to marginalized communities? And, I think there’s lots of other programs and people who want to deliver on that same mission. So, what are the things you feel like are important to actually do that effectively?

LM: Oh, of course. I think the key, really, is understanding where someone is at before they start a program like that. Because you can't expect that any program is going to be a one-size-fits-all solution. For us, we realize that a lot of people – especially those that hadn't had any prior exposure – were really starting from a blank slate. Many of them had no prior experience in even software engineering, understanding any of the languages, or really using anything like even a terminal on their computer. They were starting from scratch. So, they had never downloaded any of the packages or anything like that. We had to really have a lot of patience and ensure that we were coupling people with the right volunteers and the right mentors, so that they would get that time that they needed to really ramp up from zero. And then from there, we would build them up to the next level. So, I think that was the biggest takeaway, is that… especially when you talk about AI today, and you think about, well, how do you actually do AI? Now there have been a lot of advances recently with APIs that could allow you to create something very, very simply in a day, right? Even within a few hours, if you have some technical and prior knowledge and experience.

“There is absolutely no reason why more thought shouldn’t be put into creating these products that will scale and will work for everyone.”

But, if you’re starting completely from scratch, oftentimes what people would do is really hit their head against the wall on the simplest issue. If something did not download correctly, if they couldn’t understand how to write a proper print statement, people get discouraged. And so, for us, we wanted to ensure, to be there, to encourage them, to make them understand that it is worth it, and they can’t give up just from the first step. And, they need to continue on if they really want to achieve this goal. So, for us, that was 鈥 I think for me, at least, my biggest personal takeaway 鈥 is ensuring that we meet people where they’re at, and that we can encourage them so that they continue to go and really achieve the goals that they’re trying to achieve, whether that be a simple project just to experience it, or if they’re going to move further and actually study this field and go into a grad program, and then become an AI engineer or researcher.

CA: It’s not just about imparting the technical skill or knowledge, is also what I hear you saying. I mean, that’s important, but it’s not just the training, but it’s actually creating that… capacitating people in a more holistic way, where they feel empowered and feel like they have the ability to learn and grow with the technology.

LM: Of course, yes, and not just a safe space in that they feel comfortable in doing the work that they can do, and feel like they can do the work, but also that they're surrounded by other people that look like them, right? That are on a similar journey, that are encouraging them and say[ing], "Hey, I'm right here with you, and I want to achieve this goal, too, and how can we work together and put our minds together to achieve this goal?" And, I think that that is something also that's incredibly valuable. And, honestly, that's something I took away from my time working with Women Who Code, because that was a wonderful organization with whom I'm a director, and I used to run one of their chapters. And, that for me, I felt like that time and experience was really key for me when I was first getting into software engineering. Being surrounded by other women who really want to see you achieve and are there for you and show up every week after hours and really want to help you – that is so amazing and so valuable of an experience. And, really knowing that you can do that with other people and have those networking connections that, once you're ready to take that next step and apply for a job and really get into the field, they're going to be there for you as well. So, ensuring that type of environment is something that we recreated for our members is really key.

DH: Laura, you mentioned, in particular, focusing on people who are traditionally underrepresented in tech. To you, why is that important both for society, but also for the technology outcomes?

LM: Honestly – and this has been said many, many times – if the technology is not reflective of the larger population, it's going to have bias, right? And, not only that, it's not going to be something that is achievable and something that is reflective of the broader community. And, if we're going to develop products, if we're going to have things that people can connect with – can engage with – you want them to be able to see themselves, right?

If you have, for example, a smartphone, and all your friends can use the facial recognition technology to unlock their phone, but you can’t, then, obviously, that technology wasn’t made for you, and how does that really make you feel inside? There’s this thing that potentially should be available to everyone at this point. Things like smartphones and anything, right? Hand washing sinks in the bathroom… these are utilities, right? These are purposeful items. And so, if they’re not developed in a way that actually works for the broader society, the broader community, then you’re hurting people, you’re causing harm, and you’re making them feel like they’re less than. It’s not really fair to have a world that isn’t created for people that are like you, especially when you are representative of a very large part of the population. There is absolutely no excuse and no reason why more thought shouldn’t be put into creating these products that will scale and will work for everyone with having them in mind. And also, putting in the time and the forethought to ensure that people from all different backgrounds [are represented], not just in the US. We have a very Western cultural lens when it comes to technology, even though a lot of our products are actually developed overseas. And, how unfair is that? I think it’s really adding to the issues around how the US population tends to capitalize on other economies in other societies, and that we are developing these products and utilizing people from other places to do the work for something that isn’t even going to be used by them. I think that that is something that we really need to change in this world, and it’s unfortunate.

DH: And, it’s important also, like you said, for people interested in this space, or who might be interested in the space, to see people like themselves doing this work. How about for you? Have you faced issues personally with not enough people looking like you? And, how have you overcome that?

LM: Yes, of course. For me personally, being a Latina, being a woman in this field, when I was first getting started, I didn't see a lot of people, especially in the space. When you think about artificial intelligence, you see a lot of white men with PhDs. They've gone to Stanford, they've gone to Harvard and MIT, and they're the ones that are publishing the research, and they're the ones that are taking on the position of CTO or AI Director within an organization. And, it is hard to look at that. For someone like me who wants to be… wants to embody that role, who wants to be that person, (and was the first person to graduate from university in my family), I think it's invaluable to have role models that look like you and that you can see yourself in. And so, that's a big reason why I created the LatinX in AI organization – because I wanted to provide that for Latinx people, not only within the US but also in South America. And, I think I, personally, occupy an interesting space, being a Latinx American, because I can see all sides of that culture and that ethnicity. And, oftentimes it can be hard as well, right? Because, when other people come from another country and they see you as an American, they don't necessarily think that you have the same life experience as they do. And, they're absolutely right. My life experience is completely different than their life experience. But that's also why I want to provide that for them as well.

“We have a very Western cultural lens when it comes to technology, even though a lot of our products are actually developed overseas. How unfair is that?”

And, that’s also a huge driver of why I started the organization: to ensure that people that come from anywhere 鈥 whether that be a Latin American, whether that be someone who is born in Mexico or Uruguay or Brazil or Colombia 鈥 that they can see someone achieving in this space in the US And not only in the US, but in other developed countries as well around the world, that they can compete in the global market. And so, that’s part of why we host these large AI machine learning conferences 鈥 like the Neural Information Processing Systems Conference and the International Conference on Machine Learning that are often hosted in places like Montreal and Toronto, and in different parts of Asia and Germany and so on 鈥 is because we want to ensure that those people have the opportunity as well to get out of their comfort zone, get out of their personal space, and the bubbles in which they’re currently working in, and gain exposure, and also have representation in those spaces. Because if you 鈥 and it’s obviously not just with the Latinx population, with the black population, and being a woman in general in those spaces 鈥 if you go into that space, and all you see are people that don’t look like you, it’s not very welcoming, and it’s not encouraging. And so, just like that experience we had with Women Who Code, right? And, how that space made me feel safe and welcome 鈥 again, that is what I want to provide for others. So, that was a driver for me in creating this part of our org, and it’s something that is incredibly valuable to me as well.

DH: I was wondering what important and interesting things are going on in South America that maybe people haven’t heard about?

LM: Oh, wow. There is so much, honestly, that's happening [laughter] – it's incredible. And it's funny, because you don't hear about it as much, obviously, from here. But, what I would say is really amazing are the startups that are now popping up around AI machine learning as well. One of them that comes to mind is Rappi, which is similar to probably what you think of as Uber here, right? And, it's a company that basically delivers anything that you need through a phone app. They have courier services that come in, and basically, they use AI machine learning technology to connect all the couriers to the people that are trying to order these products. And, they also want to ensure that they do it in a way that hasn't been done before here. One of the reasons why it's pretty unique is because they don't drive as many cars in that area. This is in Medellin, in Bogota, in Colombia. And so, the delivery often happens by bicycle, it often happens by motorcycle. But still, they're able to track these individuals and make sure that products are delivered at a really rapid pace. For me, I thought that having that company get so big so fast, and then expand to many other countries within South America, was incredibly amazing. It's something that I think [has] actually created great value because different remote areas of these countries now are able to receive products and goods that they haven't been able to receive before. And so, for me, that's something that we would traditionally think of as very common now, especially in the US. But, it's not something that they've had for very long. So, it's amazing. I think that that was something that I thought was really exciting to see from a more product standpoint.

On top of that, if you think about the medical industry, in places like Cuba, for example – because I have a lot of friends that actually live in or [were] raised in Cuba – they are quite advanced from a medical perspective. So, most of the funding that has gone into technical development within Cuba has been applied to the medical industry. And so, they have now been able to use AI machine learning and apply it for different areas of research, when it comes to detecting different kinds of cancers, to solving treatment problems that they wouldn't necessarily have had before. But they are doing it in a way that uses what we would consider limited technology. They do basically what's called AI on the edge, because a lot of the processing is just handled through your smartphone, right? And, then it sends the imaging data through the cloud. They do it that way because they don't have the resources to necessarily purchase these large X-ray machines, and they wouldn't necessarily have the servers available to them to actually send all of this data and do the processing themselves, through large servers that we would normally host through Amazon or through Google Cloud services here in the US. So, just seeing that type of advancement and seeing people and professionals be able to solve these problems, using the same ideas, the same theories, the same types of technologies, but applying them in their community even with limited resources – I think that that is amazing. And that is really something that we should be proud of and that we should speak more about.

CA: So, part of it is about supporting that work, like you're saying, elevating that work, and enabling more people to do it, both in places like the US and elsewhere. But, it's also about getting people in places like the US or Europe or Canada to pay attention to what's going on in those places. So, like I said, it may be a tough question to answer, but [I'm] just curious for your thoughts about how do we do that. How do we get the people who are in these positions of power and influence in places like the US to really pay attention and be open to learning from what's happening in Cuba or Colombia?

LM: Yeah, of course. So, a lot of that is the work that we're doing as an organization to provide that representation and get others to pay attention. Part of it is ensuring that these people are in the same place and time. That they have the opportunity to speak in a way that allows for collaboration and exchange of ideas and, really, that opens their minds up to the fact that there is a broader world out there, right? And, that people are doing this work and driving innovation in places that you wouldn't necessarily consider otherwise if you didn't have exposure to it. So, for us, we think that's first and that's key – putting people in the right place to really drive those conversations. Otherwise, we really need to ensure that the researchers from the US and from Canada are paying attention. And, that they understand really what's missing from these different places, and what they can do to make a difference.

And so, for us, after we surveyed many of our members within Central and South America, we found that one of the key areas of need that was lacking was mentorship. What traditionally happens 鈥 because these countries do not have the same types of resources and their governments do not invest as much within research and development within their countries 鈥 [is] people end up leaving. Once they get a master’s degree, once they get a PhD, the goal is to get out of the country and find a place that will actually pay them more, so that they can have a better lab, they can provide more for their family, and they can get that exposure. And, on the one hand, that’s [an] amazing opportunity for that individual, and it’s great that they were able to make it that far. But also, then, that still continues to create this vacuum of this lack of mentorship within the country itself, and basically, this brain drain that happens in that area, where all of these very highly educated individuals that are doing this wonderful work are now moving on to other places.

“We have very well-known researchers and engineers that are showing up to work every day, and they’re still experiencing bias.”

And so, what we have done, as well, is created this mentorship program that we offer, where we connect people from all over the world, and specifically people working at very large, very advanced companies. So, we have our mentors include people from Google, from Facebook, from LinkedIn, from Apple, and so on. And, we connect them directly with researchers in South America and Central America so that they can talk about their work, and so that they can gain exposure to people working within the industry. But, when you're really considering maybe the people that are not reachable – those that wouldn't necessarily take the time to mentor or aren't already listening and aren't open to the conversations – I think the key there is just going to be more time and more exposure. So, the more we bring people in from different countries that show that they can do this work and have the skills and want to be in this space, then the more they move up in the community, right? Others are going to have exposure to them and they're going to see, okay, well, actually, people like this are doing the work. And it's the same thing, I think, for any minority group.

DH: During this pandemic, a number of companies have obviously experimented heavily with remote work, and also this idea that some people maybe feel they don’t need to be out near San Francisco, they don’t need to be in the Valley anymore, they don’t need to be exactly at the home office. And, there’s been a bunch of talk about smaller cities in the US benefiting from people either going back home or leaving. And, I wonder, can you envision that extending to beyond the US and maybe into South America and other places? How might that happen, and how might that change things?

LM: We think that – maybe in the US – even if you're going to a more remote state, or a rural environment if you live in the country, maybe the internet isn't as strong, right? This happens in our country also. You know, someone from a farm in Iowa potentially wouldn't have that opportunity to do the most advanced machine learning research if they're sitting in that family home. But, potentially they could upgrade that home, right, to get to the point where they could do their work. I think that that is a little bit more feasible here. In South America, there are very, very remote areas that wouldn't even have that option, right? That you can't just call up your utility company and say, "Hey, I want to install this gig fiber internet [laughter] to ensure that I can do my job." So, I think that it's a little trickier in those cases, and the support infrastructure has to be there first.

There are some grants now that are coming out, actually, to help provide a more stable internet for people within more remote locations in different parts of the world, and I think that that's something that is very valuable. There are a few organizations that are trying to help solve that problem, as well. And they will also bring things like laptops and extra computers with them, so that other people in those environments can start gaining access to the internet and to technology in a way that they haven't had previously. So, when I envision it happening, I think that it's something that has to be – it's not like an individual can just take responsibility for it – I think that it's something that has to be more of a community-wide endeavor, that potentially the group and the community has to come together to say, well, this is something that we want, and this is something that we're going to work hard to achieve. They have to then find the proper resources and reach out to the right organizations to come and help them. So, I think that that's going to be a larger barrier for people within different parts of South and Central America who are lacking that access for sure.

DH: Laura, is there anything in particular you want to cover or speak to or share with people?

LM: Right now, there’s a lot that is happening in the area of AI and ethics space. And, for me personally, I think that it is invaluable that we continue to support the people that are doing this work, and not just the people that are living in other countries, but also the people that are within our community, right? We have very well-known researchers and engineers that are showing up to work every day, and they’re still experiencing bias. And, they’re still facing basically a marginalization within that community. It doesn’t matter that they have a PhD or they’re well published or they’re considered a research scientist. For us, it is essential that people who come from different backgrounds are represented within artificial intelligence and machine learning, because this technology has an effect on everyone’s lives, right? And, there’s absolutely nothing in the world almost today that you can touch that isn’t going to have AI embedded in it somehow going forward into the future. Someone said [something] recently that I really appreciated, that “AI is the next internet.” In the way that the internet is basically all around us, and it’s something that you can’t live without at this point. Well, that is going to be AI within the next few years. And, it almost is now, right? Between our cell phones, between our smart toothbrushes, right? [laughter] Like anything, really, that you touch. And so, if that technology is not representative of the broader population of the world, like, you’re doing people a disservice, you’re really causing harm. And, that’s not okay. People need to take a step back and really think about: is this the world that I want to live in in the future? Is this really the impact that I want to have on society is to create something that causes harm? And, I don’t think people really want that. 
I think deep down, most people, they just have been inconsiderate, and they haven’t been really taking into account the real potential cause of their actions. So, I think that that’s what I really want to leave the conversation with, is kind of that call to action to people to take some time and reflect and really ensure that the work that you’re doing does help and doesn’t harm others.

CA: Besides your own work, which we’ll definitely be sharing some of these great programs that you’ve talked about, what are some other resources you would point people to, whether it’s books or other organizations or people to learn about?

LM: When I was first starting out, honestly, we really just focused on the book. And, this is something that is available for free online. What I really appreciate about this book is that it basically takes you from the intro-level mathematics and then allows you to build up from there, because you have to understand that the basis for all of artificial intelligence and machine learning is math, right? So, for people that appreciate that and want to go further into theory and want to publish research and investigate that space, that would be an amazing resource for them to start with. But now, if you’re math-shy 鈥 because I know a lot of people are math-shy, and they don’t necessarily want to jump into linear algebra or calculus or differential equations or anything right from the start 鈥 I would say, honestly, one of the best resources is . I think that Rachel [Thomas] has done an amazing job putting that course together online, and their videos and resources help people get that hands-on experience. And again, just like our Demystifying AI Symposiums, [they] really help you build projects from the ground up and just get your hands dirty without thinking necessarily about the theory or thinking about the math. And, I think that that is really great.

“it is essential that people who come from different backgrounds are represented within artificial intelligence and machine learning, because this technology has an effect on everyone’s lives.”

Other than that, I would say pick a project. Like, if you’re going to do anything, you have to have a goal in mind. And then, from there, you have to break it down into small steps and work to achieve that goal. Otherwise, you’re going to get overwhelmed, or you’re going to lose interest or lose confidence in why you’re even studying this. Because the world of machine learning is so incredibly vast, and it’s changing rapidly, every single day, right? If you think about the number of papers that are being published and the different models that are coming out now, it’s like every day there’s something new. And so, if you try to just keep up with that and say, well, how am I ever going to achieve this ideal view of what AI is, or the pace in which the industry is moving, then you may get discouraged. But, I would say don’t do that. Just focus on what you want to achieve, right? What do you want to solve with artificial intelligence? How can it help you? How can it help your community? I think that that is really the key in this case. And, there is no other way to approach this problem of getting into AI, other than really thinking about your core values.

DH: Laura, thanks for joining us today.

LM: Thank you very much for having me. It’s been my pleasure.

CA: This has been a really fascinating and inspiring conversation. Thank you so much for talking with us.

DH: That’s a wrap on the interview, but the conversation continues.

CA: And, we want to hear from you. Send your questions, ideas, and comments to justdigital@hbs.edu.

Lumumba Seegars on inequality and agency in ERGs (Mon, 26 Apr 2021)

Employee Resource Groups (ERGs), also referred to as affinity groups, have been a staple of many organizational cultures for over 50 years. Typically organized around a shared identity, such as race, gender, or sexual orientation, they can benefit employees by offering a safe space to express organizational concerns or frustrations, a dynamic network for professional growth, and a more secure platform for advocacy. For organizational leadership, ERGs can help to identify and develop internal talent, broaden and target recruiting goals, and support retention efforts. However, various tensions, both between employees and leadership and within the groups themselves, limit their potential.

In this episode, our hosts Colleen Ammerman and David Homa speak with Lumumba Seegars about the challenges and limitations employees face when participating in Employee Resource Groups, how organizational leaders can be more effective allies, and the critical importance of intersectional approaches to any work involving change. Lumumba is a PhD candidate in organizational behavior at Harvard Business School, and his dissertation research focuses on how women and racial minorities collectively organize around their identities at work and try to make their workplaces more inclusive.

Watch the episode with Lumumba Seegars

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So, today, we’re talking with Lumumba Seegars. Lumumba is a PhD candidate in organizational behavior at Harvard Business School, and his dissertation research focuses on how women and racial minorities collectively organize around their identities at work and try to make their workplaces more inclusive. Thank you so much for joining us today, Lumumba.

Lumumba Seegars (Harvard Business School PhD candidate): Thank you so much. I’m excited to be here.

David Homa (Digital Initiative director): Lumumba, good to see you again.

LS: Great to see you as well. Thank you.

DH: So, Lumumba, you’ve actually written a little bit about leadership, because there’s a big push, especially in the tech industry, to get representation into leadership. But that alone doesn’t necessarily solve things. It’s not a silver bullet for sure. Tell us a little bit about the complications there.

LS: So, I recently wrote an article with Lakshmi Ramarajan in this broader book about race, work, and leadership that was edited by Laura Morgan Roberts, Anthony Mayo, and David Thomas. And, in this book, there’s this fundamental tension that we think about, which you alluded to, where we say representation “isn’t emancipation,” right? Just because you’re getting more people there doesn’t mean you’re actually dismantling the structures that are leading to these inequalities. And, one of the key problems is that, if you look at organizations, and we think about who is in charge, especially in these predominantly white organizations, the same issues, the same traits, the same characteristics that might make somebody rise in this company, particularly a Black person, are not actually the same things that will make them want to more broadly challenge inequality.

And so, let’s think about this for a second, right? If I’m in a company, if I have to be a leader, right? Leadership is both granted, right (people have to give it to me and say that you are going to be the leader), and then I have to accept it, okay? I have to say, “yes, I want to do this.” And so, we built this theory we call dual and mutual identification. Basically, we say that as a Black leader in this predominantly white organization, I have to want to stay in this organization if I’m going to be a leader, right? For example, let’s think about even at HBS. Like, I have to say I want to be in this environment, to a certain extent. And so, there’s identification: I can be here. But then also, if I want to continuously challenge inequality, a part of that is me thinking and still identifying with Black people as a Black leader, in terms of people who have been marginalized, and saying, not only do I want to stay in this environment, but I have a strong identification with the people who are marginalized within this environment. And so, that’s going to give me two different types of motivation: motivation to access resources and motivation to use those resources to challenge inequality.

“Just because you’re getting more people there doesn’t mean you’re actually dismantling the structures that are leading to these inequalities.”

Now, we say that, once I’m here, if the white majority in this organization is saying, “you know what, we think you’re a leader, you should progress,” right? That is that mutual identification. But once you get up there, we talk about four different types of leaders. Basically, you say there’s this Black leader who has identification with the white majority, wants to stay in the organization, wants to challenge inequality. Then we look at how people view that leader.

And so, you can think about the Black leader to the extent that they can have identification from both the white majority and the Black minority in the organization. We actually say that they are more likely to be able to effect more lasting and more systemic change, because the people in power are saying, “yes, you can stay here.” But also, and we don’t often think about this, the Black minority can protect them more. There’s much more of a sense of, like, we got you, we have your back, and you’re one of us. And so, you can’t just get rid of that person.

So, I study employee resource groups (ERGs) in my dissertation: these are groups of people organized around a particular social identity. So, it’ll be like women@ or Black@ [or] something like that. And, one of the Black ERG members was talking about a Black leader, saying, “we have to protect this person, we have to make sure that they know that we have their back.” And, that’s important because that gives that person a little more breathing room, right? I’m not on a tightrope. I don’t have to just do what the people in power say I have to do. If I kind of step out on a limb, there are people who are going to be there with me. That’s important. And, that’s where you kind of say representation isn’t enough, because you might have people up there rising, but they may not have that same identification with the people who are still marginalized in the company. And, that’s something that companies have to think about.

CA: It occurs to me, if I’m thinking about it the right way, that it matters, then, how that Black leader positions themselves in relation to the Black employees. Because I can imagine that an ERG, say Black@ whatever the company is, is probably more interested and willing to say, “hey, like, we’ve got your back, we want to support your leadership,” if they feel as though that leader is invested in them, right?

LS: Exactly.

CA: [And if they] sort of don’t feel like that leader is going to bat for their needs and interests, then they might say, “well, you know, we don’t really feel like this is something that we have such a strong stake in.” So, [it] just seems like there would be an interplay there. Is that right?

LS: That’s exactly right. And, this is what can be confusing for people who just think about representation. They’re like, “oh, we gave you this person, why aren’t you happy?” and they’re like, “uh? That’s not really a person who’s on our side.” And, we use this quotation from Zora Neale Hurston: “All my skinfolk ain’t kinfolk.” And, that’s important to think about.

So, I think a lot about slavery. And, I think a lot about slavery and capitalism and the relationships within organizations, especially if we think about how much of our interracial relationships came out of the master-slave relationship. One thing I think about is that if you take a very hard stance, if you go to the extreme and think about a plantation, and think about leadership on a plantation. If you think about the enslaved people and their masters, the enslaved people aren’t saying, “if we could only have more Black overseers, we would feel better.” There’s something structurally wrong with the context. And so, if we look at a lot of our companies: externally, we can talk about the practices we’ve engaged in. Internally, we say there are structural issues, there is racialization, there is this white standard of neutrality. We talk about there being a standard of the man as the ideal type. You can’t just put people in places and not look at what the structure of these work relationships is.

DH: And yet the structures are put and kept in place by the people who organize the companies or found them. So, we end up in a bit of an impasse there. Any advice for that? Are we just waiting for founders to wake up, or do we need a different system?

LS: I wouldn’t say wait. If we wait on people to wake up, that’s going to take a long time. I think you can persuade people. I think there are some people… I think there’s variance, right? There are people who just don’t care. There are people at one extreme who say, “I don’t care. I want to make more money the way that I need to make it.” Then there are the people who are going to say, “look, if there were a better way, I would be happy to try that.” And so, I think that’s where some of the persuasion comes in. If there were a better way, and you have these founders, and [they] say, “can I start?” And, that’s where you say, look, you can’t wait to do diversity, equity, and inclusion once you start making money. You have to bake it [into] your strategy as a company, and say, look, we’re going to be an equitable place. That means we’re going to have to take more time in how we think about hiring people. That means we’re going to have to constantly be challenging our assumptions about who can do the work and what type of work they can do, where they can do that [work] from. So, that’s one set of people.

I think there’s another set of people who don’t have access to the same resources that most founders have had, and you have to say, how can we activate those people and make sure they have resources to start organizations that we haven’t imagined? Like, I’m in the business of scholarship, right? I do research for a living. I’m creating new ideas. Fundamentally, it’s a creative task. So, when I think about the knowledge that we have now, every paper I read, I’m like, “oh, something we didn’t know. That’s something we hadn’t seen.” I think the same thing is possible in business, right? There are people who will create new forms, new things that we haven’t seen or imagined. We just have to make sure that they have the resources and that they’re able to use that creativity. So, I think there’s a lot of talk about persuading those who have power. But what about providing people without resources a chance to actually build their own power and to self-determine for themselves? And, I think that also has to be a part of the discussion.

DH: You mentioned pipeline before: people are worried about the pipeline, but it’s really about what’s happening inside organizations. There are actually some people now looking at what’s happening inside of universities. I know there are a couple of different programs. Northeastern, in particular, has one called the Align program. It looks to people who are typically underrepresented in computer science and has master’s degree programs for people who don’t have undergrad STEM degrees, because they realize the pipeline hasn’t served these people well. But, that doesn’t mean that they don’t realize when they’re a little older or a little more mature, like, wow, I can do this. So, providing the opportunity there is really critically important.

LS: And, providing the opportunity, but also saying that you can do something different. We can’t just say, all right, you have not had access, and we are going to assimilate you and make you good enough to fit into this program and to fit into the system, the way we’ve thought about doing things that has structurally marginalized people like you. Now you, you individual, you come through, and you be successful. That actually doesn’t change the nature of communities and groups of people that are being marginalized. You have to also say, all right, you can come here, and how [are] we going to learn from you? And, what type of creativity and what types of new insights can you bring?

And, I think that’s almost a larger scale. If you think about Robin Ely and David Thomas’s work, this idea of integration and learning, it’s like you almost think about taking that out of the context in which they study it, within teams and organizations, and think about an intervention for community learning, where it’s like, we’re not just going to say, come in here and do what we want you to do. We can actually learn, and you will bring new insights, not because there’s some essential difference between us. Like, there are no essential biological differences between races or people of different genders. But because you have had different social experiences, can we open up the way that we think about the world? Because you are coming from a new perspective.

“All change has to be intersectional, and we have to think about it that way.”

And so, that type of thinking, I think, opens up new creativity, new ways of being creative, for people to say, all right, this is how it’s affected me and people like me. How can we think about engaging in a new way of doing business, a new way of building organizations, new ways of building communities? Not just, oh, we’re going to let you into Harvard Business School or into Northeastern, and we’re going to just… we want you to do what we do, and we want you to get good at what we’re good at, when what we have been good at, to be quite honest, has been perpetuating the same systems of hierarchy and marginalization. Maybe we want to be able to learn from the people who we’re bringing in, more than just making them do what we’ve always been good at. When I say “we,” I’m talking institutionally, looking at some of these universities as well.

CA: I want to use that to segue a little bit to this idea of changing organizations or changing the system, changing the context, and give you an opportunity to share with us a little bit about the research you did for your dissertation and what you found. As you said before, it’s on these employee resource groups. Sometimes they were called affinity groups, right? But it’s these networks for women or Black employees or Latinx employees. And, they’re pretty prominent in the tech sector; big companies pretty much all have them. So, I just would love for you to tell us a little bit about what you found. What are these groups doing? How effective are they at actually changing organizations, making them more equitable? What are the challenges they’re facing? So, [we] would love you to share some of those findings with us.

LS: I went into a couple of companies, and I looked at just their Black and their women and their Asian employee resource groups. I focused on that. And right now, in my dissertation, I’m focusing on the Black group versus the women’s group. And it’s just in one company. And, what I saw, really, is that one of the things that shaped what these employees were doing was that their relationships with the leaders of the organization were very much resonant with the historical and cultural power relations between white men, who were the main people in charge of the organization, and white women (white women were the main group within the women ERG), and between white men and Black people. And so, what does that look like?

I found that, actually, the women ERG had this relationship with the organizational leaders of dependency, whereas the Black ERG had this relationship of deprivation. And, what this looked like was that, for the women ERG, there was actually this… Well, first of all, nothing happened for a while, then scandals happened, as they do in tech. And, then there was this, all right, let’s start pouring in resources, after this sense of threat. So, let’s acknowledge that both groups were being ignored, at first. And so, when the resources started to come in, there were more resources within the women ERG. And, there was a lot of talk about, also, this familial dependence. We can’t do things without men. We need them. We need allies. Very much like, we have to get allies in here. And, there’s a lot of talk about allyship. But at the same time, the white men in the company, as they gave more resources, were also able to co-opt what the ERG did. And, they were like, this is what you’re going to do, this is what’s effective. And, the women ERG had to [say], these were the goals, this is what we want to do. So, there was this kind of tension, in the sense that they were dependent on these resources if they wanted to do much.

I think both ERGs started off with these two broad goals. One was a sense of emotional and psychological safety. This is a safe space, where I can just come in and check on other people. I want to make sure that I’m not the only woman feeling this way. I want to make sure that I’m not the only Black person feeling this way, right? And, then there is also advocacy, where they wanted to advocate and change the company. I think that this relationship of dependency really shaped the women ERG, because it became less about “are we advocating or are we feeling safe,” and more like, “all right, how can we get allies involved and how can we show opportunities for women to be successful within this particular system?” Again, not changing the system; they’re dependent on the men who are in charge of the system for those very resources.

And, what I then saw in response was what I called collusion. In the sense that white women actually prioritized the relationship with the white men in power in the organization, and actually deprioritized other women of color, particularly the Black women in the group. And so, you saw this collusion. We have to get allies, and allies are important because we’re not alone. But when you asked, “how do you think about the racial issues that Black women in the group are having?” Oh, well, there’s a Black ERG for that. And so, you saw this juxtaposition of prioritization and deprioritization. And I call this “latent privilege” based on race. And, a lot of times, we think about privilege as this thing that is invisible or latent. But, I really highlight this latent [aspect], because you’re talking about a group of people that was being actively subordinated, actively marginalized within this company. It’s still tech, right? And, to combat this marginalization, they were getting access to resources. But, a lot of that came through their shared racial privilege with white men. And so, there was a prioritization of, like, all right, we can pool these resources together, as long as white men still control the resources, but we have access to that.

Now, let’s set that aside and look at the Black group. The Black group was often deprived of resources, and it was like, well, we’re not going to deal with these Black issues. They’re not relevant. We’re not going to really talk about this, right? And so, as the Black group was trying to do things, they often felt like their goals ended up being dismissed. And so, you have them trying to talk about “these are the issues we’re facing,” and people [are] like, “why are we talking about that? That’s not relevant here.” And, this is even after the same scandals. There was both a gender scandal and a race scandal, where [there was] this kind of dismissal of racial issues as important, which we, again, often see in society. And so, in response, whereas the women ERG, mostly white women, colluded to maintain this access to resources, you saw the Black ERG separating itself. They were like, you know what? We’re not getting any resources. What we can do is create a safe space.

What the Black ERG then actually ended up doing was creating and maintaining this private chat room, where they could support each other, and even [during] their office hours, they talked about it being a safe space away from white people where they could actually speak honestly and candidly. And, I call that a sanctuary. And then, within that, you saw Black women, actually, they were being marginalized in the women ERG, and then in the Black ERG, there was still some sexism as well. And so, what do they do? I saw them actually build an inner sanctum within this Black ERG where they were still involved, but they were centering both their race and their gender.

“in order to actually help people create these more equitable systems and organizations, you have to allow a level of self-determination that is beyond what we normally see.”

Ultimately, as I said, the Black group’s goals were dismissed. The women’s group’s goals were co-opted. And so, they both were able to do certain things. But, at the end of the day, the actual structural issues in the company that maintained the racial and gender hierarchy went unchallenged. And, that’s that lack of effectiveness that you often see, [which] is that there might be some reprieve from the inequality through these groups, but these relationships end up minimizing the actual effectiveness that can happen, because those structural relationships remain the same.

All change has to be intersectional, and we have to think about it that way. So, if you’re in a women’s group, it can’t just be like, all right, let’s figure this out, and let’s be on the same accord, and then let’s deal with this later. It actually has to be like, how do we deal with this complexity? And, not just the complexity. A lot of times people say, “you know, white women face certain issues, and then women of color face these issues, and it’s different.” It’s not just different: there’s actual racial harm that has happened between those groups and between those people that has to be accounted for.

And so, in my data, I see Black women not just saying, oh, we’re treated differently. It’s not that it’s irrelevant. It’s actually that they’ve heard white women say very harmful racialized things and racist things within the context of their relationships. And, I think the same thing can be said within the Black group. It’s not just that Black men and Black women have different experiences, it’s that we have to be conscious of the actual patriarchal elements that have been [in place], and of how we organize and how we relate to each other. And so, to create more effective spaces, you actually have to deal with those issues, deal with those tensions, deal with the marginalization and complexities within groups.

And so, one way is thinking about intersectional spaces. And, I think subgrouping is a way to deal with that. I think the fact that Black women create this subgroup within the Black ERG, but are still members of it and a part of it, that’s super important. So, instead of saying have your own separate thing, that [subgrouping] gives you a way to share resources, but also to create a space of learning and safety and growth and development within those spaces. So, that’s one thing: allowing that complexity, not just allowing it but centering it, to be a part of the group’s goals and mission.

The second thing, and this is really for organizational leaders, is that, in order to actually help people create these more equitable systems and organizations, you have to allow a level of self-determination that is beyond what we normally see. So, for example, in one of the situations that I talk about [involving] co-optation, I talk about this white male leader. The white woman was talking about how he was a great ally, and he was, you know, making sure that they were on top of their professional relationship. And then she gives an example of them having this listening session, and he said, all right, this isn’t effective. We need to do something else. We need to do more things. And so, in a kind of early reading of that, it’s like, oh, okay, he’s just helping them be more effective. But the problem, actually, is that he is determining what is effective for this women’s group. Whether or not the outcome is the same is kind of beside the point for me. It’s the fact that he, this man in charge, is determining what’s effective for them. And so, that relationship actually has to be different. People in charge actually have to say, “you know what? You need the space. How can we support your resources, and how can you do what’s important for you?” Now, that doesn’t mean that you’re always going to get a separate answer, because, again, a lot of us have been socialized within these same institutions, within the same context. And, that’s where this intersectional work has to get done, too.
But, if you allow these groups to self-determine more, provide them with resources without trying to control them, then the groups themselves can take a much more intersectional approach of saying, “all right, who is most vulnerable in this group, and how do we make sure that they can succeed?” Then they can do that work without also saying, “all right, how are the white men in charge going to feel about this? How are we going to make sure that we still get more resources next week, and that we can deal with our stuff as well?” So, there has to be this letting go, but supporting, and then a chance to also deal with these intersectional, complex spaces.

DH: It’s interesting that there may be a parallel, or at least something tech companies can identify with, which is that in technology companies (and this is through sort of my own personal experience), there can be organizations that give a lot of autonomy to people inventing technology. And, the groups feel very empowered to say, like, oh, we’re going to invent something that doesn’t exist. And, there’s a reasonable level of tolerance for leadership to let those small groups explore. And, I just wonder if there’s any way for organizations to see the parallels along these lines.

LS: I think one of the challenges with technology is what people feel is the hypocrisy of the field. I had one person I was talking to in an interview. She said, “We can do all this other stuff, but we can’t innovate here?” And, it’s supposed to be this very innovative field, but when it comes to race, gender, when it comes to hierarchy, it’s like, oh, we don’t know what to do when it’s hard. It’s like, it’s hard? You said you do hard things. Like, that’s why people came to work with you. Do hard things now. And so, the idea that it’s too hard for companies whose bread and butter is saying that they solve hard problems is completely hypocritical. And, the employees often recognize this.

And so, what you’re saying is something that I think is just a way of thinking, all right, this is how the industry works. But, then the question is, why is it so different for race and gender? I think it’s because these organizations, again, are very comfortable operating within these structures of racial and gender hierarchy. But, I think if we can attack that head-on and say, all right, this is how we deal with complexity in these other ways, how can we incorporate it here? Because again, at the end of the day, it is an organization that you are working [at]; you’re not just meeting up with friends outside of that. And so, there has to be some sort of compromise or balance of, like, all right, we are employee resource groups, how are we contributing?

“people get scared when too many Black people get together in a company, too many women in a conference room. People in charge of companies have to get beyond that and say, let’s not be threatened by you getting together.”

But, what I’m saying is that you have to then bake in some sense of, like you said, autonomy. Some sense of self-determination for people to also say what’s important to us, and then to say, all right, then where’s the mutual kind of importance for the company? But, it can’t just be the people in charge want this. And, we have to make sure that we’re going to do this in order to get resources. And, if we don’t do that, we need to kind of hide, because we’re afraid. Because a lot of times, people get scared when too many Black people get together in a company. People start getting nervous of too many women in a conference room. People start making jokes, like you’re not… you know, those nervous jokes that are always inappropriate. You see that, and I think that just explains the psychology of that sense of threat. People in charge of companies have to get beyond that and say, let’s not be threatened by you getting together, but let’s support you getting together so that we can actually come up with new ways to embody these ideals that we say we believe in.

CA: It goes back, Lumumba, to what we were saying about trying to adopt this integration and learning paradigm, or this learning mindset, right? So, for the company, when they see employees gathering in solidarity around their race or gender, instead of viewing it as a threat, viewing that as a resource, as "oh, well, they're probably going to come up with some ideas that teach us about how we can be a better organization." It's just a learning mindset. I think it underscores your earlier point about that.

LS: Exactly. And, then sometimes just saying that even if they don't come up with ideas in that meeting, like, we know that we're an organization that exists, again, within this external environment… and we've made money and built in this environment. Sometimes, they're going to need to get together to process what it's like just dealing with this. And that's what I mean — it's like that balance of, yes, there are times I think instrumentally of what we can do, but then there are also times to just process and allow the space for the humanity of people. A lot of people come together within these groups because they just want a sense of dignity, because that's been denied in the company and outside of the company.

CA: So, I wanted to ask you a little bit about how you came to this work. So, you're a scholar of race and gender inequality, trying to understand how that operates in organizations, and how we can change that. You're doing this work at a business school, and [I'm] just curious about how it is that you ended up at HBS, and how you think about the goals that you're trying to accomplish with your work in the context of where you're doing it.

LS: I would say that I never thought I'd be at Harvard Business School. It wasn't something that I was applying for. I didn't know organizational behavior programs existed until maybe, like, five months before I applied. And, the only reason why I applied to HBS is because Harvard has this weird thing where the organizational behavior program was joint with [the] psychology and sociology department. I didn't know it was because Harvard just says only the Faculty of Arts and Sciences can give out PhDs. I was like, oh, okay, it's joint with the psychology and sociology department. So, I won't just be at the business school all the time, I can talk to some people who think about inequality. That's the only reason. So, HBS was the only business school that I applied to. Because I was just like, I can't be in a business school. So I applied to it — but I cared a lot in my application about not just doing the research, but like, how is this research going to be used by people?

Even once I got to HBS, I was like, what is this? [laughter] I spent a lot of time, my first year, year and a half… But two things changed in my second year. One, I took a course with Robin Ely, and I was like, oh, my God, yes, this is amazing. This is so… [laughter] you are thinking about this? And, then I went to the Gender and Work Symposium, and I was like, all right, these are my people. These are people who are thinking about these issues. They are here. It’s not perfect, you know, but there are people who are thinking about this and I’m not alone. And, I think that opened me up to the world of some of the scholars who are doing this work around race and gender in this field. But, it took me a while to find them. And, there was a lot of time at first, where I was like, I don’t think I should be doing this. I think, like most grad students, I always have the times where I was like, maybe I should… this is too slow. The inequality and hierarchy and anarchy is moving too quickly and my typing is moving too slowly. But, I think a lot of times it’s also kind of, all right, where do I fit in? What are some of the skills that I have and how can that contribute to the change I want to see? And, to me, being able to, like, step back and think kind of rigorously and slowly, and be able to say what’s going on here? That’s what I’ve also really come to appreciate.

That’s what I loved about my research process of doing field research. It wasn’t like, let me come out and give you an answer real quick. It was like, there are a lot of people thinking about [it] that way. But if I can step back, and actually be able to think, and not have the people that I’m thinking about controlling what I think about, right? That, to me, was what I needed. Because as much as I talk about these controlling relationships in my data, like, the people are controlling them, I feel that. And, it’s hard not to feel that, even in this field. You know, I’m a Black man not just talking about race, I’m also talking about gender. And so, that is a part of who I am and shaped me. Because I’m somebody who, I have to not only check, how am I doing talking about racism, is this okay? But, I also have to check and see, because I’m somebody who’s participated in patriarchy myself, and has had to check myself, and I’ve learned a lot, and I’m grateful for the people who have helped me learn and grow. And, I think as I do this work, I try to also take that growth mindset for other people. That shapes my ability to look at the imperfections at a place like HBS and in a field like it is, and say, all right, where are the opportunities for growth and development, and how do we understand this better?

CA: So, our final question is just what you might like to leave folks with. You know, for those who are thinking about these issues, want to be change agents — resources, things to read, what advice do you have for people?

LS: Yeah, I would say, there's so much, and people are talking about a bunch of books and podcasts, so I'll just give one, just so you can focus on one thing. And, I mentioned earlier the book that I have a chapter in, which is edited by Laura Morgan Roberts, Anthony Mayo, and David Thomas. But, the reason why I recommend that book is not just that it's a bunch of chapters by different scholars; it will also introduce you to more scholars, and then you can follow up on the work that they're doing. So, I think that book is a great entry point into thinking about race and leadership. But also, I would encourage you, as you read their chapters, to also look up some of their broader work as well.

DH: Lumumba, thanks for being here today and for your perspectives. It’s been really great speaking with you.

LS: Thanks so much for having me. It was great to be here.

CA: Thank you for joining us, Lumumba. This has been a really fascinating and also inspiring conversation. We really appreciate your time.

DH: That’s a wrap on the interview, but the conversation continues.

CA: And, we want to hear from you. Send your comments, ideas, and suggestions to justdigital@hbs.edu.

The post Lumumba Seegars on inequality and agency in ERGs appeared first on Harvard Business School AI Institute.

Elizabeth M. Adams on civic tech as advocacy work /elizabeth-m-adams-on-civic-tech-as-advocacy-work/ /elizabeth-m-adams-on-civic-tech-as-advocacy-work/#respond Mon, 12 Apr 2021 13:30:00 +0000 https://pr-373-hbsdi.pantheonsite.io/?p=13875 In this episode, we are speaking with Elizabeth M. Adams from Stanford University's Digital Civil Society Lab about the roles and responsibility of government in tech, the ethical implications of technology, and the long game of advocacy work.

The post Elizabeth M. Adams on civic tech as advocacy work appeared first on Harvard Business School AI Institute.

Civic tech aims to enhance the relationship between people, their community, and government by centering and amplifying the public's voice in the design and implementation processes of AI-enabled technology. Without public oversight, communities face over-policing, loss of data privacy protections, and the consequences of human bias directing technology used to govern society. It is therefore essential to include diverse perspectives in civic tech solutions to ensure proper representation and consideration for communities of color and other vulnerable populations that are most negatively impacted.

In this episode, our hosts Colleen Ammerman and David Homa speak with Elizabeth M. Adams about the roles and responsibility of government in tech, the ethical implications of technology, and the long game of advocacy work. Elizabeth is a technology integrator working at the intersection of cybersecurity, AI ethics, and AI governance with a focus on ethical tech design. Currently, Elizabeth is a fellow at Stanford University's Digital Civil Society Lab in partnership with the Center for Comparative Studies in Race and Ethnicity.

Watch the episode with Elizabeth M. Adams

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): Today, we are joined by Elizabeth Adams, a technology integrator working at the intersection of cybersecurity, AI ethics, and AI governance with a focus on ethical tech design. Currently, Elizabeth is a fellow at Stanford University's Digital Civil Society Lab in partnership with the Center for Comparative Studies in Race and Ethnicity. Welcome, Elizabeth. We are very excited to talk with you today.

Elizabeth M. Adams (Stanford University fellow of race & technology at the Center for Comparative Studies in Race and Ethnicity): Thank you. I’m super excited to be here.

David Homa (Digital Initiative director): Elizabeth, thanks so much for joining us. Let's start with the big picture here. Share with us your perspective on what constitutes “civic tech,” and what are some of the ways that intersects with efforts to foster racial justice?

EA: So, that's actually a very good question. In my mind, “civic tech” is really the process of bringing government, people, and community together to share in the decision-making process around services or technology that communities could be impacted by. And so, when we talk about a racial equity framework, I feel like I'm in the best place in Minneapolis because the city of Minneapolis has adopted a racial equity framework in all of the work that it does and all of the decisions that it makes. So, obviously, when we have conversations about technology, transparency, [and] the things that are going on in the city of Minneapolis, racial equity is at the top of mind. It makes my job a little bit easier, so I don't have to work so hard to educate those at the city around racial equity. What I've spent most of my time doing from a civic tech perspective is educating people on why technology transparency is so important and why we need to break down the entire lifecycle — from how technology is designed by a company, to how it's procured by the city, to how users are trained to use that technology. Because if they're bringing their human bias and they're using this technology to govern society, we need to just make sure that technology works for all.

DH: That’s really super interesting. Do you think it’s a greater challenge for some people to understand how bias seeps into technology than maybe other sectors and what may be driving some of that difference?

EA: I do think so and I’ll tell you why. Because when I’m seated at the table with elected officials, and appointees, and commissions, not many of them are technologists. So, I can speak [about] this to data scientists and engineers and they get it. But it takes a while. And out of all the elected officials and Minneapolis city council members that I’ve spoken with, there might be one or two who actually get how bias can creep into technology. Most of the time, when you’re talking about vulnerable populations and communities of color, and you’re talking about equity or equality, you’re talking about it from a housing perspective, or an education perspective, or for jobs, or health. And, what I think people don’t realize is that technology runs underneath all of that. It’s all about data and what happens with that data and how that data is harvested and archived and used and, in some cases, profiled. So, I think that just because people are not technologists by nature, or many people that are making decisions around data policy and other policy concerns [are not technologists], that’s part of the challenge for what I see in this space.

CA: I guess, to me, it's sort of the next step to our initial question — what is civic tech and how does this relate to racial inequality? And, you just talked a bit about the fact that often people who are making policy decisions or in those discussions don't really have a solid grasp on how bias shapes technology. So, [what] I'm curious to hear you talk about — and I'm sure this is kind of what your work is really about in a lot of ways — is how you bring them along. How do you educate people? What are the effective ways, right, to get everybody to a point where they can understand applications?

EA: So, that’s an excellent, excellent question. I will teach people across all spectrums. But unless I’m talking with a data scientist, I don’t get super technical. And, even when I’m talking with a data scientist, or an engineer, or an architect, I still don’t get super, super technical because when you start talking ones and zeros, then everyone is the same. So, what I started off doing a couple years ago was just really creating learning events. And, I created avatars and I created personalities for these avatars. But I did not show their faces. I did not show that the male avatar was a Black man or the female was a Black woman. I would just use examples that this person runs the soccer league, or this person is a champion in their community for food security or cleanup.

“when you're talking about… equity or equality, you're talking about it from a housing perspective, or education, or jobs, or health. And, what I think people don't realize is that technology runs underneath all of that.”

Then at the end of the experience, I wanted people to understand that these are your neighbors, right? If technology is impacting them and these are people that you like, shouldn’t we have some conversations about this? So, I found those ways to be really effective. By having these very, kind of, educational experiences, it really helps to bring people along when you’re not talking over them, and you’re talking with them, and you’re allowing them to participate in that process.

CA: That’s great. It sounds like part of what you’re doing 鈥 and especially hearing you talk about creating these personas and profiles 鈥 is kind of helping people move from the purely abstract to something that feels a little bit more tangible, or [that] they can connect to more and understand. Then it sounds like that motivates them to realize how important this is. Like, you’re kind of bringing them along to get them sort of incentivized and to prioritize these issues.

EA: Yeah. And, you know what else is interesting? When you start having an initial conversation with someone about racism, people get defensive immediately. So, you have to kind of break down those barriers and talk about issues that are affecting all of us. That’s part of how I’m able to kind of navigate some of these really sticky conversations that really, at the heart of it, are about racism, about inequality, about human bias. They’re about biases from the folks who are developing the code, because maybe they don’t have enough lived experience with people of diverse backgrounds. But you have to just kind of… for me, that’s what I’ve done. I’ve just used the experiences to bring people along by helping them understand that this really needs to work for all. Technology needs to work for every single human. And, to really make sure that the conversation is human-centered.

DH: Facial recognition is obviously a big topic in the world today. Are there specific examples or situations you've come across where people at first thought like, “oh, well, this is a perfectly good use,” and then you help them realize what some of the stumbling blocks might be?

EA: Yeah, and I still talk about that today. So, I actually don’t think all technology is bad. Let’s talk about facial recognition technology from that perspective. If a child is lost in the mall, right, and they can use facial recognition to see where that child might have gone, what store, or where they have navigated around the mall, obviously that would be a good use of facial recognition. If someone is coming into your building and they shouldn’t be coming into your building, and maybe you might need to identify them because they harm someone in the building, that would be, to me, an acceptable use of facial recognition technology. Or, if someone’s grandparent was lost on the street, right? You’d want to be able to find them and bring them back safely.

But when you start using technology to profile people and overreach into communities and start, as I mentioned, profiling and taking that data and aggregating it with, let’s say, license plate readers or an Amazon ring camera, that’s when it becomes harmful, and there are organizations that use [technology] for that purpose. That’s where my work begins 鈥 kind of helping people understand why these facial recognition systems don’t typically work for Black women. And, a lot of it has to do with the training data. There’s not enough diversity in the data once the technology is brought together and then it’s sold. Also, the people who are designing and developing it aren’t necessarily understanding of the second- and third-order consequences of their work. They are selling a product and then they are trusting that those who are using the product are equipped enough to understand if there is bias or artificial intelligence nudging happening within their technology.

DH: What advice would you give to people who are working on technologies, like you said, who may not be thinking about the second- or third-order ramifications? Someone, maybe a data scientist, is working on a project. They’re building models. How should they be thinking about that? Maybe they work for a big company. What would your advice be?

EA: Well, I think it's an interesting question to talk about from an individual perspective, because I think it's a little harder to reach an individual than it is maybe an organization or an academic institution. Because individually, when I've talked to data scientists, they actually think that they are doing the right thing. They have no idea. One of the suggestions I do talk to them about is just do a search, an internet search, on some of the biases in technology and see if maybe that can't inform your work. When I've talked to academic institutions, I'm like, maybe you can bring in a historian so you can see how some of what has happened in our country, or maybe across the world, might be impacting how technology is designed and what people think. Or, just bring in a guest, a guest lecturer. At the city, I spent a number of meetings with the coalition that I'm a part of, and we've just made our way around, and started having these learning events with the city attorney and the city clerk, and the division of race and equity, and again, those [people] are not necessarily technologists, to just help them understand these are some tools. Start with the internet. So, I would say that, to me, is the easiest thing, because that's what I did exactly almost two and a half years ago when I saw a video. I knew instantly that my experience with racism, prejudice, discrimination, and then my love for technology — that this is how it would merge. And, I used the internet to figure out what was going on in the space and I followed my curiosity.

DH: That’s great. And when you bring in those experts, make sure you pay them.

EA: Pay them. [Laughter] Pay them well.

CA: That is a great point, right? Because people who have been doing this work have been doing it for a long time. This is a whole body of research and knowledge that people have been working on and that is important, right? And [it] is something that really can help make progress. I just watched that video that you referenced. Ethi, our creative director, found it for us and shared it with Dave and [me] before this interview. It was great. And, it was such a cool thing to see visualized something that we already know from years of scholarly research, which is that gender is racialized. So, you just can't — gender is not separate from race, right? The way that we perceive and understand gender is highly racialized, right? Which you see then in this video with all of these faces of Black women being interpreted as male or masculine. It's such a vivid illustration of that. I just found that very powerful because you can say that to somebody, right? You can say, "well, gender is racialized, you know, let me tell you why." But to actually see that, I think it was very, very powerful. Really kind of drives that home.

EA: I agree. When I first started doing my events, I would share Joy’s video and people would be amazed, and it created a great opportunity for conversation at the end of every session about why this work is so important to unpack the entire design lifecycle. But, in addition to how individuals can learn more about it, there are lots of companies who are now standing up responsible AI teams, where they are working through the process of understanding what this means so that before their tool hits the streets, they’ve at least gone through some gates and some checks to balance it. But without legislation, we are really at the hands of these organizations and these companies deciding for themselves and policing themselves to make sure that their products are the best for all of us.

DH: And that brings us to an interesting point, where when they’re building these products, what are the best ways to get people whose lives are impacted by these technologies into the process? What can organizations be doing?

EA: So, this is such a good question, because I tell people this all the time, and we just kind of have this conversation, which is: you can find someone doing the work and bring them in on a consultant basis, like to consult with you. You don't have to create this massive diversity and inclusion team and start asking your employees to come in and help you solve and solution these problems. There are organizations that have been doing this work for a very, very long time. I just honestly believe that it is around communication. There's no pipeline issue. There's no lack of organizations, I don't care what city it is. Someone is doing racial equity work. And, in the life of Zoom now, you can certainly, certainly find someone across the world if you need to, to pull them into the conversation. So yeah, there's many, many different ways. And so, for me, this is just so, so important to continue to have these kinds of conversations to educate people that it's not as hard, I think, as we make it. It's certainly not hard for me to find a group to have the conversation with. It wasn't hard for me to find a data scientist to talk to and ask them some very, very basic questions. And, I think people have to want to, once they are aware that there are possibly issues in their technology.

CA: It sounds like part of what you’re saying, too, is get the motivation, identify the people with the knowledge and expertise that can then help you go from awareness and motivation to, okay, what do I need to know and understand and get a more sophisticated view on so I can then go in the right direction.

EA: Well, and I would agree. So, let's just think about this. You want to design something in your house — you want a porch, or you want a deck. What do you do? You do your research, and you can kind of go find out and make sure that it's appropriate. And, in this day and age, I just cannot believe that there are companies out here who are developing facial recognition technology or some other technology that is AI-enabled that don't know that it could possibly harm portions of our communities. So, that to me is just… Here I am, though, still trying to live this double life of finding joy and happiness and doing that while leading in this space of digital justice and making sure that people are still aware, and it's a struggle. But, if I can do it, I think others can, too. And I think we owe it to our world to just be — offer those skills so that we can all live in communities that thrive.

“maybe they don’t have enough lived experience with people of diverse backgrounds… Technology needs to work for every single human.”

So, if I could just take a step back. My family has been in Minnesota since the late 1800s. My great-great-grandfather was the first Black firefighter in St. Paul, Minnesota, and he served and eventually retired as a captain in 1926. So, think about what was going on in our country then. And, of course, I’ve had several family members since [then] that have been involved in racial equity work, and my mom was instrumental, before her untimely death, in getting the first urban playground established here in the city. And so, I come from a history and a legacy of people who’ve shown up for this work. So, when I show up to a conversation like this, I’m not showing up because of some recent tragic event.

I’m showing up because I have a legacy. I’m showing up because I have a lived experience. And so, it was very personal for me with what happened with George Floyd, the murder of George Floyd. Not only are we dealing with the pandemic of COVID, but now we have another racial injustice pandemic. And it was very, very difficult for me and my family.

I withdrew, because I needed to figure out how to center myself. What was my space going to be like if I was going to continue to do this work? Because, like I said, I spent a whole year and a half working really, really hard with the committee and helping them understand. And when I joined the Racial Equity Community Advisory Committee, they weren't talking about technology. They weren't talking about video cameras and video surveillance. And so, I spent a lot of time doing that legwork. In order for me to continue in this space, I cannot dip into trauma-filled conversations. I won't dip into trauma-filled conversations because I have to selfishly take care of myself. So, yeah, and as a practitioner, it's extremely hard. You don't just wake up one day and say, “Oh, I'm going to be a practitioner, and I'm going to help a city of a half a million people move towards a more tech transparent city where racial equity is at the top of the conversation.” And, I'm thankful that our city had done that work in 2017 before I got involved in this work. But, it has to come from within.

CA: So, would love to hear you talk a little bit about how you do create change, not just through describing the problem, but figuring out solutions — kind of doing that with people who are coming from lots of different perspectives and may not be well versed in the problem, different stakeholders, sort of the complexity of trying to do that day-to-day. Would love to hear you reflect on that.

EA: Well, there's no short-term solution. So, before I actually got really into the data policy stuff, I spent a year on the Racial Equity Committee learning about civic tech, learning who were the players in the city of Minneapolis, learning what their concerns were, and showing up to these conversations — really sometimes not saying anything, even though I knew that I had some advocacy work that I wanted to discuss with them.

And so, it’s really about relationship management and respect and understanding what a particular city councilperson, or person who runs a division at the city of Minneapolis, like, what are their major challenges, and how can you help them while still advocating for what you believe could help improve the city? There’s not like a blueprint. You just show up, and you mess up sometimes, and you say things that maybe aren’t appropriate in the city meeting, and you don’t know, but kind of having the courage to learn out loud, as I say, and kind of learn forward. I don’t consider it falling forward or failing forward, but learning forward and just taking those chances.

“without legislation, we are really at the hands of these organizations and companies policing themselves to make sure their products are the best for all of us.”

And, there’s a lot of people that I work with that do the same things. We’re trying to figure it out together and sometimes are stumbling over each other, especially in the coalition. So, we started forming, and then we started storming. But you typically storm first. So, we form first, and then we storm first, and now we’re norming, and now we’re performing.

CA: I love that “learning forward.” That's great. And, I think part of what I hear you saying is that to do this kind of work on the ground, like in city government and in the community, you have to have a learning mindset. Right? If you don't have that learning mindset, then you're going to get stuck, it sounds like. Is this kind of happening?

EA: Yes, to that point. But the other thing I want to make sure is that if it wasn’t for the folks out there protesting, the folks who are out on the streets, really raising the awareness of why these issues are so important for us to address, my work would be a lot harder. So, it takes so many people in the community for things to turn. It’s not just the folks behind the scenes, you know, working in the meetings. And because it is, it’s a lot of work.

CA: Technology is so powerful — and these tools, like facial recognition technology and different kinds of surveillance tools, are ever more powerful. So, it seems very important to be doing this work to try to make sure there is a human-centered approach to the development.

EA: Technology is impacting all of our lives. I have been working on pay and gender equity for just about 20 years as a technologist in D.C. So, I ran a systems integration lab that was around $53 million and 200 employees. It was in D.C., so there wasn’t really an issue around diversity and inclusion in the technology, right? It was more around pay equity and gender equity, making sure that the right opportunities were given to everyone. But coming back to Minneapolis, it has been… it just seems like the topic. And that’s why I say it almost feels like a tour. You know, you can only do this for so long because it really can become a part of the fabric of who you are if you don’t help other leaders, you know, give them an opportunity in this space, as well as understand what your personal limits are.

And, I just want to say this. While this conversation is enjoyable, it still takes something out of me, right? Because we’re talking about a subject, we’re talking about technology that possibly could harm people that look like me. And, to continue to show up every day telling people [that] I don’t want technology to harm people that look like me. So, that’s why I do this work. But it’s good. I think that we’re recording it so that, again, people can kind of hear that others need to step into the space. We need more people to kind of show up and help with this work.

CA: So, we do have a wrap-up question that we ask everyone — is there anything that we haven't asked you that you want to talk about or anything that you haven't had a chance to speak to? Any resources that you want to share? What's a takeaway you'd like to leave people with?

EA: So, if you would have asked me this question maybe earlier in the year, I would have told people to read as much as they can about biases in AI. I would have told people to go write articles, go host their own learning events, go do whatever they can, write a short e-book. But, here I am on the other side of George Floyd, the murder of George Floyd, and I think it’s been a common theme that we’ve talked about throughout this conversation, [which] is reaching for the highest point of happiness that you can. Because guess what? There’s going to be another murder, right? We’ve seen that. There’s going to be more protests. There’s going to be another company coming out with biased data, and they will wait until the community says it’s harming vulnerable populations or communities of color and then they may go try and fix it. There will be constantly folks working on data policy. There’ll be new elected officials. There’ll be, you know, new divisions that are created within city and state and federal governments. But reach for the highest point of happiness and work in that joy space, because that’s the only way you can keep showing up.

“I just cannot believe that there are companies out here who are developing facial recognition technology or some other technology that is AI-enabled that don’t know that it could possibly harm portions of our communities.”

And, I say that because, as I mentioned, in 1885, diversity and inclusion started for my family then, when my great-great-grandfather, William Gaudette, became the first Black firefighter and retired a captain. So, for over 100 years, this is a conversation that's been happening. Maybe it wasn't directly around technology, but it's still a lived experience for Black people in this particular country. That's why I say if you want to do this work, you're going to have to find a way to make sure that you can survive doing this work. And so, that would be my message to others. And, surviving doesn't necessarily mean you can't be happy and you can't find joy. Conversations like this give me a lot of joy, because I can be myself. I can be a proud Black woman. I can stand here and say, "I love being a proud Black woman," and still go off and have a difficult conversation.

Again, if you were to ask me six months ago, it would have been study, study, study, study, study, become an expert, and then that's how you'll make it. Now it's, you know what, the stuff is going to be here. All these problems — it'll be a new problem tomorrow. So, find your center and find that happiness and find that joy.

CA: It’s a long game.

EA: It’s a lifetime game for Black people. It’s… we get no generations off, we get no generations off.

DH: And with that powerful note, that’s a wrap on our interview, but the conversation continues.

CA: We want to hear from you. Please send us your questions, ideas, comments, suggestions. Reach out to us at justdigital@hbs.edu.

The post Elizabeth M. Adams on civic tech as advocacy work appeared first on Harvard Business School AI Institute.

Christine Marie Ortiz Guzman on how we are all "designers" — Fri, 19 Mar 2021 — In this episode, we speak with Dr. Christine Marie Ortiz Guzman from Equity Meets Design about equity design and designers, organizational responsibilities to change, and the relationship between capital and lowercase "d" designers.

The post Christine Marie Ortiz Guzman on how we are all "designers" appeared first on Harvard Business School AI Institute.

Inclusive and universal design have gained wider attention and practice in recent years. Their goals are grounded in the belief that recognizing, problem-solving for, and learning from excluded groups yields universal benefits for all. But what if this isn't enough? If the foundations of these structures are designed within systems of inequality, how can they be altered to serve a different purpose? How can we all activate our agency to engage with design work in socially transformative ways?

In this episode, our hosts Colleen Ammerman and David Homa speak with Dr. Christine Marie Ortiz Guzman about equity design and designers, organizational responsibilities to change, and the relationship between capital and lowercase "d" designers. Christine is the founder of Equity Meets Design, a think-and-do tank that works to increase the ability of those with creative authority to (re)design interactions, interventions, and institutions towards increased equity in process and outcomes.

Watch the episode with Christine Marie Ortiz Guzman

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So, today we are joined by Dr. Christine Ortiz. She’s a serial entrepreneur with a passion for innovation through equity-centered design and her current venture is a think-and-do tank called Equity Meets Design. Welcome, Christine. Thank you so much for speaking with us.

Christine Ortiz: Yeah, thanks! Glad to be here.

David Homa (Digital Initiative director): Thanks for joining us. I want to talk about equity design and equity designers. This seems like a good place to start. Can you share with us a little bit just to kick us off? What do you consider equity design, and who are equity designers?

CO: Yes. So, the premise of our work is this really core belief that racism and inequity are products of design. And, for us, that is the kind of most helpful way to think about inequity and racism, because if they are products of design, then that means that they can be redesigned. And, so, our work is really thinking about how we can get folks to think of themselves as designers. We think that everyone is a designer. We are constantly designing things, creating things, making things, tangible things, right? Whether that’s a product or a website or the more traditional ways we think about design. But, also, lots of intangible things 鈥 processes or systems or organizations or cultures or experiences or relationships, right? All of those things we see as being designed. And so, we believe that if everyone can see themselves as a designer, and then explicitly use an equity-centered design process to design whatever those things are that folks are designing in their day-to-day, then that is how we’re going to actually do equity.

One of the big questions that we kind of came out of trying to answer is, "People believe in equity and can talk about equity, but what does it mean to do equity every day?" That's where equity design and equity designers come in, right? We think that everyone is a designer and you have to make a decision to be an equity designer. And, then, how do you be an equity designer? You design equitably. So how do you design equitably? Obviously there's a lot more to designing equitably than just having a toolkit and a set of frameworks and mindsets to do that work, but that is one of the things that we think is really helpful to folks. To be equity designers is to have a process and a framework and a set of tools that [you] can use to engage in this design work. And, so, that is what we've been working on, really thinking about how do we create a truly equity-centered design process — and then all of the things that come with it, so folks can use it in their day-to-day work.

DH: So, what are some of the high-level tips? You don’t need to give away your entire work here, but what are some of the high-level tips for people to insert in that process? Again, not just for designers, but people thinking about all of their work. When they think about their work, what are some of the big ones they need to sort of stop and think about?

CO: Yeah, I mean, a lot of our work is about thinking about the "who." One of the traditional problems in design is that folks who are capital "D" designers, trained in design, have a degree in design — largely folks who look a certain way, have a certain background — are given the power and the permission to design for problems that largely they don't experience themselves, for communities that largely they do not live in themselves. That is just kind of a taken-for-granted separation, right? The designer is separate from the user. And so, a lot of equity design work really goes into thinking very deeply about who is experiencing the problem that's on the table to be solved and centering those folks in the process. Ceding power to them, involving them in different ways in the process. That may or may not mean that they're on the "design team," right? Because sometimes that can actually be an additional burden of having to do the work. But it's really thinking about when and how they are involved in the process and what decisions and parts of the process are actually ceded to them to make a final call on and influence in different ways that really move past kind of a token or an extractive relationship that we see in traditional human-centered design.

“The premise of our work is this core belief that racism and inequity are products of design. And, if they are products of design, then that means that they can be redesigned.”

I think that real interrogation and thinking about who is involved in what parts of the process, and how, and what power they have, what decisions are being made, is one of the big things that we focus on. And we talk a lot about designing the design process — which every time we say that, we laugh because it's so meta. [laughter] But, it's so important, right? Because one of the other things that we find is that the way that inequity persists is that we're just not intentional and transparent about things. Those things often go together; you can't be transparent about something you're not intentional about. And so, really thinking about, "How are we going to engage in this design work, in this creation work and this co-creation work?" before diving in is really important.

CA: I am intrigued by this point that you make about how everyone is a designer. Because I certainly don't think of myself that way, right? It wasn't until reading some of your work that I sort of recognized, oh, right — I do think about it in terms of the capital "D" designer. And it does seem to me that it's probably important for those of us who don't think of ourselves that way, but to your point, are designing all the time. For me to think, well yeah, in my job I do design processes — even if I'm not creating, say, our digital content, I am ultimately informing what it looks like because I'm helping to drive that process or lead that process. I am curious about what would you say to folks like me who don't think of themselves that way, but now have this awareness — are there ways to reflect on that and identify where are the places you are designing? How can you then go to the stage that you and David were talking about, of thinking about doing that in the most equitable way possible?

CO: I think that this is one of the things that we thought was going to be a huge barrier when we started and actually has not been, as far as getting people to think of themselves as designers and their work as design work. Because I think — so, first, we made an intentional decision. And, we went back and forth on what language to use when we were thinking about this work. And we intentionally wanted to reclaim the words "design"/"designer" because of the power that those words hold, right? There is inherent permission to create, power to create. And we know that if we are trying to create an equitable world, that's something that hasn't existed, so we have to — we're in the business of creating something that has never existed. So how do we help people kind of latch on to the power and the agency that they have in contributing to that work?

We've actually been kind of pleasantly surprised that people, once we give them kind of the nudge and a little bit of the permission to try on that feeling of being a designer — especially a lowercase "d" designer — then folks really kind of step into it and it's really cool to see that happen. That being said, one of the things that we try to do is de-mystify, de-jargonize what it means to do design — the steps of the process, and the tools, and whatnot. And, really start from a place of "You were already a designer, you were already doing design work, let's just use our stuff as a lens to look at what you were already doing and see where we can strengthen it towards this goal of equity that we have." Versus, "Here's this whole new set of things or area of expertise that now you need to introduce and build capacity in." It's really more like, "You're already doing this, let's start from there. Now let's just see how we can strengthen that." And yes, maybe you were doing some things that we totally need to abandon or rework, etc. It might be more than just tinkering around the edges. But let's not pretend like we haven't all been creating things for a lot of our careers.

DH: I wonder if when you work with people and they get sort of 鈥 I bet you see sometimes in their faces they get just all jazzed up about what they can do, and then they go back into their environments and it’s hard and they face challenges, barriers, other people who don’t see it that way. What do you tell them?

CO: That is one of the reasons that we have decided to work with organizations and not individuals in our work. We are really focused on systemic change. And, in order for systemic change to happen, whole organizations have to shift the way that they work. And so, in our work, we actually target change at the organizational level or institutional level, and then at the team level, and then at the individual level, in that priority order. Which I think is a little different than a lot of more traditional D&I consultants, [who] come at it in the opposite way, right? Like, working on individuals who will then influence teams, who will then influence organizations. And, I don't think it's either/or. I think we need to be attacking all of this from both ends. But, especially if we're talking about process changes, which at its core that's what taking an equity design approach is — really changing the way the organization does whatever it is that it does. In order for that to take hold, be sustainable, and lead to the kind of changes and output in whatever the work product of the organization is, we need those changes to be systemic and institutional. And so, because of that, we don't train people. We train [organizations] and do systems change in organizations. Also, because people leave [organizations] — all of these kinds of things that we know are issues with knowledge management and sustainability and all of that kind of stuff. So, obviously, we're training people, but the work that we're doing with them is changing the kind of structures and the culture of the organization in ways that ideally will outlast the people who are currently there.

So, not totally an answer to your question, but it's a huge problem, right? This is one of the things that we set out to solve. People go to a conference or a training or whatever it is, have this magical experience, are all fired up, they go to work the next day and it's like just back to the same old, same old. And so, really thinking about "What does it take to actually see some change?" Part of that is, we need critical mass in the organization and we need that critical mass up and down the power structure, right? We actually will not work with an organization unless the CEO, executive director, etc. is totally on board, and is actually my point of contact for this work. Not delegated to a committee or to someone [else].

DH: That's a fantastic view of the struggles of organizational change, but specifically this type of change, especially when you get people enthusiastic about it, to have them be in an environment that's supportive of that. My guess is a lot of people watching this are like, "Wow, I wish I worked at an org like that."

CO: [laughter] Yeah, I mean, one of the things that I think has been really interesting about our work is that we're "Equity Meets Design," right? So, we've really thought about the intersection of those two fields, or lines of work, or ways of thinking. They're also two different motivations for the work, right? So, we have folks who enter organizations thinking about, "What is the driving force here?" Is it this moral, emotional, justice-seeking DEI stuff — it makes sense to me, it matters, I can see it. I see where it's missing [and] that needs to be what we do. And, then the design piece is more of a, "There are these societal problems, oftentimes social issues, that we want to solve. And we're going to use the best tools in order to solve those problems." While there's some overlap there for some folks, it's not the same — they just don't have that same personal, moral [motivation], or they haven't yet really fully understood what racism really looks like and feels like to folks who live that every day.

“We really think about equity as a way of being, of thinking, of making decisions, of creating things, of being a leader. It has to be something that we are living and being every day.”

And so, what we're saying is okay, enter whichever way you want. Enter through the equity door, or enter through the design door. Either way, we're going to engage in this work in such a way that we're going to leverage the best of each of those things. And, that's the only way that we're going to actually solve these problems, right? Which is what these folks — the design door folks — care about. Or, the folks who enter through the equity door, "We're going to create something that has centered this equity lens." But we also can't do that without doing the design work, fundamentally. This is one of the things that we heard all the time from traditional DEI experiences: "I can understand more, I can see more, I can speak more, but what do I do about it?" That's how we bring these two things together. Which I think gives everyone an entry point, regardless of where they are in their own personal journey or how fluent they are in these concepts or having these conversations.

CA: You have this great tagline that we saw in a lot of your work online that says, “Racism and inequity are products of design and they can be redesigned.” So, just to kind of bring it to life, I would love to hear you talk a little bit about what that means. Are there even some examples that you can provide about, okay, what does it mean to actually un-design or redesign racism and inequity in any kind of particular place?

CO: So, one of our core beliefs of our work is process as product. And so, that is really a belief that we cannot think about equity as a checklist, or outcomes, or things that we're trying to accomplish. We really think about equity as a verb. And, that's actually as far as you'll get, for a definition of equity, from us when we're working with folks. We really think about equity as a way of being, of thinking, of making decisions, of being in relationship with people, of creating things, of running your organization, of being a leader — all of those things. It has to be something that we are living and being every day. That's why we focus on the design process and actually less on the design outcome. Now, our bet is that if the process is equitable then the thing we design will be equitable, or at least be directionally moving towards increased equity. And, oftentimes what we hear from our clients is that as we are facilitating them through an equity design cycle, it's the first time that they have experienced what it can be like to be in a work team where folks feel like they can be their authentic selves or their experiences are valued, or they can share what they believe without repercussion, up and down power chains. Or, they can be totally honest about their lived experience and it'll be taken care of, where they actually can impact the direction that things go. All of these kinds of things.

And so, you know, we’ve been struggling with thinking about, like, how do we measure the impact of our work? What does it mean to increase equity? And, one of the ways that we’ve been playing around with that is thinking about power: distribution of power, the use of power, the sharing of power. And, thinking about that in relationship to identity-based sources of power. Our society says that people who have lighter skin have more power, or people who are male-presenting have more power, right? Our society gives people power and takes people’s power away based on identity. So, how can we create environments that counter that, where that is not the way that power is distributed? And also thinking about the differences between power and responsibility and authority. So, thinking about people’s titles and roles and that kind of power structure, and redefining all of that in a way that folks who are most proximate to the problem get to have the power to make decisions about those problems, regardless of these other kinds of influences and ways of thinking [about] and distributing power.

CA: I love the point you are making about power. I often get a little uncomfortable with a focus on "inclusion and belonging," with those being the end-goals, because I always think, well, if people feel included and feel like they belong, that doesn't necessarily tell you anything about their access to opportunity, access to power, to development, etc. And so, I just really like that you're bringing that emphasis to bear. That was a great answer.

DH: We asked our broad community how they do equity, or what they think about it. They spoke about critical listening, about biased assessments, maybe looking at the technology they have, certainly employing some design thinking, and then obviously hiring practices. Both, as you say, the process for hiring, but then whom they’re hiring. Is that on the right track? And, what else would you add to that list?

CO: With organizations, when we tackle the hiring or pipeline or retention question — when we get to the root cause of the problem, it's always a culture issue. It's never a technical solution that we land on as the first thing that needs to be implemented. And so, the first work that we do with any organization, regardless of how many people of color or whatever other kind of identity markers are missing from that organization — even in all-white organizations, there are not high levels of trust, there are not high levels of psychological safety — all of the things that we know are necessary for good work to happen and for people to feel good about themselves every day at work, regardless of these other pieces. That always ends up being the first thing that we tackle. How do we fix some of these culture things through this lens of equity? Fixing culture through a lens of equity is so much more than this cosmetic diversity question that oftentimes we start with.

DH: I’m wondering, what do you think about organizations that are sort of building technology? They have a special obligation in this space, but also special challenges and I wonder if you have a perspective on that?

CO: Yeah, I don't think it's special — I think it's amplified. I think it's the same problems that our society has been created on. It's capitalism and patriarchy and racism and all of those things. I think it's amplified because of the scale and speed at which technology companies work. And so, I think therefore the solution's the same. If every tech company decided that they were going to use an intentional equity-centered design process to design both themselves and the products that they ship, [we] would be fine. But that's not the norm and that's not what is rewarded by VCs, by funders, by markets, et cetera. Well, actually — not true, I think we have found that markets do reward this work. It just takes a little massaging [laughter] to get there.

CA: What you were just saying made me think of a concept that I was reading about in some of your work that I think connects to this. It was a new idea for me — this term you use: "meta empathy." I think it's related to some of these questions of process and power and agency. I would just love to hear you kind of explain what that is and why it's important.

CO: Yeah, so, you know, we made this decision to start from design, design thinking as it currently exists, and then put it through all of these lenses and tools from the DEI space. Put it through critical race theory, put it through identity development frameworks — put it through all of these things, and take a critical lens to each piece. We were doing that with this empathy step. This empathy step is really the reason that human-centered design and Lean startup and all of these things took off, because that was a kind of revolutionary way to think about it, right? Like, we should talk to and learn from the people who are experiencing this problem that we want to solve. We as political experts — however that is defined — cannot do this without talking to the folks who are actually experiencing the problem and theoretically will use the thing that we're creating to solve this problem with them. That was a big shift in how people thought about things and people saw how important and useful it was in getting them closer to a good solution, closer to solving the problem. But, what we're still finding is that while we've moved closer, we haven't gotten there yet. And so, thinking about why that is and especially going back to [how] power is at the core of all inequity. So that is the first lens that we use to think about things.

“How can we be in a community together and how can I support you as a designer?”

That whole empathy stage was still premised on this idea that the people who are experiencing the problem are fundamentally different people than the people who are the designers on the design team. And, at the end of the day, the power lies with the designers — so the power lies not with people who are experiencing or proximate to the problem. And that fundamental disconnect is where we think a lot of problems lie. And so, if we want to shift to a place where the people who are most proximate to the problem have the power in the design cycle and the design process, then what does that mean for designers? It means that designers need to shift from thinking about the relationship with folks who are experiencing the problem as an extractive one where, "You have information that I need so I can go build this thing" to a relational one: "How can we be in a community together and how can I support you as a designer?" Like, you have lived expertise with the problem, I have expertise from a design perspective, how can we work together to solve this problem? And that requires that the designer does a lot of self work and understanding of who they are, their relationship to the problem, why they're interested in solving that problem, how they feel and interact with the community that they are entering, if it's not their own. That is at the core of a shift to "meta-empathy" from a traditional empathy lens.

CA: In addition to your own writing, your own work, what are some resources that folks should look into, whether it’s books, other organizations, other people? What would you point listeners toward?

CO: There’s a whole group of folks, the , that have been doing this work for a long time. [I’m] super into their work. We’ve also kind of informally created our own little network called the . 

You can go on the Equity Design Collaborative website and see some of the other folks who have been working on creating the space, and some tools and frameworks around that. But I would love to highlight specifically Antionette Carroll’s work with and the work that she’s doing, creating an equity design framework, and specifically using it with Black and Brown young people to redesign their own communities.

DH: Your own personal lived experience and who you are, where you’ve lived, and how you’ve lived your life 鈥 how do you bring that, or those experiences, into your work and how do they influence your work?

CO: Yeah, so one of the things that I have been reflecting a lot on is actually in which there's a quote about the darkness of the moment that we're living in. She was referring to 2016, right after the election — darkness of the moment that has now spanned four years. But she — and I am getting chills just thinking about it — she says, what if the darkness that we are experiencing is not a darkness of a tomb, but the darkness of a womb? And what if what we are doing right now is giving birth to the promise of this country, of this world, of a truly equitable world? Having literally given birth this year myself, that has really resonated with me as a powerful metaphor for this time. This idea of growing things inside of ourselves in darkness, and then this idea of laboring. I've been thinking a lot about my work these days as laboring, and how that allows for this simultaneous experience of pain and agony and also joy and ecstasy and just the miracle of creating something out of nothing. And that's the work that we're called to do right now, to labor, and as a new mom, that idea has really just resonated a lot for me.

And, on that note, come in! Here’s little Lucy. [laughter]

DH: Hi, Lucy.

CA: Thank you for joining us, Lucy. We appreciate your input into this conversation. [laughter]

DH: Thank you, Lucy, for making time in your busy schedule. 

CO: Oh, it is busy. It is busy!

DH: Christine, thank you so much for spending time with us today.

CA: Thank you, Christine. This has been a really fascinating conversation. We really appreciate your time.

DH: That’s a wrap on the interview. But the conversation continues and we want to hear from you.

CA: Send your questions, comments, ideas, and suggestions to justdigital@hbs.edu.

The post Christine Marie Ortiz Guzman on how we are all "designers" appeared first on Harvard Business School AI Institute.

Candice Morgan on navigating inclusive strategy in tech — Mon, 08 Mar 2021 — In this episode, we speak with Candice Morgan from GV (formerly known as Google Ventures), who shares how to navigate inclusive strategies in organizations; how the "summer of protest" has moved the dial of accountability; and how the venture space can better practice antiracism.

The post Candice Morgan on navigating inclusive strategy in tech appeared first on Harvard Business School AI Institute.

In order for organizations to fully benefit from increased racial and gender diversity, they must be willing to change the corporate culture and power structure. The venture space faces a particular challenge in this endeavor, with often very insular networks and funding models that are less accountable to efforts for diversity. Over the last decade, only a small share of venture capital dollars went toward Black founders and less than 2% toward LatinX founders. So, how can the venture space better practice antiracism across the industry?

In this episode, our hosts Colleen Ammerman and David Homa speak with Candice Morgan about how to navigate inclusive strategies in organizations; how the "summer of protest" has moved the dial of accountability; and how the venture space can better practice antiracism. Candice is the equity, diversity & inclusion partner at GV (formerly known as Google Ventures), where she leads inclusive strategies for GV and its portfolio companies, and helps the firm expand diversity across the entrepreneurs it funds.

Watch the episode with Candice Morgan

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So, today we’re talking with Candice Morgan. Candice is the equity, diversity & inclusion partner at GV, formerly known as Google Ventures. In that role, she leads inclusive strategies for GV and its portfolio companies and helps the firm expand diversity across the entrepreneurs that it funds.  

Dave Homa (Digital Initiative director): Thanks for joining us, Candice.  

Candice Morgan: Yeah, happy to be here.  

CA: Thank you so much for joining us, Candice.  

DH: We always start big picture. And I’d like to start with a question about young companies in the venture space. Obviously, there’s a ton of discussion about diversity on teams. Are there things that are sort of top of mind in the venture space right now, when it comes to diversity of teams?  

CM: Yes, I think it depends on the size of the company, and at GV we are stage agnostic, we are sector agnostic. We focus across life sciences, enterprise, consumer tech, frontier tech. So, we have a really broad range of companies that we’re looking at. And particularly for high-growth small organizations, they’re thinking about putting their team together. They’re still in rapid growth. They’re dealing with, of course, 2020 and all the things that that brings. And then they’re still deciding whether to even bring in an individual to address diversity. I typically advise that when a company is really small, when we’re sub two hundred people or so, it’s probably too soon to bring in a person in that role.  

Now there are exceptions, but what you don't want is to essentially outsource the work of DEI, especially when you're that small. So they're often thinking about how to build the right council structure, the accountability structure, which is always something to think about, even as the company grows. One thing I will say, though – in the venture space or outside of it – is that companies are dealing with both being distributed and remote now, if they weren't before – certainly dealing with that – and then the events of May and June.

So, yeah, certainly regardless of where you are or what sector you're in, everyone is reckoning with coming out of Memorial Day weekend, coming out of that weekend of protests and seeing this still or video of this police officer with his knee upon this man's neck. And that has completely changed a lot of the dynamics of conversations, the things that you talk about explicitly at work – talking about systemic racism, talking about the law enforcement system. So that's something that companies are grappling with, as well. Where do we start? If we've started, what do we do next?

CA: I wanted to follow up a bit on that portion of your answer about, obviously, the murder of George Floyd being this real flashpoint in 2020. And then the summer of all these protests, the way that that sort of changed the national conversation and certainly [the conversation] in companies. Also in higher ed – we started having a very different conversation at 性视界 Business School. 

And, you talked about this a bit already, but I would love to hear you talk a little bit more about the nature of that change. I am really interested, also, in your view about what the long-term impact might be. I know you can’t predict the future, but what’s your sense of where we might go from here?  

CM: Yes, okay. So, first what changed immediately was the conversation. I'll give GV as an example. We have a monthly conversation where we talk about inclusion, belonging, culture. And, we had a meeting planned later that week, just on the calendar, and then that weekend happened, George Floyd's murder. And people were negotiating individually how they were responding to that news. And, depending on where you sit, there was everyone from, "Hey, there's a lot of looting going on," and, you know, focusing on property, to people like, "Okay, this is my first ever protest, how do I organize?" So, these very personal conversations people are having at home over the weekend.  

"that's something that companies are grappling with, as well. Where do we start? If we've started, what do we do next?"

Second, then what? How do you actually conduct the conversation? And so, at GV, we had a conversation where we defined systemic racism and we talked about centering that conversation and how it’s connected to many other forms of social or systemic oppression. But we wanted to talk specifically about systemic racism and the many different institutions in which it shows up. 

I think most people realized that this isn’t a conversation we completely leave at the door when you come to work on Monday. This is something that is shaping how people got up that morning. This is a global conversation and we should do something. So, there was that realization that a lot of companies had that we should do something; we should have some sort of dialogue. Whatever tool we have, wherever we are, we should use some time to focus on this. And I think that that’s different. That’s different from a lot of issues that are considered political and, you know, that some people feel that you can just kind of check at the door. 

Another thing companies had to deal with besides these internal conversations was, "What do we say to the world?" We started to see the statements on Twitter, and Facebook, and LinkedIn, and the black squares, and the donations – and honestly, I've never seen leaders so confused over how to proceed. But certainly if we looked at the masses of individuals and companies and leaders making these statements, there was a pressure such that even if people were going to say, "We're going to keep this conversation internal," they came to me and they talked about it. There were [different] ways that that was done. There was, "Here's our statement, we're going to put it out there and then we've done it." There were statements with donations to organizations doing some of this work; doing social justice work. And then there were people who were putting out their plans, their anti-racism plans, et cetera. We saw this whole range of reactions and some hand-wringing behind the scenes over how to handle that. And then finally, you know, people hit this kind of fork where they were like, "All right, are we going to carry this forward? Is this going to be a cameo by our leadership team or are we going to lean in and do some work? And, if we do lean in and do some work, what does that look like?"  

And that’s ultimately where I essentially did a series of conversations with a number of our portfolio companies around what you do now. I wrote an article about this, posted about it. I’ve talked about it. How do you create that long-term accountability structure from the executive level across the company to create strategic pillars for a multi-year diversity strategy?  

DH: Long term is something that businesses haven't dealt with in this space. Long-term focus on equity is not something – I mean, long-term focus on a lot of things is difficult for businesses. But it's a very different experience based on who you are. And obviously, in the US at least, leadership at companies is not very representative of the country as a whole. So, this in some ways is kind of new for leadership to decide what does long term look like, what does investing long term look like. And, I mean, it's a lot of work.  

CM: It is a lot of work. I think, you know, in some cases leadership teams and particularly CEOs, co-founders, et cetera, had to make this decision. Are we going to lean in directly, or are we going to give more time than we had been prior to this? Are we going to tell our HR leader or our diversity leader, "Listen, you better come up with a plan. Are you going to bring this to the board? Is it going to go to the board level? Is it going to stay local?" And, we've seen across the spectrum how people have handled that. I will be honest, because there was another part, Colleen, to your question, around "What do we think the long-term impact of this is?" I don't know. I think it provided a lot of momentum. But if we were to follow the media coverage, for example, it started to die down even before a lot of the global protests stopped or slowed down significantly. And knowing the very divisive political situation in the US and also in other parts of the world, that started to take precedence in the media – as if these conversations aren't connected. They very, very much are. And, so I personally felt a sense of, look, I felt this almost as a countdown clock. Where I was like, "Okay, all of these initiatives, all of these plans" – not only at GV, but I was asked to consult externally quite a bit with different institutions across different sectors. And I felt this sense of responsibility. I felt I had more support than ever. But I wasn't sure how long that was going to last. And I'm still not sure how long that bump is going to sustain itself.

CA: That’s a really good point. What I hear from my own network, there is the sense that it’s like, “Well, we have this window. We don’t really know how long it’s going to last.” And so that is an interesting space to be in. So we can’t predict the future. We don’t know really what the long-term impact, the more systemic impact, will be of these conversations and of this new urgency and renewed attention. But I would love to hear you talk a little bit about, say, the long-term process to create change and to actually genuinely establish a greater degree of equity and inclusion in the workplace, particularly in tech. Can you talk a little bit about what that would look like or what does that mean?  

CM: Yes. So, I'm going to say three things. The first is going to be something you just alluded to, which is the composition of leadership. Throughout my career – when I was at Catalyst as a consultant working with dozens of Fortune 1000 companies, through to working at Pinterest, through to working at GV, now working both internally for GV and across our portfolio companies – pretty much wherever you look, you see this underrepresentation in leadership. And you can define what that underrepresentation means by industry, by the company, by the talent pool. But there is underrepresentation. And, I think that even companies that have done well on diversity have still struggled when it comes to, you know, senior leadership representation and board representation. Even having these conversations at the board level. Part of the reason for that, I've found in my own experience, has been a combination of things. There's "Well, what does that conversation look like at the board level?" Because the accountability wasn't necessarily coming from the board. So the leadership team was like, "How do we shape that?" When you're doing these very specialized senior hires, even if you have some sort of diversity slate rule, a Rooney Rule, or some modification of it, you see the exceptions start coming in at these very senior roles. "You know what, this is super mission-critical." People start to get nervous. They pattern-match more. And even if they say they've got to meet this diversity slate, somehow with that senior layer, exceptions happen. And you [only] have a few shots, because you only have a few openings. So those exceptions reverberate.  

“How do you create that long-term accountability structure from the executive level across the company to create strategic pillars for a multi-year diversity strategy?”

Another thing that I would say I see happening is not enough of the diversity investment happening on the product side, the client side, the services side, B2B side. It's a lot of internal talent discussions, but it's not necessarily [connected]. Every conversation you're having about the business is kind of rooted in, "We want to make the world better" or "We want to make this process easier for our customers," but then you have this diversity conversation and it's just about hiring and talent. Or most places are also talking about inclusion and belonging, which is excellent. Some organizations are talking about equity, right? So, how do you create these experiences where anyone can succeed? You see this continuum but if you can't link that and carry that through to your product, then the conversation won't – people won't connect the dots. And they will relegate it to, "This is a people thing." So that's another area.  

And, then kind of another variable we've talked about is, "How much senior leadership involvement? How much is everyone's responsibility? How much is this built into reward systems?" Versus like, "We'll do a manager meeting every couple of months or once a year on this," but otherwise, again, it's an HR, people thing. So, I think what this would look like, to answer the question, is that we would see that diverse representation in leadership, and we would see leadership that looks something like the customers or the end-users of their products. We would see innovation around inclusive AI, inclusive products. I've even consulted on consumer products and, you know, skin tone and thinking about the vocabulary, the marketing, all of that, the creators that you use. Holistically, we would see all that prioritized across the leadership team globally for companies.  

CA: Yeah, that’s really well put. I mean, I think what you’re saying, right, is it just becomes core to the business. Instead of, as you said, over there as this HR thing. It’s about changing people’s approach. Like you said, you can have these processes like a Rooney Rule or whatever. That’s great. But then if those can easily be sort of overridden or not adhered to by the people in power, how effective are they going to be? So I think those are really, really great points.  

DH: Yes, this is really important that the work be core, but I'm starting to wonder – and you have a lot of experience in business and working with businesses – is that something that American businesses struggle with, making things core? I mean, there's this history of these segments and groups in HR and marketing. And, it strikes me that even with technology, making technology core is difficult. Companies are like, "Oh, there's an IT department. I don't have to think about tech because I'm in marketing." Is there something more systemic even about companies or US business that has struggled with making things core unless it's part of the built-in values, and as you said, of the leadership and how they contribute?  

CM: That’s a good question. I think it’s less specific to American businesses or companies. I’m kind of obsessed and fascinated by cross-cultural conversations on management and inclusion. My master’s work was in cross-cultural psychology and management. I just think that, in general, humans are kind of slow to adapt and change. And there is pain involved, right, to making these changes. So, we kind of fall back on our patterns when things get hard until we’re forced to do things differently.  

Let’s take the example of a distributed workforce or working remotely. A lot of organizations still, right up till March 10th or 15th, or whatever date it was that you became remote, just simply didn’t believe in having a very distributed workforce and assumed that that would lead to a profound loss in productivity. And, yes, there are definitely trade-offs with us not being in the office. Don’t get me wrong. But the fact is that there has been business continuity and, you know, it really depends on who you’re talking to and where they sit. But we haven’t seen the complete collapse of the global economy. Actually, doing things different[ly] requires all these pattern changes, but sometimes it can be done, right?  

DH: What advice do you have for getting unstuck, especially around the product or your service?  

CM: Yeah, I'll give a concrete example. I was at Pinterest and I was a Pinterest user. Like many users, I go through cycles – my on-ramp was a friend's wedding and I got obsessed and then, you know, kind of put it on the back burner for a bit until I changed jobs, got a new wardrobe. I had these cycle times when I got really, really into it. And it brought me joy.  

However, as a Black woman, as a woman of color, when I was searching for beauty tips, particularly hair care tips – one of my biggest [Pinterest] boards is called Don't Touch the Hair. I had learned how to search and make the product work for me, but there were some extra steps that I had to take to find things that were relevant to me. And even I took that for granted in some ways. If anything, it was inconvenient, but I learned how to just make it work. There I was, status quo.  

What changed for me personally were a few things. Starting to get feedback from users, other women of color, who had the same experience, were frustrated by it. Realizing that with folks spending more time online, that was shaping their self-image and worldview. I mean, whether you’re online or not, you are getting these messages from marketing, and from beauty magazines, et cetera. But [with Pinterest] you’re trying to create this idea of what your life is going to look like. You’re constructing that in these boards.  

“humans are slow to adapt and change. And there is pain involved in making changes. So, we fall back on our patterns until we’re forced to do things differently.”

So, I remember going to the product team and approaching someone on the computation side and someone on the product side with some specific user queries that just didn't work as well for users of color, users with deeper skin tones. And, the way it was initially approached was a little like, "Okay, we need to solve that query," versus, "We need to revamp this ecosystem."

I officially created a full-on product team – several engineers, several designers – and began an initiative around creators. Now, when you go on the platform, not only is there more representation in search and ads, but this feature that we created, where you could customize search by skin tone – the recall on it is better than I've ever seen it. But that was a slow, like three-and-a-half, four-year process of kind of dipping a toe; crystallizing how it intersects with a core business need that the founder had envisioned; realizing customers wanted this; doing all kinds of research experiments; and then realizing that the rewards exceeded the pain of making a change.  

DH: I especially find it interesting how you said, even you, who’s aware of the space, sort of put up with it. What does that cost? What’s the cost to you personally? What’s the cost to people to have to interface with technologies where they actually have to put in that effort?  

CM: You know, wow, it's one of these costs that is often subconscious. And, we learn a lot about it from the amazing strides that have been made in accessibility work, right? Which, still, anyone who's doing accessibility work continues to advocate and struggle to mainstream technology that is inclusive if you have low vision, if you are hard of hearing, et cetera. So, there's still this very slow crawl towards creating technology that benefits everyone because it takes the user issue and makes it more obsolete. But I don't think I realized it consciously until I experienced it and heard it from people. And, it wasn't a lot of people. I mean, most people had just kind of figured out and navigated their way. But when they talked about how their self-image and the vision for what they wanted to create were intertwined, I think realizing that power and the potential damage of not doing this became really strong. And so, when we were in that role, when we were going through different iterations of what this feature could look like and the fact that we even create a feature, then we started all asking these questions. Like, wait a minute, so what have people been doing? You show this example of how people were trying to get around these algorithms and their assumptions and you realize, wow, once it becomes conscious, I think that is when it feels the most costly. But it's the idea that something wasn't made for you, right? It's awful.  

DH: This is a good point, right? Once you can get the people with the capabilities focused on the right topics, it’s not difficult. But they need the motivation to do it.  

CM: Yes. They can get creative. It was just, it’s just amazing what they can do.  

CA: I wanted to ask, Candice, you mentioned a little bit about your career journey. You were at Catalyst for a number of years. Catalyst’s been around for decades; it’s kind of the premier think tank and consulting firm around gender equality in the workplace. They work, I understand, with a lot of large companies, right? Fortune 500 companies that have been around for a long time, publicly listed companies. You went from there to Pinterest, to this high-growth startup in tech. And now you have moved on to a venture firm working again with a range of companies, but probably in a different space in a lot of ways, and a lot of different types of companies. So, I’m just curious to hear, as you think about your own journey, if there are things that you found are very consistent about how to push forward for diversity and inclusion and equity across all of those different kinds of organizations. What are your reflections about that?  

CM: Yes, absolutely. All such very different environments. I would say at Catalyst, just the dynamic nature of working with so many different companies, but a lot of them being very large, multinational Fortune 500 companies, the cultures were definitely different. Definitely more traditional, a traditional corporate culture, especially when I was working in a professional services firm environment. However, going over to tech in 2015, 2016 – the diversity conversation was so new. And I found that so baffling, knowing that IBM had its first ERG in the late '70s and suddenly tech was like, "There's this diversity thing." Twenty-fourteen was kind of that big moment, especially when a lot of the data started to come out and the conversation became more central in tech. And also, the tendency for high-growth tech not to want to look backwards, but to very much want to innovate forward, was just super interesting to me because there was so much to be learned from companies that have been working on this, in some cases, for a couple of decades already.  

“there’s still this very slow crawl towards creating technology that benefits everyone.”

I think, though, that a similarity across both – and I see this in venture somewhat as well – is the point at which companies got stuck. There was more governance; there tended to be more people in longer-tenured diversity roles, people who came from different sectors, in Fortune 500 companies. As opposed to tech, where you had people who were pretty short-tenured. You had probably 30 percent of people in diversity leadership roles who had been in other roles at companies, but were very passionate and had moved into diversity as the primary role, and were just amazing at it in all kinds of different respects. But [there was] a formula of "inclusion equals ERGs and employee survey results and some diagnostic workforce data, and then hiring, hiring, hiring, hiring." Versus moving into equity and moving this conversation into the core of the business and more broadly looking for responsibility in the business, right? That point tended to be a jam no matter where you sit.  

And in venture, I'm going to be honest, I would say overall just less accountability for diversity. I will say certainly [more] so than before, as we've talked about George Floyd's murder, and as we've talked about this reckoning and this uprising that we've seen from people and from employees internally saying, "It isn't good enough that you put out that statement, it never got followed up." But I would say [before] spring of 2020, the accountability that we started to see in tech in 2014–2015 did not extend to venture.

Now, we are seeing in venture – particularly the funding model, and the fact that the funding model and how you make connections to get funded are so network-driven – that the networks are just very insular. That has become a point of conversation now. I hope that leads to more than just conversation. We are seeing a lot of coalitions. We're seeing groups get together and have several conversations with other firms or with cross-sector organizations, indexes, metrics, accountability, et cetera. I'm seeing something that could be promising, but I still just don't see the same level of accountability yet. And I don't know if that traction will stick. 

CA: It’s interesting to hear what you’re saying about how the conversation does seem to be evolving in venture. I am just so curious to see where that goes. The other way to ask this question is less about your [past] journey and what you’ve learned over the years. This is a relatively new role for you, right, at GV? You started maybe a year ago or something like that?  

CM: Yeah.  

CA: I just would love to hear you talk a little bit about what motivated you to move into this space. What are your goals? What do you hope to accomplish in this particular space as opposed to the past work that you’ve done? What’s different about it? What are you excited about?  

CM: Yes. So leading equity, diversity, and inclusion in venture is super unusual. The firm didn’t really have a model on which to build this role. There is no counterpart that I could go pick their brain and download and then say, what do I do now?  

Although there are several individuals who have specialist roles on culture who have really made inclusion a central part of their purview, and there are some corporate venture capital funds that are starting to create more roles like this 鈥 or at least beginning to experiment with a portion of people’s roles dedicated to it.  

But as a full-time role, it's pretty new. And, that's both exciting and a bit intimidating. I would say what motivated me to take the role – really there were a couple of things that motivated me. One was, I wanted to continue the work that I was doing. I'm so devoted to this equity work, but I wanted to figure out how to continue to do this work at scale in a meaningful way. And, if we think about one of the biggest challenges in terms of systemic racism and systemic bias, it's the wealth gap. I remember actually back when I was at Catalyst, one of the researchers, Dr. Kathy Giscombe, led a session about the wealth gap that was amazing, informative, and frankly, terribly depressing. And nobody forgot it.  

"Now, we are seeing in venture – particularly the funding model – that the networks are very insular. That has become a point of conversation now."

And so, you know, when you think about the amount of capital that flows through venture and private equity, and if you continue to extend and think about how people get funded and raise capital, it is such an important point. As I said to Ben at Pinterest and as I've said to others, "How do we build the next great company that happens to have an underrepresented woman at the helm, for example?" And that has this leadership team that is representative and diverse, as we've talked about. And so, that was intriguing to me. I had never seen a role like this before. And I should probably mention that the other thing that really – that I found very intriguing was that the role itself is equally distributed across three different functions. So, part of my role is advising the portfolio, which is akin to my Catalyst work. Part of the role is leading equity, diversity, inclusion for GV, which is akin to, to some extent, my Pinterest work. And, then this third portion of the role, which is something I've never done before, is around working with the investing team, in particular, on diversifying the founders that they are meeting, the way that they are thinking about funding, their personal networks. Just super, super intriguing, powerful, and has the potential to make great impact.  

CA: So for somebody who’s watching who is interested in these issues and is maybe aspiring to make a difference in the tech sector the way you have, what are things that you recommend that they read or watch or think about? People whose work they should get to know? What kind of advice would you give?  

CM: In terms of the tech sector, it's understanding some of the data around, basically, what's up with this pipeline thing? Is it a pipeline problem? Where do you start to see people drop out of the workforce, particularly people from underrepresented backgrounds in tech leadership? And, then the founder population. The Kapor Center has this wonderful site. It's called the Leaky Tech Pipeline. And you can basically explore across the student to tech pipeline, where you start to see people drop out of the workforce. Everything from retention, what starts to influence people's success, or in other circumstances, people leaving the tech workforce [altogether]. So, the Leaky Tech Pipeline is another thing that I recommend.  

There are wonderful organizations for technologists of color. One organization that I recommend is specifically for Black software engineers. It was actually founded by a former Pinterest engineer who I had the pleasure of working with on my team for just a few months before he founded the organization. So, those are a few tools that I would give.  

In terms of books, I think there have been so many books that have been recommended that I am almost loath to add to the list. But a few that we distributed at GV are So You Want to Talk About Race by Ijeoma Oluo. I just think it's such an accessible book to understand people's daily experiences. And Biased by Jennifer Eberhardt. Even if you tend to feel that a lot of these conversations, or a lot of the vocabulary, just kind of escape you, the facts are there in that book – that there are different outcomes based on our immediate subconscious perceptions of individuals.

DH: Candice, thank you so much for spending time with us today. This has been a really fascinating conversation.  

CM: It’s been my pleasure. Thank you.  

DH: That’s a wrap on the interview, but the conversation continues.  

CA: And we want to hear from you. Send your questions, comments, and ideas to justdigital@hbs.edu.

The post Candice Morgan on navigating inclusive strategy in tech appeared first on 性视界 Business School AI Institute.

Brandeis Marshall on the potential for data equity
Mon, 22 Feb 2021

In this episode, we speak with Dr. Brandeis Marshall from DataedX about the consequences of data inequity, the balancing act of qualitative and quantitative mindsets, and the critical importance of humanizing data systems.

Data science and artificial intelligence have inescapable influence and power in our world. The people who are the most negatively affected are often the ones whose voices are not heard. What does a digital world that works for everyone look like? And who gets a seat at the table?

In this episode, our hosts Colleen Ammerman and David Homa speak with Dr. Brandeis Marshall about the consequences of data inequity, the balancing act of qualitative and quantitative mindsets, and the critical importance of humanizing data systems. Brandeis is the CEO of DataedX, a data science consultancy, among many other pursuits. She develops tools for practitioners to interpret the racial, gender, and socioeconomic impacts of data and technology.

Watch the episode with Brandeis Marshall

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So, today, we’re joined by Dr. Brandeis Marshall, CEO of DataedX, a data science consultancy, among many other pursuits. Dr. Marshall develops tools for practitioners to interpret the racial, gender, and socioeconomic impacts of data and technology. Welcome, Brandeis. Thank you so much for joining us for this conversation.

Brandeis Marshall (CEO of DataedX): Well, thanks for having me. I’m really looking forward to this.

David Homa (Digital Initiative director): Brandeis, good to see you again.

BM: Nice to see you, too, David.

DH: We're going to go right to data equity – big umbrella – and talk wide. So, Brandeis, one of the things we want to start with is what exactly is 'data equity' and what are some of the consequences of data inequity?

BM: Right, so, let me start with what ‘data equity’ is. Data equity means that your data is inclusive of people of all backgrounds. So, that is race, gender, and class. That means that the data itself is doing its best in order to guard against any biases. And, this is very difficult because every time you are trying to grab data, you are putting your own past experiences into it, right? There could be some features that you don’t even know about. There are certain things that you have a skewed view of and this affects how you collect data. So, ‘data equity’ is really trying to take a step back, thinking less quantitative and trying to think more qualitative. What are we trying to attain? How is this data actually trying to move things forward? So, data equity is really trying to move forward quantitative people in a way that’s a little bit more qualitative, which means let’s start thinking about the people on the fringe. Let’s start prioritizing those people that are most vulnerable. Let’s now try to figure out what is our impact first and not just try to do things because we can do them.

The way data inequity has manifested in our society is, well... look anywhere. There are spaces where... okay, so, I just voted. I wound up in a county that’s majority people of color, majority Black, to be very specific. I waited in line for six hours and twenty minutes. But, in a county that is majority white - so all the “non-people of color” - the wait in the line was about fifteen minutes. So, to me, that’s the idea of voter suppression. That is the point of, okay, all of a sudden there is no idea that there’s a lot of people coming to the polls? There’s not enough polling stations, there’s not enough poll workers in [these] places. There’s disregard [for] the fact that people are trying to do their civic duty. To me, this is all going back to data inequity. If you know the population of a county, you know how many people are registered to vote. You know that there has been a big campaign in order for individuals to be voting. You can expect that there’s going to be a lot of people wanting to vote, especially if it’s in-person, especially if it’s the first day. And then, as far as the reporting has come out... people started to get in line to vote at about 4:30, 5:00 am. And the polling station didn’t open until 7:00. There are plenty of opportunities to know [with] just small data points and how they have connected to each other, to know that this was going to be a long line. However, there was no type of forecasting, there was no prediction. This is when data could have been used more equitably in order to then be able to deal with the influx of people. So, that’s just to give a very small example. But this happens in every sector of our society and it happens on a minute by minute, hour by hour, day by day basis. And, for Black people, in particular, this is the way we live. It’s just not fair. [I’m] not saying everything in life should be fair, but this is literally trying to push down, push aside, keep marginalized certain groups of people.
And, it’s very intentional and it’s very strategic. And so, data inequity happens a lot. What I’m trying to do is push forward the agenda of having more data equity. To not make those people on the fringe feel like they’re on the fringe and actually include them as part of the main thrust of our society - because we all are equal, but we all don’t have equality.
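Marshall’s voting example is, at bottom, a capacity calculation: a few small data points are enough to forecast a long wait before it happens. A minimal sketch of that calculation follows; the function name and all numbers are hypothetical illustrations, not figures from the episode.

```python
# Back-of-the-envelope sketch (all numbers hypothetical): a handful of
# small data points - how many people are already in line and how fast
# a polling place processes them - is enough to forecast a long wait.

def expected_wait_hours(people_ahead: int, stations: int,
                        voters_per_station_per_hour: float) -> float:
    """Hours until the last person in line reaches a voting station."""
    throughput = stations * voters_per_station_per_hour  # voters per hour
    return people_ahead / throughput

# An under-resourced precinct: 380 people queued before a 7:00 am open,
# 5 stations each handling about 12 voters per hour.
print(round(expected_wait_hours(380, 5, 12), 1))   # -> 6.3 (hours)

# The same crowd with more capacity: 25 stations.
print(round(expected_wait_hours(380, 25, 12), 1))  # -> 1.3 (hours)
```

Running the same arithmetic before election day, with registration counts and turnout estimates as inputs, is the kind of forecasting Marshall says never happened.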

DH: What is it about people who work a lot with data that makes them not realize that they should look at this more qualitative side of it?

BM: Okay, I’ve been in the computing field for twenty years. I’ve taught for the majority of that time, since I was a graduate student. And, what I can say is that when certain people are analytically minded, they have what I would [say is] akin to a fixed mindset. And they believe that numbers are facts and that numbers are the only thing that can be trusted. And so, these individuals truly adhere to the idea that any time that you are creating something inside of a digital space, the intention supersedes the impact. So, they believe that the intention can somehow be kind of thrust into everyone’s understanding, and that is what is going to help everybody. And, it’s not - it’s not the reality, right? It’s just not the way it is. So, it’s very hard to shift mindsets when they’re very determined that numbers are the solution, that facts are going to get you to a plausible end.

“You’re going to have to actually make sure there is diversity in leadership, which means there’s a power structure that’s going to need to change.”

What I try to do is shift a little bit in how I, you know, speak to them and how I give them examples, because I think it’s important that they see, “here is some bad impact.” So, case studies - I tend to use those. With those individuals, they will say, "Well, that’s only one case." So, then I bring up another case. And, [they’re like], "Oh, that’s just another case." And, I say, "How many cases would it take?" How many times have you been writing code and you get an error? You don’t understand what the error is. You call it a bug, but you find out it really was a logic issue. So, how many times do you go trying to fix an error? And, then they will say, they’ll keep doing it, they’ll keep doing it ’til they get it done. Okay, so why don’t you take that same approach when it comes to the impact of your work? And, that tends to really set them off-kilter. It’s an uncomfortable place for them.

So, then I say, okay, now that you are kind of getting that what you do in code is what you should do in the real world, then it’s just a matter of continuing to have a conversation. And, then it kind of spins out, because all of a sudden it’s like you have a little child who just realizes [that] they understand their ABC’s. They just want to suck up everything. But there is definitely a difficulty in having people who are very analytical understand that there is so much value in qualitative approaches. That, analytically, you want to think about what’s happening for the masses, [but] the people, and the items, and the ideas that are on the margins actually are the ones you need to think more closely about. And, that’s where I think qualitative approaches are so much more important, especially in this time when technology and people and data are all in the same soup of the world.

CA: That is so interesting. I mean, it sort of sounds to me like what you described is trying to dislodge a sort of ideology or a sort of belief system, right? About, like you said, the veracity of numbers and what information you should rely on. Like you said, it’s this fixed mindset and you’re trying to loosen it up and get people to think in a new way, which is really powerful.

BM: Yes. And, it’s very, very difficult. Because, you understand, ever since you were five years old, four years old, you were inundated with math, reading, and they’re separate. And, you’re either a reader or you’re a math person. Like, you can’t really be both. Well, I was both. I danced when I was younger. So, I took ballet class, and jazz, and African [dance], and clogging - I did all that stuff. And, I also loved math.

So, for me, I see the duality of the creative brain and analytical brain and how they need to be synergistic. Because otherwise, we’re not a whole person. And, it’s the same thing that happens inside of tech, inside of people who are coding, inside of how you manage data. You can’t be one without the other.

CA: Yeah, that makes a lot of sense. In this duo, I’m the qualitative person and David’s the quantitative person. In case you were wondering. In case that wasn’t clear. [laughter]

BM: I figured that out. [laughter]

CA: But that is in fact one of the things that we really enjoy about working together. We come from these two places, but then we have a [shared] interest. And, like you’re saying, [we’re] thinking about it more synergistically.

It reminds me of a question that I had - maybe a different way of tackling the same question you were just talking about. I was thinking about taking it up a level. It seems to me that obviously technology, which is such a huge catch-all, is top of mind for individuals, companies, institutions, and governments. And, I’m always kind of struck by how there are a lot of conversations about inequity in technology, things like algorithmic bias and how homogenous the tech sector is, but then there are all these other conversations that are technology-rooted, that are also very vital - like cybersecurity, right? Like the power of big data. These are very top of mind for companies, for boards, for things like that. And it seems to me that [the conversations] are often very separate. So, even the issues around technology inequity, they still kind of get siloed. But it seems to me there’s got to be a relationship between topics like, say, cybersecurity and [equity] concerns. So, I’m just wondering, going up from the individual to the macro level, how you see that. Why do [these conversations] get separated and what can we do to have a more integrated conversation?

BM: Yeah, I think that there is a lot of grassroots effort, right? There’s different organizations. There’s non-profits. There’s, of course, for-profit companies that are trying to make these connections between the two. And, I think, quite honestly, capitalism is making money the top priority. So, when money is the top priority, people aren’t the priority. And so, making things siloed is actually really good for business, right? Because you have one track that’s working on one problem, another track with another problem, and finding connections between the two doesn’t necessarily optimize the bottom line. [What] would optimize the bottom line is to create as many threads as you can in order to get money. But in order to solve the problem, they want to now create avenues in order to solve it a certain way. Because that sort of way is not going to detract or mitigate or disrupt their funding channels. The trouble is, in order to really make impactful change, to make an inclusive work environment, to make an inclusive product, you’re going to have to work across different sectors. And so, that means that companies and organizations are going to have to now cross-pollinate. There’s no other way out of that particular box. If you really want technology to be inclusive, that means you have to do more than have a diverse team. You’re going to have to actually make sure there is diversity in leadership, which means there’s a power structure that’s going to need to change, right? That power structure means you’re going to have to make people who are, quite frankly, white supremacists or act[ing] for white supremacy at the very least, step back and stand down. How is that going to happen when the money is the ultimate goal?

DH: So, I wanted to talk a little bit about raw data. And, data historically can be very messy, especially as it gets older. And, a lot of people don’t really understand what that means when you try and infer or learn things from that. Is there a good way that you would recommend people think about the underlying data underneath systems that are trying to advise them?

BM: Oh, that is a good question. Data is everywhere. So, how we bring it into our own orbit is what is always interesting and that can be problematic. So, I guess the best way to describe it is to really look at it in the eyes of a child. When a child learns something new, what happens? There’s all of these videos about the first time a child, like, eats ice cream and it is hilarious, or the first time a child, you know, understands what a popsicle is. That is what happens with raw data for us when we receive that raw data. We don’t necessarily know how to navigate it. We’re a little stunned by it. So, every time we’re in a system, that data has been in some way sanitized. And so, we have to kind of think back and recall - and I know it’s hard to do - what it was like the very first time we actually saw that content, we saw that data. And, data comes in many different forms, right? It’s audio, it’s visual, it’s a culmination of many different things. But the data, the rawness of data, is very helter-skelter.
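Marshall’s contrast between "helter-skelter" raw data and the sanitized data we usually meet inside systems can be sketched with a toy example. The records, the `clean_age` helper, and every cleaning rule below are hypothetical, and each rule is exactly the kind of judgment call she describes.

```python
# Toy illustration (all records and rules hypothetical): the same field
# arrives in conflicting formats, and every cleaning step quietly bakes
# an assumption into the "sanitized" data downstream users will see.

raw_ages = ["34", "thirty-four", "", "34 yrs", None, "-1"]

NUMBER_WORDS = {"thirty-four": 34}  # spelled-out values we chose to map

def clean_age(value):
    """One possible sanitization - note how many judgment calls it makes."""
    if value is None or value == "":
        return None                      # missing: drop? impute? flag?
    if value in NUMBER_WORDS:
        return NUMBER_WORDS[value]
    token = value.split()[0]             # assume the number comes first
    if not token.lstrip("-").isdigit():
        return None                      # unparseable: silently discarded
    age = int(token)
    return age if age >= 0 else None     # assume "-1" was an "unknown" code

print([clean_age(v) for v in raw_ages])  # -> [34, 34, None, 34, None, None]
```

The tidy output hides several assumptions: that "-1" meant unknown, that blanks become missing values, and that "34 yrs" and "thirty-four" mean the same thing - which is why seeing the raw data, not just the sanitized version, matters.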

CA: It’s interesting, what I heard you saying, Brandeis, is that we need to bring a certain amount of humility to encountering data. I think there’s an interesting parallel to what you were saying before about institutions. We need to bring some humility to what we encounter and not sort of think that we have the answer a priori, or even have the framework for understanding it, right?

My only real work with data is with this longitudinal study of HBS alumni that I’m part of. And, I’ve learned a lot there. I’m not trained, aside from taking Statistics 101 in college twenty years ago. One thing that I learned from the people on the team who actually can run regressions and who understand data is, you know, oftentimes I’ll make an assumption looking at a table or something, and it might be interesting. But it’s easy for them to point out, well, here are some things that you are not considering that sort of change the course. As somebody who likes to impose a narrative on things, as somebody who is a reader, it’s been helpful for me to realize that. What you’re saying resonates because it is about having that humility and knowing that even if you have a story in your head that’s very compelling, you can’t necessarily impose that on the data. There may be things that you need to cultivate the ability to see and to be aware of, even if they kind of disrupt what you would like the story to be.

BM: Yeah, exactly. Exactly. And, that’s the reason why having groups of people look at the same data with different perspectives, different lenses, is so vital. That’s the reason why having people who are on the analytical side and also people who are - I love your words, you said, "I’m a reader." I have people who are more on the communication side actually look at the same data because we see it differently. And, that’s very important to get to the same conclusion: to see it differently, different paths. And, that’s okay. That’s normal. That’s human.

DH: That makes sense.

CA: Just to be clear, David does read - he sends me articles constantly!

BM: Well, that means that he knows how to send things. [Laughter]

DH: That’s true. I can’t send things to Colleen without having read them because she asks me hard questions about them. [Laughter]

It’s interesting. I’ve written a little bit about the future of workers, this ideal of someone who has a balance of computing skills, applied math skills, and domain expertise. I wonder if you could talk a little bit about what are good ways to bridge domain expertise, because when it comes to data, you have to sort of know something about what you’re looking at. If it’s, for instance, weather data and you don’t understand how temperatures work in different parts of the world, it’s very easy to misunderstand it. The same is true of data about people. For instance, I know that this is a big question in social media where online use of language, what is considered inappropriate in some places and in other groups is not inappropriate. The use of words is different, how they’re used. How do you think about the future of bridging that divide? Is it just people literally with different perspectives or is there a way to get that into one person?

BM: So, I’m of the philosophy that it’s not possible to get it into one person. I think you’re going to be highly leaning toward one of those three pillars, right? You’re going to be in that domain. So, I think it’s very, very important that you keep your mind and your work open to other people’s perspectives. Because that is where there’s strength; there’s strength in the different thoughts. There’s also strength in, not necessarily compromising, even though you are compromising your own ideals - you are trying to grow yourself. You’re in this growth mindset. And the only way to do that is to be around people who are not like you. And, the more that you spend time around people that are not like you, the more that you realize the different ways in which data can be easily manipulated. And so, you wind up being more human. You’re developing your humanity, in a way. And, that, to me, is what tech, as the big umbrella, is missing: the humanity of it all.

“the more that you spend time around people that are not like you, the more that you realize the different ways in which data can be easily manipulated.”

I think tech is just starting to get there - this conversation around the data, and the fact that data is a commodity at this point. It’s being traded like, I don’t know, casino chips. I think it’s important that more people have an awareness of how their data is being used and how not one person can be all things. And, I think that happens to be the issue in a lot of organizations. They want a data scientist who has twenty years of experience in like five different disciplines. That’s not going to happen. You’re going to need five different people, dang it! You’re going to need five different people. Like, this is not going to work. But again, going back to my earlier comments about that ROI, that money at the end - if you can hire one person to do five things, why hire five people to actually get you to that goal you’re looking for? So, there needs to be that type of shift. No, it needs to be multiple people. It needs to be multiple people.

DH: And, those multiple people, you said in there - they obviously have to have an appreciation of the other sectors. And, I think that’s where we see sort of this siloed world where people aren’t trained necessarily, or even have experiences, like you’re saying, workspaces, to gain an appreciation for those other spaces.

BM: Exactly, exactly. And I think that’s part of the issue, because work has been very much, “You do your job, you stay in your lane,” that means that the silos just kind of reinforce themselves. It’s hard to get out of that grind. That’s the reason why I think, at least for now, while we’re in this pandemic, which means people have to be online, which means they’re trying to find connections in different ways, this is an opportunity for organizations to now try to do ‘cross-pollination-ships,’ to really start to think through: “How are this team and that team working toward the same common goal? What is really the strategic plan? How are these different parts actually moving together and are they moving together in the first place? Are they designed to move together? How do you design them to move together?” 

That’s part of the work that DataedX is trying to do - to be that strategic partner in what is happening in your data practices. Where are there silos? What type of skill toolkit do you need in order to bridge this gap? Do you want to really bridge the gap or do you want it to sit on a shelf? Do you want to be inclusive? And, how do you now think through some of your operations in a way that is looking at data to be informative, to be data-informed, not data-driven? To really be data informed, to really understand what the data is telling you in order to then make actionable change within the organization. And how can we strategically get you there? Is it talking to the data analytics team about algorithmic bias and fairness? Is it then going to the front-line workers like the cashiers and just talking to them about, okay, how do you engage with customers and get information from them to make sure that their address and their email is correct and let them know of different reward systems? There is a lot of richness on how we move toward the same goal. But we do need a more holistic perspective.

CA: I just love everything you’re saying about the growth mindset piece. I hadn’t thought about that connection, but I totally agree. It’s easy for somebody like me to say, “Well, I’m just not really great at math;” to have a fixed mindset about it. Instead of saying, “I have the ability to actually understand more about what’s out there than I believe.” Anyway, I just think that’s a great connection.

BM: Yeah, it’s really the way we’re taught things is also an issue.

CA: You’re talking about how people are educated, who is educated, these systems that sort of determine who does what, basically. And so, I just would love to hear you talk a little bit about what are some of the ways that we can create that change. It’s very easy to say, yes, we need more underrepresented groups in tech. But what actually, as you see it, are some of those critical pathways for getting folks who are not dominant in that sector and in that field into the industry in a more effective way?

BM: Yes. So, I think a lot about this. And so, I would say, number one, that internet connectivity and machinery equipment needs to be the priority. I think that at this point, how connected our world is, how much we rely on systems that are digital - that needs to be in every nook and cranny. So, there are places in this country that do not have good connectivity to the internet and that needs to change pronto, because if you do not change that underlying infrastructure, then everything else is for naught, right? There’s no need to be giving computers to kids if they have no way to connect to the internet. And so, that’s number one.

Number two, it would happen to be a lot of education on how to use technology. And, that means what are some of the pitfalls in doing it too much? What are some of the pitfalls of not doing it enough? When are there good times to use technology? Like banking is pretty much online now. So, having that type of instruction of fiscal responsibility and financial literacy is important. And, I suppose start with the parents and guardians of children.

Now moving forward to actual adults 鈥 that means that now it’s going to be about when to replace technology. That is something that is not really discussed at all. When do you move from one mobile device to another? When do you know you’ve used too much of the memory? How do you know what is on that particular device when transferring it? So, those are just some of the infrastructures.

So, in talking directly about technology, the same thing needs to happen inside of tech. Something that I do is amplifying Black people inside of technology - those that are in the past, those that are currently there today. And, we do this amplification a lot. In the past few weeks, there’s been - past few months, excuse me - there’s been a lot of "Black in" weeks. There is "Black in Computing," there’s "Black in Engineering," there’s going to be "Black in Data," "Black in Cancer," "Black in Cybersecurity," [etc]. This is one effort inside of the Black community in order to amplify ourselves. But why do we need to amplify ourselves when we’ve been here all along? There’s a number of different ways in which we can now build that function, so that being a part of this industry means that you’re actually included. Because, the way that things are going right now, the way things have gone, is that if you happen to be a Black, Hispanic, Indigenous person, you don’t really belong in tech. You can get the degree, but you’re pushed out within five years. And, that is what’s in the literature. And, that is intentional. That is strategic.

DH: I wonder, what set you on your computer science career?

BM: Interestingly enough, I talked about this a little bit before, it was a Gateway 2000. Who remembers the Gateway 2000?

DH: A cow box.

BM: Yes. Yes. [That] sets everybody apart. [laughter] You’re like, what generation are you part of? That was, for those of you who don’t know, the very cheap machine back in the 90s. My father and mother decided to get a machine, a desktop in the house, put it in the very cold basement. I was interested in the icons. And so, that kind of spurred my interest. I knew I wasn’t going to be a dancer because I am vertically challenged. I’m a people McNugget. [laughter] I really, I really was interested in what that [computer] was about. And so, that really set my course, and I was really good at math and I enjoyed numbers. And so, I said, well, what is this coding thing? What is this computer science? This looks kind of cool. And so I just pursued that in college. And then as I finished college, it was the dot-com boom, and something in my gut was like, "This is not going to last forever. This is not going to last forever." And lo and behold, a year later, it had completely busted. So, as I was finishing college, I decided to go to grad school because I wanted to finish education. I did not want to stop. I’d seen both of my parents go back to college, both earning their college degrees. And I actually remember going to my dad’s master’s graduation when I was a young person, all of about twelve. So, I knew that education was important. That’s what they instilled in me. That no one could take education away from you is what I was told quite often as a child. And so, I wanted to finish. I wanted to be a doctor. My maiden name is Hill - I wanted to be Doctor Hill. I’ll get that over with so then I can decide what I want to do with my life.

CA: So, we’ll do our final question, which we ask everyone as a way to end. For folks who really care about this issue of data equity, is there one takeaway or one resource that you recommend?

BM: So, when it comes to ‘data equity,’ I think the one resource would happen to be a number of books. I can actually share a number of books. I think some of these books have been in the media for a while now. But if you have not read them, read them. Algorithms of Oppression is by Safiya Noble. Race After Technology is by Ruha Benjamin. And Automating Inequality is by Virginia Eubanks. Yes, they are three women - two Black women and one white woman - but I think this gets to gender, race, and class in three different perspectives. That is very important. So read those books, digest those books, have book clubs about them. That’s number one.

Number two would be to hire strategic ‘data equity’ firms and to really have a conversation about what your strategic goals are when it comes to your data practices. So, it’s not just about helping your organization earn money, it’s also about helping the retention of individuals, helping to understand what is the impact of your products moving forward. And also, it’s about, “Are you going to be a good company?” Or are you just going to be a company that earns money? So, I think the big takeaway is really start to think about the strategic vision, in the context of reading those books. And bringing that skill set forward to your workforce. And that does mean you’re going to have to spend money that’s going to be about educating your people. And it’s not going to be an automated system. It’s going to be small groups. It’s going to be collaborative. It’s going to be co-created with firms. Because that’s the only way, in order to bring humanity into this particular perspective.

CA: That’s great, and I think we can definitely make those book recommendations available to our community, along with other book recommendations from other folks we’re speaking to. I think we should make sure we’re circulating those and urge people, as you’re saying, not just to read them, but to talk about them.

Well, thank you so much, Dr. Marshall. This has been really fascinating and really invigorating. I have lots to think about and I’m sure everyone who is listening does too. So, thank you so much for joining us.

BM: Thank you so much, Colleen, David. This has been fantastic. Hopefully this helps folks and we can have a continued conversation and some action behind all of these conversations.

DH: Yes, thank you. Thank you very much for joining us today. That’s a wrap on our interview, but the conversation continues.

CA: Remember, we want to hear from you, so please send us your reactions, thoughts, ideas, comments to justdigital@hbs.edu.

The post Brandeis Marshall on the potential for data equity appeared first on Harvard Business School AI Institute.

Ifeoma Ajunwa on the limitless boundaries of employee surveillance (Mon, 08 Feb 2021)

In this episode, we talk with Dr. Ifeoma Ajunwa from the University of North Carolina School of Law about the legal and ethical implications of workplace surveillance in the age of remote work, wearable tech, and DNA testing.

As automated and surveillance systems become unyieldingly dominant in our professional and personal lives, and as our professional and personal lives become unyieldingly blurred, hiring and managing practices have encountered new challenges in understanding and addressing systems of inequality in organizations. With often unclear ways to protect worker privacy, these new tools act as both a resource and a threat to anti-discrimination efforts in the workplace.

In this episode, our hosts Colleen Ammerman and David Homa talk with Dr. Ifeoma Ajunwa about the legal and ethical implications of workplace surveillance in the age of remote work, wearable tech, and DNA testing. She offers legal and scholarly frameworks in delineating and navigating the ever-changing boundaries for worker surveillance. Ifeoma is a tenured associate law professor at the University of North Carolina School of Law. She is also the founding director of the Artificial Intelligence Decision-Making Research (AI-DR) Program at UNC Law. At the time of this recording, she was an associate professor in the labor relations, law, and history department of Cornell University’s Industrial and Labor Relations School.

Watch the episode with Ifeoma Ajunwa

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): So today we’re speaking with Ifeoma Ajunwa. Dr. Ajunwa is an associate professor at the Industrial and Labor Relations School at Cornell University and also an associate faculty member at Cornell Law School. In addition, she’s a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University and also a faculty affiliate at the Center for Inequality at Cornell University. So welcome, Dr. Ajunwa. We’re very excited to talk to you today.

Ifeoma Ajunwa (tenured associate law professor, UNC Law): Thank you so much. I’m excited to have this conversation.

David Homa (Digital Initiative director): Thanks for joining us. We really appreciate you spending the time. So, we’re going to jump right into an interesting and deep question, looking at employee surveillance. I know this is a topic that you’ve spent quite a bit of time thinking about and researching and is on a lot of people’s minds with so much remote work. One of the things we’re interested in is how surveillance tools affect workers. What are the ways in which gender, race, and other axes of inequality shape those effects?

IA: Well, thanks for that question, David. Surveillance is something that has been around for as long as we’ve had the workplace. Employers do have a vested interest in surveilling workers, particularly in ensuring productivity and also in deterring misconduct. However, the issue arises when you have a workplace where the surveillance becomes intrusive or pervasive. And also surveillance operates on several axes in ways that can be discriminatory or that can be used to single out certain employees for harassment. So we do need to be aware of that. Currently, for American law, there are no limits really on what can be surveilled in the workplace. So, for example, in my law review article, I look at the various types of surveillance currently employed in the workplace, whether it’s surveillance of productivity, or surveillance for healthy behaviors through workplace wellness programs. And I find that the law really essentially allows carte blanche to the employer in terms of how far they can go in surveilling their employees. And while employers might think this is a boon or this is a benefit, employers do really have to be careful in weighing the surveillance choices that they make to ensure that it does not then become actionable against them or is not seen as being discriminatory or harassment.

And to that effect, I wanted to bring up a case that recently happened in the state of Georgia to mind here. So this case was called 'devious defecator,' for reasons that will soon become clear. So in that case, certain people or individuals were leaving feces around the workplace, and this was a warehouse. And the employer, to determine who was doing it, decided that they would surveil the workers through DNA testing. Unfortunately, they singled out two employees for this DNA testing and those two employees happened to be African American. The DNA testing revealed that it was not these employees who were responsible for the acts of vandalism. Those employees subsequently sued their employer, and in a verdict returned against the employer, the judge noted that this could be seen as harassment or discrimination because of the singling out of those two individuals. And that also was a violation of the Genetic Information Nondiscrimination Act.

This is an interesting case for various reasons. First, you have to ask yourself, why use DNA testing to accomplish this surveillance? Could the employer not have used perhaps video cameras, which is actually still perfectly legal? And then the second interesting reason here is that GINA was not really created for the purposes for which it was used in this case. It was really created to prevent employers from discriminating against employees for hiring or retention purposes because of their genetic profile. However, with this case, the court has now stretched GINA to, in some ways, be an anti-surveillance law when it comes to scrutiny of an employee's DNA profile.

DH: Wow, that's a fascinating intersection of law, technology, and employees' relationships with their employers. It's an unusual situation, since companies, I think to some degree, are aware [that] targeting and singling out people is dangerous from a legal perspective. On the flip side, a lot of technology is sort of blanket and you're casting a wide net that picks up all people and their activities in a broad sense. I wonder, at the other end of the spectrum, what's happening in that space? You may have [tracking] software on your laptop [while] you sit at home and watch movies, et cetera. Your employer is capturing everything about you and we're sort of blanket-capturing everything. There are certainly dangers there, right?

IA: Yeah, that's a great question, because nowadays surveillance is prevalent in the workplace. It is pervasive. It is widespread. It's not really just a trend. It's really the standard, right? Any American working today can really expect it; you can expect that you will be surveilled in the workplace. And you might think, well, if everybody is being surveilled, then it's going to affect everyone equally. But that's not really the case. Let's take the employer perspective for a moment. An employer might think, "Well, I'm surveilling all my employees equally. I'm not singling out anyone. Perhaps, you know, taking screenshots of their computers and what they're doing. Perhaps I'm taking transcripts of the emails. It's equal for everyone." However, this can actually still be a situation of 'more data, more problems' for the employer. Because the more data you collect, the more you actually put yourself at risk of collecting data that is sensitive, or data that is really forbidden in terms of making employment decisions. This can then open up the employer to suits by an employee who comes from a protected category, right?

“You might think, well, if everybody is being surveilled, then it’s going to affect everyone equally. But that’s not really the case.”

So, for example, perhaps you have an employee who is not out in the workplace, in terms of their sexual orientation. But the information from surveillance actually captures this or shows this. If the employer then subsequently takes employment action against that employee – let's say they are fired, or let's say they're demoted or not promoted. Well, in such a situation, the employee could have reason to say, "I suspect that it was because of my sexual orientation." And this claim would be bolstered by the fact that the employer does actually have that information. So employers do really have to be cognizant of the issues that come with more data.

CA: Kind of following up on that, it sounds like part of what you're saying is that some of the threat or risk around these surveillance tools and regimes is not just to the individual employee, in terms of their privacy or their rights being violated. There's an exposure of the employer to certain kinds of legal risks, right? There are some threats there. And that's sort of why it's important for employers and organizations to be thoughtful about it, not just in the service of "doing the right thing" by their employees, but also just being cognizant of exposing themselves to risk.

IA: I really see the risks of surveillance as twofold. There’s certainly the risk to the individual employee: invasions of their privacy; information about them being revealed without their consent; and perhaps that information then being used to treat them differently. You can think of, for example, women with children, who perhaps prefer not to make that known in the workplace. But through surveillance of emails, or even through surveillance of the screenshots of the computer, that becomes known. This could in turn impact, severely, promotion chances, or their ascension to leadership positions, correct? But, on the flip side, there is also a risk to the employer of pervasive surveillance because they now have within their knowledge or within their possession, information that is pointing to protected categories. And just the mere fact of having that information puts them at greater risk for lawsuits alleging discrimination.

CA: It just makes me wonder about what we’re dealing with right now with remote work during the COVID pandemic. I feel as though I’m reading articles all the time about an increase in these surveillance tools and employers tracking employees. And it’s not quite clear to me what the prevalence of that is and how much that has increased. But I would be really curious to hear your thoughts. Is your sense that there are more intrusive or pervasive tools that are being used? And also, to this point about the risks to employers, what would you advise organizations that are thinking, “Oh, we need to more proactively monitor our employees if they’re working from home”?

IA: Yeah, that’s a great question. I would say that with the COVID-19 crisis, there certainly is an instinct to surveil workers who are working from home. Employers might have some anxiety [about remote work] in terms of maintaining productivity or even just deterring misconduct. We have seen some high-profile cases of misconduct happening with employees working from home. That being said, I think employers really need to be very deliberate and really need to be very conscientious in the surveillance tools that they are choosing and think about whether these are serving the purpose that they want them to serve, or whether those surveillance tools are too invasive and too infringing upon the dignity and privacy of workers. Because there’s another legal [angle] that has been brought on by this COVID-19 crisis. Most employees are now working from home, right? There is a difference between surveilling a worker in the workplace versus surveilling a worker in their actual home. And employers really have to give some thought to that.

DH: I wonder, for the people watching this [interview], how employees should be thinking about this [issue]. Most of what [workers] are doing is over video. I know a lot of cats show up, and also children. Do you have any advice for employees? Or do you have a sense of whether employees realize [they are being tracked], or should [workers] be more aware?

IA: I think it really behooves employees to be very careful when conducting work at home. And I would really urge any employee to treat their work hours as work hours and to be conscientious in terms of the activities that they're doing during their work hours. I would say really try to have a dedicated laptop for your work and obviously don't do personal activities on that laptop. I would say try, if you can afford it, to have a dedicated space where you work. That is hopefully a place that can be secluded, where you can close the door and it can be quiet and you can sort of shut out distractions. I would just really urge employees to understand that with the advent of technologies now, anything you do on an employee laptop – if an employer gives you a laptop or if an employer gives you any kind of electronic device – the law is that that device actually still belongs to the employer, so they can surveil anything on it. It is important for employees to realize that when they are using those devices. And it is important for employees to really be professional during their work hours and try very hard to keep their personal life separate from their work life.

CA: Shifting gears to thinking in another way about how employers use technology, you've studied the use of artificial intelligence hiring tools – screening tools which often are created or implemented with the purpose of reducing bias, right? They're approached as an intervention to either eliminate or mitigate human bias. I think most people who even casually keep up with news about technology and business know that that's very much not the case all the time and often [the tools] just perpetuate bias. I know this is a long-winded sort of introduction, but we'd love to hear you talk a little bit about your work on how these algorithmic hiring tools can perpetuate inequality. Maybe some examples of what you see in that space?

IA: Yeah, that's a great question. When it comes to automated hiring, I would say that the public impression and also the ethos behind why employers adopt them is that they're seen as impartial. They are seen as neutral. They are seen as having less bias than human decision-making. In my paper, "The Paradox of Automation as Anti-Bias Intervention," I really examine this idea that automated hiring platforms are neutral, or without bias, and can be sort of an intervention to prevent bias from coming into the hiring process. What I find is that this is not actually the reality. And don't get me wrong – I think automated hiring as a technological tool can be quite useful. But just like any technological tool, automated hiring will perform the way that the people who use it make it perform. The people who use automated hiring are ultimately the people who will dictate the results. And what I mean by that is that there is a false binary between automated decision-making and human decision-making. And that's because we don't have the singularity, right? [laughter] We don't really have machines that are completely thinking on their own. All the algorithms we have right now are created by humans. Yes, we have machine learning algorithms that learn from the initial program and then create new programs. But you still have to understand that there is an initial program there and then there is a training of the algorithm created. And this is trained on data that a human decision-maker decides should be the training data. And this training data can come with its own historically embedded bias.

And just to give you a real-life example of this, there was a news article about a whistleblower exposing that Amazon had created an automated hiring program, really for the purpose of actually finding more women for its computer science and engineering positions. And it turned out that that automated hiring program or platform was actually biased against women. Amazon subsequently had to scrap that program. And, of course, you know, [they] didn't really reveal that to the public [before it was reported in the media]. The question then became, how could this be? How could a program that was actually created to help women – that was actually created to ameliorate this bias against women – how could that program then actually go ahead and replicate that bias? That is an important point that I make in my article, "The Paradox of Automation as Anti-Bias Intervention," which is that automated hiring platforms, if not programmed correctly, if care is not taken, can actually serve to replicate bias. At large scale [they] can also serve to obfuscate – actually serve to hide – that this bias is happening.

So it’s not enough for an employer to say, 鈥淚 want a more diverse workplace鈥 or 鈥淚 am going to use automated hiring and therefore eliminate human bias.鈥 The employer actually should do audits of the results coming out of this automated hiring, because those audits are what will tell [you] if it has an issue. I advocate in my forthcoming paper, 鈥淎utomated Employment Decision, Automated Employment Discrimination鈥 that there should be an auditing imperative for automated hiring systems. Because why should we have invented hiring systems, some of which can be machine learning, and just [expect] to get a good result without actually checking for it? So I argue that the federal government should actually mandate that automated hiring platforms be designed in such a way to allow easy audits. The design features can incorporate elements that would allow for audits to be run in, like, one hour or less, because these are computerized systems. It wouldn’t really be a big burden on the employer then.

And I want to add one other thing to that end. Some employers take this tack of "look for no evil, see no evil, hear no evil." Like they don't want to do the audits because they're afraid of finding discrimination, and then actually hav[ing] to do something about it. That's not actually a good tack to take in this day and age. Why? Because a recent [court] decision actually has now allowed for academic researchers to audit systems. So whether the employer wants it or not, an academic researcher could come about and audit the system. And guess what? Now they're caught unaware. So it is actually better for the employer to take this responsibility of auditing the system regularly, checking for bias and then also correcting for that bias.

CA: I found what you said about how we set up this false binary between human and machine decision making really useful, because in [the] general diversity, equity, inclusion field, there’s a lot of discussion about how it’s very hard for people to unlearn bias. So we need to focus on processes and systems. I think there’s a lot of merit to that. But I get uncomfortable sometimes with this pivot to [the idea that] if we just have the right technologies and tools, then that’s the solution. I think what you said is a very helpful way for me to think [about it]. I’ll be relying on [it] to articulate that concern. Human and machine decision making are not these two independent things. So I just want to thank you for that.

IA: It’s still quite shocking to me. Even other scholars have that idea [of], 鈥淥h, we should just give it all to the machines. You know, humans are just so full of unconscious bias that we can’t debug them. We can only debug the machines.鈥 But I’m like, “Well, who’s creating the machine?”

CA: Exactly. But I think there’s a strong trend for that. Especially in kind of behavioral-science-driven approaches to discrimination in the workplace.

DH: Those are really good examples. You’re starting to share examples of how technology can be perfected to actually reduce bias. Are there other ways 鈥 or have [you] come across [areas] where we can actually leverage technology to fight bias?

IA: You know, I think a lot of times the perception is that people like me are Cassandras, because we are always predicting doom and gloom when you use technology. Many people see technology as a panacea: there is this brand-new shiny tool and they want to just be able to use it and not really have to worry about consequences. I don't think I'm a technology pessimist, but I am also not necessarily a blind-eyed technology optimist. I think I'm somewhere in between. Which is [to say], technological tools are just that, tools. The results from them will depend on how you use them.

I think technology can be a boon for employers who are trying to do the right thing and diversify their workplaces. I think technology could also be a boon for employees who are trying to get a foothold in the workplace, trying to find employment. But I do think for that to happen, we need regulation of technologies. Technology makers can’t really just be allowed to take a laissez-faire approach to the development of automated decision-making technologies. We need strong regulation to make sure that they are serving the good of society.

“Technology makers can’t really just be allowed to take a laissez-faire approach to the development of automated decision-making technologies. We need strong regulation to make sure that they are serving the good of society.”

In automated hiring, specifically, I think the proper regulations could actually be a boon to anti-discrimination efforts. Because, for one, if you have a data retention mandate, and a record-keeping design, then through automated hiring, you could actually see exactly who is applying, and exactly who is getting the job. There could actually then be very accurate records of the picture of [your] employment decision-making, such that you could then see if there is bias. You could then see if there is employment discrimination. And I think, frankly, the first step to fixing the problem is seeing the problem. I think with traditional hiring, a lot of times the problem is quite hidden. It's not as easy to see the bias. It's not as easy to see the discrimination. Whereas with automated hiring, it could actually become easier to see all of that.

DH: You know, it’s a good point. With automated hiring systems and the appropriate audit tools, you could actually see the scoring of factors like you mentioned [with Amazon], where maybe [there鈥檚 bias] predominantly for women’s universities or higher ed. Whereas with hiring managers, that’s hidden away in someone’s head and they may not even know why they’re making that decision. That’s a great point.

IA: Exactly. As we say in the field, the worst black box is the human mind. That’s uncrackable to some extent.

DH: So maybe we could talk a little bit about wearable tech and the implications for employees and employers. I know in some of your writings, some of your research, you’ve [discussed] examples that affect people of different genders differently. Some of this technology is getting quite invasive. What can you share about this topic?

IA: Yeah, that's a great question. I think we've had so many technological advances in the past, I would say, few decades. And one of the biggest ones is really this rise of wearable tech, because as computer systems become smaller and smaller, then we're more able to embed tech in so many different things. And wearable tech is definitely becoming even more than a trend now. It's become really a fixture of the workplace. And when I speak about wearable tech, probably the first one that comes to mind for most people is the Fitbit that you're wearing on your wrist. There are also rings that do similar things to the Fitbit, like track your heart rate, pace, et cetera. But there's actually a plethora of types of wearable tech. What I am seeing, though, is that these wearable technologies are also raising several legal questions. The first one is really related to data ownership and control. There's this idea that these wearable technologies are collecting so much data from employees and there's a question of, well, who owns the data? The device belongs to the employer, but the data is being generated by the employee. So should the employee own the data? Even if the employer owns the data, who has access to the data? Should the employee have access to the data to actually review it and make sure it's accurate? And have some say over how that data is used?

I have written about this issue, noting that currently all the data that's being collected as part of workplace wellness programs through wearable tech can actually be sold without the knowledge or consent of the worker – and has been, both currently and in the past. So, should that be legal? Should employees have a say in how their data is exported and exploited? When it comes to a workplace wellness program, you have the wearable tech like Fitbit, but you also have other apps that workers are being asked to download on their phones to track their health habits. And unfortunately, some of those apps have actually been found to be doing things that could be used for discrimination or for discriminatory purposes.

"[A]ll the data that's being collected as part of workplace wellness programs through wearable tech can actually be sold without the knowledge or consent of the worker."

So there was a news article about how Castlight, a workplace wellness vendor, had requested that employees download an app to track their prescription medicine usage. And they were using this information essentially to figure out when a woman was about to get pregnant. Certain prescriptions are contraindicated when somebody is either pregnant or about to get pregnant. So women [employees] would stop taking those prescriptions and Castlight was using that to predict when a woman was about to get pregnant. This was especially concerning because, although we have the Pregnancy Discrimination Act, which forbids employers from discriminating against women who are pregnant – notice the act does not forbid employers from discriminating against women who are about to get pregnant. So essentially, this was a tool that could allow employers to really discriminate against women who were about to get pregnant without legal recourse. It is concerning when wearable tech is used for those purposes.

DH: Thank you for that. I can see from Colleen’s face she has given up on all of humanity, especially technology. I know some of your work has certainly looked at surveillance. And I know you have other scholars you either collaborate with or respect in the field. Tell us about some of that.

IA: Right. So I definitely want to mention here the work of a Harvard Business School professor who has done empirical work looking at surveillance in the workplace. He's looked at surveillance in factories in China and other places. And I want to highlight one important finding of his, which I think is something that employers need to keep in mind. In that research, he noted that when workers were overly surveilled it actually backfired. It actually had the opposite effect from what employers wanted. He found that in one specific factory, when workers felt that they were being overly surveilled, they did work exactly how they were expected to but they didn't actually take initiative. They didn't actually get creative, in terms of getting things done in ways that were faster and more productive. I think employers really need to think about the fact that organizational theory has recognized something called practical drift, which is that in any given work, there's sort of a standard way of getting it done, right? And the standard way has been thought of by management, right? But the people on the ground, the people who are doing the actual work, they sometimes quickly figure out that, "Yes, the standard way is okay, but there are actually better or quicker or faster or more efficient ways to get the stuff done." And so they drift away from the standard way of doing things. This is called practical drift. But when you have over-surveillance, then you're not allowing for this practical drift from workers. You're basically cutting off your nose to spite your face, as they say, right? You're actually hamstringing your employees from being able to be as efficient as possible.

CA: We often end these conversations by asking the person to recommend a resource or a takeaway for people who care about these topics. I want to do that slightly differently, since you have this forthcoming book, which certainly is going to be a resource for people who care about these issues. It's coming out, I believe, in 2021 – I'm sure you're in the homestretch with writing and editing and all of that! I would just love for you to talk a bit about the focus of your book, whom you hope will read it, and what impact you're hoping to have with the book.

IA: My book is really a historical legal review of all the technology that is now remaking the workplace. The focus is on technologies and really examining how those technologies are changing the employer-employee relationship and whether we can ensure, through new legal regimes [and] through new regulations, that those technologies actually don't change the workplace for the worse, but actually can change the workplace for the better.

My hope is that my book will actually be read, not just by business leaders or HR people, but also by employees [and] definitely by lawmakers to get an in-depth look at what these technologies are doing in the workplace. Because I think a lot of times we hear about these technologies, but without having experienced them firsthand, we’re not really actually aware of the impact that they’re having on the individual worker. We’re not aware of the impact that they are having on society. So my book will include historical accounts of the evolution of these technologies [so that we can] understand where they came from and therefore the sort of ethos behind them. [I] also include some interviews of people who have encountered these technologies and their experience with them. And then finally, I have proposals for legal changes, new laws for how to better incorporate these tools in the workplace. I’m not a Luddite. I think these technologies are definitely here to stay. But it is about making sure that they are operating in a way that is respecting human autonomy, operating in a way that is respecting our societal ideals of equal opportunity for everyone and also inclusion of everyone, regardless of disability, race, gender, sexual orientation. So that’s really what I hope to do with the book.

DH: That’s a wrap on the interview, but the conversation continues.

CA: Yes. Thank you so much, Dr. Ajunwa. This has been a really fascinating conversation.

And we want to hear from all of you watching. So please send your comments, suggestions and ideas, and questions to justdigital@hbs.edu.

The post Ifeoma Ajunwa on the limitless boundaries of employee surveillance appeared first on Harvard Business School AI Institute.

]]>
Dan Mall on defining "good" design /dan-mall-on-defining-good-design/ /dan-mall-on-defining-good-design/#respond Tue, 26 Jan 2021 14:17:00 +0000 https://pr-373-hbsdi.pantheonsite.io/?p=12896 In this episode, Dan Mall from SuperFriendly reflects on the importance of the designer's intentions as a framework for understanding how bias can inform and perpetuate systems of inequality.

The post Dan Mall on defining "good" design appeared first on Harvard Business School AI Institute.

]]>
Inclusive and universal design have gained wider attention and practice in recent years. Their goals are grounded in the belief that recognizing, problem-solving for, and learning from excluded groups yields universal benefits for all. However, these ideals can come into conflict with other desires or imperatives presented by stakeholders, such as speed or cost savings. It can also be challenging for designers themselves to fully understand or embrace the needs of people who are different from them, even when they want to do so. How do we reconcile these tensions to move the needle of equality further?

In this episode, our hosts Colleen Ammerman and David Homa speak with Dan Mall, who quotes friend and founding principal of User Interface Engineering, Jared Spool, to offer a useful definition: "Design is the rendering of intent." Dan further expounds on this idea by reflecting on his own schools of thought, experience, and hopes for the design field. Dan is the founder and CEO of SuperFriendly, a design collaborative that aims to both teach and build design systems and products for their client base.

Watch the episode with Dan Mall

Read the transcript, which is lightly edited for clarity.

Colleen Ammerman (Gender Initiative director): Today, we’re speaking with Dan Mall, founder and CEO of Design Collaborative, SuperFriendly. Thanks for joining us, Dan.

Dan Mall (design collaborative founder): My pleasure. Thanks for having me.

Dave Homa (Digital Initiative director): Good to see you again, Dan.

DM: Yeah, likewise. It’s nice to see your faces even though we can’t be in the same room together. 

DH: We’re going to talk about how design and inequity go together, and how we can actually make things a little better in the world. I’m going to start off with a question about bad design: When I say 鈥渂ad design鈥 and we talk about inclusion and injustice, are there things that come to mind for you that constitute bad design? And do those tie at all to what you would consider traditional bad design and technology?

DM: I think that we should define design. My favorite definition comes from my friend Jared Spool, who says, "design is the rendering of intent." I think that's important to the idea of understanding good design and bad design. A lot of it centers around understanding what it is supposed to do. What did the designer intend for you to do with this thing, or this object, or this technology, or things like that? When I think about bad design in the world, I think about either a thing that is intended to do something that it is not able to do, or something that is actually intended to do harm. There's a broad range within that scope. So, to me, bad design is about whether someone actually uses something the way that it's intended – assuming the designer was actually trying to make something really good – can people actually use it in the way that [the designer] intended?

DH: So, it sounds like obviously the designer has a lot of control over this aspect of the work.

DM: Absolutely. And I think some of that is a little bit self-fulfilling in that when we are designing things, we have to admit that we have control over the thing. Otherwise, it feels a little bit, I don’t know, nihilist or something. So the idea is that if I’m trying to make something for someone else to use, I should take care in how I’m making it. I think this idea is kind of crucial to the discipline of design. 

DH: If we take “the rendering of intent” as our definition for design, what constitutes bad design when we’re talking about inclusion and justice?

DM: I’ll share maybe one of the more obvious examples around exclusion, which is the idea of someone trying to enter a building. There is a lot of design that goes into how to enter a building: doors, handles, stairs, ramps, things like that. A lot of entrances are designed for a particular type of person 鈥 usually a person who has full motor control of their arms and legs; usually someone who can walk; usually someone who can see and can hear. And we see a lot of examples where ramps for people in wheelchairs are around the side of the building, or wheelchair users have to go in the service entrance in order to get in the building. That’s maybe one of the more obvious examples. The designer may not have been trying to exclude people, but maybe they just weren’t thinking about them as intently as they were thinking about a different type of person. But people get left out because of that. That’s one of the more obvious examples that comes to mind that gets discussed. But really, that’s bad design. If you’re trying to let people into a building, and you’re not trying to leave people out, but you accidentally did 鈥 even though it wasn’t your intention 鈥 it still counts as bad design. 

DH: That’s really interesting. When we shift online, I always think about how designers don’t consider where everyone’s coming from, especially with access to online systems. I’ve heard various people say that to get services online 鈥 even in municipalities 鈥 you need to enter an address. If you’re homeless, obviously, there’s a real limitation there. Do you run into things like that with online systems?

DM: All the time. Same thing with [people] who need to deposit money. Well, not everyone has a bank account. So, what do you do then? It creates this cycle of systemic exclusion that, again, may not be intended by the designers. Some of them are [intending to include] and some of them are somewhere in the middle, where they didn't intend [to exclude]. But they didn't really think as hard about those things as maybe they should have.

CA: So, I kind of want to ask the flip side to the question that you’re getting at with regard to how design can exclude. I think what I’m hearing you say is that there are a lot of examples where we’re not thinking about how a decision might be exclusionary, or might marginalize somebody, or might diminish somebody’s experience. And it sounds like you’re saying that one thing to do is to anticipate those things and kind of prevent them and think about how we can make sure that we’re not excluding. But, if our goal is to advance equity and advance inclusion broadly, are there ways that you can not just prevent exclusion, but harness the power of design to meet that goal of advancing inclusion?

DM: Absolutely. One of the things that I love about being a designer (and this is why I continue to love it) is that it gives me the ability to learn a lot of things that I wouldn’t have had a chance to learn otherwise. I’ve worked with companies that specialize in marine biology. And so, I as a designer now have to learn about marine biology in order to do a good job in helping them. I think a part of a designer skill set is some amount of empathy; some amount of being able to say, “OK, let me act as if I were in the shoes of the user I’m designing for, or the person I’m designing for.” A lot of that is built into the discipline of design, where the designers have to adopt a mantle of someone that they are not. However, one of the things that we’re seeing a lot in society — especially with tech today — is that some designers just don’t have the ability to have the empathy for certain people who they just don’t have any experience with. What I love about what’s being discussed in the last decade or so with the idea of inclusive design is not just about, “OK, designers, try to put yourself in those shoes,” but actually to include people from the demographic that you’re trying to serve in the design process.

“I think a part of a designer skill set is some amount of empathy … where the designers have to adopt a mantle of someone that they are not.”

So, for example, I have two legs and can walk on those two legs. If I’m trying to create something for people who can’t walk, there’s only so much empathy that I’m going to be able to have, I think, as much as I might try. Instead, maybe I should talk to people who can’t walk. Maybe I should talk to people who have different abilities than I do in order to understand their experience. And I think that’s where fields like ethnography and behavioral science come into play: talking to people is a good idea; observing them; watching them go throughout their day. I think a lot of that gives us insight into things that we wouldn’t know otherwise, as designers. And so I think including [the] people that you’re trying to serve in the actual process of design is becoming more and more popular in the field. And I love that idea that there is a limit to our knowledge. So, instead, let’s be humble about it and let’s talk to the people we are trying to serve, so that we really can serve them best.

DH: Do you have advice for younger designers who are just thinking about inclusive design? Knowing where your blind spots are may be difficult. Some blind spots may be very obvious, but other designers may have to actually work to figure out where theirs are. Any advice for them?

DM: Yeah, I mean, it’s a tough thing to do. It is a tough thing to think about. It requires humility. It requires self-awareness. And, you know, not to be overgeneralizing here, but sometimes that doesn’t come with youth. When you’re a young designer, or a new designer, or a budding designer, you don’t necessarily have a lot of that. A lot of people got into design — myself included — just to be able to make cool stuff. And I think that there should be space for that, too. So, I suggest that young designers do as much design as you can. Get that part out of your system. Get the fidelity. Get the ability to stop fighting your tools: whether that’s pen and ink; or it’s a computer; or it’s Photoshop; or whatever those things are. Get that fidelity out as quickly as you can. Put in the hours to do that and then start to round that out with [learning] how to talk to people. How do you understand more things than just what you’re designing? Is this [product] really cool, or is this something that I’m proud of? I think that’s a big part of design.

But I [also] think design has a larger responsibility. When I work with younger designers, or newer designers, one of the things that I tell them is, “The skills that you have when designing an interface for people to submit a form, those are the same skills that it takes to redesign the way that insurance works in our country. It’s the same kind of thought process — very different execution, very different scale. But those are the same kind of skills that you employ.” I think that if you think about it as a practice in order to get to do something larger, or something that’s your calling, or something that you’re going to do over the course of your life, those [intentions] are practiced when sitting in front of a computer and drawing things that look cool. I like to encourage that, but I also like to remind younger and newer designers that there’s more to design. This is a good stepping-stone into something else that you might be able to have a larger impact with, if that’s important to you.

CA: I wonder if there’s a tension sometimes between designing for the needs of people who are going to use a product or an online service, and the people who are paying you to create the design. I’m just curious: when you’re no longer early in your career (or maybe even then), and you have the ability to navigate those tensions and maybe push for something that’s more inclusive, how do you manage that?

DM: I find that when you cater to the audience that is less served, a lot of times that ends up helping everyone, and ends up helping everyone more. So, a famous example of this is a staircase, I forget where it is, and built into the stairs is kind of this winding ramp. So, if you are in a wheelchair, or if you’re on a scooter, or if you have crutches, or [even] if you can walk — you can walk up the stairs or you can use the ramp. It is a beautiful design. So, from an esthetic point of view, it’s lovely. And from a functional point of view, it is excellent and useful to everyone. Now there are different schools of thought on it, where the grade of the slope might be a little bit too steep. [However] those are all things that are solvable. But I think what it takes is the mindset of the designer to say, “Is there a solution that works best for everyone? Is there a solution that doesn’t leave anyone else out?” And that’s the thing that I see designers not doing in their process.

I think that designers see it as, “I don’t want to compromise my vision just for this.” We see it on our teams all the time. This happened a couple of weeks ago, where one of our designers was kind of talking about some of our work in that way: “Well, oh, no, no. We’re not catering to blind users on that site. We don’t have a big blind audience, so we don’t really have to worry about the screen [reader].” Which makes sense if you look at it from that point of view. But if you look at it from a different angle and ask, “How can we serve everyone best?” you find that writing better HTML, which is better for screen readers, actually makes the site faster. How about that? It makes it better for everybody else. And so, I think those are the kinds of things that designers are not educated enough in. So, there’s an education problem there. And then also designers are not taught that, however they learned — whether they were brought up on their own, self-taught, or went to design school. I think it takes people to go, “Well, let’s stop for a minute and let’s just take 30 minutes, maybe, to think about how this might work, and see if there are better solutions for everyone.” And I think more often than not, just taking that time to do it yields some really good solutions.

CA: Yeah, I love that. I’m smiling because there’s such a parallel to people who study organizations, who know very well that problems so often show up affecting people who are in the minority. But making things better for people who are in the minority, typically by their race and/or gender, makes things better for the majority. I think it’s the exact same thing. And getting away from the zero-sum mindset allows you to identify those things that actually are going to improve everyone’s experience; improve whatever the goal is. And I think that’s a really powerful insight.

DH: Can we talk a little bit about the diversity of audience? You mentioned that making things better for some groups will sort of make it better for all. What are the challenges of designing for all today? Technology reaches so many people in so many different areas, just millions and millions of people in some cases. How does a designer think about that? Where do you focus? Do you focus like you’ve suggested? Do you narrow in on the biggest part of [your] audience? Or do you actually step back and think, how do we make this work for everyone, when certainly, not everyone has the same experiences with technology?

DM: I’m glad you brought that up, because one of the things that I see in training designers, and as a designer myself, is that the hardest part is not making the shapes in Photoshop or Illustrator or whatever. It’s not the craft part. The hardest part is decision-making and prioritization. And Colleen, as you were saying, organizationally, that’s a hard thing to do in leadership. [It’s] hard to say, “We’re going to focus on this.” Because by virtue of saying that, you’re also saying [that] we’re not going to focus on this other thing, either temporarily or ever. That’s a really hard thing to do. So, how do you balance that with designing for all? You know, one of the things that I try to tell a lot of the designers that I work with is that in most cases, they are not designing for all. There is a point of focus. I kind of come from the school of thought that focus, specialization, and niche are important, useful, and helpful. I also think that there is some kind of exclusivity that is advantageous at points. An example is a Jewish dating site. That’s not a site that’s for everyone, nor should it be. I think there are spaces that are designed for particular people to be able to flourish. And, for better or worse, I think that that’s part of our society right now. I think that being able to design for that is important.

Where it gets really tricky is when designers come out of design school and go to work at Facebook. Now all of a sudden, they’re designing for a two billion-person audience, right out of school. Even if you had 10 years of experience, or 15 years of experience — that is an unprecedented design challenge. Now you’re designing a platform globally that takes into account governments and race and all of these factors at scale. Who has the training for that? I think that there are generalities to kind of fall back on. One is the idea of universal design. Universal design is creating a set of principles that can apply to everyone. And a lot of that comes down to, not skill and craft, but belief. I think what’s really tough about that is, what do you believe about what people have access to? What do you believe about their rights? What do you believe about who should be able to do what, and who should not be able to do what? Those are hard design problems. It’s the same skill as drawing an interface for people to submit a form. It’s just scaled differently. So it’s a really tough problem.

“Universal design is creating a set of principles that can apply to everyone. And a lot of that comes down to, not skill and craft, but belief.”

How do you design for all? I don’t think that a lot of designers have experience doing that. And how could they? Only now, in the last few decades, have designers had the opportunity to deploy something to two billion people across the globe and see how they work with it. That’s a new phenomenon for us. And so we’re screwing it up. And hopefully it’ll get better. But I think we need to be conscious and intentional about it. Again, back to design being the rendering of intent: What is the intent of this? You know, what is the intent of what we’re actually making? And how does that not get manipulated? I think that’s the other part. That’s the flip side to what you were talking about [Dave], which is, I think we have to now take into account where this goes wrong. Where does this fail? Where does this get screwed up? What are the things that we don’t anticipate? I think those are the right places to include other people. As a cisgender male, there are things that I don’t know about people who are not in that demographic who are going to use this. I need to get their perspectives. This is very important. Otherwise, my work is going to be incomplete because there’s a whole side of this that I’m not considering.

CA: Dan, we’ve talked about “designing for whom?” You’ve talked about how important it is to include a range of people who might use a certain kind of product or service, and how often that doesn’t happen. I want to flip that on its head a little bit and think about who is doing the design. Just like many, many other industries, [design] is not as diverse as the population. I wonder if you can tackle it from the angle of what happens when the people doing the designing have some limitations tied to their particular identity and their place in the world?

DM: Yeah, wow. What a deep topic that is. I think that a lot of it is thinking about how tech start[ed]. In the tech world that we have right now, who gets to design, and why do they get to design when other people don’t? I like the school of thought that everyone is a designer, or everyone can be a designer. So, I try to take a broader worldview. Part of that is because that’s what I want. So, it’s certainly biased in terms of that. But right now, or at least when it started, tech was expensive. Tech is still expensive to get into for a designer. If you think about just the equipment, a designer has to have a MacBook probably, because that’s what the tech world uses, and that’s a couple-thousand-dollar machine. And then you have to have software that costs hundreds of dollars, sometimes thousands of dollars. Then, you have to have education that costs thousands of dollars sometimes. Or if you’re self-taught, you have to put in the work and the rigor of finding the things that are applicable to you. So, there are a lot of barriers that keep folks with lower incomes from just getting into tech. And that in itself means that there’s a particular type of demographic that we see designing the tech products that we use. And those are 30- to 40-year-old white men. And so because of that myopia, it is a specific demographic that’s doing the designing. It stands to reason that it’s only going to serve one population. As more and different parts of our population are represented in the designing — not just consuming what is designed — we’re seeing a lot more creativity and a lot more solutions to different kinds of problems.

And so I think that’s why it’s important to have a lot more representation in our industry — how do we get more women, how do we get more people of color, how do we get more underrepresented groups in tech to help create more solutions around this? I think tech is pervasive now. We all have supercomputers in our pockets. And so what could we do with that if we give it to the right people? Right now, the right people don’t have it — some of the right people have it, but not enough of them do. With more access to tools and technologies for people and groups who are underrepresented, I think we’re going to see wonders. I hope that becomes a thing that more people have access to, because I think that’s going to be important for the world. And if we don’t do it, I think that’s going to be very impactful for the world in a very negative sense.

DH: I want to say that’s actually a fantastic perspective on where technology can take us, and I think it’s really important to imagine not only where we’re making mistakes and doing things wrong, but also what the opportunities are. I know this is going to be a question beyond what we would typically ask someone like yourself who’s so experienced in design, but as a citizen, where do you think that impetus comes from? Where would you hope that it comes from? Does it come from government? Does it come from designers? Does it come from businesses? Does it come from activism? Where do you think we get the drive to make the change?

DM: I think the drive to make change sometimes comes from pain. And I think that that is unfortunate, but I think that it’s motivating, too. So, I’m not sure if that’s a thing that we should eliminate. But I think there is a sting to missing payment on your credit card bill because the interface wouldn’t load. I know that’s a first-world problem — there’s even more of a sting to not paying your electrical bill because the interface wouldn’t load. So, there are these systemic barriers to not being able to do the things that are your rights. Then, there are barriers to doing things that you just want to do. I think that there’s room for all of that stuff. And so I think the impetus comes from experiencing pain and no longer wanting to. I think that’s where a lot of products get designed from: “I need a better note-taking app because taking notes sucked in that class.” OK, so you build a note-taking app. I think that a lot of people’s lived experience informs what they want solutions to. “I can’t find a parking spot, I need a parking app.” I think that what we’re seeing is the people who have a lot of means that are in tech are solving problems for people with a lot of means, maybe not in tech. Which makes a lot of sense. So, how do we democratize that a little bit more? How do we say, “Well, what about people that are having different problems? How do they get to voice their concerns?” I think a lot of that comes from — as designers, as people with means, as people with privilege — we have to do a better job of listening, because otherwise there’s no forum for it. There’s no [excuse] to say, “Well, someone else is having a problem that I don’t experience, and yet I have the skills to be able to solve that, but I’m not going to because I just don’t know about it.” I mean, that’s immature at that point. That’s almost abdicating. I would put that as a responsibility nowadays. We have to do those things. Otherwise, our world is not going to get better.
So, how do we, as designers, do a better job of listening to and including people from communities that we’re not typically part of? I think that’s an important pathway for this field to be able to thrive. If we really are going to have the impact on the world that I think design can have, we have to be able to listen to more folks than we’re used to listening to.

DH: That really goes right back to what you said before about getting the right tools to and empowering the people who know what their problems are, and making sure that those people are fed into this system of designing and creating technology. I think there are some systemic issues there about privilege, about access, about resources, et cetera. And I think there’s a lot of work. Certainly, we hope that educational institutions can help in that space, but certainly there are other opportunities. I know that [your firm] SuperFriendly has worked to bring other people into the design space who haven’t necessarily had access. Can you just say a little bit about some of the things you’ve done?

DM: I mean, a lot of this comes from my experience, too. I have brown skin. My parents are immigrants. I’m first generation in this country. I watched them work really hard. We grew up in North Philly. I wasn’t poor and I wasn’t rich. We were upper lower class, and then lower middle class. I watched my parents work really hard, and I wouldn’t be able to be in tech had it not been for, partially, their hard work and their work ethic. Then, honestly, just people giving me a chance and lending their privilege to me. One of my cousins, who worked in IT, dropped off a “totally legal” copy of Photoshop when I was 13 or 14, and I just tinkered around with it. I was lucky to have a computer. My dad had brought an old computer home from work when I was eight. So, having a computer is a form of privilege for me that allowed me to get into tech. When I talk to a lot of folks now, they say, “I didn’t have a computer growing up until I was 16.” And [they] missed those formative years of becoming fluent in technology.

One of the things that we had at SuperFriendly for a little while was an apprenticeship. Our model is that we don’t hire anyone full-time. We are a collective of freelancers assembled for every project, and every account that we work on is a team of specialists combined just for that work. But the exception to that was that we had an apprenticeship where we would work with folks who just wanted to get into tech. So, they had no tech skills yet, but they had an appreciation for design, or for development or engineering, that would allow them to get an entry-level job. So we would train them for nine months and then put them on some projects that we work on — client projects, paying projects — and then help them get a job elsewhere. Whether that is at a product company, or an in-house studio, or an agency, or freelancing on their own. We’d help them with resume prep. We’d help them with getting portfolios together. Because how else are people going to do that? It’s not a replacement for a four-year education at a design school. It’s not a replacement for anything else. But it’s a way to get an entry-level job. We had a lot of career switchers. We had a lot of people who were substitute teachers who said, “I want to make a career out of something else.” We had some people who were working odd jobs every year who wanted a career, wanted some stability, that being a designer or an engineer or developer could give them. That’s important to us. Because, again, it’s the same thing. Had somebody not given me something when I was younger, I wouldn’t have been in this field. And that has been really transformative for me and for my family. And so why not give that access to someone else?

DH: As a designer, do you feel like there are limitations to current technologies? Like, are there things you wish technology would do? And again, this may be a limitation of who’s actually building the technologies, but are there things you wish it did that would allow your designs to reach more people or adapt to more people?

DM: Yeah, I think design is expensive and slow. At least in digital design right now, there are a lot of different movements happening. There’s a movement called the “No Code Movement,” which is the idea that you can just draw something and then it can work. Lots of tools are emerging in that space right now, which is great. Whether or not that is the particular movement that will catch on, I think the idea of it is sound, which is that we need to make access faster. Because if I have an idea for something — well, I have to design it, then I have to code it, and then I have to deploy it, and then I have to manage it. All of that stuff takes a long time. And if you’re an organization of any particular size that is hiring a company or a freelancer to make you something digital, you’re talking six months, sometimes, from idea to fruition. That cycle is too long. So, if we’re going to try to make change in a system — maybe a lot of our systems are slow, so they can afford to have slow solutions applied to them — but there are other things that need to happen faster.

And so that’s one of the things that I see. How can we get from idea to launch in a much faster way? Does that mean we need to pare down our idea? Do we need to try more things? A lot of our work tends to focus on an idea that is particularly popular in tech, which is the idea of an MVP, a minimum viable product. Rather than spending six months to do a thing, is there a way that we could spend six weeks to do it? Maybe it’ll be cheaper that way. Maybe it’ll be faster. Maybe we’ll learn more that way. And so that is probably one of my biggest frustrations in tech and in design: the speed at which we can have impact is too slow right now. It has gotten faster over time, and I would like it to go even faster. I know that with great power comes great responsibility. I know that wielded in the wrong hands, that can be just as bad as it can be good. But I hope to look at the glass-half-full version of that, too. Maybe we can deploy systems and solutions that help people faster than what we’re doing right now.

CA: What would you advise people like me who see how important this is but are not doing [design]? What’s the right way for us to participate or help make sure that the right communities are heard, just as citizens? How do we move the needle on this even further?

DM: I don’t know if I have answers to that, but I have some thoughts and some opinions about it. Nothing that I’ve seen is definitive, at least. And these are newer thoughts for me, so they’re still kind of developing. One of the things that I’m learning, especially in the last year or two, is how important amplification is. I think that people in power are often expected to have the answer. And then when they don’t have an answer, they just stay silent because they don’t know what else to do. One of the things that I’ve been trying to practice a lot more is [recognizing that] other people have the answer, especially sometimes people who are not in power, or people who do not have power. Can we just point to them? Could we just say, “What about what they’re saying?” Because so far what we have been doing hasn’t been working. So can we start listening to the people we haven’t been listening to? Because who we have been listening to hasn’t been working. We have to change something. We have to change some of the variables in order to get a different result. Otherwise, it is just insanity to continue to do what we’re doing.

“We have to change some of the variables in order to get a different result. Otherwise, it is just insanity to continue to do what we’re doing.”

So, for people who are not designers, I think keep an eye out and keep an ear out for people who have not been listened to historically, systemically, [or] traditionally. And let’s start pointing at those things. Let’s start considering those things, especially if they’re outside of our perspective and worldview, in order to stretch us and go, “We need to include more people in this.” We need to include more thoughts, more ideas, because what we’ve been doing has not been working. Let’s be more humble. Let’s try that thing because maybe that’ll work. Let’s listen to that. Let’s try that perspective. I think that’s the thing that I would say to non-designers — you have the ability to amplify, too. You don’t have to be able to render. The rendering part is not the important part, it is the intent part. And so, how can you broadcast that a little bit more?

CA: That’s great.

DH: Yeah, that’s fantastic.

CA: So before we end, is there anything that you like to share in terms of resources or things for people to be thinking about if they want to dig deeper into these topics?

DM: I actually have this book that I’m reading. I started reading it last week, so a lot of this is top of mind for me. I have it here; it’s called . I believe she was a former director of inclusive design at Microsoft and is now at Salesforce. I’ve been learning a lot from that book. One of the things that jumped out at me is what she says about inclusive design and how she talks about it. She says, “An inclusive designer is someone, arguably anyone, who recognizes and remedies mismatched interactions between people and their world.” And I think that’s given me a new lens to think about who I can be as a designer and what my role is as a designer. It’s a short book and a quick read, but it is very, very impactful. I would highly recommend it to anyone, whether they are a designer or thinking about becoming a designer. It’s a really great framing of the role that design plays in our world to create better interactions between people and the abilities that they have and don’t have. So, I’m learning a lot of things. I’ve been a designer for a long time and this is really shifting my worldview about it, too.

CA: That’s great, thank you. As an academic institution, we do like to encourage people to read [laughter] so that seems a very appropriate note to end on.

DH: That’s really great and thanks for your perspective on that.

CA: Thank you. This has been really fabulous. 

DH: That’s all we have for today. But the conversation continues.

CA: And we want to hear from everyone out there in our community who’s listening to this interview. So please send your questions, comments, ideas, reactions, et cetera, to justdigital@hbs.edu.

The post Dan Mall on defining “good” design appeared first on Harvard Business School AI Institute.

How COVID is pushing tech to revamp sports (18 Nov 2020)

Tech has had an impact on sports for years, but the pandemic outbreak pushed this transformation to new heights. Here's how athletes, fans, and managers are being changed by COVID-19.

COVID-19 forced sports, like other industries, to take a back seat to public health. Cancelled competitions and games held behind closed doors have had a devastating impact. For instance, the NCAA outright cancelled March Madness, which usually generates more than  annually. And even the NBA missed revenue projections by  this season. On the other side of the big pond, European football clubs expect to lose around  over the next two years.

However, long before the outbreak of COVID-19, the adoption of new technologies had already begun to transform the sports industry. Nearly every professional sports club employs its own data analysts. Some clubs have also invested in dedicated innovation units, such as the .

COVID-19 has worked as a catalyst to accelerate technological change. During the pandemic, the sports industry developed innovative strategies to resume the season. For example, in the NBA playoffs, spectators could enjoy games from virtual courtside seats. Overseas marketing trips were moved to completely digital setups, such as the Audi Digital Summer Tour by Bayern Munich. 

We also observed how sports technology was repurposed to fight the pandemic. KINEXON adapted its NBA-proven player performance tracking technology to create physical distancing and contact tracing solutions. It appears that technology will continue to transform the roles of athletes, consumers, and managers.

How athletes are impacted

Enabled by technology, new and altered categories of athletes might emerge. The first category refers to able-bodied, unassisted athletes. They have the same attributes as today’s athletes, but will be supported to a much larger extent by technology during training. In 20 to 30 years, humanoid robots are likely to serve as sparring partners.

The second category comprises Paralympic as well as able-bodied athletes assisted by technology. An example includes , which are already in use for rehabilitation and in the military.

Alongside human athletes, we will observe robot athletes. They operate either under human control (e.g., through eye movement) or via artificial intelligence (such as in car races with autonomous vehicles).

An additional category emerges from mental athletes who exhibit extraordinary cognitive skills. Today, this spans from chess to eSports. Perhaps in the distant future there will be completely new mental disciplines among humans, or between humans and robots.

Finally, holograms can create virtual athletes steered by algorithms. These can be deployed for training purposes or for competition against humans. As a result, a parallel competition will emerge among the best software developers in this area.

Altogether, new athlete categories will complement existing ones. In this context, sports executives and tech developers should note that any new concept must not only be technologically feasible, but also fill consumer demand.

Technology also opens new opportunities for consumer interaction in sports, and the use cases developed during the pandemic could be taken much further.

How consumers are impacted

Most importantly, consumers want more direct influence over the game. In the so-called "Fan Controlled Football League," fans can already vote for a team's logo, player transfers, as well as the starting lineup. There is even the possibility of real-time voting on the next play in a game. As a next step, fans may directly participate in a club's income streams. Technologies like blockchain or smart contracts will facilitate this process.

Additionally, consumers demand new ways of virtual engagement. Virtual reality will allow fans to watch their favorite sport from the best seats in the stadium and to bounce to different perspectives at the touch of a button. Fans could visit small satellite stadiums in their local communities to watch virtual games, even if they live far away from the home team.

Today's technological changes also pose new questions for sports executives: "How do I decide which technology to invest in? Which new capabilities does my organization need to build or acquire? How can I fight tech doping (the practice of gaining unfair advantages through technology)?" Just recently, a heated debate took place around exactly this issue.

How managers are impacted

Though nobody has definite answers to these pressing questions today, a technology radar may help managers build strategic advantages. The idea is to constantly scan information on relevant technologies. Here, it is important to start from the challenges an organization is facing and then pose the question of how technology could solve those challenges. Hence, the organization's vision and mission should be linked to the technology radar, which could become a crucial driver for long term success, in the sports industry and beyond.
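The challenge-first logic of a technology radar can be made concrete with a minimal sketch. The challenges, candidate technologies, and relevance scores below are purely illustrative assumptions; the point is only the mechanic of ranking technologies by how well they map onto an organization's stated problems.

```python
# Hypothetical minimal "technology radar": start from the organization's
# challenges, score candidate technologies against each one, then rank
# technologies by total relevance. All names and scores are invented.

challenges = {
    "fan engagement at a distance": {"virtual reality": 8, "blockchain ticketing": 5},
    "player health monitoring": {"wearable sensors": 9, "computer vision": 6},
}

def prioritize(radar):
    """Return (technology, total_score) pairs, highest combined relevance first."""
    totals = {}
    for tech_scores in radar.values():
        for tech, score in tech_scores.items():
            totals[tech] = totals.get(tech, 0) + score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Starting from challenges rather than from the technologies themselves keeps the radar tied to the organization's vision, as the paragraph above argues.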

Looking ahead 

As a side effect of the COVID-19 outbreak, there has been a wide range of innovations in the sports industry this year. And technology is vital for future success, on and off the field. In a sense, athletes are like entrepreneurs who experiment, fail, start again, and eventually succeed. Off the field, sports executives must anticipate trends, nurture ideas, and act faster than others. Being successful will require strong networks that encompass all the players, including sports practitioners, tech experts, startups, and academia.


]]>
Three myths to dispel about COVID-monitoring gadgets /three-myths-to-dispel-about-covid-monitoring-gadgets/ /three-myths-to-dispel-about-covid-monitoring-gadgets/#respond Wed, 11 Nov 2020 18:50:00 +0000 https://pr-373-hbsdi.pantheonsite.io/?p=11940 Some new home health tech products arrive with a lot of hype about their abilities. During the COVID-19 pandemic, consumers should be wary of what these devices can 鈥 and cannot 鈥 do.

The post Three myths to dispel about COVID-monitoring gadgets appeared first on 性视界 Business School AI Institute.

]]>
Some new sensors on the market are receiving a lot of fanfare, in part due to their potential as COVID-monitoring tools. These include the Apple Watch's ability to monitor users' blood oxygen levels and the FDA clearance for the FitBit Sense smartwatch heart-monitoring app. With COVID-19 here for the long haul, many who have adapted their homes and lives for social isolation are considering high-tech purchases like these to measure vital signs like blood oxygen saturation and heart rate. And technology makers are discovering new and critical uses for their products. As consumers wade into the marketplace and makers position their products for the new pandemic-inspired market, here are some common myths that could give a false impression about the abilities of home health tech.

Myth 1: Everyone with COVID-19 has the same vital signs

Early in the pandemic, a temperature spike was identified as a sign of COVID-19. Similarly, a drop in blood oxygen level below 90 percent was linked to the disease. The reality, though, is that not all COVID-19 patients have the same vital sign anomalies, and no single vital sign in isolation indicates that a person has contracted the disease.

Instead of relying on single measurements to evaluate health, consumers should think of their health tech as tools to get to know their unique biometrics. Vital signs taken during a snapshot in time may not be as helpful as the trend of vital signs over time.

Rather than falling for this myth, consumers should learn their baselines when they're feeling well so the gadget they use can report meaningful information when they might be sick.
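The baseline idea can be illustrated with a short sketch: flag a reading only when it deviates strongly from the user's own healthy history, rather than comparing it against a universal cutoff. The readings and threshold below are invented for illustration, and none of this is medical guidance.

```python
from statistics import mean, stdev

def flag_anomaly(baseline_readings, new_reading, z_threshold=3.0):
    """Flag a reading that deviates strongly from a personal healthy baseline.

    Uses a simple z-score against the user's own history instead of a
    one-size-fits-all cutoff. Thresholds here are illustrative only.
    """
    mu = mean(baseline_readings)
    sigma = stdev(baseline_readings)
    if sigma == 0:
        return new_reading != mu
    return abs(new_reading - mu) / sigma > z_threshold

# Example: resting heart rates (beats per minute) logged while feeling well
baseline = [62, 64, 63, 61, 65, 63, 62, 64]
```

The same trend-over-snapshot logic applies to temperature or blood oxygen: what matters is the deviation from one's own normal, not a single number in isolation.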

Myth 2: The data collected by your connected sensors is protected

The Health Insurance Portability and Accountability Act (HIPAA) mandates some aspects of health privacy, but it generally does not govern information collected by connected home health sensors. This makes it especially important for consumers to be vigilant about how a monitor collects, uses, and shares their data.

Information that connected tech collects can be repurposed in ways that users may not anticipate. Some secondary uses of data may be beneficial, such as maps that use aggregated data to piece together fever clusters. But many are not helpful, as when data aggregators repurpose health data in ways users never intended. Until comprehensive privacy protections are enshrined in law, "buyer beware" should be the guiding principle. At a minimum, consumers should review privacy policies before selecting a home health technology, and also review the Terms of Service document that is likely to arrive with the product itself.

Rather than falling for this myth, consumers should remember that their home health tech is unlikely to have the rigorous privacy protections that they get from their doctor's office.

Myth 3: Home health tech sensors can diagnose or treat COVID-19

Home health tech is not a substitute for medical care for a number of reasons, among them: 

First: No home health tech is clinically validated to sense the presence of SARS-CoV-2, which is what is currently required to diagnose COVID-19. Although phones may one day be used for reliable and accurate health diagnoses, today's gadgets need to be thoroughly studied and tested before that claim can be made.

Second: How and where a sensor takes its measurement matters. For example, some home sensors measure blood oxygen levels at the wrist, while clinical sensors measure it at the fingertip. Since different body locations have different amounts of blood flow, the expected range of normal values varies. Not knowing how a sensor works could inadvertently make the user appear sicker, or healthier, than they actually are.

Third: Not all health gadgets are accurate, so using them as a substitute for medical care could be dangerous. For example, pulse oximeters, which measure blood oxygen levels, became a popular pandemic purchase early on, yet the accuracy of many models on the market today is questionable. Indeed, the Apple Watch 6 will have its skeptics until its accuracy claims are backed up with solid data.

Rather than falling for this myth, consumers must understand that home health tech can complement, but not substitute for, medical care. Further, we invite all stakeholders who believe in the potential of home health tech (as we, the authors, do) to join the call for enforceable safety and efficacy standards for these novel technologies. This means demanding that the FDA modernize its approval channels and that lawmakers write comprehensive privacy protections into law.


]]>