On demand | Elevate 2022

Ethical algorithms: How improving your AI diversity, equity, and inclusion can protect your business

Session details

SOLUTION BREAKOUT  •  DEI

When the datasets that power algorithms and AI carry inherent biases, the conclusions the technology draws from them can be problematic. Diversity, equity, and inclusion are crucial components to avoiding these problems with any new technology.

Join this discussion about making sure the rules of the game allow everyone to win. 

Transcript

Navid: Hey, thank you for joining us. We are here to talk about ethical algorithms and how improving your AI diversity, equity, and inclusion can protect your business.  

Before I introduce our guests, I want to ask everybody a favor. We have a disclaimer here since many of us here are in the same industry and competitors and whatnot. There's a legal disclaimer that says what we are and are not allowed to talk about. So if everybody could just take a minute and read that and then we will continue on. Good. Perfect. I made a deal with her. I was not going to read that thing. Like we just put it up, let everybody read it. I'm not reading that thing.  

So let me introduce our guests here. First, we have Doctor Katie Shilton. She is an associate professor in the College of Information Studies at the University of Maryland, College Park. Her area of research focuses on technology and data ethics. And then we have Tony Sun, my other guest here. Tony led the development of the world's first airline revenue protection solution entirely driven by data analytics and AI.  

And I want to just say one quick thing. When I was asked to do this, I started trying to think about what does this mean? How does this practically apply? As a technologist, obviously I'm interested in making sure that whatever we build is built correctly and so forth. But it was interesting, it came up twice yesterday. We had a panel yesterday morning where there was a quick conversation about something being presented in a shopping scenario based on whether a user was logging in from a Mac or PC or something like that. And then Michael, I think there was a session yesterday afternoon where there were discussions about AI and machine learning and how they might apply to airline pricing going forward. And you guys made a comment about AI algorithms being a glass box and not a black box for transparency purposes. So I'm glad those kinds of conversations came up.  

But first I wanted to start with Doctor Shilton. Can you lay out some of the terminology for us? How do diversity and inclusion differ? What are biases in data? Just start us off with a little bit of terminology in case some of us are not quite as familiar with it.  

Katie: Yeah, sure. Yeah, I spend a lot of time thinking about these things and talking to students about these things. So let's start with DEI, this acronym, and then we'll connect it to data and bias.  

So diversity, equity, and inclusion are the three concepts we're working with here. Diversity is about different backgrounds, viewpoints, and experiences brought together in a team, in a classroom, in a setting, with the goal of getting different worldviews into conversation with each other. And we'll talk in a minute about why that's really important. In building fair algorithms, diversity can be quite important.  

Equity, that middle term, is about fair treatment, and specifically fair treatment that supports success for everyone, no matter who you are. So back to that diversity: success for different viewpoints, different backgrounds, different life experiences. And that doesn't necessarily mean treating everybody the same. In fact, it doesn't mean treating everybody the same. Equity is about removing barriers so that everybody has the same chances, in a society where there are barriers for some people and not for others. My two young kids think fairness means everything being the same, but even with your kids, that's not it, right? They need different things. Some of them need clothes this year and some of them don't. So removing a barrier is the way I like to think about it.  

And then inclusion is proactive efforts, whether individually or on a policy level, to make sure that everyone feels welcome at the table, or feels welcome in an industry, or feels welcome in a workplace, or for us in a university, in a classroom. It's about making an environment that works for everyone so that they don't get excluded from the get-go. And as I say, it can be an interpersonal thing, but when we're thinking about it at work, we're often thinking at the policy level, like how do we have inclusive policies.  

So then we have this concept of algorithmic bias, which you've maybe read about, and it's not the same as DEI, but it's very connected. When we're talking about algorithmic bias, we're not necessarily talking about statistical bias, although of course statistical bias is important when you're working with big data. What we're thinking about is procedural or process biases and systematic biases that might be part of an algorithm because of the data that was used. So: social biases that are built into datasets because of how they were collected, because of who's in those sets or who's not in those sets. It can be because of the way that indicators were framed.  

When we're working with data, we're working with our best indicators for a phenomenon. It's almost impossible to measure happiness, for example. Instead, we have to measure all of these other things that we think get us to happiness. Those are indicators, and those indicators could have biases built into them. So we're looking for kinds of bias built into datasets. There might be prejudices within a data set, or, and this gets into machine learning, there might have been a bias among data labelers, whoever was working with that data to label it. Maybe it was done in academia; often it's done with Amazon Mechanical Turk workers, people who are paid to apply labels to data. Those people might have cognitive biases, and those can then get baked into datasets. So we're looking for systemic biases of various kinds that work their way into datasets at lots of different points in the life cycle.  

Navid: Now having defined that, let me ask you the why? Why is this important? What are the consequences? Why does it matter? If we don't intentionally build ethical algorithms, what do we stand to lose, and specifically within the airline industry?  

Katie: So I'll give you both the carrot and the stick. The stick is winding up in the New York Times because you built the thing that nobody expected, and it turns out it had bias built into it and it was unfair to a group of people, and now you're national news. This happens in various industries; I mean, you've mentioned that sometimes that's happened in this industry. So that's the first reason: okay, let's be careful about this, because you don't want your family to be the ones saying, you built that? But that's the stick.  

Let's talk about the carrot. And that is, if we think of travel as an industry, it is about accessibility in many ways, accessibility to the world for everyone. Travel is not just a luxury for the rich, if it ever was. We live in a global society. People need to be able to go see their families. They need to be able to travel the world. We should not have a system in which travel is unfairly inaccessible to certain groups, if we can prevent that.  

Now, you know, this entire talk will be in tension with profits, with capitalism at its roughest, because price discrimination, for instance, is one definition of discrimination and not necessarily always a bad one, right? It is a useful tool. But when does price discrimination become discrimination-discrimination? That is the hard line that AI builders are thinking about and walking. So we're trying to balance an accessible industry with other goals and values that our companies have. That's where I think we put it: let's try our best, because we want a fair industry ultimately.  

Navid: I like that thought about the price discrimination. I'm going to come back to that one a little bit. I'm noting that one for a later question. So Tony, in the B2B environment, I know you guys are a little bit further removed from the direct end customer, but what do diversity, equity, and inclusion mean to you guys in what you do?  

Tony: Sure. Perhaps before I start: I'm sure many of us are suffering a bit of jet lag. If you find whatever I say is not making sense, question your own jet lag first before you question mine.  

Navid: Michael said he's going to question us on everything. So I think we've got somebody ready to do that. Yeah.  

Tony: So just before I answer that question, let me give everybody a very brief intro to the particular function I'm in, or that Rassure is in. We are pretty much helping airlines to protect their revenue and detect any revenue leakage through data analytics and so on.  

Right. In the past, this has been heavily a business process outsourcing type of function. So there are a lot of people in India who, after graduation, go through the IATA training, start looking up fares on the screen and so on, and then subsequently decide whether there is a violation or not, whether there is revenue leakage. Of course they also talk about AI as well. They do a little bit of automation using the Amadeus pricing engine, and they try to intelligently do a bit of matching, and then they call it AI. Just kidding.  

So with Rassure, when we came to solving this problem: if we took a very traditional approach, you've got these audit analysts producing the results, so now you have the perfect dataset, right? You have datasets showing, this one has no violation, this one actually contains a violation. Machine, go and learn from it and then try to do the same thing. But that will obviously suffer quite a bit of bias built into it.  

All the analysts will obviously focus on certain agents, because they found violations from those agents in the past, or they go after a certain country. So those biases will creep in if you take a very traditional AI or machine learning approach.  

Instead, when we came to this problem, what we did was use data analytics to solve it. What we pretty much do is take the ATPCO data, the pricing data and the rules and so on, and apply them to millions of transactions, booking data, ticketing data, trying to remove as much bias as possible. As a result, unless the airline's rules themselves contain biases, or the way they're folded into the systems does, we don't suffer as much bias as in typical B2C kinds of scenarios.  
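As a rough illustration of the rules-driven approach Tony describes, here is a minimal sketch that checks tickets directly against published fare conditions rather than learning from analyst-labelled examples. The structures, field names, and rule conditions are hypothetical simplifications, not Rassure's or ATPCO's actual formats.

```python
# Illustrative sketch only: apply published fare rules directly to ticket data
# instead of learning from analyst-labelled examples. All field names are
# hypothetical; real ATPCO rule categories are far richer than this.
from dataclasses import dataclass

@dataclass
class FareRule:
    fare_basis: str
    advance_purchase_days: int  # a simplified advance-purchase condition
    min_stay_days: int          # a simplified minimum-stay condition

@dataclass
class Ticket:
    ticket_number: str
    fare_basis: str
    days_purchased_in_advance: int
    stay_days: int

def audit(tickets, rules_by_basis):
    """Flag tickets whose attributes violate the published rule for their fare basis."""
    findings = []
    for t in tickets:
        rule = rules_by_basis.get(t.fare_basis)
        if rule is None:
            continue  # no published rule to check against
        if t.days_purchased_in_advance < rule.advance_purchase_days:
            findings.append((t.ticket_number, "advance purchase not met"))
        if t.stay_days < rule.min_stay_days:
            findings.append((t.ticket_number, "minimum stay not met"))
    return findings

rules = {"QNE21": FareRule("QNE21", advance_purchase_days=21, min_stay_days=2)}
tickets = [Ticket("0012345678901", "QNE21", days_purchased_in_advance=5, stay_days=1)]
print(audit(tickets, rules))  # both conditions violated for this hypothetical ticket
```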

But then your question was, given we don't suffer those kinds of biases, what does DEI actually mean to us? Certainly when it comes to building the company, and it is a startup, we do pay a lot of attention; we are certainly conscious about these carrots and sticks and so on. For us, the long-term benefit is that diversification helps us take a more balanced approach, and when we think about problems we get opinions from many different angles. So one of the things, typically when we employ people, after the offer I get them to do a Myers-Briggs or similar personality test to make sure the people who join the team are well balanced. And certainly as a company we invest heavily in trying to diversify how we recruit people. Perhaps it's a bit different from the typical American situation. I don't know about the American labor market, but in Australia, if you put a job out for, say, a data engineer, you can bet that 60% of the applicants will have an Indian background, another 30% will have a Chinese background, and the rest of the world makes up that last 10%. That ends up giving us very little choice in terms of diversification, but nevertheless as a company we try to employ people from many backgrounds, right now up to 10 nationalities, rather than having a kind of monoculture.  

And then in terms of gender, we also try to ensure that about 30 to 40% of our data analysts and data engineers are women, rather than just typically men participating in this.  

Navid: Quick question for you as a follow-up. I'm anecdotally assuming, but I'm hoping you can confirm this, that having a diverse group of people at your organization whose job it is to look after these algorithms, build them, and so forth, will help you in recognizing... 

Katie: Yes. 

Navid: ...a bias or an unfairness or inequity in this situation.  

Katie: Yeah, it's one of the more important tools you can have, because we all bring our standpoints to these sociotechnical problems. We call them sociotechnical problems because they are not just technical and they're not just social; it's the two interwoven. And that makes them tricky, because you need somebody who has the background to say, oh, you know what, in the United States there have been historic forms of discrimination that might show up in this dataset. That is a tough thing for somebody trained up as an engineer to know, right? And it's fair to have that blind spot. But if you have somebody on the team who has a different educational background, who has a different cultural background, they may be more likely to spot those issues. And that issue spotting is one of the hardest parts of trying to find systemic bias. So yeah, what we're seeing is that diversity on teams is really helpful in issue spotting.  

Navid: Feel good, you got that one covered.  

Tony: Thank you. Thank you.  

Navid: All right. Next one for you, Tony. Let's talk about the great things about algorithms and AI and flight shopping and how they can help consumers and airlines alike going forward.  

Tony: Hmm. I think this topic probably came up yesterday, especially in the afternoon discussion, so perhaps I'll approach it from a slightly different angle.  

So if you look at the statistics, even the ones we saw this morning: starting from about 2017 we had about 2 billion fares, then in 2018 about 3 billion, and eventually now about 7 or 8 billion fares published every year. So it is a huge amount of data. Then along come dynamic pricing and ONE Order and those kinds of concepts, which will potentially explode the possibilities to the point where the traditional approach of looking up the fares, doing the audit, and detecting any possible violations is totally impossible. So I think the algorithms and the data analytics approach we take will make it possible for airlines to engage in dynamic pricing. Now, instead of an airline managing 2 million fares, you actually have 20 million passengers. And I think ATPCO laid out on day one that once you've made a dynamic offer, you actually backfile those fares and routes. So all of a sudden, with 20 million passengers, you will have 20 million prices. And then how do you actually ensure that revenue leakage is not going to happen?  

So data analytics is the key approach we take to ensure that this new way of doing things is well supported, not just in the early upstream work using various APIs and so on, but also by downstream players like us, to ensure action is taken.  

Navid: Okay. And Doctor Shilton, I know your experience with the airline industry specifically is more limited, but just in general, how have you seen AI and algorithms and machine learning improve a process or a capability in the past? Do you have any thoughts on how that might apply to our industry, on the positive side?  

Katie: I think there's lots of room for this kind of learning. We talked about accessibility before: targeting folks with opportunities for travel that work for them, in whatever way that is, and making sure that they know about it when that opportunity is available. To me the personalization that can come with this kind of technology is not always bad and creepy, right? It really does make more opportunities available to the right consumers at the right time if used well. So I think that's pretty exciting.  

Navid: And I think that's our next question, actually. We were rolling right into that one. Go ahead, sure.  

Tony: Before you do that, actually, I must be suffering the jet lag, because I definitely forgot to do half of the page. So: we use data analytics to help airlines detect any potential revenue leakage. Even there, some potential biases could be introduced, because obviously when we try to help airlines, whether you can detect the leakage when a rule is applied can depend on how you interpret the rules, despite there being many long sections of manuals and interpretations. So as a company we regularly write emails to ATPCO to seek clarification. Instead of just taking our own angle on the interpretation and saying this is a violation because this is how I read, or how we read, this Category 4, sorry, I'm getting into the jargon, which is about flight restrictions, we actually write to ATPCO to ask, in such a scenario, what would be the common understanding? And regularly ATPCO writes back to us and says, let me consult this particular working group, and they will actually provide a more unbiased interpretation that you can then implement. So on this part, we really do do everything we can to remove biases where we might otherwise tend to want to help airlines recover more revenue.  

Navid: Okay. So now I know you have your list there. We have the questions that we're going to talk about. Doctor Shilton kind of stole my lead there. She led us right into the next question, which is: what opportunities do we see for algorithms to actually bring increased diversity and inclusion to airfare shopping and so forth? I think you just touched on that, but let's get your side on that one.  

Tony: Since you can see, you might have just read it.  

Navid: Well, I took my glasses off, so now I can see out to here, but nothing beyond. So that's it. I can't see any facial expressions. You guys can laugh at me all you want. No idea.  

Tony: So obviously, I think this question can probably be interpreted in two ways. One is how you ensure that fairness and so on is actually built into the solution. The other, which is probably the angle I'll come back with, is how to ensure that the result of whatever we do is actually unbiased and fair, and also encourages the industry to grow and diversify. I think with revenue protection, quite often in the past, if a violation is not detected, then in general a more manipulative agent will have an advantage over a more follow-the-rules type of agent. If we don't do this, over a longer time you will find the manipulative approach gradually dominating things. So for us, if we take a very thorough approach, ensuring the overall industry is very healthy, very fair, and everybody competes on the same level ground, that is probably our contribution, not to society, to the industry, toward fairness and long-term growth.  

Navid: Question for you. How do you prepare people to deal with this? If you're talking, for example, about engineers, you mentioned earlier that sometimes something will come more naturally to an engineer and sometimes it won't, with regard to this kind of thing. So are we saying here that anybody who's going to be working with these datasets or these algorithms, or writing the code, needs some sort of ethical training? Or is it just not something that you can teach in a classroom? Is it that simple, or more complicated than that?  

Katie: Great question. So we absolutely think we can teach it in classrooms. Universities are trying hard. We're still working on it. Ethics is actually notoriously difficult to teach, and there are all of these studies of, for instance, business school programs and law school programs which have mandatory ethics courses. Those ethics courses can have bad results, like people learn how to cheat and things like that. So we don't want to do that. So in the data ethics space we're still, can I say, building the plane while we fly it? Can I say that in this crowd?  

Navid: We do it all the time! We rebuild the planes while they're flying all the time!  

Katie: Okay, cool. So we're still trying to figure out what works in terms of ethical data science education and ethical AI education. But some things that we are trying that seem promising are multidisciplinary education. So at Maryland, we have a new social data science program that combines deep expertise in data analytics and data visualization with classes in the social sciences, in African American studies, in sociology, in political science, so that students get theory and method from a traditional social science discipline. That way they can spot: A, this is the wrong data for this problem; or B, this data might have these biases built in, because I learned about this; or C, this analysis might be a problem. All of these things can happen. So that's one thing we're trying.  

In the AI space, we are similarly trying to build UI design practices in. There's a long tradition, for instance in human/computer interaction, of building user interfaces with community input, with user input; there's UI/UX, right? And we don't have that equivalent yet in AI. What does user interaction look like for training a model? It's a whole different set of techniques. So we're working in the research space on trying to figure out what community input to an AI technique looks like for an AI developer: when do you go to stakeholders and test this with them? We're still figuring this out. I don't have concrete here's-what-you-should-do answers. That said, because the best practices aren't known yet, there's some room for you all, I think, to start to figure them out.  

We know that diversity on teams will help. We know that building in transparency, and we've talked a little bit about this, the glass box not the black box, to the extent that you can with your systems is going to help, because it's going to allow you, if somebody spots a problem, to go back and try to figure out why it's happening and why that decision ended up biased. There are some suggested practices around transparency that might work well. These are mights.  

But for instance, the Algorithmic Justice League, which is a nonprofit working in this space, has suggested bug bounties for AI bias. Bug bounties are something that came out of the computer security world, where you ask outside people to find the problems in your system and you pay them if they do. This used to be super controversial in security and is now common practice because it works. What if we did this for bias in algorithms? You could have an internal red team that looked at your algorithms and tried to find the problems. Or you could pay a kid in Australia to spot it and give them a reward if they do. That might be one sort of work practice or process that we could start to look at.  

And then I think anything we can do to increase our ability, and I say ours meaning technical people, though I'm not totally a technical person, to think sociotechnically, and to practice this, because it can be really easy to just put your head in the data and go. Practice being reflective and reflexive: Are there problems with my data? These features, where did they come from? And are they really indicative of what I'm trying to accomplish here? That kind of reflective process I think can help us step back; it's just so easy to get into the pipes and just do it. So those are the best things we know so far.  

Navid: And Tony, what do you guys actually do in order to accomplish this, or to the extent that you can, to help train folks to be able to spot those situations and deal with them accordingly?  

Tony: For us, well, besides training and so on, we probably outsource the problem to the industry. If we do the detections, the detections usually can have two types of mistakes; typically we talk about type 1 and type 2 errors. So we will have these false positives: you detect something as a violation, and that could result in an ADM. I know it's jargon, but most of us probably do know that ADMs are sent to the travel agency. The agency will actually come and protest and say you got this wrong: what happened to your algorithm, what happened to your training data, and so on. So we do jump in and try to figure out exactly where things have gone wrong. And I think the typical airlines also try to keep us working as hard as possible.  

Quite often, after the machine, they have something else, called the second-pass audit, which is a human. They bring up a couple of cases and say, what happened? Your machine did not get these right. So again we will get into that to understand what happened and try to ensure our results are balanced and accurate, rather than suffering bias in one direction. I think ensuring the results are not biased is one thing, but quite often what's more important for me is how we act on the results, the data being detected.  

So for example, when we help airlines do an audit, let's assume it's related to commission claiming, right? For every wrong commission claim by a travel agent, there is the other side, which is agents actually forgetting to claim commission. Now the question for us is that, traditionally, nobody acts on that piece of information, and the traditional service provider would never produce it. But our algorithm and results actually do produce the flip side, the other side of the coin. So there is actually much more to talk about after this session. Perhaps we should talk to airlines and say, look, you do have agents forgetting to claim commission. The reason you came up with this commission structure in this market is to encourage sales and bring people to you. Now if you don't act on this piece of information, perhaps it leads to a long-term lose-lose scenario where agents are not incentivized to sell your tickets, and then gradually, while you keep the short-term money now, over a much longer time you probably lose revenue. So we do outsource our checking.  
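To make the type 1/type 2 distinction Tony mentions concrete, here is a minimal sketch, with made-up cases, of comparing machine audit flags against a second-pass human audit; the field names and numbers are hypothetical.

```python
# Toy comparison of machine audit flags against a second-pass human audit.
# "flagged" = the machine called it a violation; "violation" = the human second pass agrees.
cases = [
    {"id": "A", "flagged": True,  "violation": True},   # true positive
    {"id": "B", "flagged": True,  "violation": False},  # false positive (type 1): a disputed ADM
    {"id": "C", "flagged": False, "violation": True},   # false negative (type 2): missed leakage
    {"id": "D", "flagged": False, "violation": False},  # true negative
]

tp = sum(c["flagged"] and c["violation"] for c in cases)
fp = sum(c["flagged"] and not c["violation"] for c in cases)
fn = sum(not c["flagged"] and c["violation"] for c in cases)

precision = tp / (tp + fp)  # how many flags survive agent protest
recall = tp / (tp + fn)     # how much real leakage the machine catches
print(f"false positives={fp}, false negatives={fn}, precision={precision:.2f}, recall={recall:.2f}")
```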

Navid: Okay. And what have you guys seen with organizations? How do they provide some methodology for oversight of the people that are building these algorithms? Is it a maker/checker type of situation? Nothing goes out the door unless somebody else looks at it? How do they do that?  

Katie: So the most formalized versions I've seen are coming out of places like Microsoft Research. Some of you may remember, Microsoft released a language bot that was trained with natural language processing, was trained on the Internet, and quickly became racist because the Internet taught it. And so Microsoft said, okay, we need an internal review process before we release these sorts of products. They've been really on top of having that check; I don't know enough about exactly how it works internally, but they do have a sort of internal check and review process.  

Similarly, and this isn't exactly the same space, but in the data ethics space: Facebook, now Meta, famously established an internal review process for uses of data in the company after there was a scandal about using data for research. Researchers were experimenting with people's timelines to see if it would change the emotional valence of people's posts. So they would give you your friends' gloomy posts and then see if you got gloomier, essentially. It had a very small effect size, but it was a big psychological study that was occurring on the platform in real time, and users were pissed about it. And after that, yeah, exactly, and after that, Facebook and Meta have had an internal review process for data-oriented research projects. Because they do a ton of research with their data to figure out, you know, what they should be marketing to users, and they do A/B testing and stuff. But all of that has to go through a review board now internally. So there's a set of checks, and it slows things down, but it also hopefully helps prevent these kinds of egg-on-face moments, to have somebody else who hasn't been directly involved in the project say, okay, so what did you do? What data did you use? What conclusions did you draw? Can you tell why the system is making these decisions? And to have that conversation internally. Back to the glass box versus the black box. 

Navid: Let's go back to Tony for a second. You mentioned something earlier about the fact that you're in Australia and have familiarity with the US and so forth. Do you view things as being different geographically? Does DEI mean something different, or is it harder or easier, in one country versus another? Have you guys seen any evidence of that?  

Tony: No, not really, because we're working with this airline industry, which is pretty much global. But in terms of the practice, as a small company we certainly pay attention to this, but we just have not invested significantly in it. Nevertheless, whenever we do things, we try to ensure that the data analysts, engineers, and so on don't write too many if-elses based on their own judgment. We rely on the industry standards and the data as much as possible. Let the standards and the data decide the scenarios and the outcome. So that's what we do.  

Navid: Okay. Actually, I'm going to come back to you on the same question. Global diversity, culture, so on and so forth. I suspect that plays a role. I'm from a Middle Eastern background. Now, you saw me asking the gentleman for whisky before we started to try to calm my nerves a little bit, so obviously this doesn't apply to me, but in certain Middle Eastern cultures, if you offered them booze, they may not like that because it's not part of their culture. So how do cultural norms and so forth factor into this problem that we're talking about?  

Katie: You have asked the million dollar ethical question and that is that... 

Navid: Write that down.  

Katie: So data use norms and procedural expectations for decision-making systems are highly contextual. It depends on the country, it depends on the industry, it depends on how people are used to decisions being made in that space, how people are used to data being treated in that space. But our AIs are increasingly global; they're out of context, right? They're taking data from one context and applying it in another. They're taking data from one culture and applying it to another. We do not know exactly how to deal with this, because people's reactions and norms, as you say, will vary based on the market that you're working in.  

And ethicists, philosophers, don't really know how to solve this problem. Are there universal ethics? Yes, but they're very high level, right? They're not low level enough to get at this. I mean, if we build AI that are, you know, weaponized, for instance, we can get into universal ethics about those. They probably shouldn't kill humans. That's a pretty universal law.  

Navid: There's a movie about that.  

Katie: Right, we were talking about that earlier. So those are the easy cases. They're scary, but they're the easy cases because we have some universals. Once we get into questions of discrimination, questions of the differences in how a system decides who gets one benefit or another, it is much harder to come up with universal laws. And that's a challenge when you're building a system that's meant for everyone. It's one we have not satisfactorily figured out: do you change based on markets, or is that in itself unfair, if your system changes its output? So you are not alone in asking this question. Everybody is kind of wondering, as we move into these global systems, how to handle this, and I'm afraid that I am not a smart enough person to have an answer. But as you all start to navigate this, tell me what you figure out.  

Navid: I think for us, air travel is about as global as it gets.  

Katie: It really is.  

Navid: Time check. Okay, so I came up with a couple of questions beyond our list and then I'll open it up to anybody who may have questions here.  

One of the questions was, I'm going to ask you guys to talk about it from two perspectives, but it's really the same question from a skill set perspective. If a person is interested in getting into this field or a company is interested in bringing somebody in who can be a member of the teams and effectively do some of these things we're talking about, what are the skill sets, what are the credentials, what are the learnings, what are you looking for? We'll start with you on that one.  

Katie: So I'll tell you what we're thinking on the training side, and there's not a lot of tradition when it comes to this space; it's new. But if you are hiring right now in this space, you will get a mix of people who are recent grads, who maybe have data science in their degree or maybe have machine learning experience from a computer science department, and then a mix of people who have taught themselves to do it, either on the job, because they've been working for a while and learned it that way, or through code academies and things like that. So it's a really diverse space of education right now.  

Universities have sort of a vested interest in this; we want our students doing these jobs, right? So take that with a grain of salt, but I don't think that universities need to be the only path for training in data science. I do think there are some advantages to training our students in data science: we can do these interdisciplinary things very easily, because we have all these folks on campus who have expertise in race and racism, expertise in gender studies, and expertise in areas that are going to help us have these conversations. So universities are a good place to do this training.  

And I would look for people who have not only data science and machine learning credentials but also took some area studies courses, or some courses in method and methodology or social theory, because they're going to have a little bit of those questions, like: is this quality data? Will some other kind of data get us a better answer to this question, or better train our systems and models? What if we adjusted it for X or Y? So that kind of expertise I think is something to look for. I would hope that people would look for that even in folks who don't have data science degrees, which are so new. But there are more and more of them out there, I will say; if you're interested in recent grads, there are lots of data science programs now, every university is going hard.  

Navid: So people are already in the field. So you've got somebody in the field, they've been doing data and possibly even working in AI and algorithms and so on and so forth. But they want to diversify their skills so that they can now say, hey, I can help my company with DEI as it pertains to artificial intelligence. What is a good path for somebody?  

Katie: Let me talk to you about master's programs. No, I'm not going to tell you to go get master's degrees. Although they are meant for working professionals.  

Navid: Okay, great. Thank you.  

Katie: So no, I think this is actually a really good question. I think that actually industry groups can have pretty meaningful— 

So I don't know, in your industry, are there continuing education opportunities?  

Navid: Yes.  

Katie: So I think that is the place for this conversation, in continuing education, because this is newly relevant; these skills are just developing. Like I said, in two years I'll have a different answer for how we do this than I do now, because we're still figuring it out. And so continuing education, I think, is a really good way of addressing how we build this as we're flying it, so that we can continue to disseminate best practices as we figure them out, through industry orgs.  

Navid: And does that ring true with you guys? What are you guys typically looking for when you're looking for somebody to bring in to help you with your AI and your algorithms and so forth?  

Tony: By the way, when you brought up that question, I initially said that's another million dollar question. 

Navid: That's true. Tell somebody.  

Tony: Let me share two stories on recruiting and how we came to our approach. I remember when Rassure had just started in 2016, LinkedIn came to Rassure and said, look, we've got many, many millions of members, and they've all got these IT backgrounds and all these things. Why don't you recruit them? Before then, I had actually contacted ATPCO: do you have some people? They said no. I don't remember exactly, Samuel Lau, I guess, most people probably don't know him, and he said, if I've got people I will keep them to myself, why would I give them to you?  

So LinkedIn came to us and said, look, do you want some people? We said yes, we want people in the data analytics field, and AI, and so on. They said, no problem, we're going to help you find them. So they refined the keywords: on the IT side there were still millions. Then add an airline background and you've got tens of thousands. Then finally we said the word ATPCO, and they found us three names, all in Washington, DC. So that route actually got killed.  

And then at some point we did get into natural language processing, because one of the airlines we support produces 40,000 contracts, and we need a machine to read through those 40,000 contracts. The problem we struggled with was that the market for candidates at that time was extremely hot, to the point, I would say, of unrealistic expectations. They'd come here, and two months later there's another, even more exciting job. Before they even know the industry well enough, they've already jumped to the next one, and then another one comes along, and around it goes. We had about 7 people rotate through in a matter of one and a half years, so we never really got anywhere. So after that, we decided to just grow our own, do the homegrown part.  

So in terms of the people we recruit, we've diversified into many science fields and engineering fields, not necessarily computer science. We just want people with very strong problem solving on the IQ side. The reason I said there's another million dollar question is that the other half of the interview discussion is typically me trying to assess the EQ side: whether people come to the problems, to the teamwork, and even to this ethical side of things with a very mature approach. Of course I haven't quite figured out the answer, and we look forward to getting some answers from everybody else, but that's what we look for: people who are good at problem solving but also have the maturity to take on the problems we are dealing with.  

Navid: And then my last question stemmed from something you said earlier, which I said I'd come back to. To some extent, we actually want to use AI and these datasets and these algorithms from a commercial perspective to figure out where there is a commercial opportunity. And in some cases that bias is: I think Michael's willing to pay for this service and this service and this service, and I'm going to put that offer in front of him because I think he wants it. He's going to a meeting the next day, I think. Whatever. So in some cases that bias, or that identification of a pattern, is a commercially good thing. I'm going to put the product in front of him that he wants. He's going to pay for it. He'll be happy. I make some money, all is good. But at some point you can cross the line, and now you have gone into some of those territories that you were describing earlier. I'm assuming there's no Magic 8 ball that you can go to and ask, where is the line and how do I avoid crossing it? But are there some ideas, some observations that you guys have had, about how to figure out where that line is?  

Katie: No, I actually think there is a pretty good way to tell, and that is: is my price discrimination really about individuals and their willingness to pay for that service, or are there buckets of people who are going to be given higher prices, or who are not going to be offered these services, in a systematic way? Not buckets like, the three of you got offered this deal and the three of you didn't, okay, unless the three of you all also indicated on your last flight that you wanted a kosher meal, or you all live in a ZIP code which is predominantly African American. Then you start to say, okay, wait, was there another structural reason, besides the person's preferences, besides their travel the next day, a structural reason that they don't have anything to do with, that was not their fault, that they didn't get that deal or that they got that higher price? That's when you start to look. And so you want to look across for systematic patterns. No problem with charging me more because I have to go tomorrow, and yes, I would like to upgrade too because it's an overnight flight or whatever. No problem. Doing that in a way that only gives those opportunities to people based on their ZIP code would be a problem, because of where I live.  
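As a rough illustration of the kind of systematic-pattern check Katie describes, here is a minimal sketch that compares average offered prices across groups defined by an attribute the customer cannot control. The data, group labels, and threshold are all hypothetical; a real review would use proper statistical testing and carefully chosen group definitions.

```python
# Toy check for systematic price differences across groups, as opposed to
# individual, preference-based differences. All data and labels are made up.
from collections import defaultdict
from statistics import mean

offers = [
    {"price": 420, "zip_group": "A"},
    {"price": 455, "zip_group": "A"},
    {"price": 430, "zip_group": "B"},
    {"price": 510, "zip_group": "B"},
    {"price": 540, "zip_group": "B"},
]

by_group = defaultdict(list)
for o in offers:
    by_group[o["zip_group"]].append(o["price"])

group_means = {group: mean(prices) for group, prices in by_group.items()}
gap = max(group_means.values()) - min(group_means.values())

print(group_means)
if gap > 50:  # an arbitrary illustration threshold, not a real fairness test
    print(f"Average offered price differs by ${gap:.0f} across groups: worth investigating")
```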

Navid: So you're kind of looking for those patterns that are maybe not directly related to the business situation, but creeping in.  

Katie: And that's why I mean, so much of the benefit here is about actual personalization, right? We could break out of those social categories and go to preference-based categories. That's not a bad thing. It's a good thing we've been talking about. We don't want to sort on ZIP code forever, right? That's the problem. So yeah, it's just avoiding those systemic problems.  

Navid: Okay, that's everything I had. How much time do we have? 15 minutes. Questions? You said you were going to ask a bunch, so you've been kind of quiet.  

Audience member: I don't know. It's definitely a very relevant topic. Yeah, so I think it's definitely a very relevant problem, and I'm personally very interested because I actually am trying to develop an ethical framework myself for dealing with this type of bias. I think everything you said so far is great, but I think there needs to be a next step, right? The thing is, if you really trace down to the fundamental cause of these biases, it really comes down to human decisions, right? We make decisions that are biased. A lot of people think that bias is a data problem. It is, okay, but who generated those data? We did. So what can you do to make us more aware of the fact that we are making biased decisions, and actually solve the problem at the root cause? I think there's a lot of opportunity there.  

So I think with what Tony was talking about, using data analytics, those analytics reports can be fed back to the analysts to let them see what they did. Then they could say, maybe I've been acting biased. I didn't know I was acting biased, but I was actually flagging a lot more for a certain region or a certain type of transaction than for others. Maybe they didn't know that, but having the analytics there for them to see, they would be much more aware of it. And likewise in HR, in hiring: if you've been shown that the last five people you've hired are all white males, 35 to 45, then you might think that maybe the next person you hire should be outside that bucket. So that's the kind of thing I like to think about a little further. That's why I often like to challenge this: I think having all this process, having diversity, is a great thing. But ultimately, if you really want to solve the problem at the root cause, we should try to use this type of feedback mechanism to change behavior, including the decisions humans make and how we come up with them. Like I said, a lot of this is cultural and we may not be aware of it, but with data analytics fed back to us, it may become easily recognizable.  
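A minimal sketch of that feedback idea, with made-up data: show each analyst their own flag rate broken down by region so that skewed patterns become visible. The records, analyst names, and regions are hypothetical.

```python
# Toy version of the feedback loop: per-analyst flag rates by region, so an
# analyst can see whether they flag one region far more often than others.
from collections import defaultdict

reviews = [
    {"analyst": "A1", "region": "South Asia", "flagged": True},
    {"analyst": "A1", "region": "South Asia", "flagged": True},
    {"analyst": "A1", "region": "Europe",     "flagged": False},
    {"analyst": "A2", "region": "South Asia", "flagged": False},
    {"analyst": "A2", "region": "Europe",     "flagged": True},
]

counts = defaultdict(lambda: [0, 0])  # [flag count, total reviewed] per analyst/region
for r in reviews:
    key = (r["analyst"], r["region"])
    counts[key][0] += r["flagged"]
    counts[key][1] += 1

for (analyst, region), (flags, total) in sorted(counts.items()):
    print(f"{analyst} / {region}: flagged {flags}/{total} ({flags / total:.0%})")
```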

Katie: Yeah, a root problem here is that data science and AI just reify problems that were already there. They were already in the data; they're structural problems. Amazon actually found this out with a hiring algorithm they built. They trained it on the resumes of their successful engineers, and then it only recommended white men, because that was who they had hired before. So in some ways I actually think these systems give us, exactly as you're saying, a lens into something we were already doing.  

Audience member: Right. Own bias.  

Katie: Yeah, into our own biases. And then the question becomes, well, how do we fix those? So sometimes I worry I'm working on the wrong problem, trying to fix AI when what we need is... 

Audience member: Fix humans.  

Katie: ...a safety net that, you know, protects people's data and all of that. So yeah, this is a really good point. That said, I do think there's a real advantage to knowing that tech tends to reify existing problems and to being on the lookout for that as a window into those problems.  

Navid: So put DEI out in front and make sure that your organizations are dealing with that as a general theme, and then also be on the lookout for how it may apply to algorithms and machine learning and AI and so on and so forth. But DEI itself, as a human concept, needs to be the one that's at the front, at the forefront.  

Katie: I think so. I hope so.  

Navid: Anybody else? Oh, John's back there. He saw my million dollar questions. Wow. I didn't even, I took my glasses off. I didn't even see you come in.  

Audience member: So what about the ability to pay for something? Discriminating on just one's ability to purchase whatever service it may be, is that considered ethical? Or no?  

Katie: In a capitalist society? Yeah, I think that's acceptable, especially in an industry in which we're expected to pay. Now there's a separate conversation, and this is back to the reification issue, to be had about whether how we are paid is all fair, in the United States and in the world, for the services that we provide and for the work that we do. But there again, we're looking at an AI problem that's based in a larger social problem of wealth inequality.  

But yes, I think when we talk about doing ethics in context, this is the kind of context that matters. You are offering something for purchase. Ability to pay is part of the expectation. Now, wouldn't it be great if everybody had a better ability to pay? That's a social conversation we should have, but you probably don't have to fix it with your AI.  

Tony: On that question, sorry, maybe I'm going to mention a couple of bits of jargon, in case people haven't learned this microeconomics. There is consumer surplus, which is what you extract when you charge based on willingness to pay or ability to pay. But then there is the flip side, which is called deadweight loss: people an airline could still fly profitably, right, but those passengers or consumers get priced out because of where the airline sets the price. So I guess if airlines can really determine ability to pay, then not only do you capture consumer surplus, which helps you make a little bit more money, but that money could equally help the people who used to get priced out by the airfare now have the ability to fly, and fly more. So that could make society fairer.  

Katie: Yeah, and to follow up on that, I think the assumption that we know someone's ability to pay could itself have problematic stuff built into it. We don't have a perfect indicator of ability to pay; we have a bunch of other indicators, and those indicators might have systematic biases in them. In a perfect world, if they told us what they could pay, okay, we would know. But even then we might not know, right? So that is, or could be, one of those problematic indicators. So I think we should be careful about that, knowing that we are working with an estimation.  
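To put rough numbers on the consumer surplus and deadweight loss terms Tony uses, here is a toy sketch comparing a single posted fare with perfectly personalized fares. The willingness-to-pay figures and costs are invented, and, as Katie notes, in practice willingness to pay can only be estimated.

```python
# Toy numbers only: four passengers with different willingness to pay, an airline
# with a marginal cost of $100 per seat, and two pricing schemes.
willingness_to_pay = [150, 300, 450, 600]
marginal_cost = 100

def outcome(prices):
    revenue = surplus = deadweight = 0
    for wtp, price in zip(willingness_to_pay, prices):
        if wtp >= price:
            revenue += price
            surplus += wtp - price             # value kept by the passenger
        elif wtp > marginal_cost:
            deadweight += wtp - marginal_cost  # a profitable trip that never happens
    return {"revenue": revenue, "consumer_surplus": surplus, "deadweight_loss": deadweight}

# One posted fare of $400: the two lowest-value passengers are priced out.
print("uniform $400 :", outcome([400] * 4))
# Perfectly personalized fares capture all surplus and remove the deadweight loss,
# but only if willingness to pay were actually known, which it never is.
print("personalized :", outcome(willingness_to_pay))
```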

Audience member: Hi. I'm going to try to frame this correctly. So this is a sort of new area, and we recognize that there is bias and there are ethics around it. How do you balance that with the politics, not just domestic but global? How does that factor in? How do you address that? Because at times people are very aware that something is unethical and it's a choice. Any thoughts, any comments?  

Navid: That might be a 2 million dollar question, politics.  

Katie: I think one of the hardest things about AI ethics is the fact that sometimes, maybe even frequently, it is doing politics by other means, and nobody elected us. We are doing decision making and putting procedures into place. And that's always been true to some degree: there's a legislative structure, and then we get to make decisions within that. But all of a sudden the kinds of decisions that we can make with AI, and this is moving out of this industry, but sentencing algorithms and all kinds of things, are hugely political, and the folks who are building them are saying, wait, I don't want to make this decision about what is fair. There are a lot of definitions of fairness, and which one you ascribe to is going to depend on your politics. And that is itself very complicated; we don't know how to resolve it. If we think of AI development as an industry, or a cross-industry effort of various sorts, we need to have a conversation in that industry about the power of our decision-making systems and the fact that there needs to be a more democratic way to settle on some of these definitions. As a world, we don't have structures to do that right now, and I literally have no idea how we get them. So that's a huge, huge challenge coming down the road. You've spotted it.  

Navid: Anybody else? Actually, I didn't notice that Nicole, our corporate counsel, came in. We did the disclaimer at the beginning. Just wanted to let you know. So I did that.  

Nicole: You'll learn.  

Audience member: So you talked about the time it's going to take to come up with best practices and how this technology is growing and widespread awareness of these problems is growing. Crystal ball, how long do you think it'll take?  

Katie: Good question. So best practices for solving problems in particular contexts, with stakeholder groups that make sense: two years. That's going to be quick, right? We're going to figure out how to consult, and those are the smaller problems, but I think we'll have really good best practices. I think they're actually happening now; there are watchdog groups and things coming up that are going to form an ecosystem to help us do this.  

Problems like what counts as fair on a global level? The world is not trending that way right now, to a global discussion of fairness. And like I said, I dream of that world. I don't know what it's going to look like. I don't know, it's going to be a long time. And so I think the best we can do for a while is to cling to context as much as we can. Solve these AI problems for the context that we have, for the user expectations that we know, for the legal infrastructure that we're living under at that moment, and just kind of adapt as we go.  

Audience member: I want to come back to this discussion of willingness to pay and ability to pay, because that's very often at the heart of personalization: I think it's because I know you want this more. I guess the thing is, how do you balance the two? Because having the ability to pay doesn't mean I want to pay for this. I may not need to pay for this. I may have the ability to pay for an upgrade tomorrow, flying out or at midnight going back to SF (San Francisco), my home, but I may not want to. So there needs to be, I would say, some kind of balance in how you determine what price you want to offer to me as an individual. We know that offering it to a large group is problematic, and as I mentioned yesterday, hyper-personalization is in fact easier to some extent; it gets you past a lot of the technical problem as well as the ethical problem. It's easier, and we should go there. But somehow airlines are a little bit reluctant to go that hyper-personal because of the type of data you're dealing with. It does solve a lot of problems, but there is still this problem of what price you offer, say, to me as an individual. How do you balance that, if you have every data point you can get about me: my ability to pay, my willingness to pay, you have my calendar, you can see that tomorrow is actually free and everything, so you know I'm not busy, I'm not in a hurry or whatever? How do you balance that?  

Tony: Yeah. Three million dollar question?  

Navid: True.  

Tony: Yeah, well, I guess whether you're happy to pay or want to pay is quite often relative. So if you find another airline is going to charge you $400 and this one is trying to extract a maximum of $450, you're probably still happy to pay the $450 instead of the $400, and so on. So I guess with NDC, and by keeping an eye on what other alternatives you have, you may actually be able to achieve a balance where you are happy to pay and the airline also enjoys slightly higher revenue, instead of selling at the original, say, $400 with inventory that has not quite closed, or whatever the old term is. So I guess that's where AI comes in, if you can scan the environment quickly enough.  

Katie: Yeah. So I will say, we haven't had the data privacy conversation here today, and that's another piece of this puzzle, because data is at the heart of these systems. There are valid concerns about user expectations around whether they would have your calendar data, because, I mean, we know somebody has your calendar data. It's Google, probably, and then also anybody Google has shared it with, which is potentially quite a few people. So yeah, there's this question of where we are on norms around data sharing and data access, and the invasiveness of data access in order to do price discrimination, things like that. I think that's another super interesting conversation. It's a little bit removed from AI fairness; fairness and privacy are two separate issues within data ethics or information ethics. So we can have that conversation, but I think we probably can't have it today.  

Navid: Okay. Well, thank you guys very much. I really, really appreciate it. Tony, Doctor Shilton, it has been a great conversation. I'm glad to have you guys here, and thank you to those members of the audience who asked us questions. I owe you 20 bucks for getting it started. Thank you very much. And everybody, make sure we head back to the main room for one last quick session before we all wrap up. Thank you, everybody. Thank you.


Speakers


Tony Sun

Co-Founder, Rassure

Tony envisioned and led the development of the world's first airline revenue protection solution (RAX®) entirely driven by data analytics and AI. He holds degrees in mathematics, computer science, and business administration. He had a successful early career in IT and management consulting, and in the past two decades, Tony has focused on pursuing innovations and entrepreneurial opportunities. He has a strong passion for serving the aviation industry and has the acumen and a unique capability to deliver "Information > Insight > Impact." In day-to-day life, Tony enjoys running and solving Sudoku. 


Katie Shilton

Associate Professor, College of Information Studies, University of Maryland, College Park

Katie’s research focuses on technology and data ethics. She is the PI of the PERVADE project, a multi-campus collaboration focused on big data research ethics. Other projects include improving online content moderation with human-in-the-loop machine learning techniques, analyzing values in audiology technologies and treatment models, and designing experiential data ethics education. She is the founding co-director of the University of Maryland’s undergraduate major in social data science. Her work has been supported by multiple awards from the US National Science Foundation. Katie received a B.A. from Oberlin College, a Master of Library and Information Science from UCLA, and a Ph.D. in Information Studies from UCLA. 


Navid Abbassi

Chief Architect, ATPCO

Navid’s team provides support in alignment with ATPCO’s technology capabilities and investments with corporate strategy; creates integrated, efficient, and accurate ATPCO platform solutions across ATPCO’s product and technology groups; and helps evolve the corporate strategy based on the changing industry distribution and technology landscapes.
