Introducing Fairly Deep 🔬

Wassily Kandinsky, Joyous Ascent. Color lithograph. 1923. UCLA Hammer Museum
What does it mean to be "fair"? How would you define fairness? Why?
I kept asking myself these questions when, in early September, I joined a research group at Cornell looking to define how ranking algorithms can be fair to both users and the items being ranked. I found this topic fascinating because we encounter these algorithms multiple times a day as a natural way of organizing information. Whether I'm searching for a new coffee machine on Amazon or I'm a recruiter looking for software engineering candidates on LinkedIn, these platforms rely on ranking algorithms that surface the items most relevant to me, the user, at the top of my search results. And it's no surprise that the first items in any ranking get disproportionately more exposure than the rest (when was the last time you went to the second page of a Google search?).
But how do you produce a ranking that is not just relevant to the user but also fair to the people (or items) being ranked? And what does it mean to apply fairness to algorithms in the first place?
Having spent the entire semester looking into these issues, I came out with more questions than answers. At the same time, I found a wealth of research, blogs, articles and podcasts scattered across various corners of the internet.
In this newsletter, I will try to gather the latest developments, news, and trends in the rapidly growing area of machine learning fairness, transparency and ethics. However, I can only hope to capture a tiny snapshot of this field in an email. So, I’m asking all my readers (yes, all 10 of my friends whom I’m spamming with this) to send any and all material that you think should be highlighted in the newsletter. Last but not least, this newsletter is intended for non-technical and technical audiences alike, so if you think that I’m not striking the right balance, please let me know. Having always been passionate about writing, I’m excited to take on this challenge.
Without further ado – welcome to Fairly Deep, a bi-weekly newsletter about fairness, privacy, transparency and ethics in machine learning.
Thank you for reading and Happy New Year 💫
Fair Enough
December was an important month for all things ML because of NeurIPS, the largest annual AI conference, which brings together thousands of researchers, data scientists and others. NeurIPS 2019 drew the largest attendance in its 33-year history, with 13,000 registrations and 6,743 paper submissions, of which 1,428 were accepted (a ~21% acceptance rate).
🧬 At the Fair ML for Health workshop, Stanford’s Sharad Goel talked about the challenges of formalizing ideas about fairness in algorithmic decision making. He presented the limitations of many of the fairness definitions that have been proposed in recent years to deal with issues of discrimination and bias. For example, he discussed Classification Parity – a notion of fairness that requires a particular metric to be the same across groups – in the context of the false positive rate metric (a toy sketch of what checking this looks like follows below). [video of the talk | slides | paper]
P.S. For an excellent, in-depth explanation of the talk and Goel’s associated paper, check out these slides.
If you are interested in the intersection of ML and healthcare, do check out other talks from the Fair ML for Health workshop as well as the ML4H (Machine Learning for Health) workshop.
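To make the classification parity idea concrete, here is a minimal sketch (my own toy example, not code from the talk) of checking one such criterion: comparing false positive rates across two groups. All numbers are made up.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): how often true negatives get flagged as positive."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

# Toy labels, predictions, and a group attribute (hypothetical values).
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 0, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
# Classification parity on FPR asks these per-group numbers to be (roughly) equal;
# Goel's point is that enforcing this kind of parity has limitations of its own.
```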
🏹 Speaking of formalizing fairness, a group of researchers from UMass Amherst and Stanford introduced Robinhood, an offline contextual bandit algorithm that can satisfy fairness constraints and “achieve performance competitive with other offline and online contextual bandit algorithms.” The algorithm lets users specify their own fairness rule, expressed mathematically, and plug it into the algorithm. For example, one can specify that men and women should have an equal chance of approval for a loan. The researchers tested the algorithm on three tasks: an interactive tutoring system, a loan approval system and a criminal recidivism predictor based on ProPublica’s notable investigation. They report that “in each of our experiments, Robinhood is able to return fair solutions with high probability given a reasonable amount of data.” Some of the fairness constraints discussed by the authors are also analyzed in Goel’s talk. [Paper]
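The “specify your own fairness rule” idea is easy to picture in code. Below is a hypothetical sketch of what the loan example might look like as a plug-in constraint; this is not Robinhood’s actual interface, just an illustration that the rule is an ordinary mathematical expression over the algorithm’s decisions (the tolerance parameter is my own invention).

```python
import numpy as np

def approval_rate_gap(decisions, gender):
    """|P(approve | men) - P(approve | women)| on observed decisions."""
    return abs(decisions[gender == "M"].mean() - decisions[gender == "F"].mean())

def fairness_rule(decisions, gender, tolerance=0.05):
    """The constraint a fairness-aware bandit would be asked to satisfy
    with high probability (tolerance is a made-up slack parameter)."""
    return approval_rate_gap(decisions, gender) <= tolerance

# Toy loan decisions from a candidate policy (all values invented).
decisions = np.array([1, 0, 1, 0, 1, 0, 1, 1])
gender    = np.array(["M", "F", "M", "F", "M", "F", "M", "F"])
print(fairness_rule(decisions, gender))  # False: the approval gap is 0.75 > 0.05
```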
🧮 The idea of “plug-and-play” fairness constraints is also present in a paper by Singh and Joachims that proposes both a framework and a learning-to-rank algorithm that not only optimizes a ranking’s utility but also enforces fairness constraints with respect to the items. Concretely, the authors point out that algorithms that rank items (say, ranking relevant products for your Amazon search) are usually blind to how the ranking impacts the items themselves. In other words, a ranking can maximize utility to the user (and there are many ways to define utility, too) while being unfair to the items being ranked, causing real-world harm to those stakeholders. The authors introduce a learning-to-rank framework that allows one to optimize for a variety of utility metrics while “satisfying fairness of exposure constraints with respect to the items,” where the definition of fairness can be specified by the user (a toy illustration of the exposure idea follows below). [Paper]
[Full Disclosure: I worked with the authors of this paper on my research project this past semester. I will share more about my research in upcoming newsletters.]
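To see why exposure, not just relevance, matters, here is a toy illustration (my own sketch, not the paper’s code). Under a standard position-bias model, small differences in relevance between two groups of items can translate into a much larger gap in the exposure they receive, which is exactly the kind of disparity that fairness-of-exposure constraints are meant to control.

```python
import numpy as np

def position_exposure(n_positions):
    """A common position-bias model: exposure at rank k ~ 1 / log2(k + 1)."""
    ranks = np.arange(1, n_positions + 1)
    return 1.0 / np.log2(ranks + 1)

# Six items sorted by (made-up) relevance; group G1 barely edges out group G2.
relevance = np.array([0.80, 0.79, 0.78, 0.77, 0.76, 0.75])
groups    = np.array(["G1", "G1", "G1", "G2", "G2", "G2"])
exposure  = position_exposure(len(relevance))

for g in ["G1", "G2"]:
    mask = (groups == g)
    print(g,
          "merit share:",    round(relevance[mask].sum() / relevance.sum(), 2),
          "exposure share:", round(exposure[mask].sum() / exposure.sum(), 2))
# G1 holds ~51% of the relevance but gets ~64% of the exposure. A fairness-of-
# exposure constraint would require exposure shares to track merit more closely.
```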
Not Kidd-ing

Dr. Celeste Kidd, Professor of Psychology at UC Berkeley
One of the most talked-about presentations at this year’s NeurIPS was Professor Celeste Kidd’s How To Know, an exploration of how people come to believe what they believe, “why do people sometimes believe things that aren’t true,” and how machine learning researchers and practitioners can be better aware of human belief formation when they design systems that affect people’s perception of things. Dr. Kidd was also named Time Person of the Year in 2017, sharing the title with numerous other women who were the trailblazers of the #MeToo movement.
Closer to the end of her remarks, she addressed the men in the audience to talk about a belief that she said was common among her male colleagues – that in the age of #MeToo, the slightest misstep or “misinterpretation” can ruin a male researcher’s career over allegations of sexual misconduct. “You have been misled,” she said, taking a pause before a predominantly male audience. “The truth is, it takes an incredible, a truly immense amount of mistreatment before women complain. No woman in tech wants to file a complaint, because they know the consequence of doing so.” Among those consequences is workplace retaliation, such as being overlooked for a promotion or being sidelined entirely. She argued that if one hears about a public case of mistreatment, chances are it involved particularly egregious circumstances. And she cautioned her listeners not to fall for the smokescreens that offenders put up by apologizing for “a minor infraction, while omitting the many more serious and severe behaviors they should be remorseful for – lying by omission.”
Dr. Kidd’s full remarks are available on YouTube (the #MeToo remarks start around 27:00)
👂 All Ears: a selection of memorable podcast episodes
If you are interested in hearing more about Prof. Kidd’s background and research beyond the invited talk at NeurIPS, I highly recommend her recent interview on the TWIML podcast (and TWIML in general).
Michael Kearns, a well-known researcher in algorithmic game theory and computer science more broadly, recently published a general-audience book called The Ethical Algorithm, co-authored with Aaron Roth. Fun fact: Kearns and Leslie Valiant posed the famous weak learnability question that inspired the development of AdaBoost.
p.s. these podcasts are available on all major platforms
In Other News
In December, Dr. Rediet Abebe became the first black woman to receive a Ph.D. in computer science from Cornell University. She successfully defended her dissertation, titled “Designing Algorithms for Social Good,” and is now a Junior Fellow at the prestigious Harvard Society of Fellows. [Article]
Intel acquired Israel-based Habana Labs for $2 billion. The company specializes in developing computer chips designed for AI and machine learning applications. [Article]
Following Intel’s high-profile acquisition, WSJ reported that “even before Intel’s latest purchase, AI-related deals globally had surged to $35 billion in value through early November, topping the previous high of $32 billion two years ago and $11 billion in transactions last year.” Top tech companies are rushing to hire and train top talent for their AI/ML divisions, often poaching people from top universities and, in the process, hollowing out their CS departments: “Last year, private industry also lured away 60% of AI doctoral graduates.” [Article]
Subscribe now so you don’t miss the next issue. Let’s stay in touch :)
In the meantime, tell your friends!