Issue #3: Promise & Responsibility
The last couple of weeks have been eventful. January 21st through 24th was the Davos World Economic Forum, and this past Thursday was the closing day of the FAT* conference, now renamed FAccT (Fairness, Accountability, and Transparency). The conference was established in 2018 to bring together “researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems” and over the past two years has grown into an ACM-affiliated international conference. This year it was hosted in Barcelona, Spain. The organizers stole some of my thunder this week, because I have been meaning to write about the conference’s name and suggest that the “FAT” acronym may not be the best idea. Though the new acronym is not the most aesthetically appealing, I do find it meaningful that, in an age when facts are too often ignored and diminished, a conference dedicated to fairness, accountability, and transparency has “fact” in its name.
The other topic of today’s newsletter is promise and responsibility in the context of tech corporations. Right before Davos, Microsoft’s CEO Satya Nadella wrote a post titled “Achieving more for the world,” in which he outlined Microsoft’s four goals for “the decade for urgent action”:
Power broad economic growth through tech intensity
Ensure that this economic growth is inclusive
Build trust in technology and its use
Commit to a sustainable future
The language of the four commitments is relatively standard, but the phrase “tech intensity” stood out as unusual. Nadella defined the term as “adopting best-in-class digital tools and platforms for the purpose of building new, proprietary products and services.” This principle is not just for Microsoft to take advantage of, he says, but is meant to be adopted broadly by “companies, communities and countries.” One section of the post should be particularly relevant to the readers of this newsletter:
we [will] build AI responsibly, taking a principled approach and asking difficult questions, like not what computers can do, but what computers should do? Fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability are the ethical principles that guide our work.
At Davos, Nadella spoke in detail about the promises and dangers of AI, calling for government regulation in the space. His call for regulation was echoed by IBM CEO Ginni Rometty, whose company issued a policy paper calling for “precision regulation of AI,” and by Alphabet’s Sundar Pichai, who argued that AI will prove to be a more important invention than electricity and that “you can’t get [AI] safety by having one country or a set of countries working on it. You need a global framework.”

But some observers see tech’s newfound self-awareness and emphasis on AI ethics as a diversion from the thornier issues of antitrust and broader regulation of the tech sector. “Warning the business elite about the dangers of AI has meant little time has been spent at Davos on recurring problems, notably a series of revelations about how much privacy users are sacrificing to use tech products,” reports Bloomberg. The same article also notes that despite flashy and (truly) commendable public statements about taking on more corporate responsibility, “Facebook, Amazon, Apple and Microsoft all increased the amount they spent on lobbying in Washington last year, with some of those funds going to pushing industry-friendly privacy bills.” Of course, it would be naive to think that a company like Microsoft would suddenly cease lobbying for itself in Washington (don’t hate the player, hate the game), but a healthy dose of skepticism about grand promises is encouraged. With that said, I think that if Satya Nadella and Microsoft deliver on even half of their promises for this decade, we will all be far better off.
Best wishes,
Andrei
Speaking of Sustainability 🌍
BlackRock will emphasize environmental sustainability as a core investment goal, CEO Larry Fink announced in his annual letter to the CEOs of the world's largest corporations. BlackRock will also "introduce new funds that shun fossil fuel-oriented stocks, move more aggressively to vote against management teams that are not making progress on sustainability" and more. The annual letter is considered among the most influential documents in the world of finance, and CEOs of some of the most important companies around the world tend to pay close attention because BlackRock often has an outsize influence on their boards of directors. [Letter]
Microsoft announced plans to become “carbon negative” by 2030, as part of its broader promises regarding environmental sustainability, AI safety and corporate responsibility in the new decade. Being carbon negative means the company would go beyond net-zero emissions and actively remove more carbon dioxide from the atmosphere than it emits. [Announcement]
ML Education 📚
In the previous issue of Fairly Deep, I wrote a bit about shoddy online AI education. Having taken a good number of online ML courses (and a number of in-person classes at Cornell) and having fallen victim to BS resources a few times, I decided to compile a spreadsheet of free online courses that I found to be worthwhile and educational: https://docs.google.com/spreadsheets/d/1QVSmUNUqT80Hh49dVmVVfpJoZEVz-elKXq2TxbpkkgY/edit?usp=sharing
It is by no means an exhaustive list of good resources, so please feel free to add more!
👂All Ears: a selection of memorable podcast episodes
Timnit Gebru, one of the most well-known researchers in ML ethics and the lead of Google’s ethical AI team, recently discussed the latest trends in fairness and ML ethics on the TWIML podcast.
Cristos Goodrow, head of Search and Discovery at YouTube, talked about the YouTube algorithm, clickbait videos, personalization and much more in this excellent episode of the Artificial Intelligence Podcast. If you’re interested in more news about the YouTube algorithm, check out the previous issue of Fairly Deep :)
Last but not least, I recently discovered a wonderful podcast that discusses both technical and non-technical trade-offs in tech careers. The episodes’ topics range from “Bootcamp vs. Computer Science Degree” to “AWS vs. GCP” and the podcast is hosted by Mayuko Inoue, who is a popular tech YouTuber and an iOS engineer working in Silicon Valley. [Link]

In Other News 📰
London police will begin using facial-recognition cameras, while privacy advocates and civil liberties groups are sounding alarm bells. The European Union, which the UK officially (Br)exited this past week, is considering a blanket ban on using the technology for law enforcement purposes (similar to what San Francisco did last year). [article]
Another AI Winter? 🥶 The BBC reported that a number of ML researchers are concerned that overblown promises of Artificial General Intelligence (AGI) and “a general feeling of plateau” are increasing the possibility of another “AI Winter.” The term describes the periods of low funding and low trust in AI research in the 1970s and the late 1980s. In the past two decades, that gloom has been replaced by an exuberantly optimistic outlook and a number of truly remarkable breakthroughs (and certainly no lack of funding). But, “by the end of the decade there was a growing realisation that current techniques can only carry us so far.” What do you think: will there be another “AI winter”? [article]
Visa acquired fintech startup Plaid for $5.3 billion. Plaid’s technology lets apps access their users’ bank accounts and facilitate transactions, acting as a middleman between the app and the bank. WSJ reports that Visa and other card issuers are increasingly concerned that consumers are starting to avoid credit cards in favor of direct bank transfers. “Bank-account payments also offer a way into business-to-business payments, a sector in which card companies have been trying to play a bigger role because it is viewed as untapped compared with consumer payments.” [article]
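For the curious, here is a rough sketch of what that middleman flow looks like in practice, using Plaid’s public sandbox REST endpoints and Python’s requests library. The client ID, secret, and sandbox institution ID are placeholders rather than anything from the article, so treat this as an illustration, not production code.

```python
# Sketch: an app using Plaid as the middleman between itself and a bank.
import requests

BASE = "https://sandbox.plaid.com"   # Plaid's test environment
CLIENT_ID = "your-plaid-client-id"   # placeholder credentials
SECRET = "your-plaid-secret"

def post(path, **body):
    """Helper: every Plaid call is a JSON POST with the API keys attached."""
    body.update(client_id=CLIENT_ID, secret=SECRET)
    resp = requests.post(BASE + path, json=body)
    resp.raise_for_status()
    return resp.json()

# 1. Normally the app's frontend obtains a public_token when the user links
#    their bank via Plaid Link; in the sandbox we can mint one directly.
public_token = post(
    "/sandbox/public_token/create",
    institution_id="ins_109508",          # a Plaid sandbox test bank
    initial_products=["transactions"],
)["public_token"]

# 2. The app's backend exchanges the short-lived public_token for a
#    long-lived access_token tied to that linked bank account ("Item").
access_token = post(
    "/item/public_token/exchange",
    public_token=public_token,
)["access_token"]

# 3. With the access_token, the app reads account data -- Plaid sits between
#    the app and the bank for every request.
for acct in post("/accounts/get", access_token=access_token)["accounts"]:
    print(acct["name"], acct["balances"]["current"])
```

The key design point is that the bank credentials never touch the app: the app only ever holds Plaid tokens, which is what makes Plaid such an attractive (and strategically important) layer to own.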