Over the past decade, AI has become a ubiquitous technology, affecting businesses in every industry around the world. These innovations are driven by research, and the goals of AI research are shaped by many factors. Together, these factors determine the patterns of what research achieves, as well as who benefits from it and who does not.

In an effort to document the factors influencing AI research, researchers at Stanford, the University of California, Berkeley, the University of Washington, and University College Dublin & Lero examined 100 highly cited papers published at two leading AI conferences, NeurIPS and ICML. They claim that in the papers they analyzed, published in 2008, 2009, 2018, and 2019, the dominant values were operationalized in ways that centralize power, disproportionately benefiting corporations while neglecting society's least advantaged.

"Our analysis of highly influential papers in the discipline finds that they not only favor the needs of research communities and large companies over broader social needs, but also that they take this favoritism for granted," the paper's co-authors wrote. "Favoritism manifests in the choice of projects, the lack of consideration of potential negative impacts, and the prioritization and operationalization of values such as performance, generalization, efficiency, and novelty. These values are operationalized in ways that disfavor societal needs, usually without discussion or acknowledgment."

In the papers they reviewed, the researchers identified "performance," "building on previous work," "generalization," "efficiency," "quantitative evidence," and "novelty" as the top values espoused by the co-authors. By contrast, values related to user rights and ethical principles appeared very rarely, if at all. None of the papers mentioned autonomy, justice, or respect for persons, and most justified only how the co-authors achieved certain internal technical goals. More than two-thirds (71%) made no mention of societal need or impact, and only 3% attempted to identify links connecting their research to societal needs.

One of the papers included a discussion of negative impacts, and a second mentioned the possibility of them. But tellingly, none of the other 98 contained any reference to potential negative impacts, according to the researchers. Even after NeurIPS began requiring, as of NeurIPS 2020 last year, that submitting co-authors state the "broader potential impact of their work" on society, the language leaned toward positive consequences, often mentioning negative consequences only briefly or not at all.

"We reject the vague conceptualization of the discipline of [AI] as value-neutral," the researchers wrote. "Instead, we find the discipline of ML to be socially and politically charged, frequently neglecting societal needs and harms, while prioritizing and promoting the concentration of power in the hands of already powerful actors."

To that end, the researchers found that corporate ties, whether funding or affiliation, in the papers they examined had doubled to 79% from 2008 and 2009 to 2018 and 2019. Meanwhile, ties to universities declined to 81%, putting corporations nearly on par with universities in the most cited AI research.

This trend is partly attributable to private-sector poaching. From 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the growing movement of researchers from academia to industry.

But whatever the cause, the researchers say the effect is the suppression of values such as beneficence, justice, and inclusion.

"The top values of [AI] that we have presented in this paper, such as performance, generalization, and efficiency … enable and facilitate the realization of Big Tech's objectives," they wrote. "A large, state-of-the-art image dataset, for example, is instrumental for large-scale models, benefiting [AI] researchers and big tech companies in possession of enormous computing power. In the current climate, where values such as accuracy, efficiency, and scale, as currently defined, are a priority, user safety, informed consent, or participation may be seen as costly and time-consuming, evading societal needs."

A history of inequality

The study is only the latest to claim that the AI industry is built on inequality. In an analysis of publications at two major machine learning conference venues, NeurIPS 2020 and ICML 2020, none of the top 10 countries by publication count were located in Latin America, Africa, or Southeast Asia. A separate report from Georgetown University's Center for Security and Emerging Technology found that while 42 of the top 62 AI labs are located outside the United States, 68% of the workforce is located in the United States.

Imbalances can lead to harm, especially since the AI field generally lacks clear descriptions of bias and does not explain how, why, and to whom a specific bias is harmful. Previous research has shown that ImageNet and OpenImages, two large publicly available image datasets, are U.S.- and Euro-centric. Models trained on these datasets perform worse on images from the Global South. For example, images of brides and grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared with images of brides and grooms from the United States. Along the same lines, because images of words like "wedding" or "spices" are presented in distinctly different ways across cultures, publicly available object recognition systems fail to correctly classify many of these objects when they come from Global South countries.

Initiatives are underway to reverse the trend, such as Khipu and Black in AI, which aim to increase the number of Latin American and Black scholars attending and publishing at top AI conferences. Other communities based on the African continent, like Data Science Africa, Masakhane, and the Deep Learning Indaba, have expanded their efforts with conferences, workshops, dissertation awards, and curricula developed for the wider African AI community.

But substantial gaps remain. AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google, reportedly in part over a paper discussing the risks of deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. Google-affiliated co-authors subsequently published a paper pushing back against Gebru's environmental claims.

"We present this paper in part in order to expose the contingency of the present state of the field; it could be otherwise," the University College Dublin & Lero researchers and their associates wrote. "For individuals, communities, and institutions navigating the field's difficult-to-pin-down values, as well as those striving toward alternative values, it is a useful tool to have a characterization of the way the field is now, in order to understand, shape, dismantle, or transform what is, and to articulate and bring about alternative visions."

