The language we use to talk about artificial intelligence matters, and that includes our technical terms. One of the biggest drawbacks of machine learning and AI discourse is the vocabulary we have used so far to describe it. For example, consider deploy and release: which word best describes what you do with a production machine learning model? While practitioners may argue that the words mean the same thing, in this post I discuss why, in the context of ML, release and implementation may be better terms than deployment. (Personally, I prefer implementation.) I will also dive into how different word choices can help us build a responsible AI movement that spans disciplines and industries.
The military use of the word deploy in English dates back to 1786, meaning to extend (troops) in a line and/or to expand (a unit formed into columns). The term has been used figuratively since 1829. This military framing is not only unhelpful when talking about artificial intelligence; it also resonates with other loaded words that have entered the modern technical vocabulary, such as colonize (often used to describe human travel to Mars).
In addition, given that history, the word implies knowledge of which group or groups a system can be used against. Stripped of that meaning it may sound innocent enough; nevertheless, we need to rethink our language around putting ML models into production. I challenge you to consider how the use of deploy runs counter to addressing the harms of unjust systems in society.
This supposed knowledge of what is being put into production is a huge problem. Data scientists and ML engineers understand that a decision-making system is being put into use, but we do not always know what impact these models will have. Given that an estimated 65% of companies cannot explain how their AI models make decisions or predictions, we should be concerned that we are deploying systems without knowing the possible consequences for the groups they affect.
Using deployment to describe technical activity, especially given the impact and scale of data science and AI work, is imprecise. That lack of specificity when talking about AI does nothing to mitigate the harms of these systems, which often reinforce stereotypes, disempower users, and offer them little control. The term also reinforces the power imbalance between the organizations that create technology and the people it is used on. Deployment is one-sided: users are rarely consulted, and there is no avenue to contest or discuss the decisions these systems make. None of this aligns with ethical goals in artificial intelligence.
This lack of precision and careful word choice mirrors the imprecise practices that lead to disparate outcomes from ML models. Without guidance from social scientists, from the subgroups likely to be affected, and from fairness scholars, what engineers are doing at the moment is deploying models with little attention to who the model is being deployed on. This is only part of the reason roughly 90% of all ML models never make it into production in the first place.
Because ML models are put into production with questionable ethics, without sufficient transparency or any way for users to give input, they have also opened backdoors into our workplaces and private lives through invasions of privacy. We need to shift to terms such as release and implementation to expand what these discussions can be, rooting them in social science research on inclusion and in empathy-driven development of artificial intelligence.
What does this mean for you?
Lots of Ctrl + F and Ctrl + V
Updating our language in light of new information is far from new. Language shifts and changes over time. As artificial intelligence develops and our understanding of it changes, so should the language we use to describe it. For those of us working in AI as scientists, engineers, business leaders, policymakers, and educators, it is important to understand how language shapes perceptions of this technology.
We don’t have to wait for a revolution or a new world order to improve our modeling practice; you can start today with the language you use on your team. Update your slides and internal documents.
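The Ctrl + F exercise above can even be scripted. Here is a minimal sketch that swaps deployment-centric vocabulary for release-centric vocabulary across a folder of Markdown documents; the term mapping and the docs path are illustrative assumptions, not a prescription for your team’s wording:

```python
import re
from pathlib import Path

# Hypothetical mapping from "deploy" vocabulary to alternatives; adjust to taste.
# Longer forms come first so the regex matches "deployment" before "deploy".
REPLACEMENTS = {
    "deployment": "release",
    "deploying": "releasing",
    "deployed": "released",
    "deploy": "release",
}

def update_terms(text: str) -> str:
    """Replace deploy-family words, preserving an initial capital letter."""
    pattern = re.compile("|".join(REPLACEMENTS), re.IGNORECASE)

    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = REPLACEMENTS[word.lower()]
        return replacement.capitalize() if word[0].isupper() else replacement

    return pattern.sub(swap, text)

def update_docs(folder: str) -> None:
    """Apply the mapping in place to every Markdown file under `folder`."""
    for path in Path(folder).glob("**/*.md"):
        path.write_text(update_terms(path.read_text()))
```

A dictionary of explicit word forms keeps the substitution predictable; a stemming approach could catch more variants but risks mangling unrelated words.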
The ML discourse to date has used terms that encode the beliefs, values, and perspectives of those who use them, even unintentionally. As we look to the future of artificial intelligence, it is important to be aware and intentional about our words. We should use language that conveys what we do in a way that captures the nuance and complexity of machine intelligence while respecting the etymology and connotations of the words we choose.
What do you think?
What word would you use to describe the work of putting a model into production? Let me know in the comments below!