Data is the lifeblood of businesses today. Not only are day-to-day operations based on a continuous flow of information about every aspect of the business, it is also becoming increasingly clear that previously unsolved problems can be tackled, and processes improved, with adequate data and proper analysis. That should come as no surprise: data science is currently ranked No. 2 on Glassdoor’s list of the best jobs in the U.S. in 2021 (and had been ranked No. 1 in previous years).
But as the 2018 Facebook / Cambridge Analytica scandal made clear to the world, modern methods for collecting and analyzing large amounts of data can also raise ethical questions. With the outbreak of that scandal, the whole world began to form an opinion on how data can and should not be used, ushering in the era of so-called data ethics.
Combine this with existing and upcoming legislation aimed at limiting how much customer data can be collected and for what purposes, and the bottom line is this: if your company uses customer data to make customer-specific decisions, ethical and legal considerations must be part of how those decisions are made.
Creating experts in ethical thinking
In my last article, I described various problems that can arise from the careless use of data. I also touched on the numerous frameworks that data scientists can use as primers when starting to think about the issues their work might pose. But as data increasingly drives decision-making across the organization, knowledge of data ethics needs to extend beyond data science teams. Avoiding unintentional pitfalls means considering the ethics of each use of data as an integral part of an organization’s processes.
The company I work for helps organizations avoid falling into these ethical data traps. We try to build the ethical use of data into our DNA: every consultant in our company is trained in data security and ethics to some degree. We have also recently set up an Inclusion, Diversity, Equality and Awareness (IDEA) group for people with a passion for thinking about how ethics and fairness affect business decision-making. When we work on a data project, the IDEA group (a fitting acronym) helps ensure that our data scientists are aware of the latest developments in the responsible use of customer data.
An ethical framework for the use of data
Companies often assume that dealing with ethical issues will hold them back, or somehow “break” what they are doing. But in our experience, this is never the case. In my work, I help companies achieve their goals in an ethical way.
To this end, I (along with two colleagues) created an Ethical Risk Quick Scan to help clients rapidly assess whether their use of data poses an ethical risk. For use cases whose solutions have not yet been designed, it is difficult to assess the risk, let alone define the necessary mitigating measures. Yet ethical risk is a crucial criterion in prioritizing use cases and choosing an approach. That is why we developed the Quick Scan for use at this very early stage of use-case selection and requirements gathering. It gives an early sense of a project’s level of risk, the areas involved, and why those areas need special attention, and it helps you incorporate ethics into the planning of data projects.
The Quick Scan looks like this:
We designed the Quick Scan as a visual overview of potential ethical issues. Simply fill in the scores as they match the details of your use case.
The framework in action: the taxi-driver case
Here is an example of the framework in use. Imagine a taxi service that digitally monitors many aspects of the rides its drivers perform. The company wants to build a model that evaluates driver performance based on this data and automatically adjusts drivers’ pay accordingly.
Let us walk through this use case along each dimension of the framework:
- Does it affect vulnerable people? The drivers affected tend to have below-average financial means, and some may live paycheck to paycheck.
- How many people are affected? This is an internal application that does not affect many people (only the company’s own taxi drivers).
- Does it affect important things in life? The model’s decisions affect important matters in the drivers’ lives, since they touch on financial well-being and job security.
- Does it affect personal behavior? The model’s decisions may influence drivers’ behavior, for instance by encouraging them to work longer hours or accept rides to places they are uncomfortable with.
- Is there no, or only a slow, feedback loop? The impact of the model’s decisions becomes visible quickly (every week or month), so feedback is fast.
- Are there no humans in the loop? The use case calls for autonomous performance evaluation and pay adjustment, which means there are no humans in the loop.
- Is there bias in the data? Performance is typically a subjective concept, so the data on which the model is trained may well be biased.
- Is personal data used? Finally, the data contains precise information about individuals that qualifies as personal data.
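The walkthrough above can be captured in a short script. This is a minimal sketch, not part of the published framework: the dimension names and the low/medium/high scale follow the article, but the data structure, function names, and flagging logic are illustrative assumptions of mine.

```python
# Hypothetical encoding of the Quick Scan for the taxi-driver use case.
# Scores follow the dimension-by-dimension discussion above.
taxi_driver_scan = {
    "Affects vulnerable people": "high",
    "Number of people affected": "low",
    "Affects important things in life": "high",
    "Affects personal behavior": "high",
    "No or slow feedback loop": "low",
    "No humans in the loop": "high",
    "Bias in the data": "high",
    "Personal data used": "high",
}


def flag_dimensions(scan):
    """Split the scanned dimensions into those needing mitigating
    measures (high risk) and those deserving special attention
    (medium risk); low-risk dimensions are left out."""
    high = [dim for dim, score in scan.items() if score == "high"]
    medium = [dim for dim, score in scan.items() if score == "medium"]
    return high, medium


high, medium = flag_dimensions(taxi_driver_scan)
print("Consider mitigating measures for:")
for dim in high:
    print(f"  - {dim}")
```

In a real assessment the scores would come from a team discussion rather than a script, but even this simple tally makes the risk profile of a use case explicit early on.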
Here is what the completed Quick Scan looks like for this specific use case:
For each area marked with a red dot, measures must be considered or taken to mitigate potential problems. It is also advisable to pay special attention to areas that score at medium risk. The table below gives an example of the biggest problem areas in this case.
The Quick Scan focuses on measures and mitigations that can be taken before the project is even underway. Once the project is in full swing and the actual solution and data requirements become increasingly clear, the team can take its assessment to the next level with an ethical risk assessment framework. There, the risk dimensions are examined in more depth along generally accepted ethical guidelines: security, fairness, transparency and privacy. We distilled these guidelines from a meta-study of 36 major sets of AI principles.
Ensuring ethical behavior during the data transformation
During such a transformation, my company’s data scientists work with industry experts to ensure seamless interaction between data, analytics, technology and business expertise. At the same time, we make sure that the use of data is optimized to achieve business objectives without violating the guidelines and regulations on the ethical use of data.
In the next article, I will shed light on the types of actions we can take based on Quick Scan results, using a (broader) ethical risk assessment framework. With these guidelines, we can eliminate or reduce the identified risks and make the use of artificial intelligence not only feasible but also viable and desirable.
This article (and the framework) was written in collaboration with my colleagues Mando Rotman and Floor Komen. It has also been published on our company website, where you can download the framework as a PDF to use yourself.