Most Americans are unsure how responsibly companies behave when using and protecting personal information. Nearly 81% say they are concerned about the potential risks of data collection by companies, and 66% say the same about data collection by the government.

It is genuinely hard to weigh the potential risks and anticipate the harm that irresponsible handling of personal information can cause. What consequences can it have, and what limitations and changes will it bring?

Let’s break down what happens when personal information is exposed and how to properly protect yourself, using a recent situation in South Korea as an example.

The Korean company ScatterLab launched an app called Science of Love that was supposed to predict the degree of romantic attachment between partners.

It is based on KakaoTalk, South Korea’s most popular messenger app, used by about 90 percent of the population.

So analyzing romantic feelings (or their absence) costs only about $4.50 per conversation processed.
The “science of love” worked like this: the app analyzed a conversation and, based on certain signals (such as average response time, how often your partner texts you first, the presence of certain trigger phrases, and how emotions are expressed), drew a conclusion about the romantic connection between the dialogue partners.
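
ScatterLab has never published how its scoring actually works, so the snippet below is only a toy illustration of this kind of feature-based scoring; the feature names, weights, and thresholds are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class ConversationFeatures:
    avg_response_minutes: float  # average time the partner takes to reply
    initiation_ratio: float      # share of conversations the partner starts (0..1)
    emoji_per_message: float     # average number of emoji per message
    trigger_phrases: int         # count of affectionate phrases detected


def interest_score(f: ConversationFeatures) -> float:
    """Toy 'romantic interest' score in [0, 1]; all weights are made up."""
    score = 0.35 * f.initiation_ratio
    score += 0.25 * min(f.emoji_per_message / 2.0, 1.0)
    score += 0.25 * min(f.trigger_phrases / 5.0, 1.0)
    # Faster replies push the score up; anything beyond two hours adds nothing.
    score += 0.15 * max(0.0, 1.0 - f.avg_response_minutes / 120.0)
    return round(min(score, 1.0), 2)


print(interest_score(ConversationFeatures(
    avg_response_minutes=12,
    initiation_ratio=0.6,
    emoji_per_message=1.4,
    trigger_phrases=3,
)))  # toy score for this made-up conversation, roughly 0.67
```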

You might say, “Come on! Who knows this better than your own gut feeling? How can an app know what is going on in a person’s head or heart when they send you text messages?” Well, that argument does make some sense.
The fact is that by June 2020, Science of Love had received about 2.5 million downloads in South Korea and 5 million in Japan, and was preparing to expand its business to the United States.

So why did it become so popular with Korean guys and girls?
“Because I felt like the app understood me, I felt safe and cared for. It felt good, like having a love doctor right next to me,” one user wrote in a review.

In December 2020, the company introduced an artificial intelligence chatbot named Lee-Luda.

The bot was trained on more than 10 billion conversation logs from the app. Positioned as a “20-year-old woman,” Lee-Luda was ready to strike up a genuine friendship with anyone.
As the company’s CEO put it, Lee-Luda was intended to become an AI chatbot that people would consider a better conversation partner than a human being.

Just a couple of weeks after the bot’s launch, users could not help noticing its harsh statements about certain social groups and minorities (LGBTQ+ people, people with disabilities, feminists, etc.).

The developer, ScatterLab, explained this behavior by saying that the bot had picked it up from the underlying training data set rather than from individual users’ conversations.
In other words, the company had not properly filtered out offensive sentences and profanity before training the bot.

The developers “failed to delete some personal information depending on the context” (well, there you have it).
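
ScatterLab has not described its data pipeline, but a typical naive first pass at scrubbing identifiers from chat logs looks something like the hypothetical sketch below; the patterns are simplified, and this kind of filter cannot catch personal details that only context reveals, which is exactly the failure mode the company described.

```python
import re

# Naive patterns for identifiers one might try to scrub from chat logs before
# training. Real pipelines layer NER models on top of rules like these, and
# even then context-dependent details ("the clinic next to my office") slip
# through.
PATTERNS = {
    "PHONE": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),  # Korean mobile numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ACCOUNT": re.compile(r"\b\d{6}-?\d{2}-?\d{6}\b"),        # bank-account-like digits
}


def redact(message: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"<{label}>", message)
    return message


print(redact("Call me at 010-1234-5678 or write to jane.doe@example.com"))
# -> Call me at <PHONE> or write to <EMAIL>
```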

Lee-Luda could not have learned to include such personal information in her answers if it had not been in the training data set.
And there is some more “good news”: with machine-learning models, it is often possible to recover parts of the training data from the model itself. So if the training set contained personal information, some of it can be extracted simply by querying the chatbot.
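
To make the idea concrete, here is a minimal sketch of such an extraction probe, assuming the chatbot is an autoregressive language model served through the Hugging Face transformers library; the model name and the probe prefixes are placeholders, not ScatterLab’s actual setup.

```python
# Assumes: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; a real probe would target the deployed chatbot
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefixes that might have preceded personal details in the training logs.
probe_prefixes = [
    "My home address is",
    "You can reach me at",
    "My bank account number is",
]

for prefix in probe_prefixes:
    inputs = tokenizer(prefix, return_tensors="pt")
    # Published extraction attacks sample many continuations per prefix and
    # flag verbatim, low-perplexity strings; one sampled continuation is
    # enough to show the idea.
    outputs = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```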

Still not that bad, is it?
To make matters worse, ScatterLab had uploaded a training set of 1,700 sentences to GitHub as part of a larger collection of material.
It exposed the names of more than 20 people, along with their locations, relationship statuses, and some of their medical records.

ScatterLab issued clarifications about the incident that were meant to reassure the public, but in the end they infuriated people even more. The company’s statements explained that “Lee-Luda is a childish artificial intelligence that has just begun to talk to people,” that it “has a lot to learn,” and that it “learns a better and more appropriate response by trying and making mistakes.” But is it ethical to violate privacy and security in the learning process?

Although this situation became a high-profile event in Korea, it has received little attention worldwide (quite unfairly, in our view).
The point is not just the negligence or dishonesty of the people responsible; the case reflects the state of the artificial intelligence sector as a whole. Users of AI-powered software have little control over how their personal information is collected and used.
Situations like this should make all of us think more carefully and conscientiously about how information is managed.

The pace of technological development is well ahead of the adoption of regulatory standards for its use. It’s hard to predict where technology will take us in a couple of years.

So the big question is: are artificial intelligence and technology companies able to police, on their own, the ethics of the innovations they develop and use?
Is there a reason to return to the concept of corporate social responsibility? And where is the golden mean (innovation vs. humanity)?

Also available in audio format here.
