
Greatest benefit of AI is human-machine collaboration

Article
15 December 2020
Ethics AI

Photo: Hanyang University

Artificial Intelligence (AI) is developing at lightning speed and transcending national borders. How can we steer the development of AI in the right direction? In this episode of the blog series #theWorld&AI, we talk to South Korean philosopher Sang Wook Yi. His biggest concern is that Korean engineers and government officials tend to treat social problems too readily as technical optimisation problems.

In short:

  • In the coming months, the Rathenau Instituut will be addressing the global importance of responsible AI.
  • South Korean philosopher Sang Wook Yi warns against placing too much public trust in automated decisions.
  • According to Yi, the greatest benefits of AI come through successful human-machine collaboration.

UNESCO is currently working on an international recommendation on ethics and AI. Twenty-four experts from around the world are writing global guidelines that will be presented to 195 member states in November 2021. The Rathenau Instituut has been appointed as a national observer in the development of the international recommendation. This role gives the Rathenau Instituut the opportunity to follow the proceedings and to provide substantive comments.

In the coming months we will be asking inspiring thinkers for their ideas. Which aspects do they consider important for this international discussion? Each blog post covers a different theme, such as the responsibility of companies in deploying AI, the role of governments and policymakers, promoting technological citizenship, and the impact AI has on work and education.

About Sang Wook Yi

Sang Wook Yi. Photo: Hanyang University

Sang Wook Yi is Professor of Philosophy at Hanyang University, South Korea, and Director of the HY Center for Ethics, Law and Policy of Science and Technology at Hanyang. He is a former chairperson and committee member of the Korean counterpart of the Rathenau Instituut: the Korea Institute of Science & Technology Evaluation and Planning (KISTEP). In that role he played an important part in advising the Korean Ministry of Science and Technology. Yi is a member of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) and one of the experts in UNESCO’s Ad Hoc Expert Group on the Ethics of AI.

About South Korea
South Korea is becoming increasingly important when it comes to AI potential and investment initiatives. Its government has developed an AI strategy and strives to become a Top 4 contender in AI by 2022, with an investment of over $2 billion in AI research and training. A striking number of start-ups in South Korea offer AI technology. And, of course, there is Samsung, a global leader in the field of AI software and hardware. Alongside the economic opportunities that AI has to offer, the Korean government wants AI to contribute greatly to solving social problems such as caring for the elderly in an ageing society, preventing crime and strengthening public safety.

How is the public debate on AI conducted in South Korea?
‘In Korea, the need for citizen participation is emphasised. We have a history of civil distrust, because for years the Korean government chose a strongly top-down way of dealing with societal issues, without involving or imparting knowledge to citizens. The attitude of the government was based on the so-called information deficit model. Korean officials thought people would just believe them when they said: "Trust us, we are the experts". But in the 1980s and 1990s, we saw democratic uprisings and public demonstrations that resulted in the removal of the authoritarian government. Since then we have developed a very strong tradition of public engagement.’

How did thinking about the societal impact of Korea’s technology policy start?
‘In 2003 the Korean government, prompted by the Nanotechnology Initiative of 2002, established a technology assessment institution. The institute is called KISTEP, the Korea Institute of Science & Technology Evaluation and Planning (ed.: more or less comparable to the Rathenau Instituut). The government also introduced a specific law that makes technology assessment mandatory.’

And how did it proceed?
‘Since 2015 – following a discussion between experts, government officials and KISTEP – we have been increasingly cultivating public participation to ensure that different opinions are heard and shared. This is very important in South Korea, because it’s a well-known fact that if you push your science and technology development too hard on people, there will be a social cost. And we’ve seen several instances of this.’

Can you give an example?
‘A powerful example, though not related to the impact of technology, was the 2008 US beef protests: a series of very peaceful, candlelit demonstrations in central Seoul. Up to a million people gathered to protest because the South Korean government wanted to allow the import of US beef, which had been halted in 2003 when mad-cow disease was detected in US beef cattle. Critics denounced the move as an attempt by the Korean government to please the US government. Eventually the import was allowed, but under restrictions. So even though our government is not always happy with public consultations, it knows they are unavoidable and that things can backfire if the public is not involved.’

South Korea wants to solve societal issues with the help of AI. What do you think about that?
‘Of course, AI shows great promise in gaining insights and potentially solving certain social problems. Yet the Korean government should remain cautious about what is really possible with AI. Engineers tend to think that the best solution will be found when you optimise data. But you have to decide which variable you want to optimise in order to have AI solve the problem, especially for problems with a complex social and behavioural component. In cases like that, it’s not just an engineering problem; it’s also about political and social choices.’

Can you name an example that underlines your concerns?
‘Recently the Korean government initiated a research project to use AI to tackle adolescent delinquency. They wanted to figure out how best to deal with teenagers who behave badly. The researchers collated all sorts of data about the teenagers, from behavioural habits such as eating, sleeping and exercise to physical data such as where they hang out. All this data was fed into an AI system, which was then asked: what can we do to lower delinquency among Korean teenagers?’

Which solution did the engineers come up with?
‘One of the options was to minimise the number of hours Korean teenagers are awake after midnight. The underlying claim was that Korean teenagers who stay up past midnight are more prone to social delinquency, breaking the law and developing bad habits. Of course, that’s a potential hypothesis, but it’s not something you can just postulate; you need really good social research to verify it. Yet, because this research project was carried out by engineers, they took this common-sense notion and tried to optimise for it with AI.’

And, has the AI solution been implemented?
‘No, fortunately not. This was just research and the results weren’t applied.’

What are you concerned about?
‘There are areas in which AI can be a great help. Take self-driving cars, for example: AI can respond much faster and more objectively in traffic than humans. But we must be aware that just because AI works very well for one issue, it doesn’t automatically mean it will work out great in a different area as well. My worry is that as AI becomes better and better at optimising things and finding solutions to complex problems, people will naturally start assuming that AI will solve every difficult problem we encounter. The idea that AI will be used to mitigate social conflicts is a scary and rather crazy one. I do hope this concern proves unfounded.’

Do you consider techno-centricity typical of South Korea?
‘Yes, it seems that turning everything into an optimisation problem may be somewhat characteristic of South Korea. I have been involved in numerous international discussions on AI ethics and policy, and as you may expect, there are several recurring themes in these discussions, such as privacy, explainability and the "leaving no one behind" principle. However, what strikes me is that none of the other 23 experts in the worldwide UNESCO group see broadening the technical mindset of engineers as a real priority. Of course, they recognise it and agree there is a tension around social issues, but they don’t seem to have an example of this kind of concern from their own country.’

Now, for something different: is South Korea a developed country in terms of AI?
‘That is a difficult question. On the one hand, South Korea is economically a developed country and quite strong in IT, especially in semiconductors and memory chips. But it is still not among the top-level AI-technology countries. Korea is positioned somewhere between the developed and the under-developed countries.’

First, the under-developed part?
‘Our country sympathises with African countries on topics such as data sovereignty. In South Korea there is also huge social controversy surrounding American tech giants such as Facebook and Google and whether they are justified in collecting user data, especially when they don’t return some of the benefits they obtain from the free access they get to it.’

And the developed part?
‘On the other hand, we have some Korean AI start-ups that are expanding their business operations to other parts of the world, notably South-east Asia, Europe and South America. There is demand for Korean cultural programmes such as movies, dramas and cartoons. Now, if UNESCO imposes international regulations, these businesses will be in the same position as Facebook or Google.’

Isn’t that kind of tricky?
‘Yes, I think Korea, like any other mid-level technology country, is resentful of these American tech giants and their way of doing business. But at the same time Korean companies want to be out there too, so if there are too many regulations it’s going to interfere with their activities.’

How do you see this reflected in Korean national policies?
‘Our government is trying to find the right balance between encouraging innovation and providing proper regulation. The interesting thing is that during UNESCO’s Asia-Pacific consultation meeting on the ethics of AI, this turned out to be exactly the same struggle for countries such as Bangladesh, Singapore, the Philippines and Japan. All these countries want a slice of the AI pie and to develop their own AI technologies, so they don’t want strict policies that could hinder innovation. But at the same time we will all benefit from international regulation that protects us against data being gathered or used in unethical ways.’

What do you feel needs to be emphasised in international discussions on AI?
‘One particular point I want to raise – and which I know can be misconstrued – is that I think it is very important we do not focus too much on the biases of AI systems. A lot of discussions revolve around this aspect. And I don’t want to deny this problem, but I strongly believe that AI can be used to learn more about ourselves and our own biases.’

How?
‘We know from numerous social psychology and neuroscience studies that human beings are incredibly biased without intending to be. Let me use the popular author Malcolm Gladwell as an example. He is an ardent advocate of developing laws against racism in America, and he is far from a racist himself. Yet when he took part in an experiment measuring real-time responses to black and white people, it turned out that, without any intention to discriminate against black people, his skin response to black people was actually stronger. This is the sort of thing we cannot really escape, because we are educated and cultured by a society that may have a certain bias in some areas.’

And finally?
‘Another important point I wish were more prominent in the discussions we have about AI is that AI needs to be much more about how humans and machines can work together. The greatest benefits of AI can only be obtained through successful human-machine collaboration. Let’s not forget that.’