iKala Future Talks

The Latest Innovations in Artificial Intelligence – Part 2

According to IDC, the artificial intelligence market is expected to break the $500 billion mark by 2024, with a five-year CAGR of 17.5%. What were the top AI innovations in 2021? As AI revolutionizes and transforms human society, how can we respond to the challenges and benefit from it?

Here are some highlights from the talk by iKala Co-founder & CEO Sega Cheng.

Privacy & Security

An online survey conducted by MIT in 2018 posed a question: if an autonomous vehicle is about to hit pedestrians, should it save the young or the old? The results revealed cultural differences between East and West. Eastern countries cluster mostly on the right of the chart, while on the left are Western countries, which emphasize independence and the importance of young people and the future of newborns. Faced with the same moral choice about AI, people in different countries hold completely different opinions. That is why AI raises so many moral issues.

Here is a more local example. Two years ago, the Taiwan Railways Administration introduced smart surveillance cameras, mainly for crime prevention, which caused a serious human-rights controversy at the time. The core technology was AI facial recognition. One group argued that facial recognition is so effective at deterring crime that it should be used unconditionally; others felt it infringed on privacy; still others felt that a balance between privacy and crime detection must be struck first before the technology can be widely adopted. What we face is the contradiction between the individual's right to privacy and the public's right to know, two things that keep generating controversy as technology develops.

Back in the industry, tracking of digital ads is widely used by advertisers, but some users feel negatively about it. Several digital giants have responded. For example, Google has promised to gradually phase out third-party cookies, new versions of Android will allow users to block the advertising ID so that advertisers cannot track specific customers, and Apple has updated its privacy policy so that the tracking ID on users' phones is blocked by default.

In this privacy-conscious world, how can we continue to develop machine learning and AI? One answer is Federated Learning. Its main concept is seeing the whole picture without looking at any individual: I can still understand what users' interests may be without identifying who each user is. In the past, we had to detect both individuals and groups. Privacy-based machine learning instead models the behavior of a group of people rather than a single person. Google was the first company to unveil the technology, and it is now using it to train some of its models. This is how AI is developed in a world where awareness of privacy rights is rising.
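The idea can be sketched in a few lines. This is a toy illustration, not Google's implementation: a hypothetical one-parameter model (y = w * x) is trained by federated averaging, where each simulated client runs gradient steps on its own private data and only the resulting model weights, never the raw data, are sent back to the server to be averaged.

```python
# Toy sketch of federated averaging (FedAvg) on a one-parameter model.
# Raw data stays on each client; only model weights are shared.

def local_update(w, data, lr=0.01, epochs=10):
    """Run a few gradient steps on one client's private data."""
    for _ in range(epochs):
        # Gradient of mean squared error for the model y = w * x.
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """One round: clients train locally, the server averages the weights."""
    local_weights = [local_update(w_global, data) for data in clients]
    return sum(local_weights) / len(local_weights)

# Three clients, each holding private samples of the relation y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (3.0, 9.0)],
    [(0.5, 1.5), (2.5, 7.5)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)

print(round(w, 2))  # converges toward 3.0
```

Real systems such as federated learning on phones add secure aggregation and differential privacy on top of this averaging loop, so the server cannot even inspect any single client's update.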

Looking at the responses of various countries, the European Union is the most active player. Following the EU's GDPR, California, Brazil, and South Africa have all proposed their own data-protection and privacy regulations. Companies that want to do cross-border online business are now adapting to the laws of multiple governments. Together, these regulations and the tech giants' responses to privacy rights are shaping the future business world.

Deep Fake

Take a look at the four pictures on the screen. Can you spot anything unusual about them? Most people see photos of a dog, a landscape, a butterfly, and a burger. In fact, the important point is that these four pictures do not exist: they were not taken by real cameras but were entirely synthesized by a computer, using technology developed by DeepMind in 2018. This highlights a very important issue, especially in 2021: seeing is no longer believing. When you see anything on the Internet, or anywhere else, you have to be skeptical about whether it is real. That is a very big impact of AI. But the same underlying technology can also be used in the business world for good.

Picaas is a technology developed by iKala, mainly for image editing. The problem it solves is simple: when we get a product image with logos or taglines on it, some covering the product itself, most of us would ask a designer to retouch the image. Now AI can help. On the left is the image before editing, and on the right is the picture retouched by Picaas. It greatly improves designers' productivity: a picture that used to take about ten minutes to process can now be handled in 2.2 seconds, leaving little work for designers. It removes much of the repetitive work in image editing.

Data Ownership

How does Tesla train its self-driving system? It is a machine-learning model that collects large amounts of data while people drive, sends the data back to Tesla's data center, and then deploys the improved model back to every car to make self-driving safer. It can be considered a decentralized training model: when you drive a Tesla, you are also training it and helping other Tesla owners. But there is an issue here: is the data Tesla's, or is it mine? This raises a lot of controversy. An even simpler example can explain why the issue is not easy to resolve.

The farmer and the beekeeper are in an interesting cooperative relationship: the farmer needs his crops to be pollinated by bees, while the beekeeper needs to sell his honey. They have different needs but depend on each other. Now the question is, in such a relationship, who pays whom? The answer is that the farmer pays the beekeeper, because the farmer depends more on him.

Why can they make such a transaction? There are two main reasons the pollination market can operate this way. First, the transaction cost is low: the beekeeper provides pollination services and gets more honey, and the farmer's crops grow better, so it is easy for them to reach a consensus. Second, ownership of the assets is clear: the beekeeper owns the bees and wants honey, while the farmer wants his crops to grow vigorously. Because ownership is clear, the transaction can be completed. In economics, this is known as the Coase Theorem, first introduced in 1960.

A more recent example that may be easier to understand is vaccination. If I am someone who should get vaccinated, should I pay others for the harm of remaining unvaccinated, or should others pay me to get vaccinated, to prevent me from causing more damage to society? For the kind of externality the Coase Theorem addresses, both arrangements can work in principle, yet controversy remains.

The problem of data is quite similar. Should enterprises be able to use data freely for the common good, or should data be treated as a personal asset that can be traded? The Tesla example shows exactly this problem of data ownership. Whether the data is yours will be an important issue in the coming years.

Explainable AI

As AI makes more and more decisions for us, many people inevitably become curious about how it makes them. The problems can be big or small. For example, when we buy books on Amazon, AI recommends books we might want to read. We may accept this with pleasure and take a look at the recommendations, without caring too much about how the AI knows our preferences. We consider it a normal customer journey, with AI helping us by recommending books. This is a relatively benign application of AI.

In other cases, such as bank loans, controversy arises, and people hope AI can explain how it reaches its decisions. In 2018, the European Union's GDPR introduced what is often described as a "right to explanation", empowering people to ask companies to explain how a machine makes decisions, especially automated ones. As you can imagine, when we humans make decisions, it is already difficult to explain the context of our own thinking, and the same is true for machines. This is the so-called "black box AI" problem. Many scholars and developers are trying to respond to governments' protection of human rights and to strike a balance between AI development and human life.
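To illustrate the contrast with a black box, here is a minimal sketch of a "glass box" alternative: a hypothetical linear credit-scoring model whose feature names and weights are invented purely for illustration. Because the model is linear, each feature's contribution to the decision (weight times value) can be reported directly alongside the decision itself.

```python
# Toy "glass box" loan model: every feature's contribution is visible.
# Feature names and weights are hypothetical, for illustration only.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    """Return the decision plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    return decision, contributions

applicant = {"income": 2.0, "debt_ratio": 1.5, "years_employed": 1.0}
decision, why = score_with_explanation(applicant)

print(decision)
# List contributions from most negative to most positive, so the
# applicant can see which factor hurt the application most.
for feature, value in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {value:+.2f}")
```

A deep neural network offers no such direct breakdown, which is why techniques that approximate per-feature attributions for black-box models are an active research area.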


Sign up to watch the latest episodes of iKala Future Talks for free.
