
China Focus: Chinese university aims to bring trust, resilience to next-generation AI

Source: Xinhua | 2019-05-14 14:08:52 | Editor: mingmei

BEIJING, May 14 (Xinhua) -- From voice assistants to face recognition, and from defeating master Go players to crushing professional gamers in the strategy game StarCraft, the world has witnessed exciting progress in the development of artificial intelligence (AI).

As AI is applied to higher-stakes functions, such as self-driving cars, automated surgical assistants, hedge fund management and power grid control, how can we ensure it is trustworthy?

China's prestigious Tsinghua University has announced it will step up basic research on third-generation AI, in the hope of building trust in AI models and preventing their abuse and malicious use.

Zhang Bo, director of the Tsinghua Institute for Artificial Intelligence and an academician of the Chinese Academy of Sciences, unveiled the plan on Monday at the opening of the institute's Center for Fundamental Theories.

Tsinghua researchers have been discussing the future of AI since 2014 and expect it to enter the third stage of its development in the coming years, said Zhang.

First-generation AI was driven by the knowledge researchers themselves possessed: they tried to supply the model with clear logical rules. These systems could solve well-defined problems but were incapable of learning.

In the second generation, AI started to learn. A machine learning system is trained on one data set and then tested on another, gradually becoming more accurate and efficient.
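
As a rough illustration (not from the article), this train-then-test loop might look like the following Python sketch; the scikit-learn library and its bundled digits data set are stand-ins chosen for brevity:

    # Minimal sketch of second-generation "learning": fit on one data set,
    # then measure accuracy on data the system has never seen before.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)                        # labelled examples
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)                    # hold out a test set

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                                 # training phase
    print("held-out accuracy:", model.score(X_test, y_test))    # testing phase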

Zhang said the second generation's weaknesses lie in its limited explainability and robustness.

AI robustness refers to maintaining acceptably high performance even in worst-case scenarios.

Although AI has already outperformed humans in certain areas such as image recognition, nobody fully understands why these systems perform so well.

Machine learning and deep learning, the most prominent AI branches of recent years, suffer from the so-called "AI black box" problem: people find it hard to interpret AI-based decisions and cannot predict when or how a model will fail.

Meanwhile, even accurate AI models can be vulnerable to "adversarial attacks," in which subtle changes are introduced into input data to manipulate the AI's "reasoning."

For instance, an AI system might mistake a sloth for a racing car if imperceptible changes are made to a photo of the sloth.
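
As a hedged sketch of how such a perturbation can be crafted (an illustration, not a description of Tsinghua's work), the fast gradient sign method is one widely cited recipe; the model, image and label below are hypothetical placeholders:

    # Fast gradient sign method (FGSM), written against PyTorch.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Nudge each pixel slightly in the direction that raises the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)     # how "wrong" the model is now
        loss.backward()                                 # gradient of the loss w.r.t. pixels
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()           # keep pixel values in a valid range

A change of this size is typically invisible to a person, yet it can be enough to flip the model's prediction.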

Researchers therefore need to improve and verify the robustness of AI models, leaving no room for adversarial examples or attacks to manipulate their results.

If AI technologies are to be deployed in security-sensitive or safety-critical scenarios, the next generation needs to be comprehensible and more robust, said Zhang.

Zhu Jun, director of the new center, said it will carry out interdisciplinary studies and expects to attract talent from around the world, providing them with a relaxed academic environment.

He said Tsinghua University plans to host a high-level, fully open AI meeting every year.

"If anything helps innovation, we'll give it a try," said Zhu.

"It's hard to predict the progress of research on fundamental theories. It could be explosive and trail-blazing."
