I am a senior researcher at Huawei Noah's Ark Lab, Beijing, where I work on deep learning, model compression, and computer vision. Before that, I received my PhD from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I received my bachelor's degree from the School of Science, Xidian University.
12/2020, two papers have been accepted by AAAI 2021.
11/2020, I accepted the invitation to serve as an Area Chair for ICML 2021.
09/2020, six papers have been accepted by NeurIPS 2020.
07/2020, one paper has been accepted by ACM MM 2020.
07/2020, one paper has been accepted by IEEE TNNLS.
07/2020, one paper has been accepted by ECCV 2020.
06/2020, two papers have been accepted by ICML 2020.
02/2020, seven papers have been accepted by CVPR 2020.
01/2020, one paper has been accepted by IEEE TNNLS.
Recent Projects
Model compression is a family of techniques for developing portable deep neural networks with lower memory and computation costs. I have completed several projects at Huawei in 2019 and 2020, including applications shipped on smartphones (e.g., the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (see the discussions on Reddit).
I would like to say, AdderNet is very cool! The initial idea came up around 2017 while climbing with some friends in Beijing. By replacing all convolutional layers (except the first and the last), we can now obtain comparable performance with ResNet architectures. In addition, to make the story more complete, we recently released a hardware implementation and some quantization methods. The results are quite encouraging: we can significantly reduce both the energy consumption and the circuit area without affecting performance. We are now working on more applications that reduce the cost of deploying AI algorithms, such as low-level vision, detection, and NLP tasks.
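As a rough illustration of the core idea, the sketch below replaces the multiply-accumulate of a convolution with a negative L1 distance between the input patch and the filter, so each output needs only additions and subtractions. This is a minimal single-channel toy (function name, shapes, and the naive loop are my own for illustration, not from the released AdderNet code):

```python
import numpy as np

def adder2d_naive(x, w):
    """Toy 'adder' layer: each output is the negative L1 distance
    between an input patch and the filter, computed with additions
    and subtractions only (no multiplications).

    x: (H, W) single-channel input; w: (k, k) filter.
    """
    k = w.shape[0]
    H, W = x.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            # Similar patches give outputs close to 0; dissimilar
            # patches give large negative values.
            out[i, j] = -np.abs(patch - w).sum()
    return out
```

In practice the real models implement this as a batched, multi-channel layer with custom backward passes, but the measure of patch-filter similarity is the same addition-only idea.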
GhostNet on MindSpore: SOTA Lightweight CV Networks
The initial version of GhostNet was accepted by CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current version, we release a series of computer vision models (e.g., int8 quantization, detection, and larger networks) on MindSpore 1.0 and the Mate 30 Pro (Kirin 990).
This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI developer kit. The original model takes about 630 ms to process one image; after accelerating it with our method, the latency is about 40 ms.
I'm interested in developing efficient models for computer vision (e.g., classification, detection, and super-resolution) using pruning, quantization, distillation, NAS, etc.
Conference Papers:
Distilling Object Detectors via Decoupled Features
Jianyuan Guo, Kai Han, Yunhe Wang, Wei Zhang, Chunjing Xu, Chang Xu
CVPR 2021
HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens
Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang,
Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu
CVPR 2021 | paper
Data-Free Knowledge Distillation For Image Super-Resolution
Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang
CVPR 2021
Positive-Unlabeled Data Purification in the Wild for Object Detection
Jianyuan Guo, Kai Han, Han Wu, Xinghao Chen, Chao Zhang, Chunjing Xu, Chang Xu, Yunhe Wang
CVPR 2021
One-shot Graph Neural Architecture Search with Dynamic Search Space
Yanxi Li, Zean Wen, Yunhe Wang, Chang Xu
AAAI 2021
Adversarial Learning of Portable Student Networks
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2018 | paper
Beyond Filters: Compact Feature Map for Portable Deep Model
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
ICML 2017 | paper | code | supplement
Beyond RPCA: Flattening Complex Noise in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2017 | paper
Privileged Multi-Label Learning
Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
IJCAI 2017 | paper
CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
NeurIPS 2016 | paper | supplement
Journal Papers:
Adversarial Recurrent Time Series Imputation
Shuo Yang, Minjing Dong, Yunhe Wang, Chang Xu
IEEE TNNLS 2020 | paper
Learning Student Networks via Feature Embedding
Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TNNLS 2020 | paper
Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TPAMI 2018 | paper
DCT Regularized Extreme Visual Recovery
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
IEEE TIP 2017 | paper
DCT Inspired Feature Transform for Image Retrieval and Reconstruction
Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
IEEE TIP 2016 | paper