I am a senior researcher at Huawei Noah's Ark Lab, Beijing, where I work on deep learning, model compression, and computer vision. Before that, I received my PhD from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I received my bachelor's degree from the School of Science, Xidian University.
12/2020, two papers have been accepted by AAAI 2021.
11/2020, I accepted the invitation to serve as an Area Chair for ICML 2021.
09/2020, six papers have been accepted by NeurIPS 2020.
07/2020, one paper has been accepted by ACM MM 2020.
07/2020, one paper has been accepted by IEEE TNNLS.
07/2020, one paper has been accepted by ECCV 2020.
06/2020, two papers have been accepted by ICML 2020.
02/2020, seven papers have been accepted by CVPR 2020.
01/2020, one paper has been accepted by IEEE TNNLS.
11/2019, three papers have been accepted by AAAI 2020.
Recent Projects
Model compression is a family of techniques for developing portable deep neural networks with lower memory and computation costs. I have completed several projects at Huawei, including applications shipped on smartphones in 2019 and 2020 (e.g., the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (Discussions on Reddit).
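The core idea is to replace the multiply-accumulate at the heart of convolution with a negative L1 distance between each input patch and the filter, so the forward pass needs only additions and absolute values. Below is a minimal, unofficial PyTorch sketch of that computation; the class name Adder2d, the im2col formulation, and all shapes are my own illustration, not the project's released code.

```python
# Unofficial sketch of an adder layer: convolution's multiply-accumulate is
# replaced by a negative L1 distance between input patches and filters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adder2d(nn.Module):
    """Adder 'convolution': output = -sum |patch - weight| (additions only)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels * kernel_size * kernel_size))

    def forward(self, x):
        n, c, h, w = x.shape
        # im2col: (N, C*k*k, L), where L is the number of output positions
        patches = F.unfold(x, self.kernel_size, stride=self.stride,
                           padding=self.padding)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        # Negative L1 distance between every patch and every filter:
        # (N, 1, C*k*k, L) - (1, O, C*k*k, 1) -> (N, O, C*k*k, L) -> (N, O, L)
        out = -(patches.unsqueeze(1)
                - self.weight.view(1, -1, self.weight.shape[1], 1)).abs().sum(2)
        return out.view(n, -1, h_out, w_out)
```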
GhostNet on MindSpore: SOTA Lightweight CV Networks
The initial version of GhostNet was accepted by CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current version, we release a series of computer vision models (e.g., int8 quantization, detection, and larger networks) on MindSpore 1.0 and the Mate 30 Pro (Kirin 990).
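Roughly, a Ghost module produces a small set of "intrinsic" feature maps with an ordinary convolution and derives the remaining "ghost" maps from them with a cheap depthwise convolution. The following is a simplified PyTorch sketch of this idea, not the released MindSpore implementation; the hyperparameters (ratio, dw_size) are illustrative.

```python
# Simplified sketch of a Ghost module (not the official release).
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """With ratio=2, half the output channels come from a regular 1x1 conv
    ('intrinsic' maps); the rest are 'ghost' maps from a cheap depthwise conv."""

    def __init__(self, in_channels, out_channels, ratio=2, dw_size=3):
        super().__init__()
        intrinsic = out_channels // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, intrinsic, 1, bias=False),
            nn.BatchNorm2d(intrinsic), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, out_channels - intrinsic, dw_size,
                      padding=dw_size // 2, groups=intrinsic, bias=False),
            nn.BatchNorm2d(out_channels - intrinsic), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)                      # intrinsic feature maps
        return torch.cat([y, self.cheap(y)], 1)  # append cheap 'ghost' maps
```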
This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI Developer Kit. The original model takes about 630ms to process one image; after acceleration with our method, the latency is about 40ms.
I'm interested in developing efficient models for computer vision (e.g., classification, detection, and super-resolution) using pruning, quantization, distillation, NAS, etc.
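As one small example of these techniques, here is a minimal sketch of classic knowledge distillation (soft teacher targets blended with hard labels); the function name and hyperparameter values are illustrative, not taken from any specific paper below.

```python
# Minimal knowledge-distillation loss: soft teacher targets + hard labels.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend a soft-target KL term (at temperature T) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)  # rescale to keep gradients comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```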
Conference Papers:
Kernel Based Progressive Distillation for Adder Neural Networks
Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
NeurIPS 2020 | paper | Spotlight
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
Kai Han*, Yunhe Wang*, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
NeurIPS 2020 (* equal contribution) | paper | code
Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts
Guilin Li*, Junlei Zhang*, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin,
Wei Zhang, Jiashi Feng, Tong Zhang
NeurIPS 2020 (* equal contribution) | paper | code
Searching for Low-Bit Weights in Quantized Neural Networks
Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, Dacheng Tao, Chang Xu
NeurIPS 2020 | paper
SCOP: Scientific Control for Reliable Neural Network Pruning
Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu
NeurIPS 2020 | paper | code
Adapting Neural Architectures Between Domains
Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
NeurIPS 2020 | paper | code
Discernible Image Compression
Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian
ACM MM 2020 | paper
Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
Xinghao Chen*, Yiman Zhang*, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
ECCV 2020 (* equal contribution) | paper | code
Learning Binary Neurons with Noisy Supervision
Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Enhua Wu, Chang Xu
ICML 2020 | paper
Neural Architecture Search in a Proxy Validation Loss Landscape
Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
ICML 2020 | paper
On Positive-Unlabeled Classification in GAN
Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, Dacheng Tao
CVPR 2020 | paper
Adversarial Learning of Portable Student Networks
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2018 | paper
Beyond Filters: Compact Feature Map for Portable Deep Model
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
ICML 2017 | paper | code | supplement
Beyond RPCA: Flattening Complex Noise in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI 2017 | paper
Privileged Multi-Label Learning
Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
IJCAI 2017 | paper
CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
NeurIPS 2016 | paper | supplement
Journal Papers:
Adversarial Recurrent Time Series Imputation
Shuo Yang, Minjing Dong, Yunhe Wang, Chang Xu
IEEE TNNLS 2020 | to appear
Learning Student Networks via Feature Embedding
Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TNNLS 2020 | paper
Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TPAMI 2018 | paper
DCT Regularized Extreme Visual Recovery
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
IEEE TIP 2017 | paper
DCT Inspired Feature Transform for Image Retrieval and Reconstruction
Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
IEEE TIP 2016 | paper