
Yunhe Wang

I am a senior researcher at Huawei Noah's Ark Lab, Beijing, where I work on deep learning, model compression, and computer vision. Before that, I received my PhD from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I received my bachelor's degree from the School of Science, Xidian University.

Email  /  Google Scholar  /  Zhihu

News

  • 03/2021, nine papers have been accepted by CVPR 2021.
  • 01/2021, I will give a talk about AdderNet at the HAET workshop at ICLR 2021.
  • 12/2020, two papers have been accepted by AAAI 2021.
  • 11/2020, I accepted the invitation to serve as an Area Chair for ICML 2021.
  • 09/2020, six papers have been accepted by NeurIPS 2020.
  • 07/2020, one paper has been accepted by ACM MM 2020.
  • 07/2020, one paper has been accepted by IEEE TNNLS.
  • 07/2020, one paper has been accepted by ECCV 2020.
  • 06/2020, two papers have been accepted by ICML 2020.
  • 02/2020, seven papers have been accepted by CVPR 2020.
  • 01/2020, one paper has been accepted by IEEE TNNLS.

Recent Projects

    Model compression is a family of techniques for developing portable deep neural networks with lower memory and computation costs. At Huawei, I have delivered several projects, including smartphone applications shipped in 2019 and 2020 (e.g. the Mate 30 and Honor V30). Currently, I am leading the AdderNet project, which aims to develop a series of deep learning models that use only additions (Discussions on Reddit).

  • Adder Neural Networks
  • Project Page | Hardware Implementation

    I would like to say that AdderNet is very cool! The initial idea came up around 2017 while climbing with some friends in Beijing. By replacing all convolutional layers (except the first and the last) with adder layers, we can now obtain comparable performance on ResNet architectures. To make the story more complete, we recently released the hardware implementation and some quantization methods. The results are quite encouraging: we can reduce both the energy consumption and the circuit area significantly without affecting the performance. We are now working on more applications, such as low-level vision, detection, and NLP tasks, to further reduce the cost of deploying AI algorithms. A small sketch of the adder operation is given below.
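
    Below is a minimal, illustrative PyTorch-style sketch of the adder operation, i.e. replacing the convolution's cross-correlation with a negative L1 distance between input patches and filters. The function name and hyper-parameters are my own placeholders, and a practical implementation would also need the specialized gradients and learning-rate scaling described in the paper.

    import torch
    import torch.nn.functional as F

    def adder2d(x, weight, stride=1, padding=0):
        """Adder-layer forward pass: each filter's response is the negative
        L1 distance to the input patch, so only additions, subtractions,
        and absolute values are used (no multiplications)."""
        n, c_in, h, w = x.shape
        c_out, _, k, _ = weight.shape
        # Extract sliding patches: (n, c_in*k*k, L) where L = out_h*out_w.
        patches = F.unfold(x, kernel_size=k, stride=stride, padding=padding)
        w_flat = weight.view(c_out, -1)  # (c_out, c_in*k*k)
        # |patch - filter| summed over the patch dimension -> (n, c_out, L).
        dist = (patches.unsqueeze(1) - w_flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
        out_h = (h + 2 * padding - k) // stride + 1
        out_w = (w + 2 * padding - k) // stride + 1
        return -dist.view(n, c_out, out_h, out_w)

    # Toy usage: a 3x3 adder layer with 32 filters on a 16-channel input.
    x = torch.randn(1, 16, 32, 32)
    w = torch.randn(32, 16, 3, 3)
    y = adder2d(x, w, stride=1, padding=1)   # shape (1, 32, 32, 32)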

  • GhostNet on MindSpore: SOTA Lightweight CV Networks
  • Huawei Connect (HC) 2020 | MindSpore Hub

    The initial version of GhostNet was accepted by CVPR 2020 and achieved SOTA performance on ImageNet: 75.7% top-1 accuracy with only 226M FLOPs. In the current release, we provide a series of computer vision models (e.g. int8 quantization, detection, and larger networks) on MindSpore 1.0 and the Mate 30 Pro (Kirin 990). The core Ghost module is sketched below.
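
    The sketch below illustrates the core Ghost module idea: a small primary convolution produces a few intrinsic feature maps, and cheap depthwise convolutions generate the remaining "ghost" maps from them. It is written in PyTorch purely for illustration; the class name and hyper-parameters are placeholders, not the released MindSpore configuration.

    import math
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
            super().__init__()
            init_ch = math.ceil(out_ch / ratio)   # intrinsic feature maps
            new_ch = init_ch * (ratio - 1)        # cheap "ghost" feature maps
            # Primary (ordinary) convolution producing the intrinsic maps.
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, init_ch, kernel_size,
                          padding=kernel_size // 2, bias=False),
                nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
            # Cheap depthwise convolutions generating the ghost maps.
            self.cheap = nn.Sequential(
                nn.Conv2d(init_ch, new_ch, dw_size, padding=dw_size // 2,
                          groups=init_ch, bias=False),
                nn.BatchNorm2d(new_ch), nn.ReLU(inplace=True))
            self.out_ch = out_ch

        def forward(self, x):
            x1 = self.primary(x)
            x2 = self.cheap(x1)
            return torch.cat([x1, x2], dim=1)[:, :self.out_ch]

    # Toy usage: 16 -> 32 channels with half of the maps generated cheaply.
    y = GhostModule(16, 32)(torch.randn(1, 16, 24, 24))   # shape (1, 32, 24, 24)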

  • AI on Ascend: Real-Time Video Style Transfer
  • Huawei Developer Conference (HDC) 2020 | Online Demo

    This project aims to develop a video style transfer system on the Huawei Atlas 200 DK AI Developer Kit. The latency of the original model for processing one image is about 630 ms; after accelerating it with our method, the latency is now about 40 ms.

Talks

  • 06/2020, "AI on the Edge - Discussion on the Gap Between Industry and Academia" at the VALSE Webinar.
  • 05/2020, "Edge AI: Progress and Future Directions" at QbitAI via Bilibili.

Research

    I'm interested in developing efficient models for computer vision tasks (e.g. classification, detection, and super-resolution) using pruning, quantization, distillation, NAS, etc.; a small distillation example is sketched below.
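
    As a concrete (and deliberately generic) example of one of these techniques, here is a minimal knowledge-distillation loss in PyTorch: the student is trained to match the teacher's softened predictions as well as the ground-truth labels. The temperature and weighting below are arbitrary placeholders, not values from any specific paper of mine.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soft targets: KL divergence between temperature-softened distributions.
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        # Hard targets: standard cross-entropy with the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random logits for a 10-class problem.
    s, t = torch.randn(8, 10), torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(s, t, labels)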

    Conference Papers:

    1. Distilling Object Detectors via Decoupled Features
      Jianyuan Guo, Kai Han, Yunhe Wang, Wei Zhang, Chunjing Xu, Chang Xu
      CVPR 2021

    2. HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens
      Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang,
      Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu
      CVPR 2021 | paper

    3. Manifold Regularized Dynamic Network Pruning
      Yehui Tang, Yunhe Wang, Yixing Xu, Yiping Deng, Chao Xu, Dacheng Tao, Chang Xu
      CVPR 2021

    4. Learning Student Networks in the Wild
      Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang
      CVPR 2021

    5. AdderSR: Towards Energy Efficient Image Super-Resolution
      Dehua Song*, Yunhe Wang*, Hanting Chen, Chang Xu, Chunjing Xu, Dacheng Tao
      CVPR 2021 (* equal contribution) | paper | code | Oral Presentation

    6. ReNAS: Relativistic Evaluation of Neural Architecture Search
      Yixing Xu, Yunhe Wang, Kai Han, Yehui Tang, Shangling Jui, Chunjing Xu, Chang Xu
      CVPR 2021 | paper | Oral Presentation

    7. Pre-Trained Image Processing Transformer
      Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu,
      Siwei Ma, Chunjing Xu, Chao Xu, Wen Gao
      CVPR 2021 | paper

    8. Data-Free Knowledge Distillation For Image Super-Resolution
      Yiman Zhang, Hanting Chen, Xinghao Chen, Yiping Deng, Chunjing Xu, Yunhe Wang
      CVPR 2021

    9. Positive-Unlabeled Data Purification in the Wild for Object Detection
      Jianyuan Guo, Kai Han, Han Wu, Xinghao Chen, Chao Zhang, Chunjing Xu, Chang Xu, Yunhe Wang
      CVPR 2021

    10. One-shot Graph Neural Architecture Search with Dynamic Search Space
      Yanxi Li, Zean Wen, Yunhe Wang, Chang Xu
      AAAI 2021

    11. Adversarial Robustness through Disentangled Representations
      Shuo Yang, Tianyu Guo, Yunhe Wang, Chang Xu
      AAAI 2021

    12. Kernel Based Progressive Distillation for Adder Neural Networks
      Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing Xu, Yunhe Wang
      NeurIPS 2020 | paper | Spotlight | code

    13. Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
      Kai Han*, Yunhe Wang*, Qiulin Zhang, Wei Zhang, Chunjing Xu, Tong Zhang
      NeurIPS 2020 (* equal contribution) | paper | code

    14. Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts
      Guilin Li*, Junlei Zhang*, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin,
      Wei Zhang, Jiashi Feng, Tong Zhang
      NeurIPS 2020 (* equal contribution) | paper | code

    15. Searching for Low-Bit Weights in Quantized Neural Networks
      Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, Dacheng Tao, Chang Xu
      NeurIPS 2020 | paper | code

    16. SCOP: Scientific Control for Reliable Neural Network Pruning
      Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu
      NeurIPS 2020 | paper | code

    17. Adapting Neural Architectures Between Domains
      Yanxi Li, Zhaohui Yang, Yunhe Wang, Chang Xu
      NeurIPS 2020 | paper | code

    18. Discernible Image Compression
      Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian
      ACM MM 2020 | paper

    19. Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
      Xinghao Chen*, Yiman Zhang*, Yunhe Wang, Han Shu, Chunjing Xu, Chang Xu
      ECCV 2020 (* equal contribution) | paper | code

    20. Learning Binary Neurons with Noisy Supervision
      Kai Han, Yunhe Wang, Yixing Xu, Chunjing Xu, Enhua Wu, Chang Xu
      ICML 2020 | paper

    21. Neural Architecture Search in a Proxy Validation Loss Landscape
      Yanxi Li, Minjing Dong, Yunhe Wang, Chang Xu
      ICML 2020 | paper

    22. On Positive-Unlabeled Classification in GAN
      Tianyu Guo, Chang Xu, Jiajun Huang, Yunhe Wang, Boxin Shi, Chao Xu, Dacheng Tao
      CVPR 2020 | paper

    23. CARS: Continuous Evolution for Efficient Neural Architecture Search
      Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper | code

    24. AdderNet: Do We Really Need Multiplications in Deep Learning?
      Hanting Chen*, Yunhe Wang*, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
      CVPR 2020 (* equal contribution) | paper | code | Oral Presentation

    25. A Semi-Supervised Assessor of Neural Architectures
      Yehui Tang, Yunhe Wang, Yixing Xu, Hanting Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper

    26. Hit-Detector: Hierarchical Trinity Architecture Search for Object Detection
      Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, Chang Xu
      CVPR 2020 | paper | code

    27. Frequency Domain Compact 3D Convolutional Neural Networks
      Hanting Chen, Yunhe Wang, Han Shu, Yehui Tang, Chunjing Xu, Boxin Shi, Chao Xu, Qi Tian, Chang Xu
      CVPR 2020 | paper

    28. GhostNet: More Features from Cheap Operations
      Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
      CVPR 2020 | paper | code

    29. Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks
      Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
      AAAI 2020 | paper | code

    30. DropNAS: Grouped Operation Dropout for Differentiable Architecture Search
      Weijun Hong, Guilin Li, Weinan Zhang, Ruiming Tang, Yunhe Wang, Zhenguo Li, Yong Yu
      IJCAI 2020 | paper

    31. Distilling Portable Generative Adversarial Networks for Image Translation
      Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
      AAAI 2020 | paper

    32. Efficient Residual Dense Block Search for Image Super-Resolution
      Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
      AAAI 2020 | paper | code

    33. Positive-Unlabeled Compression on the Cloud
      Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, Chang Xu
      NeurIPS 2019 | paper | code | supplement

    34. Data-Free Learning of Student Networks
      Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi,
      Chunjing Xu, Chao Xu, Qi Tian
      ICCV 2019 | paper | code

    35. Co-Evolutionary Compression for Unpaired Image Translation
      Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Qi Tian, Chang Xu
      ICCV 2019 | paper | code

    36. Searching for Accurate Binary Neural Architectures
      Mingzhu Shen, Kai Han, Chunjing Xu, Yunhe Wang
      ICCV Neural Architectures Workshop 2019 | paper

    37. LegoNet: Efficient Convolutional Neural Networks with Lego Filters
      Zhaohui Yang, Yunhe Wang, Hanting Chen, Chuanjian Liu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
      ICML 2019 | paper | code

    38. Learning Instance-wise Sparsity for Accelerating Deep Models
      Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu
      IJCAI 2019 | paper

    39. Attribute Aware Pooling for Pedestrian Attribute Recognition
      Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu
      IJCAI 2019 | paper

    40. Crafting Efficient Neural Graph of Large Entropy
      Minjing Dong, Hanting Chen, Yunhe Wang, Chang Xu
      IJCAI 2019 | paper

    41. Low Resolution Visual Recognition via Deep Feature Distillation
      Mingjian Zhu, Kai Han, Chao Zhang, Jinlong Lin, Yunhe Wang
      ICASSP 2019 | paper

    42. Learning Versatile Filters for Efficient Convolutional Neural Networks
      Yunhe Wang, Chang Xu, Chunjing Xu, Chao Xu, Dacheng Tao
      NeurIPS 2018 | paper | code | supplement

    43. Towards Evolutionary Compression
      Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, Dacheng Tao
      SIGKDD 2018 | paper

    44. Autoencoder Inspired Unsupervised Feature Selection
      Kai Han, Yunhe Wang, Chao Zhang, Chao Li, Chao Xu
      ICASSP 2018 | paper | code

    45. Adversarial Learning of Portable Student Networks
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      AAAI 2018 | paper

    46. Beyond Filters: Compact Feature Map for Portable Deep Model
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      ICML 2017 | paper | code | supplement

    47. Beyond RPCA: Flattening Complex Noise in the Frequency Domain
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      AAAI 2017 | paper

    48. Privileged Multi-Label Learning
      Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
      IJCAI 2017 | paper

    49. CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
      Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
      NeurIPS 2016 | paper | supplement

    Journal Papers:

    1. Adversarial Recurrent Time Series Imputation
      Shuo Yang, Minjing Dong, Yunhe Wang, Chang Xu
      IEEE TNNLS 2020 | paper

    2. Learning Student Networks via Feature Embedding
      Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      IEEE TNNLS 2020 | paper

    3. Packing Convolutional Neural Networks in the Frequency Domain
      Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
      IEEE TPAMI 2018 | paper

    4. DCT Regularized Extreme Visual Recovery
      Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
      IEEE TIP 2017 | paper

    5. DCT Inspired Feature Transform for Image Retrieval and Reconstruction
      Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
      IEEE TIP 2016 | paper

Services

  • Area Chair of ICML 2021.

  • Senior Program Committee Member of IJCAI 2021, IJCAI 2020, and IJCAI 2019.

  • Journal Reviewer for IEEE T-PAMI, IJCV, IEEE T-IP, IEEE T-NNLS, IEEE T-MM, IEEE T-KDE, etc.

  • Program Committee Member of ICCV 2021, AAAI 2021, ICLR 2021, NeurIPS 2020, ICML 2020, ECCV 2020, CVPR 2020, ICLR 2020, AAAI 2020, ICCV 2019, CVPR 2019, ICLR 2019, AAAI 2019, IJCAI 2018, AAAI 2018, NeurIPS 2018, etc.

Awards

  • 2020, Nomination for Outstanding Youth Paper Award, WAIC

  • 2017, Google PhD Fellowship

  • 2017, Baidu Scholarship

  • 2017, President's PhD Scholarship, Peking University

  • 2017, National Scholarship for Graduate Students

  • 2016, National Scholarship for Graduate Students
