Maintained by Difan Deng and Marius Lindauer.
The following list considers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field is still lagging behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or directly jump to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on neural architecture search for Transformer-based search spaces, the awesome-transformer-search repo is all you need.
2021
Wang, Linnan; Xie, Saining; Li, Teng; Fonseca, Rodrigo; Tian, Yuandong
Sample-Efficient Neural Architecture Search by Learning Actions for Monte Carlo Tree Search Journal Article
In: IEEE transactions on pattern analysis and machine intelligence, vol. PP, 2021, ISSN: 0162-8828.
@article{PMID:33826511,
title = {Sample-Efficient Neural Architecture Search by Learning Actions for Monte Carlo Tree Search},
author = {Linnan Wang and Saining Xie and Teng Li and Rodrigo Fonseca and Yuandong Tian},
url = {https://doi.org/10.1109/TPAMI.2021.3071343},
doi = {10.1109/tpami.2021.3071343},
issn = {0162-8828},
year = {2021},
date = {2021-04-01},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
abstract = {Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing MCTS based NAS approaches often utilize manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy), leading to sample-inefficient explorations of architectures. To improve the sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS), which learns actions to recursively partition the search space into good or bad regions that contain networks with similar performance metrics. During the search phase, as different action sequences lead to regions with different performance, the search efficiency can be significantly improved by biasing towards the good regions. On three NAS tasks, empirical results demonstrate that LaNAS is at least an order more sample efficient than baseline methods including evolutionary algorithms, Bayesian optimizations, and random search. When applied in practice, both one-shot and regular LaNAS consistently outperform existing results. Particularly, LaNAS achieves 99.0% accuracy on CIFAR-10 and 80.8% top1 accuracy at 600 MFLOPS on ImageNet in only 800 samples, significantly outperforming AmoebaNet with 33x fewer samples. Our code is publicly available at https://github.com/facebookresearch/LaMCTS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
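The abstract describes learning latent actions that recursively separate good architectures from bad ones, then biasing sampling toward the good region. Below is a toy sketch of that idea, assuming a binary architecture encoding and a synthetic stand-in objective; everything here, including evaluate(), is hypothetical and not the authors' LaMCTS code.

```python
# Toy sketch of LaNAS-style space partitioning (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # length of the hypothetical binary architecture encoding

def evaluate(arch):
    # Stand-in for "train the architecture and return validation accuracy".
    return float(arch @ np.linspace(1.0, 2.0, DIM)) + rng.normal(scale=0.1)

history = [(a, evaluate(a)) for a in rng.integers(0, 2, size=(32, DIM))]

for _ in range(20):
    X = np.array([a for a, _ in history], dtype=float)
    y = np.array([s for _, s in history])
    # Learn a linear "latent action": a hyperplane separating good from bad.
    w, *_ = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)
    # Bias new samples toward the predicted-good side of the hyperplane.
    cands = rng.integers(0, 2, size=(256, DIM))
    good = cands[(cands - X.mean(0)) @ w > 0]
    pick = good[0] if len(good) else cands[0]
    history.append((pick, evaluate(pick)))

print("best sampled score:", max(s for _, s in history))
```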
Zhang, Zhentong; Shan, Yugang; Yuan, Jie
Multi-Level Cell Progressive Differentiable Architecture Search to Improve Image Classification Accuracy Journal Article
In: Journal of Signal Processing Systems, 2021.
@article{ZhangJSPS2021,
title = {Multi-Level Cell Progressive Differentiable Architecture Search to Improve Image Classification Accuracy},
author = {Zhentong Zhang and Yugang Shan and Jie Yuan},
url = {https://doi.org/10.1007/s11265-021-01647-1},
year = {2021},
date = {2021-03-08},
journal = {Journal of Signal Processing Systems},
abstract = {In recent years, the neural architecture search has continuously made significant progress in the field of image recognition. Among them, the differentiable method has obvious advantages compared with other search methods in terms of computational cost and accuracy to deal with image classification. However, the differentiable method is usually composed of single cell, which cannot efficiently extract the features of the network. In response to this problem, we propose a multi-level cell progressive differentiable method which allows cells to have different types according to the levels of the network. In differentiable method, the gap between the search network and the evaluation one is large, and the correlation is low. We design an algorithm to improve the distribution of architecture parameters. We also optimize the loss function and use the regularization method of additional action to improve deep network performance. The method achieves good search and classification results on CIFAR10 and ImageNet (mobile setting).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zheng, X; Ji, R; Chen, Y; Wang, Q; Zhang, B; Ye, Q; Chen, J; Huang, F; Tian, Y
MIGO-NAS: Towards Fast and Generalizable Neural Architecture Search Journal Article
In: IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 01, pp. 1-1, 2021, ISSN: 1939-3539.
@article{9377468,
title = {MIGO-NAS: Towards Fast and Generalizable Neural Architecture Search},
author = {X Zheng and R Ji and Y Chen and Q Wang and B Zhang and Q Ye and J Chen and F Huang and Y Tian},
url = {https://www.computer.org/csdl/journal/tp/5555/01/09377468/1rUNdbz4LQY},
doi = {10.1109/TPAMI.2021.3065138},
issn = {1939-3539},
year = {2021},
date = {2021-03-01},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
number = {01},
pages = {1-1},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zimmer, Lucas; Lindauer, Marius; Hutter, Frank
Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL Journal Article
In: IEEE transactions on pattern analysis and machine intelligence, vol. PP, 2021, ISSN: 0162-8828.
@article{PMID:33750687,
title = {Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL},
author = {Lucas Zimmer and Marius Lindauer and Frank Hutter},
url = {https://doi.org/10.1109/TPAMI.2021.3067763},
doi = {10.1109/tpami.2021.3067763},
issn = {0162-8828},
year = {2021},
date = {2021-03-01},
journal = {IEEE transactions on pattern analysis and machine intelligence},
volume = {PP},
abstract = {While early AutoML frameworks focused on optimizing traditional ML pipelines and their hyperparameters, a recent trend in AutoML is to focus on neural architecture search. In this paper, we introduce Auto-PyTorch, which brings the best of these two worlds together by jointly and robustly optimizing the architecture of networks and the training hyperparameters to enable fully automated deep learning (AutoDL). Auto-PyTorch achieves state-of-the-art performance on several tabular benchmarks by combining multi-fidelity optimization with portfolio construction for warmstarting and ensembling of deep neural networks (DNNs) and common baselines for tabular data. To thoroughly study our assumptions on how to design such an AutoDL system, we additionally introduce a new benchmark on learning curves for DNNs, dubbed LCBench, and run extensive ablation studies of the full Auto-PyTorch on typical AutoML benchmarks, eventually showing that Auto-PyTorch performs better than several state-of-the-art competitors on average.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
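The key ingredient Auto-PyTorch combines with portfolio warmstarting is multi-fidelity optimization. The sketch below shows plain successive halving, the simplest multi-fidelity scheme; train() and the "lr" hyperparameter are hypothetical stand-ins, and this is not Auto-PyTorch's actual API.

```python
# Minimal successive-halving sketch (illustrative of multi-fidelity search).
import random

def train(config, budget):
    # Stand-in: validation error shrinks with budget and depends on "lr".
    return abs(config["lr"] - 0.01) + 1.0 / budget

configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(27)]
budget = 1
while len(configs) > 1:
    errs = sorted(((train(c, budget), c) for c in configs), key=lambda t: t[0])
    configs = [c for _, c in errs[: max(1, len(configs) // 3)]]  # keep top third
    budget *= 3  # survivors get three times the training budget

print("selected config:", configs[0])
```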
Liu, Lanlan; Zhang, Yuting; Deng, Jia; Soatto, Stefano
Dynamically Grown Generative Adversarial Networks Proceedings Article
In: AAAI 2021, 2021.
@inproceedings{LiuAAAI2021,
title = {Dynamically Grown Generative Adversarial Networks},
author = {Lanlan Liu and Yuting Zhang and Jia Deng and Stefano Soatto},
url = {https://www.aaai.org/AAAI21Papers/AAAI-1376.LiuL.pdf},
year = {2021},
date = {2021-02-02},
booktitle = {AAAI 2021},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Xu, Y; Xie, L; Dai, W; Zhang, X; Chen, X; Qi, G; Xiong, H; Tian, Q
Partially-Connected Neural Architecture Search for Reduced Computational Redundancy Journal Article
In: IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 01, pp. 1-1, 2021, ISSN: 1939-3539.
@article{9354953,
title = {Partially-Connected Neural Architecture Search for Reduced Computational Redundancy},
author = {Y Xu and L Xie and W Dai and X Zhang and X Chen and G Qi and H Xiong and Q Tian},
url = {https://www.computer.org/csdl/journal/tp/5555/01/09354953/1rgCccYlOaQ},
doi = {10.1109/TPAMI.2021.3059510},
issn = {1939-3539},
year = {2021},
date = {2021-02-01},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
number = {01},
pages = {1-1},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Differentiable architecture search (DARTS) enables effective neural architecture search (NAS) using gradient descent, but suffers from high memory and computational costs. In this paper, we propose a novel approach, namely Partially-Connected DARTS (PC-DARTS), to achieve efficient and stable neural architecture search by reducing the channel and spatial redundancies of the super-network. In the channel level, partial channel connection is presented to randomly sample a small subset of channels for operation selection to accelerate the search process and suppress the over-fitting of the super-network. Side operation is introduced for bypassing (non-sampled) channels to guarantee the performance of searched architectures under extremely low sampling rates. In the spatial level, input features are down-sampled to eliminate spatial redundancy and enhance the efficiency of the mixed computation for operation selection. Furthermore, edge normalization is developed to maintain the consistency of edge selection based on channel sampling with the architectural parameters for edges. Experimental results demonstrate that the proposed approach achieves higher search speed and training stability than DARTS. PC-DARTS obtains a top-1 error rate of 2.55% on CIFAR-10 with 0.07 GPU-days for architecture search, and a state-of-the-art top-1 error rate of 24.1% on ImageNet (under the mobile setting) within 2.8 GPU-day.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
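The partial channel connection described in the abstract sends only 1/K of the channels through the candidate operations and bypasses the rest, followed by a channel shuffle. Below is a simplified single-edge PyTorch sketch with a toy operation set; it assumes fixed rather than random channel sampling and omits edge normalization, so it is a reduced illustration rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class PartialMixedOp(nn.Module):
    """One mixed edge where only channels // k channels enter the op search."""
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        sampled = channels // k
        self.ops = nn.ModuleList([
            nn.Conv2d(sampled, sampled, 3, padding=1, bias=False),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x):
        c = x.size(1) // self.k
        xa, xb = x[:, :c], x[:, c:]               # 1/k of channels are searched
        weights = torch.softmax(self.alpha, 0)
        mixed = sum(w * op(xa) for w, op in zip(weights, self.ops))
        out = torch.cat([mixed, xb], dim=1)       # the rest bypass untouched
        n, ch, h, wd = out.shape                  # channel shuffle across groups
        return out.view(n, self.k, ch // self.k, h, wd).transpose(1, 2).reshape(n, ch, h, wd)

x = torch.randn(2, 16, 8, 8)
print(PartialMixedOp(16)(x).shape)  # torch.Size([2, 16, 8, 8])
```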
Hao, Jie; Zhu, William
Architecture self-attention mechanism: nonlinear optimization for neural architecture search Journal Article
In: Journal of Nonlinear and Variational Analysis, vol. 5, pp. 119-140, 2021.
@article{Hao2021,
title = {Architecture self-attention mechanism: nonlinear optimization for neural architecture search},
author = {Jie Hao and William Zhu},
url = {http://jnva.biemdas.com/issues/JNVA2021-1-8.pdf},
doi = {10.23952/jnva.5.2021.1.08},
year = {2021},
date = {2021-02-01},
journal = {Journal of Nonlinear and Variational Analysis},
volume = {5},
pages = {119-140},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gao, Yanjie; Gu, Xianyu; Zhang, Hongyu; Lin, Haoxiang; Yang, Mao
Runtime Performance Prediction for Deep Learning Models with Graph Neural Network Technical Report
Microsoft no. MSR-TR-2021-3, 2021.
@techreport{gao2021runtime,
title = {Runtime Performance Prediction for Deep Learning Models with Graph Neural Network},
author = {Yanjie Gao and Xianyu Gu and Hongyu Zhang and Haoxiang Lin and Mao Yang},
url = {https://www.microsoft.com/en-us/research/publication/runtime-performance-prediction-for-deep-learning-models-with-graph-neural-network/},
year = {2021},
date = {2021-02-01},
urldate = {2021-02-01},
number = {MSR-TR-2021-3},
institution = {Microsoft},
abstract = {Recently, deep learning (DL) has been widely adopted in many application domains. Predicting the runtime performance of DL models such as GPU memory consumption and training time is important to boost development productivity and reduce resource waste because improper configurations of hyperparameters and neural architectures can result in many failed training jobs or inappropriate models. However, general runtime performance prediction for DL models is challenging due to the hybrid DL programming paradigm, complicated hidden factors within the framework runtime, fairly huge model configuration space, and wide differences among models. In this paper, we propose DNNPerf, a novel and general machine learning approach to predict the runtime performance of DL models using Graph Neural Network. DNNPerf represents a DL model as a directed acyclic computation graph and designs a rich set of effective performance-related features based on the computational semantics of both nodes and edges. We also propose a new Attention-based Node-Edge Encoder to better encode the node and edge features. DNNPerf is extensively evaluated on thousands of configurations of real-world and synthetic DL models to predict their GPU memory consumption and training time. The experimental results demonstrate that DNNPerf achieves an overall error of 13.684% for the GPU memory consumption prediction and an overall error of 7.443% for the training time prediction, outperforming all the compared methods.},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
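DNNPerf represents a model as a directed acyclic computation graph with performance-related node and edge features. Below is a toy sketch of that representation with one round of mean-aggregation message passing; the features, the untrained projection, and the trivial readout are all hypothetical, and the paper's attention-based node-edge encoder is considerably richer.

```python
import numpy as np

# A three-node computation graph: conv -> relu -> fc.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
feats = np.array([[2.0e8, 1.2e6],   # hypothetical per-node FLOPs, param counts
                  [1.0e6, 0.0],
                  [4.0e6, 4.1e5]])
feats = feats / feats.max(axis=0)               # normalize feature columns

W = np.full((2, 4), 0.01)                       # untrained projection, illustration only
deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
h = np.tanh(((adj @ feats) / deg + feats) @ W)  # aggregate successors, then embed
graph_embedding = h.mean(axis=0)                # graph-level readout
print("predicted cost (arbitrary units):", graph_embedding.sum())
```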
Pham, Hieu; Le, Quoc V
AutoDropout - Learning Dropout Patterns to Regularize Deep Networks Technical Report
2021.
@techreport{Pham2021_okt,
title = {AutoDropout - Learning Dropout Patterns to Regularize Deep Networks},
author = {Hieu Pham and Quoc V Le},
url = {https://arxiv.org/abs/2101.01761},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.01761},
key = {journals/corr/abs-2101-01761},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Huang, Yufang; Axsom, Kelly M; Lee, John; Subramanian, Lakshminarayanan; Zhang, Yiye
DICE: Deep Significance Clustering for Outcome-Aware Stratification Technical Report
2021.
@techreport{YufangHuang2021_xxi,
title = {DICE: Deep Significance Clustering for Outcome-Aware Stratification},
author = {Yufang Huang and Kelly M Axsom and John Lee and Lakshminarayanan Subramanian and Yiye Zhang},
url = {https://arxiv.org/abs/2101.02344},
year = {2021},
date = {2021-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ma, Ailong; Wan, Yuting; Zhong, Yanfei; Wang, Junjue; Zhang, Liangpei
SceneNet: Remote sensing scene classification deep learning network using multi-objective neural evolution architecture search Technical Report
2021, ISSN: 0924-2716.
@techreport{AilongMa2021_voj,
title = {SceneNet: Remote sensing scene classification deep learning network using multi-objective neural evolution architecture search},
author = {Ailong Ma and Yuting Wan and Yanfei Zhong and Junjue Wang and Liangpei Zhang},
url = {https://www.sciencedirect.com/science/article/pii/S0924271620303361},
doi = {https://doi.org/10.1016/j.isprsjprs.2020.11.025},
issn = {0924-2716},
year = {2021},
date = {2021-01-01},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {172},
pages = {171-188},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yang, Hansi; Yao, Quanming; Kwok, James T
Tensorizing Subgraph Search in the Supernet Technical Report
2021.
@techreport{Yang2021_atf,
title = {Tensorizing Subgraph Search in the Supernet},
author = {Hansi Yang and Quanming Yao and James T Kwok},
url = {https://arxiv.org/abs/2101.01078},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.01078},
key = {journals/corr/abs-2101-01078},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Syed, Muhtadyuzzaman; Srinivasan, Arvind Akpuram
Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search Technical Report
2021.
@techreport{Syed2021_kud,
title = {Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search},
author = {Muhtadyuzzaman Syed and Arvind Akpuram Srinivasan},
url = {https://arxiv.org/abs/2101.00732},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.00732},
key = {journals/corr/abs-2101-00732},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Huang, Hanxun; Ma, Xingjun; Erfani, Sarah M; Bailey, James
Neural Architecture Search via Combinatorial Multi-Armed Bandit Technical Report
2021.
@techreport{Huang2021_aks,
title = {Neural Architecture Search via Combinatorial Multi-Armed Bandit},
author = {Hanxun Huang and Xingjun Ma and Sarah M Erfani and James Bailey},
url = {https://arxiv.org/abs/2101.00336},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.00336},
key = {journals/corr/abs-2101-00336},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
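Casting operation selection as a bandit can be illustrated with plain UCB1 on a single edge, as sketched below with made-up reward statistics; the paper tackles the harder combinatorial setting, where arms are chosen jointly across all edges.

```python
import math
import random

ops = ["conv3x3", "conv5x5", "skip", "maxpool"]
mean_acc = {"conv3x3": 0.90, "conv5x5": 0.85, "skip": 0.70, "maxpool": 0.60}
counts = [0] * len(ops)
totals = [0.0] * len(ops)

def play(op):
    # Stand-in for evaluating a sampled architecture that uses `op`.
    return random.gauss(mean_acc[op], 0.05)

for t in range(1, 201):
    ucb = [totals[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
           if counts[i] else float("inf") for i in range(len(ops))]
    i = ucb.index(max(ucb))
    counts[i] += 1
    totals[i] += play(ops[i])

print("most pulled op:", ops[counts.index(max(counts))])
```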
Tang, Tianqi; Yu, Xin; Dong, Xuanyi; Yang, Yi
Auto-Navigator: Decoupled Neural Architecture Search for Visual Navigation Technical Report
2021.
@techreport{Tang2021_szq,
title = {Auto-Navigator: Decoupled Neural Architecture Search for Visual Navigation},
author = {Tianqi Tang and Xin Yu and Xuanyi Dong and Yi Yang},
url = {https://openaccess.thecvf.com/content/WACV2021/html/Tang_Auto-Navigator_Decoupled_Neural_Architecture_Search_for_Visual_Navigation_WACV_2021_paper.html},
year = {2021},
date = {2021-01-01},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
pages = {3743-3752},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zheng, Shuai; Wang, Yabin; Li, Baotong; Li, Xin
A Hardware-adaptive Deep Feature Matching Pipeline for Real-time 3D Reconstruction Journal Article
In: Computer-Aided Design, vol. 132, pp. 102984, 2021.
@article{Zheng2021_mdc,
title = {A Hardware-adaptive Deep Feature Matching Pipeline for Real-time 3D Reconstruction},
author = {Shuai Zheng and Yabin Wang and Baotong Li and Xin Li},
url = {https://www.sciencedirect.com/science/article/abs/pii/S0010448520301779},
doi = {10.1016/J.CAD.2020.102984},
year = {2021},
date = {2021-01-01},
volume = {132},
pages = {102984},
key = {journals/cad/ZhengWLL21},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hosseini, Ramtin; Yang, Xingyi; Xie, Pengtao
DSRNA - Differentiable Search of Robust Neural Architectures Proceedings Article
In: CVPR 2021, 2021.
@inproceedings{Hosseini2020_zbg,
title = {DSRNA - Differentiable Search of Robust Neural Architectures},
author = {Ramtin Hosseini and Xingyi Yang and Pengtao Xie},
url = {https://arxiv.org/abs/2012.06122},
year = {2021},
date = {2021-01-01},
booktitle = {CVPR 2021},
volume = {abs/2012.06122},
key = {journals/corr/abs-2012-06122},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ruchte, Michael; Zela, Arber; Siems, Julien Niklas; Grabocka, Josif; Hutter, Frank
NASLib: A Modular and Flexible Neural Architecture Search Library Technical Report
2021.
@techreport{MichaelRuchte2021_kjn,
title = {NASLib: A Modular and Flexible Neural Architecture Search Library},
author = {Michael Ruchte and Arber Zela and Julien Niklas Siems and Josif Grabocka and Frank Hutter},
url = {https://openreview.net/forum?id=EohGx2HgNsA},
year = {2021},
date = {2021-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Shaikh, Azhar; Sinha, Nishant
Learn to Bind and Grow Neural Structures Proceedings Article
In: Proceedings of the 8th ACM IKDD CODS and 26th COMAD, pp. 119-126, 2021.
@inproceedings{Shaikh2021_sss,
title = {Learn to Bind and Grow Neural Structures},
author = {Azhar Shaikh and Nishant Sinha},
url = {https://arxiv.org/abs/2011.10568},
doi = {10.1145/3430984.3431019},
year = {2021},
date = {2021-01-01},
pages = {119-126},
key = {conf/comad/ShaikhS21},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Lu, Hao; Han, Hu
NAS-HR: search of neural architecture for heart-rate estimation from face videos Technical Report
2021.
@techreport{Lu2021_pdu,
title = {NAS-HR: search of neural architecture for heart-rate estimation from face videos},
author = {Hao Lu and Hu Han},
url = {http://vr-ih.com/vrih/resource/latest_accept/323112704838656.pdf},
doi = {10.1016/j.vrih.2020.10.002},
year = {2021},
date = {2021-01-01},
journal = {Virtual Reality & Intelligent Hardware},
pages = {33--42},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Peng, Daiyi; Dong, Xuanyi; Real, Esteban; Tan, Mingxing; Lu, Yifeng; Liu, Hanxiao; Bender, Gabriel; Kraft, Adam; Liang, Chen; Le, Quoc V
PyGlove - Symbolic Programming for Automated Machine Learning Technical Report
2021.
@techreport{Peng2021_jau,
title = {PyGlove - Symbolic Programming for Automated Machine Learning},
author = {Daiyi Peng and Xuanyi Dong and Esteban Real and Mingxing Tan and Yifeng Lu and Hanxiao Liu and Gabriel Bender and Adam Kraft and Chen Liang and Quoc V Le},
url = {https://papers.nips.cc/paper/2020/file/012a91467f210472fab4e11359bbfef6-Paper.pdf},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.08809},
key = {journals/corr/abs-2101-08809},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Liu, Jiaheng; Zhou, Shunfeng; Wu, Yichao; Chen, Ken; Ouyang, Wanli; Xu, Dong
Block Proposal Neural Architecture Search Journal Article
In: IEEE Transactions on Image Processing, vol. 30, pp. 15-25, 2021.
@article{Liu2021_gru,
title = {Block Proposal Neural Architecture Search},
author = {Jiaheng Liu and Shunfeng Zhou and Yichao Wu and Ken Chen and Wanli Ouyang and Dong Xu},
url = {https://pubmed.ncbi.nlm.nih.gov/33035163/},
doi = {10.1109/TIP.2020.3028288},
year = {2021},
date = {2021-01-01},
volume = {30},
pages = {15-25},
key = {journals/tip/LiuZWCOX21},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chu, Xiangxiang; Wang, Xiaoxing; Zhang, Bo; Lu, Shun; Wei, Xiaolin; Yan, Junchi
DARTS-: Robustly Stepping out of Performance Collapse Without Indicators Technical Report
2021.
@techreport{XiangxiangChu2021_pqn,
title = {DARTS-: Robustly Stepping out of Performance Collapse Without Indicators},
author = {Xiangxiang Chu and Xiaoxing Wang and Bo Zhang and Shun Lu and Xiaolin Wei and Junchi Yan},
url = {https://arxiv.org/abs/2009.01027},
year = {2021},
date = {2021-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Liang, Xinle; Liu, Yang; Luo, Jiahuan; He, Yuanqin; Chen, Tianjian; Yang, Qiang
Self-supervised Cross-silo Federated Neural Architecture Search Technical Report
2021.
@techreport{Liang2021_fxt,
title = {Self-supervised Cross-silo Federated Neural Architecture Search},
author = {Xinle Liang and Yang Liu and Jiahuan Luo and Yuanqin He and Tianjian Chen and Qiang Yang},
url = {https://arxiv.org/abs/2007.01500},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.11896},
key = {journals/corr/abs-2101-11896},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Mittal, Govind; Korus, Pawel; Memon, Nasir D
FiFTy - Large-Scale File Fragment Type Identification Using Convolutional Neural Networks Journal Article
In: IEEE Transactions on Information Forensics and Security, vol. 16, pp. 28-41, 2021.
@article{Mittal2021_rhn,
title = {FiFTy - Large-Scale File Fragment Type Identification Using Convolutional Neural Networks},
author = {Govind Mittal and Pawel Korus and Nasir D Memon},
url = {https://ieeexplore.ieee.org/abstract/document/9122499},
doi = {10.1109/TIFS.2020.3004266},
year = {2021},
date = {2021-01-01},
volume = {16},
pages = {28-41},
key = {journals/tifs/MittalKM21},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wei, Tao; Wang, Changhu; Chen, Chang Wen
Modularized Morphing of Deep Convolutional Neural Networks - A Graph Approach Journal Article
In: IEEE Transactions on Computers, vol. 70, no. 2, pp. 305-315, 2021.
@article{Wei2021_ghp,
title = {Modularized Morphing of Deep Convolutional Neural Networks - A Graph Approach},
author = {Tao Wei and Changhu Wang and Chang Wen Chen},
url = {https://arxiv.org/abs/1701.03281},
doi = {10.1109/TC.2020.2988006},
year = {2021},
date = {2021-01-01},
volume = {70},
number = {2},
pages = {305-315},
key = {journals/tc/WeiWC21},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Song, Xingyou; Choromanski, Krzysztof; Parker-Holder, Jack; Tang, Yunhao; Peng, Daiyi; Jain, Deepali; Gao, Wenbo; Pacchiano, Aldo; Sarlós, Tamás; Yang, Yuxiang
ES-ENAS - Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning Technical Report
2021.
@techreport{Song2021_xto,
title = {ES-ENAS - Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning},
author = {Xingyou Song and Krzysztof Choromanski and Jack Parker-Holder and Yunhao Tang and Daiyi Peng and Deepali Jain and Wenbo Gao and Aldo Pacchiano and Tamás Sarlós and Yuxiang Yang},
url = {https://arxiv.org/abs/2101.07415},
year = {2021},
date = {2021-01-01},
volume = {abs/2101.07415},
key = {journals/corr/abs-2101-07415},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Rorabaugh, Ariel Keller; Caíno-Lores, Silvina; Wyatt II, Michael R; Johnston, Travis; Taufer, Michela
PEng4NN: An Accurate Performance Estimation Engine for Efficient Automated Neural Network Architecture Search Technical Report
2021.
@techreport{ArielKellerRorabaugh2021_ncl,
title = {PEng4NN: An Accurate Performance Estimation Engine for Efficient Automated Neural Network Architecture Search},
author = {Ariel Keller Rorabaugh and Silvina Caíno-Lores and Michael R Wyatt II and Travis Johnston and Michela Taufer},
url = {https://arxiv.org/abs/2101.04185},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.04185},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhang, Haokui; Gong, Chengrong; Bai, Yunpeng; Bai, Zongwen; Li, Ying
3D-ANAS: 3D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification Miscellaneous
2021.
@misc{HaokuiZhang2021_uqt,
title = {3D-ANAS: 3D Asymmetric Neural Architecture Search for Fast Hyperspectral Image Classification},
author = {Haokui Zhang and Chengrong Gong and Yunpeng Bai and Zongwen Bai and Ying Li},
url = {https://arxiv.org/abs/2101.04287},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.04287},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Gu, Hongyang; Fu, Guangyuan; Li, Jianmin; Zhu, Jun
Auto-ReID+: Searching for a multi-branch ConvNet for person re-identification Journal Article
In: Neurocomputing, vol. 435, pp. 53-66, 2021, ISSN: 0925-2312.
@article{HongyangGu2021_bui,
title = {Auto-ReID+: Searching for a multi-branch ConvNet for person re-identification},
author = {Hongyang Gu and Guangyuan Fu and Jianmin Li and Jun Zhu},
url = {https://www.sciencedirect.com/science/article/pii/S0925231220320178},
doi = {https://doi.org/10.1016/j.neucom.2020.12.105},
issn = {0925-2312},
year = {2021},
date = {2021-01-01},
journal = {Neurocomputing},
volume = {435},
pages = {53-66},
abstract = {In the field of person re-identification (ReID), multi-branch models are more effective in learning robust features than single-branch models. The current popular multi-branch models are based on ResNet or GoogleNet. These networks are designed initially to solve classification problems. There is an essential difference between ReID and classification problems, so it is particularly important to find a corresponding multi-branch backbone for ReID tasks. We propose to automatically search for a multi-branch convolutional neural network (CNN) for ReID tasks utilizing neural architecture search (NAS). First, we designed a multi-resolution, multi-branch macro search architecture that can extract more abundant scale information. Then in the searching process, the early stopping mechanism is proposed to improve the effectiveness and efficiency of the entire searching process. Finally, we experimentally prove on four mainstream datasets that the searched model can achieve state-of-the-art performance with only 5.7 million parameters.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
He, Xin; Wang, Shihao; Chu, Xiaowen; Shi, Shaohuai; Tang, Jiangping; Liu, Xin; Yan, Chenggang; Zhang, Jiyong; Ding, Guiguang
Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-05442,
title = {Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans},
author = {Xin He and Shihao Wang and Xiaowen Chu and Shaohuai Shi and Jiangping Tang and Xin Liu and Chenggang Yan and Jiyong Zhang and Guiguang Ding},
url = {https://arxiv.org/abs/2101.05442},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.05442},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Zhou, Benjia; Li, Yunan; Wan, Jun
Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition Technical Report
2021.
@techreport{zhou2021regional,
title = {Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition},
author = {Benjia Zhou and Yunan Li and Jun Wan},
url = {https://arxiv.org/abs/2102.05348},
year = {2021},
date = {2021-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yang, Zhao; Zhang, Shengbing; Li, Ruxu; Li, Chuxi; Wang, Miao; Wang, Danghui; Zhang, Meng
Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization Journal Article
In: Sensors, vol. 21, no. 2, 2021, ISSN: 1424-8220.
@article{s21020444,
title = {Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization},
author = {Zhao Yang and Shengbing Zhang and Ruxu Li and Chuxi Li and Miao Wang and Danghui Wang and Meng Zhang},
url = {https://www.mdpi.com/1424-8220/21/2/444},
doi = {10.3390/s21020444},
issn = {1424-8220},
year = {2021},
date = {2021-01-01},
journal = {Sensors},
volume = {21},
number = {2},
abstract = {With the development of deep learning technologies and edge computing, the combination of them can make artificial intelligence ubiquitous. Due to the constrained computation resources of the edge device, the research in the field of on-device deep learning not only focuses on the model accuracy but also on the model efficiency, for example, inference latency. There are many attempts to optimize the existing deep learning models for the purpose of deploying them on the edge devices that meet specific application requirements while maintaining high accuracy. Such work not only requires professional knowledge but also needs a lot of experiments, which limits the customization of neural networks for varied devices and application scenarios. In order to reduce the human intervention in designing and optimizing the neural network structure, multi-objective neural architecture search methods that can automatically search for neural networks featured with high accuracy and can satisfy certain hardware performance requirements are proposed. However, the current methods commonly set accuracy and inference latency as the performance indicator during the search process, and sample numerous network structures to obtain the required neural network. Lacking regulation to the search direction with the search objectives will generate a large number of useless networks during the search process, which influences the search efficiency to a great extent. Therefore, in this paper, an efficient resource-aware search method is proposed. Firstly, the network inference consumption profiling model for any specific device is established, and it can help us directly obtain the resource consumption of each operation in the network structure and the inference latency of the entire sampled network. Next, on the basis of the Bayesian search, a resource-aware Pareto Bayesian search is proposed. Accuracy and inference latency are set as the constraints to regulate the search direction. With a clearer search direction, the overall search efficiency will be improved. Furthermore, cell-based structure and lightweight operation are applied to optimize the search space for further enhancing the search efficiency. The experimental results demonstrate that with our method, the inference latency of the searched network structure reduced 94.71% without scarifying the accuracy. At the same time, the search efficiency increased by 18.18%.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
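The search-direction regulation described in the abstract above rests on Pareto dominance over accuracy and latency. Below is a minimal dominance filter, assuming both objectives are minimized; the error rates and latencies are made-up numbers.

```python
def pareto_front(points):
    """Keep the points that no other point beats in both objectives."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# (error rate, latency in ms) for hypothetical candidate networks
candidates = [(0.08, 120), (0.10, 60), (0.09, 200), (0.12, 55), (0.07, 300)]
print(pareto_front(candidates))
# -> [(0.08, 120), (0.10, 60), (0.12, 55), (0.07, 300)]
```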
Weng, Yu; Chen, Zehua; Zhou, Tianbao
Improved differentiable neural architecture search for single image super-resolution Journal Article
In: Peer-to-Peer Networking and Applications, 2021.
@article{journals/PPNA/Weng21,
title = {Improved differentiable neural architecture search for single image super-resolution},
author = {Yu Weng and Zehua Chen and Tianbao Zhou},
url = {https://doi.org/10.1007/s12083-020-01048-4},
doi = {10.1007/s12083-020-01048-4},
year = {2021},
date = {2021-01-01},
journal = {Peer-to-Peer Networking and Applications},
abstract = {Deep learning has shown prominent superiority over other machine learning algorithms in Single Image Super-Resolution (SISR). In order to reduce the efforts and resources cost on manually designing deep architecture, we use differentiable neural architecture search (DARTS) on SISR. Since neural architecture search was originally used for classification tasks, our experiments show that direct usage of DARTS on super-resolutions tasks will give rise to many skip connections in the search architecture, which results in the poor performance of final architecture. Thus, it is necessary for DARTS to have made some improvements for the application in the field of SISR. According to characteristics of SISR, we remove redundant operations and redesign some operations in the cell to achieve an improved DARTS. Then we use the improved DARTS to search convolution cells as a nonlinear mapping part of super-resolution network. The new super-resolution architecture shows its effectiveness on benchmark datasets and DIV2K dataset.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Liu, Jia; Jin, Yaochu
Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-06507,
title = {Multi-objective Search of Robust Neural Architectures against Multiple Types of Adversarial Attacks},
author = {Jia Liu and Yaochu Jin},
url = {https://arxiv.org/abs/2101.06507},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.06507},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wu, Yan; Huang, Zhiwu; Kumar, Suryansh; Sukthanker, Rhea Sanjay; Timofte, Radu; Gool, Luc Van
Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-06658,
title = {Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution},
author = {Yan Wu and Zhiwu Huang and Suryansh Kumar and Rhea Sanjay Sukthanker and Radu Timofte and Luc Van Gool},
url = {https://arxiv.org/abs/2101.06658},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.06658},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Alparslan, Yigit; Moyer, Ethan Jacob; Isozaki, Isamu Mclean; Schwartz, Daniel; Dunlop, Adam; Dave, Shesh; Kim, Edward
Towards Searching Efficient and Accurate Neural Network Architectures in Binary Classification Problems Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-06511,
title = {Towards Searching Efficient and Accurate Neural Network Architectures in Binary Classification Problems},
author = {Yigit Alparslan and Ethan Jacob Moyer and Isamu Mclean Isozaki and Daniel Schwartz and Adam Dunlop and Shesh Dave and Edward Kim},
url = {https://arxiv.org/abs/2101.06511},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.06511},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Lee, Sanghyeop; Kim, Junyeob; Kang, Hyeon; Kang, Do-Young; Park, Jangsik
Genetic Algorithm Based Deep Learning Neural Network Structure and Hyperparameter Optimization Journal Article
In: Applied Sciences, vol. 11, no. 2, 2021, ISSN: 2076-3417.
@article{app11020744,
title = {Genetic Algorithm Based Deep Learning Neural Network Structure and Hyperparameter Optimization},
author = {Sanghyeop Lee and Junyeob Kim and Hyeon Kang and Do-Young Kang and Jangsik Park},
url = {https://www.mdpi.com/2076-3417/11/2/744},
doi = {10.3390/app11020744},
issn = {2076-3417},
year = {2021},
date = {2021-01-01},
journal = {Applied Sciences},
volume = {11},
number = {2},
abstract = {Alzheimer’s disease is one of the major challenges of population ageing, and diagnosis and prediction of the disease through various biomarkers is the key. While the application of deep learning as imaging technologies has recently expanded across the medical industry, empirical design of these technologies is very difficult. The main reason for this problem is that the performance of the Convolutional Neural Networks (CNN) differ greatly depending on the statistical distribution of the input dataset. Different hyperparameters also greatly affect the convergence of the CNN models. With this amount of information, selecting appropriate parameters for the network structure has became a large research area. Genetic Algorithm (GA), is a very popular technique to automatically select a high-performance network architecture. In this paper, we show the possibility of optimising the network architecture using GA, where its search space includes both network structure configuration and hyperparameters. To verify the performance of our Algorithm, we used an amyloid brain image dataset that is used for Alzheimer’s disease diagnosis. As a result, our algorithm outperforms Genetic CNN by 11.73% on a given classification task.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
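The GA loop the abstract describes, searching jointly over structure configuration and hyperparameters, can be sketched in a few lines; the two-gene genome, mutation operator, and analytic fitness below are hypothetical stand-ins for the paper's encoding and CNN training.

```python
import random

def fitness(genome):
    # Stand-in for validation accuracy of the decoded network.
    layers, lr = genome
    return -abs(layers - 6) - 10 * abs(lr - 0.01)

def mutate(genome):
    layers, lr = genome
    return (max(1, layers + random.choice([-1, 0, 1])),  # structural gene
            max(1e-5, lr * random.uniform(0.5, 2.0)))    # hyperparameter gene

pop = [(random.randint(1, 12), 10 ** random.uniform(-4, -1)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                                    # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

print("best genome:", max(pop, key=fitness))
```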
Jiang, Hanliang; Shen, Fuhao; Gao, Fei; Han, Weidong
Learning efficient, explainable and discriminative representations for pulmonary nodules classification Journal Article
In: Pattern Recognition, vol. 113, pp. 107825, 2021, ISSN: 0031-3203.
@article{JIANG2021107825,
title = {Learning efficient, explainable and discriminative representations for pulmonary nodules classification},
author = {Hanliang Jiang and Fuhao Shen and Fei Gao and Weidong Han},
url = {https://www.sciencedirect.com/science/article/pii/S0031320321000121},
doi = {https://doi.org/10.1016/j.patcog.2021.107825},
issn = {0031-3203},
year = {2021},
date = {2021-01-01},
journal = {Pattern Recognition},
volume = {113},
pages = {107825},
abstract = {Automatic pulmonary nodules classification is significant for early diagnosis of lung cancers. Recently, deep learning techniques have enabled remarkable progress in this field. However, these deep models are typically of high computational complexity and work in a black-box manner. To combat these challenges, in this work, we aim to build an efficient and (partially) explainable classification model. Specially, we use neural architecture search (NAS) to automatically search 3D network architectures with excellent accuracy/speed trade-off. Besides, we use the convolutional block attention module (CBAM) in the networks, which helps us understand the reasoning process. During training, we use A-Softmax loss to learn angularly discriminative representations. In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness. We conduct extensive experiments on the LIDC-IDRI database. Compared with previous state-of-the-art, our model shows highly comparable performance by using less than 1/40 parameters. Besides, empirical study shows that the reasoning process of learned networks is in conformity with physicians’ diagnosis. Related code and results have been released at: https://github.com/fei-hdu/NAS-Lung.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Alparslan, Yigit; Moyer, Ethan Jacob; Kim, Edward
Evaluating Online and Offline Accuracy Traversal Algorithms for k-Complete Neural Network Architectures Journal Article
In: CoRR, vol. abs/2101.06518, 2021.
@article{DBLP:journals/corr/abs-2101-06518,
title = {Evaluating Online and Offline Accuracy Traversal Algorithms for k-Complete Neural Network Architectures},
author = {Yigit Alparslan and Ethan Jacob Moyer and Edward Kim},
url = {https://arxiv.org/abs/2101.06518},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.06518},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Vaccaro, Lorenzo; Sansonetti, Giuseppe; Micarelli, Alessandro
An Empirical Review of Automated Machine Learning Journal Article
In: Computers, vol. 10, no. 1, 2021, ISSN: 2073-431X.
@article{computers10010011,
title = {An Empirical Review of Automated Machine Learning},
author = {Lorenzo Vaccaro and Giuseppe Sansonetti and Alessandro Micarelli},
url = {https://www.mdpi.com/2073-431X/10/1/11},
doi = {10.3390/computers10010011},
issn = {2073-431X},
year = {2021},
date = {2021-01-01},
journal = {Computers},
volume = {10},
number = {1},
abstract = {In recent years, Automated Machine Learning (AutoML) has become increasingly important in Computer Science due to the valuable potential it offers. This is testified by the high number of works published in the academic field and the significant efforts made in the industrial sector. However, some problems still need to be resolved. In this paper, we review some Machine Learning (ML) models and methods proposed in the literature to analyze their strengths and weaknesses. Then, we propose their use—alone or in combination with other approaches—to provide possible valid AutoML solutions. We analyze those solutions from a theoretical point of view and evaluate them empirically on three Atari games from the Arcade Learning Environment. Our goal is to identify what, we believe, could be some promising ways to create truly effective AutoML frameworks, therefore able to replace the human expert as much as possible, thereby making easier the process of applying ML approaches to typical problems of specific domains. We hope that the findings of our study will provide useful insights for future research work in AutoML.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Qing; Wu, Xia; Liu, Tianming
Differentiable Neural Architecture Search for Optimal Spatial/Temporal Brain Function Network Decomposition Journal Article
In: Medical Image Analysis, pp. 101974, 2021, ISSN: 1361-8415.
@article{LI2021101974,
title = {Differentiable Neural Architecture Search for Optimal Spatial/Temporal Brain Function Network Decomposition},
author = {Qing Li and Xia Wu and Tianming Liu},
url = {https://www.sciencedirect.com/science/article/pii/S1361841521000207},
doi = {https://doi.org/10.1016/j.media.2021.101974},
issn = {1361-8415},
year = {2021},
date = {2021-01-01},
journal = {Medical Image Analysis},
pages = {101974},
abstract = {It has been a key topic to decompose the brain's spatial/temporal function networks from 4D functional magnetic resonance imaging (fMRI) data. With the advantages of robust and meaningful brain pattern extraction, deep neural networks have been shown to be more powerful and flexible in fMRI data modeling than other traditional methods. However, the challenge of designing neural network architecture for high-dimensional and complex fMRI data has also been realized recently. In this paper, we propose a new spatial/temporal differentiable neural architecture search algorithm (ST-DARTS) for optimal brain network decomposition. The core idea of ST-DARTS is to optimize the inner cell structure of the vanilla recurrent neural network (RNN) in order to effectively decompose spatial/temporal brain function networks from fMRI data. Based on the evaluations on all seven fMRI tasks in human connectome project (HCP) dataset, the ST-DARTS model is shown to perform promisingly, both spatially (i.e., it can recognize the most stimuli-correlated spatial brain network activation that is very similar to the benchmark) and temporally (i.e., its temporal activity is highly positively correlated with the task-design). To further improve the efficiency of ST-DARTS model, we introduce a flexible early-stopping mechanism, named as ST-DARTS±, which further improves experimental results significantly. To our best knowledge, the proposed ST-DARTS and ST-DARTS+ models are among the early efforts in optimally decomposing spatial/temporal brain function networks from fMRI data with neural architecture search strategy and they demonstrate great promise.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Wu, Haiwei; Zhou, Jiantao
GIID-Net: Generalizable Image Inpainting Detection via Neural Architecture Search and Attention Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-07419,
title = {GIID-Net: Generalizable Image Inpainting Detection via Neural Architecture Search and Attention},
author = {Haiwei Wu and Jiantao Zhou},
url = {https://arxiv.org/abs/2101.07419},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.07419},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Fu, Chaoyou; Hu, Yibo; Wu, Xiang; Shi, Hailin; Mei, Tao; He, Ran
CM-NAS: Rethinking Cross-Modality Neural Architectures for Visible-Infrared Person Re-Identification Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-08467,
title = {CM-NAS: Rethinking Cross-Modality Neural Architectures for Visible-Infrared Person Re-Identification},
author = {Chaoyou Fu and Yibo Hu and Xiang Wu and Hailin Shi and Tao Mei and Ran He},
url = {https://arxiv.org/abs/2101.08467},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.08467},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Abdelfattah, Mohamed S; Mehrotra, Abhinav; Dudziak, Lukasz; Lane, Nicholas D
Zero-Cost Proxies for Lightweight NAS Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-08134,
title = {Zero-Cost Proxies for Lightweight NAS},
author = {Mohamed S Abdelfattah and Abhinav Mehrotra and Lukasz Dudziak and Nicholas D Lane},
url = {https://arxiv.org/abs/2101.08134},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.08134},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
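Zero-cost proxies score an untrained network from a single minibatch instead of training it. Below is a sketch of one such proxy, the gradient norm at initialization, over a hypothetical toy network; the paper evaluates several proxies of this kind, including more involved ones such as synflow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_score(net, x, y):
    """Sum of parameter gradient norms after one backward pass, no training."""
    net.zero_grad()
    F.cross_entropy(net(x), y).backward()
    return sum(p.grad.norm().item() for p in net.parameters() if p.grad is not None)

net = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(8, 1, 8, 8), torch.randint(0, 10, (8,))
print("proxy score:", grad_norm_score(net, x, y))
```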
Benmeziane, Hadjer; Maghraoui, Kaoutar El; Ouarnoughi, Hamza; Niar, Smaïl; Wistuba, Martin; Wang, Naigang
A Comprehensive Survey on Hardware-Aware Neural Architecture Search Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-09336,
title = {A Comprehensive Survey on Hardware-Aware Neural Architecture Search},
author = {Hadjer Benmeziane and Kaoutar El Maghraoui and Hamza Ouarnoughi and Smaïl Niar and Martin Wistuba and Naigang Wang},
url = {https://arxiv.org/abs/2101.09336},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.09336},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
He, Xin; Wang, Shihao; Ying, Guohao; Zhang, Jiyong; Chu, Xiaowen
Efficient Multi-objective Evolutionary 3D Neural Architecture Search for COVID-19 Detection with Chest CT Scans Technical Report
2021.
@techreport{DBLP:journals/corr/abs-2101-10667,
title = {Efficient Multi-objective Evolutionary 3D Neural Architecture Search for COVID-19 Detection with Chest CT Scans},
author = {Xin He and Shihao Wang and Guohao Ying and Jiyong Zhang and Xiaowen Chu},
url = {https://arxiv.org/abs/2101.10667},
year = {2021},
date = {2021-01-01},
journal = {CoRR},
volume = {abs/2101.10667},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}