Maintained by Difan Deng and Marius Lindauer.
The following list covers papers related to neural architecture search. It is by no means complete. If a paper is missing from the list, please let us know.
Please note that although NAS methods steadily improve, the quality of empirical evaluations in this field still lags behind that of other areas in machine learning, AI, and optimization. We would therefore like to share some best practices for empirical evaluations of NAS methods, which we believe will facilitate sustained and measurable progress in the field. If you are interested in a teaser, please read our blog post or jump directly to our checklist.
Transformers have gained increasing popularity in different domains. For a comprehensive list of papers focusing on Neural Architecture Search for Transformer-Based spaces, the awesome-transformer-search repo is all you need.
Early Access
Zhu, Huijuan; Xia, Mengzhen; Wang, Liangmin; Xu, Zhicheng; Sheng, Victor S.
A Novel Knowledge Search Structure for Android Malware Detection Journal Article
In: IEEE Transactions on Services Computing, no. 01, pp. 1-14, 2024, ISSN: 1939-1374, (Early Access).
@article{10750332,
title = { A Novel Knowledge Search Structure for Android Malware Detection },
author = {Huijuan Zhu and Mengzhen Xia and Liangmin Wang and Zhicheng Xu and Victor S. Sheng},
url = {https://doi.ieeecomputersociety.org/10.1109/TSC.2024.3496333},
doi = {10.1109/TSC.2024.3496333},
issn = {1939-1374},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
journal = {IEEE Transactions on Services Computing},
number = {01},
pages = {1-14},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {While the Android platform is gaining explosive popularity, the number of malicious software (malware) is also increasing sharply. Thus, numerous malware detection schemes based on deep learning have been proposed. However, they usually suffer from cumbersome models with complex architectures and tremendous numbers of parameters. They usually require heavy computational power, which seriously limits their deployment in real application environments with limited resources (e.g., mobile edge devices). To surmount this challenge, we propose a novel Knowledge Distillation (KD) structure—Knowledge Search (KS). KS exploits Neural Architecture Search (NAS) to adaptively bridge the capability gap between teacher and student networks in KD by introducing a parallelized student-wise search approach. In addition, we carefully analyze the characteristics of malware and locate three cost-effective types of features closely related to malicious attacks, namely, Application Programming Interfaces (APIs), permissions and vulnerable components, to characterize Android Applications (Apps). Therefore, based on typical samples collected in recent years, we refine features while exploiting the natural relationship between them, and construct corresponding datasets. Extensive experiments are conducted to investigate the effectiveness and sustainability of KS on these datasets. Our experimental results show that the proposed method yields an accuracy of 97.89% in detecting Android malware, performing better than state-of-the-art solutions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
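As background to the KD component above: a minimal PyTorch sketch of the standard soft-target distillation loss (Hinton-style) that teacher-student setups such as KS build on. This is generic background rather than the paper's KS search procedure, and the temperature T and weight alpha are illustrative assumptions.

# Minimal sketch of the classic soft-target distillation loss; illustrative
# values only, not the KS method from the paper above.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a softened teacher-matching KL term with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients keep a comparable magnitude across T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 5-class problem.
s, t = torch.randn(8, 5), torch.randn(8, 5)
y = torch.randint(0, 5, (8,))
print(kd_loss(s, t, y).item())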
Zhang, Feifei; Li, Mao; Ge, Jidong; Tang, Fenghui; Zhang, Sheng; Wu, Jie; Luo, Bin
Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-18, 2024, ISSN: 1558-0660, (Early Access).
@article{10742476,
title = { Privacy-Preserving Federated Neural Architecture Search With Enhanced Robustness for Edge Computing },
author = {Feifei Zhang and Mao Li and Jidong Ge and Fenghui Tang and Sheng Zhang and Jie Wu and Bin Luo},
url = {https://doi.ieeecomputersociety.org/10.1109/TMC.2024.3490835},
doi = {10.1109/TMC.2024.3490835},
issn = {1558-0660},
year = {2024},
date = {2024-11-01},
urldate = {2024-11-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-18},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {With the development of large-scale artificial intelligence services, edge devices are becoming essential providers of data and computing power. However, these edge devices are not immune to malicious attacks. Federated learning (FL), while protecting privacy of decentralized data through secure aggregation, struggles to trace adversaries and lacks optimization for heterogeneity. We discover that FL augmented with Differentiable Architecture Search (DARTS) can improve resilience against backdoor attacks while remaining compatible with secure aggregation. Based on this, we propose a federated neural architecture search (NAS) framework named SLNAS. The architecture of SLNAS is built on three pivotal components: a server-side search space generation method that employs an evolutionary algorithm with dual encodings, a federated NAS process based on DARTS, and client-side architecture tuning that utilizes Gumbel softmax combined with knowledge distillation. To validate robustness, we adapt a framework that includes backdoor attacks based on trigger optimization, data poisoning, and model poisoning, targeting both model weights and architecture parameters. Extensive experiments demonstrate that SLNAS not only effectively counters advanced backdoor attacks but also handles heterogeneity, outperforming defense baselines across a wide range of backdoor attack scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Yu-Ming; Hsieh, Jun-Wei; Lee, Chun-Chieh; Fan, Kuo-Chin
RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-11, 2024, ISSN: 2691-4581, (Early Access).
@article{10685480,
title = { RATs-NAS: Redirection of Adjacent Trails on Graph Convolutional Networks for Predictor-based Neural Architecture Search },
author = {Yu-Ming Zhang and Jun-Wei Hsieh and Chun-Chieh Lee and Kuo-Chin Fan},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2024.3465433},
doi = {10.1109/TAI.2024.3465433},
issn = {2691-4581},
year = {2024},
date = {2024-09-01},
urldate = {2024-09-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-11},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Manually designed CNN architectures like VGG, ResNet, DenseNet, and MobileNet have achieved high performance across various tasks, but designing them is time-consuming and costly. Neural Architecture Search (NAS) automates the discovery of effective CNN architectures, reducing the need for experts. However, evaluating candidate architectures requires significant GPU resources, leading to the use of predictor-based NAS, in which graph convolutional networks (GCNs) are a popular choice for constructing predictors. However, we discover that, even though a GCN mimics the propagation of features in real architectures, the binary nature of the adjacency matrix limits its effectiveness. To address this, we propose Redirection of Adjacent Trails (RATs), which adaptively learns trail weights within the adjacency matrix. Our RATs-GCN outperforms other predictors by dynamically adjusting trail weights after each graph convolution layer. Additionally, the proposed Divide Search Sampling (DSS) strategy, based on the observation in cell-based NAS that architectures with similar FLOPs perform similarly, enhances search efficiency. Our RATs-NAS, which combines RATs-GCN and DSS, shows significant improvements over other predictor-based NAS methods on NASBench-101, NASBench-201, and NASBench-301.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
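The core idea of RATs, replacing the binary adjacency matrix of a GCN predictor with learnable trail weights, fits in a few lines of PyTorch. The layer below is a hypothetical illustration of that idea, not the authors' RATs-GCN implementation; the sigmoid gating and degree normalization are assumptions.

# Hypothetical sketch: a GCN predictor layer whose 0/1 cell-DAG adjacency is
# modulated by trainable trail weights, so edge strengths are learned rather
# than fixed. Illustration of the idea only.
import torch
import torch.nn as nn

class TrailWeightedGCNLayer(nn.Module):
    def __init__(self, n_nodes, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # One trainable weight per possible trail (edge); sigmoid keeps it in (0, 1).
        self.trail_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))

    def forward(self, x, adj):  # x: (n_nodes, in_dim), adj: binary (n_nodes, n_nodes)
        weighted_adj = adj * torch.sigmoid(self.trail_logits)  # redirected trails
        deg = weighted_adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        h = (weighted_adj / deg) @ x  # weighted neighbourhood aggregation
        return torch.relu(self.lin(h))

# Toy cell graph with 4 nodes and random operation embeddings.
adj = torch.tensor([[0., 1., 1., 0.],
                    [0., 0., 1., 1.],
                    [0., 0., 0., 1.],
                    [0., 0., 0., 0.]])
layer = TrailWeightedGCNLayer(n_nodes=4, in_dim=8, out_dim=16)
print(layer(torch.randn(4, 8), adj).shape)  # torch.Size([4, 16])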
Chen, X.; Yang, C.
CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture Journal Article
In: IEEE Micro, no. 01, pp. 1-12, 2024, ISSN: 1937-4143, (Early Access).
@article{10551739,
title = {CIMNet: Joint Search for Neural Network and Computing-in-Memory Architecture},
author = {X. Chen and C. Yang},
url = {https://www.computer.org/csdl/magazine/mi/5555/01/10551739/1XyKBmSlmPm},
doi = {10.1109/MM.2024.3409068},
issn = {1937-4143},
year = {2024},
date = {2024-06-01},
urldate = {2024-06-01},
journal = {IEEE Micro},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Computing-in-memory (CIM) architecture has been proven to effectively transcend the memory wall bottleneck, expanding the potential of low-power and high-throughput applications such as machine learning. Neural architecture search (NAS) designs ML models to meet a variety of accuracy, latency, and energy constraints. However, integrating CIM into NAS presents a major challenge due to additional simulation overhead from the non-ideal characteristics of CIM hardware. This work introduces a quantization- and device-aware accuracy predictor that jointly scores quantization policy, CIM architecture, and neural network architecture, eliminating the need for time-consuming simulations in the search process. We also propose reducing the search space based on architectural observations, resulting in a well-pruned search space customized for CIM. These allow for efficient exploration of superior combinations in mere CPU minutes. Our methodology yields CIMNet, which consistently improves the trade-off between accuracy and hardware efficiency on benchmarks, providing valuable architectural insights.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Yan, J.; Liu, J.; Xu, H.; Wang, Z.; Qiao, C.
Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing Journal Article
In: IEEE Transactions on Mobile Computing, no. 01, pp. 1-17, 2024, ISSN: 1558-0660, (Early Access).
@article{10460163,
title = {Peaches: Personalized Federated Learning with Neural Architecture Search in Edge Computing},
author = {J. Yan and J. Liu and H. Xu and Z. Wang and C. Qiao},
doi = {10.1109/TMC.2024.3373506},
issn = {1558-0660},
year = {2024},
date = {2024-03-01},
urldate = {2024-03-01},
journal = {IEEE Transactions on Mobile Computing},
number = {01},
pages = {1-17},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In edge computing (EC), federated learning (FL) enables numerous distributed devices (or workers) to collaboratively train AI models without exposing their local data. Most FL works adopt a predefined architecture on all participating workers for model training. However, since workers' local data distributions vary heavily in EC, a predefined architecture may not be the optimal choice for every worker. It is also unrealistic to manually design a high-performance architecture for each worker, which requires intense human expertise and effort. To tackle this challenge, neural architecture search (NAS) has been applied in FL to automate the architecture design process. Unfortunately, existing federated NAS frameworks often suffer from system heterogeneity and resource limitations. To remedy this problem, we present a novel framework, termed Peaches, to achieve efficient searching and training in resource-constrained EC systems. Specifically, the local model of each worker is a stack of base cells and personal cells, where the base cells are shared by all workers to capture common knowledge and the personal cells are customized for each worker to fit the local data. We determine the number of base cells, shared by all workers, according to the bandwidth budget on the parameter server. Besides, to relieve data and system heterogeneity, we find the optimal number of personal cells for each worker based on its computing capability. In addition, we gradually prune the search space during training to reduce resource consumption. We evaluate the performance of Peaches through extensive experiments, and the results show that Peaches achieves an average accuracy improvement of about 6.29% and up to 3.97× speedup compared with the baselines.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sun, Genchen; Liu, Zhengkun; Gan, Lin; Su, Hang; Li, Ting; Zhao, Wenfeng; Sun, Biao
SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture Journal Article
In: IEEE Transactions on Artificial Intelligence, vol. 1, no. 01, pp. 1-12, 2025, ISSN: 2691-4581, (Early Access).
@article{10855683,
title = { SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture },
author = {Genchen Sun and Zhengkun Liu and Lin Gan and Hang Su and Ting Li and Wenfeng Zhao and Biao Sun},
url = {https://doi.ieeecomputersociety.org/10.1109/TAI.2025.3534136},
doi = {10.1109/TAI.2025.3534136},
issn = {2691-4581},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Artificial Intelligence},
volume = {1},
number = {01},
pages = {1-12},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {In recent years, Neural Architecture Search (NAS) has marked significant advancements, yet its efficacy is marred by the dependence on substantial computational resources. To mitigate this, the development of NAS benchmarks has emerged, offering datasets that enumerate all potential network architectures and their performances within a predefined search space. Nonetheless, these benchmarks predominantly focus on convolutional architectures, which are criticized for their limited interpretability and suboptimal hardware efficiency. Recognizing the untapped potential of Spiking Neural Networks (SNNs) — often hailed as the third generation of neural networks for their biological realism and computational thrift — this study introduces SpikeNAS-Bench. As a pioneering benchmark for SNNs, SpikeNAS-Bench utilizes a cell-based search space, integrating leaky integrate-and-fire (LIF) neurons with variable thresholds as candidate operations. It encompasses 15,625 candidate architectures, rigorously evaluated on CIFAR10, CIFAR100 and Tiny-ImageNet datasets. This paper delves into the architectural nuances of SpikeNAS-Bench, leveraging various criteria to underscore the benchmark’s utility and presenting insights that could steer future NAS algorithm designs. Moreover, we assess the benchmark’s consistency through three distinct proxy types: zero-cost-based, early-stop-based, and predictor-based proxies. Additionally, the paper benchmarks seven contemporary NAS algorithms to attest to SpikeNAS-Bench’s broad applicability. We provide training logs and diagnostic data for all candidate architectures, and promise to release all code and datasets post-acceptance, aiming to catalyze further exploration and innovation within the SNN domain. SpikeNAS-Bench is open source at https://github.com/XXX (hidden for double anonymous review).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
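For readers unfamiliar with the candidate operations in this benchmark: a minimal discrete-time leaky integrate-and-fire (LIF) update in NumPy, with the firing threshold exposed as the tunable knob that SpikeNAS-Bench varies. This is the textbook formulation with illustrative parameter values, not the benchmark's own code.

# Minimal discrete-time LIF neuron; v_th plays the role of the "variable
# threshold" candidate-operation parameter. Textbook formulation only.
import numpy as np

def lif_run(inputs, v_th=1.0, tau=2.0, v_reset=0.0):
    """Simulate one LIF neuron over len(inputs) timesteps; return its spike train."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau  # leaky integration toward the input current
        if v >= v_th:          # fire once the membrane crosses the threshold
            spikes.append(1)
            v = v_reset        # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
print(lif_run(rng.uniform(0, 2, size=20), v_th=1.2))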
Li, Changlin; Lin, Sihao; Tang, Tao; Wang, Guangrun; Li, Mingjie; Li, Zhihui; Chang, Xiaojun
BossNAS Family: Block-wisely Self-supervised Neural Architecture Search Journal Article
In: IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 01, pp. 1-15, 2025, ISSN: 1939-3539, (Early Access).
@article{10839629,
title = { BossNAS Family: Block-wisely Self-supervised Neural Architecture Search },
author = {Changlin Li and Sihao Lin and Tao Tang and Guangrun Wang and Mingjie Li and Zhihui Li and Xiaojun Chang},
url = {https://doi.ieeecomputersociety.org/10.1109/TPAMI.2025.3529517},
doi = {10.1109/TPAMI.2025.3529517},
issn = {1939-3539},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Pattern Analysis & Machine Intelligence},
number = {01},
pages = {1-15},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
abstract = {Recent advances in hand-crafted neural architectures for visual recognition underscore the pressing need to explore architecture designs comprising diverse building blocks. Concurrently, neural architecture search (NAS) methods have gained traction as a means to alleviate human efforts. Nevertheless, whether NAS methods can efficiently and effectively manage diversified search spaces featuring disparate candidates, such as Convolutional Neural Networks (CNNs) and transformers, remains an open question. In this work, we introduce a novel unsupervised NAS approach called BossNAS (Block-wisely Self-supervised Neural Architecture Search), which aims to address the problem of inaccurate predictive architecture ranking caused by a large weight-sharing space while mitigating potential ranking issues caused by biased supervision. To achieve this, we factorize the search space into blocks and introduce a novel self-supervised training scheme called Ensemble Bootstrapping to train each block separately in an unsupervised manner. In the search phase, we propose an unsupervised Population-Centric Search that optimizes the candidate architecture towards the population center. Additionally, we enhance our NAS method by integrating masked image modeling and present BossNAS++ to overcome the lack of dense supervision in our block-wise self-supervised NAS. In BossNAS++, we introduce a training technique named Masked Ensemble Bootstrapping for the block-wise supernet, accompanied by a Masked Population-Centric Search scheme to promote fairer architecture selection. Our family of models, discovered through BossNAS and BossNAS++, delivers impressive results across various search spaces and datasets. Our transformer model discovered by BossNAS++ attains a remarkable accuracy of 83.2% on ImageNet with only 10.5B MAdds, surpassing DeiT-B by 1.4% while maintaining a lower computation cost. Moreover, our approach excels in architecture rating accuracy, achieving Spearman correlations of 0.78 and 0.76 on the canonical MBConv search space with ImageNet and the NATS-Bench size search space with CIFAR-100, respectively, outperforming state-of-the-art NAS methods.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2025
Wang, Weiqi; Bao, Feilong; Xing, Zhecong; Lian, Zhe
A Survey: Research Progress of Feature Fusion Technology Journal Article
2025.
@article{wangsurvey,
title = {A Survey: Research Progress of Feature Fusion Technology},
author = {Weiqi Wang and Feilong Bao and Zhecong Xing and Zhe Lian},
url = {http://poster-openaccess.com/files/ICIC2024/862.pdf},
year = {2025},
date = {2025-12-01},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Vacheva, Gergana; Stanchev, Plamen; Hinov, Nikolay
Machine-Generated Neural Networks for Short-Term Load Forecasting Collection
2025.
@collection{nokey,
title = {Machine-Generated Neural Networks for Short-Term Load Forecasting},
author = {Gergana Vacheva and Plamen Stanchev and Nikolay Hinov},
url = {https://unitechsp.tugab.bg/images/2024/1-EE/s1_p143_v1.pdf},
year = {2025},
date = {2025-12-01},
urldate = {2025-12-01},
booktitle = {International Scientific Conference UNITECH`2024},
journal = {International Scientific Conference UNITECH`2024},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Feng, Shiyang; Li, Zhaowei; Zhang, Bo; Chen, Tao
DSF2-NAS: Dual-Stage Feature Fusion via Network Architecture Search for Classification of Multimodal Remote Sensing Images Journal Article
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025.
@article{feng-ieeejstoaeors25a,
title = {DSF2-NAS: Dual-Stage Feature Fusion via Network Architecture Search for Classification of Multimodal Remote Sensing Images},
author = {Shiyang Feng and Zhaowei Li and Bo Zhang and Tao Chen},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10904332},
year = {2025},
date = {2025-03-01},
urldate = {2025-03-01},
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Chaudhary, Priyanka Rushikesh; Agrawal, Anand; Maiti, Rajib Ranjan
TinyDevID: TinyML-Driven IoT Devices IDentification Using Network Flow Data Collection
2025.
@collection{Rushikesh-csp25a,
title = {TinyDevID: TinyML-Driven IoT Devices IDentification Using Network Flow Data},
author = {Priyanka Rushikesh Chaudhary and Anand Agrawal and Rajib Ranjan Maiti},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10885715},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
booktitle = {COMSNETS 2025 - Cybersecurity & Privacy Workshop (CSP)},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Yu, Sixing
Scalable and Resource-Efficient Federated Learning: Techniques for Resource-Constrained Heterogeneous Systems PhD Thesis
2025.
@phdthesis{yu-phd25a,
title = {Scalable and resource-efficient federated learning: Techniques for resource-constrained heterogeneous systems},
author = {Sixing Yu},
url = {https://www.proquest.com/docview/3165602177?pq-origsite=gscholar&fromopenview=true&sourcetype=Dissertations%20&%20Theses},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Fu, Jintao; Cong, Peng; Xu, Shuo; Chang, Jiahao; Liu, Ximing; Sun, Yuewen
Neural architecture search with Deep Radon Prior for sparse-view CT image reconstruction Journal Article
In: Med Phys, 2025.
@article{Fu-medphs25a,
title = {Neural architecture search with Deep Radon Prior for sparse-view CT image reconstruction},
author = {Jintao Fu and Peng Cong and Shuo Xu and Jiahao Chang and Ximing Liu and Yuewen Sun},
url = {https://pubmed.ncbi.nlm.nih.gov/39930320/},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
journal = {Med Phys},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, Yi-Heng; Pang, Shen-Wen; Huang, Heng-Zhi; Wu, Shao-Wen; Sun, Shao-Hua; Liu, Zhen-Bing; Pan, Zhi-Chao
Automatic clustering of single-molecule break junction data through task-oriented representation learning Journal Article
In: Rare Metals, 2025.
@article{zhao-rarem25a,
title = {Automatic clustering of single-molecule break junction data through task-oriented representation learning},
author = {Yi-Heng Zhao and Shen-Wen Pang and Heng-Zhi Huang and Shao-Wen Wu and Shao-Hua Sun and Zhen-Bing Liu and Zhi-Chao Pan},
url = {https://link.springer.com/article/10.1007/s12598-024-03089-7},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
journal = {Rare Metals},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Huang, Tao
Efficient Deep Neural Architecture Design and Training PhD Thesis
2025.
@phdthesis{nokey,
title = {Efficient Deep Neural Architecture Design and Training},
author = {Huang, Tao},
url = {https://ses.library.usyd.edu.au/handle/2123/33598},
year = {2025},
date = {2025-02-01},
urldate = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
Herterich, Nils; Liu, Kai; Stein, Anthony
Multi-objective neural architecture search for real-time weed detection on embedded system Miscellaneous
2025.
@misc{Herterich,
title = {Multi-objective neural architecture search for real-time weed detection on embedded system},
author = {Nils Herterich and Kai Liu and Anthony Stein},
url = {https://dl.gi.de/server/api/core/bitstreams/29a49f8d-304e-4073-8a92-4bef6483c087/content},
year = {2025},
date = {2025-02-01},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Tabak, Gabriel Couto; Molenaar, Dylan; Curi, Mariana
An evolutionary neural architecture search for item response theory autoencoders Journal Article
In: Behaviormetrika, 2025.
@article{nokey,
title = {An evolutionary neural architecture search for item response theory autoencoders},
author = {Gabriel Couto Tabak and Dylan Molenaar and Mariana Curi},
url = {https://link.springer.com/article/10.1007/s41237-024-00250-5},
year = {2025},
date = {2025-01-27},
urldate = {2025-01-27},
journal = {Behaviormetrika},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Hao, Debei; Pei, Songwei
MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation Journal Article
In: Neural Computing and Applications, 2025.
@article{nokey,
title = {MIG-DARTS: towards effective differentiable architecture search by gradually mitigating the initial-channel gap between search and evaluation},
author = {Debei Hao and Songwei Pei},
url = {https://link.springer.com/article/10.1007/s00521-024-10681-6},
year = {2025},
date = {2025-01-09},
urldate = {2025-01-09},
journal = {Neural Computing and Applications},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhao, Yiwei; Chen, Jinhui; Zhang, Sai Qian; Sarwar, Syed Shakib; Stangherlin, Kleber Hugo; Gomez, Jorge Tomas; Seo, Jae-Sun; De Salvo, Barbara; Liu, Chiao; Gibbons, Phillip B.; Li, Ziyun
H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications Collection
2025.
@collection{nokey,
title = {H4H: Hybrid Convolution-Transformer Architecture Search for NPU-CIM Heterogeneous Systems for AR/VR Applications},
author = {Yiwei Zhao and Jinhui Chen and Sai Qian Zhang and Syed Shakib Sarwar and Kleber Hugo Stangherlin and Jorge Tomas Gomez and Jae-Sun Seo and Barbara De Salvo and Chiao Liu and Phillip B. Gibbons and Ziyun Li},
url = {https://www.pdl.cmu.edu/PDL-FTP/associated/ASP-DAC2025-1073-12.pdf},
year = {2025},
date = {2025-01-02},
urldate = {2025-01-02},
booktitle = {ASPDAC ’25},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
Chouhan, Avinash; Chutia, Dibyajyoti; Deb, Biswarup; Aggarwal, Shiv Prasad
Attention-Based Neural Architecture Search for Effective Semantic Segmentation of Satellite Images Proceedings Article
In: Noor, Arti; Saroha, Kriti; Pricop, Emil; Sen, Abhijit; Trivedi, Gaurav (Ed.): Emerging Trends and Technologies on Intelligent Systems, pp. 325–335, Springer Nature Singapore, Singapore, 2025, ISBN: 978-981-97-5703-9.
@inproceedings{10.1007/978-981-97-5703-9_28,
title = {Attention-Based Neural Architecture Search for Effective Semantic Segmentation of Satellite Images},
author = {Avinash Chouhan and Dibyajyoti Chutia and Biswarup Deb and Shiv Prasad Aggarwal},
editor = {Arti Noor and Kriti Saroha and Emil Pricop and Abhijit Sen and Gaurav Trivedi},
url = {https://link.springer.com/chapter/10.1007/978-981-97-5703-9_28},
isbn = {978-981-97-5703-9},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {Emerging Trends and Technologies on Intelligent Systems},
pages = {325–335},
publisher = {Springer Nature Singapore},
address = {Singapore},
abstract = {Semantic segmentation is an important activity in satellite image analysis. The manual design and development of neural architectures for semantic segmentation is very tedious and can result in computationally heavy architectures with redundant computation. Neural architecture search (NAS) produces automated network architectures for a given task considering computational cost and other parameters. In this work, we propose an attention-based neural architecture search (ANAS), which uses attention layers at the cell level for effective and efficient architecture design for semantic segmentation. The proposed ANAS achieves better results than previous NAS-based work on two benchmark datasets.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Cai, Zicheng; Tang, Yaohua; Lai, Yutao; Wang, Hua; Chen, Zhi; Chen, Hao
SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models Technical Report
2025.
@techreport{cai2025sekiselfevolutionknowledgeinspiration,
title = {SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models},
author = {Zicheng Cai and Yaohua Tang and Yutao Lai and Hua Wang and Zhi Chen and Hao Chen},
url = {https://arxiv.org/abs/2502.20422},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Rumiantsev, Pavel; Coates, Mark
Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation Technical Report
2025.
@techreport{rumiantsev2025variationmattersmitigatingembracing,
title = {Variation Matters: from Mitigating to Embracing Zero-Shot NAS Ranking Function Variation},
author = {Pavel Rumiantsev and Mark Coates},
url = {https://arxiv.org/abs/2502.19657},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Ding, Zhenyang; Pu, Ninghao; Miao, Qihui; Chen, Zhiqiang; Xu, Yifan; Liu, Hao
Efficient Palm Vein Recognition Optimized by Neural Architecture Search and Hybrid Compression Proceedings Article
In: 2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI), pp. 826-832, 2025.
@inproceedings{10894245,
title = {Efficient Palm Vein Recognition Optimized by Neural Architecture Search and Hybrid Compression},
author = {Zhenyang Ding and Ninghao Pu and Qihui Miao and Zhiqiang Chen and Yifan Xu and Hao Liu},
url = {https://ieeexplore.ieee.org/abstract/document/10894245},
doi = {10.1109/ICMSCI62561.2025.10894245},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
booktitle = {2025 International Conference on Multi-Agent Systems for Collaborative Intelligence (ICMSCI)},
pages = {826-832},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Ang, Li-Minn; Su, Yuanxin; Seng, Kah Phooi; Smith, Jeremy S.
Customized Binary Convolutional Neural Networks and Neural Architecture Search on Hardware Technologies Journal Article
In: IEEE Nanotechnology Magazine, pp. 1-8, 2025.
@article{10904266,
title = {Customized Binary Convolutional Neural Networks and Neural Architecture Search on Hardware Technologies},
author = {Li-Minn Ang and Yuanxin Su and Kah Phooi Seng and Jeremy S. Smith},
url = {https://ieeexplore.ieee.org/abstract/document/10904266},
doi = {10.1109/MNANO.2025.3533937},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Nanotechnology Magazine},
pages = {1-8},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lu, Kang-Di; Huang, Jia-Cheng; Zeng, Guo-Qiang; Chen, Min-Rong; Geng, Guang-Gang; Weng, Jian
Multi-Objective Discrete Extremal Optimization of Variable-Length Blocks-Based CNN by Joint NAS and HPO for Intrusion Detection in IIoT Journal Article
In: IEEE Transactions on Dependable and Secure Computing, pp. 1-18, 2025.
@article{10902222,
title = {Multi-Objective Discrete Extremal Optimization of Variable-Length Blocks-Based CNN by Joint NAS and HPO for Intrusion Detection in IIoT},
author = {Kang-Di Lu and Jia-Cheng Huang and Guo-Qiang Zeng and Min-Rong Chen and Guang-Gang Geng and Jian Weng},
url = {https://ieeexplore.ieee.org/abstract/document/10902222},
doi = {10.1109/TDSC.2025.3545363},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Dependable and Secure Computing},
pages = {1-18},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Li, Chunchao; Li, Jun; Peng, Mingrui; Rasti, Behnood; Duan, Puhong; Tang, Xuebin; Ma, Xiaoguang
Low-Latency Neural Network for Efficient Hyperspectral Image Classification Journal Article
In: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. PP, pp. 1-17, 2025.
@article{articleg,
title = {Low-Latency Neural Network for Efficient Hyperspectral Image Classification},
author = {Chunchao Li and Jun Li and Mingrui Peng and Behnood Rasti and Puhong Duan and Xuebin Tang and Xiaoguang Ma},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10900438},
doi = {10.1109/JSTARS.2025.3544583},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
volume = {PP},
pages = {1-17},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
İlker, Günay; Özkan, İnik
SADASNet: A Selective and Adaptive Deep Architecture Search Network with Hyperparameter Optimization for Robust Skin Cancer Classification Journal Article
In: Diagnostics, vol. 15, no. 5, 2025, ISSN: 2075-4418.
@article{diagnostics15050541,
title = {SADASNet: A Selective and Adaptive Deep Architecture Search Network with Hyperparameter Optimization for Robust Skin Cancer Classification},
author = {Günay İlker and İnik Özkan},
url = {https://www.mdpi.com/2075-4418/15/5/541},
doi = {10.3390/diagnostics15050541},
issn = {2075-4418},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Diagnostics},
volume = {15},
number = {5},
abstract = {Background/Objectives: Skin cancer is a major public health concern, where early diagnosis and effective treatment are essential for prevention. To enhance diagnostic accuracy, researchers have increasingly utilized computer vision systems, with deep learning-based approaches becoming the primary focus in recent studies. Nevertheless, there is a notable research gap in the effective optimization of hyperparameters to design optimal deep learning architectures, given the need for high accuracy and lower computational complexity. Methods: This paper puts forth a robust metaheuristic optimization-based approach to develop novel deep learning architectures for multi-class skin cancer classification. This method, designated as the SADASNet (Selective and Adaptive Deep Architecture Search Network by Hyperparameter Optimization) algorithm, is developed based on the Particle Swarm Optimization (PSO) technique. The SADASNet method is adapted to the HAM10000 dataset. Innovative data augmentation techniques are applied to overcome class imbalance issues and enhance the performance of the model. The SADASNet method has been developed to accommodate a range of image sizes, and six different original deep learning models have been produced as a result. Results: The models achieved the following highest performance metrics: 99.31% accuracy, 97.58% F1 score, 97.57% recall, 97.64% precision, and 99.59% specificity. Compared to the most advanced competitors reported in the literature, the proposed method demonstrates superior performance in terms of accuracy and computational complexity. Furthermore, it maintains a broad solution space during parameter optimization. Conclusions: With these outcomes, this method aims to enhance the classification of skin cancer and contribute to the advancement of deep learning.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Mecharbat, Lotfi Abdelkrim; Marchisio, Alberto; Shafique, Muhammad; Ghassemi, Mohammad M.; Alhanai, Tuka
MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly Accurate, Fair, and Robust Edge Deep Neural Networks Technical Report
2025.
@techreport{mecharbat2025moenasmixtureofexpertbasedneuralb,
title = {MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly Accurate, Fair, and Robust Edge Deep Neural Networks},
author = {Lotfi Abdelkrim Mecharbat and Alberto Marchisio and Muhammad Shafique and Mohammad M. Ghassemi and Tuka Alhanai},
url = {https://arxiv.org/abs/2502.07422},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Xu, Liming; Zheng, Jie; He, Chunlin; Wang, Jing; Zheng, Bochuan; Lv, Jiancheng
Adaptive Multi-particle Swarm Neural Architecture Search for High-incidence Cancer Prediction Journal Article
In: IEEE Transactions on Artificial Intelligence, pp. 1-12, 2025.
@article{10896623,
title = {Adaptive Multi-particle Swarm Neural Architecture Search for High-incidence Cancer Prediction},
author = {Liming Xu and Jie Zheng and Chunlin He and Jing Wang and Bochuan Zheng and Jiancheng Lv},
url = {https://ieeexplore.ieee.org/abstract/document/10896623},
doi = {10.1109/TAI.2025.3543822},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Artificial Intelligence},
pages = {1-12},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zou, Juan; Liu, Yang; Liu, Yuan; Xia, Yizhang
Evolutionary multi-objective neural architecture search via depth equalization supernet Journal Article
In: Neurocomputing, pp. 129674, 2025, ISSN: 0925-2312.
@article{ZOU2025129674,
title = {Evolutionary multi-objective neural architecture search via depth equalization supernet},
author = {Juan Zou and Yang Liu and Yuan Liu and Yizhang Xia},
url = {https://www.sciencedirect.com/science/article/pii/S0925231225003467},
doi = {10.1016/j.neucom.2025.129674},
issn = {0925-2312},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neurocomputing},
pages = {129674},
abstract = {To provide a diverse selection of models suitable for different application scenarios, neural architecture search (NAS) is constructed as a multi-objective optimization problem aiming to simultaneously optimize multiple metrics such as model size and accuracy. Evolutionary algorithms (EAs) have been shown to be an effective multi-objective approach that can balance different metrics. However, EAs require many evaluations, and the evaluation of architectures is expensive. Training a supernet to evaluate architectures is considered a promising way to reduce the cost of EAs, but many challenges remain in applying supernets to multi-objective NAS: (1) a supernet tends to give higher scores to shallower architectures, causing potentially strong deeper architectures to be ignored; (2) the receptive field of an architecture differs considerably between search and evaluation, causing a decrease in performance; (3) larger models are gradually eliminated during evolution, leading to a diversity disaster. We propose a framework called DESEvo to solve these problems in this paper. DESEvo trains a depth equalization supernet to counteract the supernet's bias via a frequency rejection sampling method. In addition, DESEvo adaptively constrains the receptive field of architectures to reduce the gap between search and evaluation. Finally, DESEvo employs a diversity-preserving strategy to enhance diversity. Experimental results validate the efficiency and effectiveness of the algorithm: DESEvo can search a set of architectures that are more competitive than those of other state-of-the-art algorithms within 0.2 days, making it the most efficient supernet-based multi-objective NAS method.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
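A sketch of the frequency rejection sampling idea from the abstract above: candidate depths drawn from a deliberately shallow-biased proposal (standing in for a biased supernet sampler) are accepted with probability inversely proportional to how often that depth has already been drawn, which flattens the depth histogram. This NumPy snippet illustrates the general technique under assumed settings and is not DESEvo's actual sampler.

# Depth-equalizing rejection sampling: reject over-represented depths so the
# accepted sample covers shallow and deep architectures evenly. Assumed
# proposal distribution; illustration only.
import numpy as np

rng = np.random.default_rng(3)
MAX_DEPTH = 8
proposal_p = np.array([0, 0, 0.3, 0.25, 0.2, 0.1, 0.08, 0.05, 0.02])  # shallow-biased
depth_counts = np.ones(MAX_DEPTH + 1)  # Laplace-smoothed frequency of accepted depths

def sample_depth_equalized(n):
    accepted = []
    while len(accepted) < n:
        depth = rng.choice(MAX_DEPTH + 1, p=proposal_p)
        accept_p = depth_counts[2:].min() / depth_counts[depth]
        if rng.random() < accept_p:  # keep under-represented depths more often
            accepted.append(depth)
            depth_counts[depth] += 1
    return accepted

print(np.bincount(sample_depth_equalized(200), minlength=MAX_DEPTH + 1))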
Zhang, Zheyu; Zhang, Yueyi; Sun, Xiaoyan
Denoising Designs-inherited Search Framework for Image Denoising Technical Report
2025.
@techreport{zhang2025denoisingdesignsinheritedsearchframework,
title = {Denoising Designs-inherited Search Framework for Image Denoising},
author = {Zheyu Zhang and Yueyi Zhang and Xiaoyan Sun},
url = {https://arxiv.org/abs/2502.13359},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Deng, Difan; Lindauer, Marius
Neural Attention Search Technical Report
2025.
@techreport{deng2025neuralattentionsearch,
title = {Neural Attention Search},
author = {Difan Deng and Marius Lindauer},
url = {https://arxiv.org/abs/2502.13251},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Njor, Emil; Banbury, Colby; Fafoutis, Xenofon
Fast Data Aware Neural Architecture Search via Supernet Accelerated Evaluation Technical Report
2025.
@techreport{njor2025fastdataawareneural,
title = {Fast Data Aware Neural Architecture Search via Supernet Accelerated Evaluation},
author = {Emil Njor and Colby Banbury and Xenofon Fafoutis},
url = {https://arxiv.org/abs/2502.12690},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Feng, Yuqi; Sun, Yanan; Yen, Gary G.; Tan, Kay Chen
REP: An Interpretable Robustness Enhanced Plugin for Differentiable Neural Architecture Search Journal Article
In: IEEE Transactions on Knowledge and Data Engineering, pp. 1-15, 2025.
@article{10892073,
title = {REP: An Interpretable Robustness Enhanced Plugin for Differentiable Neural Architecture Search},
author = {Yuqi Feng and Yanan Sun and Gary G. Yen and Kay Chen Tan},
url = {https://ieeexplore.ieee.org/abstract/document/10892073},
doi = {10.1109/TKDE.2025.3543503},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Knowledge and Data Engineering},
pages = {1-15},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kim, Hyeonah; Choi, Sanghyeok; Son, Jiwoo; Park, Jinkyoo; Kwon, Changhyun
Neural Genetic Search in Discrete Spaces Technical Report
2025.
@techreport{kim2025neuralgeneticsearchdiscrete,
title = {Neural Genetic Search in Discrete Spaces},
author = {Hyeonah Kim and Sanghyeok Choi and Jiwoo Son and Jinkyoo Park and Changhyun Kwon},
url = {https://arxiv.org/abs/2502.10433},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Kuhn, Lukas; Saba-Sadiya, Sari; Roig, Gemma
Cognitive Neural Architecture Search Reveals Hierarchical Entailment Technical Report
2025.
@techreport{kuhn2025cognitiveneuralarchitecturesearch,
title = {Cognitive Neural Architecture Search Reveals Hierarchical Entailment},
author = {Lukas Kuhn and Sari Saba-Sadiya and Gemma Roig},
url = {https://arxiv.org/abs/2502.11141},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Gao, Yang; Yang, Hong; Chen, Yizhi; Wu, Junxian; Zhang, Peng; Wang, Haishuai
LLM4GNAS: A Large Language Model Based Toolkit for Graph Neural Architecture Search Technical Report
2025.
@techreport{gao2025llm4gnaslargelanguagemodel,
title = {LLM4GNAS: A Large Language Model Based Toolkit for Graph Neural Architecture Search},
author = {Yang Gao and Hong Yang and Yizhi Chen and Junxian Wu and Peng Zhang and Haishuai Wang},
url = {https://arxiv.org/abs/2502.10459},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Yin, Shantong; Niu, Ben; Wang, Rui; Wang, Xin
Spatial and channel level feature redundancy reduction for differentiable neural architecture search Journal Article
In: Neurocomputing, vol. 630, pp. 129713, 2025, ISSN: 0925-2312.
@article{YIN2025129713,
title = {Spatial and channel level feature redundancy reduction for differentiable neural architecture search},
author = {Shantong Yin and Ben Niu and Rui Wang and Xin Wang},
url = {https://www.sciencedirect.com/science/article/pii/S0925231225003856},
doi = {10.1016/j.neucom.2025.129713},
issn = {0925-2312},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Neurocomputing},
volume = {630},
pages = {129713},
abstract = {Differentiable architecture search (DARTS), based on the continuous relaxation of the architectural representation and gradient descent, achieves effective results in the Neural Architecture Search (NAS) field. Among neural architectures, convolutional neural networks (CNNs) have achieved remarkable performance in various computer vision tasks. However, convolutional layers inevitably extract redundant features due to the weight-sharing property of convolutional kernels, which slows down the search efficiency of DARTS. In this paper, we propose a novel search approach named Slim-DARTS that reduces feature redundancy to achieve high-speed and efficient neural architecture search. At the level of spatial redundancy, we design a spatial reconstruction module to eliminate spatial feature redundancy and facilitate representative feature learning. At the channel redundancy level, partial channel connection is applied to randomly sample a small subset of channels for operation selection, reducing unfair competition among candidate operations. We also introduce a group of channel parameters to automatically adjust the proportion of selected channels. The experimental results show that our research greatly improves search efficiency and memory utilization, achieving classification error rates of 2.39% and 16.78% on CIFAR-10 and CIFAR-100, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
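The partial channel connection mentioned above routes only a 1/k fraction of channels through the weighted mixture of candidate operations while the remaining channels bypass it. Below is a minimal PyTorch sketch in the spirit of PC-DARTS-style partial connections; the operation set, the choice k=4, and the omission of channel shuffling are assumptions, and this is not the Slim-DARTS release.

# Minimal partial-channel mixed operation for differentiable NAS: only the
# first 1/k of the channels pass through the alpha-weighted candidate ops.
import torch
import torch.nn as nn

class PartialMixedOp(nn.Module):
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        c = channels // k
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1, bias=False),  # candidate op: 3x3 conv
            nn.Conv2d(c, c, 5, padding=2, bias=False),  # candidate op: 5x5 conv
            nn.AvgPool2d(3, stride=1, padding=1),       # candidate op: avg pool
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x):
        c = x.size(1) // self.k
        x_active, x_bypass = x[:, :c], x[:, c:]  # sample a 1/k channel subset
        w = torch.softmax(self.alpha, dim=0)
        mixed = sum(wi * op(x_active) for wi, op in zip(w, self.ops))
        return torch.cat([mixed, x_bypass], dim=1)  # bypass the other channels

op = PartialMixedOp(channels=16, k=4)
print(op(torch.randn(2, 16, 8, 8)).shape)  # torch.Size([2, 16, 8, 8])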
Cranney, Caleb; Meyer, Jesse G.
AttentionSmithy: A Modular Framework for Rapid Transformer Development and Customization Technical Report
2025.
@techreport{cranney2025attentionsmithymodularframeworkrapid,
title = {AttentionSmithy: A Modular Framework for Rapid Transformer Development and Customization},
author = {Caleb Cranney and Jesse G. Meyer},
url = {https://arxiv.org/abs/2502.09503},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Cheng, Jian; Jiang, Jinbo; Kang, Haidong; Ma, Lianbo
A Hybrid Neural Architecture Search Algorithm Optimized via Lifespan Particle Swarm Optimization for Coal Mine Image Recognition Journal Article
In: Mathematics, vol. 13, no. 4, 2025, ISSN: 2227-7390.
@article{math13040631,
title = {A Hybrid Neural Architecture Search Algorithm Optimized via Lifespan Particle Swarm Optimization for Coal Mine Image Recognition},
author = {Jian Cheng and Jinbo Jiang and Haidong Kang and Lianbo Ma},
url = {https://www.mdpi.com/2227-7390/13/4/631},
doi = {10.3390/math13040631},
issn = {2227-7390},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Mathematics},
volume = {13},
number = {4},
abstract = {Coal mine scene image recognition plays an important role in safety monitoring and equipment detection. However, traditional methods often depend on manually designed neural network architectures. These models struggle to handle the complex backgrounds, low illumination, and diverse objects commonly found in coal mine environments. Manual designs are not only inefficient but also restrict the exploration of optimal architectures, resulting in subpar performance. To address these challenges, we propose using neural architecture search (NAS) to automate the design of neural networks. Traditional NAS methods are known to be computationally expensive. To improve this, we enhance the process by incorporating Particle Swarm Optimization (PSO), a scalable algorithm that effectively balances global and local searches. To further enhance PSO’s efficiency, we integrate the lifespan mechanism, which prevents premature convergence and enables a more comprehensive exploration of the search space. Our proposed method establishes a flexible search space that includes various types of convolutional layers, activation functions, pooling operations, and network depths, enabling a comprehensive optimization process. Extensive experiments show that the Lifespan-PSO NAS method outperforms traditional manually designed networks and standard PSO-based NAS approaches, offering significant improvements in both recognition accuracy (improved by 10%) and computational efficiency (resource usage reduced by 30%). This makes it a highly effective and efficient solution for real-world coal mine image recognition tasks.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
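To make the search loop concrete, here is a minimal binary PSO over an architecture bit-string with a toy fitness function in NumPy. The lifespan mechanism and the real coal-mine objective are omitted, and the inertia and acceleration coefficients are conventional placeholder values, so this is an illustration of plain binary PSO rather than the paper's method.

# Minimal binary PSO: positions are bit-strings, velocities pass through a
# sigmoid to give per-bit sampling probabilities. Toy fitness only.
import numpy as np

rng = np.random.default_rng(1)
D, N, STEPS = 12, 20, 50                    # bits per encoding, particles, iterations
fitness = lambda bits: bits.sum()           # toy objective: maximize the number of 1s

X = rng.integers(0, 2, size=(N, D)).astype(float)  # particle positions
V = rng.normal(0, 1, size=(N, D))                  # velocities
pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(STEPS):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = (rng.random((N, D)) < 1 / (1 + np.exp(-V))).astype(float)  # sigmoid sampling
    f = np.array([fitness(x) for x in X])
    better = f > pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print(gbest, pbest_f.max())  # drifts toward the all-ones string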
Krishnanunni, C G; Bui-Thanh, Tan; Dawson, Clint
Topological derivative approach for deep neural network architecture adaptation Technical Report
2025.
@techreport{krishnanunni2025topologicalderivativeapproachdeep,
title = {Topological derivative approach for deep neural network architecture adaptation},
author = {C G Krishnanunni and Tan Bui-Thanh and Clint Dawson},
url = {https://arxiv.org/abs/2502.06885},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
Wang, Zilong; Liang, Pei; Zhai, Jinglei; Wu, Bei; Chen, Xin; Ding, Fan; Chen, Qiang; Sun, Biao
Efficient detection of foodborne pathogens via SERS and deep learning: An ADMIN-optimized NAS-Unet approach Journal Article
In: Journal of Hazardous Materials, vol. 489, pp. 137581, 2025, ISSN: 0304-3894.
@article{WANG2025137581,
title = {Efficient detection of foodborne pathogens via SERS and deep learning: An ADMIN-optimized NAS-Unet approach},
author = {Zilong Wang and Pei Liang and Jinglei Zhai and Bei Wu and Xin Chen and Fan Ding and Qiang Chen and Biao Sun},
url = {https://www.sciencedirect.com/science/article/pii/S0304389425004959},
doi = {10.1016/j.jhazmat.2025.137581},
issn = {0304-3894},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Journal of Hazardous Materials},
volume = {489},
pages = {137581},
abstract = {Amid the increasing global challenge of foodborne diseases, there is an urgent need for rapid and precise pathogen detection methods. This study innovatively integrates surface-enhanced Raman Spectroscopy (SERS) with deep learning technology to develop an efficient tool for the detection of foodborne pathogens. Utilizing an automated design of mixed networks (ADMIN) strategy, coupled with neural architecture search (NAS) technology, we optimized convolutional neural networks (CNNs) architectures, significantly enhancing SERS data analysis capabilities. This research introduces the U-Net architecture and attention mechanisms, which improve not only classification accuracy but also the model's ability to identify critical spectral features. Compared to traditional detection methods, our approach demonstrates significant advantages in accuracy. In testing samples from 22 foodborne pathogens, the optimized NAS-Unet model achieved an average precision of 92.77 %, surpassing current technologies. Additionally, we explored how different network depths affect classification performance and validated the model's generalization capabilities on the Bacteria-ID dataset, laying the groundwork for practical applications. Our study provides an innovative detection approach for the food safety sector and opens new avenues for applying deep learning technologies in microbiology. Looking ahead, we aim to further explore diverse network modules to enhance model generalization and promote the application of these technologies in real-world food safety testing, playing a crucial role in the fight against foodborne diseases.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lupión, M.; Cruz, N. C.; Ortigosa, E. M.; Ortigosa, P. M.
A holistic approach for resource-constrained neural network architecture search Journal Article
In: Applied Soft Computing, vol. 172, pp. 112832, 2025, ISSN: 1568-4946.
@article{LUPION2025112832,
title = {A holistic approach for resource-constrained neural network architecture search},
author = {M. Lupión and N. C. Cruz and E. M. Ortigosa and P. M. Ortigosa},
url = {https://www.sciencedirect.com/science/article/pii/S1568494625001437},
doi = {10.1016/j.asoc.2025.112832},
issn = {1568-4946},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Applied Soft Computing},
volume = {172},
pages = {112832},
abstract = {The design of Artificial Neural Networks (ANNs) is critical for their performance. The research field of Neural Architecture Search (NAS) investigates automated design strategies. This work proposes a novel NAS stack that stands out in three facets. First, the representation scheme encodes problem-specific ANNs as plain vectors of numbers without needing auxiliary conversion models. Second, it is a pioneer in relying on the TLBO meta-heuristic. This optimizer supports large-scale problems and only expects two parameters, contrasting with other meta-heuristics used for NAS. Third, the stack includes a new evaluation predictor that avoids evaluating non-promising architectures. It combines several machine learning methods that train as the optimizer evaluates solutions, which avoids preparing this component in advance and makes it self-adaptive. The proposal has been tested by using it to build a CIFAR-10 classifier while forcing the architecture to have fewer than 150,000 parameters, assuming that the resulting network must be deployed in a resource-constrained IoT device. The designs found with and without the predictor achieve validation accuracies of 78.68% and 80.65%, respectively. Both outperform a larger model from the recent literature. The predictor slightly constrains the evolution of solutions, but it approximately halves the computational effort. After extending the test to the CIFAR-100 dataset, the proposal achieves a validation accuracy of 65.43% with 478,006 parameters in its fastest configuration, competing with current results in the literature.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Franco-Gaona, Erick; Avila-Garcia, Maria Susana; Cruz-Aceves, Ivan
Automatic Neural Architecture Search Based on an Estimation of Distribution Algorithm for Binary Classification of Image Databases Journal Article
In: Mathematics, vol. 13, no. 4, 2025, ISSN: 2227-7390.
@article{math13040605,
title = {Automatic Neural Architecture Search Based on an Estimation of Distribution Algorithm for Binary Classification of Image Databases},
author = {Erick Franco-Gaona and Maria Susana Avila-Garcia and Ivan Cruz-Aceves},
url = {https://www.mdpi.com/2227-7390/13/4/605},
doi = {10.3390/math13040605},
issn = {2227-7390},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Mathematics},
volume = {13},
number = {4},
abstract = {Convolutional neural networks (CNNs) are widely used for image classification; however, setting the appropriate hyperparameters before training is subjective and time-consuming, and the search space is not properly explored. This paper presents a novel method for automatic neural architecture search based on an estimation of distribution algorithm (EDA) for binary classification problems. The hyperparameters were coded in binary form due to the nature of the metaheuristics used in the automatic search stage of CNN architectures, which was performed using the Boltzmann Univariate Marginal Distribution Algorithm (BUMDA), chosen by statistical comparison among four metaheuristics to explore the search space, whose computational complexity is O(2^29). Moreover, the proposed method is compared with multiple state-of-the-art methods on five databases, testing its efficiency in terms of accuracy and F1-score. In the experimental results, the proposed method achieved an F1-score of 97.2%, 98.73%, 97.23%, 98.36%, and 98.7% in its best evaluation, better results than those reported in the literature. Finally, the computational time of the proposed method for the test set was ≈0.6 s, 1 s, 0.7 s, 0.5 s, and 0.1 s, respectively.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
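BUMDA belongs to the family of estimation of distribution algorithms (EDAs) over binary encodings. The NumPy sketch below shows the generic univariate-marginal loop (sample a population, select the best encodings, re-estimate per-bit probabilities); real BUMDA uses Boltzmann-weighted selection, simplified here to truncation selection, and the fitness is a toy stand-in for a CNN validation score.

# Generic univariate-marginal EDA over binary hyperparameter strings.
# Truncation selection replaces BUMDA's Boltzmann weighting; toy fitness.
import numpy as np

rng = np.random.default_rng(7)
D, POP, ELITE, GENS = 16, 40, 10, 30
fitness = lambda x: x.sum()                 # stand-in for a CNN validation score

p = np.full(D, 0.5)                         # per-bit marginal probabilities
for _ in range(GENS):
    pop = (rng.random((POP, D)) < p).astype(int)    # sample candidate encodings
    scores = np.array([fitness(x) for x in pop])
    elite = pop[np.argsort(scores)[-ELITE:]]        # keep the best encodings
    p = elite.mean(axis=0).clip(0.05, 0.95)         # re-estimate the marginals

print(np.round(p, 2))  # marginals drift toward 1 for every useful bit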
Ma, Lianbo; Zhou, Yuee; Ma, Ye; Yu, Guo; Li, Qing; He, Qiang; Pei, Yan
Defying Multi-model Forgetting in One-shot Neural Architecture Search Using Orthogonal Gradient Learning Journal Article
In: IEEE Transactions on Computers, pp. 1-13, 2025.
@article{10880105,
title = {Defying Multi-model Forgetting in One-shot Neural Architecture Search Using Orthogonal Gradient Learning},
author = {Lianbo Ma and Yuee Zhou and Ye Ma and Guo Yu and Qing Li and Qiang He and Yan Pei},
url = {https://ieeexplore.ieee.org/abstract/document/10880105},
doi = {10.1109/TC.2025.3540650},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {IEEE Transactions on Computers},
pages = {1-13},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lu, Aojun; Ke, Junchao; Ding, Chunhui; Fan, Jiahao; Sun, Yanan
Position: Continual Learning Benefits from An Evolving Population over An Unified Model Technical Report
2025.
@techreport{lu2025positioncontinuallearningbenefits,
title = {Position: Continual Learning Benefits from An Evolving Population over An Unified Model},
author = {Aojun Lu and Junchao Ke and Chunhui Ding and Jiahao Fan and Yanan Sun},
url = {https://arxiv.org/abs/2502.06210},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
V, Srilakshmi; G, Uday Kiran; Moulika, B; Mahitha, G S; Laukya, G; Ruthick, M
Integrating NAS for Human Pose Estimation Journal Article
In: Procedia Computer Science, vol. 252, pp. 182-191, 2025, ISSN: 1877-0509, (4th International Conference on Evolutionary Computing and Mobile Sustainable Networks).
@article{V2025182,
title = {Integrating NAS for Human Pose Estimation},
author = {Srilakshmi V and Uday Kiran G and B Moulika and G S Mahitha and G Laukya and M Ruthick},
url = {https://www.sciencedirect.com/science/article/pii/S1877050924034525},
doi = {10.1016/j.procs.2024.12.020},
issn = {1877-0509},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Procedia Computer Science},
volume = {252},
pages = {182-191},
abstract = {Neural Architecture Search (NAS) technologies have become popular across various fields, allowing for the joint learning of neural network architectures and weights. However, most existing NAS methods are task-specific, focusing on optimizing a single architecture to replace human-designed networks while often neglecting domain knowledge. This paper introduces Pose Neural Fabrics Search (PoseNFS), a unique NAS framework that integrates domain knowledge via part-specific neural architecture search—a form of multi-task learning—for human posture estimation. PoseNFS utilizes a novel search space called Cell-based Neural Fabric (CNF), employing a differentiable search approach to facilitate learning at both micro and macro levels. By utilizing prior knowledge of human body structure, PoseNFS directs the search for part-specific architectures personalized to different body components, treating the localization of human key points as multiple disentangled sub-tasks. Experimental results on the MPII and MS-COCO datasets demonstrate that PoseNFS significantly outperforms a manually designed part-based baseline model and several state-of-the-art methods, validating the effectiveness of this knowledge-guided strategy.},
note = {4th International Conference on Evolutionary Computing and Mobile Sustainable Networks},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Týbl, Ondřej; Neumann, Lukáš
Training-free Neural Architecture Search through Variance of Knowledge of Deep Network Weights Technical Report
2025.
@techreport{týbl2025trainingfreeneuralarchitecturesearch,
title = {Training-free Neural Architecture Search through Variance of Knowledge of Deep Network Weights},
author = {Ondřej Týbl and Lukáš Neumann},
url = {https://arxiv.org/abs/2502.04975},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
keywords = {},
pubstate = {published},
tppubtype = {techreport}
}
G, Uday Kiran; V, Srilakshmi; G, Padmini; G, Sreenidhi; B, Venkata Ramana; J, Preetham Reddy G
Neural Architecture Search-Driven Optimization of Deep Learning Models for Drug Response Prediction Journal Article
In: Procedia Computer Science, vol. 252, pp. 172-181, 2025, ISSN: 1877-0509, (4th International Conference on Evolutionary Computing and Mobile Sustainable Networks).
@article{G2025172,
title = {Neural Architecture Search-Driven Optimization of Deep Learning Models for Drug Response Prediction},
author = {Uday Kiran G and Srilakshmi V and Padmini G and Sreenidhi G and Venkata Ramana B and Preetham Reddy G J},
url = {https://www.sciencedirect.com/science/article/pii/S1877050924034513},
doi = {10.1016/j.procs.2024.12.019},
issn = {1877-0509},
year = {2025},
date = {2025-01-01},
urldate = {2025-01-01},
journal = {Procedia Computer Science},
volume = {252},
pages = {172-181},
abstract = {In this study, the efficacy of various Neural Architecture Search (NAS) techniques for optimizing neural network architectures in drug response prediction is explored. Accurate prediction of drug responses is crucial for advancing personalized medicine, enabling personalized therapeutic interventions that enhance effectiveness and reduce adverse effects. Traditional models often rely on manually designed architectures, which may not fully capture the complex relationships among drug properties, genetic variations, and cellular phenotypes. An automated NAS approach is introduced to optimize neural network architectures for drug response prediction. The framework explores a defined search space using three techniques: Random Search, Q-Learning, and Bayesian Optimization. A modular architecture that integrates layers, activation functions, and dropout rates is proposed. Findings reveal the strengths and limitations of each NAS method, offering insights into effective model optimization strategies. Validation on publicly available pharmacogenomics datasets shows that NAS-optimized models outperform conventional deep learning and machine learning approaches, highlighting the potential of NAS to enhance predictive modelling in drug response and support personalized medicine and drug development.},
note = {4th International Conference on Evolutionary Computing and Mobile Sustainable Networks},
keywords = {},
pubstate = {published},
tppubtype = {article}
}