Jian Wang, Shujun Wu, Huaqing Zhang, Bin Yuan, Caili Dai, Nikhil R. Pal
Approximation ability is one of the most important topics in the field of neural networks (NNs). Feedforward NNs activated by rectified linear units (ReLUs), and by some of their specific smoothed variants, serve as universal approximators for convex as well as continuous functions. However, most such networks are investigated only empirically, or their characteristics are analyzed under specific operation rules. Moreover, these networks lack an adequate level of interpretability. In this work, we propose a new class of network architectures, built from reusable neural modules (functional blocks), that supplies differentiable and interpretable approximators for convex and continuous target functions...
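To make the universal-approximation claim above concrete, the following minimal NumPy sketch (not from the paper; all names and knot choices are illustrative) shows the simplest instance: a one-hidden-layer ReLU network that exactly represents a piecewise-linear convex interpolant of the convex target f(x) = x^2 on [-1, 1], with error shrinking as the knots are refined.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Convex target function (illustrative choice).
f = lambda x: x ** 2

# Knots where the piecewise-linear approximant changes slope.
knots = np.linspace(-1.0, 1.0, 9)
y = f(knots)

# Slopes of the interpolant on each interval; for a convex target these
# increase, so the increments fed to the ReLU units are nonnegative.
slopes = np.diff(y) / np.diff(knots)

# One-hidden-layer ReLU network:
# f_hat(x) = y[0] + slopes[0]*(x - knots[0])
#            + sum_k (slopes[k] - slopes[k-1]) * relu(x - knots[k])
def f_hat(x):
    out = y[0] + slopes[0] * (x - knots[0])
    for k in range(1, len(slopes)):
        out = out + (slopes[k] - slopes[k - 1]) * relu(x - knots[k])
    return out

xs = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(f_hat(xs) - f(xs)))
print(f"max |f_hat - f| on [-1, 1]: {err:.4f}")  # ~0.0078 with 9 knots

Each ReLU unit contributes one nonnegative slope increment, so the sum is itself convex; refining the knot grid drives the uniform error to zero, which is the mechanism behind ReLU networks approximating convex functions.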
IEEE Transactions on Neural Networks and Learning Systems, April 3, 2024