
IEEE Transactions on Neural Networks and Learning Systems

https://read.qxmd.com/read/38648127/size-and-depth-of-monotone-neural-networks-interpolation-and-approximation
#41
JOURNAL ARTICLE
Dan Mikulincer, Daniel Reichman
We study monotone neural networks with threshold gates where all the weights (other than the biases) are nonnegative. We focus on the expressive power and efficiency of the representation of such networks. Our first result establishes that every monotone function over [0,1]^d can be approximated within arbitrarily small additive error by a depth-4 monotone network. When , we improve upon the previous best-known construction, which has a depth of d+1. Our proof goes by solving the monotone interpolation problem for monotone datasets using a depth-4 monotone threshold network...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
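As a rough illustration of the objects in the abstract above, the sketch below evaluates a small monotone threshold network in Python: all weights are nonnegative and only the biases may be negative, so the computed function is monotone in every coordinate. The architecture and numbers are illustrative assumptions, not the authors' depth-4 construction.

import numpy as np

def threshold_layer(x, W, b):
    # Heaviside threshold gates: fire when the weighted sum reaches the bias.
    return (W @ x + b >= 0).astype(float)

def monotone_network(x, layers):
    # Each layer is (W, b) with W >= 0 elementwise; composing such layers
    # with threshold gates preserves monotonicity in every input coordinate.
    h = np.asarray(x, dtype=float)
    for W, b in layers:
        assert np.all(W >= 0), "monotonicity requires nonnegative weights"
        h = threshold_layer(h, W, b)
    return h

# Toy 2-input example: fires once x1 + x2 >= 1, a monotone function on [0,1]^2.
layers = [(np.array([[1.0, 1.0]]), np.array([-1.0]))]
print(monotone_network([0.3, 0.4], layers))  # [0.]
print(monotone_network([0.6, 0.7], layers))  # [1.]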
https://read.qxmd.com/read/38648126/privfr-privacy-enhanced-federated-recommendation-with-shared-hash-embedding
#42
JOURNAL ARTICLE
Honglei Zhang, Xin Zhou, Zhiqi Shen, Yidong Li
Federated recommender systems (FRSs), with their improved privacy-preserving advantages to jointly train recommendation models from numerous devices while keeping user data distributed, have been widely explored in modern recommender systems (RSs). However, conventional FRSs require transmitting the entire model between the server and clients, which brings a huge carbon footprint for cost-conscious cross-device learning tasks. While several efforts have been dedicated to improving the efficiency of FRSs, it is suboptimal to treat the whole model as the objective of compact design...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
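The "shared hash embedding" named in the title above can be illustrated generically: instead of one embedding row per item, each item ID is hashed by several hash functions into a small shared table and the retrieved rows are combined, shrinking the parameters that must be exchanged. The sketch below is an assumed illustration of that general technique (bucket count, dimensions, and averaging are arbitrary choices), not the PrivFR model itself.

import numpy as np

rng = np.random.default_rng(0)
num_buckets, dim, num_hashes = 1000, 16, 2   # far fewer rows than item IDs
shared_table = rng.normal(size=(num_buckets, dim))

def hash_embedding(item_id):
    # Each hash function maps the ID to a bucket of the shared table;
    # the retrieved rows are averaged (learned mixing weights are also common).
    rows = [shared_table[hash((item_id, h)) % num_buckets] for h in range(num_hashes)]
    return np.mean(rows, axis=0)

vec = hash_embedding(123456789)   # 16-dim vector drawn from a 1000-row table
print(vec.shape)                  # (16,)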
https://read.qxmd.com/read/38648125/high-order-neighbors-aware-representation-learning-for-knowledge-graph-completion
#43
JOURNAL ARTICLE
Hong Yin, Jiang Zhong, Rongzhen Li, Jiaxing Shang, Chen Wang, Xue Li
As a building block of knowledge acquisition, knowledge graph completion (KGC) aims at inferring missing facts in knowledge graphs (KGs) automatically. Previous studies mainly focus on graph convolutional network (GCN)-based KG embedding (KGE) to determine the representations of entities and relations, accordingly predicting missing triplets. However, most existing KGE methods suffer from limitations in predicting tail entities that are far away or even unreachable in KGs. This limitation can be attributed to the related high-order information being largely ignored...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
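For context on the KG-embedding scoring the abstract above builds on, a classical translational score such as TransE ranks candidate tail entities by how close h + r is to t. The sketch below shows that standard baseline only; it is not the high-order-neighbors method proposed in the paper, and the entity_matrix layout is an assumption.

import numpy as np

def transe_score(h, r, t):
    # TransE: a triple (h, r, t) is plausible when h + r is close to t.
    return -np.linalg.norm(h + r - t)

def predict_tail(h, r, entity_matrix):
    # Tail prediction: score every candidate entity embedding (one per row)
    # and return the index of the best-scoring one.
    scores = -np.linalg.norm(h + r - entity_matrix, axis=1)
    return int(np.argmax(scores))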
https://read.qxmd.com/read/38648124/pinning-based-neural-control-for-multiagent-systems-with-self-regulation-intermediate-event-triggered-method
#44
JOURNAL ARTICLE
Hongru Ren, Zeyi Liu, Hongjing Liang, Hongyi Li
A pinning-based self-regulation intermediate event-triggered (ET) funnel tracking control strategy is proposed for uncertain nonlinear multiagent systems (MASs). Based on the backstepping framework, a pinning control strategy is designed to achieve the tracking control objective, using only the communication weights between agents without additional feedback parameters. Moreover, by designing a self-regulation triggered condition based on the tracking error, an intermediate triggered signal is calculated to replace the continuous signal in the controller, achieving discontinuous updates of the controller signal without requiring an additional compensation function in the controller signal...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38648123/on-the-robustness-of-bayesian-neural-networks-to-adversarial-attacks
#45
JOURNAL ARTICLE
Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem. In this article, we analyse the geometry of adversarial attacks in the over-parameterized limit for Bayesian neural networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
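The gradient-based attacks analysed in the abstract above are typified by the fast gradient sign method (FGSM). The sketch below states that standard one-step attack for a generic differentiable loss, as an assumed illustration (the gradient is supplied by the caller); it is not the article's Bayesian analysis.

import numpy as np

def fgsm(x, grad_loss_wrt_x, epsilon=0.03):
    # One-step gradient-based attack: perturb each input coordinate by
    # +/- epsilon in the direction that increases the loss, then clip
    # back to the valid input range [0, 1].
    x_adv = x + epsilon * np.sign(grad_loss_wrt_x)
    return np.clip(x_adv, 0.0, 1.0)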
https://read.qxmd.com/read/38648122/multifair-model-fairness-with-multiple-sensitive-attributes
#46
JOURNAL ARTICLE
Huan Tian, Bo Liu, Tianqing Zhu, Wanlei Zhou, Philip S Yu
While existing fairness interventions show promise in mitigating biased predictions, most studies concentrate on single-attribute protections. Although a few methods consider multiple attributes, they either require additional constraints or prediction heads, incurring high computational overhead or jeopardizing the stability of the training process. More critically, they consider per-attribute protection approaches, raising concerns about fairness gerrymandering where certain attribute combinations remain unfair...
April 22, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38625778/des-inspired-accelerated-unfolded-linearized-admm-networks-for-inverse-problems
#47
JOURNAL ARTICLE
Weixin An, Yuanyuan Liu, Fanhua Shang, Hongying Liu, Licheng Jiao
Many research works have shown that the traditional alternating direction method of multipliers (ADMM) can be better understood through continuous-time differential equations (DEs). On the other hand, many unfolded algorithms directly inherit the traditional iterations to build deep networks. Although they achieve superior practical performance and a faster convergence rate than their traditional counterparts, there is a lack of clear insight into unfolded network structures. Thus, we attempt to explore the unfolded linearized ADMM (LADMM) from the perspective of DEs, and design more efficient unfolded networks...
April 16, 2024: IEEE Transactions on Neural Networks and Learning Systems
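As background on what "unfolding" an iterative solver means in the abstract above, the sketch below unrolls a fixed number of linearized proximal (soft-thresholding) iterations for an assumed problem min_x 0.5*||Ax - b||^2 + lam*||x||_1; in a learned unfolded network each layer's step size and threshold would become trainable parameters. This is a generic LISTA/LADMM-style illustration, not the DE-inspired accelerated networks proposed in the paper.

import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_solver(A, b, num_layers=10, lam=0.1):
    # Each "layer" is one proximal-gradient iteration; unfolding means the
    # loop is unrolled into a feed-forward network with per-layer parameters.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step, thresh = 1.0 / L, lam / L
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - b), thresh)
    return x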
https://read.qxmd.com/read/38625777/new-rnn-algorithms-for-different-time-variant-matrix-inequalities-solving-under-discrete-time-framework
#48
JOURNAL ARTICLE
Yang Shi, Chenling Ding, Shuai Li, Bin Li, Xiaobing Sun
A series of discrete time-variant matrix inequalities is generally regarded as one of the challenging problems in science and engineering fields. For this discrete time-variant problem, existing solving schemes generally rely on theoretical support from the continuous-time framework, and there is no independent solving scheme under the discrete-time framework. This theoretical deficiency of the solving schemes greatly limits the theoretical research and practical application of discrete time-variant matrix inequalities...
April 16, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38625776/adaptive-individual-q-learning-a-multiagent-reinforcement-learning-method-for-coordination-optimization
#49
JOURNAL ARTICLE
Zhen Zhang, Dongqing Wang
Multiagent reinforcement learning (MARL) has been extensively applied to coordination optimization for its task distribution and scalability. The goal of the MARL algorithms for coordination optimization is to learn the optimal joint strategy that maximizes the expected cumulative reward of all agents. Some cooperative MARL algorithms exhibit exciting characteristics in empirical studies. However, the majority of the convergence results are confined to repeated games. Moreover, few MARL algorithms consider adaptation to the switched environments such as the alternation between peak hours and off-peak hours of urban traffic flow or an obstacle suddenly appearing on the planned route for the automated guided vehicle...
April 16, 2024: IEEE Transactions on Neural Networks and Learning Systems
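The individual Q-learning referenced in the title above builds on the textbook single-agent update Q(s, a) <- Q(s, a) + alpha*(r + gamma*max_a' Q(s', a') - Q(s, a)). The sketch below gives only that standard tabular update, with illustrative step-size and discount values; it is not the adaptive multiagent variant proposed in the paper.

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # One-step tabular Q-learning update for a single agent.
    # Q is a |S| x |A| array; s, a, s_next are integer indices.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q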
https://read.qxmd.com/read/38619964/disentangling-modality-and-posture-factors-memory-attention-and-orthogonal-decomposition-for-visible-infrared-person-re-identification
#50
JOURNAL ARTICLE
Zefeng Lu, Ronghao Lin, Haifeng Hu
Striving to match the person identities between visible (VIS) and near-infrared (NIR) images, VIS-NIR reidentification (Re-ID) has attracted increasing attention due to its wide applications in low-light scenes. However, owing to the modality and pose discrepancies exhibited in heterogeneous images, the extracted representations inevitably comprise various modality and posture factors, impacting the matching of cross-modality person identity. To solve the problem, we propose a disentangling modality and posture factors (DMPFs) model, which disentangles modality and posture factors by fusing information from the feature memory and the pedestrian skeleton...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38619963/ngde-a-niching-based-gradient-directed-evolution-algorithm-for-nonconvex-optimization
#51
JOURNAL ARTICLE
Qi Yu, Xijun Liang, Mengzhen Li, Ling Jian
Nonconvex optimization issues are prevalent in machine learning and data science. While gradient-based optimization algorithms can rapidly converge and are dimension-independent, they may, unfortunately, fall into local optimal solutions or saddle points. In contrast, evolutionary algorithms (EAs) gradually adapt the population of solutions to explore global optimal solutions. However, this approach requires substantial computational resources to perform numerous fitness function evaluations, which poses challenges for high-dimensional optimization in particular...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38619962/multidimensional-refinement-graph-convolutional-network-with-robust-decouple-loss-for-fine-grained-skeleton-based-action-recognition
#52
JOURNAL ARTICLE
Sheng-Lan Liu, Yu-Ning Ding, Jin-Rong Zhang, Kai-Yuan Liu, Si-Fan Zhang, Fei-Long Wang, Gao Huang
Graph convolutional networks (GCNs) have been widely used in skeleton-based action recognition. However, existing approaches are limited in fine-grained action recognition due to the similarity of interclass data. Moreover, the noisy data from pose extraction increase the challenge of fine-grained recognition. In this work, we propose a flexible attention block called channel-variable spatial-temporal attention (CVSTA) to enhance the discriminative power of spatial-temporal joints and obtain a more compact intraclass feature distribution...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38619961/boosting-on-policy-actor-critic-with-shallow-updates-in-critic
#53
JOURNAL ARTICLE
Luntong Li, Yuanheng Zhu
Deep reinforcement learning (DRL) benefits from the representation power of deep neural networks (NNs), to approximate the value function and policy in the learning process. Batch reinforcement learning (BRL) benefits from stable training and data efficiency with fixed representation and enjoys solid theoretical analysis. This work proposes least-squares deep policy gradient (LSDPG), a hybrid approach that combines least-squares reinforcement learning (RL) with online DRL to achieve the best of both worlds...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
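The least-squares side of the hybrid approach in the abstract above can be grounded with the classical LSTD solution, which fits a linear value function by solving a small linear system over a batch of transitions. The sketch below is that textbook component under an assumed transition format (s, r, s') and caller-provided feature_fn; it is not the full LSDPG algorithm.

import numpy as np

def lstd(transitions, feature_fn, gamma=0.99, reg=1e-6):
    # LSTD(0): solve A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T
    # and b = sum phi(s)*r over a fixed batch of transitions.
    dim = len(feature_fn(transitions[0][0]))
    A = reg * np.eye(dim)        # small ridge term keeps A invertible
    b = np.zeros(dim)
    for s, r, s_next in transitions:
        phi, phi_next = feature_fn(s), feature_fn(s_next)
        A += np.outer(phi, phi - gamma * phi_next)
        b += r * phi
    return np.linalg.solve(A, b)  # weights of the linear value function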
https://read.qxmd.com/read/38619960/on-practical-robust-reinforcement-learning-adjacent-uncertainty-set-and-double-agent-algorithm
#54
JOURNAL ARTICLE
Ukjo Hwang, Songnam Hong
Robust reinforcement learning (RRL) aims to seek a robust policy by optimizing the worst case performance over an uncertainty set. This set contains some perturbed Markov decision processes (MDPs) from a nominal MDP (N-MDP) that generate samples for training, which reflects some potential mismatches between the training simulator (i.e., N-MDP) and real-world settings (i.e., the testing environments). Unfortunately, existing RRL algorithms apply only to the tabular setting, and extending them to more general continuous state spaces remains an open problem...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
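The worst-case objective described above, maximizing over policies the minimum expected return across the uncertainty set, can be evaluated naively when the set is a finite collection of perturbed simulators. The sketch below only illustrates that evaluation step; perturbed_envs and rollout_return are assumed, caller-provided objects, and this is not the double-agent algorithm of the paper.

def worst_case_return(policy, perturbed_envs, rollout_return, episodes=20):
    # Robust-RL evaluation: the score of a policy is its average return on the
    # least favourable environment in the (here finite) uncertainty set.
    return min(
        sum(rollout_return(policy, env) for _ in range(episodes)) / episodes
        for env in perturbed_envs
    )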
https://read.qxmd.com/read/38619959/efficient-and-stable-unsupervised-feature-selection-based-on-novel-structured-graph-and-data-discrepancy-learning
#55
JOURNAL ARTICLE
Pei Huang, Zhaoming Kong, Limin Wang, Xuming Han, Xiaowei Yang
Unsupervised feature selection is an important tool in data mining, machine learning, and pattern recognition. Although data labels are often missing, the number of data classes can be known and exploited in many scenarios. Therefore, a structured graph, whose number of connected components is identical to the number of data classes, has been proposed and is frequently applied in unsupervised feature selection. However, methods based on structured graph learning face two problems. First, with existing optimization algorithms, their structured graphs are not always guaranteed to maintain the same number of connected components as there are data classes...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
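The structural constraint discussed above rests on a standard spectral fact: a graph has exactly c connected components when the zero eigenvalue of its Laplacian has multiplicity c. The sketch below is a minimal check of that property on an assumed similarity matrix S; it is not the proposed feature-selection method.

import numpy as np

def num_connected_components(S, tol=1e-8):
    # S: symmetric nonnegative similarity/adjacency matrix.
    # The multiplicity of eigenvalue 0 of the Laplacian L = D - S equals
    # the number of connected components of the graph.
    L = np.diag(S.sum(axis=1)) - S
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))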
https://read.qxmd.com/read/38619958/reduced-complexity-algorithms-for-tessarine-neural-networks
#56
JOURNAL ARTICLE
Aleksandr Cariow, Galina Cariowa
The brief presents the results of synthesizing efficient algorithms for implementing the basic data-processing macro operations used in tessarine-valued neural networks. These macro operations primarily include the multiplication of two tessarines, the calculation of the inner product of two tessarine-valued vectors, and the multiple multiplication of one tessarine by a set of different tessarines. When synthesizing the discussed algorithms, we use the fact that tessarine multiplications can be interpreted as matrix-vector products...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
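The identity used in the abstract above, that tessarine multiplication can be written as a matrix-vector product, can be made concrete under the usual convention i^2 = -1, j^2 = +1, k = ij (so k^2 = -1). The sketch below is an illustrative statement of that identity only, not the reduced-complexity algorithms of the brief.

import numpy as np

def tessarine_matrix(t):
    # t = (a, b, c, d) represents a + b*i + c*j + d*k with i^2=-1, j^2=+1, k=ij.
    a, b, c, d = t
    return np.array([[a, -b,  c, -d],
                     [b,  a,  d,  c],
                     [c, -d,  a, -b],
                     [d,  c,  b,  a]])

def tessarine_multiply(t1, t2):
    # Product of two tessarines expressed as a 4x4 matrix times a 4-vector.
    return tessarine_matrix(t1) @ np.asarray(t2, dtype=float)

print(tessarine_multiply((0, 1, 0, 0), (0, 0, 1, 0)))  # i * j = k -> [0. 0. 0. 1.]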
https://read.qxmd.com/read/38619957/hicl-hashtag-driven-in-context-learning-for-social-media-natural-language-understanding
#57
JOURNAL ARTICLE
Hanzhuo Tan, Chunpu Xu, Jing Li, Yuqun Zhang, Zeyang Fang, Zeyu Chen, Baohua Lai
Natural language understanding (NLU) is integral to various social media applications. However, the existing NLU models rely heavily on context for semantic learning, resulting in compromised performance when faced with short and noisy social media content. To address this issue, we leverage in-context learning (ICL), wherein language models learn to make inferences by conditioning on a handful of demonstrations to enrich the context and propose a novel hashtag-driven ICL (HICL) framework. Concretely, we pretrain a model, which employs #hashtags (user-annotated topic labels) to drive BERT-based pretraining through contrastive learning...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38619956/a-quantum-spatial-graph-convolutional-neural-network-model-on-quantum-circuits
#58
JOURNAL ARTICLE
Jin Zheng, Qing Gao, Maciej Ogorzalek, Jinhu Lu, Yue Deng
This article proposes a quantum spatial graph convolutional neural network (QSGCN) model that is implementable on quantum circuits, providing a novel avenue to processing non-Euclidean type data based on the state-of-the-art parameterized quantum circuit (PQC) computing platforms. Four basic blocks are constructed to formulate the whole QSGCN model, including the quantum encoding, the quantum graph convolutional layer, the quantum graph pooling layer, and the network optimization. In particular, the trainability of the QSGCN model is analyzed through discussions on the barren plateau phenomenon...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
https://read.qxmd.com/read/38619955/selective-memory-recursive-least-squares-recast-forgetting-into-memory-in-rbf-neural-network-based-real-time-learning
#59
JOURNAL ARTICLE
Yiming Fei, Jiangang Li, Yanan Li
In radial basis function neural network (RBFNN)-based real-time learning tasks, forgetting mechanisms are widely used so that the neural network can keep its sensitivity to new data. However, with forgetting mechanisms, some useful knowledge will be lost simply because it was learned a long time ago, which we refer to as the passive knowledge forgetting phenomenon. To address this problem, this article proposes a real-time training method named selective memory recursive least squares (SMRLS) in which the classical forgetting mechanisms are recast into a memory mechanism...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems
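For reference, the classical forgetting mechanism the abstract above contrasts with is exponential forgetting in recursive least squares: a factor lambda < 1 discounts old data at every step. The sketch below gives that standard RLS update for the output weights of an RBF network, with an illustrative forgetting factor; SMRLS itself replaces this discounting with a memory mechanism and is not reproduced here.

import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    # One recursive-least-squares update with exponential forgetting factor lam.
    # phi: RBF feature vector of the current sample, y: scalar target.
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                      # gain vector
    theta = theta + K * (y - phi @ theta)    # weight update
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update; old data decays
    return theta, P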
https://read.qxmd.com/read/38619954/temporal-network-embedding-enhanced-with-long-range-dynamics-and-self-supervised-learning
#60
JOURNAL ARTICLE
Zhizheng Wang, Yuanyuan Sun, Zhihao Yang, Liang Yang, Hongfei Lin
Temporal network embedding (TNE) has promoted the research of knowledge discovery and reasoning on networks. It aims to embed vertices of temporal networks into a low-dimensional vector space while preserving network structures and temporal properties. However, most existing methods have limitations in capturing dynamics over long distances, which makes it difficult to explore multihop topological associations among vertices. To tackle this challenge, we propose LongTNE, which learns the long-range dynamics of vertices to endow TNE with the ability to capture high-order proximity (HP) of networks...
April 15, 2024: IEEE Transactions on Neural Networks and Learning Systems