https://read.qxmd.com/read/38052077/performance-evaluation-of-matrix-factorization-for-fmri-data
#41
JOURNAL ARTICLE
Yusuke Endo, Koujin Takeda
A hypothesis in the study of the brain is that sparse coding is realized in the information representation of external stimuli, which has recently been confirmed experimentally for visual stimuli. However, unlike in specific functional regions of the brain, sparse coding in information processing across the whole brain has not been sufficiently clarified. In this study, we investigate the validity of sparse coding in the whole human brain by applying various matrix factorization methods to functional magnetic resonance imaging data of neural activities in the brain...
November 22, 2023: Neural Computation
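A minimal sketch of the kind of sparse factorization the abstract applies, using scikit-learn's DictionaryLearning on a synthetic stand-in for an fMRI data matrix (the data shape, component count, and sparsity penalty are illustrative assumptions, not the paper's settings):

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # placeholder: 200 time points x 50 voxels

# Factorize X ~ codes @ dictionary with a sparsity-inducing penalty
dl = DictionaryLearning(n_components=10, alpha=1.0, max_iter=100, random_state=0)
codes = dl.fit_transform(X)          # sparse coefficients, shape (200, 10)
dictionary = dl.components_          # spatial components, shape (10, 50)

print("fraction of zero coefficients:", np.mean(codes == 0))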
https://read.qxmd.com/read/37844328/robustness-to-transformations-across-categories-is-robustness-driven-by-invariant-neural-representations
#42
JOURNAL ARTICLE
Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad, Sharon Gilad-Gutnick, Shlomit Ben-Ami, Pawan Sinha
Deep convolutional neural networks (DCNNs) have demonstrated impressive robustness in recognizing objects under transformations (e.g., blur or noise) when these transformations are included in the training set. A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. However, to what extent this hypothesis holds true is an outstanding question, as robustness to transformations could be achieved by properties other than invariance; for example, parts of the network could be specialized to recognize either transformed or nontransformed images...
November 7, 2023: Neural Computation
https://read.qxmd.com/read/37844327/training-a-hyperdimensional-computing-classifier-using-a-threshold-on-its-confidence
#43
JOURNAL ARTICLE
Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang, Steven Latré
Hyperdimensional computing (HDC) has become popular for lightweight and energy-efficient machine learning, making it suitable for wearable Internet-of-Things devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This letter proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples but also samples that are correctly classified by the HDC model but with low confidence...
November 7, 2023: Neural Computation
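One plausible reading of the proposed extension, sketched in NumPy: besides the usual update on misclassified samples, a class hypervector is also reinforced when a sample is classified correctly but with a confidence margin below a threshold (the encoding, noise level, and threshold value are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
D, n_classes = 10000, 3

# Toy encoding: noisy bipolar copies of per-class prototype hypervectors
prototypes = rng.choice([-1, 1], size=(n_classes, D))
def encode(label):
    hv = prototypes[label].copy()
    hv[rng.random(D) < 0.3] *= -1
    return hv

class_vecs = np.zeros((n_classes, D))
threshold = 0.05                      # confidence margin that still triggers an update

for label in rng.integers(0, n_classes, size=500):
    hv = encode(label)
    norms = np.linalg.norm(class_vecs, axis=1) * np.linalg.norm(hv) + 1e-9
    sims = class_vecs @ hv / norms
    pred = int(np.argmax(sims))
    margin = sims[pred] - np.partition(sims, -2)[-2]   # gap to the runner-up class
    if pred != label:                 # standard perceptron-style HDC update
        class_vecs[label] += hv
        class_vecs[pred] -= hv
    elif margin < threshold:          # extension: learn from low-confidence hits too
        class_vecs[label] += hv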
https://read.qxmd.com/read/37844326/predictive-coding-as-a-neuromorphic-alternative-to-backpropagation-a-critical-evaluation
#44
JOURNAL ARTICLE
Umais Zahid, Qinghai Guo, Zafeirios Fountas
Backpropagation has rapidly become the workhorse credit assignment algorithm for modern deep learning methods. Recently, modified forms of predictive coding (PC), an algorithm with origins in computational neuroscience, have been shown to yield parameter updates approximately or exactly equal to those of backpropagation. Due to this connection, it has been suggested that PC can act as an alternative to backpropagation with desirable properties that may facilitate implementation in neuromorphic systems...
November 7, 2023: Neural Computation
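For orientation, a minimal NumPy sketch of predictive coding in its simplest linear, single-latent-layer form: activities are first relaxed to minimize the prediction error, then the weights get a local Hebbian-like update (the network size, learning rates, and Gaussian prior on the latents are assumptions):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 4
W = rng.standard_normal((n_in, n_hid)) * 0.1
x_in = rng.standard_normal(n_in)      # clamped input layer
z = np.zeros(n_hid)                   # latent activity, settled by inference

lr_z, lr_w = 0.1, 0.01
for _ in range(100):                  # inference phase: relax the latents
    eps = x_in - W @ z                # prediction error at the input layer
    z += lr_z * (W.T @ eps - z)       # descend the PC energy w.r.t. z
W += lr_w * np.outer(eps, z)          # learning phase: local Hebbian-like update

print("residual error:", np.linalg.norm(x_in - W @ z))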
https://read.qxmd.com/read/37844325/adaptive-filter-model-of-cerebellum-for-biological-muscle-control-with-spike-train-inputs
#45
JOURNAL ARTICLE
Emma Wilson
Prior applications of the cerebellar adaptive filter model have included a range of tasks within simulated and robotic systems. However, these have been limited to systems driven by continuous signals. Here, the adaptive filter model of the cerebellum is applied to the control of a system driven by spiking inputs by considering the problem of controlling muscle force. The performance of the standard adaptive filter algorithm is compared with that of a variant with a modified learning rule that minimizes inputs, and with a simple proportional-integral-derivative (PID) controller...
November 7, 2023: Neural Computation
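A rough sketch of the setting: the spike-train input is expanded through a bank of leaky integrators (basis signals), and an LMS/decorrelation-style rule adapts the output weights toward a desired force profile (the basis time constants, target profile, and learning rate are assumptions):

import numpy as np

rng = np.random.default_rng(0)
T, n_basis = 500, 5
spikes = (rng.random(T) < 0.1).astype(float)   # toy spike-train input

# Basis signals: the spike train filtered by leaky integrators of varying tau
taus = np.linspace(5.0, 50.0, n_basis)
P = np.zeros((T, n_basis))
for j, tau in enumerate(taus):
    for t in range(1, T):
        P[t, j] = P[t - 1, j] * np.exp(-1.0 / tau) + spikes[t]

# Assumed target: the spike train smoothed into a force-like profile
target = np.convolve(spikes, np.exp(-np.arange(100) / 20.0))[:T]

w, beta = np.zeros(n_basis), 1e-3
for t in range(T):                    # LMS-style error-driven weight update
    e = target[t] - P[t] @ w
    w += beta * e * P[t]
print("final error:", e)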
https://read.qxmd.com/read/37844324/generalized-low-rank-update-model-parameter-bounds-for-low-rank-training-data-modifications
#46
JOURNAL ARTICLE
Hiroyuki Hanada, Noriaki Hashimoto, Kouichi Taji, Ichiro Takeuchi
In this study, we have developed an incremental machine learning (ML) method that efficiently obtains the optimal model when a small number of instances or features are added or removed. This problem holds practical importance in model selection, such as cross-validation (CV) and feature selection. Among the class of ML methods known as linear estimators, there exists an efficient model update framework, the low-rank update, that can effectively handle changes in a small number of rows and columns within the data matrix...
November 7, 2023: Neural Computation
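The classical low-rank update the abstract builds on is easy to illustrate for ridge regression: when one instance is added, the inverse Gram matrix can be updated with the Sherman-Morrison formula in O(d^2) rather than refit from scratch (a textbook sketch, not the authors' generalized method):

import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 100, 10, 1.0
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)

A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))   # (X^T X + lam*I)^{-1}
b = X.T @ y

# Add one sample (x_new, y_new): rank-1 update of the inverse
x_new, y_new = rng.standard_normal(d), 1.0
Ax = A_inv @ x_new
A_inv -= np.outer(Ax, Ax) / (1.0 + x_new @ Ax)     # Sherman-Morrison
b += y_new * x_new
w_updated = A_inv @ b

# Verify against a full refit
X2, y2 = np.vstack([X, x_new]), np.append(y, y_new)
w_full = np.linalg.solve(X2.T @ X2 + lam * np.eye(d), X2.T @ y2)
print(np.allclose(w_updated, w_full))   # True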
https://read.qxmd.com/read/37725710/reducing-catastrophic-forgetting-with-associative-learning-a-lesson-from-fruit-flies
#47
JOURNAL ARTICLE
Yang Shen, Sanjoy Dasgupta, Saket Navlakha
Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences...
September 19, 2023: Neural Computation
https://read.qxmd.com/read/37725709/optimal-feedback-control-for-the-proportion-of-energy-cost-in-the-upper-arm-reaching-movement
#48
JOURNAL ARTICLE
Yoshiaki Taniai
The minimum expected energy cost model, which has been proposed as one of the optimization principles for movement planning, can reproduce many characteristics of the human upper-arm reaching movement when signal-dependent noise and co-contraction of antagonist muscles are considered. Regarding the optimization principles, discussion has mainly been based on feedforward control; however, there is debate as to whether the central nervous system uses a feedforward or a feedback control process. Previous studies have shown that feedback control based on modified linear-quadratic gaussian (LQG) control, including multiplicative noise, can reproduce many characteristics of the reaching movement...
September 19, 2023: Neural Computation
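For context, a sketch of the standard finite-horizon discrete-time LQR backward Riccati recursion underlying LQG-style feedback controllers; the modified LQG in the abstract additionally handles multiplicative (signal-dependent) noise, which this toy omits (dynamics and cost weights are assumptions):

import numpy as np

A = np.array([[1.0, 0.01], [0.0, 1.0]])   # toy point-mass dynamics
B = np.array([[0.0], [0.01]])
Q, R = np.diag([1.0, 0.1]), np.array([[1e-4]])
T = 100

S, gains = Q.copy(), []
for _ in range(T):                        # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()                           # gains[t] applies at time step t

x = np.array([1.0, 0.0])                  # start one unit away from the target
for t in range(T):
    x = A @ x + B @ (-gains[t] @ x)       # state feedback u = -K_t x
print("final state:", x)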
https://read.qxmd.com/read/37725708/winning-the-lottery-with-neural-connectivity-constraints-faster-learning-across-cognitive-tasks-with-spatially-constrained-sparse-rnns
#49
JOURNAL ARTICLE
Mikail Khona, Sarthak Chandra, Joy J Ma, Ila R Fiete
Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that use task-agnostic predetermined graphs with sparsity as low as 4%...
September 19, 2023: Neural Computation
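A minimal sketch of the kind of task-agnostic, spatially constrained mask described in the abstract: units laid out on a one-dimensional sheet, connected only to near neighbors, yielding a recurrent weight matrix at a few percent density (the layout, connection radius, and dynamics are assumptions):

import numpy as np

rng = np.random.default_rng(0)
n = 100
pos = np.linspace(0.0, 1.0, n)            # units on a 1-D "cortical sheet"
dist = np.abs(pos[:, None] - pos[None, :])
mask = (dist < 0.025).astype(float)       # connect only nearby pairs
np.fill_diagonal(mask, 0.0)
print("connection density:", mask.mean()) # roughly 4%

W = mask * rng.standard_normal((n, n)) / np.sqrt(n)

def step(h, x_in):
    # one step of the locality-masked RNN; the mask is fixed and task-agnostic
    return np.tanh(W @ h + x_in)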
https://read.qxmd.com/read/37725706/a-tutorial-on-the-spectral-theory-of-markov-chains
#50
JOURNAL ARTICLE
Eddie Seabrook, Laurenz Wiskott
Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences. This is due in part to their versatility, but also to the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We use tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains, with a particular focus on exploring properties of the eigenvalues and eigenvectors corresponding to these matrices...
September 19, 2023: Neural Computation
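A small worked example of the spectral quantities such a tutorial covers: the stationary distribution of a transition matrix is its left eigenvector with eigenvalue 1, and the gap between the two largest eigenvalue moduli governs the mixing speed:

import numpy as np

P = np.array([[0.9, 0.1, 0.0],            # rows sum to 1
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

evals, evecs = np.linalg.eig(P.T)         # left eigenvectors of P
i = np.argmin(np.abs(evals - 1.0))        # a stochastic matrix always has eigenvalue 1
pi = np.real(evecs[:, i])
pi /= pi.sum()                            # normalize into a probability distribution
print("stationary distribution:", pi)
print("spectral gap:", 1.0 - np.sort(np.abs(evals))[-2])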
https://read.qxmd.com/read/37725705/self-organization-of-nonlinearly-coupled-neural-fluctuations-into-synergistic-population-codes
#51
JOURNAL ARTICLE
Hengyuan Ma, Yang Qi, Pulin Gong, Jie Zhang, Wen-Lian Lu, Jianfeng Feng
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between the correlated fluctuations and the overall dynamics of the system. In this study, we investigate the emergence of synergistic neural population codes from the intrinsic dynamics of correlated neural fluctuations in a neural circuit model capturing realistic nonlinear noise coupling of spiking neurons...
September 19, 2023: Neural Computation
https://read.qxmd.com/read/37523463/exploring-trade-offs-in-spiking-neural-networks
#52
JOURNAL ARTICLE
Florian Bacho, Dominique Chu
Spiking neural networks (SNNs) have emerged as a promising alternative to traditional deep neural networks for low-power computing. However, the effectiveness of SNNs is not solely determined by their performance but also by their energy consumption, prediction speed, and robustness to noise. The recent method Fast & Deep, along with others, achieves fast and energy-efficient computation by constraining neurons to fire at most once. Known as time-to-first-spike (TTFS), this constraint, however, restricts the capabilities of SNNs in many aspects...
July 28, 2023: Neural Computation
https://read.qxmd.com/read/37523461/transfer-learning-with-singular-value-decomposition-of-multichannel-convolution-matrices
#53
JOURNAL ARTICLE
Tak Shing Au Yeung, Ka Chun Cheung, Michael K Ng, Simon See, Andy Yip
The task of transfer learning using pretrained convolutional neural networks is considered. We propose a convolution-SVD layer to analyze the convolution operators with a singular value decomposition computed in the Fourier domain. Singular vectors extracted from the source domain are transferred to the target domain, whereas the singular values are fine-tuned with a target data set. In this way, dimension reduction is achieved to avoid overfitting, while some flexibility to fine-tune the convolution kernels is maintained...
July 28, 2023: Neural Computation
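A sketch of the Fourier-domain decomposition the abstract describes: for circular convolution, the 2-D FFT of the kernel gives one c_out x c_in matrix per spatial frequency, and a batched SVD of those matrices decomposes the whole convolution operator (the shapes and circular-padding assumption are illustrative):

import numpy as np

rng = np.random.default_rng(0)
c_out, c_in, k, n = 4, 3, 3, 8            # 3x3 kernel, 8x8 inputs
kernel = rng.standard_normal((c_out, c_in, k, k))

# FFT of the kernel, zero-padded to the input size
K_hat = np.fft.fft2(kernel, s=(n, n), axes=(2, 3))
K_hat = K_hat.transpose(2, 3, 0, 1)       # (n, n, c_out, c_in): one matrix per frequency

U, S, Vh = np.linalg.svd(K_hat)           # batched SVD over all n*n frequencies
print("largest singular value of the operator:", S.max())

# Transfer idea from the abstract: keep the singular vectors U, Vh (from the
# source domain) fixed and fine-tune only the singular values S on target data.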
https://read.qxmd.com/read/37523457/grid-cell-percolation
#54
JOURNAL ARTICLE
Yuri Dabaghian
Grid cells play a principal role in enabling cognitive representations of ambient environments. The key property of these cells, the regular arrangement of their firing fields, is commonly viewed as a means for establishing spatial scales or encoding specific locations. However, using grid cells' spiking outputs for deducing geometric orderliness proves to be a strenuous task due to fairly irregular activation patterns triggered by the animal's sporadic visits to the grid fields. This article addresses statistical mechanisms enabling emergent regularity of grid cell firing activity from the perspective of percolation theory...
July 28, 2023: Neural Computation
https://read.qxmd.com/read/37523456/learning-intention-aware-policies-in-deep-reinforcement-learning
#55
JOURNAL ARTICLE
T Zhao, S Wu, G Li, Y Chen, G Niu, Masashi Sugiyama
Deep reinforcement learning (DRL) provides an agent with an optimal policy so as to maximize the cumulative rewards. The policy defined in DRL mainly depends on the state, historical memory, and policy model parameters. However, we humans usually take actions according to our own intentions, such as moving fast or slowly, beyond the elements included in traditional policy models. To make the action-choosing mechanism more similar to humans and have the agent select actions that incorporate intentions, we propose an intention-aware policy learning method in this letter. To formalize this process, we first define an intention-aware policy by incorporating the intention information into the policy model, which is learned by maximizing the cumulative rewards with the mutual information (MI) between the intention and the action...
July 28, 2023: Neural Computation
https://read.qxmd.com/read/37437205/composite-optimization-algorithms-for-sigmoid-networks
#56
JOURNAL ARTICLE
Huixiong Chen, Qi Ye
In this letter, we use composite optimization algorithms to train sigmoid networks. We equivalently reformulate sigmoid networks as a convex composite optimization problem and propose composite optimization algorithms based on linearized proximal algorithms and the alternating direction method of multipliers. Under the assumptions of weak sharp minima and a regularity condition, the algorithm is guaranteed to converge to a globally optimal solution of the objective function, even in the case of nonconvex and nonsmooth problems...
July 12, 2023: Neural Computation
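The letter's linearized proximal machinery does not fit in a few lines, but the proximal-gradient template it builds on does; as a generic stand-in, ISTA for l1-regularized least squares alternates a gradient step on the smooth part with the proximal map (soft-thresholding) of the nonsmooth part:

import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
lam = 0.1
L = 2.0 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth gradient

x = np.zeros(20)
for _ in range(500):
    z = x - 2.0 * A.T @ (A @ x - b) / L   # gradient step on ||Ax - b||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox of lam*||x||_1
print("nonzeros:", np.count_nonzero(x))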
https://read.qxmd.com/read/37437202/mean-field-approximations-with-adaptive-coupling-for-networks-with-spike-timing-dependent-plasticity
#57
JOURNAL ARTICLE
Benoit Duchet, Christian Bick, Áine Byrne
Understanding the effect of spike-timing-dependent plasticity (STDP) is key to elucidating how neural networks change over long timescales and to designing interventions aimed at modulating such networks in neurological disorders. However, progress is restricted by the significant computational cost of simulating neural network models with STDP and by the lack of a low-dimensional description that could provide analytical insight. Phase-difference-dependent plasticity (PDDP) rules approximate STDP in phase oscillator networks, prescribing synaptic changes based on the phase differences of neuron pairs rather than on differences in spike timing...
July 12, 2023: Neural Computation
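To make the STDP-to-PDDP correspondence concrete: a classic double-exponential STDP window over spike-time differences, rephrased over phase differences for oscillators of known period (the amplitudes, time constants, and period are assumptions):

import numpy as np

A_plus, A_minus = 0.01, 0.012             # potentiation/depression amplitudes
tau_plus, tau_minus = 20.0, 20.0          # ms

def stdp(dt):
    # weight change for spike-time difference dt = t_post - t_pre
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# PDDP idea: for neurons oscillating with period T, a spike-time difference
# maps to a phase difference dphi = 2*pi*dt/T, so plasticity can be written
# as a function of dphi instead of dt.
T_period = 100.0
def pddp(dphi):
    return stdp(T_period * dphi / (2.0 * np.pi))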
https://read.qxmd.com/read/37437199/mirror-descent-of-hopfield-model
#58
JOURNAL ARTICLE
Hyungjoon Soh, Dongyeob Kim, Juno Hwang, Junghyo Jo
Mirror descent is an elegant optimization technique that leverages a dual space of parametric models to perform gradient descent. While originally developed for convex optimization, it has increasingly been applied in the field of machine learning. In this study, we propose a novel approach for using mirror descent to initialize the parameters of neural networks. Specifically, we demonstrate that by using the Hopfield model as a prototype for neural networks, mirror descent can effectively train the model with significantly improved performance compared to traditional gradient descent methods that rely on random parameter initialization...
July 12, 2023: Neural Computation
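A generic mirror descent sketch for orientation (not the paper's Hopfield setup): with the entropic mirror map, the gradient step happens in the dual (log) space and is mapped back to the probability simplex, giving the exponentiated-gradient update:

import numpy as np

target = np.array([0.7, 0.2, 0.1])        # minimize ||x - target||^2 on the simplex
x = np.ones(3) / 3.0                      # start at the uniform distribution
eta = 0.5
for _ in range(200):
    grad = 2.0 * (x - target)
    x = x * np.exp(-eta * grad)           # step in the dual space of the entropy mirror map
    x /= x.sum()                          # project back to the simplex
print(x)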
https://read.qxmd.com/read/37437197/on-an-interpretation-of-resnets-via-gate-network-control
#59
JOURNAL ARTICLE
Changcun Huang
This paper first constructs a typical solution of ResNets for multicategory classification based on the idea of LSTM gate control, from which a general interpretation of the ResNet architecture is given and its performance mechanism is explained. We also use further solutions to demonstrate the generality of that interpretation. The classification result is then extended to the universal-approximation capability of the type of ResNet with two-layer gate networks, an architecture that was proposed in an original paper on ResNets and has both theoretical and practical significance...
July 12, 2023: Neural Computation
https://read.qxmd.com/read/37437192/a-noise-based-novel-strategy-for-faster-snn-training
#60
JOURNAL ARTICLE
Chunming Jiang, Yilei Zhang
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong biological plausibility. Optimization of SNNs is a challenging task. Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have advantages and limitations. ANN-to-SNN conversion requires a long inference time to approximate the accuracy of the ANN, thus diminishing the benefits of SNNs. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts...
July 12, 2023: Neural Computation