https://read.qxmd.com/read/38101328/emergence-of-universal-computations-through-neural-manifold-dynamics
#21
JOURNAL ARTICLE
Joan Gort
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to shed light on these open questions. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models...
December 13, 2023: Neural Computation
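A minimal sketch of the rank-one firing-rate setup such results build on (generic textbook form; all parameters here are illustrative, not the paper's model): with connectivity J = m nᵀ/N, the dynamics dx/dt = -x + J tanh(x) attract activity onto the line spanned by m, so a single latent coordinate captures the population state.

    import numpy as np

    # Rank-one firing-rate network: dx/dt = -x + J*tanh(x), J = m n^T / N.
    # Because J*tanh(x) always points along m, the component of x orthogonal
    # to m decays, and activity collapses onto the 1-D manifold span{m}.
    rng = np.random.default_rng(0)
    N = 500
    m = rng.standard_normal(N)
    n = rng.standard_normal(N)
    J = np.outer(m, n) / N

    x = rng.standard_normal(N)          # random initial condition off the manifold
    dt = 0.05
    for _ in range(2000):
        x += dt * (-x + J @ np.tanh(x))

    # Distance from the line spanned by m shrinks toward zero:
    proj = (x @ m) / (m @ m) * m
    print("off-manifold norm:", np.linalg.norm(x - proj))
    print("latent coordinate kappa:", (x @ m) / (m @ m))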
https://read.qxmd.com/read/38101327/q-a-label-learning
#22
JOURNAL ARTICLE
Kota Kawamoto, Masato Uchida
Assigning labels to instances is crucial for supervised machine learning. In this letter, we propose a novel annotation method, Q&A labeling, which involves a question generator that asks questions about the labels of the instances to be assigned and an annotator that answers the questions and assigns the corresponding labels to the instances. We derived a generative model of labels assigned according to two Q&A labeling procedures that differ in the way questions are asked and answered. We showed that in both procedures, the derived model is partially consistent with that assumed in previous studies...
December 13, 2023: Neural Computation
https://read.qxmd.com/read/38101326/cooperativity-information-gain-and-energy-cost-during-early-ltp-in-dendritic-spines
#23
JOURNAL ARTICLE
Jan Karbowski, Paulina Urban
We investigate a mutual relationship between information and energy during the early phase of LTP induction and maintenance in a large-scale system of mutually coupled dendritic spines, with discrete internal states and probabilistic dynamics, within the framework of nonequilibrium stochastic thermodynamics. In order to analyze this computationally intractable stochastic multidimensional system, we introduce a pair approximation, which allows us to reduce the spine dynamics into a lower-dimensional manageable system of closed equations...
December 13, 2023: Neural Computation
https://read.qxmd.com/read/38052088/modeling-the-role-of-contour-integration-in-visual-inference
#24
JOURNAL ARTICLE
Salman Khan, Alexander Wong, Bryan Tripp
Under difficult viewing conditions, the brain's visual system uses a variety of recurrent modulatory mechanisms to augment feedforward processing. One resulting phenomenon is contour integration, which occurs in the primary visual (V1) cortex and strengthens neural responses to edges if they belong to a larger smooth contour. Computational models have contributed to an understanding of the circuit mechanisms of contour integration, but less is known about its role in visual perception. To address this gap, we embedded a biologically grounded model of contour integration in a task-driven artificial neural network and trained it using a gradient-descent variant...
November 22, 2023: Neural Computation
https://read.qxmd.com/read/38052084/active-predictive-coding-a-unifying-neural-model-for-active-perception-compositional-learning-and-hierarchical-planning
#25
JOURNAL ARTICLE
Rajesh P N Rao, Dimitrios C Gklezakos, Vishwas Sathish
There is growing interest in predictive coding as a model of how the brain learns through predictions and prediction errors. Predictive coding models have traditionally focused on sensory coding and perception. Here we introduce active predictive coding (APC) as a unifying model for perception, action, and cognition. The APC model addresses important open problems in cognitive science and AI, including (1) how we learn compositional representations (e.g., part-whole hierarchies for equivariant vision) and (2) how we solve large-scale planning problems, which are hard for traditional reinforcement learning, by composing complex state dynamics and abstract actions from simpler dynamics and primitive actions...
November 22, 2023: Neural Computation
https://read.qxmd.com/read/38052081/synchronization-and-clustering-in-complex-quadratic-networks
#26
JOURNAL ARTICLE
Anca Rǎdulescu, Danae Evans, Amani-Dasia Augustin, Anthony Cooper, Johan Nakuci, Sarah Muldoon
Synchronization and clustering are well studied in the context of networks of oscillators, such as neuronal networks. However, this relationship is notoriously difficult to approach mathematically in natural, complex networks. Here, we aim to understand it in a canonical framework, using complex quadratic node dynamics, coupled in networks that we call complex quadratic networks (CQNs). We review previously defined extensions of the Mandelbrot and Julia sets for networks, focusing on the behavior of the node-wise projections of these sets and on describing the phenomena of node clustering and synchronization...
November 22, 2023: Neural Computation
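As a hedged illustration of the node-wise escape test (the linear row-normalized coupling below is one plausible scheme, not necessarily the paper's): iterate a quadratic map at each node of a coupled network and record which nodes remain bounded, giving a node-wise Mandelbrot-like classification.

    import numpy as np

    # Coupled complex quadratic maps, one plausible CQN-style iteration:
    #   z_i <- (sum_j A_ij z_j)^2 + c_i,
    # used here only to illustrate node-wise boundedness ("escape") tests.
    rng = np.random.default_rng(1)
    N = 5
    A = rng.random((N, N)) < 0.5        # random directed coupling graph
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalize
    c = np.array([-0.1 + 0.1j, 0.3 + 0.0j, -0.8 + 0.2j, 0.0 + 0.7j, -1.0 + 0.0j])

    z = np.zeros(N, dtype=complex)
    escaped = np.zeros(N, dtype=bool)
    for _ in range(200):
        z = (A @ z) ** 2 + c
        escaped |= np.abs(z) > 2.0      # classic escape radius for z^2 + c
        z[escaped] = 2.0                # freeze escaped nodes to avoid overflow

    print("escaped nodes:", np.where(escaped)[0])
    print("bounded nodes:", np.where(~escaped)[0])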
https://read.qxmd.com/read/38052080/the-limiting-dynamics-of-sgd-modified-loss-phase-space-oscillations-and-anomalous-diffusion
#27
JOURNAL ARTICLE
Daniel Kunin, Javier Sagastuy-Brena, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L K Yamins
In this work, we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance traveled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction among the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion...
November 22, 2023: Neural Computation
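The power-law claim can be probed with a toy experiment (a noisy linear regression stands in for a deep network; all numbers illustrative): run SGD well past convergence, track the distance traveled from a reference point, and fit the exponent in log-log coordinates.

    import numpy as np

    # Toy probe of the anomalous-diffusion claim: after the loss plateaus,
    # track how far SGD wanders from a reference point and fit d(t) ~ t^alpha
    # in log-log coordinates (alpha = 0.5 would be ordinary diffusion).
    rng = np.random.default_rng(2)
    X = rng.standard_normal((1000, 20))
    y = X @ rng.standard_normal(20) + 0.5 * rng.standard_normal(1000)  # noisy labels
    w = rng.standard_normal(20)
    lr, batch = 0.01, 8

    def sgd_step(w):
        idx = rng.integers(0, 1000, batch)
        return w - lr * X[idx].T @ (X[idx] @ w - y[idx]) / batch

    for _ in range(3000):                  # burn-in until the loss converges
        w = sgd_step(w)

    w0, dists = w.copy(), []
    for _ in range(5000):                  # post-convergence wandering
        w = sgd_step(w)
        dists.append(np.linalg.norm(w - w0))

    ts = np.arange(1, 5001)
    alpha = np.polyfit(np.log(ts[100:]), np.log(dists[100:]), 1)[0]
    print("fitted diffusion exponent alpha:", alpha)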
https://read.qxmd.com/read/38052079/cocaine-use-prediction-with-tensor-based-machine-learning-on-multimodal-mri-connectome-data
#28
JOURNAL ARTICLE
Anru R Zhang, Ryan P Bell, Chen An, Runshi Tang, Shana A Hall, Cliburn Chan, Kareem Al-Khalil, Christina S Meade
This letter considers the use of machine learning algorithms for predicting cocaine use based on magnetic resonance imaging (MRI) connectomic data. The study used functional MRI (fMRI) and diffusion MRI (dMRI) data collected from 275 individuals, which was then parcellated into 246 regions of interest (ROIs) using the Brainnetome atlas. After data preprocessing, the data sets were transformed into tensor form. We developed a tensor-based unsupervised machine learning algorithm to reduce the size of the data tensor from 275 (individuals) × 2 (fMRI and dMRI) × 246 (ROIs) × 246 (ROIs) to 275 (individuals) × 2 (fMRI and dMRI) × 6 (clusters) × 6 (clusters)...
November 22, 2023: Neural Computation
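The reduction step has a simple shape, sketched below with stand-in data and stand-in cluster labels (this is plain block-averaging given a clustering, not the paper's unsupervised tensor algorithm itself): both ROI modes of each connectome are collapsed from 246 regions to 6 clusters.

    import numpy as np

    # Shape of the reduction the abstract describes: collapse ROI-by-ROI
    # connectivity (246 x 246) into cluster-by-cluster blocks (6 x 6),
    # given some assignment of ROIs to clusters.
    rng = np.random.default_rng(3)
    n_subj, n_mod, n_roi, k = 275, 2, 246, 6
    data = rng.random((n_subj, n_mod, n_roi, n_roi))     # stand-in connectomes
    labels = rng.integers(0, k, n_roi)                   # stand-in ROI clusters

    # Membership matrix M (n_roi x k), column-normalized so blocks are means.
    M = np.zeros((n_roi, k))
    M[np.arange(n_roi), labels] = 1.0
    M /= M.sum(axis=0, keepdims=True)

    # Reduce both ROI modes: out[s, m] = M^T @ data[s, m] @ M.
    reduced = np.einsum('rk,smrq,ql->smkl', M, data, M)
    print(reduced.shape)   # (275, 2, 6, 6)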
https://read.qxmd.com/read/38052077/performance-evaluation-of-matrix-factorization-for-fmri-data
#29
JOURNAL ARTICLE
Yusuke Endo, Koujin Takeda
A hypothesis in the study of the brain is that sparse coding is realized in the information representation of external stimuli, which has recently been confirmed experimentally for visual stimuli. However, unlike in specific functional regions, sparse coding in information processing across the whole brain has not been sufficiently clarified. In this study, we investigate the validity of sparse coding in the whole human brain by applying various matrix factorization methods to functional magnetic resonance imaging data of neural activities in the brain...
November 22, 2023: Neural Computation
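One standard member of the family of methods being compared, as a hedged stand-in (surrogate random data, illustrative hyperparameters): L1-penalized dictionary learning of a data matrix, with a crude sparsity readout on the learned codes.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Hedged stand-in for the comparison: factor a matrix X (time points x
    # voxels) as X ~ codes @ dictionary with an L1 penalty, then measure
    # how sparse the learned codes actually are.
    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 60))      # surrogate "fMRI" matrix

    dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=200,
                            transform_algorithm='lasso_lars',
                            transform_alpha=1.0, random_state=0)
    codes = dl.fit_transform(X)             # sparse representation

    print("fraction of (near-)zero coefficients:", np.mean(np.abs(codes) < 1e-8))
    print("relative reconstruction error:",
          np.linalg.norm(X - codes @ dl.components_) / np.linalg.norm(X))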
https://read.qxmd.com/read/37844328/robustness-to-transformations-across-categories-is-robustness-driven-by-invariant-neural-representations
#30
JOURNAL ARTICLE
Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad, Sharon Gilad-Gutnick, Shlomit Ben-Ami, Pawan Sinha
Deep convolutional neural networks (DCNNs) have demonstrated impressive robustness to recognize objects under transformations (e.g., blur or noise) when these transformations are included in the training set. A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. However, to what extent this hypothesis holds true is an outstanding question, as robustness to transformations could be achieved with properties different from invariance; for example, parts of the network could be specialized to recognize either transformed or nontransformed images...
November 7, 2023: Neural Computation
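The invariance test the abstract describes can be sketched as follows (a random-weight MLP and additive noise stand in for the trained DCNN and its transformations, purely for illustration): compare layer activations for original versus transformed inputs; high similarity suggests invariant representations, while low similarity leaves room for robustness via specialized units.

    import numpy as np

    # Layer-wise invariance probe: cosine similarity between activations for
    # clean vs transformed inputs, computed at every layer of a ReLU MLP.
    rng = np.random.default_rng(5)
    sizes = [256, 128, 64, 32]
    Ws = [rng.standard_normal((a, b)) / np.sqrt(a) for a, b in zip(sizes, sizes[1:])]

    def activations(x):
        acts = []
        for W in Ws:
            x = np.maximum(x @ W, 0.0)      # ReLU layer
            acts.append(x)
        return acts

    x_clean = rng.standard_normal((10, 256))
    x_noisy = x_clean + 0.5 * rng.standard_normal((10, 256))  # "transformation"

    for i, (a, b) in enumerate(zip(activations(x_clean), activations(x_noisy))):
        cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1)
                                       * np.linalg.norm(b, axis=1))
        print(f"layer {i}: mean cosine similarity {cos.mean():.3f}")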
https://read.qxmd.com/read/37844327/training-a-hyperdimensional-computing-classifier-using-a-threshold-on-its-confidence
#31
JOURNAL ARTICLE
Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang, Steven Latré
Hyperdimensional computing (HDC) has become popular for lightweight and energy-efficient machine learning, suitable for wearable Internet-of-Things devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This letter proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples but also samples that are correctly classified by the HDC model but with low confidence...
November 7, 2023: Neural Computation
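A hedged numpy sketch of the proposed extension (the encoding scheme and threshold value are assumptions, not the paper's exact setup): class prototypes are bundled bipolar hypervectors, and retraining updates them both for misclassified samples and for correctly classified samples whose confidence margin falls below a threshold.

    import numpy as np

    # Confidence-thresholded HDC training sketch: prototypes are bundled
    # bipolar hypervectors; updates fire on misclassifications AND on
    # correct predictions with a small similarity margin.
    rng = np.random.default_rng(6)
    D, n_classes, n_samples = 2000, 3, 300
    y = rng.integers(0, n_classes, n_samples)
    base = rng.choice([-1, 1], size=(n_classes, D))
    enc = np.sign(base[y] + 0.8 * rng.standard_normal((n_samples, D)))

    protos = np.zeros((n_classes, D))
    for c in range(n_classes):              # initial bundling per class
        protos[c] = enc[y == c].sum(axis=0)

    threshold = 0.05                        # margin threshold (a tunable assumption)
    for epoch in range(5):
        sims = enc @ protos.T
        sims /= (np.linalg.norm(enc, axis=1, keepdims=True)
                 * np.linalg.norm(protos, axis=1))
        pred = sims.argmax(axis=1)
        s_sorted = np.sort(sims, axis=1)
        margin = s_sorted[:, -1] - s_sorted[:, -2]
        update = (pred != y) | (margin < threshold)   # the key extension
        for i in np.where(update)[0]:
            protos[y[i]] += enc[i]                    # reinforce correct class
            protos[pred[i]] -= enc[i] * (pred[i] != y[i])  # penalize wrong class
        print(f"epoch {epoch}: accuracy {np.mean(pred == y):.3f}")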
https://read.qxmd.com/read/37844326/predictive-coding-as-a-neuromorphic-alternative-to-backpropagation-a-critical-evaluation
#32
JOURNAL ARTICLE
Umais Zahid, Qinghai Guo, Zafeirios Fountas
Backpropagation has rapidly become the workhorse credit assignment algorithm for modern deep learning methods. Recently, modified forms of predictive coding (PC), an algorithm with origins in computational neuroscience, have been shown to result in approximately or exactly equal parameter updates to those under backpropagation. Due to this connection, it has been suggested that PC can act as an alternative to backpropagation with desirable properties that may facilitate implementation in neuromorphic systems...
November 7, 2023: Neural Computation
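A minimal predictive-coding sketch in its generic textbook form (not any particular neuromorphic variant): latent activities relax by gradient descent on the prediction-error energy, after which purely local weight updates approximate backpropagation's, which is the connection the abstract refers to.

    import numpy as np

    # Two-layer predictive coding: infer hidden activity h by descending
    # E = 0.5*||h - tanh(x W1)||^2 + 0.5*||y - h W2||^2, then update
    # weights locally from the settled prediction errors.
    rng = np.random.default_rng(7)
    W1 = rng.standard_normal((4, 8)) * 0.3   # input -> hidden predictions
    W2 = rng.standard_normal((8, 2)) * 0.3   # hidden -> output predictions

    x = rng.standard_normal(4)
    y = rng.standard_normal(2)               # target clamped at the top

    h = np.tanh(x @ W1)                      # feedforward initialization
    for _ in range(100):                     # inference: relax hidden activity
        e_h = h - np.tanh(x @ W1)            # prediction error at hidden layer
        e_y = y - h @ W2                     # prediction error at output
        h += 0.1 * (-e_h + W2 @ e_y)         # descend the energy w.r.t. h

    # Local weight updates driven by the settled errors.
    lr = 0.01
    e_h = h - np.tanh(x @ W1)
    e_y = y - h @ W2
    W2 += lr * np.outer(h, e_y)
    W1 += lr * np.outer(x, e_h * (1 - np.tanh(x @ W1) ** 2))
    print("output error after settling:", np.linalg.norm(e_y))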
https://read.qxmd.com/read/37844325/adaptive-filter-model-of-cerebellum-for-biological-muscle-control-with-spike-train-inputs
#33
JOURNAL ARTICLE
Emma Wilson
Prior applications of the cerebellar adaptive filter model have included a range of tasks within simulated and robotic systems. However, this has been limited to systems driven by continuous signals. Here, the adaptive filter model of the cerebellum is applied to the control of a system driven by spiking inputs by considering the problem of controlling muscle force. The performance of the standard adaptive filter algorithm is compared with the algorithm with a modified learning rule that minimizes inputs and a simple proportional-integral-derivative (PID) controller...
November 7, 2023: Neural Computation
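A hedged sketch of the standard adaptive-filter (decorrelation) rule driven by a spike-train input (the filter bank, rates, and gains below are illustrative choices): a Poisson input is expanded through exponential basis filters p_j, and output weights follow dw_j = -beta * e * p_j.

    import numpy as np

    # Adaptive-filter (decorrelation) learning with spiking input: a Poisson
    # train is filtered into basis signals p_j; weights adapt to cancel the
    # error against a hidden target mapping.
    rng = np.random.default_rng(8)
    dt, T = 0.001, 5.0
    steps = int(T / dt)
    taus = np.array([0.01, 0.03, 0.1, 0.3])  # illustrative basis time constants
    w = np.zeros(4)
    p = np.zeros(4)
    beta = 0.05

    target_w = np.array([0.5, -0.3, 0.8, 0.1])  # hidden "plant" to be matched
    errs = []
    for t in range(steps):
        spike = rng.random() < 50 * dt       # 50 Hz Poisson input spike train
        p += dt * (-p / taus) + spike / taus # bank of exponential filters
        e = w @ p - target_w @ p             # error signal (climbing-fiber-like)
        w -= beta * e * p * dt               # decorrelation learning rule
        errs.append(e * e)

    print("mean squared error, first vs last second:",
          np.mean(errs[:1000]), np.mean(errs[-1000:]))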
https://read.qxmd.com/read/37844324/generalized-low-rank-update-model-parameter-bounds-for-low-rank-training-data-modifications
#34
JOURNAL ARTICLE
Hiroyuki Hanada, Noriaki Hashimoto, Kouichi Taji, Ichiro Takeuchi
In this study, we have developed an incremental machine learning (ML) method that efficiently obtains the optimal model when a small number of instances or features are added or removed. This problem holds practical importance in model selection, such as cross-validation (CV) and feature selection. Among the class of ML methods known as linear estimators, there exists an efficient model update framework, the low-rank update, that can effectively handle changes in a small number of rows and columns within the data matrix...
November 7, 2023: Neural Computation
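The canonical instance of the low-rank update framework the abstract refers to: when one instance is added to ridge regression, the Sherman-Morrison identity updates the inverse Gram matrix instead of refitting from scratch (removal works analogously with a sign flip).

    import numpy as np

    # Rank-one update for ridge regression: with A = X^T X + lam*I,
    # (A + x x^T)^{-1} = A^{-1} - (A^{-1} x)(A^{-1} x)^T / (1 + x^T A^{-1} x).
    rng = np.random.default_rng(9)
    n, d, lam = 200, 10, 1.0
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))
    b = X.T @ y

    x_new = rng.standard_normal(d)           # one added instance
    y_new = 1.0

    Ax = A_inv @ x_new                       # Sherman-Morrison update
    A_inv_new = A_inv - np.outer(Ax, Ax) / (1.0 + x_new @ Ax)
    w_fast = A_inv_new @ (b + y_new * x_new)

    # Check against a full refit.
    X2 = np.vstack([X, x_new]); y2 = np.append(y, y_new)
    w_full = np.linalg.solve(X2.T @ X2 + lam * np.eye(d), X2.T @ y2)
    print("max deviation from refit:", np.max(np.abs(w_fast - w_full)))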
https://read.qxmd.com/read/37725710/reducing-catastrophic-forgetting-with-associative-learning-a-lesson-from-fruit-flies
#35
JOURNAL ARTICLE
Yang Shen, Sanjoy Dasgupta, Saket Navlakha
Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences...
September 19, 2023: Neural Computation
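A hedged sketch of the fly-inspired two-layer circuit (dimensions and activity levels are illustrative, not the paper's): a fixed sparse random expansion plus top-k sparsification yields a Kenyon-cell-like code, and the associative layer updates only the synapses active for the current input, which is what limits interference between learned associations.

    import numpy as np

    # Fly-style circuit: fixed random expansion + winner-take-all sparse
    # code, then a perceptron-like valence readout that updates only
    # active synapses.
    rng = np.random.default_rng(10)
    n_in, n_kc, k_active = 50, 2000, 100    # ~5% activity, illustrative

    P = (rng.random((n_in, n_kc)) < 0.1).astype(float)  # sparse random projection

    def kc_code(x):
        h = x @ P
        code = np.zeros(n_kc)
        code[np.argsort(h)[-k_active:]] = 1.0   # keep top-k units active
        return code

    W = np.zeros(n_kc)                       # synapses onto a valence neuron
    odors = rng.random((20, n_in))
    valence = rng.choice([-1.0, 1.0], 20)

    for epoch in range(10):
        for x, v in zip(odors, valence):
            c = kc_code(x)
            pred = 1.0 if W @ c >= 0 else -1.0
            if pred != v:
                W += v * c                   # update only active synapses

    acc = np.mean([ (1.0 if W @ kc_code(x) >= 0 else -1.0) == v
                    for x, v in zip(odors, valence) ])
    print("training accuracy:", acc)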
https://read.qxmd.com/read/37725709/optimal-feedback-control-for-the-proportion-of-energy-cost-in-the-upper-arm-reaching-movement
#36
JOURNAL ARTICLE
Yoshiaki Taniai
The minimum expected energy cost model, which has been proposed as one of the optimization principles for movement planning, can reproduce many characteristics of the human upper-arm reaching movement when signal-dependent noise and the co-contraction of antagonist muscles are considered. Regarding the optimization principles, discussion has mainly been based on feedforward control; however, there is debate as to whether the central nervous system uses a feedforward or a feedback control process. Previous studies have shown that feedback control based on modified linear-quadratic gaussian (LQG) control, including multiplicative noise, can reproduce many characteristics of the reaching movement...
September 19, 2023: Neural Computation
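The backbone of such feedback-control models, sketched without the signal-dependent multiplicative noise the paper adds: a finite-horizon discrete-time LQR for a point-mass reach, solved by backward Riccati recursion (all costs and dynamics below are illustrative).

    import numpy as np

    # Finite-horizon LQR: K_t = (R + B'P B)^{-1} B'P A, P_t = Q + A'P(A - B K),
    # swept backward from the horizon, then applied as state feedback u = -K x.
    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity point mass
    B = np.array([[0.0], [dt]])
    Q = np.diag([1.0, 0.1])                 # state cost (accuracy, damping)
    R = np.array([[1e-4]])                  # effort (energy) cost
    T = 100

    P = Q.copy()
    Ks = []
    for _ in range(T):                      # backward Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    Ks.reverse()

    x = np.array([0.1, 0.0])                # start 10 cm from the target, at rest
    for t in range(T):
        u = -Ks[t] @ x                      # state feedback
        x = A @ x + B @ u
    print("final position error (m):", abs(x[0]))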
https://read.qxmd.com/read/37725708/winning-the-lottery-with-neural-connectivity-constraints-faster-learning-across-cognitive-tasks-with-spatially-constrained-sparse-rnns
#37
JOURNAL ARTICLE
Mikail Khona, Sarthak Chandra, Joy J Ma, Ila R Fiete
Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that use task-agnostic predetermined graphs with sparsity as low as 4%...
September 19, 2023: Neural Computation
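A hedged sketch of one way to build such a locality mask (the 2-D sheet and wiring radius are illustrative choices, not the paper's exact construction): connect units only within a radius on the sheet, so a fixed binary mask determines which recurrent synapses exist at all.

    import numpy as np

    # Locality-masked recurrent weights: units on a 2-D sheet, synapses only
    # within a wiring radius; the mask multiplies the weight matrix, so
    # distant connections never exist or train.
    rng = np.random.default_rng(11)
    side = 20                               # 20 x 20 = 400 units
    coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
    N = side * side

    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    radius = 2.5                            # illustrative wiring radius
    mask = (d <= radius) & (d > 0)          # local, no self-connections
    print(f"connection sparsity: {mask.mean():.4f}")  # a few percent, radius-dependent

    W = mask * rng.standard_normal((N, N)) / np.sqrt(mask.sum(axis=1, keepdims=True))

    # One step of leaky RNN dynamics under the mask:
    x = rng.standard_normal(N)
    x = 0.9 * x + 0.1 * np.tanh(W @ x)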
https://read.qxmd.com/read/37725706/a-tutorial-on-the-spectral-theory-of-markov-chains
#38
JOURNAL ARTICLE
Eddie Seabrook, Laurenz Wiskott
Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences. This is due in part to their versatility, and in part to the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We use tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains, with a particular focus on exploring properties of the eigenvalues and eigenvectors corresponding to these matrices...
September 19, 2023: Neural Computation
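Two of the spectral facts such a tutorial centers on, in runnable form (toy three-state chain): the stationary distribution is the leading left eigenvector of the transition matrix, and the second-largest eigenvalue modulus governs the mixing rate.

    import numpy as np

    # Spectral view of a Markov chain: stationary distribution from the
    # leading left eigenvector of P; mixing speed from the second-largest
    # eigenvalue modulus (SLEM).
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])         # row-stochastic transition matrix

    evals, evecs = np.linalg.eig(P.T)       # left eigenvectors of P
    order = np.argsort(-np.abs(evals))
    pi = np.real(evecs[:, order[0]])
    pi /= pi.sum()                          # normalize to a distribution
    print("stationary distribution:", pi)
    print("check pi P = pi:", np.allclose(pi @ P, pi))

    slem = np.abs(evals[order[1]])
    print("SLEM:", slem)
    print("relaxation time ~ 1/(1 - SLEM):", 1.0 / (1.0 - slem))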
https://read.qxmd.com/read/37725705/self-organization-of-nonlinearly-coupled-neural-fluctuations-into-synergistic-population-codes
#39
JOURNAL ARTICLE
Hengyuan Ma, Yang Qi, Pulin Gong, Jie Zhang, Wen-Lian Lu, Jianfeng Feng
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling between correlated fluctuations with the overall dynamics of the system. In this study, we investigate the emergence of synergistic neural population codes from the intrinsic dynamics of correlated neural fluctuations in a neural circuit model capturing realistic nonlinear noise coupling of spiking neurons...
September 19, 2023: Neural Computation
https://read.qxmd.com/read/37523463/exploring-trade-offs-in-spiking-neural-networks
#40
JOURNAL ARTICLE
Florian Bacho, Dominique Chu
Spiking neural networks (SNNs) have emerged as a promising alternative to traditional deep neural networks for low-power computing. However, the effectiveness of SNNs is not solely determined by their performance but also by their energy consumption, prediction speed, and robustness to noise. The recent method Fast & Deep, along with others, achieves fast and energy-efficient computation by constraining neurons to fire at most once. Known as time-to-first-spike (TTFS), this constraint, however, restricts the capabilities of SNNs in many aspects...
July 28, 2023: Neural Computation
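A hedged sketch of the TTFS constraint itself (the network and parameters below are illustrative, not the Fast & Deep method): integrate-and-fire neurons permitted to fire at most once, so the output code is simply each neuron's first spike time, if any.

    import numpy as np

    # Time-to-first-spike (TTFS) coding: leaky integrate-and-fire neurons
    # that fire at most once; the code is each neuron's first spike time.
    rng = np.random.default_rng(12)
    n_in, n_out, T, dt = 20, 5, 100, 1.0
    W = rng.random((n_in, n_out)) * 0.3
    in_times = rng.integers(0, 50, n_in)    # one input spike time per input

    v = np.zeros(n_out)
    fired = np.full(n_out, False)
    spike_time = np.full(n_out, np.inf)     # inf = "never fired"
    for t in range(T):
        drive = W[in_times == t].sum(axis=0)     # spikes arriving this step
        v = np.where(fired, v, v + drive - 0.01 * v * dt)  # leaky integration
        newly = (~fired) & (v >= 1.0)            # threshold crossing
        spike_time[newly] = t
        fired |= newly                           # at most one spike per neuron

    print("first-spike times (inf = silent):", spike_time)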