
Neural Computation

https://read.qxmd.com/read/30764745/multiclass-alpha-integration-of-scores-from-multiple-classifiers
#1
Gonzalo Safont, Addisson Salazar, Luis Vergara
Alpha integration methods have been used for integrating stochastic models and for score fusion in the context of detection (binary classification). Our work proposes separated score integration (SSI), a new method based on alpha integration to perform soft fusion of scores in multiclass classification, one of the most common problems in automatic classification. A theoretical derivation is presented to optimize the parameters of this method to achieve the least mean squared error (LMSE) or the minimum probability of error (MPE)...
February 14, 2019: Neural Computation
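As a concrete illustration of the building block behind SSI, the following minimal sketch computes Amari's alpha-mean of classifier scores; the LMSE/MPE optimization of the weights and alpha is the paper's contribution and is not reproduced here, and all names are ours:

import numpy as np

def alpha_mean(scores, weights, alpha):
    # Amari's alpha-integration of positive scores: alpha = -1 gives the
    # arithmetic mean, alpha -> 1 the geometric mean, alpha = 3 the
    # harmonic mean. Weights are assumed to sum to one. This is a generic
    # sketch of the operator, not the authors' SSI method.
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if np.isclose(alpha, 1.0):                 # limiting case: geometric mean
        return np.exp(np.sum(weights * np.log(scores)))
    f = scores ** ((1.0 - alpha) / 2.0)        # f_alpha(x) = x^((1-alpha)/2)
    return np.sum(weights * f) ** (2.0 / (1.0 - alpha))

# Fusing the class-1 scores of three classifiers for one sample:
print(alpha_mean([0.9, 0.7, 0.8], [1/3, 1/3, 1/3], alpha=-1.0))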
https://read.qxmd.com/read/30764744/decreasing-the-size-of-the-restricted-boltzmann-machine
#2
Yohei Saito, Takuya Kato
In this letter, we propose a method to decrease the number of hidden units of the restricted Boltzmann machine while avoiding a loss in performance, as quantified by the Kullback-Leibler divergence. The algorithm is then demonstrated by numerical simulations.
February 14, 2019: Neural Computation
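To make the objective concrete, here is a toy computation of the Kullback-Leibler cost of dropping a hidden unit, using exact enumeration on an RBM small enough to sum over. It illustrates only the quantity being controlled, not the reduction algorithm proposed in the letter; names and parameters are ours:

import itertools
import numpy as np

def rbm_visible_dist(W, b, c):
    # Exact p(v) for a tiny binary RBM, using the factorized marginal
    # p(v) proportional to exp(b.v) * prod_j (1 + exp(c_j + (v^T W)_j)).
    # Feasible only for a handful of units.
    n_vis, _ = W.shape
    vs = np.array(list(itertools.product([0, 1], repeat=n_vis)))
    log_p = vs @ b + np.log1p(np.exp(vs @ W + c)).sum(axis=1)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
p_full = rbm_visible_dist(W, b, c)
p_reduced = rbm_visible_dist(W[:, :2], b, c[:2])    # drop one hidden unit
print(np.sum(p_full * np.log(p_full / p_reduced)))  # KL(full || reduced)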
https://read.qxmd.com/read/30764743/deconstructing-odorant-identity-via-primacy-in-dual-networks
#3
Daniel R Kepple, Hamza Giaffar, Dima Rinberg, Alexei A Koulakov
In the olfactory system, odor percepts retain their identity despite substantial variations in concentration, timing, and background. We study a novel strategy for encoding intensity-invariant stimulus identity that is based on representing relative rather than absolute values of stimulus features. For example, in what is known as the primacy coding model, an odorant's identity is represented by the condition that some odorant receptors are activated more strongly than others. Because, in this scheme, the odorant identity depends only on the relative amplitudes of olfactory receptor responses, identity is invariant to changes in both intensity and monotonic nonlinear transformations of the neuronal responses...
February 14, 2019: Neural Computation
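The invariance claim is easy to see in code. A minimal sketch of the primacy idea (ours, not the paper's dual-network model): identity is the set of the p most strongly activated receptors, which depends only on the rank order of responses.

import numpy as np

def primacy_code(responses, p=3):
    # Identity = indices of the p most strongly activated receptors.
    return frozenset(np.argsort(responses)[-p:])

odor = np.array([0.1, 0.9, 0.3, 0.7, 0.5])
assert primacy_code(odor) == primacy_code(10 * odor)      # concentration scaling
assert primacy_code(odor) == primacy_code(np.tanh(odor))  # monotone nonlinearity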
https://read.qxmd.com/read/30764742/gated-orthogonal-recurrent-units-on-learning-to-forget
#4
Li Jing, Caglar Gulcehre, John Peurifoy, Yichen Shen, Max Tegmark, Marin Soljacic, Yoshua Bengio
We present a novel recurrent neural network (RNN)-based model that combines the remembering ability of unitary evolution RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory. We achieve this by extending restricted orthogonal evolution RNNs with a gating mechanism similar to gated recurrent unit RNNs with a reset gate and an update gate. Our model is able to outperform long short-term memory, gated recurrent units, and vanilla unitary or orthogonal RNNs on several long-term-dependency benchmark tasks...
February 14, 2019: Neural Computation
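A rough sketch of one step of such a gated orthogonal unit, under our own naming and with tanh standing in for the modReLU nonlinearity typical of unitary RNNs:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def goru_step(x, h, Wo, Wxr, Whr, br, Wxz, Whz, bz, Wxh, bh):
    # Wo is orthogonal, so the ungated recurrence preserves the norm of h
    # (the long-memory property of unitary RNNs); GRU-style reset and
    # update gates add the ability to forget.
    r = sigmoid(Wxr @ x + Whr @ h + br)              # reset gate
    z = sigmoid(Wxz @ x + Whz @ h + bz)              # update gate
    h_cand = np.tanh(Wxh @ x + r * (Wo @ h) + bh)    # candidate state
    return (1.0 - z) * h + z * h_cand                # gated update

# A random orthogonal recurrence matrix via QR decomposition:
Wo, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(8, 8)))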
https://read.qxmd.com/read/30764741/biologically-realistic-mean-field-models-of-conductance-based-networks-of-spiking-neurons-with-adaptation
#5
Matteo di Volo, Alberto Romagnoni, Cristiano Capone, Alain Destexhe
Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics...
February 14, 2019: Neural Computation
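For reference, the single-cell building block of these networks is the adaptive exponential integrate-and-fire (AdEx) model; below is a plain Euler step with generic textbook constants, not the paper's mean-field derivation:

import numpy as np

def adex_step(V, w, I, dt=0.1):
    # Membrane potential V (mV), adaptation current w (pA), input I (pA), dt (ms).
    C, gL, EL, VT, DT = 200.0, 10.0, -65.0, -50.0, 2.0      # pF, nS, mV
    a, b, tau_w, V_spike, V_reset = 2.0, 60.0, 500.0, 0.0, -58.0
    dV = (gL * (EL - V) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= V_spike:                 # spike: reset and spike-frequency adaptation
        V, w = V_reset, w + b
    return V, w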
https://read.qxmd.com/read/30764740/a-distributed-framework-for-the-construction-of-transport-maps
#6
Diego A Mesa, Justin Tantiongloc, Marcela Mendoza, Todd P Coleman
The need to reason about uncertainty in large, complex, and multimodal data sets has become increasingly common across modern scientific environments. The ability to transform samples from one distribution P to another distribution Q enables the solution to many problems in machine learning (e...
February 14, 2019: Neural Computation
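In one dimension, transporting samples has a closed form, T = F_Q^{-1} composed with F_P; the toy below estimates both CDFs empirically and pushes exponential samples onto a standard normal. It shows only the underlying notion of a transport map, not the variational, distributed construction developed in the paper:

import numpy as np

rng = np.random.default_rng(2)
p_samples = rng.exponential(1.0, size=10_000)   # samples from P
q_samples = rng.normal(0.0, 1.0, size=10_000)   # samples from Q
u = np.argsort(np.argsort(p_samples)) / len(p_samples)  # empirical F_P(x)
transported = np.quantile(q_samples, u)         # empirical F_Q^{-1}(u)
print(transported.mean(), transported.std())    # close to 0 and 1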
https://read.qxmd.com/read/30764739/estimating-scale-invariant-future-in-continuous-time
#7
Zoran Tiganj, Samuel J Gershman, Per B Sederberg, Marc W Howard
Natural learners must compute an estimate of future outcomes that follow from a stimulus in continuous time. Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially discounted future reward using the Bellman equation (model-free algorithms). An important drawback of model-based algorithms is that computational cost grows linearly with the amount of time to be simulated...
February 14, 2019: Neural Computation
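For orientation, the model-free quantity the abstract refers to is the exponentially discounted return; a minimal sketch (ours), together with the kind of geometric ladder of discount rates one would need to cover many timescales at once:

import numpy as np

def discounted_returns(rewards, gamma=0.95):
    # G_t = r_t + gamma * G_{t+1}, computed backward in time.
    G, running = np.zeros(len(rewards)), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

# A geometric ladder of decay rates (1 - gamma) covers timescales
# roughly uniformly on a log axis:
gammas = 1.0 - np.logspace(-3, -0.5, num=8)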
https://read.qxmd.com/read/30764738/filtering-compensation-for-delays-and-prediction-errors-during-sensorimotor-control
#8
F Crevecoeur, M Gevers
Compensating for sensorimotor noise and for temporal delays has been identified as a major function of the nervous system. However, these aspects have often been described separately, in the frameworks of optimal cue combination or motor prediction during movement planning. Control-theoretic models suggest that the two operations are performed simultaneously, and mounting evidence suggests that motor commands are based on sensory predictions rather than on sensory states. In this letter, we study the benefit of state estimation for predictive sensorimotor control...
February 14, 2019: Neural Computation
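A textbook sketch of the prediction step at issue: a delayed state estimate is rolled forward through the known dynamics over the stored motor commands, compensating the sensory delay. This is our simplification of the general idea, not the specific model fit in the letter:

import numpy as np

def predict_over_delay(x_est_delayed, A, B, u_history):
    # x_est_delayed: estimate of the state len(u_history) steps in the past;
    # A, B: linear dynamics x' = A x + B u; u_history: commands issued since.
    x = x_est_delayed
    for u in u_history:
        x = A @ x + B @ u
    return x   # estimate extrapolated to the present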
https://read.qxmd.com/read/30645180/state-space-representations-of-deep-neural-networks
#9
Michael Hauser, Sean Gunn, Samer Saab, Asok Ray
This letter deals with neural networks as dynamical systems governed by finite difference equations. It shows that the introduction of k-many skip connections into network architectures, such as residual networks and additive dense networks, defines kth-order dynamical equations on the layer-wise transformations...
January 15, 2019: Neural Computation
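The first-order case is compact enough to state in code: a residual block computes x_{k+1} = x_k + f_k(x_k), a forward-Euler discretization of x' = f(x); adding a second skip, x_{k+1} = x_k + (x_k - x_{k-1}) + f_k(x_k), yields a second-order difference equation. A toy forward pass, with layer shapes and nonlinearity of our choosing:

import numpy as np

def residual_forward(x, layers):
    for W in layers:
        x = x + np.tanh(W @ x)   # skip connection + layer map
    return x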
https://read.qxmd.com/read/30645179/gradient-descent-with-identity-initialization-efficiently-learns-positive-definite-linear-transformations-by-deep-residual-networks
#10
Peter L Bartlett, David P Helmbold, Philip M Long
We analyze algorithms for approximating a function f(x) = Φx mapping ℜ^d to ℜ^d...
January 15, 2019: Neural Computation
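A minimal numerical version of the regime analyzed: a deep linear residual network (I + A_L)...(I + A_1) trained by full-gradient descent from A_i = 0 (each layer at the identity) to match a positive definite Φ. Hyperparameters and structure are our choices for illustration:

import numpy as np

def chain(mats, d):
    # Product of a list of matrices (identity for the empty list).
    out = np.eye(d)
    for m in mats:
        out = out @ m
    return out

def train_linear_resnet(Phi, L=4, lr=0.05, steps=500):
    d = Phi.shape[0]
    A = [np.zeros((d, d)) for _ in range(L)]
    for _ in range(steps):
        P = [np.eye(d) + Ai for Ai in A]           # P_i = I + A_i
        G = 2.0 * (chain(P[::-1], d) - Phi)        # dLoss/dM, M = P_L ... P_1
        for i in range(L):
            left = chain(P[i + 1:][::-1], d)       # P_L ... P_{i+1}
            right = chain(P[:i][::-1], d)          # P_{i-1} ... P_1
            A[i] -= lr * left.T @ G @ right.T      # gradient step on layer i
    return chain([np.eye(d) + Ai for Ai in A][::-1], d)

Phi = np.array([[2.0, 0.3], [0.3, 1.5]])           # positive definite target
M = train_linear_resnet(Phi)
print(np.linalg.norm(M - Phi))                     # should be near zero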
https://read.qxmd.com/read/30645178/scalable-and-flexible-unsupervised-feature-selection
#11
Haojie Hu, Rong Wang, Xiaojun Yang, Feiping Nie
Recently, graph-based unsupervised feature selection algorithms (GUFS) have been shown to efficiently handle prevalent high-dimensional unlabeled data. One common drawback of existing graph-based approaches is that they tend to be time-consuming and to require large storage, especially as data sets grow. Research has started to use anchors to accelerate graph-based learning models for feature selection, but the hard linear constraint between the data matrix and the lower-dimensional representation is often overstrict in many applications...
January 15, 2019: Neural Computation
https://read.qxmd.com/read/30645177/forgetting-memories-and-their-attractiveness
#12
Enzo Marinari
We study numerically the memory that forgets, introduced by Parisi in 1986, which bounds the synaptic strength with a mechanism that avoids confusion, allows the most recently learned patterns to be remembered, and has a very well-defined physiological meaning. We analyze a number of features of this learning scheme for a finite number of neurons and a finite number of patterns, and we discuss how the system behaves in the large but finite N limit...
January 15, 2019: Neural Computation
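A toy version of the bounded-strength mechanism (our parameters and clipping rule, not Parisi's exact prescription): Hebbian increments are hard-clipped, so old patterns are gradually overwritten and the most recent ones are preferentially recalled.

import numpy as np

rng = np.random.default_rng(3)
N, bound = 200, 0.02
J = np.zeros((N, N))
for _ in range(50):                                   # stream of random patterns
    xi = rng.choice([-1.0, 1.0], size=N)
    J = np.clip(J + np.outer(xi, xi) / N, -bound, bound)
np.fill_diagonal(J, 0.0)
overlap = xi @ np.sign(J @ xi) / N                    # one-step recall of the
print(f"overlap with most recent pattern: {overlap:.2f}")  # last pattern: near 1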
https://read.qxmd.com/read/30576619/dynamic-computational-model-of-the-human-spinal-cord-connectome
#13
Jeffrey E Arle, Nicolae Iftimia, Jay L Shils, Longzhi Mei, Kristen W Carlson
Connectomes abound, but few exist for the human spinal cord. Using anatomical data in the literature, we constructed a draft connectivity map of the human spinal cord connectome, providing a template on which the many calibrations of specialized behavior can be overlaid and the basis for an initial computational model. A thorough literature review gleaned cell types, connectivity, and indications of connection strength. Where human data were not available, we drew on data from other species that have been studied. Cadaveric spinal cord measurements, cross-sectional histology images, and cytoarchitectural data regarding cell size and density served as the starting point for estimating numbers of neurons...
December 21, 2018: Neural Computation
https://read.qxmd.com/read/30576618/functional-diversity-in-the-retina-improves-the-population-code
#14
Michael J Berry Ii, Felix Lebois, Avi Ziskind, Rava Azeredo da Silveira
Within a given brain region, individual neurons exhibit a wide variety of feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and to compare their performance against that of a matched, homogeneous neural population with the same number of cells and spikes. Analyzing large populations of retinal ganglion cells, we found that the real, heterogeneous population can yield a discrimination error lower than that of the homogeneous population by several orders of magnitude and consequently can encode much more visual information...
December 21, 2018: Neural Computation
https://read.qxmd.com/read/30576617/first-passage-time-memory-lifetimes-for-simple-multistate-synapses-beyond-the-eigenvector-requirement
#15
Terry Elliott
Models of associative memory with discrete-strength synapses are palimpsests, learning new memories by forgetting old ones. Memory lifetimes can be defined by the mean first passage time (MFPT) for a perceptron's activation to fall below firing threshold. By imposing the condition that the vector of possible strengths available to a synapse is a left eigenvector of the stochastic matrix governing transitions in strength, we previously derived results for MFPTs and first passage time (FPT) distributions in models with simple, multistate synapses...
December 21, 2018: Neural Computation
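The underlying first-passage computation is standard: for a discrete-time Markov chain, the vector of MFPTs to an absorbing set solves (I - Q)t = 1, with Q the transition matrix restricted to the transient states. A generic sketch (the paper's models track a perceptron's activation driven by synaptic transitions, not this bare chain):

import numpy as np

def mean_first_passage_times(P, absorbing):
    transient = [i for i in range(P.shape[0]) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]
    t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
    return dict(zip(transient, t))

# Birth-death chain on {0, 1, 2, 3}, absorbing at state 0:
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
print(mean_first_passage_times(P, {0}))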
https://read.qxmd.com/read/30576616/accelerating-nonnegative-matrix-factorization-algorithms-using-extrapolation
#16
Andersen Man Shun Ang, Nicolas Gillis
We propose a general framework to significantly accelerate algorithms for nonnegative matrix factorization (NMF). The framework is inspired by the extrapolation scheme used to accelerate gradient methods in convex optimization and by the method of parallel tangents. The use of extrapolation in the context of exact coordinate descent algorithms for the nonconvex NMF problem is, however, novel. We illustrate the performance of this approach on two state-of-the-art NMF algorithms, accelerated hierarchical alternating least squares and alternating nonnegative least squares, using synthetic, image, and document data sets...
December 21, 2018: Neural Computation
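The extrapolation idea itself fits in a few lines. The sketch below bolts a Nesterov-style extrapolation step onto plain multiplicative NMF updates; the paper applies it to A-HALS and ANLS and tunes the extrapolation weight adaptively, so treat this only as an illustration of the scheme:

import numpy as np

def nmf_extrapolated(X, r, beta=0.3, iters=200, eps=1e-9):
    m, n = X.shape
    rng = np.random.default_rng(4)
    W, H = rng.random((m, r)), rng.random((r, n))
    W_prev, H_prev = W.copy(), H.copy()
    for _ in range(iters):
        H_new = H * (W.T @ X) / (W.T @ W @ H + eps)              # base update for H
        W_new = W * (X @ H_new.T) / (W @ H_new @ H_new.T + eps)  # and for W
        # Extrapolate along the previous direction, then project back
        # onto the nonnegative orthant:
        H = np.maximum(H_new + beta * (H_new - H_prev), 0.0)
        W = np.maximum(W_new + beta * (W_new - W_prev), 0.0)
        W_prev, H_prev = W_new, H_new
    return W_new, H_new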
https://read.qxmd.com/read/30576615/learning-invariant-features-in-modulatory-networks-through-conflict-and-ambiguity
#17
W Shane Grant, Laurent Itti
This work lays the foundation for a framework of cortical learning based on the idea of a competitive column, which is inspired by the functional organization of neurons in the cortex. A column describes a prototypical organization for neurons that gives rise to an ability to learn scale, rotation, and translation-invariant features. This is empowered by a recently developed learning rule, conflict learning, which enables the network to learn over both driving and modulatory feedforward, feedback, and lateral inputs...
December 21, 2018: Neural Computation
https://read.qxmd.com/read/30576614/calculating-the-mutual-information-between-two-spike-trains
#18
Conor Houghton
It is difficult to estimate the mutual information between spike trains because established methods require more data than are usually available. Kozachenko-Leonenko estimators promise to solve this problem but include a smoothing parameter that must be set. We propose here that the smoothing parameter can be selected by maximizing the estimated unbiased mutual information. This is tested on fictive data and shown to work very well.
December 21, 2018: Neural Computation
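For context, the one-dimensional Kozachenko-Leonenko entropy estimator reads H = psi(N) - psi(k) + (1/N) sum_i log(2 r_i), with r_i the distance from sample i to its k-th nearest neighbour; k is the smoothing parameter of the kind the letter proposes to select by maximizing the estimated mutual information. A sketch of the base estimator only:

import numpy as np
from scipy.special import digamma

def kl_entropy(x, k=3):
    x = np.asarray(x, dtype=float)
    n = len(x)
    dists = np.abs(x[:, None] - x[None, :])
    r_k = np.sort(dists, axis=1)[:, k]   # k-th neighbour (column 0 is self)
    return digamma(n) - digamma(k) + np.mean(np.log(2.0 * r_k))

rng = np.random.default_rng(5)
print(kl_entropy(rng.normal(size=1000)))  # should be near 0.5*log(2*pi*e) = 1.42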
https://read.qxmd.com/read/30576613/modeling-the-correlated-activity-of-neural-populations-a-review
#19
Christophe Gardella, Olivier Marre, Thierry Mora
The principles of neural encoding and computation are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons...
December 21, 2018: Neural Computation
https://read.qxmd.com/read/30576612/systems-of-bounded-rational-agents-with-information-theoretic-constraints
#20
Sebastian Gottwald, Daniel A Braun
Specialization and hierarchical organization are important features of efficient collaboration in economic, artificial, and biological systems. Here, we investigate the hypothesis that both features can be explained by the fact that each entity of such a system is limited in a certain way. We propose an information-theoretic approach based on a free energy principle in order to computationally analyze systems of bounded rational agents that deal with such limitations optimally. We find that specialization allows agents to focus on fewer tasks, thus leading to more efficient execution, but in turn requires coordination in hierarchical structures of specialized experts and coordinating units...
December 21, 2018: Neural Computation
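For a single agent, the information-theoretically constrained optimum has a well-known self-consistent form, p(a|s) proportional to p(a) exp(beta * U(s, a)), solvable by a Blahut-Arimoto-style iteration; the paper builds hierarchies of such bounded-rational units. A one-agent sketch, with names and the toy utility being ours:

import numpy as np

def bounded_rational_policy(U, rho, beta, iters=100):
    # U: utilities, shape (n_states, n_actions); rho: state distribution;
    # beta: inverse information cost trading utility against I(S;A).
    n_s, n_a = U.shape
    p_a = np.full(n_a, 1.0 / n_a)                     # marginal over actions
    for _ in range(iters):
        logits = np.log(p_a)[None, :] + beta * U      # unnormalized log-policy
        policy = np.exp(logits - logits.max(axis=1, keepdims=True))
        policy /= policy.sum(axis=1, keepdims=True)   # p(a|s)
        p_a = rho @ policy                            # update marginal p(a)
    return policy

U = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.2, 1.0]])
print(bounded_rational_policy(U, rho=np.array([0.5, 0.5]), beta=2.0))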