https://read.qxmd.com/read/38457768/toward-improving-the-generation-quality-of-autoregressive-slot-vaes
#1
JOURNAL ARTICLE
Patrick Emami, Pan He, Sanjay Ranka, Anand Rangarajan
Unconditional scene inference and generation are challenging to learn jointly with a single compositional model. Despite encouraging progress on models that extract object-centric representations ("slots") from images, unconditional generation of scenes from slots has received less attention. This is primarily because learning the multiobject relations necessary to imagine coherent scenes is difficult. We hypothesize that most existing slot-based models have a limited ability to learn object correlations. We propose two improvements that strengthen object correlation learning...
February 28, 2024: Neural Computation
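
The abstract above truncates before the two proposed improvements, so the following is only a generic sketch of the underlying idea: an autoregressive prior over slot latents, where each slot is predicted from the slots generated so far, so that object correlations can be captured at generation time. All names and sizes here are illustrative, not the paper's.

```python
# Minimal sketch (not the paper's exact model): an autoregressive prior over
# K slot latents, p(z_1..z_K) = prod_k p(z_k | z_<k), so that correlations
# between objects can be captured when generating scenes from slots.
import torch
import torch.nn as nn

class AutoregressiveSlotPrior(nn.Module):
    def __init__(self, slot_dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(slot_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2 * slot_dim)  # mean and log-variance

    def forward(self, slots):  # slots: (B, K, slot_dim), teacher-forced
        B, K, D = slots.shape
        start = torch.zeros(B, 1, D, device=slots.device)
        h, _ = self.rnn(torch.cat([start, slots[:, :-1]], dim=1))
        mu, logvar = self.head(h).chunk(2, dim=-1)
        return mu, logvar  # parameters of each conditional p(z_k | z_<k)
```
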
https://read.qxmd.com/read/38457767/learning-korobov-functions-by-correntropy-and-convolutional-neural-networks
#2
JOURNAL ARTICLE
Zhiying Fang, Tong Mao, Jun Fan
Combining information-theoretic learning with deep learning has gained significant attention in recent years, as it offers a promising approach to tackle the challenges posed by big data. However, the theoretical understanding of convolutional structures, which are vital to many structured deep learning models, remains incomplete. To partially bridge this gap, this letter aims to develop generalization analysis for deep convolutional neural network (CNN) algorithms using learning theory. Specifically, we focus on investigating robust regression using correntropy-induced loss functions derived from information-theoretic learning...
February 28, 2024: Neural Computation
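
For concreteness, one common form of the correntropy-induced (Welsch-type) loss from information-theoretic learning is sketched below; the paper's exact variant and scaling may differ.

```python
# For residual u = y - f(x), one common correntropy-induced loss is
#   L_sigma(u) = sigma^2 * (1 - exp(-u^2 / (2 sigma^2))).
# It behaves like squared error for small |u| and saturates for outliers,
# which is what makes the resulting regression robust.
import numpy as np

def correntropy_loss(y_true, y_pred, sigma=1.0):
    u = y_true - y_pred
    return (sigma ** 2) * (1.0 - np.exp(-u ** 2 / (2 * sigma ** 2)))

residuals = np.array([0.1, 0.5, 5.0])    # the 5.0 outlier is down-weighted
print(correntropy_loss(residuals, 0.0))  # compare with 0.5 * residuals**2
```
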
https://read.qxmd.com/read/38457766/vector-symbolic-finite-state-machines-in-attractor-neural-networks
#3
JOURNAL ARTICLE
Madison Cotteret, Hugh Greatorex, Martin Ziegler, Elisabetta Chicca
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors...
February 28, 2024: Neural Computation
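
A minimal vector-symbolic sketch of the idea (not the paper's construction rules): states and stimuli are dense random bipolar vectors, and each transition is stored by binding the current state and stimulus with an elementwise product, then hetero-associating the result with the next state.

```python
# Toy two-state FSM in a Hopfield-style network; vectors and the one-shot
# readout are illustrative, not the paper's exact dynamics.
import numpy as np

rng = np.random.default_rng(0)
N = 2048                                   # vector dimension
state = {s: rng.choice([-1, 1], N) for s in ["A", "B"]}
stim  = {u: rng.choice([-1, 1], N) for u in ["go", "stop"]}

# transition table: (current state, stimulus) -> next state
table = {("A", "go"): "B", ("B", "go"): "A",
         ("A", "stop"): "A", ("B", "stop"): "B"}

W = np.zeros((N, N))
for (s, u), s_next in table.items():
    W += np.outer(state[s_next], state[s] * stim[u])  # bind, then associate

def step(x, u):
    return np.sign(W @ (x * stim[u]))      # transition enacted by the weights

x = step(state["A"], "go")
print("now in B:", np.mean(x == state["B"]))  # overlap close to 1
```
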
https://read.qxmd.com/read/38457764/object-centric-scene-representations-using-active-inference
#4
JOURNAL ARTICLE
Toon Van de Maele, Tim Verbelen, Pietro Mazzaglia, Stefano Ferraro, Bart Dhoedt
Representing a scene and its constituent objects from raw sensory data is a core ability for enabling robots to interact with their environment. In this letter, we propose a novel approach for scene understanding, leveraging an object-centric generative model that enables an agent to infer object category and pose in an allocentric reference frame using active inference, a neuro-inspired framework for action and perception. For evaluating the behavior of an active vision agent, we also propose a new benchmark where, given a target viewpoint of a particular object, the agent needs to find the best matching viewpoint given a workspace with randomly positioned objects in 3D...
February 28, 2024: Neural Computation
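
As a toy illustration of active viewpoint selection in this spirit, the sketch below scores candidate viewpoints by expected information gain about the object category, one ingredient of expected free energy (the risk/preference term is omitted). All distributions are made up.

```python
# Illustrative only: pick the next viewpoint that most reduces uncertainty
# about the object category under the agent's generative model.
import numpy as np

q = np.array([0.5, 0.3, 0.2])           # current belief over 3 categories
# p_obs[v, c, o]: predicted prob. of observation o at viewpoint v given c
p_obs = np.random.default_rng(1).dirichlet(np.ones(4), size=(5, 3))

def expected_info_gain(v):
    p_o = q @ p_obs[v]                  # predictive distribution over o
    H_prior = -(q * np.log(q)).sum()
    gain = 0.0
    for o in range(p_obs.shape[2]):
        post = q * p_obs[v, :, o] / p_o[o]
        gain += p_o[o] * (H_prior + (post * np.log(post)).sum())
    return gain

best = max(range(5), key=expected_info_gain)
print("next viewpoint:", best)
```
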
https://read.qxmd.com/read/38457763/mathematical-modeling-of-pi3k-akt-pathway-in-microglia
#5
JOURNAL ARTICLE
Alireza Poshtkohi, John Wade, Liam McDaid, Junxiu Liu, Mark L Dallas, Angela Bithell
The motility of microglia involves intracellular signaling pathways that are predominantly controlled by changes in cytosolic Ca2+ and activation of PI3K/Akt (phosphoinositide-3-kinase/protein kinase B). In this letter, we develop a novel biophysical model for cytosolic Ca2+ activation of the PI3K/Akt pathway in microglia, where Ca2+ influx is mediated by both P2Y purinergic receptors (P2YR) and P2X purinergic receptors (P2XR). The model parameters are estimated by employing optimization techniques to fit the model to phosphorylated Akt (pAkt) experimental in vitro data...
February 28, 2024: Neural Computation
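
The general fitting setup can be sketched as follows, with a one-state toy ODE and fabricated pAkt samples standing in for the paper's full biophysical model and in vitro data.

```python
# Generic parameter-estimation sketch: fit rate constants of a toy
# activation ODE to a pAkt time course. Data values below are made up.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_data = np.array([0., 5., 10., 20., 40.])       # minutes (hypothetical)
pakt_data = np.array([0.0, 0.4, 0.7, 0.8, 0.5])  # normalized pAkt (made up)

def rhs(t, y, k_act, k_deact):
    pakt = y[0]                                   # decaying Ca2+-driven input
    return [k_act * (1 - pakt) * np.exp(-t / 15.0) - k_deact * pakt]

def residuals(params):
    sol = solve_ivp(rhs, (0, 40), [0.0], t_eval=t_data, args=tuple(params))
    return sol.y[0] - pakt_data

fit = least_squares(residuals, x0=[0.1, 0.05], bounds=(0, np.inf))
print("estimated rates:", fit.x)
```
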
https://read.qxmd.com/read/38457762/instance-specific-model-perturbation-improves-generalized-zero-shot-learning
#6
JOURNAL ARTICLE
Guanyu Yang, Kaizhu Huang, Rui Zhang, Xi Yang
Zero-shot learning (ZSL) refers to the design of predictive functions on new classes (unseen classes) of data that have never been seen during training. In a more practical scenario, generalized zero-shot learning (GZSL) requires predicting both seen and unseen classes accurately. In the absence of target samples, many GZSL models may overfit the training data and are inclined to classify individuals into categories seen during training. To alleviate this problem, we develop a parameter-wise adversarial training process that promotes robust recognition of seen classes, together with a novel model perturbation mechanism applied at test time to ensure sufficient sensitivity to unseen classes...
February 28, 2024: Neural Computation
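
A hedged sketch of what one parameter-wise adversarial step can look like, in the spirit of adversarial weight perturbation; the paper's exact mechanism may differ.

```python
# Assumed shape of a parameter-wise adversarial step: perturb the weights in
# the worst-case (gradient-sign) direction, compute the loss there, restore.
import torch

def adversarial_param_step(model, loss_fn, x, y, eps=1e-3):
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(eps * g.sign())        # worst-case parameter perturbation
    adv_loss = loss_fn(model(x), y)       # train against the perturbed model
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(eps * g.sign())        # restore original parameters
    return adv_loss
```
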
https://read.qxmd.com/read/38457757/an-overview-of-the-free-energy-principle-and-related-research
#7
JOURNAL ARTICLE
Zhengquan Zhang, Feng Xu
The free energy principle and its corollary, the active inference framework, serve as theoretical foundations in the domain of neuroscience, explaining the genesis of intelligent behavior. This principle states that the processes of perception, learning, and decision making within an agent are all driven by the objective of "minimizing free energy," which manifests in the following behaviors: learning and employing a generative model of the environment to interpret observations, thereby achieving perception, and selecting actions to maintain a stable preferred state and minimize the uncertainty about the environment, thereby achieving decision making...
February 28, 2024: Neural Computation
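
The quantity being minimized is the variational free energy, which for observations o, hidden states s, beliefs q(s), and generative model p(o, s) decomposes as follows.

```latex
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  \;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0}
  \;-\; \ln p(o).
```

Minimizing F therefore drives q(s) toward the true posterior (perception) and, because the KL term is nonnegative, F upper-bounds the surprise -ln p(o) that action seeks to keep low.
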
https://read.qxmd.com/read/38457756/obtaining-lower-query-complexities-through-lightweight-zeroth-order-proximal-gradient-algorithms
#8
JOURNAL ARTICLE
Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng Huang
Zeroth-order (ZO) optimization is a key technique for machine learning problems where gradient calculation is expensive or impossible. Several variance-reduced ZO proximal algorithms have been proposed to speed up ZO optimization for nonsmooth problems, and all of them opted for the coordinated ZO estimator over the random ZO estimator when approximating the true gradient, since the former is more accurate. While the random ZO estimator introduces a larger error and makes convergence analysis more challenging than the coordinated ZO estimator does, it requires only O(1) computation, significantly less than the O(d) computation of the coordinated ZO estimator, where d is the dimension of the problem space...
February 28, 2024: Neural Computation
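
The two estimators contrasted above have standard forms; a minimal sketch for a black-box objective f:

```python
# Coordinated estimator: d + 1 function evaluations (O(d));
# random estimator: 2 evaluations (O(1)), unbiased for the Gaussian-smoothed
# objective but with much higher variance.
import numpy as np

def coordinated_zo_grad(f, x, mu=1e-4):
    d = x.size
    g = np.zeros(d)
    fx = f(x)
    for i in range(d):
        e = np.zeros(d); e[i] = mu
        g[i] = (f(x + e) - fx) / mu
    return g

def random_zo_grad(f, x, mu=1e-4, rng=np.random.default_rng()):
    u = rng.standard_normal(x.size)       # single random direction
    return (f(x + mu * u) - f(x)) / mu * u

f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 3.0])
print(coordinated_zo_grad(f, x))          # close to the true gradient 2x
print(random_zo_grad(f, x))               # noisy single-direction estimate
```
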
https://read.qxmd.com/read/38457753/column-row-convolutional-neural-network-reducing-parameters-for-efficient-image-processing
#9
JOURNAL ARTICLE
Seongil Im, Jae-Seung Jeong, Junseo Lee, Changhwan Shin, Jeong Ho Cho, Hyunsu Ju
Recent advancements in deep learning have achieved significant progress by increasing the number of parameters in a given model. However, this comes at the cost of computing resources, prompting researchers to explore model compression techniques that reduce the number of parameters while maintaining or even improving performance. Convolutional neural networks (CNNs) have been recognized as more efficient and effective than fully connected (FC) networks. We propose a column-row convolutional neural network (CRCNN) in this letter that applies 1D convolution to image data, significantly reducing the number of learning parameters and operational steps...
February 28, 2024: Neural Computation
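
A rough sketch of the column/row idea (not the paper's exact CRCNN architecture): run 1D convolutions along image rows and along image columns separately, replacing the 2D kernels of a standard CNN; layer names and sizes below are illustrative.

```python
# 1D convolutions over rows and columns of an image; a k-tap 1D kernel has
# k weights versus k*k for a 2D kernel, which is the source of the savings.
import torch
import torch.nn as nn

class ColumnRowBlock(nn.Module):
    def __init__(self, channels=8, k=5):
        super().__init__()
        self.row_conv = nn.Conv1d(1, channels, k, padding=k // 2)
        self.col_conv = nn.Conv1d(1, channels, k, padding=k // 2)

    def forward(self, img):                  # img: (B, 1, H, W)
        B, _, H, W = img.shape
        rows = img.reshape(B * H, 1, W)      # each row as a 1D signal
        cols = img.permute(0, 1, 3, 2).reshape(B * W, 1, H)
        r = self.row_conv(rows).reshape(B, H, -1, W).mean(dim=1)
        c = self.col_conv(cols).reshape(B, W, -1, H).mean(dim=1)
        return r, c                          # (B, C, W) and (B, C, H)

feats = ColumnRowBlock()(torch.randn(2, 1, 28, 28))
print(feats[0].shape, feats[1].shape)
```
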
https://read.qxmd.com/read/38457752/probing-the-structure-and-functional-properties-of-the-dropout-induced-correlated-variability-in-convolutional-neural-networks
#10
JOURNAL ARTICLE
Xu Pan, Ruben Coen-Cagli, Odelia Schwartz
Computational neuroscience studies have shown that the structure of neural variability in response to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied the above questions in a convolutional neural network model with dropout in both the training and testing phases...
February 28, 2024: Neural Computation
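
The measurement itself is easy to sketch: leave dropout stochastic at test time, repeat forward passes on one fixed input, and estimate the covariance across these "trials". The toy network below is illustrative, not the paper's model.

```python
# Trial-by-trial covariance induced by Monte Carlo dropout on a fixed input.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Dropout(p=0.5),
                    nn.Linear(50, 10))
net.train()                                  # keep dropout active at test time
x = torch.randn(1, 100)                      # one fixed stimulus

with torch.no_grad():
    trials = torch.stack([net(x).squeeze(0) for _ in range(1000)])

cov = torch.cov(trials.T)                    # (10, 10) noise covariance
print(cov.diagonal())                        # per-unit trial-to-trial variance
```
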
https://read.qxmd.com/read/38457750/ca3-circuit-model-compressing-sequential-information-in-theta-oscillation-and-replay
#11
JOURNAL ARTICLE
Satoshi Kuroki, Kenji Mizuseki
The hippocampus plays a critical role in the compression and retrieval of sequential information. During wakefulness, it achieves this through theta phase precession and theta sequences. Subsequently, during periods of sleep or rest, the compressed information reactivates through sharp-wave ripple events, manifesting as memory replay. However, how these sequential neuronal activities are generated and how they store information about the external environment remain unknown. We developed a hippocampal cornu ammonis 3 (CA3) computational model based on anatomical and electrophysiological evidence from the biological CA3 circuit to address these questions...
February 28, 2024: Neural Computation
https://read.qxmd.com/read/38457749/frequency-propagation-multimechanism-learning-in-nonlinear-physical-networks
#12
JOURNAL ARTICLE
Vidyesh Rao Anisetti, Ananth Kandala, Benjamin Scellier, J M Schwarz
We introduce frequency propagation, a learning algorithm for nonlinear physical networks. In a resistive electrical circuit with variable resistors, an activation current is applied at a set of input nodes at one frequency and an error current is applied at a set of output nodes at another frequency. The voltage response of the circuit to these boundary currents is the superposition of an activation signal and an error signal, whose coefficients can be read off at the two distinct frequencies in the frequency domain. Each conductance is updated proportionally to the product of the two coefficients...
February 28, 2024: Neural Computation
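
Schematically, the superposition and the local update can be written as below; the notation is ours, not the paper's, and the edge-wise form of the update is an assumption.

```latex
% Node voltages superpose an activation component at \omega_1 and an error
% component at \omega_2:
v_i(t) \;=\; a_i \cos(\omega_1 t) \;+\; e_i \cos(\omega_2 t),
% and the conductance g_{ij} of the resistor on edge (i, j) is nudged in
% proportion to the product of the corresponding coefficients:
\Delta g_{ij} \;\propto\; (a_i - a_j)\,(e_i - e_j).
```
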
https://read.qxmd.com/read/38457747/lateral-connections-improve-generalizability-of-learning-in-a-simple-neural-network
#13
JOURNAL ARTICLE
Garrett Crutcher
To navigate the world around us, neural circuits rapidly adapt to their environment learning generalizable strategies to decode information. When modeling these learning strategies, network models find the optimal solution to satisfy one task condition but fail when introduced to a novel task or even a different stimulus in the same space. In the experiments described in this letter, I investigate the role of lateral gap junctions in learning generalizable strategies to process information. Lateral gap junctions are formed by connexin proteins creating an open pore that allows for direct electrical signaling between two neurons...
February 28, 2024: Neural Computation
https://read.qxmd.com/read/38363661/active-learning-for-discrete-latent-variable-models
#14
JOURNAL ARTICLE
Aditi Jha, Zoe C Ashwood, Jonathan W Pillow
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models...
February 16, 2024: Neural Computation
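
A generic version of maximum-mutual-information input selection for a discrete latent z can be sketched as follows; the paper's estimator for latent variable regression models is more involved, and all distributions here are made up.

```python
# Pick the input x whose response y is most informative about the discrete
# latent z, i.e. maximize I(y; z | x) under the current posterior q(z).
import numpy as np

rng = np.random.default_rng(0)
K, n_inputs, n_out = 3, 20, 5
q_z = np.ones(K) / K                          # current posterior over latents
lik = rng.dirichlet(np.ones(n_out), size=(n_inputs, K))  # p(y | x, z)

def mutual_information(x):
    p_y = q_z @ lik[x]                        # marginal p(y | x)
    return sum(q_z[z] * np.sum(lik[x, z] * np.log(lik[x, z] / p_y))
               for z in range(K))

x_star = max(range(n_inputs), key=mutual_information)
print("most informative input:", x_star)
```
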
https://read.qxmd.com/read/38363660/advantages-of-persistent-cohomology-in-estimating-animal-location-from-grid-cell-population-activity
#15
JOURNAL ARTICLE
Daisuke Kawahara, Shigeyoshi Fujisawa
Many cognitive functions are represented as cell assemblies. In the case of spatial navigation, the population activity of place cells in the hippocampus and grid cells in the entorhinal cortex represents self-location in the environment. The brain cannot directly observe self-location information in the environment. Instead, it relies on sensory information and memory to estimate self-location. Therefore, estimating low-dimensional dynamics, such as the movement trajectory of an animal exploring its environment, from only the high-dimensional neural activity is important in deciphering the information represented in the brain...
February 16, 2024: Neural Computation
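
The persistence computation itself can be sketched with the ripser package; a noisy circle stands in for real grid-cell population activity (which is expected to trace a torus, with two prominent H1 classes) so the example runs self-contained.

```python
# Persistent homology of a point cloud; with population-activity vectors in
# place of this synthetic circle, long H1 bars reveal the latent topology.
import numpy as np
from ripser import ripser  # pip install ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 300)          # latent "position" variable
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((300, 2))

dgms = ripser(X, maxdim=1)["dgms"]
h1 = dgms[1]                                     # 1-cycles (loops)
print("longest H1 bar (birth, death):", h1[np.argmax(h1[:, 1] - h1[:, 0])])
```
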
https://read.qxmd.com/read/38363659/learning-only-on-boundaries-a-physics-informed-neural-operator-for-solving-parametric-partial-differential-equations%C3%A2-in-complex-geometries
#16
JOURNAL ARTICLE
Zhiwei Fang, Sifan Wang, Paris Perdikaris
Recently, deep learning surrogates and neural operators have shown promise in solving partial differential equations (PDEs). However, they often require a large amount of training data and are limited to bounded domains. In this work, we present a novel physics-informed neural operator method to solve parameterized boundary value problems without labeled data. By reformulating the PDEs into boundary integral equations (BIEs), we can train the operator network solely on the boundary of the domain...
February 16, 2024: Neural Computation
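
The kind of reformulation involved can be illustrated for the Laplace equation: Green's representation formula recovers the solution anywhere inside the domain from boundary data alone, so a BIE residual can be enforced using samples on the boundary only. (This is the textbook identity, not the paper's specific operator.)

```latex
% Green's representation for \Delta u = 0 in \Omega, with free-space
% Green's function G:
u(x) \;=\; \int_{\partial\Omega}
  \Big[\, G(x, y)\,\frac{\partial u}{\partial n}(y)
        \;-\; \frac{\partial G}{\partial n_y}(x, y)\, u(y) \,\Big]\, \mathrm{d}s(y),
  \qquad x \in \Omega.
```
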
https://read.qxmd.com/read/38363658/quantifying-and-maximizing-the-information-flux-in-recurrent-neural-networks
#17
JOURNAL ARTICLE
Claus Metzner, Marius E Yamakou, Dennis Voelkl, Achim Schilling, Patrick Krauss
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x(t) and x(t+1). Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems...
February 16, 2024: Neural Computation
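
For a network small enough to enumerate, a plug-in estimate of I[x(t), x(t+1)] from a sampled trajectory is straightforward; the sketch below uses Glauber dynamics on a random symmetric weight matrix, which is where the intractability at large N (state space 2^N) becomes apparent.

```python
# Plug-in estimate of the information flux for a small binary network.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
N, T = 6, 100_000
W = rng.standard_normal((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)

x = rng.integers(0, 2, N)
pairs, prev = Counter(), None
for _ in range(T):                      # Glauber dynamics on +-1 spins
    i = rng.integers(N)
    p_on = 1 / (1 + np.exp(-2 * (W[i] @ (2 * x - 1))))
    x[i] = rng.random() < p_on
    s = tuple(x)
    if prev is not None:
        pairs[(prev, s)] += 1
    prev = s

joint = np.array(list(pairs.values()), float); joint /= joint.sum()
keys = list(pairs.keys())
p1, p2 = Counter(), Counter()
for (a, b), p in zip(keys, joint):
    p1[a] += p; p2[b] += p
I = sum(p * np.log2(p / (p1[a] * p2[b])) for (a, b), p in zip(keys, joint))
print(f"I[x(t), x(t+1)] ~ {I:.3f} bits")
```
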
https://read.qxmd.com/read/38363657/evidence-for-multiscale-multiplexed-representation-of-visual-features-in-eeg
#18
JOURNAL ARTICLE
Hamid Karimi-Rouzbahani
Distinct neural processes such as sensory and memory processes are often encoded over distinct timescales of neural activations. Animal studies have shown that this multiscale coding strategy is also implemented for individual components of a single process, such as individual features of a multifeature stimulus in sensory coding. However, the generalizability of this encoding strategy to the human brain has remained unclear. We asked if individual features of visual stimuli were encoded over distinct timescales...
February 16, 2024: Neural Computation
https://read.qxmd.com/read/38363656/errata-to-a-tutorial-on-the-spectral-theory-of-markov-chains-by-eddie-seabrook-and-laurenz-wiskott-neural-computation-november-2023-vol-35-no-11-pp-1713-1796-https-doi-org-10-1162-neco_a_01611
#19
JOURNAL ARTICLE
https://read.qxmd.com/read/38101329/efficient-decoding-of-large-scale-neural-population-responses-with-gaussian-process-multiclass-regression
#20
JOURNAL ARTICLE
C Daniel Greenidge, B Scholl, Jacob L Yates, Jonathan W Pillow
Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the gaussian process multiclass decoder (GPMD), is well suited to decoding a continuous low-dimensional variable from high-dimensional population activity and provides a platform for assessing the importance of correlations in neural population codes...
December 13, 2023: Neural Computation
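
As a crude, heavily simplified stand-in for the idea (not the GPMD itself), one can fit a multiclass softmax decoder over bins of the continuous variable with a smoothness penalty tying neighboring bins' weights, mimicking the role the GP prior plays in regularizing high-dimensional decoding.

```python
# Softmax decoding of a binned continuous variable; the quadratic smoothness
# penalty across adjacent bins is a rough surrogate for a GP prior.
import numpy as np

def fit_smooth_decoder(R, y_bins, K, lam=1.0, lr=1e-2, iters=500):
    """R: (T, n_neurons) responses; y_bins: (T,) integer bins in [0, K)."""
    T, N = R.shape
    W = np.zeros((K, N))
    for _ in range(iters):
        logits = R @ W.T
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits); P /= P.sum(axis=1, keepdims=True)
        P[np.arange(T), y_bins] -= 1.0        # softmax cross-entropy gradient
        grad = P.T @ R / T
        grad[1:] += lam * (W[1:] - W[:-1])    # smoothness across bins
        grad[:-1] += lam * (W[:-1] - W[1:])
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
R = rng.standard_normal((500, 40)); y = rng.integers(0, 10, 500)
W = fit_smooth_decoder(R, y, K=10)
pred = (R @ W.T).argmax(axis=1)               # decoded bin per trial
```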