Journal of Mathematical Psychology

https://read.qxmd.com/read/30956351/multinomial-models-with-linear-inequality-constraints-overview-and-improvements-of-computational-methods-for-bayesian-inference
#1
Daniel W Heck, Clintin P Davis-Stober
Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: Either as the solution set to a system of linear inequalities or as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method...
August 2019: Journal of Mathematical Psychology
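The inequality-constrained Gibbs idea summarized above can be illustrated in a deliberately small setting. The sketch below is not the authors' general multinomial sampler: it assumes a toy problem with two binomial rates under the single order constraint theta1 <= theta2, uniform priors, and hypothetical data, and updates each rate from its Beta full conditional truncated to the region the constraint allows.

```python
import numpy as np
from scipy import stats

# Hypothetical data: successes/trials for two binomial conditions,
# with the order constraint theta1 <= theta2 (a linear inequality).
k = np.array([3, 14])
n = np.array([20, 20])

def sample_truncated_beta(a, b, lo, hi, rng):
    """Draw from Beta(a, b) truncated to [lo, hi] via inverse-CDF sampling."""
    u = rng.uniform(stats.beta.cdf(lo, a, b), stats.beta.cdf(hi, a, b))
    return stats.beta.ppf(u, a, b)

def gibbs_order_constrained(k, n, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([0.25, 0.75])          # start inside the constrained region
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Full conditional of theta1: Beta posterior truncated to [0, theta2]
        theta[0] = sample_truncated_beta(k[0] + 1, n[0] - k[0] + 1, 0.0, theta[1], rng)
        # Full conditional of theta2: Beta posterior truncated to [theta1, 1]
        theta[1] = sample_truncated_beta(k[1] + 1, n[1] - k[1] + 1, theta[0], 1.0, rng)
        draws[t] = theta
    return draws

posterior = gibbs_order_constrained(k, n)
print(posterior.mean(axis=0))   # posterior means respecting theta1 <= theta2
```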
https://read.qxmd.com/read/30774151/thermodynamic-integration-and-steppingstone-sampling-methods-for-estimating-bayes-factors-a-tutorial
#2
Jeffrey Annis, Nathan J Evans, Brent J Miller, Thomas J Palmeri
One of the more principled methods of performing model selection is via Bayes factors. However, calculating Bayes factors requires marginal likelihoods, which are integrals over the entire parameter space, making estimation of Bayes factors for models with more than a few parameters a significant computational challenge. Here, we provide a tutorial review of two Monte Carlo techniques rarely used in psychology that efficiently compute marginal likelihoods: thermodynamic integration (Friel & Pettitt, 2008; Lartillot & Philippe, 2006) and steppingstone sampling (Xie, Lewis, Fan, Kuo, & Chen, 2011)...
April 2019: Journal of Mathematical Psychology
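As a rough illustration of the steppingstone estimator reviewed above (not the tutorial's own examples), the sketch below assumes a conjugate normal model with known variance, so draws from each power posterior are exact and the estimate can be checked against the analytic marginal likelihood.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Toy conjugate model (assumed for illustration): y_i ~ N(mu, sigma^2), mu ~ N(0, tau0^2).
rng = np.random.default_rng(1)
sigma, tau0 = 1.0, 2.0
y = rng.normal(0.5, sigma, size=30)

def log_lik(mu):
    """Log likelihood of the full sample, evaluated at an array of mu values."""
    return norm.logpdf(y[:, None], loc=mu, scale=sigma).sum(axis=0)

def power_posterior_draws(beta, size):
    """Exact draws from the power posterior (prior times likelihood^beta)."""
    prec = 1 / tau0**2 + beta * len(y) / sigma**2
    mean = (beta * y.sum() / sigma**2) / prec
    return rng.normal(mean, np.sqrt(1 / prec), size=size)

# Steppingstone estimator: log Z = sum_k log E_{beta_{k-1}}[ L(mu)^(beta_k - beta_{k-1}) ]
betas = np.linspace(0, 1, 21) ** 3     # temperatures concentrated near the prior
log_z = 0.0
for b_prev, b_next in zip(betas[:-1], betas[1:]):
    mu_draws = power_posterior_draws(b_prev, size=2000)
    log_w = (b_next - b_prev) * log_lik(mu_draws)
    log_z += logsumexp(log_w) - np.log(len(log_w))

# Analytic benchmark: y is multivariate normal with covariance sigma^2 I + tau0^2 J
n = len(y)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
true_log_z = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1]
                     + y @ np.linalg.solve(cov, y))
print(log_z, true_log_z)
```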
https://read.qxmd.com/read/30906069/extended-formulations-for-order-polytopes-through-network-flows
#3
Clintin P Davis-Stober, Jean-Paul Doignon, Samuel Fiorini, Francois Glineur, Michel Regenwetter
Mathematical psychology has a long tradition of modeling probabilistic choice via distribution-free random utility models and associated random preference models. For such models, the predicted choice probabilities often form a bounded and convex polyhedral set, or polytope. Polyhedral combinatorics has thus played a key role in studying the mathematical structure of these models. However, standard methods for characterizing the polytopes of such models are subject to a combinatorial explosion in complexity as the number of choice alternatives increases...
December 2018: Journal of Mathematical Psychology
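One elementary operation with the vertex description of such a polytope is checking whether a given vector of choice probabilities is a convex combination of the model's vertices. The sketch below is a generic linear-programming membership test with hypothetical vertices and a hypothetical target point; it is not the extended formulations developed in the article.

```python
import numpy as np
from scipy.optimize import linprog

# Each row: the binary choice probabilities predicted by one vertex (hypothetical values).
vertices = np.array([[1.0, 1.0, 1.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
target = np.array([0.6, 0.7, 0.5])        # hypothetical observed/predicted probabilities

n_vertices = vertices.shape[0]
# Find weights w >= 0 with sum(w) = 1 and w @ vertices = target (pure feasibility LP).
res = linprog(c=np.zeros(n_vertices),
              A_eq=np.vstack([vertices.T, np.ones(n_vertices)]),
              b_eq=np.append(target, 1.0),
              bounds=[(0, None)] * n_vertices)
print(res.status == 0)   # 0 means a feasible mixture exists: the point lies in the polytope
```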
https://read.qxmd.com/read/29200501/a-tutorial-on-bridge-sampling
#4
Quentin F Gronau, Alexandra Sarafoglou, Dora Matzke, Alexander Ly, Udo Boehm, Maarten Marsman, David S Leslie, Jonathan J Forster, Eric-Jan Wagenmakers, Helen Steingroever
The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling (Bennett, 1976; Meng & Wong, 1996), a reliable and relatively straightforward sampling method that allows researchers to obtain the marginal likelihood for models of varying complexity...
December 2017: Journal of Mathematical Psychology
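A minimal sketch of the Meng and Wong (1996) iterative bridge estimator is given below, assuming a toy conjugate normal model (prior N(0, 1), unit observation noise) so the result can be checked analytically; practical applications work on the log scale and with more careful proposals, as the tutorial describes.

```python
import numpy as np
from scipy.stats import norm

# Toy conjugate model (assumed for illustration): y_i ~ N(mu, 1), mu ~ N(0, 1).
rng = np.random.default_rng(2)
y = rng.normal(0.3, 1.0, size=50)
n = len(y)

post_var = 1.0 / (1.0 + n)                 # posterior of mu is N(post_mean, post_var)
post_mean = post_var * y.sum()

def log_unnorm_post(mu):
    """Log prior plus log likelihood (the unnormalized posterior ordinate)."""
    return norm.logpdf(mu, 0.0, 1.0) + norm.logpdf(y[:, None], mu, 1.0).sum(axis=0)

# Samples from the posterior and from a proposal g roughly matched to the posterior.
n1 = n2 = 4000
post_draws = rng.normal(post_mean, np.sqrt(post_var), n1)
g_mean, g_sd = post_draws.mean(), 1.5 * post_draws.std()
prop_draws = rng.normal(g_mean, g_sd, n2)
log_g = lambda mu: norm.logpdf(mu, g_mean, g_sd)

l1 = np.exp(log_unnorm_post(post_draws) - log_g(post_draws))   # ratios at posterior draws
l2 = np.exp(log_unnorm_post(prop_draws) - log_g(prop_draws))   # ratios at proposal draws
s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

# Meng & Wong (1996) iterative scheme with the "optimal" bridge function.
r = 1.0
for _ in range(200):
    num = np.mean(l2 / (s1 * l2 + s2 * r))
    den = np.mean(1.0 / (s1 * l1 + s2 * r))
    r = num / den

# Analytic benchmark: marginal density of y with shared mu ~ N(0, 1).
cov = np.eye(n) + np.ones((n, n))
true_log_ml = -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(cov)[1]
                      + y @ np.linalg.solve(cov, y))
print(np.log(r), true_log_ml)
```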
https://read.qxmd.com/read/28827888/recasting-a-biologically-motivated-computational-model-within-a-fechnerian-and-random-utility-framework
#5
Clintin P Davis-Stober, Nicholas Brown, Sanghyuk Park, Michel Regenwetter
The selective integration model of Tsetsos et al. (2016a) is a biologically motivated computational framework that aims to model intransitive preference and choice. Tsetsos et al. (2016a) concluded that a noisy system can lead to violations of transitivity in otherwise rational agents optimizing a task. We show how their model can be interpreted from a Fechnerian perspective and within a random utility framework. Specifically, we spell out the connection between the selective integration model and two probabilistic models of transitive preference, weak stochastic transitivity and the triangle inequalities, tested by Tsetsos et al...
April 2017: Journal of Mathematical Psychology
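The two transitivity conditions named in the abstract can be checked directly on a matrix of binary choice probabilities. The functions below are a generic sketch with hypothetical probabilities, not the article's statistical tests.

```python
import numpy as np
from itertools import permutations

# P[i, j] = probability that option i is chosen over option j (hypothetical values),
# with P[i, j] + P[j, i] = 1 for i != j.
P = np.array([[0.0, 0.7, 0.6],
              [0.3, 0.0, 0.8],
              [0.4, 0.2, 0.0]])

def satisfies_wst(P):
    """Weak stochastic transitivity: P(a,b) >= .5 and P(b,c) >= .5 imply P(a,c) >= .5."""
    n = P.shape[0]
    return all(P[a, c] >= 0.5
               for a, b, c in permutations(range(n), 3)
               if P[a, b] >= 0.5 and P[b, c] >= 0.5)

def satisfies_triangle(P):
    """Triangle inequalities: P(a,b) + P(b,c) - P(a,c) <= 1 for all distinct a, b, c."""
    n = P.shape[0]
    return all(P[a, b] + P[b, c] - P[a, c] <= 1 + 1e-12
               for a, b, c in permutations(range(n), 3))

print(satisfies_wst(P), satisfies_triangle(P))
```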
https://read.qxmd.com/read/28630524/a-martingale-analysis-of-first-passage-times-of-time-dependent-wiener-diffusion-models
#6
Vaibhav Srivastava, Samuel F Feng, Jonathan D Cohen, Naomi Ehrich Leonard, Amitai Shenhav
Research in psychology and neuroscience has successfully modeled decision making as a process of noisy evidence accumulation to a decision bound. While there are several variants and implementations of this idea, the majority of these models make use of a noisy accumulation between two absorbing boundaries. A common assumption of these models is that decision parameters, e.g., the rate of accumulation (drift rate), remain fixed over the course of a decision, allowing the derivation of analytic formulas for the probabilities of hitting the upper or lower decision threshold, and the mean decision time...
April 2017: Journal of Mathematical Psychology
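Where analytic formulas are unavailable, first-passage behavior of a time-dependent Wiener process can always be approximated by simulation. The sketch below is a plain Euler Monte Carlo approximation with a hypothetical ramping drift, not the martingale analysis developed in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_trials(drift_fn, a=1.0, z=0.5, sigma=1.0, dt=1e-3, n_trials=20000, t_max=5.0):
    """Euler simulation of a Wiener accumulator between absorbing bounds at 0 and a."""
    x = np.full(n_trials, z)                      # accumulated evidence, starting at z
    rt = np.full(n_trials, t_max)                 # first-passage time (t_max if no hit)
    hit_upper = np.zeros(n_trials, dtype=bool)
    active = np.ones(n_trials, dtype=bool)
    t = 0.0
    while active.any() and t < t_max:
        noise = rng.standard_normal(active.sum())
        x[active] += drift_fn(t) * dt + sigma * np.sqrt(dt) * noise
        t += dt
        crossed_up = active & (x >= a)
        crossed_down = active & (x <= 0.0)
        hit_upper[crossed_up] = True
        rt[crossed_up | crossed_down] = t
        active &= ~(crossed_up | crossed_down)
    return hit_upper.mean(), rt[hit_upper].mean()

# Hypothetical time-dependent drift: evidence quality ramps up over the trial.
p_upper, mean_rt_upper = simulate_trials(lambda t: 0.5 + 2.0 * t)
print(p_upper, mean_rt_upper)   # estimated upper-bound probability and mean decision time
```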
https://read.qxmd.com/read/30147145/model-based-cognitive-neuroscience
#7
Thomas J Palmeri, Bradley C Love, Brandon M Turner
This special issue explores the growing intersection between mathematical psychology and cognitive neuroscience. Mathematical psychology, and cognitive modeling more generally, has a rich history of formalizing and testing hypotheses about cognitive mechanisms within a mathematical and computational language, making exquisite predictions of how people perceive, learn, remember, and decide. Cognitive neuroscience aims to identify neural mechanisms associated with key aspects of cognition using techniques like neurophysiology, electrophysiology, and structural and functional brain imaging...
February 2017: Journal of Mathematical Psychology
https://read.qxmd.com/read/29118459/model-based-functional-neuroimaging-using-dynamic-neural-fields-an-integrative-cognitive-neuroscience-approach
#8
Sobanawartiny Wijeakumar, Joseph P Ambrose, John P Spencer, Rodica Curtu
A fundamental challenge in cognitive neuroscience is to develop theoretical frameworks that effectively span the gap between brain and behavior, between neuroscience and psychology. Here, we attempt to bridge this divide by formalizing an integrative cognitive neuroscience approach using dynamic field theory (DFT). We begin by providing an overview of how DFT seeks to understand the neural population dynamics that underlie cognitive processes through previous applications and comparisons to other modeling approaches...
February 2017: Journal of Mathematical Psychology
https://read.qxmd.com/read/28435173/how-attention-influences-perceptual-decision-making-single-trial-eeg-correlates-of-drift-diffusion-model-parameters
#9
Michael D Nunez, Joachim Vandekerckhove, Ramesh Srinivasan
Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task...
February 2017: Journal of Mathematical Psychology
https://read.qxmd.com/read/28392584/relating-accumulator-model-parameters-and-neural-dynamics
#10
Braden A Purcell, Thomas J Palmeri
Accumulator models explain decision-making as an accumulation of evidence to a response threshold. Specific model parameters are associated with specific model mechanisms, such as the time when accumulation begins, the average rate of evidence accumulation, and the threshold. These mechanisms determine both the within-trial dynamics of evidence accumulation and the predicted behavior. Cognitive modelers usually infer what mechanisms vary during decision-making by seeing what parameters vary when a model is fitted to observed behavior...
February 2017: Journal of Mathematical Psychology
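The three mechanisms named in the abstract (the onset of accumulation, the mean rate, and the threshold) appear explicitly as parameters in even the simplest racing-accumulator simulation. The sketch below is a generic two-accumulator race with hypothetical parameter values, not the specific models linked to neural dynamics in the article.

```python
import numpy as np

rng = np.random.default_rng(4)

def race_trial(v=(1.2, 0.8), b=1.0, t0=0.2, noise=1.0, dt=1e-3):
    """One trial of a race between two noisy accumulators."""
    x = np.zeros(2)
    t = t0                                  # accumulation begins after the onset time t0
    while (x < b).all():                    # race until either accumulator reaches threshold b
        x += np.array(v) * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return int(x.argmax()), t               # winning accumulator and decision time

choices, rts = zip(*(race_trial() for _ in range(2000)))
print(np.mean(np.array(choices) == 0), np.mean(rts))   # choice proportion and mean RT
```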
https://read.qxmd.com/read/28298703/a-tutorial-on-the-free-energy-framework-for-modelling-perception-and-learning
#11
Rafal Bogacz
This paper provides an easy-to-follow tutorial on the free-energy framework for modelling perception developed by Friston, which extends the predictive coding model of Rao and Ballard. These models assume that the sensory cortex infers the most likely values of attributes or features of sensory stimuli from the noisy inputs encoding the stimuli. Remarkably, these models describe how this inference could be implemented in a network of very simple computational elements, suggesting that this inference could be performed by biological networks of neurons...
February 2017: Journal of Mathematical Psychology
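A minimal sketch in the spirit of such models is shown below: a single latent feature with a Gaussian prior is inferred from one noisy observation (assumed here to be generated as the square of the feature plus noise) by gradient dynamics on two prediction errors. The variable names and numbers are illustrative, not taken from the tutorial.

```python
import numpy as np

# Assumed generative model: observation u = g(v) + noise, with g(v) = v**2,
# prior v ~ N(v_prior, sigma_prior) and Gaussian observation noise.
v_prior, sigma_prior = 3.0, 1.0     # prior mean and variance of the latent feature
sigma_obs = 1.0                     # observation noise variance
u = 2.0                             # the observed input

g = lambda v: v**2
g_prime = lambda v: 2 * v

phi = v_prior                       # current estimate of the feature
lr = 0.01
for _ in range(2000):
    eps_prior = (phi - v_prior) / sigma_prior   # prediction error on the prior
    eps_obs = (u - g(phi)) / sigma_obs          # prediction error on the observation
    phi += lr * (-eps_prior + eps_obs * g_prime(phi))

print(phi)   # settles near the most likely value of the feature given u
```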
https://read.qxmd.com/read/28298702/fixed-versus-mixed-rsa-%C3%A2-explaining-visual-representations-by-fixed-and-mixed-feature-sets-from-shallow-and-deep-computational-models
#12
Seyed-Mahdi Khaligh-Razavi, Linda Henriksson, Kendrick Kay, Nikolaus Kriegeskorte
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set...
February 2017: Journal of Mathematical Psychology
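The common currency of such comparisons is the representational dissimilarity matrix (RDM). The sketch below uses random placeholder data to show the basic "fixed" RSA step of correlating a model RDM with a data RDM; mixed RSA additionally fits weights on the model features before the model RDM is computed.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_images, n_model_features, n_voxels = 40, 100, 200

model_features = rng.normal(size=(n_images, n_model_features))   # placeholder model representation
brain_patterns = rng.normal(size=(n_images, n_voxels))            # placeholder response patterns

# Pairwise dissimilarities between images under the model and in the data.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

rho, _ = spearmanr(model_rdm, brain_rdm)
print(rho)   # how well the model's representational geometry matches the data
```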
https://read.qxmd.com/read/28298701/identification-of-probabilities
#13
Paul M B Vitányi, Nick Chater
Within psychology, neuroscience and artificial intelligence, there has been increasing interest in the proposal that the brain builds probabilistic models of sensory and linguistic input: that is, to infer a probabilistic model from a sample. The practical problems of such inference are substantial: the brain has limited data and restricted computational resources. But there is a more fundamental question: is the problem of inferring a probabilistic model from a sample possible even in principle? We explore this question and find some surprisingly positive and general results...
February 2017: Journal of Mathematical Psychology
https://read.qxmd.com/read/28286346/integrating-theoretical-models-with-functional-neuroimaging
#14
Michael S Pratte, Frank Tong
The development of mathematical models to characterize perceptual and cognitive processes dates back almost to the inception of the field of psychology. Since the 1990s, human functional neuroimaging has provided for rapid empirical and theoretical advances across a variety of domains in cognitive neuroscience. In more recent work, formal modeling and neuroimaging approaches are being successfully combined, often producing models with a level of specificity and rigor that would not have been possible by studying behavior alone...
February 2017: Journal of Mathematical Psychology
https://read.qxmd.com/read/28579640/comparing-fixed-and-collapsing-boundary-versions-of-the-diffusion-model
#15
Chelsea Voskuilen, Roger Ratcliff, Philip L Smith
Optimality studies and studies of decision-making in monkeys have been used to support a model in which the decision boundaries used to evaluate evidence collapse over time. This article investigates whether a diffusion model with collapsing boundaries provides a better account of human data than a model with fixed boundaries. We compared the models using data from four new numerosity discrimination experiments and two previously published motion discrimination experiments. When model selection was based on BIC values, the fixed boundary model was preferred over the collapsing boundary model for all of the experiments...
August 2016: Journal of Mathematical Psychology
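The BIC comparison mentioned above penalizes the extra flexibility of the collapsing-boundary model. A generic sketch with hypothetical fit results (the numbers below are illustrative, not taken from the article):

```python
import numpy as np

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: -2 log L + k log n; lower is better."""
    return -2.0 * log_lik + n_params * np.log(n_obs)

# Hypothetical fit results for a fixed-boundary and a collapsing-boundary model.
n_obs = 1200
bic_fixed = bic(log_lik=-4310.5, n_params=7, n_obs=n_obs)
bic_collapse = bic(log_lik=-4305.2, n_params=9, n_obs=n_obs)
print(bic_fixed, bic_collapse)   # the extra parameters must earn their keep
```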
https://read.qxmd.com/read/30713353/rejection-odds-and-rejection-ratios-a-proposal-for-statistical-practice-in-testing-hypotheses
#16
M J Bayarri, Daniel J Benjamin, James O Berger, Thomas M Sellke
Much of science is (rightly or wrongly) driven by hypothesis testing. Even in situations where the hypothesis testing paradigm is correct, the common practice of basing inferences solely on p-values has been under intense criticism for over 50 years. We propose, as an alternative, the use of the odds of a correct rejection of the null hypothesis to incorrect rejection. Both pre-experimental versions (involving the power and Type I error) and post-experimental versions (depending on the actual data) are considered...
June 2016: Journal of Mathematical Psychology
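In the pre-experimental version, the rejection ratio is the prior odds of the alternative multiplied by the ratio of power to the Type I error rate. A small illustrative calculation with assumed numbers:

```python
# Pre-experimental rejection ratio sketch (illustrative values): the odds that a
# rejection of the null is correct rather than incorrect.
alpha = 0.05         # Type I error rate
power = 0.80         # probability of rejecting when the alternative is true
prior_odds = 0.25    # assumed prior odds that the alternative is true (1:4)

rejection_ratio = (power / alpha) * prior_odds
print(rejection_ratio)   # 4.0: a rejection is four times as likely to be correct as not
```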
https://read.qxmd.com/read/25089060/analytical-expressions-for-the-rem-model-of-recognition-memory
#17
Maximiliano Montenegro, Jay I Myung, Mark A Pitt
An inordinate amount of computation is required to evaluate predictions of simulation-based models. Following Myung et al. (2007), we derived an analytic-form expression of the REM model of recognition memory using a Fourier transform technique, which greatly reduces the time required to perform model simulations. The accuracy of the derivation is verified by showing a close correspondence between its predictions and those reported in Shiffrin and Steyvers (1997). The derivation also shows that REM's predictions depend upon the vector length parameter, and that model parameters are not identifiable unless one of the parameters is fixed...
June 1, 2014: Journal of Mathematical Psychology
https://read.qxmd.com/read/25214675/a-comparison-model-of-reinforcement-learning-and-win-stay-lose-shift-decision-making-processes-a-tribute-to-w-k-estes
#18
Darrell A Worthy, W Todd Maddox
W.K. Estes often championed an approach to model development whereby an existing model was augmented by the addition of one or more free parameters, and a comparison between the simple and more complex, augmented model determined whether the additions were justified. Following this same approach, we utilized Estes' (1950) own augmented learning equations to improve the fit and plausibility of a win-stay-lose-shift (WSLS) model that we have used in much of our recent work. Estes also championed models that assumed a comparison between multiple concurrent cognitive processes...
April 1, 2014: Journal of Mathematical Psychology
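A basic WSLS model of the kind being augmented can be written in a few lines. The sketch below simulates a generic two-armed task with hypothetical stay/shift parameters; the article's augmented version modifies this basic scheme using Estes' (1950) learning equations.

```python
import numpy as np

rng = np.random.default_rng(6)
p_stay_win, p_shift_loss = 0.9, 0.7        # free parameters of the basic WSLS model
reward_probs = [0.7, 0.3]                  # hypothetical payoff structure of the two options

choice, n_trials = 0, 200
choices, rewards = [], []
for _ in range(n_trials):
    reward = rng.random() < reward_probs[choice]
    choices.append(choice)
    rewards.append(reward)
    if reward:
        stay = rng.random() < p_stay_win     # win-stay with probability p_stay_win
    else:
        stay = rng.random() >= p_shift_loss  # lose-shift with probability p_shift_loss
    choice = choice if stay else 1 - choice

print(np.mean(np.array(choices) == 0))     # proportion of choices of the better option
```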
https://read.qxmd.com/read/24948840/markovian-interpretations-of-dual-retrieval-processes
#19
C F A Gomes, C J Brainerd, K Nakamura, V F Reyna
A half-century ago, at the dawn of the all-or-none learning era, Estes showed that finite Markov chains supply a tractable, comprehensive framework for discrete-change data of the sort that he envisioned for shifts in conditioning states in stimulus sampling theory. Shortly thereafter, such data rapidly accumulated in many spheres of human learning and animal conditioning, and Estes' work stimulated vigorous development of Markov models to handle them. A key outcome was that the data of the workhorse paradigms of episodic memory, recognition and recall, proved to be one- and two-stage Markovian, respectively, to close approximations...
April 1, 2014: Journal of Mathematical Psychology
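The simplest member of the Markov family Estes worked with is the two-state all-or-none chain. The sketch below simulates that textbook case with hypothetical parameter values; it is a much-reduced cousin of the one- and two-stage chains discussed in the article.

```python
import numpy as np

# All-or-none Markov learning sketch: on each trial an unlearned item is acquired
# with probability c; until acquisition, responses are correct only by guessing.
rng = np.random.default_rng(7)
c, g = 0.25, 0.3       # acquisition probability per trial, guessing probability
n_items, n_trials = 500, 12

correct = np.zeros((n_items, n_trials))
for i in range(n_items):
    learned = False
    for t in range(n_trials):
        correct[i, t] = 1.0 if learned else (rng.random() < g)
        if not learned and rng.random() < c:
            learned = True

print(correct.mean(axis=0))   # learning curve: proportion correct by trial
```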
https://read.qxmd.com/read/23997275/a-tutorial-on-adaptive-design-optimization
#20
Jay I Myung, Daniel R Cavagnaro, Mark A Pitt
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct "smart" experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond...
June 2013: Journal of Mathematical Psychology
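At its core, adaptive design optimization scores each candidate design by how informative its outcome is expected to be and presents the best one. The sketch below is a heavily simplified, grid-based illustration of that idea for discriminating two hypothetical models from a single binary outcome; it is not the algorithm described in the tutorial.

```python
import numpy as np

designs = np.linspace(0.0, 1.0, 21)                  # candidate stimulus levels
model_preds = np.vstack([                            # P(correct | design) under each model
    1 / (1 + np.exp(-10 * (designs - 0.4))),         # hypothetical model A
    1 / (1 + np.exp(-4 * (designs - 0.6))),          # hypothetical model B
])
prior = np.array([0.5, 0.5])                         # current model probabilities

def mutual_information(p_correct, prior):
    """Mutual information between the model indicator and a Bernoulli outcome."""
    p_y1 = prior @ p_correct                         # marginal P(correct)
    def h(p):                                        # binary entropy, safe at 0 and 1
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return h(p_y1) - prior @ h(p_correct)

utilities = np.array([mutual_information(model_preds[:, j], prior)
                      for j in range(len(designs))])
best = designs[np.argmax(utilities)]
print(best)    # the design expected to best discriminate the two models
```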