Annals of Applied Statistics

https://read.qxmd.com/read/30666272/modeling-hybrid-traits-for-comorbidity-and-genetic-studies-of-alcohol-and-nicotine-co-dependence
#1
Heping Zhang, Dungang Liu, Jiwei Zhao, Xuan Bi
We propose a novel multivariate model for analyzing hybrid traits and identifying genetic factors for comorbid conditions. Comorbidity is a common phenomenon in mental health in which an individual suffers from multiple disorders simultaneously. For example, in the Study of Addiction: Genetics and Environment (SAGE), alcohol and nicotine addiction were recorded through multiple assessments that we refer to as hybrid traits. Statistical inference for studying the genetic basis of hybrid traits has not been well-developed...
December 2018: Annals of Applied Statistics
https://read.qxmd.com/read/30627301/exact-spike-train-inference-via-%C3%A2-0-optimization
#2
Sean Jewell, Daniela Witten
In recent years new technologies in neuroscience have made it possible to measure the activities of large numbers of neurons simultaneously in behaving animals. For each neuron a fluorescence trace is measured; this can be seen as a first-order approximation of the neuron's activity over time. Determining the exact time at which a neuron spikes on the basis of its fluorescence trace is an important open problem in the field of computational neuroscience. Recently, a convex optimization problem involving an ℓ1 penalty was proposed for this task...
December 2018: Annals of Applied Statistics
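The ℓ1-penalized formulation mentioned in the abstract above can be written as a small convex program: given a fluorescence trace y and an AR(1) calcium decay parameter, penalize the positive "jumps" of the underlying calcium concentration. A minimal sketch using cvxpy; the decay parameter gamma, the penalty lam, and the simulated trace are illustrative assumptions, not values or code from the paper, and the paper's exact ℓ0 algorithm (which replaces the norm1 penalty with a count of nonzero jumps and is solved by dynamic programming) is not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, gamma, lam = 500, 0.95, 1.0          # illustrative constants, not from the paper

# Simulate a calcium trace: sparse spikes with AR(1) decay, plus measurement noise.
spikes = rng.binomial(1, 0.01, T).astype(float)
calcium = np.zeros(T)
for t in range(1, T):
    calcium[t] = gamma * calcium[t - 1] + spikes[t]
y = calcium + 0.1 * rng.standard_normal(T)

# l1-penalized deconvolution: penalize the increments c_t - gamma * c_{t-1}.
c = cp.Variable(T)
jumps = c[1:] - gamma * c[:-1]
objective = cp.Minimize(0.5 * cp.sum_squares(y - c) + lam * cp.norm1(jumps))
problem = cp.Problem(objective, [jumps >= 0, c[0] >= 0])
problem.solve()

est_spike_times = np.where(jumps.value > 1e-3)[0] + 1   # times with a positive jump
print("estimated spike times:", est_spike_times[:10])
```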
https://read.qxmd.com/read/30510612/scalpel-extracting-neurons-from-calcium-imaging-data
#3
Ashley Petersen, Noah Simon, Daniela Witten
In the past few years, new technologies in the field of neuroscience have made it possible to simultaneously image activity in large populations of neurons at cellular resolution in behaving animals. In mid-2016, a huge repository of this so-called "calcium imaging" data was made publicly available. The availability of this large-scale data resource opens the door to a host of scientific questions for which new statistical methods must be developed. In this paper we consider the first step in the analysis of calcium imaging data, namely, identifying the neurons in a calcium imaging video...
December 2018: Annals of Applied Statistics
https://read.qxmd.com/read/30473739/the-effects-of-nonignorable-missing-data-on-label-free-mass-spectrometry-proteomics-experiments
#4
Jonathon J O'Brien, Harsha P Gunawardena, Joao A Paulo, Xian Chen, Joseph G Ibrahim, Steven P Gygi, Bahjat F Qaqish
An idealized version of a label-free discovery mass spectrometry proteomics experiment would provide absolute abundance measurements for a whole proteome, across varying conditions. Unfortunately, this ideal is not realized. Measurements are made on peptides, requiring an inferential step to obtain protein-level estimates. The inference is complicated by experimental factors that necessitate relative abundance estimation and result in widespread non-ignorable missing data. Relative abundance on the log scale takes the form of parameter contrasts...
December 2018: Annals of Applied Statistics
https://read.qxmd.com/read/30627300/estimating-and-comparing-cancer-progression-risks-under-varying-surveillance-protocols
#5
Jane M Lange, Roman Gulati, Amy S Leonardson, Daniel W Lin, Lisa F Newcomb, Bruce J Trock, H Ballentine Carter, Matthew R Cooperberg, Janet E Cowan, Lawrence H Klotz, Ruth Etzioni
Outcomes after cancer diagnosis and treatment are often observed at discrete times via doctor-patient encounters or specialized diagnostic examinations. Despite their ubiquity as endpoints in cancer studies, such outcomes pose challenges for analysis. In particular, comparisons between studies or patient populations with different surveillance schema may be confounded by differences in visit frequencies. We present a statistical framework based on multistate and hidden Markov models that represents events on a continuous time scale given data with discrete observation times...
September 2018: Annals of Applied Statistics
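The core computation behind a multistate model observed only at visit times is the transition-probability matrix P(t) = exp(Qt) of a continuous-time Markov chain, which turns irregularly spaced observations into a likelihood. A minimal sketch under an assumed three-state progression model; the generator Q and the visit data are invented for illustration, and the paper's hidden Markov extension for imperfectly observed states is omitted.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state generator: latent progression 0 -> 1 -> 2 (absorbing).
Q = np.array([[-0.10,  0.10, 0.00],
              [ 0.00, -0.30, 0.30],
              [ 0.00,  0.00, 0.00]])

def panel_log_likelihood(Q, times, states):
    """Log-likelihood of one subject's states observed at discrete visit times."""
    ll = 0.0
    for (t0, s0), (t1, s1) in zip(zip(times, states), zip(times[1:], states[1:])):
        P = expm(Q * (t1 - t0))        # transition probabilities over the visit gap
        ll += np.log(P[s0, s1])
    return ll

# One subject seen at years 0, 1.5 and 4, moving from state 0 to 1 to 2.
print(panel_log_likelihood(Q, [0.0, 1.5, 4.0], [0, 1, 2]))
```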
https://read.qxmd.com/read/30416640/tprm-tensor-partition-regression-models-with-applications-in-imaging-biomarker-detection
#6
Michelle F Miranda, Hongtu Zhu, Joseph G Ibrahim
Medical imaging studies have collected high-dimensional imaging data to identify imaging biomarkers for diagnosis, screening, and prognosis, among many others. These imaging data are often represented in the form of a multi-dimensional array, called a tensor. The aim of this paper is to develop a tensor partition regression modeling (TPRM) framework to establish a relationship between low-dimensional clinical outcomes (e.g., diagnosis) and high-dimensional tensor covariates. Our TPRM is a hierarchical model and efficiently integrates four components: (i) a partition model, (ii) a canonical polyadic decomposition model, (iii) a principal components model, and (iv) a generalized linear model with a sparsity-inducing normal mixture prior...
September 2018: Annals of Applied Statistics
https://read.qxmd.com/read/30294404/complex-valued-time-series-modeling-for-improved-activation-detection-in-fmri-studies
#7
Daniel W Adrian, Ranjan Maitra, Daniel B Rowe
A complex-valued data-based model with pth-order autoregressive errors and general real/imaginary error covariance structure is proposed as an alternative to the commonly used magnitude-only data-based autoregressive model for fMRI time series. Likelihood-ratio-test-based activation statistics are derived for both models and compared for experimental and simulated data. For a dataset from a right-hand finger-tapping experiment, the activation map obtained using complex-valued modeling more clearly identifies the primary activation region (left functional central sulcus) than the magnitude-only model...
September 2018: Annals of Applied Statistics
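To illustrate the complex-valued time-series idea in the entry above, the sketch below simulates a complex AR(1) process and recovers its coefficient by complex least squares. The order-1 model, coefficient value and noise level are assumptions for illustration only; this is not the paper's fMRI model or its likelihood-ratio activation statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi = 2000, 0.6 + 0.2j                 # illustrative complex AR(1) coefficient

# Simulate a complex-valued AR(1) series with circular complex Gaussian noise.
eps = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
x = np.zeros(T, dtype=complex)
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

# Complex least squares: phi_hat = sum x_t * conj(x_{t-1}) / sum |x_{t-1}|^2.
phi_hat = np.sum(x[1:] * np.conj(x[:-1])) / np.sum(np.abs(x[:-1]) ** 2)
print("true phi:", phi, " estimated phi:", np.round(phi_hat, 3))
```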
https://read.qxmd.com/read/30220953/topological-data-analysis-of-single-trial-electroencephalographic-signals
#8
Yuan Wang, Hernando Ombao, Moo K Chung
Epilepsy is a neurological disorder that can negatively affect the visual, audial and motor functions of the human brain. Statistical analysis of neurophysiological recordings, such as electroencephalogram (EEG), facilitates the understanding and diagnosis of epileptic seizures. Standard statistical methods, however, do not account for topological features embedded in EEG signals. In the current study, we propose a persistent homology (PH) procedure to analyze single-trial EEG signals. The procedure denoises signals with a weighted Fourier series (WFS), and tests for topological difference between the denoised signals with a permutation test based on their PH features, persistence landscapes (PL)...
September 2018: Annals of Applied Statistics
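The final step described in the abstract above, a permutation test between two groups of topological summaries, can be sketched generically: treat each trial's persistence landscape as a feature vector and permute group labels. The landscape vectors below are random placeholders; computing real persistence landscapes from denoised EEG signals (e.g., with a TDA library) is outside this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_test(group_a, group_b, n_perm=5000):
    """Two-sample permutation test on the L2 distance between group mean feature vectors."""
    observed = np.linalg.norm(group_a.mean(axis=0) - group_b.mean(axis=0))
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        stat = np.linalg.norm(pooled[perm[:n_a]].mean(axis=0) -
                              pooled[perm[n_a:]].mean(axis=0))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)

# Placeholder "persistence landscape" vectors for two conditions (20 trials x 50 grid points).
landscapes_a = rng.standard_normal((20, 50))
landscapes_b = rng.standard_normal((20, 50)) + 0.3     # shifted group
print("permutation p-value:", permutation_test(landscapes_a, landscapes_b))
```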
https://read.qxmd.com/read/30214655/adaptive-weight-burden-test-for-associations-between-quantitative-traits-and-genotype-data-with-complex-correlations
#9
Xiaowei Wu, Ting Guan, Dajiang J Liu, Luis G León Novelo, Dipankar Bandyopadhyay
High-throughput sequencing has often been used to screen samples from pedigrees or with population structure, producing genotype data with complex correlations arising from both familial relatedness and linkage disequilibrium. With such data, it is critical to account for these genotypic correlations when assessing the contribution of variants by gene or pathway. Recognizing the limitations of existing association testing methods, we propose the Adaptive-weight Burden Test (ABT), a retrospective, mixed-model test for genetic association of quantitative traits on genotype data with complex correlations...
September 2018: Annals of Applied Statistics
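As a point of reference for the entry above, a plain (non-adaptive) burden test collapses the variants in a gene into a single weighted score and regresses the quantitative trait on that score. The genotypes, weights and trait below are simulated, and the relatedness/linkage-disequilibrium adjustment that defines ABT is deliberately omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, m = 500, 10                                   # samples, variants in the gene

maf = rng.uniform(0.005, 0.05, m)                # rare-variant allele frequencies
G = rng.binomial(2, maf, size=(n, m)).astype(float)
weights = 1.0 / np.sqrt(maf * (1 - maf))         # common Madsen-Browning-style weights
burden = G @ weights                             # one collapsed score per sample

beta = np.zeros(m); beta[:3] = 0.5               # a few causal variants
y = G @ beta + rng.standard_normal(n)            # simulated quantitative trait

# Simple regression of the trait on the burden score (no relatedness adjustment).
slope, intercept, r, p_value, se = stats.linregress(burden, y)
print(f"burden effect = {slope:.3f}, p = {p_value:.2e}")
```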
https://read.qxmd.com/read/30224943/kernel-penalized-regression-for-analysis-of-microbiome-data
#10
Timothy W Randolph, Sen Zhao, Wade Copeland, Meredith Hullar, Ali Shojaie
The analysis of human microbiome data is often based on dimension-reduced graphical displays and clusterings derived from vectors of microbial abundances in each sample. Common to these ordination methods is the use of biologically motivated definitions of similarity. Principal coordinate analysis, in particular, is often performed using ecologically defined distances, allowing analyses to incorporate context-dependent, non-Euclidean structure. In this paper, we go beyond dimension-reduced ordination methods and describe a framework of high-dimensional regression models that extends these distance-based methods...
March 2018: Annals of Applied Statistics
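Principal coordinate analysis, the starting point that the abstract above goes beyond, amounts to double-centering a squared distance matrix and taking its leading eigenvectors. A minimal sketch on a toy abundance matrix; any ecological dissimilarity could be plugged in, and the counts here are random rather than real microbiome data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
X = rng.poisson(5, size=(30, 100)).astype(float)      # 30 samples x 100 taxa (toy counts)

# Bray-Curtis dissimilarity between samples, then classical PCoA.
D = squareform(pdist(X, metric="braycurtis"))
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
B = -0.5 * J @ (D ** 2) @ J                           # double-centered Gower matrix
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print("first two principal coordinates:\n", coords[:5])
```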
https://read.qxmd.com/read/30174778/a-unified-statistical-framework-for-single-cell-and-bulk-rna-sequencing-data
#11
Lingxue Zhu, Jing Lei, Bernie Devlin, Kathryn Roeder
Recent advances in technology have enabled the measurement of RNA levels for individual cells. Compared to traditional tissue-level bulk RNA-seq data, single cell sequencing yields valuable insights about gene expression profiles for different cell types, which is potentially critical for understanding many complex human diseases. However, developing quantitative tools for such data remains challenging because of high levels of technical noise, especially the "dropout" events. A "dropout" happens when the RNA for a gene fails to be amplified prior to sequencing, producing a "false" zero in the observed data...
March 2018: Annals of Applied Statistics
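To make the "false zero" notion concrete, the toy simulation below generates true expression counts and then applies dropout, so that zeros in the observed matrix mix true absence with failed amplification. The count model and dropout probabilities are assumptions for illustration only, not the model proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_genes = 200, 1000

# "True" expression: many gene/cell pairs silent, the rest Poisson-expressed.
expressed = rng.binomial(1, 0.6, size=(n_cells, n_genes))
true_counts = expressed * rng.poisson(8, size=(n_cells, n_genes))

# Dropout: lowly expressed genes are more likely to fail amplification.
dropout_prob = np.exp(-0.3 * true_counts)              # toy monotone relationship
observed = np.where(rng.uniform(size=true_counts.shape) < dropout_prob, 0, true_counts)

true_zero = (true_counts == 0).mean()
obs_zero = (observed == 0).mean()
print(f"true zero fraction = {true_zero:.2f}, observed zero fraction = {obs_zero:.2f}")
```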
https://read.qxmd.com/read/29731955/powerful-test-based-on-conditional-effects-for-genome-wide-screening
#12
Yaowu Liu, Jun Xie
This paper considers testing procedures for screening large genome-wide data, where we examine hundreds of thousands of genetic variants, e.g., single nucleotide polymorphisms (SNP), on a quantitative phenotype. We screen the whole genome by SNP sets and propose a new test that is based on conditional effects from multiple SNPs. The test statistic is developed for weak genetic effects and incorporates correlations among genetic variables, which may be very high due to linkage disequilibrium. The limiting null distribution of the test statistic and the power of the test are derived...
March 2018: Annals of Applied Statistics
https://read.qxmd.com/read/29731954/msiq-joint-modeling-of-multiple-rna-seq-samples-for-accurate-isoform-quantification
#13
Wei Vivian Li, Anqi Zhao, Shihua Zhang, Jingyi Jessica Li
Next-generation RNA sequencing (RNA-seq) technology has been widely used to assess full-length RNA isoform abundance in a high-throughput manner. RNA-seq data offer insight into gene expression levels and transcriptome structures, enabling us to better understand the regulation of gene expression and fundamental biological processes. Accurate isoform quantification from RNA-seq data is challenging due to the information loss in sequencing experiments. A recent accumulation of multiple RNA-seq data sets from the same tissue or cell type provides new opportunities to improve the accuracy of isoform quantification...
March 2018: Annals of Applied Statistics
https://read.qxmd.com/read/29606991/design-of-vaccine-trials-during-outbreaks-with-and-without-a-delayed-vaccination-comparator
#14
Natalie E Dean, M Elizabeth Halloran, Ira M Longini
Conducting vaccine efficacy trials during outbreaks of emerging pathogens poses particular challenges. The "Ebola ça suffit" trial in Guinea used a novel ring vaccination cluster randomized design to target populations at highest risk of infection. Another key feature of the trial was the use of a delayed vaccination arm as a comparator, in which clusters were randomized to immediate vaccination or vaccination 21 days later. This approach, chosen to improve ethical acceptability of the trial, complicates the statistical analysis as participants in the comparison arm are eventually protected by vaccine...
March 2018: Annals of Applied Statistics
https://read.qxmd.com/read/29515717/a-unified-framework-for-variance-component-estimation-with-summary-statistics-in-genome-wide-association-studies
#15
Xiang Zhou
Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, restricted maximum likelihood (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS...
December 2017: Annals of Applied Statistics
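A simple way to see how variance components can be estimated without REML is a method-of-moments (Haseman-Elston-type) regression of phenotype cross-products on entries of the genetic relatedness matrix. The sketch below illustrates only that general idea on simulated data; it is not the MQS estimator and does not use summary statistics.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 400, 800                                  # individuals, SNPs
h2_true = 0.5                                    # simulated heritability

# Standardized genotypes and the genetic relatedness matrix K = ZZ'/p.
Z = rng.standard_normal((n, p))
K = Z @ Z.T / p

# Phenotype with genetic variance h2 and residual variance 1 - h2.
g = Z @ rng.standard_normal(p) * np.sqrt(h2_true / p)
y = g + rng.standard_normal(n) * np.sqrt(1 - h2_true)
y = (y - y.mean()) / y.std()

# Haseman-Elston: regress off-diagonal products y_i*y_j on the corresponding K_ij.
iu = np.triu_indices(n, k=1)
sigma_g2 = np.sum(K[iu] * np.outer(y, y)[iu]) / np.sum(K[iu] ** 2)
print(f"method-of-moments heritability estimate: {sigma_g2:.2f}")
```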
https://read.qxmd.com/read/30271520/biomarker-change-point-estimation-with-right-censoring-in-longitudinal-studies
#16
Xiaoying Tang, Michael I Miller, Laurent Younes
We consider in this paper a statistical two-phase regression model in which the change point of a disease biomarker is measured relative to another point in time, such as the manifestation of the disease, which is subject to right-censoring (i.e., possibly unobserved over the entire course of the study). We develop point estimation methods for this model, based on maximum likelihood, and bootstrap validation methods. The effectiveness of our approach is illustrated by numerical simulations, and by the estimation of a change point for amygdalar atrophy in the context of Alzheimer's disease, wherein it is related to the cognitive manifestation of the disease...
September 2017: Annals of Applied Statistics
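The two-phase (broken-stick) regression at the heart of the entry above can be profiled over candidate change points: fit a continuous piecewise-linear model for each candidate and keep the one minimizing the residual sum of squares. The sketch below handles only fully observed data with invented parameters; the paper's treatment of right-censored anchoring times and its bootstrap validation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a biomarker that is flat before the change point and declines after it.
t = np.sort(rng.uniform(-10, 5, 200))            # time relative to disease manifestation
tau_true = -4.0
y = 1.0 - 0.3 * np.maximum(t - tau_true, 0) + 0.2 * rng.standard_normal(t.size)

def fit_broken_stick(t, y, candidates):
    """Profile over candidate change points for y = a + b*(t - tau)_+ + noise."""
    best = None
    for tau in candidates:
        X = np.column_stack([np.ones_like(t), np.maximum(t - tau, 0)])
        coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - X @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, coef)
    return best

rss, tau_hat, coef = fit_broken_stick(t, y, np.linspace(-9, 4, 131))
print(f"estimated change point: {tau_hat:.2f} (true {tau_true})")
```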
https://read.qxmd.com/read/29861820/toward-bayesian-inference-of-the-spatial-distribution-of-proteins-from-three-cube-f%C3%A3-rster-resonance-energy-transfer-data
#17
Jan-Otto Hooghoudt, Margarida Barroso, Rasmus Waagepetersen
Förster resonance energy transfer (FRET) is a quantum-physical phenomenon where energy may be transferred from one molecule to a neighboring molecule if the molecules are close enough. Using fluorophore molecule marking of proteins in a cell, it is possible to measure in microscopic images to what extent FRET takes place between the fluorophores. This provides indirect information on the spatial distribution of the proteins. Questions of particular interest are whether (and if so, to what extent) proteins of possibly different types interact or whether they appear independently of each other...
September 2017: Annals of Applied Statistics
https://read.qxmd.com/read/29721127/latent-space-models-for-multiview-network-data
#18
Michael Salter-Townshend, Tyler H McCormick
Social relationships consist of interactions along multiple dimensions. In social networks, this means that individuals form multiple types of relationships with the same person (e.g., an individual will not trust all of his/her acquaintances). Statistical models for these data require understanding two related types of dependence structure: (i) structure within each relationship type, or network view, and (ii) the association between views. In this paper, we propose a statistical framework that parsimoniously represents dependence between relationship types while also maintaining enough flexibility to allow individuals to serve different roles in different relationship types...
September 2017: Annals of Applied Statistics
https://read.qxmd.com/read/29479394/a-novel-and-efficient-algorithm-for-de-novo-discovery-of-mutated-driver-pathways-in-cancer
#19
Binghui Liu, Chong Wu, Xiaotong Shen, Wei Pan
Next-generation sequencing studies on cancer somatic mutations have discovered that driver mutations tend to appear in most tumor samples, but they barely overlap in any single tumor sample, presumably because a single driver mutation can perturb the whole pathway. Based on the corresponding new concepts of coverage and mutual exclusivity, new methods can be designed for de novo discovery of mutated driver pathways in cancer. Since the computational problem is a combinatorial optimization with an objective function involving a discontinuous indicator function in high dimension, many existing optimization algorithms, such as brute-force enumeration, gradient descent and Newton's method, are practically infeasible or directly inapplicable...
September 2017: Annals of Applied Statistics
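The two concepts named in the abstract above have simple definitions on a binary mutation matrix: coverage counts tumors with at least one mutation in the candidate gene set, and the overlap penalty counts extra mutations beyond the first in each covered tumor (the classic "coverage minus overlap" weight). A toy sketch with a random mutation matrix and an arbitrary gene set; this scoring function is a standard formulation, not the paper's optimization algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)
n_samples, n_genes = 100, 50
M = rng.binomial(1, 0.05, size=(n_samples, n_genes))   # toy binary mutation matrix

def coverage_exclusivity_weight(M, gene_set):
    """Coverage minus overlap for a candidate gene set (higher is better)."""
    sub = M[:, gene_set]
    hits_per_sample = sub.sum(axis=1)
    coverage = np.count_nonzero(hits_per_sample)         # samples covered at least once
    overlap = hits_per_sample.sum() - coverage           # co-occurring (non-exclusive) hits
    return coverage - overlap, coverage, overlap

weight, cov, ov = coverage_exclusivity_weight(M, [0, 7, 23])
print(f"weight = {weight}, coverage = {cov}, overlap = {ov}")
```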
https://read.qxmd.com/read/29308102/doubly-robust-estimation-of-optimal-treatment-regimes-for-survival-data-with-application-to-an-hiv-aids-study
#20
Runchao Jiang, Wenbin Lu, Rui Song, Michael G Hudgens, Sonia Napravnik
In many biomedical settings, assigning every patient the same treatment may not be optimal due to patient heterogeneity. Individualized treatment regimes have the potential to dramatically improve clinical outcomes. When the primary outcome is censored survival time, a main interest is to find optimal treatment regimes that maximize the survival probability of patients. Since the survival curve is a function of time, it is important to balance short-term and long-term benefit when assigning treatments. In this paper, we propose a doubly robust approach to estimate optimal treatment regimes that optimize a user-specified function of the survival curve, including the restricted mean survival time and the median survival time...
September 2017: Annals of Applied Statistics
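One of the survival summaries mentioned above, the restricted mean survival time, is the area under the survival curve up to a horizon tau. The sketch below computes it from a hand-rolled Kaplan-Meier estimator on simulated data; comparing this quantity between candidate treatment regimes with the paper's doubly robust weighting is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(9)
n, tau = 300, 24.0                                   # sample size, restriction horizon (months)

event_time = rng.exponential(scale=20.0, size=n)     # simulated survival times
censor_time = rng.uniform(5.0, 40.0, size=n)         # simulated censoring times
time = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

def km_rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve up to tau."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, surv = len(time), 1.0
    prev_t, rmst = 0.0, 0.0
    for t, d in zip(time, event):
        if t > tau:
            break
        rmst += surv * (t - prev_t)                  # survival is constant between observations
        if d:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        prev_t = t
    rmst += surv * (tau - prev_t)                    # last piece up to the horizon
    return rmst

print(f"restricted mean survival time up to {tau} months: {km_rmst(time, event, tau):.1f}")
```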