Biometrics

https://read.qxmd.com/read/30746695/cross-sectional-hiv-incidence-estimation-accounting-for-heterogeneity-across-communities
#1
Yuejia Xu, Oliver Laeyendecker, Rui Wang
Accurate estimation of HIV incidence rates is crucial for the monitoring of HIV epidemics, the evaluation of prevention programs, and the design of prevention studies. Traditional cohort approaches to measuring HIV incidence require repeatedly testing large cohorts of HIV-uninfected individuals with an HIV diagnostic test (e.g., enzyme-linked immunosorbent assay) for long periods of time to identify new infections, which can be prohibitively costly, time-consuming, and subject to loss to follow-up. Cross-sectional approaches based on the usual HIV diagnostic test and biomarkers of recent infection offer important advantages over standard cohort approaches, in terms of time, cost, and attrition...
February 12, 2019: Biometrics
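Editor's note: as context for the cross-sectional approach, the sketch below computes the basic "snapshot" incidence estimator from counts of recency-biomarker-positive and HIV-negative individuals. It is a generic illustration with hypothetical numbers, not the paper's estimator; it ignores both false recency and the community-level heterogeneity the paper addresses.

```python
# Minimal sketch of the basic cross-sectional ("snapshot") incidence estimator.
# Hypothetical inputs; ignores false recency and heterogeneity across communities.
def snapshot_incidence(n_recent, n_negative, mean_window_years):
    """Infections per person-year ~ recent infections /
    (susceptibles x mean duration of the 'recent infection' window)."""
    return n_recent / (n_negative * mean_window_years)

# e.g., 40 biomarker-recent infections among 5000 HIV-negative individuals,
# with a 130-day mean recency window (all numbers hypothetical)
print(snapshot_incidence(40, 5000, 130 / 365.25))
```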
https://read.qxmd.com/read/30746692/exact-inference-for-integrated-population-modelling
#2
P Besbeas, B J T Morgan
Integrated population modelling is widely used in statistical ecology. It allows data from population time series and independent surveys to be analysed simultaneously. In classical analysis the time-series likelihood component can be conveniently approximated using Kalman filter methodology. However, the natural way to model systems which have a discrete state space is to use hidden Markov models (HMMs). The proposed method avoids the Kalman filter approximations and Monte Carlo simulations. Subject to possible numerical sensitivity analysis, it is exact, flexible, and allows the use of standard techniques of classical inference...
February 12, 2019: Biometrics
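Editor's note: for readers unfamiliar with the HMM machinery the abstract refers to, the sketch below evaluates an HMM likelihood exactly with the scaled forward recursion (no Kalman approximation). The two-state chain, emission matrix, and observation sequence are hypothetical and unrelated to any population model in the paper.

```python
import numpy as np

def hmm_log_likelihood(init, trans, emis, obs):
    """Exact log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward recursion."""
    alpha = init * emis[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ trans) * emis[:, y]   # propagate, then weight by emission
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s                              # rescale to avoid underflow
    return log_lik

# Hypothetical two-state example
init = np.array([0.6, 0.4])
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
emis = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
print(hmm_log_likelihood(init, trans, emis, [0, 0, 1, 0, 1]))
```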
https://read.qxmd.com/read/30724332/measurement-error-correction-and-sensitivity-analysis-in-longitudinal-dietary-intervention-studies-using-an-external-validation-study
#3
Juned Siddique, Michael J Daniels, Raymond J Carroll, Trivellore E Raghunathan, Elizabeth A Stuart, Laurence S Freedman
In lifestyle intervention trials, where the goal is to change a participant's weight or modify their eating behavior, self-reported diet is a longitudinal outcome variable that is subject to measurement error. We propose a statistical framework for correcting for measurement error in longitudinal self-reported dietary data by combining intervention data with auxiliary data from an external biomarker validation study where both self-reported and recovery biomarkers of dietary intake are available. In this setting, dietary intake measured without error in the intervention trial is missing data and multiple imputation is used to fill in the missing measurements...
February 6, 2019: Biometrics
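Editor's note: since the framework fills in unobserved true intake by multiple imputation, the sketch below shows only the standard Rubin's-rules step for combining a scalar estimate across imputed data sets. The numbers are hypothetical, and the imputation model itself (the paper's contribution) is not shown.

```python
import numpy as np

def rubin_combine(estimates, within_variances):
    """Combine one scalar estimate across M multiply imputed data sets:
    pooled estimate, and total variance = within + (1 + 1/M) * between."""
    q = np.asarray(estimates, float)
    u = np.asarray(within_variances, float)
    m = len(q)
    qbar = q.mean()
    b = q.var(ddof=1)                       # between-imputation variance
    total_var = u.mean() + (1 + 1 / m) * b
    return qbar, total_var

# Hypothetical intervention-effect estimates from M = 5 imputations
print(rubin_combine([1.8, 2.1, 1.9, 2.3, 2.0], [0.30, 0.28, 0.31, 0.27, 0.29]))
```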
https://read.qxmd.com/read/30714118/causal-inference-when-counterfactuals-depend-on-the-proportion-of-all-subjects-exposed
#4
Caleb H Miles, Maya Petersen, Mark J van der Laan
The assumption that no subject's exposure affects another subject's outcome, known as the no-interference assumption, has long held a foundational position in the study of causal inference. However, this assumption may be violated in many settings, and in recent years has been relaxed considerably. Often this has been achieved with either the aid of a known underlying network, or the assumption that the population can be partitioned into separate groups, between which there is no interference, and within which each subject's outcome may be affected by all the other subjects in the group via the proportion exposed (the stratified interference assumption)...
February 4, 2019: Biometrics
https://read.qxmd.com/read/30714095/familywise-error-control-in-multi-armed-response-adaptive-trials
#5
D S Robertson, J M S Wason
Response-adaptive designs allow the randomization probabilities to change during the course of a trial based on accumulating response data, so that a greater proportion of patients can be allocated to the better-performing treatments. A major concern over the use of response-adaptive designs in practice, particularly from a regulatory viewpoint, is controlling the type I error rate. In particular, we show that the naïve z-test can have an inflated type I error rate even after applying a Bonferroni correction. Simulation studies have often been used to demonstrate error control, but they do not provide a guarantee...
February 4, 2019: Biometrics
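Editor's note: the sketch below shows the naive analysis the abstract refers to, i.e. one-sided z-tests of each arm against control with a Bonferroni correction. Under response-adaptive randomization this procedure is not guaranteed to control the familywise error rate, which is the issue the paper addresses; the z-statistics below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def bonferroni_z_rejections(z_stats, alpha=0.05):
    """Reject H0 for each of K treatment-vs-control comparisons when the
    one-sided p-value falls below alpha / K (Bonferroni)."""
    p = norm.sf(np.asarray(z_stats))        # one-sided p-values
    return p < alpha / len(z_stats)

print(bonferroni_z_rejections([2.6, 1.4, 2.1]))   # hypothetical z-statistics
```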
https://read.qxmd.com/read/30714093/an-iterative-penalized-least-squares-approach-to-sparse-canonical-correlation-analysis
#6
Qing Mai, Xin Zhang
There is growing interest in modeling the relationship between two sets of high-dimensional measurements with potentially high correlations. Canonical correlation analysis (CCA) is a classical tool that explores the dependency between two multivariate random variables and extracts canonical pairs of highly correlated linear combinations. Driven by applications in genomics, text mining, and imaging research, among others, many recent studies generalize CCA to high-dimensional settings. However, most of them either rely on strong assumptions on covariance matrices or do not produce nested solutions...
February 4, 2019: Biometrics
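Editor's note: as a rough illustration of sparse CCA, the sketch below alternates soft-thresholded updates for a single canonical pair, treating within-set covariances as identity, a simplification made by several sparse CCA methods. This is a generic sketch, not the authors' iterative penalized least-squares algorithm, and the tuning parameter lam is arbitrary.

```python
import numpy as np

def soft_threshold(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_cca_pair(X, Y, lam=0.1, n_iter=200):
    """One sparse canonical pair via alternating soft-thresholded updates,
    with within-set covariances treated as identity (a common simplification)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    C = X.T @ Y / X.shape[0]                          # cross-covariance
    u = np.full(X.shape[1], 1 / np.sqrt(X.shape[1]))
    v = np.full(Y.shape[1], 1 / np.sqrt(Y.shape[1]))
    for _ in range(n_iter):
        u = soft_threshold(C @ v, lam); u /= np.linalg.norm(u) + 1e-12
        v = soft_threshold(C.T @ u, lam); v /= np.linalg.norm(v) + 1e-12
    return u, v

# Simulated data with a shared latent factor driving the first few coordinates
rng = np.random.default_rng(0)
f = rng.normal(size=(200, 1))
X = np.hstack([f + 0.5 * rng.normal(size=(200, 3)), rng.normal(size=(200, 3))])
Y = np.hstack([f + 0.5 * rng.normal(size=(200, 2)), rng.normal(size=(200, 2))])
u, v = sparse_cca_pair(X, Y, lam=0.1)
print(np.round(u, 2), np.round(v, 2))   # loadings concentrate on the shared coordinates
```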
https://read.qxmd.com/read/30690720/a-two-stage-experimental-design-for-dilution-assays
#7
Jake M Ferguson, Tanya A Miura, Craig R Miller
Dilution assays to determine solute concentration have found wide use in biomedical research. Many dilution assays return imprecise concentration estimates because they are carried out only to orders of magnitude. Previous statistical work has focused on how to design efficient experiments that can return more precise estimates; however, this work has not considered the practical difficulties of implementing these designs in the laboratory. We developed a two-stage experiment with a first stage that obtains an order-of-magnitude estimate and a second stage that concentrates effort on the most informative dilution to increase estimator precision...
January 28, 2019: Biometrics
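Editor's note: for readers unfamiliar with dilution assays, the sketch below writes down the log-likelihood of a concentration under a standard single-hit Poisson model, in which a well at dilution factor d is positive with probability 1 - exp(-c/d), and maximizes it for a one-stage design. The dilution series and well counts are hypothetical, and the paper's two-stage design is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dilution_log_lik(log_c, dilution_factors, n_wells, n_positive):
    """Log-likelihood of log-concentration under a single-hit Poisson model:
    P(well positive at dilution factor d) = 1 - exp(-c / d)."""
    c = np.exp(log_c)
    p = 1.0 - np.exp(-c / np.asarray(dilution_factors, float))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    k = np.asarray(n_positive, float)
    n = np.asarray(n_wells, float)
    return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

# Hypothetical ten-fold series, 8 wells per dilution, positives = [8, 5, 1]
neg_ll = lambda lc: -dilution_log_lik(lc, [1e2, 1e3, 1e4], [8, 8, 8], [8, 5, 1])
fit = minimize_scalar(neg_ll, bounds=(0.0, 15.0), method="bounded")
print(np.exp(fit.x))    # maximum-likelihood concentration (order 10^3 here)
```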
https://read.qxmd.com/read/30690718/a-cluster-adjusted-rank-based-test-for-a-clinical-trial-concerning-multiple-endpoints-with-application-to-dietary-intervention-assessment
#8
Wei Zhang, Aiyi Liu, Larry L Tang, Qizhai Li
Multiple endpoints are often naturally clustered based on their scientific interpretations. Tests that compare these clustered outcomes between independent groups may lose efficiency if the cluster structures are not properly accounted for. For the two-sample generalized Behrens-Fisher hypothesis concerning multiple endpoints, we propose a cluster-adjusted multivariate test procedure for the comparison and demonstrate its gain in efficiency over test procedures that ignore the clusters. Data from a dietary intervention trial are used to illustrate the methods...
January 28, 2019: Biometrics
https://read.qxmd.com/read/30690717/efficient-methods-for-signal-detection-from-correlated-adverse-events-in-clinical-trials
#9
Guoqing Diao, Guanghan F Liu, Donglin Zeng, William Wang, Xianming Tan, Joseph F Heyse, Joseph G Ibrahim
It is an important and yet challenging task to identify true signals from many adverse events that may be reported during the course of a clinical trial. One unique feature of drug safety data from clinical trials, unlike data from post-marketing spontaneous reporting, is that many types of adverse events are reported by only a few patients, leading to rare events. Due to the limited study size, the p-values for testing whether the rate is higher in the treatment group across all types of adverse events are in general not uniformly distributed under the null hypothesis that there is no difference between the treatment group and the placebo group...
January 28, 2019: Biometrics
https://read.qxmd.com/read/30690716/a-bayesian-hierarchical-model-estimating-cace-in-meta-analysis-of-randomized-clinical-trials-with-noncompliance
#10
Jincheng Zhou, James S Hodges, M Fareed K Suri, Haitao Chu
Noncompliance to assigned treatment is a common challenge in analysis and interpretation of randomized clinical trials. The complier average causal effect (CACE) approach provides a useful tool for addressing noncompliance, where CACE is defined as the average difference in potential outcomes for the response in the subpopulation of subjects who comply with their assigned treatments. In this article, we present a Bayesian hierarchical model to estimate the CACE in a meta-analysis of randomized clinical trials where compliance may be heterogeneous between studies...
January 28, 2019: Biometrics
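Editor's note: the single-trial building block behind CACE is the standard instrumental-variable (Wald) ratio of intention-to-treat effects. The toy sketch below computes it for hypothetical data; the paper's contribution, a Bayesian hierarchical model pooling CACE across trials with heterogeneous compliance, is not shown.

```python
import numpy as np

def cace_wald(y, z, d):
    """Single-trial CACE estimate under the usual IV assumptions:
    (ITT effect on outcome) / (ITT effect on treatment received)."""
    y, z, d = (np.asarray(a, float) for a in (y, z, d))
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()
    return itt_y / itt_d

# Hypothetical trial: z = assignment, d = treatment actually received, y = outcome
z = np.array([1, 1, 1, 1, 0, 0, 0, 0])
d = np.array([1, 1, 0, 1, 0, 0, 0, 0])
y = np.array([3.0, 2.5, 1.0, 2.8, 1.2, 0.9, 1.1, 1.0])
print(cace_wald(y, z, d))
```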
https://read.qxmd.com/read/30690707/fast-likelihood-based-inference-for-latent-count-models-using-the-saddlepoint-approximation
#11
W Zhang, M V Bravington, R M Fewster
Latent count models constitute an important modeling class in which a latent vector of counts, z, is summarized or corrupted for reporting, yielding observed data y = T z where T is a known but non-invertible matrix. The observed vector y generally follows an unknown multivariate distribution with a complicated dependence structure. Latent count models arise in diverse fields, such as estimation of population size from capture-recapture studies; inference on multi-way contingency tables summarized by marginal totals; or analysis of route flows in networks based on traffic counts at a subset of nodes...
January 28, 2019: Biometrics
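Editor's note: to make the observation equation y = Tz concrete, the toy example below shows a 2x2 contingency table reported only through its row and column totals. T is known but has rank 3, so the latent cell counts z cannot be recovered from y; the counts are arbitrary.

```python
import numpy as np

# Toy illustration of y = T z for a 2x2 contingency table summarized by margins:
# z stacks the four cell counts, T maps them to the observed row/column totals.
z = np.array([5, 3, 2, 7])              # cells (1,1), (1,2), (2,1), (2,2)
T = np.array([[1, 1, 0, 0],             # row 1 total
              [0, 0, 1, 1],             # row 2 total
              [1, 0, 1, 0],             # column 1 total
              [0, 1, 0, 1]])            # column 2 total
y = T @ z
print(y)                                # [ 8  9  7 10]
print(np.linalg.matrix_rank(T))         # 3 < 4, so z is not recoverable from y
```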
https://read.qxmd.com/read/30666629/a-statistical-method-for-joint-estimation-of-cis-eqtls-and-parent-of-origin-effects-under-family-trio-design
#12
Vasyl Zhabotynsky, Kaoru Inoue, Terry Magnuson, J Mauro Calabrese, Wei Sun
RNA sequencing allows one to study allelic imbalance of gene expression, which may be due to genetic factors or genomic imprinting (i.e., higher expression of the maternal or paternal allele). It is desirable to model both genetic and parent-of-origin effects simultaneously to avoid confounding and to improve the power to detect either effect. In studies of genetically tractable model organisms, separation of genetic and parent-of-origin effects can be achieved by studying reciprocal crosses of two inbred strains...
January 22, 2019: Biometrics
https://read.qxmd.com/read/30666621/a-sensitivity-analysis-approach-for-informative-dropout-using-shared-parameter-models
#13
Li Su, Qiuju Li, Jessica K Barrett, Michael J Daniels
Shared parameter models (SPMs) are a useful approach to addressing bias from informative dropout in longitudinal studies. In SPMs it is typically assumed that the longitudinal outcome process and the dropout time are independent, given random effects and observed covariates. However, this conditional independence assumption is unverifiable. Currently, sensitivity analysis strategies for this unverifiable assumption of SPMs are underdeveloped. In principle, parameters that can and cannot be identified by the observed data should be clearly separated in sensitivity analyses, and sensitivity parameters should not influence the model fit to the observed data...
January 22, 2019: Biometrics
https://read.qxmd.com/read/30648746/model-confidence-bounds-for-variable-selection
#14
Yang Li, Yuetian Luo, Davide Ferrari, Xiaonan Hu, Yichen Qin
In this article, we introduce the concept of model confidence bounds (MCB) for variable selection in the context of nested models. Similarly to the endpoints in the familiar confidence interval for parameter estimation, the MCB identifies two nested models (upper and lower confidence bound models) containing the true model at a given level of confidence. Instead of trusting a single selected model obtained from a given model selection method, the MCB proposes a group of nested models as candidates, and the MCB's width and composition enable the practitioner to assess the overall model selection uncertainty...
January 16, 2019: Biometrics
https://read.qxmd.com/read/30648731/inference-for-case-control-studies-with-incident-and-prevalent-cases
#15
Marlena Maziarz, Yukun Liu, Jing Qin, Ruth M Pfeiffer
We propose and study a fully efficient method to estimate associations of an exposure with disease incidence when both incident cases and prevalent cases, i.e., individuals who were diagnosed with the disease at some prior time point and are alive at the time of sampling, are included in a case-control study. We extend the exponential tilting model for the relationship between exposure and case status to accommodate two case groups, and correct for the survival bias in the prevalent cases through a tilting term that depends on the parametric distribution of the backward time, i...
January 16, 2019: Biometrics
https://read.qxmd.com/read/30648730/approximate-bayesian-inference-for-discretely-observed-continuous-time-multi-state-models
#16
Andrea Tancredi
Inference for continuous-time multi-state models presents considerable computational difficulties when the process is only observed at discrete time points with no additional information about the state transitions. In fact, for a general multi-state Markov model, evaluation of the likelihood function is possible only via intensive numerical approximations. Moreover, in real applications, transitions between states may depend on the time since entry into the current state, and semi-Markov models, where the likelihood function is not available in closed form, should be fitted to the data...
January 16, 2019: Biometrics
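Editor's note: for the Markov special case, the likelihood of a discretely observed continuous-time chain is available through the matrix exponential of the intensity matrix, as sketched below with a hypothetical three-state model. Semi-Markov models, which the paper targets, have no such closed form.

```python
import numpy as np
from scipy.linalg import expm

def markov_panel_log_lik(Q, states, times):
    """Log-likelihood of a discretely observed continuous-time Markov chain:
    the transition probability over a gap of length dt is expm(Q * dt)."""
    ll = 0.0
    for s0, s1, t0, t1 in zip(states[:-1], states[1:], times[:-1], times[1:]):
        P = expm(Q * (t1 - t0))
        ll += np.log(P[s0, s1])
    return ll

# Hypothetical three-state intensity matrix with state 2 absorbing
Q = np.array([[-0.3,  0.2, 0.1],
              [ 0.1, -0.4, 0.3],
              [ 0.0,  0.0, 0.0]])
print(markov_panel_log_lik(Q, [0, 0, 1, 2], [0.0, 1.0, 2.5, 4.0]))
```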
https://read.qxmd.com/read/30638268/causal-comparative-effectiveness-analysis-of-dynamic-continuous-time-treatment-initiation-rules-with-sparsely-measured-outcomes-and-death
#17
Liangyuan Hu, Joseph W Hogan
Evidence supporting the current World Health Organization recommendations of early antiretroviral therapy (ART) initiation for adolescents is inconclusive. We leverage a large observational data set and compare, in terms of mortality and CD4 cell count, the dynamic treatment initiation rules for HIV-infected adolescents. Our approaches extend the marginal structural model for estimating outcome distributions under dynamic treatment regimes (DTR), developed in Robins et al. (2008), to allow causal comparisons of both specific regimes and regimes along a continuum...
January 14, 2019: Biometrics
https://read.qxmd.com/read/30571849/estimations-of-the-joint-distribution-of-failure-time-and-failure-type-with-dependent-truncation
#18
Yu-Jen Cheng, Mei-Cheng Wang, Chang-Yu Tsai
In biomedical studies involving survival data, the observation of failure times is sometimes accompanied by a variable which describes the type of failure event (Kalbfleisch and Prentice, 2002). This paper considers two specific challenges which are encountered in the joint analysis of failure time and failure type. First, because the observation of failure times is subject to left truncation, the sampling bias extends to the failure type which is associated with the failure time. An analytical challenge is to deal with such sampling bias...
December 20, 2018: Biometrics
https://read.qxmd.com/read/30549012/dependence-modeling-for-recurrent-event-times-subject-to-right-censoring-with-d-vine-copulas
#19
Nicole Barthel, Candida Geerdens, Claudia Czado, Paul Janssen
In many time-to-event studies, the event of interest is recurrent. Here, the data for each sample unit correspond to a series of gap times between successive events. Given a limited follow-up period, the last gap time might be right-censored. In contrast to classical analysis, gap times and censoring times cannot be assumed independent, i.e., the sequential nature of the data induces dependent censoring. Also, the number of recurrences typically varies among sample units, leading to unbalanced data. To model the association pattern between gap times, so far only parametric margins combined with the restrictive class of Archimedean copulas have been considered...
December 14, 2018: Biometrics
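Editor's note: as a point of contrast with the D-vine construction, the sketch below simulates dependent gap times from a Clayton copula, one of the Archimedean copulas described above as restrictive, using conditional inversion. Margins are taken as unit exponential and all values are hypothetical.

```python
import numpy as np

def clayton_pair(n, theta, rng):
    """Sample n pairs (u1, u2) from a Clayton copula by conditional inversion."""
    u1 = rng.uniform(size=n)
    w = rng.uniform(size=n)
    u2 = (u1**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1)**(-1 / theta)
    return u1, u2

rng = np.random.default_rng(1)
u1, u2 = clayton_pair(5, theta=2.0, rng=rng)
gap1, gap2 = -np.log(1 - u1), -np.log(1 - u2)   # dependent unit-exponential gap times
print(np.column_stack([gap1, gap2]))
```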
https://read.qxmd.com/read/30549011/distribution-free-estimation-of-local-growth-rates-around-interval-censored-anchoring-events
#20
Chenghao Chu, Ying Zhang, Wanzhu Tu
Biological processes are usually defined on timelines that are anchored by specific events. For example, cancer growth is typically measured by the change in tumor size from the time of oncogenesis. In the absence of such anchoring events, longitudinal assessments of the outcome lose their temporal reference. In this paper, we considered the estimation of local change rates in the outcomes when the anchoring events are interval-censored. Viewing the subject-specific anchoring event times as random variables from an unspecified distribution, we proposed a distribution-free estimation method for the local growth rates around the unobserved anchoring events...
December 14, 2018: Biometrics