PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal

Discovery and validation of a prognostic proteomic signature for tuberculosis progression: A prospective cohort study

Tue, 16/04/2019 - 23:00

by Adam Penn-Nicholson, Thomas Hraha, Ethan G. Thompson, David Sterling, Stanley Kimbung Mbandi, Kirsten M. Wall, Michelle Fisher, Sara Suliman, Smitha Shankar, Willem A. Hanekom, Nebojsa Janjic, Mark Hatherill, Stefan H. E. Kaufmann, Jayne Sutherland, Gerhard Walzl, Mary Ann De Groote, Urs Ochsner, Daniel E. Zak, Thomas J. Scriba, ACS and GC6–74 cohort study groups

Background

A nonsputum blood test capable of predicting progression of healthy individuals to active tuberculosis (TB) before clinical symptoms manifest would allow targeted treatment to curb transmission. We aimed to develop a proteomic biomarker of risk of TB progression for ultimate translation into a point-of-care diagnostic.

Methods and findings

Proteomic TB risk signatures were discovered in a longitudinal cohort of 6,363 Mycobacterium tuberculosis-infected, HIV-negative South African adolescents aged 12–18 years (68% female) who participated in the Adolescent Cohort Study (ACS) between July 6, 2005 and April 23, 2007, through either active (every 6 months) or passive follow-up over 2 years. Forty-six individuals developed microbiologically confirmed TB disease within 2 years of follow-up and were selected as progressors; 106 nonprogressors, who remained healthy, were matched to progressors. Over 3,000 human proteins were quantified in plasma with a highly multiplexed proteomic assay (SOMAscan). Three hundred sixty-one proteins of differential abundance between progressors and nonprogressors were identified. A 5-protein signature, TB Risk Model 5 (TRM5), was discovered in the ACS training set and verified by blind prediction in the ACS test set. Poor performance on samples taken 13–24 months before TB diagnosis motivated discovery of a second signature, the 3-protein pair-ratio (3PR), developed using an orthogonal strategy on the full ACS subcohort. Prognostic performance of both signatures was validated in an independent cohort of 1,948 HIV-negative household TB contacts from The Gambia (aged 15–60 years, 66% female), followed up longitudinally for 2 years between March 5, 2007 and October 21, 2010 and sampled at baseline, month 6, and month 18. Amongst these contacts, 34 individuals progressed to microbiologically confirmed TB disease and were included as progressors, and 115 nonprogressors were included as controls. Prognostic performance of the TRM5 signature in the ACS training set was excellent within 6 months of TB diagnosis (area under the receiver operating characteristic curve [AUC] 0.96 [95% confidence interval, 0.93–0.99]) and lower 6–12 months before TB diagnosis (AUC 0.76 [0.65–0.87]). TRM5 validated with an AUC of 0.66 (0.56–0.75) within 1 year of TB diagnosis in the Gambian validation cohort. The 3PR signature yielded an AUC of 0.89 (0.84–0.95) within 6 months of TB diagnosis and 0.72 (0.64–0.81) 7–12 months before TB diagnosis in the entire South African discovery cohort, and validated with an AUC of 0.65 (0.55–0.75) within 1 year of TB diagnosis in the Gambian validation cohort. Signature validation may have been limited by a systematic shift in signal magnitudes generated by differences between the validation assay and the discovery assay. Further validation, especially in cohorts from non-African countries, is necessary to determine how generalizable signature performance is.
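
To make the headline AUC metric concrete, here is a minimal Python sketch of how an area under the ROC curve with a percentile-bootstrap 95% CI can be computed for a candidate signature score. The scores are simulated; only the group sizes (46 progressors, 106 nonprogressors) come from the abstract, and this is not the study's analysis code.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical signature scores: progressors (1) score higher on average.
    y = np.array([1] * 46 + [0] * 106)
    scores = np.concatenate([rng.normal(1.0, 1.0, 46), rng.normal(0.0, 1.0, 106)])

    auc = roc_auc_score(y, scores)

    # Percentile bootstrap for the 95% confidence interval.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), len(y))
        if len(set(y[idx])) == 2:  # resample must contain both classes
            boot.append(roc_auc_score(y[idx], scores[idx]))
    ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
    print(f"AUC {auc:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")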

Conclusions

Both proteomic TB risk signatures predicted progression to incident TB within a year of diagnosis. To our knowledge, these are the first validated prognostic proteomic signatures. Neither meets the minimum criteria as defined in the WHO Target Product Profile for a progression test. More work is required to develop such a test for practical identification of individuals for investigation of incipient, subclinical, or active TB disease for appropriate treatment and care.

Screening for breech presentation using universal late-pregnancy ultrasonography: A prospective cohort study and cost effectiveness analysis

Tue, 16/04/2019 - 23:00

by David Wastlund, Alexandros A. Moraitis, Alison Dacey, Ulla Sovio, Edward C. F. Wilson, Gordon C. S. Smith

Background

Despite the relative ease with which breech presentation can be identified through ultrasound screening, the assessment of foetal presentation at term is often based on clinical examination only. Due to limitations in this approach, many women present in labour with an undiagnosed breech presentation, with increased risk of foetal morbidity and mortality. This study sought to determine the cost effectiveness of universal ultrasound scanning for breech presentation near term (36 weeks of gestational age [wkGA]) in nulliparous women.

Methods and findings

The Pregnancy Outcome Prediction (POP) study was a prospective cohort study conducted between January 14, 2008 and July 31, 2012, including 3,879 nulliparous women who attended for a research screening ultrasound examination at 36 wkGA. Foetal presentation was assessed and compared between the groups with and without a clinically indicated ultrasound. Where breech presentation was detected, an external cephalic version (ECV) was routinely offered. If the ECV was unsuccessful or not performed, the women were offered either planned cesarean section at 39 weeks or attempted vaginal breech delivery. To compare the likelihood of different modes of delivery and associated long-term health outcomes under universal ultrasound versus current practice, a probabilistic economic simulation model was constructed. Parameter values were obtained from the POP study, and costs were mainly obtained from the English National Health Service (NHS). One hundred seventy-nine of the 3,879 women (4.6%) were diagnosed with breech presentation at 36 weeks. For most of these women (n = 96), there had been no prior suspicion of noncephalic presentation. ECV was attempted for 84 (46.9%) women and was successful in 12 (success rate: 14.3%). Overall, 19 of the 179 women delivered vaginally (10.6%), 110 delivered by elective cesarean section (ELCS) (61.5%), and 50 delivered by emergency cesarean section (EMCS) (27.9%). There were no women with undiagnosed breech presentation in labour in the entire cohort. On average, 40 scans were needed per detection of a previously undiagnosed breech presentation. The economic analysis indicated that, compared to current practice, universal late-pregnancy ultrasound would identify around 14,826 otherwise undiagnosed breech presentations across England annually. It would also reduce EMCS and vaginal breech deliveries by 0.7 and 1.0 percentage points, respectively: around 4,196 and 6,061 deliveries across England annually. Universal ultrasound would also prevent 7.89 neonatal deaths annually. The strategy would be cost effective if foetal presentation could be assessed for £19.80 or less per woman. Limitations of this study include that foetal presentation was revealed to all women and that the health economic analysis may be altered by parity.
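
As a back-of-envelope check on the screening arithmetic, this short Python sketch re-derives the "40 scans per detection" figure from the cohort counts and then derives an implied spend per new detection at the reported threshold; the latter is an illustrative derivation of ours, not a quantity the study reports.

    # Scans needed per newly detected breech presentation, from the cohort counts.
    n_scanned = 3879        # nulliparous women scanned at 36 wkGA
    n_newly_detected = 96   # breech presentations with no prior suspicion

    scans_per_detection = n_scanned / n_newly_detected
    print(f"{scans_per_detection:.0f} scans per new detection")  # ~40, as reported

    # Illustrative corollary (not a study result): at the break-even cost of
    # GBP 19.80 per woman, the implied spend per newly detected breech is
    print(f"GBP {19.80 * scans_per_detection:.0f} per new detection")  # ~GBP 800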

Conclusions

According to our estimates, universal late-pregnancy ultrasound in nulliparous women (1) would virtually eliminate undiagnosed breech presentation, (2) would be expected to reduce foetal mortality in breech presentation, and (3) would be cost effective if foetal presentation could be assessed for less than £19.80 per woman.

The incidence of pregnancy hypertension in India, Pakistan, Mozambique, and Nigeria: A prospective population-level analysis

Fri, 12/04/2019 - 23:00

by Laura A. Magee, Sumedha Sharma, Hannah L. Nathan, Olalekan O. Adetoro, Mrutynjaya B. Bellad, Shivaprasad Goudar, Salécio E. Macuacua, Ashalata Mallapur, Rahat Qureshi, Esperança Sevene, John Sotunsa, Anifa Valá, Tang Lee, Beth A. Payne, Marianne Vidler, Andrew H. Shennan, Zulfiqar A. Bhutta, Peter von Dadelszen, the CLIP Study Group

Background

Most pregnancy hypertension estimates in less-developed countries are from cross-sectional hospital surveys and are considered overestimates. We estimated population-based rates by standardised methods in 27 intervention clusters of the Community-Level Interventions for Pre-eclampsia (CLIP) cluster randomised trials.

Methods and findings

CLIP-eligible pregnant women were identified in their homes or local primary health centres (2013–2017). Included here are women who had delivered by trial end and received a visit from a community health worker trained to provide supplementary hypertension-oriented care, including standardised blood pressure (BP) measurement. Hypertension (BP ≥ 140/90 mm Hg) was defined as chronic (first detected at <20 weeks gestation) or gestational (≥20 weeks); pre-eclampsia was gestational hypertension plus proteinuria or a pre-eclampsia-defining complication. A multi-level regression model compared hypertension rates and types between countries (p < 0.05 considered significant). In the 28,420 pregnancies studied, women were usually young (median age 23–28 years), parous (53.7%–77.3%), with singletons (≥97.5%), and enrolled at a median gestational age of 10.4 weeks (India) to 25.9 weeks (Mozambique). Basic education varied (22.8% in Pakistan to 57.9% in India). Pregnancy hypertension incidence was lower in Pakistan (9.3%) than in India (10.3%), Mozambique (10.9%), or Nigeria (10.2%) (p = 0.001). Most hypertension was diastolic only (46.4% in India, 72.7% in Pakistan, 61.3% in Mozambique, and 63.3% in Nigeria). At first presentation with elevated BP, gestational hypertension was the most common diagnosis (particularly in Mozambique [8.4%] versus India [6.9%], Pakistan [6.5%], and Nigeria [7.1%]; p < 0.001), followed by pre-eclampsia (India [3.8%], Nigeria [3.0%], Pakistan [2.4%], and Mozambique [2.3%]; p < 0.001) and chronic hypertension (especially in Mozambique [2.5%] and Nigeria [2.8%], compared with India [1.2%] and Pakistan [1.5%]; p < 0.001). Inclusion of additional diagnoses of hypertension and related complications, from household surveys or facility record review (unavailable in Nigeria), revealed higher hypertension incidence: 14.0% in India, 11.6% in Pakistan, and 16.8% in Mozambique; eclampsia was rare (<0.5%).
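
The case definitions above translate directly into code. Below is a minimal, hypothetical Python sketch of the classification logic (hypertension if BP ≥ 140/90 mm Hg; chronic if first detected at <20 weeks gestation; pre-eclampsia if gestational hypertension plus proteinuria or a defining complication); the function name and signature are ours, not the CLIP trial's software.

    def classify_hypertension(sbp, dbp, ga_weeks, proteinuria=False,
                              pe_complication=False):
        """Classify pregnancy hypertension per the definitions in the abstract.

        sbp/dbp: blood pressure in mm Hg; ga_weeks: gestational age (weeks)
        at first detection of elevated BP. Returns None if normotensive.
        """
        if sbp < 140 and dbp < 90:
            return None
        if ga_weeks < 20:
            return "chronic hypertension"
        if proteinuria or pe_complication:
            return "pre-eclampsia"
        return "gestational hypertension"

    print(classify_hypertension(150, 95, 28, proteinuria=True))  # pre-eclampsia
    print(classify_hypertension(138, 92, 24))  # gestational (diastolic only)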

Conclusions

Pregnancy hypertension is common in less-developed settings. Most women in this study presented with gestational hypertension amenable to surveillance and timed delivery to improve outcomes.

Trial registration

This study is a secondary analysis of a clinical trial - ClinicalTrials.gov registration number NCT01911494.

Lipoarabinomannan in sputum to detect bacterial load and treatment response in patients with pulmonary tuberculosis: Analytic validation and evaluation in two cohorts

Fri, 12/04/2019 - 23:00

by Masanori Kawasaki, Carmenchu Echiverri, Lawrence Raymond, Elizabeth Cadena, Evelyn Reside, Maria Tarcela Gler, Tetsuya Oda, Ryuta Ito, Ryo Higashiyama, Kiyonori Katsuragi, Yongge Liu

Background

Lipoarabinomannan (LAM) is a major antigen of Mycobacterium tuberculosis (MTB). In this report, we evaluated the ability of a novel immunoassay to measure concentrations of LAM in sputum as a biomarker of bacterial load prior to and during treatment in pulmonary tuberculosis (TB) patients.

Methods and findings

Phage display technology was used to isolate monoclonal antibodies binding to epitopes unique to LAM from MTB and slow-growing nontuberculous mycobacteria (NTM). Using these antibodies, a sandwich enzyme-linked immunosorbent assay (LAM-ELISA) was developed to quantitate LAM concentration. The LAM-ELISA had a lower limit of quantification of 15 pg/mL LAM, corresponding to 121 colony-forming units (CFUs)/mL of MTB strain H37Rv. It detected slow-growing NTMs but did not cross-react with common oral bacteria. Two clinical studies were performed between 2013 and 2016 in Manila, Philippines, in patients without known human immunodeficiency virus (HIV) coinfection. In a case-control cohort diagnostic study, sputum specimens were collected from 308 patients (aged 17–69 years; 62% male) diagnosed as having pulmonary TB disease or non-TB diseases, but who could expectorate sputum, and were then evaluated by smear microscopy, BACTEC MGIT 960 Mycobacterial Detection System (MGIT) and Lowenstein-Jensen (LJ) culture, and LAM-ELISA. Some sputum specimens were also examined by Xpert MTB/RIF. The LAM-ELISA detected all smear- and MTB-culture-positive samples (n = 70) and 50% (n = 29) of smear-negative but culture-positive samples (n = 58) (versus 79.3%, 46 positive cases, by Xpert MTB/RIF), but none from non-TB patients (n = 56). Among samples positive by both LAM-ELISA and MGIT MTB culture, log10-transformed LAM concentration and MGIT time to detection (TTD) showed a good inverse relationship (r = −0.803, p < 0.0001). In a prospective longitudinal cohort study, 40 drug-susceptible pulmonary TB patients (aged 18–69 years; 60% male) were enrolled and followed during the first 56 days of the standard 4-drug therapy. Declines in sputum LAM concentration correlated with increases in MGIT TTD in individual patients. There was a 1.29 log10 decrease in sputum LAM concentration, corresponding to an increase of 221 hours in MGIT TTD, during the first 14 days of treatment, a treatment duration often used in early bactericidal activity (EBA) trials. Major limitations of this study include the relatively small number of patients, treatment duration of up to only 56 days, lack of quantitative sputum culture CFU count data, and no examination of the correlation of sputum LAM with clinical cure.
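
The reported inverse relationship between log10 LAM concentration and MGIT time to detection (TTD) is a simple correlation on paired measurements. A sketch with simulated data (not the study's measurements), assuming higher bacterial load means more LAM and a shorter TTD:

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)

    log10_lam = rng.uniform(1.2, 4.0, 60)                      # log10 pg/mL, simulated
    ttd_hours = 600 - 120 * log10_lam + rng.normal(0, 40, 60)  # inverse relationship

    r, p = pearsonr(log10_lam, ttd_hours)
    print(f"r = {r:.3f}, p = {p:.2e}")  # abstract reports r = -0.803, p < 0.0001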

Conclusions

These results indicate that the LAM-ELISA can determine LAM concentration in sputum, and sputum LAM measured by the assay may be used as a biomarker of bacterial load prior to and during TB treatment. Additional studies are needed to examine the predictive value of this novel biomarker on treatment outcomes.

Effects of repeat prenatal corticosteroids given to women at risk of preterm birth: An individual participant data meta-analysis

Fri, 12/04/2019 - 23:00

by Caroline A. Crowther, Philippa F. Middleton, Merryn Voysey, Lisa Askie, Sasha Zhang, Tanya K. Martlow, Fariba Aghajafari, Elizabeth V. Asztalos, Peter Brocklehurst, Sourabh Dutta, Thomas J. Garite, Debra A. Guinn, Mikko Hallman, Pollyanna Hardy, Men-Jean Lee, Kimberley Maurel, Premasish Mazumder, Cindy McEvoy, Kellie E. Murphy, Outi M. Peltoniemi, Elizabeth A. Thom, Ronald J. Wapner, Lex W. Doyle, the PRECISE Group

Background

Infants born preterm, compared with infants born at term, are at increased risk of dying and of serious morbidities in early life, and those who survive have higher rates of neurological impairment. It remains unclear whether exposure to repeat courses of prenatal corticosteroids can reduce these risks. This individual participant data (IPD) meta-analysis (MA) assessed whether the effects of repeat prenatal corticosteroid treatment, given to women at ongoing risk of preterm birth in order to benefit their infants, are modified by participant or treatment factors.

Methods and findings

Trials were eligible for inclusion if they randomised women considered at risk of preterm birth who had already received an initial, single course of prenatal corticosteroids seven or more days previously, and in which repeat corticosteroids were compared with either placebo or no treatment. The primary outcomes for the infants were serious outcome, use of respiratory support, and birth weight z-scores; for the children, death or any neurosensory disability; and for the women, maternal sepsis. Studies were identified using the Cochrane Pregnancy and Childbirth search strategy. The date of last search was 20 January 2015. IPD were sought from investigators with eligible trials. Risk of bias was assessed using criteria from the Cochrane Collaboration. IPD were analysed using a one-stage approach. Eleven trials, conducted between 2002 and 2010, were identified as eligible: five from the United States, two from Canada, and one each from Australia and New Zealand, Finland, India, and the United Kingdom. All 11 trials were included, with 4,857 women and 5,915 infants contributing data. The mean gestational age at trial entry was between 27.4 weeks and 30.2 weeks. There was no significant difference in the proportion of infants with a serious outcome (relative risk [RR] 0.92, 95% confidence interval [CI] 0.82 to 1.04, 5,893 infants, 11 trials, p = 0.33 for heterogeneity). There was a reduction in the use of respiratory support in infants exposed to repeat prenatal corticosteroids compared with infants not exposed (RR 0.91, 95% CI 0.85 to 0.97, 5,791 infants, 10 trials, p = 0.64 for heterogeneity). The number needed to treat (NNT) to benefit was 21 (95% CI 14 to 41) women/fetus to prevent one infant from needing respiratory support. Birth weight z-scores were lower in the repeat corticosteroid group (mean difference −0.12, 95% CI −0.18 to −0.06, 5,902 infants, 11 trials, p = 0.80 for heterogeneity). No statistically significant differences were seen for any of the primary outcomes for the child (death or any neurosensory disability) or for the woman (maternal sepsis). The treatment effect varied little by the reason the woman was considered to be at risk of preterm birth, the number of fetuses in utero, the gestational age when the first trial treatment course was given, or the time prior to birth that the last dose was given. Infants exposed to 2–5 courses of repeat corticosteroids showed a reduction in both serious outcome and the use of respiratory support compared with infants exposed to only a single repeat course. However, increasing numbers of repeat courses of corticosteroids were associated with larger reductions in birth z-scores for weight, length, and head circumference. Not all trials could provide data for all of the prespecified subgroups, which limited the power to detect differences, because event rates are low for some important maternal, infant, and childhood outcomes.
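
The reported NNT of 21 follows from the relative risk and the control-group event rate via NNT = 1 / (control event rate × (1 − RR)). A minimal sketch, where the 0.53 control event rate is an assumed illustrative value (chosen so the result matches the reported NNT), not a figure from the abstract:

    # Number needed to treat from a relative risk and a control-group event rate.
    def nnt_from_rr(rr, control_event_rate):
        absolute_risk_reduction = control_event_rate * (1 - rr)
        return 1 / absolute_risk_reduction

    # RR 0.91 is from the abstract; 0.53 is an assumed control event rate.
    print(round(nnt_from_rr(0.91, 0.53)))  # -> 21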

Conclusions

In this study, we found that repeat prenatal corticosteroids given to women at ongoing risk of preterm birth after an initial course reduced the likelihood of their infant needing respiratory support after birth and led to neonatal benefits. Body size measures at birth were lower in infants exposed to repeat prenatal corticosteroids. Our findings suggest that to provide clinical benefit with the least effect on growth, the number of repeat treatment courses should be limited to a maximum of three and the total dose to between 24 mg and 48 mg.

Preferences for HIV testing services among men who have sex with men in the UK: A discrete choice experiment

Thu, 11/04/2019 - 23:00

by Alec Miners, Tom Nadarzynski, Charles Witzel, Andrew N. Phillips, Valentina Cambiano, Alison J. Rodger, Carrie D. Llewellyn

Background

In the UK, approximately 4,200 men who have sex with men (MSM) are living with HIV but remain undiagnosed. Maximising the number of high-risk people testing for HIV is key to ensuring prompt treatment and preventing onward infection. This study assessed how different HIV test characteristics affect the choice of testing option, including remote testing (HIV self-testing or HIV self-sampling), in the UK, a country with universal access to healthcare.

Methods and findings

Between 3 April and 11 May 2017, a cross-sectional online-questionnaire-based discrete choice experiment (DCE) was conducted in which respondents who expressed an interest in online material used by MSM were asked to imagine that they were at risk of HIV infection and to choose between different hypothetical HIV testing options, including the option not to test. A variety of testing options with different defining characteristics were described so that the independent preference for each characteristic could be valued. The characteristics included where each test is taken, the sampling method, how the test is obtained, whether infections other than HIV are tested for, test accuracy, the cost of the test, the infection window period, and how long it takes to receive the test result. Participants were recruited and completed the instrument online, in order to include those not currently engaged with healthcare services. The main analysis was conducted using a latent class model (LCM), with results displayed as odds ratios (ORs) and probabilities. The ORs indicate the strength of preference for one characteristic relative to another (base) characteristic. In total, 620 respondents answered the DCE questions. Most respondents reported that they were white (93%) and either gay or bisexual (99%). The LCM showed that there were 2 classes within the respondent sample that appeared to have different preferences for the testing options. The first group, which was likely to contain 86% of respondents, had a strong preference for face-to-face tests by healthcare professionals (HCPs) compared to remote testing (OR 6.4; 95% CI 5.6, 7.4) and viewed not testing as less preferable than remote testing (OR 0.10; 95% CI 0.09, 0.11). In the second group, which was likely to include 14% of participants, not testing was viewed as less desirable than remote testing (OR 0.56; 95% CI 0.53, 0.59), as were tests by HCPs compared to remote testing (OR 0.23; 95% CI 0.15, 0.36). In both classes, making remote tests free, rather than charging £30 per test, was the characteristic with the largest impact on the choice of testing option. Participants in the second group were more likely to have never previously tested and to be non-white than participants in the first group. The main study limitations were that the sample was recruited solely via social media, the study advert was viewed only by people expressing an interest in online material used by MSM, and the choices in the experiment were hypothetical rather than observed in the real world.
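
In discrete choice models of this kind, the reported ORs are exponentiated logit coefficients. The toy sketch below works backward from the reported OR of 6.4 to a binary-logit choice probability; it is a didactic illustration, not the study's latent class estimation.

    import math

    # beta is implied by the published OR of 6.4 (our reconstruction).
    beta = math.log(6.4)

    # Binary-logit probability of choosing an HCP-administered test over an
    # otherwise-identical remote test:
    p_hcp = math.exp(beta) / (1 + math.exp(beta))
    print(f"OR = {math.exp(beta):.1f}, P(choose HCP test) = {p_hcp:.2f}")  # ~0.86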

Conclusions

Our results suggest that preferences in the context we examined are broadly dichotomous. One group, containing the majority of MSM, appears comfortable testing for HIV but prefers face-to-face testing by HCPs rather than remote testing. The other group is much smaller, but contains MSM who are more likely to be at high infection risk. For these people, the availability of remote testing has the potential to significantly increase net testing rates, particularly if provided for free.

Octreotide-LAR in later-stage autosomal dominant polycystic kidney disease (ALADIN 2): A randomized, double-blind, placebo-controlled, multicenter trial

Fri, 05/04/2019 - 23:00

by Norberto Perico, Piero Ruggenenti, Annalisa Perna, Anna Caroli, Matias Trillini, Sandro Sironi, Antonio Pisani, Eleonora Riccio, Massimo Imbriaco, Mauro Dugo, Giovanni Morana, Antonio Granata, Michele Figuera, Flavio Gaspari, Fabiola Carrara, Nadia Rubis, Alessandro Villa, Sara Gamba, Silvia Prandini, Monica Cortinovis, Andrea Remuzzi, Giuseppe Remuzzi, for the ALADIN 2 Study Group

Background

Autosomal dominant polycystic kidney disease (ADPKD) is the most frequent genetically determined renal disease. In affected patients, renal function may progressively decline up to end-stage renal disease (ESRD), and approximately 10% of those with ESRD are affected by ADPKD. The somatostatin analog octreotide long-acting release (octreotide-LAR) slows renal function deterioration in patients in early stages of the disease. We evaluated the renoprotective effect of octreotide-LAR in ADPKD patients at high risk of ESRD because of later-stage ADPKD.

Methods and findings

We did an internally funded, parallel-group, double-blind, placebo-controlled phase III trial to assess octreotide-LAR in adults with ADPKD with glomerular filtration rate (GFR) 15–40 ml/min/1.73 m2. Participants were randomized to receive 2 intramuscular injections of 20 mg octreotide-LAR (n = 51) or 0.9% sodium chloride solution (placebo; n = 49) every 28 days for 3 years. Central randomization was 1:1 using a computerized list stratified by center and presence or absence of diabetes or proteinuria. Co-primary short- and long-term outcomes were 1-year total kidney volume (TKV) (computed tomography scan) growth and 3-year GFR (iohexol plasma clearance) decline. Analyses were by modified intention-to-treat. Patients were recruited from 4 Italian nephrology units between October 11, 2011, and March 20, 2014, and followed up to April 14, 2017. Baseline characteristics were similar between groups. Compared to placebo, octreotide-LAR reduced median (95% CI) TKV growth from baseline by 96.8 (10.8 to 182.7) ml at 1 year (p = 0.027) and 422.6 (150.3 to 695.0) ml at 3 years (p = 0.002). Reduction in the median (95% CI) rate of GFR decline (0.56 [−0.63 to 1.75] ml/min/1.73 m2 per year) was not significant (p = 0.295). TKV analyses were adjusted for age, sex, and baseline TKV. Over a median (IQR) 36 (24 to 37) months of follow-up, 9 patients on octreotide-LAR and 21 patients on placebo progressed to a doubling of serum creatinine or ESRD (composite endpoint) (hazard ratio [HR] [95% CI] adjusted for age, sex, baseline serum creatinine, and baseline TKV: 0.307 [0.127 to 0.742], p = 0.009). One composite endpoint was prevented for every 4 treated patients. Among 63 patients with chronic kidney disease (CKD) stage 4, 3 on octreotide-LAR and 8 on placebo progressed to ESRD (adjusted HR [95% CI]: 0.121 [0.017 to 0.866], p = 0.036). Three patients on placebo had a serious renal cyst rupture/infection and 1 patient had a serious urinary tract infection/obstruction, versus 1 patient on octreotide-LAR with a serious renal cyst infection. The main study limitation was the small sample size.
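
An adjusted time-to-event analysis of the kind reported above can be sketched with the lifelines library on simulated data; everything below (the data-generating hazards, the protective HR of about 0.3, and the column names) is hypothetical, not the trial's analysis.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 100
    treat = rng.integers(0, 2, n)
    age = rng.normal(45, 10, n)

    # Exponential event times with a protective treatment effect (true HR ~0.3).
    hazard = 0.02 * np.exp(np.log(0.3) * treat + 0.02 * (age - 45))
    t_event = rng.exponential(1 / hazard)
    df = pd.DataFrame({
        "months": np.minimum(t_event, 36.0),     # administrative censoring at 3 years
        "event": (t_event <= 36.0).astype(int),  # composite endpoint reached
        "octreotide": treat,
        "age": age,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="event")
    print(cph.hazard_ratios_["octreotide"])  # should recover an HR near 0.3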

Conclusions

In this study we observed that in later-stage ADPKD, octreotide-LAR slowed kidney growth and delayed progression to ESRD, in particular in CKD stage 4.

Trial registration

ClinicalTrials.gov NCT01377246; EudraCT: 2011-000138-12.

Risk score for predicting mortality including urine lipoarabinomannan detection in hospital inpatients with HIV-associated tuberculosis in sub-Saharan Africa: Derivation and external validation cohort study

Fri, 05/04/2019 - 23:00

by Ankur Gupta-Wright, Elizabeth L. Corbett, Douglas Wilson, Joep J. van Oosterhout, Keertan Dheda, Helena Huerga, Jonny Peter, Maryline Bonnet, Melanie Alufandika-Moyo, Daniel Grint, Stephen D. Lawn, Katherine Fielding

Background

The prevalence of and mortality from HIV-associated tuberculosis (HIV/TB) in hospital inpatients in Africa remains unacceptably high. Currently, there is a lack of tools to identify those at high risk of early mortality who may benefit from adjunctive interventions. We therefore aimed to develop and validate a simple clinical risk score to predict mortality in high-burden, low-resource settings.

Methods and findings

A cohort of HIV-positive adults with laboratory-confirmed TB from the STAMP TB screening trial (Malawi and South Africa) was used to derive a clinical risk score using multivariable predictive modelling, considering factors at hospital admission (including urine lipoarabinomannan [LAM] detection) thought to be associated with 2-month mortality. Performance was evaluated internally and then externally validated using independent cohorts from 2 other studies (LAM-RCT and a Médecins Sans Frontières [MSF] cohort) from South Africa, Zambia, Zimbabwe, Tanzania, and Kenya. The derivation cohort included 315 patients enrolled between October 2015 and September 2017. Their median age was 36 years (IQR 30–43), 45.4% were female, median CD4 cell count at admission was 76 cells/μl (IQR 23–206), and 80.2% (210/262) of those who knew they were HIV-positive at hospital admission were taking antiretroviral therapy (ART). Two-month mortality was 30% (94/315), and mortality was associated with the following factors included in the score: age 55 years or older, male sex, being ART experienced, having severe anaemia (haemoglobin < 80 g/l), being unable to walk unaided, and having a positive urinary Determine TB LAM Ag test (Alere). The score identified patients with a 46.4% (95% CI 37.8%–55.2%) mortality risk in the high-risk group compared to 12.5% (95% CI 5.7%–25.4%) in the low-risk group (p < 0.001). The odds ratio (OR) for mortality was 6.1 (95% CI 2.4–15.2) in high-risk patients compared to low-risk patients (p < 0.001). Discrimination (c-statistic 0.70, 95% CI 0.63–0.76) and calibration (Hosmer-Lemeshow statistic, p = 0.78) were good in the derivation cohort, and similar in the external validation cohort (complete cases n = 372, c-statistic 0.68 [95% CI 0.61–0.74]). The validation cohort included 644 patients enrolled between January 2013 and August 2015. Median age was 36 years, 48.9% were female, and median CD4 count at admission was 61 cells/μl (IQR 21–145). The OR for mortality was 5.3 (95% CI 2.2–9.5) for high-risk compared to low-risk patients (complete cases n = 372, p < 0.001). The score also identified patients at higher risk of death both before and after discharge. A simplified score (any 3 or more of the predictors) performed equally well. The main limitations of the scores were their imperfect accuracy, the need for access to urine LAM testing, the modest study size, and not measuring all potential predictors of mortality (e.g., tuberculosis drug resistance).
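
The simplified rule reported above (high risk if any 3 or more of the 6 predictors are present) is straightforward to express in code; a minimal sketch, with a function name of our choosing:

    def simplified_mortality_score(age_ge_55, male, art_experienced,
                                   severe_anaemia, unable_to_walk, lam_positive):
        """Count the six predictors named in the abstract; the simplified
        rule classifies a patient as high risk if 3 or more are present."""
        n = sum([age_ge_55, male, art_experienced,
                 severe_anaemia, unable_to_walk, lam_positive])
        return n, ("high risk" if n >= 3 else "low risk")

    print(simplified_mortality_score(False, True, True, True, False, True))
    # -> (4, 'high risk')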

Conclusions

This risk score is capable of identifying patients who could benefit from enhanced clinical care, follow-up, and/or adjunctive interventions, although further prospective validation studies are necessary. Given the scale of HIV/TB morbidity and mortality in African hospitals, better prognostic tools along with interventions could contribute towards global targets to reduce tuberculosis mortality.

Tuberculosis drugs’ distribution and emergence of resistance in patients’ lung lesions: A mechanistic model and tool for regimen and dose optimization

Tue, 02/04/2019 - 23:00

by Natasha Strydom, Sneha V. Gupta, William S. Fox, Laura E. Via, Hyeeun Bang, Myungsun Lee, Seokyong Eum, TaeSun Shim, Clifton E. Barry III, Matthew Zimmerman, Véronique Dartois, Radojka M. Savic

Background

The sites of mycobacterial infection in the lungs of tuberculosis (TB) patients have complex structures and poor vascularization, which obstructs drug distribution to these hard-to-reach and hard-to-treat disease sites. The resulting suboptimal drug concentrations compromise TB treatment response and promote the development of resistance. Quantifying lesion-specific drug uptake and pharmacokinetics (PKs) in TB patients is necessary to optimize treatment regimens at all infection sites, to identify patients at risk, to improve existing regimens, and to advance the development of novel regimens. Using drug-level data in plasma and in 9 distinct pulmonary lesion types (vascular, avascular, and mixed) obtained from 15 hard-to-treat TB patients who had failed TB treatment and therefore underwent lung resection surgery, we quantified the distribution and penetration of 7 major TB drugs at these sites, and we provide novel tools for treatment optimization.

Methods and findings

A total of 329 plasma and 1,362 tissue-specific drug concentrations from 9 distinct lung lesion types were obtained according to an optimal PK sampling schema from 15 patients (10 men, 5 women, aged 23 to 58 years) undergoing lung resection surgery (clinical study NCT00816426, performed in South Korea between 9 June 2010 and 24 June 2014). Seven major TB drugs (rifampin [RIF], isoniazid [INH], linezolid [LZD], moxifloxacin [MFX], clofazimine [CFZ], pyrazinamide [PZA], and kanamycin [KAN]) were quantified. We developed and evaluated a site-of-action mechanistic PK model using nonlinear mixed effects methodology. We quantified population- and patient-specific lesion/plasma ratios (RPLs), dynamics, and variability of drug uptake into each lesion for each drug. CFZ and MFX had higher drug exposures in lesions compared to plasma (median RPL 2.37, range across lesions 1.26–22.03); RIF, PZA, and LZD showed moderate yet suboptimal lesion penetration (median RPL 0.61, range 0.21–2.4), while INH and KAN showed poor tissue penetration (median RPL 0.4, range 0.03–0.73). Stochastic PK/pharmacodynamic (PD) simulations were carried out to evaluate current regimen combinations and dosing guidelines in distinct patient strata. Patients receiving standard doses of RIF and INH who fall in the lower range of the exposure distribution spent substantial periods (>12 h/d) below effective concentrations in hard-to-treat lesions, such as caseous lesions and cavities. Standard doses of INH (300 mg) and KAN (1,000 mg) did not reach therapeutic thresholds in most lesions for a majority of the population. Drugs and doses that did reach target exposure in most subjects include 400 mg MFX and 100 mg CFZ. Patients with cavitary lesions, irrespective of drug choice, have an increased likelihood of subtherapeutic concentrations, leading to a higher risk of resistance acquisition while on treatment. A limitation of this study was the small sample size of 15 patients, drawn from a unique study population of TB patients who had failed treatment and underwent lung resection surgery. These results still need further exploration and validation in larger and more diverse cohorts.
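
The central quantity in this analysis, the lesion/plasma ratio (RPL), is a ratio of matched drug concentrations. The sketch below computes per-drug median RPLs on made-up concentration values (the real model estimates these within a nonlinear mixed effects framework):

    import numpy as np

    # Hypothetical matched concentrations (ug/mL); not the study's data.
    lesion_conc = {"CFZ": [4.1, 9.8, 2.6], "INH": [0.8, 0.3, 1.1]}
    plasma_conc = {"CFZ": 1.2, "INH": 2.5}

    for drug, lesions in lesion_conc.items():
        rpl = np.array(lesions) / plasma_conc[drug]
        print(drug, "median RPL:", round(float(np.median(rpl)), 2))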

Conclusions

Our results suggest that the ability to reach and maintain therapeutic concentrations is both lesion and drug specific, indicating that stratifying patients based on disease extent, lesion types, and individual drug-susceptibility profiles may eventually be useful for guiding the selection of patient-tailored drug regimens and may lead to improved TB treatment outcomes. We provide a web-based tool to further explore this model and results at http://saviclab.org/tb-lesion/.

Incidence of eclampsia and related complications across 10 low- and middle-resource geographical regions: Secondary analysis of a cluster randomised controlled trial

Fri, 29/03/2019 - 23:00

by Nicola Vousden, Elodie Lawley, Paul T. Seed, Muchabayiwa Francis Gidiri, Shivaprasad Goudar, Jane Sandall, Lucy C. Chappell, Andrew H. Shennan, on behalf of the CRADLE Trial Collaborative Group

Background

In 2015, approximately 42,000 women died as a result of hypertensive disorders of pregnancy worldwide; over 99% of these deaths occurred in low- and middle-income countries. The aim of this paper is to describe the incidence and characteristics of eclampsia and related complications from hypertensive disorders of pregnancy across 10 low- and middle-income geographical regions in 8 countries, in relation to magnesium sulfate availability.

Methods and findings

This is a secondary analysis of a stepped-wedge cluster randomised controlled trial undertaken in sub-Saharan Africa, India, and Haiti. This trial implemented a novel vital sign device and training package in routine maternity care with the aim of reducing a composite outcome of maternal mortality and morbidity. Institutional-level consent was obtained, and all women presenting for maternity care were eligible for inclusion. Data on eclampsia, stroke, admission to intensive care with a hypertensive disorder of pregnancy, and maternal death from a hypertensive disorder of pregnancy were prospectively collected from routine data sources and active case finding, together with data on perinatal outcomes in women with these outcomes. In 536,233 deliveries between 1 April 2016 and 30 November 2017, there were 2,692 women with eclampsia (0.5%). In total 6.9% (n = 186; 3.47/10,000 deliveries) of women with eclampsia died, and a further 51 died from other complications of hypertensive disorders of pregnancy (0.95/10,000). After planned adjustments, the implementation of the CRADLE intervention was not associated with any significant change in the rates of eclampsia, stroke, or maternal death or intensive care admission with a hypertensive disorder of pregnancy. Nearly 1 in 5 (17.9%) women with eclampsia, stroke, or a hypertensive disorder of pregnancy causing intensive care admission or maternal death experienced a stillbirth or neonatal death. A third of eclampsia cases (33.2%; n = 894) occurred in women under 20 years of age, 60.0% in women aged 20–34 years (n = 1,616), and 6.8% (n = 182) in women aged 35 years or over. Rates of eclampsia varied approximately 7-fold between sites (range 19.6/10,000 in Zambia Centre 1 to 142.0/10,000 in Sierra Leone). Over half (55.1%) of first eclamptic fits occurred in a health-care facility, with the remainder in the community. Place of first fit varied substantially between sites (from 5.9% in the central referral facility in Sierra Leone to 85% in Uganda Centre 2). On average, magnesium sulfate was available in 74.7% of facilities (range 25% in Haiti to 100% in Sierra Leone and Zimbabwe). There was no detectable association between magnesium sulfate availability and the rate of eclampsia across sites (p = 0.12). This analysis may have been influenced by the selection of predominantly urban and peri-urban settings, and by collection of only monthly data on availability of magnesium sulfate, and is limited by the lack of demographic data in the population of women delivering in the trial areas.
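
The headline rates can be re-derived directly from the counts reported above; a quick sketch:

    # Counts reported in the abstract.
    deliveries = 536_233
    eclampsia_cases = 2_692
    eclampsia_deaths = 186

    print(round(100 * eclampsia_cases / deliveries, 1))      # 0.5 (% of deliveries)
    print(round(10_000 * eclampsia_deaths / deliveries, 2))  # 3.47 per 10,000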

Conclusions

The large variation in eclampsia and maternal and neonatal fatality from hypertensive disorders of pregnancy between countries emphasises that inequality and inequity persist in healthcare for women with hypertensive disorders of pregnancy. Alongside the growing interest in improving community detection and health education for these disorders, efforts to improve quality of care within healthcare facilities are key. Strategies to prevent eclampsia should be informed by local data.

Trial registration

ISRCTN: 41244132.

A whole-health–economy approach to antimicrobial stewardship: Analysis of current models and future direction

Fri, 29/03/2019 - 23:00

by Monsey McLeod, Raheelah Ahmad, Nada Atef Shebl, Christianne Micallef, Fiona Sim, Alison Holmes

In a Policy Forum, Alison Holmes and colleagues discuss coordinated approaches to antimicrobial stewardship.

Community health workers to improve uptake of maternal healthcare services: A cluster-randomized pragmatic trial in Dar es Salaam, Tanzania

Fri, 29/03/2019 - 23:00

by Pascal Geldsetzer, Eric Mboggo, Elysia Larson, Irene Andrew Lema, Lucy Magesa, Lameck Machumi, Nzovu Ulenga, David Sando, Mary Mwanyika-Sando, Donna Spiegelman, Ester Mungure, Nan Li, Hellen Siril, Phares Mujinja, Helga Naburi, Guerino Chalamilla, Charles Kilewo, Anna Mia Ekström, Dawn Foster, Wafaie Fawzi, Till Bärnighausen

Background

Home delivery and late and infrequent attendance at antenatal care (ANC) are responsible for substantial avoidable maternal and pediatric morbidity and mortality in sub-Saharan Africa. This cluster-randomized trial aimed to determine the impact of a community health worker (CHW) intervention on the proportion of women who (i) visit ANC fewer than 4 times during their pregnancy and (ii) deliver at home.

Methods and findings

As part of a 2-by-2 factorial design, we conducted a cluster-randomized trial of a home-based CHW intervention in 2 of 3 districts of Dar es Salaam from 18 June 2012 to 15 January 2014. Thirty-six wards (geographical areas) in the 2 districts were randomized to the CHW intervention, and 24 wards to the standard of care. In the standard-of-care arm, CHWs visited women enrolled in prevention of mother-to-child HIV transmission (PMTCT) care and provided information and counseling. The intervention arm included additional CHW supervision and the following additional CHW tasks, which were targeted at all pregnant women regardless of HIV status: (i) conducting home visits to identify pregnant women and refer them to ANC, (ii) counseling pregnant women on maternal health, and (iii) providing home visits to women who missed an ANC or PMTCT appointment. The primary endpoints of this trial were the proportion of pregnant women (i) not making at least 4 ANC visits and (ii) delivering at home. The outcomes were assessed through a population-based household survey at the end of the trial period. We did not collect data on adverse events. A random sample of 2,329 pregnant women and new mothers living in the study area were interviewed during home visits. At the time of the survey, the mean age of participants was 27.3 years, and 34.5% (804/2,329) were pregnant. The proportion of women who reported having attended fewer than 4 ANC visits did not differ significantly between the intervention and standard-of-care arms (59.1% versus 60.7%, respectively; risk ratio [RR]: 0.97; 95% CI: 0.82–1.15; p = 0.754). Similarly, the proportion reporting that they had attended ANC in the first trimester did not differ significantly between study arms. However, women in intervention wards were significantly less likely to report having delivered at home (3.9% versus 7.3%; RR: 0.54; 95% CI: 0.30–0.95; p = 0.034). Mixed-methods analyses of additional data collected as part of this trial suggest that an important reason for the lack of effect on ANC outcomes was the perceived high economic burden and inconvenience of attending ANC. The main limitations of this trial were that (i) the outcomes were ascertained through self-report, (ii) the study was stopped 4 months early due to a change in the standard of care in the other trial that was part of the 2-by-2 factorial design, and (iii) the sample size of the household survey was not prespecified.
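
The home-delivery effect is reported as a risk ratio with a 95% CI. The sketch below implements the standard log-scale Wald interval for an RR, using illustrative counts chosen to match the reported proportions (3.9% versus 7.3%); the trial's published CI is wider because it accounts for the cluster randomization.

    import math

    def risk_ratio_ci(a, n1, b, n2, z=1.96):
        """RR for events a/n1 versus b/n2 with a Wald CI on the log scale."""
        rr = (a / n1) / (b / n2)
        se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
        return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

    # Illustrative counts, not the trial's raw data.
    print(risk_ratio_ci(39, 1000, 73, 1000))  # ~ (0.53, 0.37, 0.78)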

Conclusions

A home-based CHW intervention in urban Tanzania significantly reduced the proportion of women who reported having delivered at home, in an area that already has very high uptake of facility-based delivery. The intervention did not affect self-reported ANC attendance. Policy makers should consider piloting, evaluating, and scaling interventions to lessen the economic burden and inconvenience of ANC.

Trial registration

ClinicalTrials.gov NCT01932138

Measles vaccination: A matter of confidence and commitment

Tue, 26/03/2019 - 23:00

by Richard Turner, on behalf of the PLOS Medicine Editors

The PLOS Medicine Editors discuss issues of vaccination uptake in the context of recent and ongoing measles outbreaks.

Advances in clinical trial design for development of new TB treatments: A call for innovation

Fri, 22/03/2019 - 23:00

by Christian Lienhardt, Payam Nahid

Christian Lienhardt and Payam Nahid launch the Collection on Advances in Clinical Trial Design for Development of New Tuberculosis Treatments.

Keeping phase III tuberculosis trials relevant: Adapting to a rapidly changing landscape

Fri, 22/03/2019 - 23:00

by Patrick P. J. Phillips, Carole D. Mitnick, James D. Neaton, Payam Nahid, Christian Lienhardt, Andrew J. Nunn

In a Collection Review, Patrick Phillips and colleagues discuss developments in clinical trial design for the evaluation of TB therapeutics.

Independent and combined effects of improved water, sanitation, and hygiene (WASH) and improved complementary feeding on early neurodevelopment among children born to HIV-negative mothers in rural Zimbabwe: Substudy of a cluster-randomized trial

Thu, 21/03/2019 - 23:00

by Melissa J. Gladstone, Jaya Chandna, Gwendoline Kandawasvika, Robert Ntozini, Florence D. Majo, Naume V. Tavengwa, Mduduzi N. N. Mbuya, Goldberg T. Mangwadu, Ancikaria Chigumira, Cynthia M. Chasokela, Lawrence H. Moulton, Rebecca J. Stoltzfus, Jean H. Humphrey, Andrew J. Prendergast, for the SHINE Trial Team

Background

Globally, nearly 250 million children (43% of all children under 5 years of age) are at risk of compromised neurodevelopment due to poverty, stunting, and lack of stimulation. We tested the independent and combined effects of improved water, sanitation, and hygiene (WASH) and improved infant and young child feeding (IYCF) on early child development (ECD) among children enrolled in the Sanitation Hygiene Infant Nutrition Efficacy (SHINE) trial in rural Zimbabwe.

Methods and findings

SHINE was a cluster-randomized community-based 2×2 factorial trial. A total of 5,280 pregnant women were enrolled from 211 clusters (defined as the catchment area of 1–4 village health workers [VHWs] employed by the Zimbabwean Ministry of Health and Child Care). Clusters were randomly allocated to standard of care, IYCF (20 g of small-quantity lipid-based nutrient supplement per day from age 6 to 18 months plus complementary feeding counseling), WASH (ventilated improved pit latrine, handwashing stations, chlorine, liquid soap, and play yard), or WASH + IYCF. Primary outcomes were child length-for-age Z-score and hemoglobin concentration at 18 months of age. Children who completed the 18-month visit and turned 2 years (102–112 weeks) between March 1, 2016, and April 30, 2017, were eligible for the ECD substudy. We prespecified that primary inferences would be drawn from findings of children born to HIV-negative mothers; these results are presented in this paper. A total of 1,655 HIV-unexposed children (64% of those eligible) were recruited into the ECD substudy from 206 clusters and evaluated for ECD at 2 years of age using the Malawi Developmental Assessment Tool (MDAT) to assess gross motor, fine motor, language, and social skills; the MacArthur–Bates Communicative Development Inventories (CDI) to assess vocabulary and grammar; the A-not-B test to assess object permanence; and a self-control task. Outcomes were analyzed in the intention-to-treat population. For all ECD outcomes, there was no statistically significant interaction between the IYCF and WASH interventions, so we estimated the effects of the interventions by comparing the 2 IYCF groups with the 2 non-IYCF groups and the 2 WASH groups with the 2 non-WASH groups. The total MDAT score was modestly higher in the IYCF groups than in the non-IYCF groups in unadjusted analysis (mean difference 1.35 [95% CI 0.24, 2.46]; p = 0.017); this difference did not persist in adjusted analysis (0.79 [−0.22, 1.68]; p = 0.057). There was no evidence of impact of the IYCF intervention on the CDI, A-not-B, or self-control tests. Among children in the WASH groups compared to those in the non-WASH groups, mean scores were not different for the MDAT, A-not-B, or self-control tests; the mean CDI score was not different in unadjusted analysis (0.99 [95% CI −1.18, 3.17]) but was higher in the WASH groups in adjusted analysis (1.81 [0.01, 3.61]). The main limitation of the study was the specific time window for substudy recruitment, meaning that not all children from the main trial were enrolled.

Conclusions

We found little evidence that the IYCF and WASH interventions implemented in SHINE caused clinically important improvements in child development at 2 years of age. Interventions that directly target neurodevelopment (e.g., early stimulation) or that more comprehensively address the multifactorial nature of neurodevelopment may be required to support healthy development of vulnerable children.

Trial registration

ClinicalTrials.gov NCT01824940

Cost-effectiveness of financial incentives for improving diet and health through Medicare and Medicaid: A microsimulation study

Tue, 19/03/2019 - 23:00

by Yujin Lee, Dariush Mozaffarian, Stephen Sy, Yue Huang, Junxiu Liu, Parke E. Wilde, Shafika Abrahams-Gessel, Thiago de Souza Veiga Jardim, Thomas A. Gaziano, Renata Micha

Background

Economic incentives through health insurance may promote healthier behaviors. Little is known about health and economic impacts of incentivizing diet, a leading risk factor for diabetes and cardiovascular disease (CVD), through Medicare and Medicaid.

Methods and findings

A validated microsimulation model (CVD-PREDICT) estimated CVD and diabetes cases prevented, quality-adjusted life years (QALYs), health-related costs (formal healthcare, informal healthcare, and lost-productivity costs), and incremental cost-effectiveness ratios (ICERs) of two policy scenarios for adults within Medicare and Medicaid, compared to a base case of no new intervention: (1) a 30% subsidy on fruits and vegetables (“F&V incentive”) and (2) a 30% subsidy on broader healthful foods including F&V, whole grains, nuts/seeds, seafood, and plant oils (“healthy food incentive”). Inputs included national demographic and dietary data from the National Health and Nutrition Examination Survey (NHANES) 2009–2014, policy effects and diet-disease effects from meta-analyses, and policy and health-related costs from established sources. Overall, 82 million adults (35–80 years old) were on Medicare and/or Medicaid. The mean (SD) age was 68.1 (11.4) years, 56.2% were female, and 25.5% were non-white. Health and cost impacts were simulated over the lifetime of current Medicare and Medicaid participants (average simulated duration = 18.3 years). The F&V incentive was estimated to prevent 1.93 million CVD events, gain 4.64 million QALYs, and save $39.7 billion in formal healthcare costs. For the healthy food incentive, the corresponding gains were 3.28 million CVD and 0.12 million diabetes cases prevented, 8.40 million QALYs gained, and $100.2 billion in formal healthcare costs saved. From a healthcare perspective, both scenarios were cost-effective at 5 years and beyond, with lifetime ICERs of $18,184/QALY (F&V incentive) and $13,194/QALY (healthy food incentive). From a societal perspective including informal healthcare costs and lost productivity, the respective ICERs were $14,576/QALY and $9,497/QALY. Results were robust in probabilistic sensitivity analyses and a range of one-way sensitivity and subgroup analyses, including by different durations of the intervention (5, 10, and 20 years and lifetime), food subsidy levels (20%, 50%), insurance groups (Medicare, Medicaid, and dual-eligible), and beneficiary characteristics within each insurance group (age, race/ethnicity, education, income, and Supplemental Nutrition Assistance Program [SNAP] status). Simulation studies such as this one provide quantitative estimates of benefits and uncertainty but cannot directly prove health and economic impacts.
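
The ICERs reported above are incremental costs divided by incremental QALYs relative to the base case. A minimal sketch, where the incremental cost is back-calculated from the reported lifetime ICER and QALY gain purely for illustration:

    # ICER = incremental cost / incremental QALYs versus the base case.
    def icer(delta_cost, delta_qalys):
        return delta_cost / delta_qalys

    delta_qalys = 4.64e6               # QALYs gained (F&V incentive, reported)
    delta_cost = 18_184 * delta_qalys  # back-calculated from the reported ICER
    print(f"${icer(delta_cost, delta_qalys):,.0f}/QALY")  # -> $18,184/QALY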

Conclusions

Economic incentives for healthier foods through Medicare and Medicaid could generate substantial health gains and be highly cost-effective.

Correction: The effect of a programme to improve men's sedentary time and physical activity: The European Fans in Training (EuroFIT) randomised controlled trial

Thu, 14/03/2019 - 23:00

by Sally Wyke, Christopher Bunn, Eivind Andersen, Marlene N. Silva, Femke van Nassau, Paula McSkimming, Spyros Kolovos, Jason M. R. Gill, Cindy M. Gray, Kate Hunt, Annie S. Anderson, Judith Bosmans, Judith G. M. Jelsma, Sharon Kean, Nicolas Lemyre, David W. Loudon, Lisa Macaulay, Douglas J. Maxwell, Alex McConnachie, Nanette Mutrie, Maria Nijhuis-van der Sanden, Hugo V. Pereira, Matthew Philpott, Glyn C. Roberts, John Rooksby, Øystein B. Røynesdal, Naveed Sattar, Marit Sørensen, Pedro J. Teixeira, Shaun Treweek, Theo van Achterberg, Irene van de Glind, Willem van Mechelen, Hidde P. van der Ploeg

Comparative effectiveness of generic and brand-name medication use: A database study of US health insurance claims

Wed, 13/03/2019 - 23:00

by Rishi J. Desai, Ameet Sarpatwari, Sara Dejene, Nazleen F. Khan, Joyce Lii, James R. Rogers, Sarah K. Dutcher, Saeid Raofi, Justin Bohn, John G. Connolly, Michael A. Fischer, Aaron S. Kesselheim, Joshua J. Gagne

Background

To the extent that outcomes are mediated through negative perceptions of generics (the nocebo effect), observational studies comparing brand-name and generic drugs are susceptible to bias favoring the brand-name drugs. We used authorized generic (AG) products, which are identical in composition and appearance to brand-name products but are marketed as generics, as a control group to address this bias in an evaluation comparing the effectiveness of generic versus brand-name medications.

Methods and findings

For commercial health insurance enrollees from the US, administrative claims data were derived from 2 databases: (1) Optum Clinformatics Data Mart (years: 2004–2013) and (2) Truven MarketScan (years: 2003–2015). For a total of 8 drug products, the following groups were compared using a cohort study design: (1) patients switching from brand-name products to AGs versus generics, and patients initiating treatment with AGs versus generics, where AG use proxied brand-name use, addressing negative perception bias, and (2) patients initiating generic versus brand-name products (bias-prone direct comparison) and patients initiating AG versus brand-name products (negative control). Using Cox proportional hazards regression after 1:1 propensity-score matching, we compared a composite cardiovascular endpoint (for amlodipine, amlodipine-benazepril, and quinapril), non-vertebral fracture (for alendronate and calcitonin), psychiatric hospitalization rate (for sertraline and escitalopram), and insulin initiation (for glipizide) between the groups. Inverse variance meta-analytic methods were used to pool adjusted hazard ratios (HRs) for each comparison across the 2 databases. Across 8 products, 2,264,774 matched pairs of patients were included in the comparisons of AGs versus generics. A majority (12 out of 16) of the clinical endpoint estimates showed similar outcomes between AGs and generics. Among the other 4 estimates that did show significantly different outcomes, 3 suggested improved outcomes with generics and 1 favored AGs (patients switching from brand-name amlodipine: HR [95% CI] 0.92 [0.88–0.97]). The comparison between generic and brand-name initiators involved 1,313,161 matched pairs, and no differences in outcomes were noted for alendronate, calcitonin, glipizide, or quinapril. We observed a lower risk of the composite cardiovascular endpoint with generics versus brand-name products for amlodipine and amlodipine-benazepril (HR [95% CI]: 0.91 [0.84–0.99] and 0.84 [0.76–0.94], respectively). For escitalopram and sertraline, we observed higher rates of psychiatric hospitalizations with generics (HR [95% CI]: 1.05 [1.01–1.10] and 1.07 [1.01–1.14], respectively). The negative control comparisons also indicated potentially higher rates of similar magnitude with AG compared to brand-name initiation for escitalopram and sertraline (HR [95% CI]: 1.06 [0.98–1.13] and 1.11 [1.05–1.18], respectively), suggesting that the differences observed between brand and generic users in these outcomes are likely explained by either residual confounding or generic perception bias. Limitations of this study include potential residual confounding due to the unavailability of certain clinical parameters in administrative claims data and the inability to evaluate surrogate outcomes, such as immediate changes in blood pressure, upon switching from brand products to generics.
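
The pooling step described above (inverse-variance meta-analysis of adjusted HRs across the 2 databases) can be sketched as follows; the two input HRs and CIs are hypothetical, not estimates from the study.

    import math

    def pool_hazard_ratios(hrs, ci_los, ci_his, z=1.96):
        """Fixed-effect inverse-variance pooling of log hazard ratios,
        recovering each estimate's SE from its 95% CI."""
        log_hrs = [math.log(h) for h in hrs]
        ses = [(math.log(hi) - math.log(lo)) / (2 * z)
               for lo, hi in zip(ci_los, ci_his)]
        weights = [1 / se**2 for se in ses]
        pooled = sum(w * lh for w, lh in zip(weights, log_hrs)) / sum(weights)
        se_pooled = math.sqrt(1 / sum(weights))
        return (math.exp(pooled),
                math.exp(pooled - z * se_pooled),
                math.exp(pooled + z * se_pooled))

    # One hypothetical HR (95% CI) per database:
    print(pool_hazard_ratios([0.89, 0.93], [0.80, 0.85], [0.99, 1.02]))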

Conclusions

In this study, we observed that use of generics was associated with comparable clinical outcomes to use of brand-name products. These results could help in promoting educational interventions aimed at increasing patient and provider confidence in the ability of generic medicines to manage chronic diseases.

Seasonal malaria chemoprevention combined with community case management of malaria in children under 10 years of age, over 5 months, in south-east Senegal: A cluster-randomised trial

Wed, 13/03/2019 - 23:00

by Jean Louis A. Ndiaye, Youssoupha Ndiaye, Mamadou S. Ba, Babacar Faye, Maguette Ndiaye, Amadou Seck, Roger Tine, Pape Moussa Thior, Sharanjeet Atwal, Khalid Beshir, Colin Sutherland, Oumar Gaye, Paul Milligan

Background

Seasonal malaria chemoprevention (SMC) is recommended in the Sahel region of Africa for children under 5 years of age, for up to 4 months of the year. It may be appropriate to include older children, and to provide protection for more than 4 months. We evaluated the effectiveness of SMC using sulfadoxine-pyrimethamine plus amodiaquine given over 5 months to children under 10 years of age in Saraya district in south-east Senegal in 2011.

Methods and findings

Twenty-four villages, including 2,301 children aged 3–59 months and 2,245 aged 5–9 years, were randomised to receive SMC with community case management (CCM) (SMC villages) or CCM alone (control villages). In all villages, community health workers (CHWs) were trained to treat malaria cases with artemisinin combination therapy after testing with a rapid diagnostic test (RDT). In SMC villages, CHWs administered SMC to children aged 3 months to 9 years once a month for 5 months. The study was conducted from 27 July to 31 December 2011. The primary outcome was malaria (fever or history of fever with a positive RDT). The prevalence of anaemia and parasitaemia was measured in a survey at the end of the transmission season. Molecular markers associated with resistance to SMC drugs were analysed in samples from incident malaria cases and from children with parasitaemia in the survey. SMC was well tolerated, with no serious adverse reactions. There were 1,472 RDT-confirmed malaria cases in the control villages and 270 in the SMC villages. Among children under 5 years of age, the rate difference was 110.8/1,000/month (95% CI 64.7, 156.8; p < 0.001), and among children 5–9 years of age, 101.3/1,000/month (95% CI 66.7, 136.0; p < 0.001). The mean haemoglobin concentration at the end of the transmission season was higher in SMC than control villages, by 6.5 g/l (95% CI 2.0, 11; p = 0.007) among children under 5 years of age, and by 5.2 g/l (95% CI 0.4, 9.9; p = 0.035) among children 5–9 years of age. The prevalence of parasitaemia was 18% in children under 5 years of age and 25% in children 5–9 years of age in the control villages, and 5.7% and 5.8%, respectively, in these 2 age groups in the SMC villages, with prevalence differences of 12.5% (95% CI 6.8%, 18.2%; p < 0.001) in children under 5 years of age and 19.3% (95% CI 8.3%, 30.2%; p < 0.001) in children 5–9 years of age. The pfdhps-540E mutation associated with clinical resistance to sulfadoxine-pyrimethamine was found in 0.8% of samples from malaria cases but not in the final survey. Twelve children died in the control group and 14 in the SMC group, a rate difference of 0.096/1,000 child-months (95% CI −0.99, 1.18; p = 0.895). Limitations of this study include that we were not able to obtain blood smears for microscopy for all suspected malaria cases, so we had to rely on RDTs for confirmation, which may have included false positives.

Conclusions

In this study, SMC for children under 10 years of age given over 5 months was feasible, well tolerated, and effective in preventing malaria episodes, and it reduced the prevalence of parasitaemia and anaemia. SMC with CCM achieved high coverage and ensured that children with malaria were promptly treated with artemether-lumefantrine.

Trial registration

www.clinicaltrials.gov NCT01449045.