PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal

Pharmacokinetics, optimal dosing, and safety of linezolid in children with multidrug-resistant tuberculosis: Combined data from two prospective observational studies

Tue, 30/04/2019 - 23:00

by Anthony J. Garcia-Prats, H. Simon Schaaf, Heather R. Draper, Maria Garcia-Cremades, Jana Winckler, Lubbe Wiesner, Anneke C. Hesseling, Rada M. Savic

Background

Linezolid is increasingly important for multidrug-resistant tuberculosis (MDR-TB) treatment. However, among children with MDR-TB, there are no linezolid pharmacokinetic data, and its adverse effects have not yet been prospectively described. We characterised the pharmacokinetics, safety, and optimal dose of linezolid in children treated for MDR-TB.

Methods and findings

Children routinely treated for MDR-TB in 2 observational studies (2011–2015, 2016–2018) conducted at a single site in Cape Town, South Africa, underwent intensive pharmacokinetic sampling after either a single dose or multiple doses of linezolid (at steady state). Linezolid pharmacokinetic parameters, and their relationships with covariates of interest, were described using nonlinear mixed-effects modelling. Children receiving long-term linezolid as a component of their routine treatment had regular clinical and laboratory monitoring. Adverse events were assessed for severity and attribution to linezolid. The final population pharmacokinetic model was used to derive optimal weight-banded doses resulting in exposures in children approximating those in adults receiving once-daily linezolid 600 mg. Forty-eight children were included (mean age 5.9 years; range 0.6 to 15.3); 31 received a single dose of linezolid, and 17 received multiple doses. The final pharmacokinetic model consisted of a one-compartment model characterised by clearance (CL) and volume (V) parameters that included allometric scaling to account for weight; no other evaluated covariates contributed to the model. Linezolid exposures in this population were higher than exposures in adults who had received a 600 mg once-daily dose. Consequently, simulated weight-banded once-daily optimal doses for children were lower than those currently used for most weight bands. Ten of 17 children who were followed long term had a linezolid-related adverse event, including 5 with a grade 3 or 4 event, all anaemia. Adverse events resulted in linezolid dose reductions in 4, temporary interruptions in 5, and permanent discontinuation in 4 children. Limitations of the study include the lack of very young children (none below 6 months of age), the limited number who were HIV infected, and the modest number of children contributing to long-term safety data.
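
The model described above uses allometric scaling of clearance and volume by body weight. The sketch below is a minimal illustration, not the authors' fitted model: the typical values and 70 kg reference weight are placeholders, and the exponents of 0.75 for clearance and 1.0 for volume are the conventional allometric assumptions rather than study estimates.

```python
def allometric_params(weight_kg, cl_typ=7.0, v_typ=40.0, ref_kg=70.0):
    """Scale clearance (L/h) and volume (L) by body weight.

    cl_typ, v_typ, and ref_kg are placeholder values, not study estimates;
    the exponents 0.75 (CL) and 1.0 (V) are conventional allometric assumptions.
    """
    cl = cl_typ * (weight_kg / ref_kg) ** 0.75
    v = v_typ * (weight_kg / ref_kg) ** 1.0
    return cl, v

# Example: a 20 kg child
cl, v = allometric_params(20.0)
print(f"CL = {cl:.2f} L/h, V = {v:.1f} L, half-life = {0.693 * v / cl:.1f} h")
```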

Conclusions

Linezolid-related adverse effects were frequent and occasionally severe. Careful linezolid safety monitoring is required. Compared to doses currently used in children in many settings for MDR-TB treatment, lower doses may approximate current adult target exposures, might result in fewer adverse events, and should therefore be evaluated.

Evaluation of a social protection policy on tuberculosis treatment outcomes: A prospective cohort study

Tue, 30/04/2019 - 23:00

by Karen Klein, Maria Paula Bernachea, Sarah Irribarren, Luz Gibbons, Cristina Chirico, Fernando Rubinstein

Background

Tuberculosis (TB) still represents a major public health problem in Latin America, with low success and high default rates. Poor adherence represents a major threat for TB control and promotes emergence of drug-resistant TB. Expanding social protection programs could have a substantial effect on the global burden of TB; however, there is little evidence to evaluate the outcomes of socioeconomic support interventions. This study evaluated the effect of a conditional cash transfer (CCT) policy on treatment success and default rates in a prospective cohort of socioeconomically disadvantaged patients.

Methods and findings

Data were collected on adult patients with a first diagnosis of pulmonary TB starting treatment in public healthcare facilities (HCFs) from 16 health departments with high TB burden in Buenos Aires who were followed until treatment completion or abandonment. The main exposure of interest was registration to receive the CCT. Other covariates, such as sociodemographic and clinical variables and HCFs’ characteristics usually associated with treatment adherence and outcomes, were also considered in the analysis. We used hierarchical models, propensity score (PS) matching, and inverse probability weighting (IPW) to estimate treatment effects, adjusting for individual and health system confounders. Of 941 patients with known CCT status, the 377 who registered for the program showed significantly higher success rates (82% versus 69%) and lower default rates (11% versus 20%). After controlling for individual and system characteristics and modality of treatment, the odds ratio (OR) for success was 2.9 (95% CI 2, 4.3, P < 0.001) and for default was 0.36 (95% CI 0.23, 0.57, P < 0.001). As this is an observational study evaluating an intervention that was not randomly assigned, there might be some unmeasured residual confounding. Although it is possible that a small number of patients were not registered in the program because they were deemed ineligible, the majority of patients fulfilled the requirements and were not registered for other reasons. Since the information on the CCT was collected at the end of the study, we do not know exactly when each patient was registered for the program.
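
As an illustration of the inverse probability weighting (IPW) approach named above, the sketch below computes an IPW-weighted difference in success rates from toy data; the propensity scores and outcomes are hypothetical, and the study's actual analysis also used hierarchical models and PS matching.

```python
import numpy as np

def ipw_risk_difference(treated, outcome, propensity):
    """Inverse-probability-weighted difference in success rates (Hajek-style estimator)."""
    t = np.asarray(treated, dtype=float)
    y = np.asarray(outcome, dtype=float)
    ps = np.asarray(propensity, dtype=float)
    w = t / ps + (1 - t) / (1 - ps)  # weight registered patients by 1/PS, others by 1/(1-PS)
    risk_registered = np.sum(w * t * y) / np.sum(w * t)
    risk_unregistered = np.sum(w * (1 - t) * y) / np.sum(w * (1 - t))
    return float(risk_registered - risk_unregistered)

# Toy data: CCT registration (1/0), treatment success (1/0), estimated propensity score
registered = [1, 1, 0, 0, 1, 0]
success = [1, 1, 1, 0, 0, 0]
ps = [0.6, 0.5, 0.4, 0.3, 0.7, 0.5]
print(ipw_risk_difference(registered, success, ps))
```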

Conclusions

The CCT appears to be a valuable health policy intervention to improve TB treatment outcomes. Incorporating these interventions as established policies may have a considerable effect on the control of TB in similar high-burden areas.

Extended prophylaxis for venous thromboembolism after hospitalization for medical illness: A trial sequential and cumulative meta-analysis

Mon, 29/04/2019 - 23:00

by Navkaranbir S. Bajaj, Muthiah Vaduganathan, Arman Qamar, Kartik Gupta, Ankur Gupta, Harsh Golwala, Javed Butler, Samuel Z. Goldhaber, Mandeep R. Mehra

Background

The efficacy, safety, and clinical importance of extended-duration thromboprophylaxis (EDT) for prevention of venous thromboembolism (VTE) in medical patients remain unclear. We compared the efficacy and safety of EDT in patients hospitalized for medical illness.

Methods and findings

Electronic databases of PubMed/MEDLINE, EMBASE, Cochrane Central, and ClinicalTrials.gov were searched from inception to March 21, 2019. We included randomized clinical trials (RCTs) reporting use of EDT for prevention of VTE. We performed trial sequential and cumulative meta-analyses to evaluate EDT effects on the primary efficacy endpoint of symptomatic VTE or VTE-related death, International Society on Thrombosis and Haemostasis (ISTH) major or fatal bleeding, and all-cause mortality. The pooled number needed to treat (NNT) to prevent one symptomatic or fatal VTE event and the number needed to harm (NNH) to cause one major or fatal bleeding event were calculated. Across 5 RCTs with 40,247 patients (mean age: 67–77 years, proportion of women: 48%–54%, most common reason for admission: heart failure), the duration of EDT ranged from 24 to 47 days. EDT reduced symptomatic VTE or VTE-related death compared with standard of care (0.8% versus 1.2%; risk ratio [RR]: 0.61, 95% confidence interval [CI]: 0.44–0.83; p = 0.002). EDT increased risk of ISTH major or fatal bleeding (0.6% versus 0.3%; RR: 2.04, 95% CI: 1.42–2.91; p < 0.001) in both meta-analyses and trial sequential analyses. Pooled NNT to prevent one symptomatic VTE or VTE-related death was 250 (95% CI: 167–500), whereas NNH to cause one major or fatal bleeding event was 333 (95% CI: 200–1,000). Limitations of the study include variation in enrollment criteria, individual therapies, duration of EDT, and VTE detection protocols across included trials.
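
The pooled NNT and NNH quoted above follow from the absolute risk differences; below is a quick arithmetic check using the rounded event rates reported in the abstract, so agreement with the published, pooled values is approximate.

```python
def number_needed(risk_a, risk_b):
    """Number needed to treat (or harm): 1 / absolute risk difference."""
    return 1.0 / abs(risk_a - risk_b)

# Symptomatic VTE or VTE-related death: 1.2% with standard care vs 0.8% with EDT
print(round(number_needed(0.012, 0.008)))  # 250 -> NNT to prevent one event
# ISTH major or fatal bleeding: 0.3% with standard care vs 0.6% with EDT
print(round(number_needed(0.003, 0.006)))  # 333 -> NNH to cause one event
```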

Conclusions

In this systematic review and meta-analysis of 5 randomized trials, we observed that use of a post-hospital discharge EDT strategy for a 4-to-6-week period reduced symptomatic or fatal VTE events at the expense of increased risk of major or fatal bleeding. Further investigations are still required to define the risks and benefits in discrete medically ill cohorts, evaluate cost-effectiveness, and develop pathways for targeted implementation of this postdischarge EDT strategy.

Trial registration

PROSPERO CRD42018109151.

Tuberculosis vaccines: Rising opportunities

Tue, 23/04/2019 - 23:00

by Johan Vekemans, Katherine L. O’Brien, Jeremy Farrar

Johan Vekemans, Katherine O'Brien, and Jeremy Farrar discuss recent breakthroughs in the search for a highly effective tuberculosis vaccine.

Controlling latent tuberculosis infection in high-burden countries: A neglected strategy to end TB

Tue, 23/04/2019 - 23:00

by Gavin J. Churchyard, Sue Swindells

In a Perspective, Gavin Churchyard and Sue Swindells discuss the importance of strategies to target latent tuberculosis infection in high-risk populations and thus disrupt a reservoir for new infections in high-burden countries.

Host-response-based gene signatures for tuberculosis diagnosis: A systematic comparison of 16 signatures

Tue, 23/04/2019 - 23:00

by Hayley Warsinske, Rohit Vashisht, Purvesh Khatri

Background

The World Health Organization (WHO) and Foundation for Innovative New Diagnostics (FIND) have published target product profiles (TPPs) calling for non-sputum-based diagnostic tests for the diagnosis of active tuberculosis (ATB) disease and for predicting the progression from latent tuberculosis infection (LTBI) to ATB. A large number of host-derived blood-based gene-expression biomarkers for diagnosis of patients with ATB have been proposed to date, but none have been implemented in clinical settings. The focus of this study is to directly compare published gene signatures for diagnosis of patients with ATB across a large, diverse list of publicly available gene expression datasets, and evaluate their performance against the WHO/FIND TPPs.

Methods and findings

We searched PubMed, Gene Expression Omnibus (GEO), and ArrayExpress in June 2018. We included all studies irrespective of study design and enrollment criteria. We found 16 gene signatures for the diagnosis of ATB compared to other clinical conditions in PubMed. For each signature, we implemented a classification model as described in the corresponding original publication of the signature. We identified 24 datasets containing 3,083 transcriptome profiles from whole blood or peripheral blood mononuclear cell samples of healthy controls or patients with ATB, LTBI, or other diseases from 14 countries in GEO. Using these datasets, we calculated weighted mean area under the receiver operating characteristic curve (AUROC), specificity at 90% sensitivity, and negative predictive value (NPV) for each gene signature across all datasets. We also compared the diagnostic odds ratio (DOR), heterogeneity in DOR, and false positive rate (FPR) for each signature using bivariate meta-analysis. Across 9 datasets of patients with culture-confirmed diagnosis of ATB, 11 signatures had weighted mean AUROC > 0.8, and 2 signatures had weighted mean AUROC ≤ 0.6. All but 2 signatures had high NPV (>98% at 2% prevalence). Two gene signatures achieved the minimal WHO TPP for a non-sputum-based triage test. When including datasets with clinical diagnosis of ATB, there was minimal reduction in the weighted mean AUROC and specificity of all but 3 signatures compared to when using only culture-confirmed ATB data. Only 4 signatures had homogeneous DOR and lower FPR when datasets with clinical diagnosis of ATB were included; other signatures either had heterogeneous DOR or higher FPR or both. Finally, 7 of 16 gene signatures predicted progression from LTBI to ATB 6 months prior to sputum conversion with positive predictive value > 6% at 2% prevalence. Our analyses may have under- or overestimated the performance of certain ATB diagnostic signatures because our implementation may be different from the published models for those signatures. We re-implemented published models because the exact models were not publicly available.
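
A minimal sketch of the pooled performance metric used above, assuming the weighted mean AUROC weights each dataset by its sample size; the per-dataset AUROCs below are hypothetical, for illustration only.

```python
import numpy as np

def weighted_mean_auroc(aurocs, dataset_sizes):
    """Weighted mean AUROC across datasets, weighting by sample size (assumed)."""
    a = np.asarray(aurocs, dtype=float)
    w = np.asarray(dataset_sizes, dtype=float)
    return float(np.sum(a * w) / np.sum(w))

# Hypothetical per-dataset AUROCs for one signature, and the dataset sizes
print(weighted_mean_auroc([0.92, 0.85, 0.78], [120, 300, 80]))  # ~0.86
```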

Conclusions

We found that host-response-based diagnostics could accurately identify patients with ATB and predict individuals with high risk of progression from LTBI to ATB prior to sputum conversion. We found that a higher number of genes in a signature did not increase the accuracy of the signature. Overall, the Sweeney3 signature performed robustly across all comparisons. Our results provide strong evidence for the potential of host-response-based diagnostics in achieving the WHO goal of ending tuberculosis by 2035, and host-response-based diagnostics should be pursued for clinical implementation.

Lay health supporters aided by mobile text messaging to improve adherence, symptoms, and functioning among people with schizophrenia in a resource-poor community in rural China (LEAN): A randomized controlled trial

Tue, 23/04/2019 - 23:00

by Dong (Roman) Xu, Shuiyuan Xiao, Hua He, Eric D. Caine, Stephen Gloyd, Jane Simoni, James P. Hughes, Juan Nie, Meijuan Lin, Wenjun He, Yeqing Yuan, Wenjie Gong

Background

Schizophrenia is a leading cause of disability, and a shift from facility- to community-based care has been proposed to meet the resource challenges of mental healthcare in low- and middle-income countries. We hypothesized that the addition of mobile texting would improve schizophrenia care in a resource-poor community setting compared with a community-based free-medicine program alone.

Methods and findings

In this 2-arm randomized controlled trial, 278 community-dwelling villagers (patient participants) were randomly selected from people with schizophrenia from 9 townships of Hunan, China, and were randomized 1:1 into 2 groups. The program participants were recruited between May 1, 2015, and August 31, 2015, and the intervention and follow-up took place between December 15, 2015, and July 1, 2016. Baseline characteristics of the 2 groups were similar. The patients were on average 46 years of age, had 7 years of education, had had schizophrenia for 18 years with minimal to mild symptoms and nearly one-fifth loss of functioning, were mostly living with family (95%), and had low incomes. Both the intervention and the control groups received a nationwide community-based mental health program that provided free antipsychotic medications. The patient participants in the intervention group also received LEAN (Lay health supporters, E-platform, Award, and iNtegration), a program that featured recruitment of a lay health supporter and text messages for medication reminders, health education, monitoring of early signs of relapses, and facilitated linkage to primary healthcare. The primary outcome was medication adherence (proportion of dosages taken) assessed by 2 unannounced home-based pill counts 30 days apart at the 6-month endpoint. The secondary and other outcomes included patient symptoms, functioning, relapses, re-hospitalizations, death for any reason, wandering away without notifying anyone, violence against others, damaging goods, and suicide. Intent-to-treat analysis was used. Missing data were handled with multiple imputations. In total, 271 out of 278 patient participants were successfully followed up for outcome assessment. Medication adherence was 0.48 in the control group and 0.61 in the intervention group (adjusted mean difference [AMD] 0.12 [95% CI 0.03 to 0.22]; p = 0.013; effect size 0.38). Among secondary and other outcomes, we noted a substantial reduction in the risk of relapse (26 [21.7%] of 120 interventional participants versus 40 [34.2%] of 117 controls; relative risk 0.63 [95% CI 0.42 to 0.97]; number needed to treat [NNT] 8.0) and re-hospitalization (9 [7.3%] of 123 interventional participants versus 25 [20.5%] of 122 controls; relative risk 0.36 [95% CI 0.17 to 0.73]; NNT 7.6). The program showed no statistically significant differences in any of the other outcomes. During the course of the program, 2 participants in the intervention group and 1 in the control group died. The limitations of the study include its lack of a full economic analysis, lack of individual tailoring of the text messages, the relatively short 6-month follow-up, and the generalizability constraint of the Chinese context.
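
The relative risks and NNTs for relapse and re-hospitalization quoted above can be reproduced from the raw counts in the abstract; a quick unadjusted check:

```python
def rr_and_nnt(events_intervention, n_intervention, events_control, n_control):
    """Unadjusted relative risk and number needed to treat from raw counts."""
    risk_i = events_intervention / n_intervention
    risk_c = events_control / n_control
    return risk_i / risk_c, 1.0 / (risk_c - risk_i)

print(rr_and_nnt(26, 120, 40, 117))  # relapse: RR ~0.63, NNT ~8.0
print(rr_and_nnt(9, 123, 25, 122))   # re-hospitalization: RR ~0.36, NNT ~7.6
```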

Conclusions

The addition of texting to patients and their lay health supporters in a resource-poor community setting was more effective than a free-medicine program alone in improving medication adherence and reducing relapses and re-hospitalizations. Future studies may test the effectiveness of customization of the texting to individual patients.

Trial registration

Chinese Clinical Trial Registry ChiCTR-ICR-15006053.

Socioeconomic position and use of healthcare in the last year of life: A systematic review and meta-analysis

Tue, 23/04/2019 - 23:00

by Joanna M. Davies, Katherine E. Sleeman, Javiera Leniz, Rebecca Wilson, Irene J. Higginson, Julia Verne, Matthew Maddocks, Fliss E. M. Murtagh

Background

Low socioeconomic position (SEP) is recognized as a risk factor for worse health outcomes. How socioeconomic factors influence end-of-life care, and the magnitude of their effect, is not understood. This review aimed to synthesise and quantify the associations between measures of SEP and use of healthcare in the last year of life.

Methods and findings

MEDLINE, EMBASE, PsycINFO, CINAHL, and ASSIA databases were searched without language restrictions from inception to 1 February 2019. We included empirical observational studies from high-income countries reporting an association between SEP (e.g., income, education, occupation, private medical insurance status, housing tenure, housing quality, or area-based deprivation) and place of death, plus use of acute care, specialist and nonspecialist end-of-life care, advance care planning, and quality of care in the last year of life. Methodological quality was evaluated using the Newcastle-Ottawa Quality Assessment Scale (NOS). The overall strength and direction of associations were summarised, and where sufficient comparable data were available, adjusted odds ratios (ORs) were pooled and dose-response meta-regression performed. A total of 209 studies were included (mean NOS quality score of 4.8); 112 high- to medium-quality observational studies were used in the meta-synthesis and meta-analysis (53.5% from North America, 31.0% from Europe, 8.5% from Australia, and 7.0% from Asia). Compared to people living in the least deprived neighbourhoods, people living in the most deprived neighbourhoods were more likely to die in hospital versus home (OR 1.30, 95% CI 1.23–1.38, p < 0.001), to receive acute hospital-based care in the last 3 months of life (OR 1.16, 95% CI 1.08–1.25, p < 0.001), and to not receive specialist palliative care (OR 1.13, 95% CI 1.07–1.19, p < 0.001). For every quintile increase in area deprivation, hospital versus home death was more likely (OR 1.07, 95% CI 1.05–1.08, p < 0.001), and not receiving specialist palliative care was more likely (OR 1.03, 95% CI 1.02–1.05, p < 0.001). Compared to the most educated (qualifications or years of education completed), the least educated people were more likely to not receive specialist palliative care (OR 1.26, 95% CI 1.07–1.49, p = 0.005). The observational nature of the studies included and the focus on high-income countries limit the conclusions of this review.
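
Where the review pooled adjusted odds ratios, the sketch below shows a generic fixed-effect inverse-variance pooling of log odds ratios; the pooling model and the study-level ORs are illustrative assumptions, not the review's actual method or data.

```python
import math

def pool_odds_ratios(ors, ci_lower, ci_upper):
    """Fixed-effect inverse-variance pooling of odds ratios (illustrative only).

    Standard errors are recovered from the 95% CIs on the log scale.
    """
    log_ors = [math.log(o) for o in ors]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in zip(ci_lower, ci_upper)]
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    return math.exp(pooled), (math.exp(pooled - 1.96 * se_pooled),
                              math.exp(pooled + 1.96 * se_pooled))

# Hypothetical study-level adjusted ORs for hospital versus home death
print(pool_odds_ratios([1.25, 1.40, 1.28], [1.10, 1.15, 1.05], [1.42, 1.70, 1.56]))
```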

Conclusions

In high-income countries, low SEP is a risk factor for hospital death as well as other indicators of potentially poor-quality end-of-life care, with evidence of a dose response indicating that inequality persists across the social stratum. These findings should stimulate widespread efforts to reduce socioeconomic inequality towards the end of life.

Discovery and validation of a prognostic proteomic signature for tuberculosis progression: A prospective cohort study

Tue, 16/04/2019 - 23:00

by Adam Penn-Nicholson, Thomas Hraha, Ethan G. Thompson, David Sterling, Stanley Kimbung Mbandi, Kirsten M. Wall, Michelle Fisher, Sara Suliman, Smitha Shankar, Willem A. Hanekom, Nebojsa Janjic, Mark Hatherill, Stefan H. E. Kaufmann, Jayne Sutherland, Gerhard Walzl, Mary Ann De Groote, Urs Ochsner, Daniel E. Zak, Thomas J. Scriba, ACS and GC6–74 cohort study groups

Background

A nonsputum blood test capable of predicting progression of healthy individuals to active tuberculosis (TB) before clinical symptoms manifest would allow targeted treatment to curb transmission. We aimed to develop a proteomic biomarker of risk of TB progression for ultimate translation into a point-of-care diagnostic.

Methods and findings

Proteomic TB risk signatures were discovered in a longitudinal cohort of 6,363 Mycobacterium tuberculosis-infected, HIV-negative South African adolescents aged 12–18 years (68% female) who participated in the Adolescent Cohort Study (ACS) between July 6, 2005 and April 23, 2007, through either active (every 6 months) or passive follow-up over 2 years. Forty-six individuals developed microbiologically confirmed TB disease within 2 years of follow-up and were selected as progressors; 106 nonprogressors, who remained healthy, were matched to progressors. Over 3,000 human proteins were quantified in plasma with a highly multiplexed proteomic assay (SOMAscan). Three hundred sixty-one proteins of differential abundance between progressors and nonprogressors were identified. A 5-protein signature, TB Risk Model 5 (TRM5), was discovered in the ACS training set and verified by blind prediction in the ACS test set. Poor performance on samples 13–24 months before TB diagnosis motivated discovery of a second 3-protein signature, 3-protein pair-ratio (3PR) developed using an orthogonal strategy on the full ACS subcohort. Prognostic performance of both signatures was validated in an independent cohort of 1,948 HIV-negative household TB contacts from The Gambia (aged 15–60 years, 66% female), longitudinally followed up for 2 years between March 5, 2007 and October 21, 2010, sampled at baseline, month 6, and month 18. Amongst these contacts, 34 individuals progressed to microbiologically confirmed TB disease and were included as progressors, and 115 nonprogressors were included as controls. Prognostic performance of the TRM5 signature in the ACS training set was excellent within 6 months of TB diagnosis (area under the receiver operating characteristic curve [AUC] 0.96 [95% confidence interval, 0.93–0.99]) and 6–12 months (AUC 0.76 [0.65–0.87]) before TB diagnosis. TRM5 validated with an AUC of 0.66 (0.56–0.75) within 1 year of TB diagnosis in the Gambian validation cohort. The 3PR signature yielded an AUC of 0.89 (0.84–0.95) within 6 months of TB diagnosis and 0.72 (0.64–0.81) 7–12 months before TB diagnosis in the entire South African discovery cohort and validated with an AUC of 0.65 (0.55–0.75) within 1 year of TB diagnosis in the Gambian validation cohort. Signature validation may have been limited by a systematic shift in signal magnitudes generated by differences between the validation assay when compared to the discovery assay. Further validation, especially in cohorts from non-African countries, is necessary to determine how generalizable signature performance is.

Conclusions

Both proteomic TB risk signatures predicted progression to incident TB within a year of diagnosis. To our knowledge, these are the first validated prognostic proteomic signatures. Neither meets the minimum criteria as defined in the WHO Target Product Profile for a progression test. More work is required to develop such a test for practical identification of individuals for investigation of incipient, subclinical, or active TB disease for appropriate treatment and care.

Screening for breech presentation using universal late-pregnancy ultrasonography: A prospective cohort study and cost effectiveness analysis

Tue, 16/04/2019 - 23:00

by David Wastlund, Alexandros A. Moraitis, Alison Dacey, Ulla Sovio, Edward C. F. Wilson, Gordon C. S. Smith

Background

Despite the relative ease with which breech presentation can be identified through ultrasound screening, the assessment of foetal presentation at term is often based on clinical examination only. Due to limitations in this approach, many women present in labour with an undiagnosed breech presentation, with increased risk of foetal morbidity and mortality. This study sought to determine the cost effectiveness of universal ultrasound scanning for breech presentation near term (36 weeks of gestational age [wkGA]) in nulliparous women.

Methods and findings

The Pregnancy Outcome Prediction (POP) study was a prospective cohort study between January 14, 2008 and July 31, 2012, including 3,879 nulliparous women who attended for a research screening ultrasound examination at 36 wkGA. Foetal presentation was assessed and compared for the groups with and without a clinically indicated ultrasound. Where breech presentation was detected, an external cephalic version (ECV) was routinely offered. If the ECV was unsuccessful or not performed, the women were offered either planned cesarean section at 39 weeks or attempted vaginal breech delivery. To compare the likelihood of different modes of delivery and associated long-term health outcomes for universal ultrasound to current practice, a probabilistic economic simulation model was constructed. Parameter values were obtained from the POP study, and costs were mainly obtained from the English National Health Service (NHS). One hundred seventy-nine out of 3,879 women (4.6%) were diagnosed with breech presentation at 36 weeks. For most of these women (96 of 179), there had been no prior suspicion of noncephalic presentation. ECV was attempted for 84 (46.9%) women and was successful in 12 (success rate: 14.3%). Overall, 19 of the 179 women delivered vaginally (10.6%), 110 delivered by elective cesarean section (ELCS) (61.5%), and 50 delivered by emergency cesarean section (EMCS) (27.9%). There were no women with undiagnosed breech presentation in labour in the entire cohort. On average, 40 scans were needed per detection of a previously undiagnosed breech presentation. The economic analysis indicated that, compared to current practice, universal late-pregnancy ultrasound would identify around 14,826 otherwise undiagnosed breech presentations across England annually. It would also reduce EMCS and vaginal breech deliveries by 0.7 and 1.0 percentage points, respectively: around 4,196 and 6,061 deliveries across England annually. Universal ultrasound would also prevent 7.89 neonatal mortalities annually. The strategy would be cost effective if foetal presentation could be assessed for £19.80 or less per woman. Limitations to this study included that foetal presentation was revealed to all women and that the health economic analysis may be altered by parity.
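
The "40 scans per detection" figure follows directly from the cohort counts reported above; a quick check:

```python
screened = 3879        # nulliparous women scanned at 36 wkGA
newly_detected = 96    # breech presentations with no prior suspicion
print(round(screened / newly_detected))  # ~40 scans per previously undiagnosed breech
```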

Conclusions

According to our estimates, universal late pregnancy ultrasound in nulliparous women (1) would virtually eliminate undiagnosed breech presentation, (2) would be expected to reduce foetal mortality in breech presentation, and (3) would be cost effective if foetal presentation could be assessed for less than £19.80 per woman.

The incidence of pregnancy hypertension in India, Pakistan, Mozambique, and Nigeria: A prospective population-level analysis

Fri, 12/04/2019 - 23:00

by Laura A. Magee, Sumedha Sharma, Hannah L. Nathan, Olalekan O. Adetoro, Mrutynjaya B. Bellad, Shivaprasad Goudar, Salécio E. Macuacua, Ashalata Mallapur, Rahat Qureshi, Esperança Sevene, John Sotunsa, Anifa Valá, Tang Lee, Beth A. Payne, Marianne Vidler, Andrew H. Shennan, Zulfiqar A. Bhutta, Peter von Dadelszen, the CLIP Study Group

Background

Most pregnancy hypertension estimates in less-developed countries are from cross-sectional hospital surveys and are considered overestimates. We estimated population-based rates by standardised methods in 27 intervention clusters of the Community-Level Interventions for Pre-eclampsia (CLIP) cluster randomised trials.

Methods and findings

CLIP-eligible pregnant women were identified in their homes or local primary health centres (2013–2017). Included here are women who had delivered by trial end and received a visit from a community health worker trained to provide supplementary hypertension-oriented care, including standardised blood pressure (BP) measurement. Hypertension (BP ≥ 140/90 mm Hg) was defined as chronic (first detected at <20 weeks gestation) or gestational (≥20 weeks); pre-eclampsia was gestational hypertension plus proteinuria or a pre-eclampsia-defining complication. A multi-level regression model compared hypertension rates and types between countries (p < 0.05 considered significant). In 28,420 pregnancies studied, women were usually young (median age 23–28 years), parous (53.7%–77.3%), with singletons (≥97.5%), and enrolled at a median gestational age of 10.4 (India) to 25.9 weeks (Mozambique). Basic education varied (22.8% in Pakistan to 57.9% in India). Pregnancy hypertension incidence was lower in Pakistan (9.3%) than India (10.3%), Mozambique (10.9%), or Nigeria (10.2%) (p = 0.001). Most hypertension was diastolic only (46.4% in India, 72.7% in Pakistan, 61.3% in Mozambique, and 63.3% in Nigeria). At first presentation with elevated BP, gestational hypertension was the most common diagnosis (particularly in Mozambique [8.4%] versus India [6.9%], Pakistan [6.5%], and Nigeria [7.1%]; p < 0.001), followed by pre-eclampsia (India [3.8%], Nigeria [3.0%], Pakistan [2.4%], and Mozambique [2.3%]; p < 0.001) and chronic hypertension (especially in Mozambique [2.5%] and Nigeria [2.8%], compared with India [1.2%] and Pakistan [1.5%]; p < 0.001). Inclusion of additional diagnoses of hypertension and related complications, from household surveys or facility record review (unavailable in Nigeria), revealed higher hypertension incidence: 14.0% in India, 11.6% in Pakistan, and 16.8% in Mozambique; eclampsia was rare (<0.5%).

Conclusions

Pregnancy hypertension is common in less-developed settings. Most women in this study presented with gestational hypertension amenable to surveillance and timed delivery to improve outcomes.

Trial registration

This study is a secondary analysis of a clinical trial - ClinicalTrials.gov registration number NCT01911494.

Lipoarabinomannan in sputum to detect bacterial load and treatment response in patients with pulmonary tuberculosis: Analytic validation and evaluation in two cohorts

Fri, 12/04/2019 - 23:00

by Masanori Kawasaki, Carmenchu Echiverri, Lawrence Raymond, Elizabeth Cadena, Evelyn Reside, Maria Tarcela Gler, Tetsuya Oda, Ryuta Ito, Ryo Higashiyama, Kiyonori Katsuragi, Yongge Liu

Background

Lipoarabinomannan (LAM) is a major antigen of Mycobacterium tuberculosis (MTB). In this report, we evaluated the ability of a novel immunoassay to measure concentrations of LAM in sputum as a biomarker of bacterial load prior to and during treatment in pulmonary tuberculosis (TB) patients.

Methods and findings

Phage display technology was used to isolate monoclonal antibodies binding to epitopes unique in LAM from MTB and slow-growing nontuberculous mycobacteria (NTM). Using these antibodies, a sandwich enzyme-linked immunosorbent assay (LAM-ELISA) was developed to quantitate LAM concentration. The LAM-ELISA had a lower limit of quantification of 15 pg/mL LAM, corresponding to 121 colony-forming units (CFUs)/mL of MTB strain H37Rv. It detected slow-growing NTMs but without cross-reacting to common oral bacteria. Two clinical studies were performed between the years 2013 and 2016 in Manila, Philippines, in patients without known human immunodeficiency virus (HIV) coinfection. In a case-control cohort diagnostic study, sputum specimens were collected from 308 patients (aged 17-69 years; 62% male) diagnosed as having pulmonary TB diseases or non-TB diseases, but who could expectorate sputum, and were then evaluated by smear microscopy, BACTEC MGIT 960 Mycobacterial Detection System (MGIT) and Lowenstein-Jensen (LJ) culture, and LAM-ELISA. Some sputum specimens were also examined by Xpert MTB/RIF. The LAM-ELISA detected all smear- and MTB-culture–positive samples (n = 70) and 50% (n = 29) of smear-negative but culture-positive samples (n = 58) (versus 79.3%; 46 positive cases by the Xpert MTB/RIF), but none from non-TB patients (n = 56). Among both LAM and MGIT MTB-culture-positive samples, log10-transformed LAM concentration and MGIT time to detection (TTD) showed a good inverse relationship (r = −0.803, p < 0.0001). In a prospective longitudinal cohort study, 40 drug-susceptible pulmonary TB patients (aged 18-69 years; 60% male) were enrolled during the first 56 days of the standard 4-drug therapy. Declines in sputum LAM concentrations correlated with increases of MGIT TTD in individual patients. There was a 1.29 log10 decrease of sputum LAM concentration, corresponding to an increase of 221 hours for MGIT TTD during the first 14 days of treatment, a treatment duration often used in early bactericidal activity (EBA) trials. Major limitations of this study include a relatively small number of patients, treatment duration up to only 56 days, lack of quantitative sputum culture CFU count data, and no examination of the correlation of sputum LAM to clinical cure.

Conclusions

These results indicate that the LAM-ELISA can determine LAM concentration in sputum, and sputum LAM measured by the assay may be used as a biomarker of bacterial load prior to and during TB treatment. Additional studies are needed to examine the predictive value of this novel biomarker on treatment outcomes.

Effects of repeat prenatal corticosteroids given to women at risk of preterm birth: An individual participant data meta-analysis

Fri, 12/04/2019 - 23:00

by Caroline A. Crowther, Philippa F. Middleton, Merryn Voysey, Lisa Askie, Sasha Zhang, Tanya K. Martlow, Fariba Aghajafari, Elizabeth V. Asztalos, Peter Brocklehurst, Sourabh Dutta, Thomas J. Garite, Debra A. Guinn, Mikko Hallman, Pollyanna Hardy, Men-Jean Lee, Kimberley Maurel, Premasish Mazumder, Cindy McEvoy, Kellie E. Murphy, Outi M. Peltoniemi, Elizabeth A. Thom, Ronald J. Wapner, Lex W. Doyle, the PRECISE Group

Background

Infants born preterm compared with infants born at term are at an increased risk of dying and of serious morbidities in early life, and those who survive have higher rates of neurological impairments. It remains unclear whether exposure to repeat courses of prenatal corticosteroids can reduce these risks. This individual participant data (IPD) meta-analysis (MA) assessed whether the effect of repeat prenatal corticosteroid treatment given to women at ongoing risk of preterm birth in order to benefit their infants is modified by participant or treatment factors.

Methods and findings

Trials were eligible for inclusion if they randomised women considered at risk of preterm birth who had already received an initial, single course of prenatal corticosteroid seven or more days previously and in which corticosteroids were compared with either placebo or no treatment. The primary outcomes for the infants were serious outcome, use of respiratory support, and birth weight z-scores; for the children, they were death or any neurosensory disability; and for the women, maternal sepsis. Studies were identified using the Cochrane Pregnancy and Childbirth search strategy. Date of last search was 20 January 2015. IPD were sought from investigators with eligible trials. Risk of bias was assessed using criteria from the Cochrane Collaboration. IPD were analysed using a one-stage approach. Eleven trials, conducted between 2002 and 2010, were identified as eligible, with five trials being from the United States, two from Canada, and one each from Australia and New Zealand, Finland, India, and the United Kingdom. All 11 trials were included, with 4,857 women and 5,915 infants contributing data. The mean gestational age at trial entry for the trials was between 27.4 weeks and 30.2 weeks. There was no significant difference in the proportion of infants with a serious outcome (relative risk [RR] 0.92, 95% confidence interval [CI] 0.82 to 1.04, 5,893 infants, 11 trials, p = 0.33 for heterogeneity). There was a reduction in the use of respiratory support in infants exposed to repeat prenatal corticosteroids compared with infants not exposed (RR 0.91, 95% CI 0.85 to 0.97, 5,791 infants, 10 trials, p = 0.64 for heterogeneity). The number needed to treat (NNT) to benefit was 21 (95% CI 14 to 41) women/fetus to prevent one infant from needing respiratory support. Birth weight z-scores were lower in the repeat corticosteroid group (mean difference −0.12, 95% CI −0.18 to −0.06, 5,902 infants, 11 trials, p = 0.80 for heterogeneity). No statistically significant differences were seen for any of the primary outcomes for the child (death or any neurosensory disability) or for the woman (maternal sepsis). The treatment effect varied little by the reason the woman was considered to be at risk of preterm birth, the number of fetuses in utero, the gestational age when the first trial treatment course was given, or the time prior to birth that the last dose was given. Infants exposed to between 2 and 5 courses of repeat corticosteroids showed a reduction in both serious outcome and the use of respiratory support compared with infants exposed to only a single repeat course. However, increasing numbers of repeat courses of corticosteroids were associated with larger reductions in birth z-scores for weight, length, and head circumference. Not all trials could provide data for all of the prespecified subgroups, so this limited the power to detect differences because event rates are low for some important maternal, infant, and childhood outcomes.

Conclusions

In this study, we found that repeat prenatal corticosteroids given to women at ongoing risk of preterm birth after an initial course reduced the likelihood of their infant needing respiratory support after birth and led to neonatal benefits. Body size measures at birth were lower in infants exposed to repeat prenatal corticosteroids. Our findings suggest that to provide clinical benefit with the least effect on growth, the number of repeat treatment courses should be limited to a maximum of three and the total dose to between 24 mg and 48 mg.

Preferences for HIV testing services among men who have sex with men in the UK: A discrete choice experiment

Thu, 11/04/2019 - 23:00

by Alec Miners, Tom Nadarzynski, Charles Witzel, Andrew N. Phillips, Valentina Cambiano, Alison J. Rodger, Carrie D. Llewellyn

Background

In the UK, approximately 4,200 men who have sex with men (MSM) are living with HIV but remain undiagnosed. Maximising the number of high-risk people testing for HIV is key to ensuring prompt treatment and preventing onward infection. This study assessed how different HIV test characteristics affect the choice of testing option, including remote testing (HIV self-testing or HIV self-sampling), in the UK, a country with universal access to healthcare.

Methods and findings

Between 3 April and 11 May 2017, a cross-sectional online-questionnaire-based discrete choice experiment (DCE) was conducted in which respondents who expressed an interest in online material used by MSM were asked to imagine that they were at risk of HIV infection and to choose between different hypothetical HIV testing options, including the option not to test. A variety of different testing options with different defining characteristics were described so that the independent preference for each characteristic could be valued. The characteristics included where each test is taken, the sampling method, how the test is obtained, whether infections other than HIV are tested for, test accuracy, the cost of the test, the infection window period, and how long it takes to receive the test result. Participants were recruited and completed the instrument online, in order to include those not currently engaged with healthcare services. The main analysis was conducted using a latent class model (LCM), with results displayed as odds ratios (ORs) and probabilities. The ORs indicate the strength of preference for one characteristic relative to another (base) characteristic. In total, 620 respondents answered the DCE questions. Most respondents reported that they were white (93%) and were either gay or bisexual (99%). The LCM showed that there were 2 classes within the respondent sample that appeared to have different preferences for the testing options. The first group, which was likely to contain 86% of respondents, had a strong preference for face-to-face tests by healthcare professionals (HCPs) compared to remote testing (OR 6.4; 95% CI 5.6, 7.4) and viewed not testing as less preferable than remote testing (OR 0.10; 95% CI 0.09, 0.11). In the second group, which was likely to include 14% of participants, not testing was viewed as less desirable than remote testing (OR 0.56; 95% CI 0.53, 0.59) as were tests by HCPs compared to remote testing (OR 0.23; 95% CI 0.15, 0.36). In both classes, free remote tests instead of each test costing £30 was the test characteristic with the largest impact on the choice of testing option. Participants in the second group were more likely to have never previously tested and to be non-white than participants in the first group. The main study limitations were that the sample was recruited solely via social media, the study advert was viewed only by people expressing an interest in online material used by MSM, and the choices in the experiment were hypothetical rather than observed in the real world.

Conclusions

Our results suggest that preferences in the context we examined are broadly dichotomous. One group, containing the majority of MSM, appears comfortable testing for HIV but prefers face-to-face testing by HCPs rather than remote testing. The other group is much smaller, but contains MSM who are more likely to be at high infection risk. For these people, the availability of remote testing has the potential to significantly increase net testing rates, particularly if provided for free.

Octreotide-LAR in later-stage autosomal dominant polycystic kidney disease (ALADIN 2): A randomized, double-blind, placebo-controlled, multicenter trial

Fri, 05/04/2019 - 23:00

by Norberto Perico, Piero Ruggenenti, Annalisa Perna, Anna Caroli, Matias Trillini, Sandro Sironi, Antonio Pisani, Eleonora Riccio, Massimo Imbriaco, Mauro Dugo, Giovanni Morana, Antonio Granata, Michele Figuera, Flavio Gaspari, Fabiola Carrara, Nadia Rubis, Alessandro Villa, Sara Gamba, Silvia Prandini, Monica Cortinovis, Andrea Remuzzi, Giuseppe Remuzzi, for the ALADIN 2 Study Group

Background

Autosomal dominant polycystic kidney disease (ADPKD) is the most frequent genetically determined renal disease. In affected patients, renal function may progressively decline up to end-stage renal disease (ESRD), and approximately 10% of those with ESRD are affected by ADPKD. The somatostatin analog octreotide long-acting release (octreotide-LAR) slows renal function deterioration in patients in early stages of the disease. We evaluated the renoprotective effect of octreotide-LAR in ADPKD patients at high risk of ESRD because of later-stage ADPKD.

Methods and findings

We did an internally funded, parallel-group, double-blind, placebo-controlled phase III trial to assess octreotide-LAR in adults with ADPKD with glomerular filtration rate (GFR) 15–40 ml/min/1.73 m2. Participants were randomized to receive 2 intramuscular injections of 20 mg octreotide-LAR (n = 51) or 0.9% sodium chloride solution (placebo; n = 49) every 28 days for 3 years. Central randomization was 1:1 using a computerized list stratified by center and presence or absence of diabetes or proteinuria. Co-primary short- and long-term outcomes were 1-year total kidney volume (TKV) (computed tomography scan) growth and 3-year GFR (iohexol plasma clearance) decline. Analyses were by modified intention-to-treat. Patients were recruited from 4 Italian nephrology units between October 11, 2011, and March 20, 2014, and followed up to April 14, 2017. Baseline characteristics were similar between groups. Compared to placebo, octreotide-LAR reduced median (95% CI) TKV growth from baseline by 96.8 (10.8 to 182.7) ml at 1 year (p = 0.027) and 422.6 (150.3 to 695.0) ml at 3 years (p = 0.002). Reduction in the median (95% CI) rate of GFR decline (0.56 [−0.63 to 1.75] ml/min/1.73 m2 per year) was not significant (p = 0.295). TKV analyses were adjusted for age, sex, and baseline TKV. Over a median (IQR) 36 (24 to 37) months of follow-up, 9 patients on octreotide-LAR and 21 patients on placebo progressed to a doubling of serum creatinine or ESRD (composite endpoint) (hazard ratio [HR] [95% CI] adjusted for age, sex, baseline serum creatinine, and baseline TKV: 0.307 [0.127 to 0.742], p = 0.009). One composite endpoint was prevented for every 4 treated patients. Among 63 patients with chronic kidney disease (CKD) stage 4, 3 on octreotide-LAR and 8 on placebo progressed to ESRD (adjusted HR [95% CI]: 0.121 [0.017 to 0.866], p = 0.036). Three patients on placebo had a serious renal cyst rupture/infection and 1 patient had a serious urinary tract infection/obstruction, versus 1 patient on octreotide-LAR with a serious renal cyst infection. The main study limitation was the small sample size.

Conclusions

In this study we observed that in later-stage ADPKD, octreotide-LAR slowed kidney growth and delayed progression to ESRD, in particular in CKD stage 4.

Trial registration

ClinicalTrials.gov NCT01377246; EudraCT: 2011-000138-12.

Risk score for predicting mortality including urine lipoarabinomannan detection in hospital inpatients with HIV-associated tuberculosis in sub-Saharan Africa: Derivation and external validation cohort study

Fri, 05/04/2019 - 23:00

by Ankur Gupta-Wright, Elizabeth L. Corbett, Douglas Wilson, Joep J. van Oosterhout, Keertan Dheda, Helena Huerga, Jonny Peter, Maryline Bonnet, Melanie Alufandika-Moyo, Daniel Grint, Stephen D. Lawn, Katherine Fielding

Background

The prevalence of and mortality from HIV-associated tuberculosis (HIV/TB) in hospital inpatients in Africa remains unacceptably high. Currently, there is a lack of tools to identify those at high risk of early mortality who may benefit from adjunctive interventions. We therefore aimed to develop and validate a simple clinical risk score to predict mortality in high-burden, low-resource settings.

Methods and findings

A cohort of HIV-positive adults with laboratory-confirmed TB from the STAMP TB screening trial (Malawi and South Africa) was used to derive a clinical risk score using multivariable predictive modelling, considering factors at hospital admission (including urine lipoarabinomannan [LAM] detection) thought to be associated with 2-month mortality. Performance was evaluated internally and then externally validated using independent cohorts from 2 other studies (LAM-RCT and a Médecins Sans Frontières [MSF] cohort) from South Africa, Zambia, Zimbabwe, Tanzania, and Kenya. The derivation cohort included 315 patients enrolled between October 2015 and September 2017. Their median age was 36 years (IQR 30–43), 45.4% were female, median CD4 cell count at admission was 76 cells/μl (IQR 23–206), and 80.2% (210/262) of those who knew they were HIV-positive at hospital admission were taking antiretroviral therapy (ART). Two-month mortality was 30% (94/315), and mortality was associated with the following factors included in the score: age 55 years or older, male sex, being ART experienced, having severe anaemia (haemoglobin < 80 g/l), being unable to walk unaided, and having a positive urinary Determine TB LAM Ag test (Alere). The score identified patients with a 46.4% (95% CI 37.8%–55.2%) mortality risk in the high-risk group compared to 12.5% (95% CI 5.7%–25.4%) in the low-risk group (p < 0.001). The odds ratio (OR) for mortality was 6.1 (95% CI 2.4–15.2) in high-risk patients compared to low-risk patients (p < 0.001). Discrimination (c-statistic 0.70, 95% CI 0.63–0.76) and calibration (Hosmer-Lemeshow statistic, p = 0.78) were good in the derivation cohort, and similar in the external validation cohort (complete cases n = 372, c-statistic 0.68 [95% CI 0.61–0.74]). The validation cohort included 644 patients enrolled between January 2013 and August 2015. Median age was 36 years, 48.9% were female, and median CD4 count at admission was 61 (IQR 21–145). The OR for mortality was 5.3 (95% CI 2.2–9.5) for high-risk compared to low-risk patients (complete cases n = 372, p < 0.001). The score also predicted patients at higher risk of death both pre- and post-discharge. A simplified score (any 3 or more of the predictors) performed equally well. The main limitations of the scores were their imperfect accuracy, the need for access to urine LAM testing, modest study size, and not measuring all potential predictors of mortality (e.g., tuberculosis drug resistance).
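
A minimal sketch of the simplified score described above, which flags high risk when any 3 or more of the 6 admission predictors are present; the variable names and data structure are illustrative assumptions, and the full (weighted) score is not reproduced here.

```python
ADMISSION_PREDICTORS = (
    "age_55_or_older",
    "male_sex",
    "art_experienced",
    "severe_anaemia",          # haemoglobin < 80 g/l
    "unable_to_walk_unaided",
    "urine_lam_positive",      # positive Determine TB LAM Ag test
)

def simplified_high_risk(patient: dict) -> bool:
    """Flag high predicted mortality risk when >= 3 of the 6 predictors are present."""
    return sum(bool(patient.get(p, False)) for p in ADMISSION_PREDICTORS) >= 3

example = {"male_sex": True, "severe_anaemia": True, "urine_lam_positive": True}
print(simplified_high_risk(example))  # True
```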

Conclusions

This risk score is capable of identifying patients who could benefit from enhanced clinical care, follow-up, and/or adjunctive interventions, although further prospective validation studies are necessary. Given the scale of HIV/TB morbidity and mortality in African hospitals, better prognostic tools along with interventions could contribute towards global targets to reduce tuberculosis mortality.

Tuberculosis drugs’ distribution and emergence of resistance in patient’s lung lesions: A mechanistic model and tool for regimen and dose optimization

Tue, 02/04/2019 - 23:00

by Natasha Strydom, Sneha V. Gupta, William S. Fox, Laura E. Via, Hyeeun Bang, Myungsun Lee, Seokyong Eum, TaeSun Shim, Clifton E. Barry III, Matthew Zimmerman, Véronique Dartois, Radojka M. Savic

Background

The sites of mycobacterial infection in the lungs of tuberculosis (TB) patients have complex structures and poor vascularization, which obstruct drug distribution to these hard-to-reach and hard-to-treat disease sites, leading to suboptimal drug concentrations, a compromised TB treatment response, and the development of resistance. Quantifying lesion-specific drug uptake and pharmacokinetics (PKs) in TB patients is necessary to optimize treatment regimens at all infection sites, to identify patients at risk, to improve existing regimens, and to advance development of novel regimens. Using drug-level data in plasma and from 9 distinct pulmonary lesion types (vascular, avascular, and mixed) obtained from 15 hard-to-treat TB patients who had failed TB treatment and therefore underwent lung resection surgery, we quantified the distribution and the penetration of 7 major TB drugs at these sites, and we provide novel tools for treatment optimization.

Methods and findings

A total of 329 plasma- and 1,362 tissue-specific drug concentrations from 9 distinct lung lesion types were obtained according to optimal PK sampling schema from 15 patients (10 men, 5 women, aged 23 to 58) undergoing lung resection surgery (clinical study NCT00816426 performed in South Korea between 9 June 2010 and 24 June 2014). Seven major TB drugs (rifampin [RIF], isoniazid [INH], linezolid [LZD], moxifloxacin [MFX], clofazimine [CFZ], pyrazinamide [PZA], and kanamycin [KAN]) were quantified. We developed and evaluated a site-of-action mechanistic PK model using nonlinear mixed effects methodology. We quantified population- and patient-specific lesion/plasma ratios (RPLs), dynamics, and variability of drug uptake into each lesion for each drug. CFZ and MFX had higher drug exposures in lesions compared to plasma (median RPL 2.37, range across lesions 1.26–22.03); RIF, PZA, and LZD showed moderate yet suboptimal lesion penetration (median RPL 0.61, range 0.21–2.4), while INH and KAN showed poor tissue penetration (median RPL 0.4, range 0.03–0.73). Stochastic PK/pharmacodynamic (PD) simulations were carried out to evaluate current regimen combinations and dosing guidelines in distinct patient strata. Patients receiving standard doses of RIF and INH who are in the lower range of the exposure distribution spent substantial periods (>12 h/d) below effective concentrations in hard-to-treat lesions, such as caseous lesions and cavities. Standard doses of INH (300 mg) and KAN (1,000 mg) did not reach therapeutic thresholds in most lesions for a majority of the population. Drugs and doses that did reach target exposure in most subjects included 400 mg MFX and 100 mg CFZ. Patients with cavitary lesions, irrespective of drug choice, had an increased likelihood of subtherapeutic concentrations, leading to a higher risk of resistance acquisition while on treatment. A limitation of this study was its small sample size of 15 patients, drawn from a unique study population of TB patients who had failed treatment and underwent lung resection surgery. These results still need further exploration and validation in larger and more diverse cohorts.
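
A minimal sketch of the lesion-to-plasma ratio (RPL) summary used above, assuming paired lesion and plasma concentrations; the study estimated RPLs within a mechanistic population PK model, which this simple ratio does not reproduce, and the concentrations below are hypothetical.

```python
import numpy as np

def median_lesion_plasma_ratio(lesion_conc, plasma_conc):
    """Median lesion-to-plasma concentration ratio across paired samples."""
    lesion = np.asarray(lesion_conc, dtype=float)
    plasma = np.asarray(plasma_conc, dtype=float)
    return float(np.median(lesion / plasma))

# Hypothetical paired drug concentrations (mg/L) in lesion tissue and plasma
print(median_lesion_plasma_ratio([4.2, 1.1, 0.6], [1.8, 1.9, 2.0]))  # ~0.58
```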

Conclusions

Our results suggest that the ability to reach and maintain therapeutic concentrations is both lesion and drug specific, indicating that stratifying patients based on disease extent, lesion types, and individual drug-susceptibility profiles may eventually be useful for guiding the selection of patient-tailored drug regimens and may lead to improved TB treatment outcomes. We provide a web-based tool to further explore this model and results at http://saviclab.org/tb-lesion/.

Incidence of eclampsia and related complications across 10 low- and middle-resource geographical regions: Secondary analysis of a cluster randomised controlled trial

Fri, 29/03/2019 - 23:00

by Nicola Vousden, Elodie Lawley, Paul T. Seed, Muchabayiwa Francis Gidiri, Shivaprasad Goudar, Jane Sandall, Lucy C. Chappell, Andrew H. Shennan, on behalf of the CRADLE Trial Collaborative Group

Background

In 2015, approximately 42,000 women died as a result of hypertensive disorders of pregnancy worldwide; over 99% of these deaths occurred in low- and middle-income countries. The aim of this paper is to describe the incidence and characteristics of eclampsia and related complications from hypertensive disorders of pregnancy across 10 low- and middle-income geographical regions in 8 countries, in relation to magnesium sulfate availability.

Methods and findings

This is a secondary analysis of a stepped-wedge cluster randomised controlled trial undertaken in sub-Saharan Africa, India, and Haiti. This trial implemented a novel vital sign device and training package in routine maternity care with the aim of reducing a composite outcome of maternal mortality and morbidity. Institutional-level consent was obtained, and all women presenting for maternity care were eligible for inclusion. Data on eclampsia, stroke, admission to intensive care with a hypertensive disorder of pregnancy, and maternal death from a hypertensive disorder of pregnancy were prospectively collected from routine data sources and active case finding, together with data on perinatal outcomes in women with these outcomes. In 536,233 deliveries between 1 April 2016 and 30 November 2017, there were 2,692 women with eclampsia (0.5%). In total 6.9% (n = 186; 3.47/10,000 deliveries) of women with eclampsia died, and a further 51 died from other complications of hypertensive disorders of pregnancy (0.95/10,000). After planned adjustments, the implementation of the CRADLE intervention was not associated with any significant change in the rates of eclampsia, stroke, or maternal death or intensive care admission with a hypertensive disorder of pregnancy. Nearly 1 in 5 (17.9%) women with eclampsia, stroke, or a hypertensive disorder of pregnancy causing intensive care admission or maternal death experienced a stillbirth or neonatal death. A third of eclampsia cases (33.2%; n = 894) occurred in women under 20 years of age, 60.0% in women aged 20–34 years (n = 1,616), and 6.8% (n = 182) in women aged 35 years or over. Rates of eclampsia varied approximately 7-fold between sites (range 19.6/10,000 in Zambia Centre 1 to 142.0/10,000 in Sierra Leone). Over half (55.1%) of first eclamptic fits occurred in a health-care facility, with the remainder in the community. Place of first fit varied substantially between sites (from 5.9% in the central referral facility in Sierra Leone to 85% in Uganda Centre 2). On average, magnesium sulfate was available in 74.7% of facilities (range 25% in Haiti to 100% in Sierra Leone and Zimbabwe). There was no detectable association between magnesium sulfate availability and the rate of eclampsia across sites (p = 0.12). This analysis may have been influenced by the selection of predominantly urban and peri-urban settings, and by collection of only monthly data on availability of magnesium sulfate, and is limited by the lack of demographic data in the population of women delivering in the trial areas.
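
The per-10,000 rates quoted above follow from the raw counts in the abstract; a quick arithmetic check:

```python
deliveries = 536_233
eclampsia_cases = 2_692
eclampsia_deaths = 186
other_hypertensive_deaths = 51

print(round(eclampsia_cases / deliveries * 100, 1))               # ~0.5% of deliveries
print(round(eclampsia_deaths / deliveries * 10_000, 2))           # 3.47 per 10,000
print(round(other_hypertensive_deaths / deliveries * 10_000, 2))  # 0.95 per 10,000
```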

Conclusions

The large variation in eclampsia and maternal and neonatal fatality from hypertensive disorders of pregnancy between countries emphasises that inequality and inequity persist in healthcare for women with hypertensive disorders of pregnancy. Alongside the growing interest in improving community detection and health education for these disorders, efforts to improve quality of care within healthcare facilities are key. Strategies to prevent eclampsia should be informed by local data.

Trial registration

ISRCTN: 41244132.

A whole-health–economy approach to antimicrobial stewardship: Analysis of current models and future direction

Fri, 29/03/2019 - 23:00

by Monsey McLeod, Raheelah Ahmad, Nada Atef Shebl, Christianne Micallef, Fiona Sim, Alison Holmes

In a Policy Forum, Alison Holmes and colleagues discuss coordinated approaches to antimicrobial stewardship.

Community health workers to improve uptake of maternal healthcare services: A cluster-randomized pragmatic trial in Dar es Salaam, Tanzania

Fri, 29/03/2019 - 23:00

by Pascal Geldsetzer, Eric Mboggo, Elysia Larson, Irene Andrew Lema, Lucy Magesa, Lameck Machumi, Nzovu Ulenga, David Sando, Mary Mwanyika-Sando, Donna Spiegelman, Ester Mungure, Nan Li, Hellen Siril, Phares Mujinja, Helga Naburi, Guerino Chalamilla, Charles Kilewo, Anna Mia Ekström, Dawn Foster, Wafaie Fawzi, Till Bärnighausen

Background

Home delivery and late and infrequent attendance at antenatal care (ANC) are responsible for substantial avoidable maternal and pediatric morbidity and mortality in sub-Saharan Africa. This cluster-randomized trial aimed to determine the impact of a community health worker (CHW) intervention on the proportion of women who (i) visit ANC fewer than 4 times during their pregnancy and (ii) deliver at home.

Methods and findings

As part of a 2-by-2 factorial design, we conducted a cluster-randomized trial of a home-based CHW intervention in 2 of 3 districts of Dar es Salaam from 18 June 2012 to 15 January 2014. Thirty-six wards (geographical areas) in the 2 districts were randomized to the CHW intervention, and 24 wards to the standard of care. In the standard-of-care arm, CHWs visited women enrolled in prevention of mother-to-child HIV transmission (PMTCT) care and provided information and counseling. The intervention arm included additional CHW supervision and the following additional CHW tasks, which were targeted at all pregnant women regardless of HIV status: (i) conducting home visits to identify pregnant women and refer them to ANC, (ii) counseling pregnant women on maternal health, and (iii) providing home visits to women who missed an ANC or PMTCT appointment. The primary endpoints of this trial were the proportion of pregnant women (i) not making at least 4 ANC visits and (ii) delivering at home. The outcomes were assessed through a population-based household survey at the end of the trial period. We did not collect data on adverse events. A random sample of 2,329 pregnant women and new mothers living in the study area were interviewed during home visits. At the time of the survey, the mean age of participants was 27.3 years, and 34.5% (804/2,329) were pregnant. The proportion of women who reported having attended fewer than 4 ANC visits did not differ significantly between the intervention and standard-of-care arms (59.1% versus 60.7%, respectively; risk ratio [RR]: 0.97; 95% CI: 0.82–1.15; p = 0.754). Similarly, the proportion reporting that they had attended ANC in the first trimester did not differ significantly between study arms. However, women in intervention wards were significantly less likely to report having delivered at home (3.9% versus 7.3%; RR: 0.54; 95% CI: 0.30–0.95; p = 0.034). Mixed-methods analyses of additional data collected as part of this trial suggest that an important reason for the lack of effect on ANC outcomes was the perceived high economic burden and inconvenience of attending ANC. The main limitations of this trial were that (i) the outcomes were ascertained through self-report, (ii) the study was stopped 4 months early due to a change in the standard of care in the other trial that was part of the 2-by-2 factorial design, and (iii) the sample size of the household survey was not prespecified.

Conclusions

A home-based CHW intervention in urban Tanzania significantly reduced the proportion of women who reported having delivered at home, in an area that already has very high uptake of facility-based delivery. The intervention did not affect self-reported ANC attendance. Policy makers should consider piloting, evaluating, and scaling interventions to lessen the economic burden and inconvenience of ANC.

Trial registration

ClinicalTrials.gov NCT01932138