PLoS Medicine

PLOS Medicine: New Articles
A Peer-Reviewed Open-Access Journal

Validation of IMPROD biparametric MRI in men with clinically suspected prostate cancer: A prospective multi-institutional trial

Mon, 03/06/2019 - 23:00

by Ivan Jambor, Janne Verho, Otto Ettala, Juha Knaapila, Pekka Taimen, Kari T. Syvänen, Aida Kiviniemi, Esa Kähkönen, Ileana Montoya Perez, Marjo Seppänen, Antti Rannikko, Outi Oksanen, Jarno Riikonen, Sanna Mari Vimpeli, Tommi Kauko, Harri Merisaari, Markku Kallajoki, Tuomas Mirtti, Tarja Lamminen, Jani Saunavaara, Hannu J. Aronen, Peter J. Boström

Background

Magnetic resonance imaging (MRI) combined with targeted biopsy (TB) is increasingly used in men with clinically suspected prostate cancer (PCa), but the long acquisition times, high costs, and inter-center/reader variability of routine multiparametric prostate MRI limit its wider adoption.

Methods and findings

The aim was to validate a previously developed unique MRI acquisition and reporting protocol, IMPROD biparametric MRI (bpMRI) (NCT01864135), in men with a clinical suspicion of PCa in a multi-institutional trial (NCT02241122). IMPROD bpMRI has an average acquisition time of 15 minutes (no endorectal coil, no intravenous contrast use) and consists of T2-weighted imaging and 3 separate diffusion-weighted imaging acquisitions. Between February 1, 2015, and March 31, 2017, 364 men with a clinical suspicion of PCa were enrolled at 4 institutions in Finland. Men with an equivocal to high suspicion (IMPROD bpMRI Likert score 3–5) of PCa had 2 TBs of up to 2 lesions followed by a systematic biopsy (SB). Men with a low to very low suspicion (IMPROD bpMRI Likert score 1–2) had only SB. All data and protocols are freely available. The primary outcome of the trial was diagnostic accuracy—including overall accuracy, sensitivity, specificity, negative predictive value (NPV), and positive predictive value—of IMPROD bpMRI for clinically significant PCa (SPCa), which was defined as a Gleason score ≥ 3 + 4 (Gleason grade group 2 or higher). In total, 338 (338/364, 93%) prospectively enrolled men completed the trial. The accuracy and NPV of IMPROD bpMRI for SPCa were 70% (113/161) and 95% (71/75) (95% CI 87%–98%), respectively. Restricting the biopsy to men with equivocal to highly suspicious IMPROD bpMRI findings would have resulted in a 22% (75/338) reduction in the number of men undergoing biopsy while missing 4 (3%, 4/146) men with SPCa. The main limitation is uncertainty about the true PCa prevalence in the study cohort, since some of the men may have PCa despite having negative biopsy findings.
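For readers who want to check the arithmetic, the reported NPV and its 95% CI can be reproduced from the counts given in the abstract. A minimal Python sketch, assuming a Wilson score interval (the abstract does not state which interval was used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (centre - spread) / denom, (centre + spread) / denom

# Counts reported in the abstract: 71 of 75 men with low to very low
# suspicion IMPROD bpMRI findings had no clinically significant PCa.
npv = 71 / 75
lo, hi = wilson_ci(71, 75)
print(f"NPV = {npv:.0%}, 95% CI {lo:.0%}-{hi:.0%}")  # NPV = 95%, 95% CI 87%-98%
```

The rounded interval matches the 87%–98% reported in the abstract.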

Conclusions

IMPROD bpMRI demonstrated a high NPV for SPCa in men with a clinical suspicion of PCa in this prospective multi-institutional clinical trial.

Trial registration

ClinicalTrials.gov NCT02241122.

Effects of a large-scale distribution of water filters and natural draft rocket-style cookstoves on diarrhea and acute respiratory infection: A cluster-randomized controlled trial in Western Province, Rwanda

Mon, 03/06/2019 - 23:00

by Miles A. Kirby, Corey L. Nagel, Ghislaine Rosa, Laura D. Zambrano, Sanctus Musafiri, Jean de Dieu Ngirabega, Evan A. Thomas, Thomas Clasen

Background

Unsafe drinking water and household air pollution (HAP) are major causes of morbidity and mortality among children under 5 in low- and middle-income countries. Household water filters and higher-efficiency biomass-burning cookstoves have been widely promoted to improve water quality and reduce fuel use, but there is limited evidence of their health effects when delivered programmatically at scale.

Methods and findings

In a large-scale program in Western Province, Rwanda, water filters and portable biomass-burning natural draft rocket-style cookstoves were distributed between September and December 2014 and promoted to over 101,000 households in the poorest economic quartile in 72 (of 96) randomly selected sectors. To assess the effects of the intervention, between August and December 2014, we enrolled 1,582 households that included a child under 4 years from 174 randomly selected village-sized clusters, half from intervention sectors and half from nonintervention sectors. At baseline, 76% of households relied primarily on an improved source for drinking water (piped, borehole, protected spring/well, or rainwater) and over 99% cooked primarily on traditional biomass-burning stoves. We conducted follow-up at 3 time-points between February 2015 and March 2016 to assess reported diarrhea and acute respiratory infections (ARIs) among children <5 years in the preceding 7 days (primary outcomes) and patterns of intervention use, drinking water quality, and air quality. The intervention reduced the prevalence of reported child diarrhea by 29% (prevalence ratio [PR] 0.71, 95% confidence interval [CI] 0.59–0.87, p = 0.001) and reported child ARI by 25% (PR 0.75, 95% CI 0.60–0.93, p = 0.009). Overall, more than 62% of households were observed to have water in their filters at follow-up, while 65% reported using the intervention stove every day, and 55% reported using it primarily outdoors. Use of both the intervention filter and intervention stove decreased throughout follow-up, while reported traditional stove use increased.
The intervention reduced the prevalence of households with detectable fecal contamination in drinking water samples by 38% (PR 0.62, 95% CI 0.57–0.68, p < 0.0001) but had no significant impact on 48-hour personal exposure to log-transformed fine particulate matter (PM2.5) concentrations among cooks (β = −0.089, p = 0.486) or children (β = −0.228, p = 0.127). The main limitations of this trial include the unblinded nature of the intervention, limited PM2.5 exposure measurement, and a reliance on reported intervention use and reported health outcomes.
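The reported risk reductions follow directly from the prevalence ratios (a 29% reduction corresponds to PR 0.71). A minimal sketch with hypothetical counts, since the abstract reports only the ratios and not the raw numerators:

```python
import math

def prevalence_ratio(cases_int, n_int, cases_ctl, n_ctl, z=1.96):
    """Prevalence ratio with a log-scale Wald CI from cohort counts."""
    pr = (cases_int / n_int) / (cases_ctl / n_ctl)
    se = math.sqrt(1 / cases_int - 1 / n_int + 1 / cases_ctl - 1 / n_ctl)
    lo = math.exp(math.log(pr) - z * se)
    hi = math.exp(math.log(pr) + z * se)
    return pr, lo, hi

# Hypothetical counts chosen only to reproduce the reported PR of 0.71.
pr, lo, hi = prevalence_ratio(142, 1000, 200, 1000)
print(f"PR = {pr:.2f} -> {1 - pr:.0%} reduction")  # PR = 0.71 -> 29% reduction
```

The trial itself estimated PRs from a cluster-randomized design, so its confidence intervals also account for clustering, which this toy calculation does not.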

Conclusions

Our findings indicate that the intervention improved household drinking water quality and reduced caregiver-reported diarrhea among children <5 years. It also reduced caregiver-reported ARI despite no evidence of improved air quality. Further research is necessary to ascertain longer-term intervention use and benefits and to explore the potential synergistic effects between diarrhea and ARI.

Trial registration

ClinicalTrials.gov NCT02239250.

Correction: Evaluation of a social protection policy on tuberculosis treatment outcomes: A prospective cohort study

Fri, 31/05/2019 - 23:00

by Karen Klein, Maria Paula Bernachea, Sarah Iribarren, Luz Gibbons, Cristina Chirico, Fernando Rubinstein

Retention and viral suppression in a cohort of HIV patients on antiretroviral therapy in Zambia: Regionally representative estimates using a multistage-sampling-based approach

Fri, 31/05/2019 - 23:00

by Izukanji Sikazwe, Ingrid Eshun-Wilson, Kombatende Sikombe, Nancy Czaicki, Paul Somwe, Aaloke Mody, Sandra Simbeza, David V. Glidden, Elizabeth Chizema, Lloyd B. Mulenga, Nancy Padian, Chris J. Duncombe, Carolyn Bolton-Moore, Laura K. Beres, Charles B. Holmes, Elvin Geng

Background

Although the success of HIV treatment programs depends on retention and viral suppression, routine program monitoring of these outcomes may be incomplete. We used data from the national electronic medical record (EMR) system in Zambia to enumerate a large and regionally representative cohort of patients on treatment. We traced a random sample of those with unknown outcomes (lost to follow-up) to document true care status and HIV RNA levels.

Methods and findings

On 31 July 2015, we selected facilities from 4 provinces in 12 joint strata defined by facility type and province with probability proportional to size. In each facility, we enumerated adults with at least 1 clinical encounter after treatment initiation in the previous 24 months. From this cohort, we identified lost-to-follow-up patients (defined as 90 or more days late for their last appointment), selected a random sample, and intensively reviewed their records and traced them via phone calls and in-person visits in the community. In 1 of 4 provinces, we also collected dried blood spots (DBSs) for plasma HIV RNA testing. We used inverse probability weights to incorporate sampling outcomes into Aalen–Johansen and Cox proportional hazards regression to estimate retention and viremia. We used a bias analysis approach to correct for the known inaccuracy of plasma HIV RNA levels obtained from DBSs. From a total of 64 facilities with 165,464 adults on ART, we selected 32 facilities with 104,966 patients, of whom 17,602 (17%) were lost to follow-up. Those lost to follow-up had a median age of 36 years and a median enrollment CD4 count of 220 cells/μl; 60% were female (N = 11,241), and 38% had WHO stage 1 clinical disease (N = 10,690). We traced 2,892 (16%) and found updated outcomes for 2,163 (75%): 412 (19%) had died, 836 (39%) were alive and in care at their original clinic, 457 (21%) had transferred to a new clinic, 255 (12%) were alive and out of care, and 203 (9%) were alive but we were unable to determine care status. Estimates using data from the EMR only suggested that 42.7% (95% CI 38.0%–47.1%) of new ART starters and 72.3% (95% CI 71.8%–73.0%) of all ART users were retained at 2 years.
After incorporating updated data through tracing, we found that 77.3% (95% CI 70.5%–84.0%) of new initiates and 91.2% (95% CI 90.5%–91.8%) of all ART users were retained (at original clinic or transferred), indicating that routine program data markedly underestimated retention in care. In Lusaka Province, HIV RNA levels greater than or equal to 1,000 copies/ml were present in 18.1% (95% CI 14.0%–22.3%) of patients in care, 71.3% (95% CI 58.2%–84.4%) of lost patients, and 24.7% (95% CI 21.0%–29.3%) overall. The main study limitations were imperfect response rates and the use of self-reported care status.
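The logic of the sampling-based correction can be illustrated with a toy calculation: true outcomes found in the traced random sample are weighted up to the full lost-to-follow-up population. The numbers below are hypothetical, and the sketch omits the inverse-probability-weighted Aalen–Johansen and Cox machinery the study actually used:

```python
# Toy illustration of correcting EMR-only retention via a traced sample
# (all counts hypothetical; the study's estimators are more elaborate).
total = 1000          # adults on ART enumerated in the EMR
in_care_emr = 600     # retained according to routine records
lost = total - in_care_emr

traced = 100          # random sample of the lost who were traced
found_in_care = 55    # of those, alive and in care (original clinic or transferred)

naive = in_care_emr / total
# Weight the traced outcomes up to the whole lost population.
corrected = (in_care_emr + (found_in_care / traced) * lost) / total
print(f"EMR only: {naive:.0%}; after tracing: {corrected:.0%}")  # EMR only: 60%; after tracing: 82%
```

This mirrors the qualitative finding in the abstract: once a fraction of the "lost" are documented as in care elsewhere, the corrected retention estimate rises well above the EMR-only figure.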

Conclusions

In this region of Zambia, routine program data underestimated retention, and the point prevalence of unsuppressed HIV RNA was high when lost patients were accounted for. Viremia was prevalent among patients who unofficially transferred: Sustained engagement remains a challenge among HIV patients in Zambia, and targeted sampling is an effective strategy to identify such gaps in the care cascade and monitor programmatic progress.

The missed potential of CD4 and viral load testing to improve clinical outcomes for people living with HIV in lower-resource settings

Wed, 29/05/2019 - 23:00

by Peter D. Ehrenkranz, Solange L. Baptiste, Helen Bygrave, Tom Ellman, Naoko Doi, Anna Grimsrud, Andreas Jahn, Thokozani Kalua, Rose Kolola Nyirenda, Michael O. Odo, Pascale Ondoa, Lara Vojnov, Charles B. Holmes

In a Policy Forum, Peter Ehrenkranz and colleagues discuss the contribution of CD4 and viral load testing to outcomes for people with HIV in low- and middle-income countries.

Malaria morbidity and mortality following introduction of a universal policy of artemisinin-based treatment for malaria in Papua, Indonesia: A longitudinal surveillance study

Wed, 29/05/2019 - 23:00

by Enny Kenangalem, Jeanne Rini Poespoprodjo, Nicholas M. Douglas, Faustina Helena Burdam, Ketut Gdeumana, Ferry Chalfein, Prayoga, Franciscus Thio, Angela Devine, Jutta Marfurt, Govert Waramori, Shunmay Yeung, Rintis Noviyanti, Pasi Penttinen, Michael J. Bangs, Paulus Sugiarto, Julie A. Simpson, Yati Soenarto, Nicholas M. Anstey, Ric N. Price

Background

Malaria control activities can have a disproportionately greater impact on Plasmodium falciparum than on P. vivax in areas where both species are coendemic. We investigated temporal trends in malaria-related morbidity and mortality in Papua, Indonesia, before and after introduction of a universal, artemisinin-based antimalarial treatment strategy for all Plasmodium species.

Methods and findings

A prospective, district-wide malariometric surveillance system was established in April 2004 to record all cases of malaria at community clinics and the regional hospital and maintained until December 2013. In March 2006, antimalarial treatment policy was changed to artemisinin combination therapy for uncomplicated malaria and intravenous artesunate for severe malaria due to any Plasmodium species. Over the study period, a total of 418,238 patients presented to the surveillance facilities with malaria. The proportion of patients with malaria requiring admission to hospital fell from 26.9% (7,745/28,789) in the pre–policy change period (April 2004 to March 2006) to 14.0% (4,786/34,117) in the late transition period (April 2008 to December 2009), a difference of −12.9% (95% confidence interval [CI] −13.5% to −12.2%). There was a significant fall in the mortality of patients presenting to the hospital with P. falciparum malaria (0.53% [100/18,965] versus 0.32% [57/17,691]; difference = −0.21% [95% CI −0.34 to −0.07]) but not in patients with P. vivax malaria (0.28% [21/7,545] versus 0.23% [28/12,397]; difference = −0.05% [95% CI −0.20 to 0.09]). Between the same periods, the overall proportion of malaria due to P. vivax rose from 44.1% (30,444/69,098) to 53.3% (29,934/56,125) in the community clinics and from 32.4% (9,325/28,789) to 44.1% (15,035/34,117) at the hospital. After controlling for population growth and changes in treatment-seeking behaviour, the incidence of P. falciparum malaria fell from 511 to 249 per 1,000 person-years (py) (incidence rate ratio [IRR] = 0.49 [95% CI 0.48–0.49]), whereas the incidence of P. vivax malaria fell from 331 to 239 per 1,000 py (IRR = 0.72 [95% CI 0.71–0.73]). The main limitations of our study were possible confounding from changes in healthcare provision, a growing population, and significant shifts in treatment-seeking behaviour following implementation of a new antimalarial policy.
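The reported incidence rate ratios are simply ratios of the adjusted incidence rates, which is easy to verify from the figures in the abstract (confidence intervals would require the underlying case counts and person-time):

```python
# Adjusted incidence rates per 1,000 person-years, before vs. after the
# universal artemisinin-based treatment policy (from the abstract).
def irr(rate_after, rate_before):
    """Incidence rate ratio: ratio of two incidence rates."""
    return rate_after / rate_before

print(round(irr(249, 511), 2))  # P. falciparum: 0.49
print(round(irr(239, 331), 2))  # P. vivax: 0.72
```

The larger relative fall for P. falciparum (IRR 0.49 versus 0.72) is the quantitative basis for the conclusion that its burden was reduced to a greater extent than that of P. vivax.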

Conclusions

In this area with high levels of antimalarial drug resistance, adoption of a universal policy of efficacious artemisinin-based therapy for malaria infections due to any Plasmodium species was associated with a significant reduction in total malaria-attributable morbidity and mortality. The burden of P. falciparum malaria was reduced to a greater extent than that of P. vivax malaria. In coendemic regions, the timely elimination of malaria will require that safe and effective radical cure of both the blood and liver stages of the parasite is widely available for all patients at risk of malaria.

Diagnosing growth in low-grade gliomas with and without longitudinal volume measurements: A retrospective observational study

Tue, 28/05/2019 - 23:00

by Hassan M. Fathallah-Shaykh, Andrew DeAtkine, Elizabeth Coffee, Elias Khayat, Asim K. Bag, Xiaosi Han, Paula Province Warren, Markus Bredel, John Fiveash, James Markert, Nidhal Bouaynaya, Louis B. Nabors

Background

Low-grade gliomas cause significant neurological morbidity by brain invasion. There is no universally accepted objective technique available for detection of enlargement of low-grade gliomas in the clinical setting; subjective evaluation by clinicians using visual comparison of longitudinal radiological studies is the gold standard. The aim of this study is to determine whether a computer-assisted diagnosis (CAD) method helps physicians detect earlier growth of low-grade gliomas.

Methods and findings

We reviewed 165 patients diagnosed with grade 2 gliomas, seen at the University of Alabama at Birmingham clinics from 1 July 2017 to 14 May 2018. MRI scans were collected during the spring and summer of 2018. Fifty-six gliomas met the inclusion criteria, including 19 oligodendrogliomas, 26 astrocytomas, and 11 mixed gliomas in 30 males and 26 females with a mean age of 48 years and a follow-up range spanning 150.2 months (the difference between the longest and shortest follow-up). None received radiation therapy. We also studied 7 patients with an imaging abnormality without pathological diagnosis, who were clinically stable at the time of retrospective review (14 May 2018). This study compared growth detection by 7 physicians aided by the CAD method with retrospective clinical reports. The tumors of 63 patients (56 + 7) in 627 MRI scans were digitized, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. The CAD method consisted of tumor segmentation, computing volumes, and flagging growth by the online abrupt change-of-point method, which considers only past measurements. Independent scientists have evaluated the segmentation method. In 29 of the 34 patients with progression, the median time to growth detection was only 14 months for CAD compared to 44 months for current standard-of-care radiological evaluation (p < 0.001). Using CAD, accurate detection of tumor enlargement was possible with a median of only 57% change in the tumor volume as compared to a median of 174% change of volume necessary to diagnose tumor growth using standard-of-care clinical methods (p < 0.001). In the radiologically stable group, CAD facilitated growth detection in 13 out of 22 patients. CAD did not detect growth in the imaging abnormality group.
The main limitation of this study was its retrospective design; nevertheless, the results depict the current state of a gold standard in clinical practice that allowed a significant increase in tumor volumes from baseline before detection. Such large increases in tumor volume would not be permitted in a prospective design. The number of glioma patients (n = 56) is a limitation; however, it is equivalent to the number of patients in phase II clinical trials.
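The abstract describes flagging growth with an online change-point method that uses only past measurements. As a generic illustration of that idea (not the authors' algorithm), the sketch below runs a one-sided CUSUM over a longitudinal volume series; the volumes, baseline window, slack, and threshold are all hypothetical:

```python
def detect_growth(volumes, n_baseline=4, slack=0.5, threshold=2.0):
    """One-sided CUSUM over longitudinal tumor volumes (mL).

    Uses only past measurements: a baseline mean is taken from the first
    n_baseline scans, then cumulative positive deviations beyond the slack
    are accumulated until they exceed the threshold.
    """
    baseline = sum(volumes[:n_baseline]) / n_baseline
    s = 0.0
    for i in range(n_baseline, len(volumes)):
        s = max(0.0, s + volumes[i] - baseline - slack)
        if s >= threshold:
            return i  # index of the scan at which growth is flagged
    return None

# Hypothetical series: stable scans followed by enlargement.
scans = [10.0, 10.2, 9.9, 10.1, 10.0, 11.5, 13.0, 14.8]
print(detect_growth(scans))  # flags growth at index 6
```

The design choice that matters here is the online constraint: each decision uses only scans already acquired, mirroring how a CAD tool would operate at the time of each clinic visit.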

Conclusions

The current practice of visual comparison of longitudinal MRI scans is associated with significant delays in detecting growth of low-grade gliomas. Our findings support the idea that physicians aided by CAD detect growth at significantly smaller volumes than physicians using visual comparison alone. This study does not answer the questions whether to treat or not and which treatment modality is optimal. Nonetheless, early growth detection sets the stage for future clinical studies that address these questions and whether early therapeutic interventions prolong survival and improve quality of life.

Predicting progression to active tuberculosis: A rate-limiting step on the path to elimination

Fri, 24/05/2019 - 23:00

by Ajit Lalvani, Luis C. Berrocal-Almanza, Alice Halliday

In a Perspective, Ajit Lalvani and colleagues discuss new approaches to predicting progression to active tuberculosis.

Evaluation of RESPOND, a patient-centred program to prevent falls in older people presenting to the emergency department with a fall: A randomised controlled trial

Fri, 24/05/2019 - 23:00

by Anna Barker, Peter Cameron, Leon Flicker, Glenn Arendts, Caroline Brand, Christopher Etherton-Beer, Andrew Forbes, Terry Haines, Anne-Marie Hill, Peter Hunter, Judy Lowthian, Samuel R. Nyman, Julie Redfern, De Villiers Smit, Nicholas Waldron, Eileen Boyle, Ellen MacDonald, Darshini Ayton, Renata Morello, Keith Hill

Background

Falls are a leading reason for older people presenting to the emergency department (ED), and many experience further falls. Little evidence exists to guide secondary prevention in this population. This randomised controlled trial (RCT) investigated whether a 6-month telephone-based patient-centred program—RESPOND—had an effect on falls and fall injuries in older people presenting to the ED after a fall.

Methods and findings

Community-dwelling people aged 60–90 years presenting to the ED with a fall and planned for discharge home within 72 hours were recruited from two EDs in Australia. Participants were enrolled if they could walk without hands-on assistance and use a telephone, and were free of cognitive impairment (Mini-Mental State Examination > 23). Recruitment occurred between 1 April 2014 and 29 June 2015. Participants were randomised to receive either RESPOND (intervention) or usual care (control). RESPOND comprised (1) home-based risk assessment; (2) 6 months of telephone-based education, coaching, goal setting, and support for evidence-based risk factor management; and (3) linkages to existing services. Primary outcomes were falls and fall injuries in the 12-month follow-up. Secondary outcomes included ED presentations, hospital admissions, fractures, death, falls risk, falls efficacy, and quality of life. Assessors blind to group allocation collected outcome data via postal calendars, telephone follow-up, and hospital records. There were 430 people in the primary outcome analysis—217 randomised to RESPOND and 213 to control. The mean age of participants was 73 years; 55% were female. Falls per person-year were 1.15 in the RESPOND group and 1.83 in the control (incidence rate ratio [IRR] 0.65 [95% CI 0.43–0.99]; P = 0.042). There was no significant difference in fall injuries (IRR 0.81 [0.51–1.29]; P = 0.374). The rate of fractures was significantly lower in the RESPOND group compared with the control (0.05 versus 0.12; IRR 0.37 [95% CI 0.15–0.91]; P = 0.03), but there were no significant differences in other secondary outcomes between groups: ED presentations, hospitalisations, falls risk, falls efficacy, and quality of life. There were two deaths in the RESPOND group and one in the control group. No adverse events or unintended harm were reported.
Limitations of this study were the high number of dropouts (n = 93); possible underreporting of falls, fall injuries, and hospitalisations across both groups; and the relatively small number of fracture events.

Conclusions

In this study, providing a telephone-based, patient-centred falls prevention program reduced falls, but not fall injuries, in older people presenting to the ED with a fall. Among secondary outcomes, only fractures were reduced. Adopting patient-centred strategies into routine clinical practice for falls prevention could offer an opportunity to improve outcomes and reduce falls in patients attending the ED.

Trial registration

Australian New Zealand Clinical Trials Registry (ACTRN12614000336684).

Research to improve differentiated HIV service delivery interventions: Learning to learn as we do

Tue, 21/05/2019 - 23:00

by Elvin H. Geng, Charles B. Holmes

In a Perspective, Elvin Geng and Charles Holmes discuss research on differentiated service delivery in HIV care.

The impact of community- versus clinic-based adherence clubs on loss from care and viral suppression for antiretroviral therapy patients: Findings from a pragmatic randomized controlled trial in South Africa

Tue, 21/05/2019 - 23:00

by Colleen F. Hanrahan, Sheree R. Schwartz, Mutsa Mudavanhu, Nora S. West, Lillian Mutunga, Valerie Keyser, Jean Bassett, Annelies Van Rie

Background

Adherence clubs, where groups of 25–30 patients who are virally suppressed on antiretroviral therapy (ART) meet for counseling and medication pickup, represent an innovative model to retain patients in care and facilitate task-shifting. This intervention replaces traditional clinical care encounters with a 1-hour group session every 2–3 months, and can be organized at a clinic or a community venue. We performed a pragmatic randomized controlled trial to compare loss from club-based care between community- and clinic-based adherence clubs.

Methods and findings

Patients on ART with undetectable viral load at Witkoppen Health and Welfare Centre in Johannesburg, South Africa, were randomized 1:1 to a clinic- or community-based adherence club. Clubs were held every other month. All participants received annual viral load monitoring and a medical exam at the clinic. Participants were referred back to clinic-based standard care if they missed a club visit and did not pick up ART medications within 5 days, had 2 consecutive late ART medication pickups, developed a disqualifying (excluding) comorbidity, or had viral rebound. From February 12, 2014, to May 31, 2015, we randomized 775 eligible adults into 12 pairs of clubs—376 (49%) into clinic-based clubs and 399 (51%) into community-based clubs. Characteristics were similar by arm: 65% female, median age 38 years, and median CD4 count 506 cells/mm3. Overall, 47% (95% CI 44%–51%) experienced the primary outcome of loss from club-based care. Among community-based club participants, the cumulative proportion lost from club-based care was 52% (95% CI 47%–57%), compared to 43% (95% CI 38%–48%, p = 0.002) among clinic-based club participants. The risk of loss to club-based care was higher among participants assigned to community-based clubs than among those assigned to clinic-based clubs (adjusted hazard ratio 1.38, 95% CI 1.02–1.87, p = 0.032), after accounting for sex, age, nationality, time on ART, baseline CD4 count, and employment status. Among those who were lost from club-based care (n = 367), the most common reason was missing a club visit and the associated ART medication pickup entirely (54%, 95% CI 49%–59%), and this was similar by arm (p = 0.086). Development of an excluding comorbidity occurred in 3% of those lost from club-based care overall and did not differ by arm (p = 0.816); no deaths occurred in either arm during club-based care. Viral rebound occurred in 13% of those lost from community club-based care and 21% of those lost from clinic-based care (p = 0.051).
In post hoc secondary analysis, among those referred to standard care, 72% (95% CI 68%–77%) reengaged in clinic-based care within 90 days of their club-based care discontinuation date. The main limitations of the trial are the lack of a comparison group receiving routine clinic-based standard care and the potential limited generalizability due to the single-clinic setting.

Conclusions

These findings demonstrate that overall loss from an adherence club intervention was high in this setting and that, importantly, it was worse in community-based adherence clubs compared to those based at the clinic. We urge caution in assuming that the effectiveness of clinic-based interventions will carry over to community settings, without a better understanding of patient-level factors associated with successful retention in care.

Trial registration

Pan African Clinical Trials Registry (PACTR201602001460157).

Heart failure and healthcare informatics

Tue, 21/05/2019 - 23:00

by Mohamed S. Anwar, Alan G. Japp, Nicholas L. Mills

Diagnostic tests, drug prescriptions, and follow-up patterns after incident heart failure: A cohort study of 93,000 UK patients

Tue, 21/05/2019 - 23:00

by Nathalie Conrad, Andrew Judge, Dexter Canoy, Jenny Tran, Johanna O’Donnell, Milad Nazarzadeh, Gholamreza Salimi-Khorshidi, F. D. Richard Hobbs, John G. Cleland, John J. V. McMurray, Kazem Rahimi

Background

Effective management of heart failure is complex, and ensuring evidence-based practice presents a major challenge to health services worldwide. Over the past decade, the United Kingdom introduced a series of national initiatives to improve evidence-based heart failure management, including a landmark pay-for-performance scheme in primary care and a national audit in secondary care started in 2004 and 2007, respectively. Quality improvement efforts have been evaluated within individual clinical settings, but patterns of care across the care continuum, although a critical component of chronic disease management, have not been studied. We designed this study to investigate patients’ trajectories of care around the time of diagnosis and their variation over time by age, sex, and socioeconomic status.

Methods and findings

For this retrospective population-based study, we used linked primary and secondary health records from a representative sample of the UK population provided by the Clinical Practice Research Datalink (CPRD). We identified 93,074 individuals newly diagnosed with heart failure between 2002 and 2014, with a mean age of 76.7 years, of whom 49% were women. We examined five indicators of care: (i) diagnosis care setting (inpatient or outpatient), (ii) posthospitalisation follow-up in primary care, (iii) diagnostic investigations, (iv) prescription of essential drugs, and (v) drug treatment dose. We used Poisson and linear regression models to calculate category-specific risk ratios (RRs) or adjusted differences and 95% confidence intervals (CIs), adjusting for year of diagnosis, age, sex, region, and socioeconomic status. From 2002 to 2014, indicators of care showed diverging trends. Outpatient diagnoses and follow-up after hospital discharge in primary care declined substantially (falling from 56% in 2002 to 36% in 2014, RR 0.64 [0.62, 0.67], and from 20% to 14%, RR 0.73 [0.65, 0.82], respectively). Primary care referral for diagnostic investigations and appropriate initiation of beta blockers and angiotensin-converting–enzyme inhibitors (ACE-Is) or angiotensin receptor blockers (ARBs) both increased significantly (37% versus 82%, RR 2.24 [2.15, 2.34] and 18% versus 63%, RR 3.48 [2.72, 4.43], respectively). Yet, the average daily dose prescribed remained below guideline recommendations (42% for ACE-Is or ARBs, 29% for beta blockers in 2014) and was largely unchanged beyond the first 30 days after diagnosis. Despite increasing rates of treatment initiation, the overall dose prescribed to patients in the 12 months following diagnosis improved little over the period of study (adjusted difference for the combined dose of beta blocker and ACE-I or ARB: +6% [+2%, +10%]). Women and patients aged over 75 years experienced significant gaps across all five indicators of care.
Our study was limited by the available clinical information, which did not include exact left ventricular ejection fraction values, investigations performed during hospital admissions, or information about follow-up in community heart failure clinics.

Conclusions

Management of heart failure patients in the UK presents important shortcomings that affect screening, continuity of care, and medication titration and disproportionally impact women and older people. National reporting and incentive schemes confined to individual clinical settings have been insufficient to identify these gaps and address patients’ long-term care needs.

Plasmodium vivax morbidity after radical cure: A cohort study in Central Vietnam

Fri, 17/05/2019 - 23:00

by Thanh Vinh Pham, Hong Van Nguyen, Angel Rosas Aguirre, Van Van Nguyen, Mario A. Cleves, Xa Xuan Nguyen, Thao Thanh Nguyen, Duong Thanh Tran, Hung Xuan Le, Niel Hens, Anna Rosanas-Urgell, Umberto D’Alessandro, Niko Speybroeck, Annette Erhart

Background

In Vietnam, the share of malaria cases due to P. vivax relative to P. falciparum has steadily increased over the past decade, reaching 50%. This, together with the spread of multidrug-resistant Plasmodium falciparum, is a major challenge for malaria elimination. A 2-year prospective cohort study to assess P. vivax morbidity after radical cure treatment and related risk factors was conducted in Central Vietnam.

Methods and findings

The study was implemented between April 2009 and December 2011 in four neighboring villages in a remote forested area of Quang Nam province. P. vivax-infected patients were treated radically with chloroquine (CQ; 25 mg/kg over 3 days) and primaquine (PQ; 0.5 mg/kg/day for 10 days) and visited monthly (malaria symptoms and blood sampling) for up to 2 years. Time to first vivax recurrence was estimated by Kaplan–Meier survival analysis, and risk factors for first and recurrent infections were identified by Cox regression models. Among the 260 P. vivax patients (61% males [159/260]; age range 3–60) recruited, 240 completed the 10-day treatment, 223 entered the second month of follow-up, and 219 were followed for at least 12 months. Most individuals (76.78%, 171/223) had recurrent vivax infections identified by molecular methods (polymerase chain reaction [PCR]); in about half of them (55.61%, 124/223), infection was detected by microscopy, and 84 individuals (37.67%) had symptomatic recurrences. Median time to first recurrence by PCR was 118 days (IQR 59–208). The estimated probability of remaining free of recurrence by month 24 was 20.40% (95% CI [14.42; 27.13]) by PCR, 42.52% (95% CI [35.41; 49.44]) by microscopy, and 60.69% (95% CI [53.51; 67.11]) for symptomatic recurrences. The main risk factor for recurrence (first or recurrent) was prior P. falciparum infection. The main limitations of this study are the age of the results and the absence of a comparator arm, which does not allow estimating the proportion of vivax relapses among recurrent infections.
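Recurrence-free probabilities such as these are typically obtained with the Kaplan–Meier estimator named in the methods. A self-contained sketch on toy data (the times below are illustrative, loosely echoing the reported median and IQR, not the study dataset):

```python
def km_curve(times, events):
    """Kaplan-Meier recurrence-free probability.

    times: follow-up time per patient (e.g., days to recurrence or censoring);
    events: 1 = recurrence observed, 0 = censored.
    Returns (time, survival) steps at each time with at least one event.
    """
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i          # patients still under follow-up at time t
        recurrences = 0
        while i < n and data[i][0] == t:
            recurrences += data[i][1]
            i += 1
        if recurrences:
            s *= 1 - recurrences / at_risk
            curve.append((t, s))
    return curve

# Toy data: 5 patients, 3 observed recurrences, 2 censored.
steps = km_curve([59, 118, 118, 208, 400], [1, 1, 0, 1, 0])
print([(t, round(s, 2)) for t, s in steps])  # [(59, 0.8), (118, 0.6), (208, 0.3)]
```

Censored patients contribute to the risk set until they drop out, which is why the estimator differs from a naive proportion of recurrences among all patients.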

Conclusion

A substantial number of P. vivax recurrences, mainly submicroscopic (SM) and asymptomatic, were observed after high-dose PQ treatment (total dose 5.0 mg/kg). Prior P. falciparum infection was an important risk factor for all types of vivax recurrences. Malaria elimination efforts need to address this largely undetected P. vivax transmission by simultaneously tackling the reservoirs of both P. falciparum and P. vivax infections.

Women’s and girls’ experiences of menstruation in low- and middle-income countries: A systematic review and qualitative metasynthesis

Thu, 16/05/2019 - 23:00

by Julie Hennegan, Alexandra K. Shannon, Jennifer Rubli, Kellogg J. Schwab, G. J. Melendez-Torres

Background

Attention to women’s and girls’ menstrual needs is critical for global health and gender equality. The importance of this neglected experience has been elucidated by a growing body of qualitative research, which we systematically reviewed and synthesised.

Methods and findings

We undertook systematic searching to identify qualitative studies of women’s and girls’ experiences of menstruation in low- and middle-income countries (LMICs). Of 6,892 citations screened, 76 studies reported in 87 citations were included. Studies captured the experiences of over 6,000 participants from 35 countries. This included 45 studies from sub-Saharan Africa (with the greatest number of studies from Kenya [n = 7], Uganda [n = 6], and Ethiopia [n = 5]), 21 from South Asia (including India [n = 12] and Nepal [n = 5]), 8 from East Asia and the Pacific, 5 from Latin America and the Caribbean, 5 from the Middle East and North Africa, and 1 study from Europe and Central Asia. Through synthesis, we identified overarching themes and their relationships to develop a directional model of menstrual experience. This model maps distal and proximal antecedents of menstrual experience through to the impacts of this experience on health and well-being. The sociocultural context, including menstrual stigma and gender norms, influenced experiences by limiting knowledge about menstruation, limiting social support, and shaping internalised and externally enforced behavioural expectations. Resource limitations underlay inadequate physical infrastructure to support menstruation, as well as an economic environment restricting access to affordable menstrual materials. Menstrual experience included multiple themes: menstrual practices, perceptions of practices and environments, confidence, shame and distress, and containment of bleeding and odour. These components of experience were interlinked and contributed to negative impacts on women’s and girls’ lives. Impacts included harms to physical and psychological health as well as education and social engagement. Our review is limited by the available studies. Study quality was varied, with 18 studies rated as high, 35 medium, and 23 low trustworthiness. Sampling and analysis tended to be untrustworthy in lower-quality studies. 
Studies focused on the experiences of adolescent girls were most strongly represented, and we achieved early saturation for this group. Reflecting the focus of menstrual health research globally, there was an absence of studies focused on adult women and those from certain geographical areas.

Conclusions

Through synthesis of the extant qualitative studies of menstrual experience, we highlighted consistent challenges and developed an integrated model of menstrual experience. This model hypothesises directional pathways that could be tested by future studies and may serve as a framework for program and policy development by highlighting critical antecedents and the pathways through which interventions could improve women’s and girls’ health and well-being.

Review protocol registration

The review protocol registration is PROSPERO: CRD42018089581.

Limiting global warming to 1.5 to 2.0°C—A unique and necessary role for health professionals

Tue, 14/05/2019 - 23:00

by Edward W. Maibach, Mona Sarfaty, Mark Mitchell, Rob Gould

In an Editorial, Edward Maibach and colleagues discuss the important role of health professionals in future responses to threats of climate change.

Predicting seizures in pregnant women with epilepsy: Development and external validation of a prognostic model

Mon, 13/05/2019 - 23:00

by John Allotey, Borja M. Fernandez-Felix, Javier Zamora, Ngawai Moss, Manny Bagary, Andrew Kelso, Rehan Khan, Joris A. M. van der Post, Ben W. Mol, Alexander M. Pirie, Dougall McCorry, Khalid S. Khan, Shakila Thangaratinam

Background

Seizures are the main cause of maternal death in women with epilepsy, but there are no tools for predicting seizures in pregnancy. We set out to develop and validate a prognostic model, using information collected during the antenatal booking visit, to predict seizure risk at any time in pregnancy and until 6 weeks postpartum in women with epilepsy on antiepileptic drugs.

Methods and findings

We used datasets of a prospective cohort study (EMPiRE) of 527 pregnant women with epilepsy on medication recruited from 50 hospitals in the UK (4 November 2011–17 August 2014). The model development cohort comprised 399 women whose antiepileptic drug doses were adjusted based on clinical features only; the validation cohort comprised 128 women whose drug dose adjustments were informed by serum drug levels. The outcome was epileptic (non-eclamptic) seizure captured using diary records. We fitted the model using LASSO (least absolute shrinkage and selection operator) regression, and reported the performance using the C-statistic (scale 0–1; values above 0.5 indicate discrimination better than chance) and calibration slope (values near 1 indicate accurate risk predictions) with 95% confidence intervals (CIs). We determined the net benefit (a weighted sum of true positive and false positive classifications) of using the model, with various probability thresholds, to aid clinicians in making individualised decisions regarding, for example, referral to tertiary care, frequency and intensity of monitoring, and changes in antiepileptic medication. Seizures occurred in 183 women (46%, 183/399) in the model development cohort and in 57 women (45%, 57/128) in the validation cohort. The model included age at first seizure, baseline seizure classification, history of mental health disorder or learning difficulty, occurrence of tonic-clonic and non-tonic-clonic seizures in the 3 months before pregnancy, previous admission to hospital for seizures during pregnancy, and baseline dose of lamotrigine and levetiracetam. The C-statistic was 0.79 (95% CI 0.75, 0.84). On external validation, the model showed good performance (C-statistic 0.76, 95% CI 0.66, 0.85; calibration slope 0.93, 95% CI 0.44, 1.41) but with imprecise estimates. The EMPiRE model showed the highest net benefit for predicted probability thresholds between 12% and 99%.
Limitations of this study include the varied gestational ages of women at recruitment, retrospective patient recall of seizure history, potential variations in seizure classification, the small number of events in the validation cohort, and the clinical utility restricted to decision-making thresholds above 12%. The model findings may not be generalisable to low- and middle-income countries, or when information on all predictors is not available.
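The two performance measures used above can be made concrete with a short sketch. This is hypothetical demonstration code with made-up predictions, not the EMPiRE analysis: the C-statistic is the proportion of (seizure, no-seizure) pairs in which the woman who had a seizure received the higher predicted risk, and net benefit at a threshold pt weighs true positives against false positives by pt/(1 − pt).

```python
# Illustrative C-statistic and net-benefit calculations on toy predictions.

def c_statistic(probs, outcomes):
    """Fraction of (event, non-event) pairs ranked correctly; ties count 0.5."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    concordant = sum(1.0 if p > q else 0.5 if p == q else 0.0
                     for p in pos for q in neg)
    return concordant / (len(pos) * len(neg))

def net_benefit(probs, outcomes, threshold):
    """Net benefit at probability threshold pt: TP/n - FP/n * pt/(1 - pt)."""
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical predicted seizure risks for six women (1 = seizure occurred)
probs = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0]
print(c_statistic(probs, outcomes))      # 8 of 9 pairs concordant
print(net_benefit(probs, outcomes, 0.5))
```

A C-statistic of 0.5 corresponds to ranking events and non-events no better than chance, which is why values above 0.5 indicate useful discrimination.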

Conclusions

The EMPiRE model showed good performance in predicting the risk of seizures in pregnant women with epilepsy who are prescribed antiepileptic drugs. Integrating the tool, deployed as a simple nomogram, into the antenatal booking visit could help to optimise care in women with epilepsy.

Individualized decision aid for diverse women with lupus nephritis (IDEA-WON): A randomized controlled trial

Wed, 08/05/2019 - 23:00

by Jasvinder A. Singh, Liana Fraenkel, Candace Green, Graciela S. Alarcón, Jennifer L. Barton, Kenneth G. Saag, Leslie M. Hanrahan, Sandra C. Raymond, Robert P. Kimberly, Amye L. Leong, Elyse Reyes, Richard L. Street Jr., Maria E. Suarez-Almazor, Guy S. Eakin, Laura Marrow, Charity J. Morgan, Brennda Caro, Jeffrey A. Sloan, Bochra Jandali, Salvador R. Garcia, Jennifer Grossman, Kevin L. Winthrop, Laura Trupin, Maria Dall’Era, Alexa Meara, Tara Rizvi, W. Winn Chatham, Jinoos Yazdany

Background

Treatment decision-making regarding immunosuppressive therapy is challenging for individuals with lupus. We assessed the effectiveness of a decision aid for immunosuppressive therapy in lupus nephritis.

Methods and findings

In a United States multicenter, open-label, randomized controlled trial (RCT), adult women with lupus nephritis, mostly from racial/ethnic minority backgrounds with low socioeconomic status (SES), seen in inpatient or outpatient settings, were randomized to an individualized, culturally tailored, computerized decision aid versus the American College of Rheumatology (ACR) lupus pamphlet (1:1 ratio), using computer-generated randomization. We hypothesized that the co-primary outcomes of decisional conflict and informed choice regarding immunosuppressive medications would improve more in the decision aid group. Of 301 randomized women, 298 were analyzed; 47% were African-American, 26% Hispanic, and 15% white. Mean age (standard deviation [SD]) was 37 (12) years, 57% had an annual income of <$40,000, and 36% had a high school education or less. Compared with the provision of the ACR lupus pamphlet (n = 147), participants randomized to the decision aid (n = 151) had (1) a clinically meaningful and statistically significant reduction in decisional conflict, 21.8 (standard error [SE], 2.5) versus 12.7 (SE, 2.0; p = 0.005), and (2) no difference in informed choice in the main analysis, 41% versus 31% (p = 0.08), but a clinically meaningful and statistically significant difference in the sensitivity analysis (net values for immunosuppressives positive [in favor] versus negative [against]), 50% versus 35% (p = 0.006). Unresolved decisional conflict was lower in the decision aid versus pamphlet group, 22% versus 44% (p < 0.001). Significantly more patients in the decision aid versus pamphlet group rated the information as excellent for understanding lupus nephritis (49% versus 33%), risk factors (43% versus 27%), and medication options (50% versus 33%; p ≤ 0.003 for all); and ease of use of the materials was rated higher in the decision aid versus pamphlet group (51% versus 38%; p = 0.006).
Key study limitations were the exclusion of men, short follow-up, and the lack of clinical outcomes, including medication adherence.

Conclusions

An individualized decision aid was more effective than usual care in reducing decisional conflict for choice of immunosuppressive medications in women with lupus nephritis.

Trial registration

Clinicaltrials.gov, NCT02319525.

Effects of a clinical medication review focused on personal goals, quality of life, and health problems in older persons with polypharmacy: A randomised controlled trial (DREAMeR-study)

Wed, 08/05/2019 - 23:00

by Sanne Verdoorn, Henk-Frans Kwint, Jeanet W. Blom, Jacobijn Gussekloo, Marcel L. Bouvy

Background

Clinical medication reviews (CMRs) are increasingly performed in older persons with multimorbidity and polypharmacy to reduce drug-related problems (DRPs). However, there is limited evidence that a CMR can improve clinical outcomes, and little attention has been paid to patients’ preferences and needs. The aim of this study was to investigate the effect of a patient-centred CMR, focused on personal goals, on health-related quality of life (HR-QoL) and the number of health problems.

Methods and findings

This study was a randomised controlled trial (RCT) performed in 35 community pharmacies and cooperating general practices in the Netherlands. Community-dwelling older persons (≥70 years) with polypharmacy (≥7 long-term medications) were randomly assigned to usual care or to receive a CMR. Randomisation was performed at the patient level per pharmacy using block randomisation. The primary outcomes were HR-QoL (assessed with EuroQol [EQ]-5D-5L and EQ-Visual Analogue Scale [VAS]) and number of health problems (such as pain or dizziness), after 3 and 6 months. Health problems were measured with a self-developed written questionnaire as the total number of health problems and number of health problems with a moderate to severe impact on daily life. Between April 2016 and February 2017, we recruited 629 participants (54% females, median age 79 years) and randomly assigned them to receive the intervention (n = 315) or usual care (n = 314). Over 6 months, in the intervention group, HR-QoL measured with EQ-VAS increased by 3.4 points (95% confidence interval [CI] 0.94 to 5.8; p = 0.006), and the number of health problems with impact on daily life decreased by 12% (difference at 6 months −0.34; 95% CI −0.62 to −0.044; p = 0.024) as compared with the control group. There was no significant difference between the intervention group and control group for HR-QoL measured with EQ-5D-5L (difference at 6 months = −0.0022; 95% CI −0.024 to 0.020; p = 0.85) or total number of health problems (difference at 6 months = −0.30; 95% CI −0.64 to 0.054; p = 0.099). The main study limitations include the risk of bias due to the lack of blinding and difficulties in demonstrating which part of this complex intervention (for example, goal setting, extra attention to patients, reducing health problems, drug changes) contributed to the effects that we observed.
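The between-group effects reported above (for example, the 3.4-point EQ-VAS difference with its 95% CI) take the form of a difference in means with a confidence interval. As a simplified illustration only, with hypothetical change scores and a normal approximation (the published analysis may have used more elaborate longitudinal models), such a comparison can be sketched as:

```python
# Difference in mean change scores between two groups, with an approximate
# 95% CI via the normal approximation. Toy data, not trial data.

import math

def mean_diff_ci(group_a, group_b, z=1.96):
    """Return (difference a - b, CI lower bound, CI upper bound)."""
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    # Sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in group_a) / (len(group_a) - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (len(group_b) - 1)
    se = math.sqrt(va / len(group_a) + vb / len(group_b))  # SE of the difference
    diff = ma - mb
    return diff, diff - z * se, diff + z * se

# Hypothetical EQ-VAS change scores (intervention vs. usual care)
diff, lo, hi = mean_diff_ci([5, 3, 4, 6, 2], [1, 2, 0, 3, -1])
print(f"difference {diff:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI that excludes zero, as for the EQ-VAS result above, corresponds to a statistically significant between-group difference at the 5% level.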

Conclusions

In this study, we observed that a CMR focused on personal goals improved older patients’ well-being by increasing quality of life measured with the EQ-VAS and decreasing the number of health problems with an impact on daily life, although it did not significantly affect quality of life measured with the EQ-5D. Including patients’ personal goals and preferences in a medication review may help to establish these effects on outcomes that are relevant to older patients’ lives.

Trial registration

Netherlands Trial Register, NTR5713.