PLoS Medicine


Effectiveness of a brief group behavioural intervention on psychological distress in young adolescent Syrian refugees: A randomised controlled trial

Fri, 12/08/2022 - 16:00

by Richard A. Bryant, Aiysha Malik, Ibrahim Said Aqel, Maha Ghatasheh, Rand Habashneh, Katie S. Dawson, Sarah Watts, Mark J. D. Jordans, Felicity L. Brown, Mark van Ommeren, Aemal Akhtar

Background

Millions of young adolescents in low- and middle-income countries (LMICs) affected by humanitarian crises experience elevated rates of poor mental health. There is a need for scalable programs that can improve the mental health of young adolescents. This study evaluated the effectiveness of a nonspecialist-delivered, group-based intervention, Early Adolescent Skills for Emotions (EASE), in improving young adolescents’ mental health.

Methods and findings

In this single-blind, parallel, controlled trial, Syrian refugees aged 10 to 14 years in Jordan were identified through screening for psychological distress, defined as scores ≥15 on the Paediatric Symptom Checklist. Participants were randomised (1:1.6 ratio) to either EASE or enhanced usual care (EUC), which involved referral to local psychosocial services. Participants were aware of treatment allocation, but assessors were blinded. Primary outcomes were scores on the Paediatric Symptom Checklist (PSC; internalising, externalising, and attentional difficulty scales) assessed at week 0, 9 weeks, and 3 months after treatment (primary outcome time point). It was hypothesised that EASE would result in greater reductions in internalising symptoms than EUC. Secondary outcomes were depression, posttraumatic stress, well-being, functioning, school belongingness, and caregivers’ parenting and mental health. Between June 2019 and January 2020, 1,842 young adolescent refugees were screened for eligibility on the basis of psychological distress. There were 520 adolescents (28.2%) who screened positive, of whom 471 (90.6%) agreed to enter the trial. Overall, 185 were assigned to EASE and 286 to EUC, and 169 and 254 were retained at 3 months for EASE and EUC, respectively. Intent-to-treat analyses indicated that at 3 months, EASE resulted in greater reduction on the PSC-internalising scale than EUC (estimated mean difference 0.69, 95% CI 0.19 to 1.19; p = 0.007; effect size, 0.38), but there were no differences for PSC-externalising (estimated mean difference 0.24, 95% CI −0.43 to 0.91; p = 0.49; effect size, −0.10) or PSC-attentional problem (estimated mean difference −0.01, 95% CI −0.51 to 0.54; p = 0.97; effect size, −0.01) scores, or on depression, posttraumatic stress, well-being, functioning, or school belongingness.
Relative to EUC, caregivers in EASE had less psychological distress (estimated mean difference 1.95, 95% CI 0.71 to 3.19; p = 0.002) and less inconsistent disciplinary parenting (mean difference 1.54, 95% CI 1.03 to 2.05; p < 0.001). Secondary analyses that (a) focused on adolescents with probable internalising disorders; (b) included only adolescents who completed the 3-month assessment; and (c) controlled for trauma exposure did not alter the primary results. Mediation analysis indicated that for caregivers in the EASE condition, reduction in inconsistent disciplinary parenting was associated with reduced attentional (β = 0.11, SE 0.07; 95% CI 0.003, 0.274) and internalising (β = 0.11, SE 0.07; 95% CI 0.003, 0.274) problems in their children. No adverse events were attributable to the intervention. A limitation was that EUC was not matched to EASE in terms of facilitator attention or group involvement.
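As a quick check on the figures above, the standard error, z-statistic, and two-sided p-value of the primary outcome can be recovered from the estimated mean difference and its 95% CI under a normal approximation (an illustrative calculation, not part of the trial's analysis):

```python
import math

def p_from_ci(diff, lo, hi, level_z=1.96):
    """Recover the standard error, z-statistic, and two-sided p-value
    from a point estimate and its 95% CI, assuming an approximately
    normal sampling distribution."""
    se = (hi - lo) / (2 * level_z)
    z = diff / se
    # Normal CDF via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return se, z, p

# PSC-internalising difference reported above: 0.69 (95% CI 0.19 to 1.19)
se, z, p = p_from_ci(0.69, 0.19, 1.19)
```

The recovered p-value of about 0.007 matches the value reported for the PSC-internalising difference.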

Conclusions

EASE led to reduced internalising problems in young refugee adolescents and was associated with reduced distress and less inconsistent disciplinary parenting in caregivers. The intervention has potential as a scalable means of mitigating young adolescents’ emotional difficulties in LMICs.

Trial registration

Prospectively registered at Australian and New Zealand Clinical Trials Registry: ACTRN12619000341123.

Lipids and atrial fibrillation: New insights into a paradox

Thu, 11/08/2022 - 16:00

by Dimitrios Sagris, Stephanie L. Harrison, Gregory Y. H. Lip

In this Perspective, Dimitrios Sagris, Stephanie Harrison, and Gregory Lip discuss new evidence concerning the paradoxical relationship between circulating lipids and atrial fibrillation.

Lipid levels in midlife and risk of atrial fibrillation over 3 decades—Experience from the Swedish AMORIS cohort: A cohort study

Thu, 11/08/2022 - 16:00

by Mozhu Ding, Alexandra Wennberg, Bruna Gigante, Göran Walldius, Niklas Hammar, Karin Modig

Background

The role of cholesterol levels in the development of atrial fibrillation (AF) is still controversial. In addition, whether and to what extent apolipoproteins are associated with the risk of AF has rarely been studied. In this study, we aimed to investigate the association between blood lipid levels in midlife and the subsequent risk of new-onset AF.

Methods and findings

This population-based study included 65,136 individuals aged 45 to 60 years without overt cardiovascular diseases (CVDs) from the Swedish Apolipoprotein-Related Mortality Risk (AMORIS) cohort. Lipids were measured between 1985 and 1996, and individuals were followed until December 31, 2019 for incident AF (i.e., the study outcome). Hazard ratios (HRs) with 95% confidence intervals (CIs) were estimated using Cox regression, adjusting for age, sex, and socioeconomic status. Over a mean follow-up of 24.2 years (standard deviation 7.5, range 0.2 to 35.9), 13,871 (21.3%) incident AF cases occurred. Higher levels of total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) were statistically significantly associated with a lower risk of AF during the first 5 years of follow-up (HR = 0.61, 95% CI: 0.41 to 0.99, p = 0.013; HR = 0.64, 95% CI: 0.45 to 0.92, p = 0.016), but not thereafter (HR ranging from 0.94 [95% CI: 0.89 to 1.00, p = 0.038] to 0.96 [95% CI: 0.77 to 1.19, p > 0.05]). Lower levels of high-density lipoprotein cholesterol (HDL-C) and apolipoprotein A-I (ApoA-I) and a higher triglycerides (TG)/HDL-C ratio were statistically significantly associated with a higher risk of AF during the entire follow-up (HR ranging from 1.13 [95% CI: 1.07 to 1.19, p < 0.001] to 1.53 [95% CI: 1.12 to 2.00, p = 0.007]). The apolipoprotein B (ApoB)/ApoA-I ratio was not associated with AF risk. The observed associations were similar among those who developed incident heart failure (HF)/coronary heart disease (CHD) and those who did not. The main limitations of this study include lack of adjustment for lifestyle factors and high blood pressure, leading to potential residual confounding.
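For intuition about the scale of the cohort, the crude AF incidence rate implied by the numbers above can be computed directly (a back-of-envelope sketch only; the reported HRs come from covariate-adjusted Cox models, which properly handle censoring):

```python
# Crude AF incidence implied by the figures above: 13,871 incident
# cases among 65,136 individuals followed for a mean of 24.2 years.
cases = 13_871
n_individuals = 65_136
mean_follow_up_years = 24.2

person_years = n_individuals * mean_follow_up_years
rate_per_1000_py = cases / person_years * 1000  # cases per 1,000 person-years
```

This gives roughly 8.8 AF cases per 1,000 person-years, consistent with the reported cumulative proportion of 21.3%.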

Conclusions

High TC and LDL-C in midlife were associated with a lower risk of AF, but this association was present only within 5 years of lipid measurement and not thereafter. In contrast, low HDL-C and ApoA-I and a high TG/HDL-C ratio were associated with an increased risk of AF over almost 35 years of follow-up. The ApoB/ApoA-I ratio was not associated with AF risk.

Income differences in COVID-19 incidence and severity in Finland among people with foreign and native background: A population-based cohort study of individuals nested within households

Wed, 10/08/2022 - 16:00

by Sanni Saarinen, Heta Moustgaard, Hanna Remes, Riikka Sallinen, Pekka Martikainen

Background

Although intrahousehold transmission is a key source of Coronavirus Disease 2019 (COVID-19) infections, studies to date have not analysed socioeconomic risk factors at the household level or household clustering of severe COVID-19. We quantify household income differences and household clustering of COVID-19 incidence and severity.

Methods and findings

We used register-based cohort data with individual-level linkage across various administrative registers for the total Finnish population living in working-age private households (N = 4,315,342). Incident COVID-19 cases (N = 38,467) were identified from the National Infectious Diseases Register from 1 July 2020 to 22 February 2021. Severe cases (N = 625) were defined as having at least 3 consecutive days of inpatient care with a COVID-19 diagnosis and identified from the Care Register for Health Care between 1 July 2020 and 31 December 2020. We used 2-level logistic regression with individuals nested within households to estimate COVID-19 incidence and case severity among those infected. Adjusted for age, sex, and regional characteristics, the incidence of COVID-19 was higher (odds ratio [OR] 1.67, 95% CI 1.58 to 1.77, p < 0.001, 28.4% of infections) among individuals in the lowest household income quintile than among those in the highest quintile (18.9%). The difference attenuated (OR 1.23, 1.16 to 1.30, p < 0.001) when controlling for foreign background but not when controlling for other household-level risk factors. In fact, we found a clear income gradient in incidence only among people with foreign background but none among those with native background. The odds of severe illness among those infected were also higher in the lowest income quintile (OR 1.97, 1.52 to 2.56, p < 0.001, 28.0% versus 21.6% in the highest quintile), but this difference was fully attenuated (OR 1.08, 0.77 to 1.52, p = 0.64) when controlling for other individual-level risk factors (comorbidities, occupational status, and foreign background). Both incidence and severity were strongly clustered within households: Around 77% of the variation in incidence and 20% in severity were attributable to differences between households. The main limitation of our study was that the test uptake for COVID-19 may have differed between population subgroups.
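The household clustering reported above (around 77% of variation in incidence) is an intraclass correlation; for 2-level logistic models it is commonly computed with the latent-variable formula, sketched here with a hypothetical household-level variance chosen purely for illustration:

```python
import math

def icc_logistic(household_variance):
    """Latent-variable intraclass correlation for a 2-level logistic
    model: the individual-level residual variance is fixed at pi^2/3,
    the variance of the standard logistic distribution."""
    residual = math.pi ** 2 / 3
    return household_variance / (household_variance + residual)

# A household-level variance of about 11.0 on the log-odds scale (a
# hypothetical value, not taken from the study) would correspond to
# roughly the 77% household clustering of incidence reported above.
icc = icc_logistic(11.0)
```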

Conclusions

Low household income appears to be a strong risk factor for both COVID-19 incidence and case severity, but the income differences are largely driven by having foreign background. The strong household clustering of incidence and severity highlights the importance of household context in the prevention and mitigation of COVID-19 outcomes.

Evaluation of an on-site sanitation intervention against childhood diarrhea and acute respiratory infection 1 to 3.5 years after implementation: Extended follow-up of a cluster-randomized controlled trial in rural Bangladesh

Mon, 08/08/2022 - 16:00

by Jesse D. Contreras, Mahfuza Islam, Andrew Mertens, Amy J. Pickering, Benjamin F. Arnold, Jade Benjamin-Chung, Alan E. Hubbard, Mahbubur Rahman, Leanne Unicomb, Stephen P. Luby, John M. Colford Jr, Ayse Ercumen

Background

Diarrhea and acute respiratory infection (ARI) are leading causes of death in children. The WASH Benefits Bangladesh trial implemented a multicomponent sanitation intervention that led to a 39% reduction in the prevalence of diarrhea among children and a 25% reduction for ARI, measured 1 to 2 years after intervention implementation. We measured longer-term intervention effects on these outcomes between 1 and 3.5 years after intervention implementation, including periods with differing intensity of behavioral promotion.

Methods and findings

WASH Benefits Bangladesh was a cluster-randomized controlled trial of water, sanitation, hygiene, and nutrition interventions (NCT01590095). The sanitation intervention included provision of or upgrades to improved latrines, sani-scoops for feces removal, children’s potties, and in-person behavioral promotion. Promotion was intensive up to 2 years after intervention initiation, decreased in intensity between years 2 and 3, and stopped after 3 years. Access to and reported use of latrines were high in both arms, and latrine quality was significantly improved by the intervention, while use of child feces management tools was low. We enrolled a random subset of households from the sanitation and control arms into a longitudinal substudy, which measured child health with quarterly visits between 1 and 3.5 years after intervention implementation. The study period therefore included approximately 1 year of high-intensity promotion, 1 year of low-intensity promotion, and 6 months with no promotion. We assessed intervention effects on diarrhea and ARI prevalence among children <5 years through intention-to-treat analysis using generalized linear models with robust standard errors. Masking was not possible during data collection, but data analysis was masked. We enrolled 720 households (360 per arm) from the parent trial and made 9,800 child observations between June 2014 and December 2016. Over the entire study period, diarrheal prevalence was lower among children in the sanitation arm (11.9%) compared to the control arm (14.5%) (prevalence ratio [PR] = 0.81, 95% CI 0.66, 1.00, p = 0.05; prevalence difference [PD] = −0.027, 95% CI −0.053, 0, p = 0.05). ARI prevalence did not differ between sanitation (21.3%) and control (22.7%) arms (PR = 0.93, 95% CI 0.82, 1.05, p = 0.23; PD = −0.016, 95% CI −0.043, 0.010, p = 0.23). There were no significant differences in intervention effects between periods with high-intensity versus low-intensity/no promotion.
Study limitations include use of caregiver-reported symptoms to define health outcomes and limited data collected after promotion ceased.
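The crude effect measures implied by the prevalences above can be reproduced directly; note that the published PR and PD come from GLMs with robust standard errors that respect the cluster randomization, so they differ slightly from these unadjusted figures:

```python
# Crude prevalence ratio and difference from the prevalences reported
# above: 11.9% (sanitation arm) versus 14.5% (control arm).
p_sanitation = 0.119
p_control = 0.145

crude_pr = p_sanitation / p_control   # published adjusted PR: 0.81
crude_pd = p_sanitation - p_control   # published adjusted PD: -0.027
```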

Conclusions

The observed effect of the WASH Benefits Bangladesh sanitation intervention on diarrhea in children appeared to be sustained for at least 3.5 years after implementation, including 1.5 years after heavy promotion ceased. Existing latrine access was high in the study setting, suggesting that improving on-site latrine quality can deliver health benefits when latrine use practices are in place. Further work is needed to understand how latrine adoption can be achieved and sustained in settings with low existing access and how sanitation programs can adopt transformative approaches to excreta management, including safe disposal of child and animal feces, to generate a hygienic home environment.

Trial registration

ClinicalTrials.gov; NCT01590095; https://clinicaltrials.gov/ct2/show/NCT01590095.

Correction: Expanding syphilis test uptake using rapid dual self-testing for syphilis and HIV among men who have sex with men in China: A multiarm randomized controlled trial

Thu, 04/08/2022 - 16:00

by Cheng Wang, Jason J. Ong, Peizhen Zhao, Ann Marie Weideman, Weiming Tang, M. Kumi Smith, Michael Marks, Hongyun Fu, Weibin Cheng, Fern Terris-Prestholt, Heping Zheng, Joseph D. Tucker, Bin Yang

Crude and adjusted comparisons of cesarean delivery rates using the Robson classification: A population-based cohort study in Canada and Sweden, 2004 to 2016

Mon, 01/08/2022 - 16:00

by Giulia M. Muraca, K. S. Joseph, Neda Razaz, Linnea V. Ladfors, Sarka Lisonkova, Olof Stephansson

Background

The Robson classification has become a global standard for comparing and monitoring cesarean delivery (CD) rates across populations and over time; however, this classification does not account for differences in important maternal, fetal, and obstetric practice factors known to impact CD rates. The objectives of our study were to identify subgroups of women contributing to differences in the CD rate in Sweden and British Columbia (BC), Canada, using the Robson classification, and to estimate the contribution of maternal, fetal/infant, and obstetric practice factors to differences in CD rates between countries and over time.

Methods and findings

We conducted a population-based cohort study of deliveries in Sweden (January 1, 2004 to December 31, 2016; n = 1,392,779) and BC (March 1, 2004 to April 30, 2017; n = 559,205). Deliveries were stratified into Robson categories, and the CD rate, the relative size of each group, and its contribution to the overall CD rate were compared between the Swedish and the Canadian cohorts. Poisson and log-binomial regression were used to assess the contribution of maternal, fetal, and obstetric practice factors to spatiotemporal differences in Robson group-specific CD rates between Sweden and BC. Nulliparous women comprised 44.8% of the study population, while women of advanced maternal age (≥35 years) and women with overweight/obesity (≥25 kg/m2) constituted 23.5% and 32.4% of the study population, respectively. The CD rate in Sweden was stable at approximately 17.0% from 2004 to 2016 (p for trend = 0.10), while the CD rate increased in BC from 29.4% to 33.9% (p for trend < 0.001). Differences in CD rates between Sweden and BC varied by Robson group: for example, in Group 1 (nullipara with a term, single, cephalic fetus with spontaneous labor), the CD rate was 8.1% in Sweden and 20.4% in BC (rate ratio [RR] for BC versus Sweden = 2.52, 95% confidence interval [CI] 2.49 to 2.56, p < 0.001), and in Group 2 (nullipara, single, cephalic fetus, term gestation with induction of labor or prelabor CD), the rate of CD was 37.3% in Sweden and 45.9% in BC (RR = 1.23, 95% CI 1.22 to 1.25, p < 0.001). The effect of adjustment for maternal characteristics (e.g., age, body mass index), maternal comorbidity (e.g., preeclampsia), fetal characteristics (e.g., head position), and obstetric practice factors (e.g., epidural) ranged from no effect (e.g., among breech deliveries; Groups 6 and 7) to explaining up to 5.2% of the absolute difference in the CD rate (Group 2: adjusted CD rate in BC 40.7%, adjusted RR = 1.09, 95% CI 1.08 to 1.12, p < 0.001).
Adjustment also explained a substantial fraction of the temporal change in CD rates among some Robson groups in BC. Limitations of the study include a lack of information on intrapartum details, such as labor duration, as well as on maternal and perinatal outcomes associated with the observed differences in CD rates.
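The group-specific rate ratios quoted above follow directly from the CD rates themselves (the published confidence intervals, of course, require the underlying counts):

```python
# Crude rate ratios (BC versus Sweden) for the two Robson groups quoted
# above; the published RRs (2.52 and 1.23) are reproduced to two
# decimal places from the group-specific CD rates alone.
rr_group1 = 20.4 / 8.1    # Group 1: nullipara, term, single, cephalic, spontaneous labor
rr_group2 = 45.9 / 37.3   # Group 2: nullipara, term, induction of labor or prelabor CD
```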

Conclusions

In this study, we found that several factors not included in the Robson classification explain a significant proportion of the spatiotemporal difference in CD rates in some Robson groups. These findings suggest that incorporating these factors into explanatory models using the Robson classification may be useful for ensuring that public health initiatives regarding CD rates are evidence-informed.

Disparities in distribution of COVID-19 vaccines across US counties: A geographic information system–based cross-sectional study

Thu, 28/07/2022 - 16:00

by Inmaculada Hernandez, Sean Dickson, Shangbin Tang, Nico Gabriel, Lucas A. Berenbrok, Jingchuan Guo

Background

The US Centers for Disease Control and Prevention has repeatedly called for Coronavirus Disease 2019 (COVID-19) vaccine equity. The objective of our study was to measure equity in the early distribution of COVID-19 vaccines to healthcare facilities across the US. Specifically, we tested whether the likelihood of a healthcare facility administering COVID-19 vaccines in May 2021 differed by county-level racial composition and degree of urbanicity.

Methods and findings

The outcome was whether an eligible vaccination facility actually administered COVID-19 vaccines as of May 2021, and was defined by spatially matching locations of eligible and actual COVID-19 vaccine administration locations. The outcome was regressed against county-level measures for racial/ethnic composition, urbanicity, income, social vulnerability index, COVID-19 mortality, 2020 election results, and availability of nontraditional vaccination locations using generalized estimating equations. Across the US, 61.4% of eligible healthcare facilities and 76.0% of eligible pharmacies provided COVID-19 vaccinations as of May 2021. Facilities in counties with >42.2% non-Hispanic Black population (i.e., >95th county percentile of Black race composition) were less likely to serve as COVID-19 vaccine administration locations compared to facilities in counties with <12.5% non-Hispanic Black population (i.e., lower than US average) (OR 0.83; 95% CI 0.70 to 0.98; p = 0.030). Location of a facility in a rural county (OR 0.82; 95% CI, 0.75 to 0.90, p < 0.001, versus metropolitan county) or in a county in the top quintile of COVID-19 mortality (OR 0.83; 95% CI, 0.75 to 0.93, p = 0.001, versus bottom 4 quintiles) was associated with decreased odds of serving as a COVID-19 vaccine administration location. There was a significant interaction of urbanicity and racial/ethnic composition: In metropolitan counties, facilities in counties with >42.2% non-Hispanic Black population (i.e., >95th county percentile of Black race composition) had 32% (95% CI 14% to 47%, p = 0.001) lower odds of serving as a COVID-19 vaccine administration facility compared to facilities in counties with below-US-average Black population. This association between Black composition and the odds of a facility serving as a vaccine administration facility was not observed in rural or suburban counties.
In rural counties, facilities in counties with above-US-average Hispanic population had 26% (95% CI 11% to 38%, p = 0.002) lower odds of serving as a vaccine administration facility compared to facilities in counties with below-US-average Hispanic population. This association between Hispanic ethnicity and the odds of a facility serving as a vaccine administration facility was not observed in metropolitan or suburban counties. Our analyses did not include nontraditional vaccination sites and are based on data as of May 2021; thus, they represent the early distribution of COVID-19 vaccines. Our results based on this cross-sectional analysis may not be generalizable to later phases of the COVID-19 vaccine distribution process.
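The results above switch between odds ratios and "X% lower odds"; the conversion is a one-liner (illustrative only, using the ORs quoted above):

```python
def pct_lower_odds(odds_ratio):
    """Express an odds ratio below 1 as 'X% lower odds', the phrasing
    used for the interaction results above."""
    return (1.0 - odds_ratio) * 100.0

# OR 0.83 (high-Black-population counties overall) is about 17% lower
# odds; the reported "32% lower odds" in metropolitan counties
# corresponds to an OR of about 0.68.
```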

Conclusions

Healthcare facilities in counties with higher Black composition, in rural areas, and in hardest-hit communities were less likely to serve as COVID-19 vaccine administration locations in May 2021. The lower uptake of COVID-19 vaccinations among minority populations and rural areas has been attributed to vaccine hesitancy; however, decreased access to vaccination sites may be an additional overlooked barrier.

Postmarketing active surveillance of myocarditis and pericarditis following vaccination with COVID-19 mRNA vaccines in persons aged 12 to 39 years in Italy: A multi-database, self-controlled case series study

Thu, 28/07/2022 - 16:00

by Marco Massari, Stefania Spila Alegiani, Cristina Morciano, Matteo Spuri, Pasquale Marchione, Patrizia Felicetti, Valeria Belleudi, Francesca Romana Poggi, Marco Lazzeretti, Michele Ercolanoni, Elena Clagnan, Emanuela Bovo, Gianluca Trifirò, Ugo Moretti, Giuseppe Monaco, Olivia Leoni, Roberto Da Cas, Fiorella Petronzelli, Loriana Tartaglia, Nadia Mores, Giovanna Zanoni, Paola Rossi, Sarah Samez, Cristina Zappetti, Anna Rosa Marra, Francesca Menniti Ippolito, on behalf of the TheShinISS-Vax|COVID Surveillance Group

Background

Myocarditis and pericarditis following administration of Coronavirus Disease 2019 (COVID-19) mRNA vaccines have been reported, but their frequency in the younger population is still uncertain. This study investigated the association between Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) mRNA vaccines (BNT162b2 and mRNA-1273) and myocarditis/pericarditis in vaccinated persons aged 12 to 39 years in Italy.

Methods and findings

We conducted a self-controlled case series (SCCS) study using national data on COVID-19 vaccination linked to emergency care/hospital discharge databases. The outcome was the first diagnosis of myocarditis/pericarditis between 27 December 2020 and 30 September 2021. The exposure risk period (0 to 21 days from the vaccination day, subdivided into 3 equal intervals) for the first and second doses was compared with the baseline period. The SCCS model, adapted to event-dependent exposures, was fitted using unbiased estimating equations to estimate relative incidences (RIs) and excess cases (EC) per 100,000 vaccinated by dose, age, sex, and vaccine product. Calendar period was included as a time-varying confounder in the model. During the study period, 2,861,809 persons aged 12 to 39 years received mRNA vaccines (2,405,759 BNT162b2; 456,050 mRNA-1273); 441 participants developed myocarditis/pericarditis (346 BNT162b2; 95 mRNA-1273). Within the 21-day risk interval, 114 myocarditis/pericarditis events occurred; the RI was 1.99 (1.30 to 3.05) after the second dose of BNT162b2, and 2.22 (1.00 to 4.91) and 2.63 (1.21 to 5.71) after the first and second doses of mRNA-1273. During the [0 to 7) days risk period, an increased risk of myocarditis/pericarditis was observed after the first dose of mRNA-1273, with an RI of 6.55 (2.73 to 15.72), and after the second dose of BNT162b2 and mRNA-1273, with RIs of 3.39 (2.02 to 5.68) and 7.59 (3.26 to 17.65). The number of EC for the second dose of mRNA-1273 was 5.5 per 100,000 vaccinated (3.0 to 7.9). The highest risk was observed in males at [0 to 7) days after the first and second doses of mRNA-1273, with RIs of 12.28 (4.09 to 36.83) and 11.91 (3.88 to 36.53); the number of EC after the second dose of mRNA-1273 was 8.8 (4.9 to 12.9). Among those aged 12 to 17 years, the RI was 5.74 (1.52 to 21.72) after the second dose of BNT162b2; for this age group, the number of events was insufficient for estimating RIs after mRNA-1273.
Among those aged 18 to 29 years, the RIs were 7.58 (2.62 to 21.94) after the first dose of mRNA-1273 and 4.02 (1.81 to 8.91) and 9.58 (3.32 to 27.58) after the second dose of BNT162b2 and mRNA-1273, respectively; the numbers of EC were 3.4 (1.1 to 6.0) and 8.6 (4.4 to 12.6) after the first and second doses of mRNA-1273. The main study limitations were that the outcome was not validated through review of clinical records and that information on the length of hospitalization, and thus on the severity of the outcome, was unavailable.
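The relative incidences above come from an SCCS model adapted to event-dependent exposures with calendar-time adjustment; the core rate-ratio idea behind a self-controlled case series can be sketched with hypothetical counts (not the study's data):

```python
def sccs_relative_incidence(events_risk, days_risk, events_base, days_base):
    """Crude SCCS relative incidence: the event rate in the
    post-vaccination risk window divided by the rate in each person's
    own baseline time. Cases act as their own controls, so fixed
    person-level confounders cancel out."""
    return (events_risk / days_risk) / (events_base / days_base)

# Hypothetical counts for illustration only: 12 events in a 21-day
# risk window versus 30 events in 252 days of baseline time.
ri = sccs_relative_incidence(12, 21, 30, 252)
```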

Conclusions

This population-based study of about 3 million residents of Italy suggested that mRNA vaccines were associated with myocarditis/pericarditis in the population younger than 40 years. According to our results, increased risk of myocarditis/pericarditis was associated with the second dose of BNT162b2 and with both doses of mRNA-1273. The highest risks were observed in males aged 12 to 39 years and in males and females aged 18 to 29 years vaccinated with mRNA-1273. The public health implications of these findings should be considered in light of the proven effectiveness of mRNA vaccines in preventing serious COVID-19 disease and death.

Why restricting access to abortion damages women’s health

Tue, 26/07/2022 - 16:00

by The PLOS Medicine Editors

Dr. Caitlin Moyer discusses the implications, for women globally, of restricting access to abortion care.

Implementation research on noncommunicable disease prevention and control interventions in low- and middle-income countries: A systematic review

Mon, 25/07/2022 - 16:00

by Celestin Hategeka, Prince Adu, Allissa Desloge, Robert Marten, Ruitai Shao, Maoyi Tian, Ting Wei, Margaret E. Kruk

Background

While the evidence for the clinical effectiveness of most noncommunicable disease (NCD) prevention and treatment interventions is well established, care delivery models and means of scaling these up in a variety of resource-constrained health systems are not. The objective of this review was to synthesize evidence on the current state of implementation research on priority NCD prevention and control interventions provided by health systems in low- and middle-income countries (LMICs).

Methods and findings

On January 20, 2021, we searched the MEDLINE and EMBASE databases from 1990 through 2020 to identify implementation research studies that focused on the World Health Organization (WHO) priority NCD prevention and control interventions targeting cardiovascular disease, cancer, diabetes, and chronic respiratory disease and provided within health systems in LMICs. Any empirical and peer-reviewed studies that focused on these interventions and reported implementation outcomes were eligible for inclusion. Given the focus of this review and the heterogeneity in aims and methodologies of the included studies, risk of bias assessment to understand how effect size may have been compromised by bias was not applicable; we instead commented on the distribution of research designs and discussed stronger and weaker designs. We synthesized extracted data using descriptive statistics, following the review protocol registered in PROSPERO (CRD42021252969). Of 9,683 potential studies and 7,419 unique records screened for inclusion, 222 eligible studies evaluated 265 priority NCD prevention and control interventions implemented in 62 countries (6% in low-income countries and 90% in middle-income countries). The number of studies published has been increasing over time. Nearly 40% of all the studies were on cervical cancer. With regard to intervention type, screening accounted for 49%, treatment for 39%, and prevention for 12% (with 80% of the latter focusing on prevention of NCD behavioral risk factors). Feasibility (38%) was the most studied implementation outcome, followed by adoption (23%); few studies addressed sustainability. Implementation strategies were generally not specified in sufficient detail. Most studies used quantitative methods (86%). The weakest study design, preexperimental, and the strongest, experimental, were employed in 25% and 24% of included studies, respectively.
Approximately 72% of studies reported funding, with international funding being the predominant source. The majority of studies were proof-of-concept or pilot studies (88%) and targeted the micro level of the health system (79%). Fewer than 5% of studies reported using an implementation research framework.

Conclusions

Despite growth in implementation research on NCDs in LMICs, we found major gaps in the science. Future studies should prioritize implementation at scale, target higher levels of health systems (meso and macro levels), and test the sustainability of NCD programs. They should employ designs with stronger internal validity, be more conceptually driven, and use mixed methods to understand mechanisms. To maximize the impact of research under limited resources, adding implementation science outcomes to effectiveness research and building regional collaborations are promising approaches.

Estimating lifetime risk of diabetes in the Chinese population

Thu, 21/07/2022 - 16:00

by Fiona Bragg, Zhengming Chen

In this Perspective, Fiona Bragg and Zhengming Chen discuss the burden of diabetes in the Chinese population.

Lifetime risk of developing diabetes in Chinese people with normoglycemia or prediabetes: A modeling study

Thu, 21/07/2022 - 16:00

by Xinge Zhang, Hongjiang Wu, Baoqi Fan, Mai Shi, Eric S. H. Lau, Aimin Yang, Elaine Chow, Alice P. S. Kong, Juliana C. N. Chan, Ronald C. W. Ma, Andrea O. Y. Luk

Background

Little is known about the lifetime risk of progression to diabetes in the Asian population. We determined remaining lifetime risk of diabetes and life years spent with diabetes in Chinese people with normoglycemia and prediabetes.

Methods and findings

Using territory-wide diabetes surveillance data curated from electronic medical records of the Hong Kong Hospital Authority (HA), we conducted a population-based cohort study of 2,608,973 individuals followed from 2001 to 2019. Prediabetes and diabetes were identified based on laboratory measurements, diagnostic codes, and medication records. Remaining lifetime risk and life years spent with diabetes were estimated using Monte Carlo simulations with state transition probabilities based on a Markov chain model. Validations were performed using several sensitivity analyses and a modified survival analysis. External replication was performed using the China Health and Retirement Longitudinal Survey (CHARLS) cohort (2010 to 2015). The expected remaining lifetime risk of developing diabetes was 88.0% (95% confidence interval [CI]: 87.2, 88.7) for people with prediabetes and 65.9% (65.8, 65.9) for people with normoglycemia at age 20 years. A 20-year-old person with prediabetes would live with diabetes for 32.5 (32.0, 33.1) years, or 51.6% (50.8, 52.3) of remaining life years, whereas a person with normoglycemia at 20 years would live 12.7 (12.7, 12.7) years with diabetes, or 18.4% (18.4, 18.5) of remaining life years. Women had a higher expected remaining lifetime risk and longer life years with diabetes compared to men. Results are subject to possible selection bias, as only people who undertook routine or opportunistic screening were included.
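The lifetime-risk estimation above used Monte Carlo simulation over a Markov chain; the underlying absorption-probability idea can be sketched with a small chain and hypothetical yearly transition probabilities (not the study's estimates):

```python
# States: normoglycemia (N), prediabetes (P), diabetes (D, absorbing),
# death (X, absorbing). Yearly transition probabilities below are
# hypothetical and chosen only to illustrate the mechanics.

def lifetime_risk(p_np, p_nx, p_pd, p_px, iters=2000):
    """Probability of ever developing diabetes starting from N or P,
    computed by value iteration on the absorption probabilities."""
    r_n = r_p = 0.0
    for _ in range(iters):
        r_p = p_pd + (1.0 - p_pd - p_px) * r_p        # from P: absorb into D, X, or stay
        r_n = p_np * r_p + (1.0 - p_np - p_nx) * r_n  # from N: move to P, absorb into X, or stay
    return r_n, r_p

risk_n, risk_p = lifetime_risk(p_np=0.03, p_nx=0.01, p_pd=0.05, p_px=0.02)
```

Consistent with the study's qualitative finding, even this toy chain gives a markedly higher lifetime risk from the prediabetes state than from normoglycemia.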

Conclusions

These findings suggest that Hong Kong, an economically developed city in Asia, faces a major challenge of high lifetime risk of diabetes and many life years spent with diabetes, especially in people with prediabetes. Effective public health policies and targeted interventions for preventing progression to diabetes are urgently needed.

Cardiometabolic outcomes up to 12 months after COVID-19 infection: A matched cohort study in the UK

Tue, 19/07/2022 - 16:00

by Emma Rezel-Potts, Abdel Douiri, Xiaohui Sun, Phillip J. Chowienczyk, Ajay M. Shah, Martin C. Gulliford

Background

Acute Coronavirus Disease 2019 (COVID-19) has been associated with new-onset cardiovascular disease (CVD) and diabetes mellitus (DM), but it is not known whether COVID-19 has long-term impacts on cardiometabolic outcomes. This study aimed to determine whether the incidence of new DM and CVDs is increased over 12 months after COVID-19 compared with matched controls.

Methods and findings

We conducted a cohort study from 2020 to 2021 analysing electronic records for 1,356 United Kingdom family practices with a population of 13.4 million. Participants were 428,650 COVID-19 patients without DM or CVD who were individually matched with 428,650 control patients on age, sex, and family practice and followed up to January 2022. Outcomes were incidence of DM and CVD. A difference-in-difference analysis estimated the net effect of COVID-19 allowing for baseline differences, age, ethnicity, smoking, body mass index (BMI), systolic blood pressure, Charlson score, index month, and matched set. Follow-up time was divided into 4 weeks from index date (“acute COVID-19”), 5 to 12 weeks from index date (“post-acute COVID-19”), and 13 to 52 weeks from index date (“long COVID-19”). Net incidence of DM increased in the first 4 weeks after COVID-19 (adjusted rate ratio, RR 1.81, 95% confidence interval (CI) 1.51 to 2.19) and remained elevated from 5 to 12 weeks (RR 1.27, 1.11 to 1.46) but not from 13 to 52 weeks overall (1.07, 0.99 to 1.16). Acute COVID-19 was associated with net increased CVD incidence (5.82, 4.82 to 7.03) including pulmonary embolism (RR 11.51, 7.07 to 18.73), atrial arrhythmias (6.44, 4.17 to 9.96), and venous thromboses (5.43, 3.27 to 9.01). CVD incidence declined from 5 to 12 weeks (RR 1.49, 1.28 to 1.73) and showed a net decrease from 13 to 52 weeks (0.80, 0.73 to 0.88). The analyses were based on health records data and participants’ exposure and outcome status might have been misclassified.
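The window-specific rates behind those ratios rest on simple person-time bookkeeping: follow-up is clipped to each window and censored at the event. A minimal sketch of crude (unadjusted) rates — the paper's difference-in-difference covariate adjustment is not reproduced, and the `Patient` container is an invented illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    exposed: bool              # COVID-19 group vs matched control
    event_week: Optional[int]  # week of first diagnosis, None if none observed
    followup_weeks: int        # total follow-up from index date

def rate_per_1000(patients, lo, hi, exposed):
    """Crude events per 1,000 person-weeks inside follow-up window [lo, hi)."""
    events = person_weeks = 0
    for p in patients:
        if p.exposed != exposed:
            continue
        end = min(p.followup_weeks, hi)
        if p.event_week is not None and p.event_week < end:
            end = p.event_week        # censor follow-up at the event
            if lo <= p.event_week:
                events += 1           # event fell inside the window
        person_weeks += max(0, end - lo)
    return 1000 * events / person_weeks if person_weeks else 0.0

def crude_rate_ratio(patients, lo, hi):
    """Exposed vs control rate ratio for one follow-up window."""
    exposed = rate_per_1000(patients, lo, hi, True)
    control = rate_per_1000(patients, lo, hi, False)
    return exposed / control if control else float("inf")
```

The acute, post-acute, and long windows in the study correspond to calls with `(0, 4)`, `(5, 12)`, and `(13, 52)` week boundaries (exact boundary conventions here are an assumption of the sketch).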

Conclusions

In this study, we found that CVD was increased early after COVID-19 mainly from pulmonary embolism, atrial arrhythmias, and venous thromboses. DM incidence remained elevated for at least 12 weeks following COVID-19 before declining. People without preexisting CVD or DM who suffer from COVID-19 do not appear to have a long-term increase in incidence of these conditions.

Opioid agonist treatment and risk of death or rehospitalization following injection drug use–associated bacterial and fungal infections: A cohort study in New South Wales, Australia

Tue, 19/07/2022 - 16:00

by Thomas D. Brothers, Dan Lewer, Nicola Jones, Samantha Colledge-Frisby, Michael Farrell, Matthew Hickman, Duncan Webster, Andrew Hayward, Louisa Degenhardt

Background

Injecting-related bacterial and fungal infections are associated with significant morbidity and mortality among people who inject drugs (PWID), and they are increasing in incidence. Following hospitalization with an injecting-related infection, use of opioid agonist treatment (OAT; methadone or buprenorphine) may be associated with reduced risk of death or rehospitalization with an injecting-related infection.

Methods and findings

Data came from the Opioid Agonist Treatment Safety (OATS) study, an administrative linkage cohort including all people in New South Wales, Australia, who accessed OAT between July 1, 2001 and June 28, 2018. Included participants survived a hospitalization with injecting-related infections (i.e., skin and soft-tissue infection, sepsis/bacteremia, endocarditis, osteomyelitis, septic arthritis, or epidural/brain abscess). Outcomes were all-cause death and rehospitalization for injecting-related infections. OAT exposure was classified as time varying by days on or off treatment, following hospital discharge. We used separate Cox proportional hazards models to assess associations between each outcome and OAT exposure. The study included 8,943 participants (mean age 39 years, standard deviation [SD] 11 years; 34% women). The most common infections during participants’ index hospitalizations were skin and soft tissue (7,021; 79%), sepsis/bacteremia (1,207; 14%), and endocarditis (431; 5%). During a median of 6.56 years of follow-up, 1,481 (17%) participants died; use of OAT was associated with lower hazard of death (adjusted hazard ratio [aHR] 0.63, 95% confidence interval [CI] 0.57 to 0.70). During a median of 3.41 years of follow-up, 3,653 (41%) were rehospitalized for injecting-related infections; use of OAT was associated with lower hazard of these rehospitalizations (aHR 0.89, 95% CI 0.84 to 0.96). Study limitations include the use of routinely collected administrative data, which lack information on other risk factors for injecting-related infections including injecting practices, injection stimulant use, housing status, and access to harm reduction services (e.g., needle exchange and supervised injecting sites); we also lacked information on OAT medication dosages.
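A time-varying exposure like "days on or off OAT" is typically fed to a Cox model by cutting each person's follow-up into on/off-treatment intervals in counting-process (start, stop] form. A minimal sketch of that splitting step — the function name and episode format are invented for illustration:

```python
def split_follow_up(end_day, event, oat_episodes):
    """Cut follow-up [0, end_day) into rows (start, stop, on_oat, event_in_row).

    oat_episodes: list of (start_day, stop_day) half-open intervals on OAT,
    measured from hospital discharge. The event, if any, occurs at end_day.
    """
    # Candidate cut points: discharge, end of follow-up, and every
    # episode boundary clipped into [0, end_day].
    cuts = {0, end_day}
    for s, e in oat_episodes:
        cuts.add(max(0, min(s, end_day)))
        cuts.add(max(0, min(e, end_day)))
    cuts = sorted(cuts)

    rows = []
    for start, stop in zip(cuts, cuts[1:]):
        # Exposure is constant within each row by construction.
        on_oat = any(s <= start < e for s, e in oat_episodes)
        rows.append((start, stop, on_oat, event and stop == end_day))
    return rows
```

Each row then enters the Cox partial likelihood as an interval at risk with its own covariate value; libraries such as R's `survival` package accept exactly this (start, stop, event) layout.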

Conclusions

Following hospitalizations with injection drug use–associated bacterial and fungal infections, use of OAT is associated with lower risks of death and recurrent injecting-related infections among people with opioid use disorder.

Global influenza surveillance systems to detect the spread of influenza-negative influenza-like illness during the COVID-19 pandemic: Time series outlier analyses from 2015–2020

Tue, 19/07/2022 - 16:00

by Natalie L. Cobb, Sigrid Collier, Engi F. Attia, Orvalho Augusto, T. Eoin West, Bradley H. Wagenaar

Background

Surveillance systems are important in detecting changes in disease patterns and can act as early warning systems for emerging disease outbreaks. We hypothesized that analysis of data from existing global influenza surveillance networks early in the COVID-19 pandemic could identify outliers in influenza-negative influenza-like illness (ILI). We used data-driven methods to detect outliers in ILI that preceded the first reported peaks of COVID-19.

Methods and findings

We used data from the World Health Organization’s Global Influenza Surveillance and Response System to evaluate time series outliers in influenza-negative ILI. Using automated autoregressive integrated moving average (ARIMA) time series outlier detection models and baseline influenza-negative ILI training data from 2015–2019, we analyzed 8,792 country-weeks across 28 countries to identify the first week in 2020 with a positive outlier in influenza-negative ILI. We present the difference in weeks between identified outliers and the first reported COVID-19 peaks in these 28 countries with high levels of data completeness for influenza surveillance data and the highest number of reported COVID-19 cases globally in 2020. To account for missing data, we also performed a sensitivity analysis using linear interpolation for missing observations of influenza-negative ILI. In 16 of the 28 countries (57%) included in this study, we identified positive outliers in cases of influenza-negative ILI that predated the first reported COVID-19 peak in each country; the average lag between the first positive ILI outlier and the reported COVID-19 peak was 13.3 weeks (standard deviation 6.8). In our primary analysis, the earliest outliers occurred during the week of January 13, 2020, in Peru, the Philippines, Poland, and Spain. Using linear interpolation for missing data, the earliest outliers were detected during the weeks beginning December 30, 2019, and January 20, 2020, in Poland and Peru, respectively. This contrasts with the reported COVID-19 peaks, which occurred on April 6 in Poland and June 1 in Peru. In many low- and middle-income countries in particular, the lag between detected outliers and COVID-19 peaks exceeded 12 weeks. These outliers may represent undetected spread of SARS-CoV-2, although a limitation of this study is that we could not evaluate SARS-CoV-2 positivity.
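A much simpler stand-in for the paper's automated ARIMA outlier models — a week-of-year baseline mean plus z·SD rule fitted on the 2015–2019 training years — conveys the detection idea. The threshold and data shapes here are illustrative assumptions, not the study's method:

```python
import statistics

def first_outlier_week(history, current, z=3.0):
    """Index of the first week in `current` exceeding its weekly baseline
    by more than z baseline standard deviations, else None.

    history: list of past years, each a list of weekly counts
             (e.g., influenza-negative ILI cases);
    current: weekly counts for the year being monitored.
    """
    for week, value in enumerate(current):
        baseline = [year[week] for year in history]
        mu = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # guard against a flat baseline
        if value > mu + z * sd:
            return week
    return None
```

An ARIMA approach improves on this by modelling trend and autocorrelation rather than treating weeks as independent, but the output is the same kind of signal: the first week whose observed count is inconsistent with the pre-pandemic baseline.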

Conclusions

Using an automated system of influenza-negative ILI outlier monitoring may have informed countries of the spread of COVID-19 more than 13 weeks before the first reported COVID-19 peaks. This proof-of-concept paper suggests that a system of influenza-negative ILI outlier monitoring could have informed national and global responses to SARS-CoV-2 during the rapid spread of this novel pathogen in early 2020.

Prevention of venous thromboembolic events in patients with lower leg immobilization after trauma: Systematic review and network meta-analysis with meta-epidemiological approach

Mon, 18/07/2022 - 16:00

by D. Douillet, C. Chapelle, E. Ollier, P. Mismetti, P.-M. Roy, S. Laporte

Background

Lower limb trauma requiring immobilization is a significant contributor to overall venous thromboembolism (VTE) burden. The clinical effectiveness of thromboprophylaxis for this indication and the optimal agent strategy are still a matter of debate. Our main objective was to assess the efficacy of pharmacological thromboprophylaxis to prevent VTE in patients with isolated temporary lower limb immobilization after trauma. We aimed to estimate and compare the clinical efficacy and the safety of the different thromboprophylactic treatments to determine the best strategy.

Methods and findings

We conducted a systematic review and a Bayesian network meta-analysis (NMA) including all available randomized trials comparing a pharmacological thromboprophylactic treatment to placebo or to no treatment in patients with leg immobilization after trauma. We searched Medline, Embase, and Web of Science until July 2021. Only RCTs, or observational studies with analysis of confounding factors, that included adult patients requiring temporary immobilization for an isolated lower limb injury treated conservatively or surgically, and that assessed pharmacological thromboprophylactic agents, placebo, or no treatment, were eligible for inclusion. The primary endpoint was the incidence of major VTE (proximal deep vein thrombosis, symptomatic VTE, and pulmonary embolism-related death). We extracted data according to Preferred Reporting Items for Systematic Reviews and Meta-analyses for NMA and appraised selected trials with the Cochrane review handbook. Fourteen studies were included (8,198 patients). Compared to the control group, rivaroxaban, fondaparinux, and low molecular weight heparins were associated with a significant risk reduction of major VTE with an odds ratio of 0.02 (95% credible interval (CrI) 0.00 to 0.19), 0.22 (95% CrI 0.06 to 0.65), and 0.32 (95% CrI 0.15 to 0.56), respectively. No increase in major bleeding risk was observed with any of the treatments. Rivaroxaban has the highest likelihood of being ranked top in terms of efficacy and net clinical benefit. The main limitation is that the network had as many indirect comparisons as direct comparisons.

Conclusions

This NMA confirms the favorable benefit/risk ratio of thromboprophylaxis for patients with leg immobilization after trauma with the highest level of evidence for rivaroxaban.

Trial registration

PROSPERO CRD42021257669.

Associations between moderate alcohol consumption, brain iron, and cognition in UK Biobank participants: Observational and mendelian randomization analyses

Thu, 14/07/2022 - 16:00

by Anya Topiwala, Chaoyue Wang, Klaus P. Ebmeier, Stephen Burgess, Steven Bell, Daniel F. Levey, Hang Zhou, Celeste McCracken, Adriana Roca-Fernández, Steffen E. Petersen, Betty Raman, Masud Husain, Joel Gelernter, Karla L. Miller, Stephen M. Smith, Thomas E. Nichols

Background

Brain iron deposition has been linked to several neurodegenerative conditions and reported in alcohol dependence. Whether iron accumulation occurs in moderate drinkers is unknown. Our objectives were to investigate evidence in support of causal relationships between alcohol consumption and brain iron levels and to examine whether higher brain iron represents a potential pathway to alcohol-related cognitive deficits.

Methods and findings

Observational associations between brain iron markers and alcohol consumption (n = 20,729 UK Biobank participants) were compared with associations with genetically predicted alcohol intake and alcohol use disorder from 2-sample mendelian randomization (MR). Alcohol intake was self-reported via a touchscreen questionnaire at baseline (2006 to 2010). Participants with complete data were included. Multiorgan susceptibility-weighted magnetic resonance imaging (9.60 ± 1.10 years after baseline) was used to ascertain iron content of each brain region (quantitative susceptibility mapping (QSM) and T2*) and liver tissues (T2*), a marker of systemic iron. Main outcomes were susceptibility (χ) and T2*, measures used as indices of iron deposition. Brain regions of interest included putamen, caudate, hippocampi, thalami, and substantia nigra. Potential pathways to alcohol-related iron brain accumulation through elevated systemic iron stores (liver) were explored in causal mediation analysis. Cognition was assessed at the scan and in online follow-up (5.82 ± 0.86 years after baseline). Executive function was assessed with the trail-making test, fluid intelligence with puzzle tasks, and reaction time by a task based on the “Snap” card game. Mean age was 54.8 ± 7.4 years and 48.6% were female. Weekly alcohol consumption was 17.7 ± 15.9 units and never drinkers comprised 2.7% of the sample. Alcohol consumption was associated with markers of higher iron (χ) in putamen (β = 0.08 standard deviation (SD) [95% confidence interval (CI) 0.06 to 0.09], p < 0.001), caudate (β = 0.05 [0.04 to 0.07], p < 0.001), and substantia nigra (β = 0.03 [0.02 to 0.05], p < 0.001) and lower iron in the thalami (β = −0.06 [−0.07 to −0.04], p < 0.001). Quintile-based analyses found these associations in those consuming >7 units (56 g) alcohol weekly. MR analyses provided weak evidence that these relationships are causal.
Genetically predicted alcoholic drinks weekly positively associated with putamen and hippocampus susceptibility; however, these associations did not survive multiple testing corrections. Weak evidence for a causal relationship between genetically predicted alcohol use disorder and higher putamen susceptibility was observed; however, this was not robust to multiple comparisons correction. Genetically predicted alcohol use disorder was associated with serum iron and transferrin saturation. Elevated liver iron was observed at just >11 units (88 g) alcohol weekly compared with <7 units (56 g). Systemic iron levels partially mediated associations of alcohol intake with brain iron. Markers of higher basal ganglia iron were associated with slower executive function, lower fluid intelligence, and slower reaction times. The main limitations of the study include that χ and T2* can reflect changes in myelin as well as iron, alcohol use was self-reported, and MR estimates can be influenced by genetic pleiotropy.

Conclusions

To the best of our knowledge, this study represents the largest investigation of moderate alcohol consumption and iron homeostasis to date. Alcohol consumption above 7 units weekly was associated with higher brain iron. Iron accumulation represents a potential mechanism for alcohol-related cognitive decline.

Global human security in the post–COVID-19 era: The rising role of East Asia

Thu, 14/07/2022 - 16:00

by Kenji Shibuya, Chorh Chuan Tan, Asaph Young Chun, Gabriel M. Leung

Kenji Shibuya and coauthors discuss the potential contribution of East Asian countries to global health in the light of COVID-19.

Immune and endothelial activation markers and risk stratification of childhood pneumonia in Uganda: A secondary analysis of a prospective cohort study

Wed, 13/07/2022 - 16:00

by Chloe R. McDonald, Aleksandra Leligdowicz, Andrea L. Conroy, Andrea M. Weckman, Melissa Richard-Greenblatt, Michelle Ngai, Clara Erice, Kathleen Zhong, Sophie Namasopo, Robert O. Opoka, Michael T. Hawkes, Kevin C. Kain

Background

Despite the global burden of pneumonia, reliable triage tools to identify children in low-resource settings at risk of severe and fatal respiratory tract infection are lacking. This study assessed the ability of circulating host markers of immune and endothelial activation quantified at presentation, relative to currently used clinical measures of disease severity, to identify children with pneumonia who are at risk of death.

Methods and findings

We conducted a secondary analysis of a prospective cohort study of children aged 2 to 59 months presenting to the Jinja Regional Hospital in Jinja, Uganda between February 2012 and August 2013, who met the Integrated Management of Childhood Illness (IMCI) diagnostic criteria for pneumonia. Circulating plasma markers of immune (IL-6, IL-8, CXCL-10/IP-10, CHI3L1, sTNFR1, and sTREM-1) and endothelial (sVCAM-1, sICAM-1, Angpt-1, Angpt-2, and sFlt-1) activation measured at hospital presentation were compared to lactate, respiratory rate, oxygen saturation, procalcitonin (PCT), and C-reactive protein (CRP) with a primary outcome of predicting 48-hour mortality. Of 805 children with IMCI pneumonia, 616 had severe pneumonia. Compared to 10 other immune and endothelial activation markers, sTREM-1 levels at presentation had the best predictive accuracy in identifying 48-hour mortality for children with pneumonia (AUROC 0.885, 95% CI 0.841 to 0.928; p = 0.03 to p < 0.001) and severe pneumonia (AUROC 0.870, 95% CI 0.824 to 0.916; p = 0.04 to p < 0.001). sTREM-1 was more strongly associated with 48-hour mortality than lactate (AUROC 0.745, 95% CI 0.664 to 0.826; p < 0.001), respiratory rate (AUROC 0.615, 95% CI 0.528 to 0.702; p < 0.001), oxygen saturation (AUROC 0.685, 95% CI 0.594 to 0.776; p = 0.002), PCT (AUROC 0.650, 95% CI 0.566 to 0.734; p < 0.001), and CRP (AUROC 0.562, 95% CI 0.472 to 0.653; p < 0.001) in cases of pneumonia and severe pneumonia. The main limitation of this study was the unavailability of radiographic imaging.
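AUROC has a direct probabilistic reading here — the chance that a randomly chosen child who died within 48 hours has a higher marker level than a randomly chosen child who survived, with ties counting half. The Mann–Whitney formulation computes it exactly; a naive O(n·m) sketch:

```python
def auroc(scores_pos, scores_neg):
    """Mann–Whitney AUROC: probability a random positive case outscores
    a random negative case, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUROC of 0.885 for sTREM-1 therefore means the marker ranked a child who died above a survivor about 89% of the time; production code would use a sorted-rank implementation (as in `scikit-learn`'s `roc_auc_score`) rather than the pairwise loop.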

Conclusions

In this cohort of Ugandan children, sTREM-1 measured at hospital presentation was a significantly better indicator of 48-hour mortality risk than other common approaches to risk-stratify children with pneumonia. Measuring sTREM-1 at clinical presentation may improve the early triage, management, and outcome of children with pneumonia at risk of death.

Trial registration

The trial was registered at clinicaltrials.gov (NCT04726826).