Pitt Scientists Find the Key to Viral-Bacterial Co-infection (Biology)

The mechanism by which acute viral respiratory infections promote secondary bacterial growth and infection in the airways depends on iron-carrying extracellular sacs secreted by the cells lining the host’s airways, report researchers from the University of Pittsburgh School of Medicine in a paper published today in Cell Reports.

The sacs, or “vesicles,” which carry iron bound to a protein called transferrin, associate with bacterial cells and supply them with essential nutrients, promoting the growth of expansive bacterial communities. The finding offers a glimpse into how bacteria exploit the host’s defense system against pathogens and could open a new avenue for developing therapies to prevent secondary bacterial infections in the clinical setting.

Jennifer Bomberger, Ph.D., associate professor in the Department of Microbiology & Molecular Genetics, University of Pittsburgh School of Medicine. © University of Pittsburgh

“The development of chronic bacterial infections often is preceded by acute viral infections, and such co-infections increase patients’ likelihood of death or lifelong disability,” said senior author Jennifer Bomberger, Ph.D., associate professor in the Department of Microbiology & Molecular Genetics at Pitt. “We wanted to understand what it is that the virus is doing that allows bacteria to get a foothold in the patient’s airways.”

Preventing and controlling secondary lung infections by increasingly antibiotic-resistant bacteria, such as Staphylococcus aureus and Pseudomonas aeruginosa, remains a challenging problem in health care. According to reviews of historic autopsy reports, more than 90% of deaths during the 1918 influenza pandemic likely resulted from secondary bacterial pneumonia and, to this day, up to 30% of adults hospitalized with viral community-acquired pneumonia develop bacterial co-infections. 

To study the mechanism of viral-bacterial interactions in chronic lung disease, the Pitt researchers used a model of respiratory syncytial virus, or RSV, and P. aeruginosa co-infection, the severity of which depends on P. aeruginosa’s ability to form biofilms—large communities of bacteria encased in a polymeric matrix.

Using in vitro studies and fluorescent imaging, scientists showed that the production and secretion of extracellular vesicles—two processes that routinely occur in different cell types in the body, including the respiratory epithelium—are boosted by an acute viral infection. Crucially, these vesicles directly associate with P. aeruginosa biofilms and promote their growth. 

“Extracellular vesicles naturally occur in the body and are used by the organism as a communication tool,” said Bomberger. “It seems that bacteria co-opted this process for their own benefit.”

While the exact mechanism of how extracellular vesicles attach to bacteria remains to be explored, researchers found that vesicles carry protein-bound iron on their surface, supplying bacteria with necessary nutrients. 

Matthew Hendricks, Ph.D., a lead author of the paper and a former graduate student in Dr. Jennifer Bomberger’s laboratory. © Matthew Hendricks

“It would be interesting to see the implications this mechanism has for the host’s immune response,” said Matthew Hendricks, Ph.D., a lead author of the paper and a former graduate student in Bomberger’s laboratory. “If extracellular vesicles can shield bacteria from the immune cells, that could decrease the host’s ability to detect the infection and help bacteria evade the immune response.”

It also appears that the mechanism of extracellular vesicle-dependent interaction between viruses and bacteria is universal for different types of viruses, including other respiratory viruses and viruses that attack other mucosal locations, such as the gastrointestinal tract.

Additional authors of the paper include Sidney Lane, Jeffrey Melvin, Ph.D., Yingshi Ouyang, Ph.D., Donna Stolz, Ph.D., John Williams, M.D., and Yoel Sadovsky, M.D., all of Pitt.

This study was funded by National Institutes of Health grants T32AI060525, T32AI049820 and R01HL123771; and Cystic Fibrosis Foundation grants MELVIN15F0 and BOMBER14G0.

Featured image: Fluorescent Extracellular Vesicles Associated with Bacterial Biofilms: Red fluorescently labelled extracellular vesicles associate with biofilms formed by clusters of P. aeruginosa (in green). © Jennifer Bomberger


Reference: Matthew R. Hendricks, Sidney Lane et al., “Extracellular vesicles promote transkingdom nutrient transfer during viral-bacterial co-infection”, Cell Reports, 34(4), 2021. https://doi.org/10.1016/j.celrep.2020.108672


Provided by UPMC

Robust Artificial Intelligence Tools to Predict Future Cancer (Medicine)

Researchers created a risk-assessment algorithm that shows consistent performance across datasets from the US, Europe, and Asia.

To catch cancer earlier, we need to predict who is going to get it in the future. Artificial intelligence (AI) tools have bolstered the complex task of forecasting risk, but the adoption of AI in medicine has been limited by poor performance on new patient populations and by neglect of racial minorities.

Two years ago, a team of scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Jameel Clinic (J-Clinic) demonstrated a deep learning system to predict cancer risk using just a patient’s mammogram. The model showed significant promise and even improved inclusivity: It was equally accurate for both white and Black women, which is especially important given that Black women are 43 percent more likely to die from breast cancer. 

But to integrate image-based risk models into clinical care and make them widely available, the researchers say the models needed both algorithmic improvements and large-scale validation across several hospitals to prove their robustness. 

To that end, they tailored their new “Mirai” algorithm to capture the unique requirements of risk modeling. Mirai jointly models a patient’s risk across multiple future time points, and can optionally benefit from clinical risk factors such as age or family history, if they are available. The algorithm is also designed to produce predictions that are consistent across minor variances in clinical environments, like the choice of mammography machine.

Video: Robust artificial intelligence tools may be used to predict future breast cancer. © MIT

The team trained Mirai on the same dataset of over 200,000 exams from Massachusetts General Hospital (MGH) from their prior work, and validated it on test sets from MGH, the Karolinska Institute in Sweden, and Chang Gung Memorial Hospital in Taiwan. Mirai is now installed at MGH, and the team’s collaborators are actively working on integrating the model into care. 

Mirai was significantly more accurate than prior methods in predicting cancer risk and identifying high-risk groups across all three datasets. When comparing high-risk cohorts on the MGH test set, the team found that their model identified nearly two times more future cancer diagnoses compared with the current clinical standard, the Tyrer-Cuzick model. Mirai was similarly accurate across patients of different races, age groups, and breast density categories in the MGH test set, and across different cancer subtypes in the Karolinska test set.

“Improved breast cancer risk models enable targeted screening strategies that achieve earlier detection, and less screening harm than existing guidelines,” says Adam Yala, CSAIL PhD student and lead author on a paper about Mirai that was published this week in Science Translational Medicine. “Our goal is to make these advances part of the standard of care. We are partnering with clinicians from Novant Health in North Carolina, Emory in Georgia, Maccabi in Israel, TecSalud in Mexico, Apollo in India, and Barretos in Brazil to further validate the model on diverse populations and study how to best clinically implement it.” 

How it works 

Despite the wide adoption of breast cancer screening, the researchers say the practice is riddled with controversy: More-aggressive screening strategies aim to maximize the benefits of early detection, whereas less-frequent screenings aim to reduce false positives, anxiety, and costs for those who will never even develop breast cancer.  

Current clinical guidelines use risk models to determine which patients should be recommended for supplemental imaging and MRI. Some guidelines use risk models with just age to determine if, and how often, a woman should get screened; others combine multiple factors related to age, hormones, genetics, and breast density to determine further testing. Despite decades of effort, the accuracy of risk models used in clinical practice remains modest.  

Recently, deep learning mammography-based risk models have shown promising performance. To bring this technology to the clinic, the team identified three innovations they believe are critical for risk modeling: jointly modeling time, the optional use of non-image risk factors, and methods to ensure consistent performance across clinical settings. 

1. Time

Inherent to risk modeling is learning from patients with different amounts of follow-up, and assessing risk at different time points: this can determine how often patients get screened, whether they should have supplemental imaging, or even whether they should consider preventive treatments.

Although it’s possible to train separate models to assess risk for each time point, this approach can result in risk assessments that don’t make sense — like predicting that a patient has a higher risk of developing cancer within two years than they do within five years. To address this, the team designed their model to predict risk at all time points simultaneously, by using a tool called an “additive-hazard layer.” 

The additive-hazard layer works as follows: Their network predicts a patient’s risk at a time point, such as five years, as an extension of their risk at the previous time point, such as four years. In doing so, their model can learn from data with variable amounts of follow-up, and then produce self-consistent risk assessments. 
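To make this concrete, here is a minimal sketch of how such an additive-hazard output layer could be implemented in PyTorch, assuming a backbone network has already produced a feature vector for each exam. The class name, dimensions, and activation choices are illustrative assumptions, not the released Mirai code.

```python
# Minimal sketch of an additive-hazard output head (illustrative, not Mirai's code).
# Risk at year t is the risk at year t-1 plus a non-negative increment, so the
# yearly predictions are always self-consistent (non-decreasing over time).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveHazardHead(nn.Module):
    def __init__(self, feature_dim: int, max_followup: int = 5):
        super().__init__()
        self.base = nn.Linear(feature_dim, 1)                    # year-1 logit
        self.increments = nn.Linear(feature_dim, max_followup - 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        base = self.base(features)                               # shape (B, 1)
        deltas = F.softplus(self.increments(features))           # >= 0, shape (B, T-1)
        cumulative = torch.cumsum(torch.cat([base, deltas], dim=1), dim=1)
        return torch.sigmoid(cumulative)                         # risk by year 1..T

# Example: AdditiveHazardHead(512)(torch.randn(8, 512)) -> (8, 5) non-decreasing risks
```

Because each yearly increment is constrained to be non-negative, the head can never predict a lower five-year risk than two-year risk, which is exactly the kind of inconsistency the additive-hazard design is meant to rule out.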

2. Non-image risk factors

While this method primarily focuses on mammograms, the team wanted to also use non-image risk factors such as age and hormonal factors if they were available — but not require them at the time of the test. One approach would be to add these factors as an input to the model with the image, but this design would prevent the majority of hospitals (such as Karolinska and CGMH), which don’t have this infrastructure, from using the model. 

For Mirai to benefit from risk factors without requiring them, the network is trained to predict that information from the mammogram itself; at test time, if the risk factors are not available, it can fall back on its own predicted values. Mammograms are rich sources of health information, so many traditional risk factors, such as age and menopausal status, can be predicted from the imaging alone. As a result of this design, the same model can be used by any clinic globally, and clinics that do collect the additional information can supply it.

3. Consistent performance across clinical environments

To incorporate deep-learning risk models into clinical guidelines, the models must perform consistently across diverse clinical environments, and their predictions cannot be affected by minor variations like which machine the mammogram was taken on. Even across a single hospital, the scientists found that standard training did not produce consistent predictions before and after a change in mammography machines, as the algorithm could learn to rely on cues specific to the environment. To de-bias the model, the team used an adversarial scheme in which the model specifically learns mammogram representations that are invariant to the source clinical environment, producing consistent predictions.
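The article does not spell out the adversarial training scheme; a common way to implement this kind of de-biasing is a gradient-reversal (DANN-style) adversary, sketched below with hypothetical module names and sizes. The published conditional-adversarial setup may differ in its details.

```python
# Sketch of adversarial de-biasing via gradient reversal (assumed approach, not the paper's code).
# A "site classifier" tries to guess which hospital/scanner produced the image, while
# reversed gradients push the encoder toward site-invariant representations.
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # flip the gradient for the encoder

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU())  # stand-in backbone
site_classifier = nn.Linear(256, 4)             # e.g. 4 hypothetical scanner types

def site_adversarial_loss(images, site_labels, lambd=1.0):
    feats = encoder(images)
    site_logits = site_classifier(GradReverse.apply(feats, lambd))
    # Minimizing this trains the classifier to detect the site, while the reversed
    # gradient trains the encoder to erase site-specific cues from its features.
    return nn.functional.cross_entropy(site_logits, site_labels)
```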

To further test these updates across diverse clinical settings, the scientists evaluated Mirai on new test sets from Karolinska in Sweden and Chang Gung Memorial Hospital in Taiwan, and found it obtained consistent performance. The team also analyzed the model’s performance across races, ages, and breast density categories in the MGH test set, and across cancer subtypes on the Karolinska dataset, and found it performed similarly across all subgroups. 

“African-American women continue to present with breast cancer at younger ages, and often at later stages,” says Salewai Oseni, a breast surgeon at Massachusetts General Hospital who was not involved with the work. “This, coupled with the higher incidence of triple-negative breast cancer in this group, has resulted in increased breast cancer mortality. This study demonstrates the development of a risk model whose prediction has notable accuracy across race. The opportunity for its use clinically is high.”

Here’s how Mirai works (a simplified code sketch follows these four steps):

1. The mammogram image is put through something called an “image encoder.”

2. Each image representation, as well as which view it came from, is aggregated with other images from other views to obtain a representation of the entire mammogram.

3. Alongside the mammogram, the patient’s traditional risk factors (age, weight, hormonal factors), like those used by the Tyrer-Cuzick model, are incorporated if available. If they are unavailable, the network’s own predicted values are used.

4. With this information, the additive-hazard layer predicts a patient’s risk for each year over the next five years. 
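For orientation, here is a compact sketch of how these four steps could fit together in PyTorch. The encoder and view aggregator below are simple stand-ins (the real system uses a far larger image encoder and a more sophisticated aggregator), and all names, sizes, and layer choices are hypothetical assumptions rather than the actual Mirai implementation.

```python
# Compact sketch of the four-step pipeline above (hypothetical names and sizes).
import torch
from torch import nn
import torch.nn.functional as F

class MiraiLikeModel(nn.Module):
    def __init__(self, img_pixels=256 * 256, feat_dim=256, n_risk_factors=10, max_followup=5):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_pixels, feat_dim), nn.ReLU())
        self.view_aggregator = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.risk_factor_predictor = nn.Linear(feat_dim, n_risk_factors)
        self.hazard_head = nn.Linear(feat_dim + n_risk_factors, max_followup)

    def forward(self, views, risk_factors=None):
        b, v = views.shape[:2]
        # Steps 1-2: encode each mammogram view, then aggregate views into one exam vector
        feats = self.image_encoder(views.reshape(b * v, -1)).reshape(b, v, -1)
        _, exam = self.view_aggregator(feats)
        exam = exam.squeeze(0)
        # Step 3: use provided traditional risk factors, or fall back to predicted ones
        predicted = self.risk_factor_predictor(exam)
        factors = risk_factors if risk_factors is not None else predicted
        # Step 4: additive-hazard-style output -> non-decreasing risk for years 1..5
        logits = self.hazard_head(torch.cat([exam, factors], dim=1))
        base, deltas = logits[:, :1], F.softplus(logits[:, 1:])
        return torch.sigmoid(torch.cumsum(torch.cat([base, deltas], dim=1), dim=1))

# Example: MiraiLikeModel()(torch.randn(2, 4, 256, 256)) -> per-patient risks for years 1-5
```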

Improving Mirai 

Although the current model doesn’t look at any of the patient’s previous imaging results, changes in imaging over time contain a wealth of information. In the future the team aims to create methods that can effectively utilize a patient’s full imaging history.

In a similar fashion, the team notes that the model could be further improved by utilizing “tomosynthesis,” a 3D X-ray technique used to screen asymptomatic patients. Beyond improving accuracy, additional research is required to determine how to adapt image-based risk models to different mammography devices with limited data.

“We know MRI can catch cancers earlier than mammography, and that earlier detection improves patient outcomes,” says Yala. “But for patients at low risk of cancer, the risk of false-positives can outweigh the benefits. With improved risk models, we can design more nuanced risk-screening guidelines that offer more sensitive screening, like MRI, to patients who will develop cancer, to get better outcomes while reducing unnecessary screening and over-treatment for the rest.” 

“We’re both excited and humbled to ask the question if this AI system will work for African-American populations,” says Judy Gichoya, MD, MS and assistant professor of interventional radiology and informatics at Emory University, who was not involved with the work. “We’re extensively studying this question, and how to detect failure.” 

Yala wrote the paper on Mirai alongside MIT research specialist Peter G. Mikhael, radiologist Fredrik Strand of Karolinska University Hospital, Gigin Lin of Chang Gung Memorial Hospital, Associate Professor Kevin Smith of KTH Royal Institute of Technology, Professor Yung-Liang Wan of Chang Gung University, Leslie Lamb of MGH, Kevin Hughes of MGH, senior author and Harvard Medical School Professor Constance Lehman of MGH, and senior author and MIT Professor Regina Barzilay. 

The work was supported by grants from Susan G. Komen, the Breast Cancer Research Foundation, Quanta Computing, and the MIT Jameel Clinic. It was also supported by a Chang Gung Medical Foundation grant and by a Stockholm Läns Landsting HMT grant.

Featured image: MIT researchers have improved their machine learning system, developed to predict cancer risk from mammogram images, and validated its effectiveness in studies across several hospitals. Credits: Images courtesy of the researchers.


Reference: Adam Yala, Peter G. Mikhael, Fredrik Strand, Gigin Lin, Kevin Smith, Yung-Liang Wan, Leslie Lamb, Kevin Hughes, Constance Lehman, Regina Barzilay, “Toward robust mammography-based models for breast cancer risk”, Science Translational Medicine 27 Jan 2021: Vol. 13, Issue 578, eaba4373 DOI: 10.1126/scitranslmed.aba4373 https://stm.sciencemag.org/content/13/578/eaba4373


Provided by MIT

Childhood Trauma Could Affect Development, Treatment of Multiple Sclerosis (Medicine)

Childhood trauma could affect the trajectory of multiple sclerosis development and response to treatment in adulthood, a new study in mice found.

Mice that had experienced stress when young were more likely to develop the autoimmune disorder and less likely to respond to a common treatment, researchers at the University of Illinois Urbana-Champaign found. However, treatment that activated an immune-cell receptor mitigated the effects of childhood stress in the mice.

Multiple sclerosis is a progressive autoimmune disease in which the body attacks and strips away the protective coating around neurons, resulting in a wide range of neurological symptoms. Both genetic and environmental factors play a role in MS development.

Previous work has shown that early-life trauma increases susceptibility to developing more severe MS, but researchers have not been able to determine how, said Makoto Inoue, a professor of comparative biosciences at Illinois. In the new study, published in Nature Communications, Inoue’s group studied a mouse model of MS. The mice were genetically susceptible to experimental autoimmune encephalomyelitis, the model most widely used for studying MS.

The researchers watched the development and progression of EAE in mice that had been briefly separated from their mother and given a saline injection while young, and compared it with that in mice that had not experienced the same stress.

“Mice that had early-life trauma were more susceptible to EAE disease development and suffered prolonged motor paralysis with severe neuronal damage in the central nervous system, which we found was caused by a heightened immune response,” said graduate student Yee Ming Khaw, the first author of the study.

The researchers traced the EAE triggers to the immune system – in particular, to a receptor on immune cells that binds the stress hormone norepinephrine. The researchers found that childhood stress in the mice triggered a prolonged release of norepinephrine. The receptor was activated for long periods of time, which led the cells to decrease the receptor’s expression – leaving the immune system less equipped to respond to the stress and inflammation of EAE.

Importantly, mice that developed EAE after stress in their childhoods did not respond to treatment with interferon beta, one of the initial therapies most widely prescribed to individuals with MS. Meanwhile, the drug effectively prevented EAE progression in mice without childhood stress, Khaw said.

Next, the researchers treated the mice with a compound that boosts the receptor’s response. The treatment prevented paralysis and slowed damage to the spinal cord. In addition, mice that received the treatment responded to interferon beta treatment, though they had not responded before.

“This work suggests that individuals with experience of childhood trauma develop autoimmune disease with symptoms and mechanisms that greatly differ from their peers with no history of childhood trauma, and may need different medical treatment,” Inoue said. “This receptor activator may be a therapeutic drug for MS patients with a history of childhood trauma.”

Next, the researchers plan to verify the mechanisms of the receptor, and to perform translational studies to verify whether boosting the receptor in human patients with MS gives the same benefits as it did for the mice with EAE.

“We believe that the best approach to addressing autoimmune diseases in individuals with a history of childhood trauma or other risk factors is a comprehensive and personalized medicine approach that addresses the whole person,” Inoue said.

The University of Illinois and the Sumitomo Foundation supported this work.

The paper “Early-life-trauma triggers interferon-β resistance and neurodegeneration in a multiple sclerosis model via downregulated β1-adrenergic signaling” is available online (https://www.nature.com/articles/s41467-020-20302-0).

Featured image: Graduate student Yee Ming Khaw and comparative biosciences professor Makoto Inoue led a study that found that childhood trauma may affect later development of multiple sclerosis and its responsiveness to treatment. In compliance with COVID-19 safety protocols, both subjects tested negative for COVID-19 prior to the photo. Photo by L. Brian Stauffer


Provided by Illinois News Bureau

New Technique Identifies Important Mutations Behind Lynch Syndrome (Medicine)

Approach could improve predictive value of genetic screening.

Colorectal cancer is the third most common form of cancer. While 90% of cases are in people older than 50, there is an as-yet unexplained rising incidence in younger people.

Family history ranks high among risk factors for developing colorectal cancer, and people with such a history are often advised to get more frequent screening tests or start screening sooner than the recommended age of 45 years old. Those with a family history of cancer often seek out genetic tests to look for mutations linked to cancer risk. However, those tests don’t always provide helpful information.

In a new paper in the American Journal of Human Genetics, Jacob Kitzman, Ph.D., of the Department of Human Genetics at Michigan Medicine, and a team of collaborators describe a method for screening so-called genetic variants of uncertain significance in the hopes of identifying those mutations that could cause disease.

To do this, they used a genetic condition called Lynch syndrome, also known as hereditary non-polyposis colorectal cancer. Like BRCA1, a gene known to cause certain breast cancers, there are a handful of genes behind Lynch syndrome that have been well described. However, “there’s a whole universe of possible genetic variants that can occur in genes associated with Lynch syndrome that we basically know nothing about,” says Kitzman.

Because most mutations are rare in the human population, it can be difficult to tell if any particular one is problematic. And studying one variant in a lab at a time takes a lot of time—often too much to be useful for making clinical decisions.

Using a technique called deep mutational scanning, the research team set out to measure the impact of mutations in the gene MSH2, which when mutated, is one major cause of Lynch syndrome.

“The key advance is rather than doing one mutation at a time, we did it in a pooled format which allowed us to test about 18,000 mutations in a single batch,” says Kitzman.

Using CRISPR-Cas technology, they deleted the normal copy of MSH2 from human cells and replaced it with a library of every possible mutation in the MSH2 gene. This created a mix of cells in which each cell carried a unique MSH2 mutation. This population of cells was treated with a drug known as 6-thioguanine, a chemotherapy that killed only the cells that had a functional variant of MSH2.

The counterintuitive idea, notes Kitzman, is that the surviving cells are the ones without functioning MSH2—which are the ones with mutations that are most likely to be disease-causing.
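The article does not detail how the pooled data are turned into per-variant scores, but a typical deep mutational scanning analysis compares each variant’s sequencing frequency before and after selection. The sketch below uses made-up counts and a hypothetical scoring function to illustrate the idea; it is not the study’s actual pipeline.

```python
# Illustrative scoring of a pooled deep mutational scan (not the study's code).
# Variants enriched among 6-thioguanine survivors lack MSH2 function and are
# therefore the candidates most likely to be pathogenic.
import math

def function_scores(pre_counts, post_counts, pseudocount=0.5):
    """log2 enrichment of each variant after selection (higher = likely loss of function)."""
    pre_total = sum(pre_counts.values())
    post_total = sum(post_counts.values())
    scores = {}
    for variant, pre in pre_counts.items():
        post = post_counts.get(variant, 0)
        pre_freq = (pre + pseudocount) / pre_total
        post_freq = (post + pseudocount) / post_total
        scores[variant] = math.log2(post_freq / pre_freq)
    return scores

# Made-up read counts for three hypothetical MSH2 missense variants
pre = {"variant_A": 1200, "variant_B": 950, "variant_C": 1100}
post = {"variant_A": 40, "variant_B": 2100, "variant_C": 1150}
print(function_scores(pre, post))
# variant_B is strongly enriched after 6-TG -> likely non-functional MSH2 (candidate pathogenic)
# variant_A is depleted -> MSH2 still functional (likely benign)
```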

“We were basically trying to sit down and make all the mutations we could so they could serve as a reference for ones that are newly seen or are amongst the thousands of variants of unknown significance identified in people from clinical testing,” says Kitzman. “Until now, geneticists could not be sure whether these are benign or pathogenic.”

The hope is that, with other patient-specific information, some of these variants may be able to be reclassified, and those people notified that they should undergo more intense screening.

Says Kitzman, “One of the next areas that will need some focus in the field of human genetics is to create these sorts of maps for many different genes where there is a clinical connection, so we can be more predictive when variants are found in an individual.”

Other University of Michigan contributors to this paper include first author Xiaoyan Jia, Bala Bharathi Burugula, Victor Chen, Rosemary M. Lemons, Sajini Jayakody, and Mariam Maksutova.

The work was supported in part by the National Institute of General Medical Sciences and a Precision Health Fellowship from the University of Michigan.

Featured image credit: gettyimages


Reference: “Massively parallel functional testing of MSH2 missense variants conferring Lynch syndrome risk,” The American Journal of Human Genetics. https://linkinghub.elsevier.com/retrieve/pii/S0002929720304390 DOI: 10.1016/j.ajhg.2020.12.003


Provided by Michigan Health Lab

Scientists Spot RPS-12 Protein as a Potential Target for Anti-cancer Therapy (Medicine)

Using the developing eye of the fruit fly as a test platform, researchers found that overproduction of the RPS12 protein appears to trigger triple-negative breast cancer and possibly some other malignancies. The protein indirectly switches on an important intracellular signaling pathway that is active during embryonic development but shut down in the healthy cells of adults. Scientists from Far Eastern Federal University (FEFU), the University of Geneva, and the Institute of Protein Research (Russia) reported the work in Scientific Reports.

Researchers have taken another step towards targeted treatment of tumors. The idea of such a therapy is to identify target proteins that play instructive roles in tumor initiation or progression, so that tumor development can be suppressed while causing minimal harm to healthy cells.

Using fruit flies and a cDNA library from patient-derived triple-negative breast cancer, the scientists launched a massive screen for potential novel human oncogenes, i.e. genes that, when mutated or misexpressed, drive elements of cancerous transformation. To find potential targets, they inserted genes found in the human tumor into the Drosophila genome, drove their misexpression in the insect’s eye, and looked for defects in the development of this sensitive organ.

When the human RPS12 gene was misexpressed in this way, the Drosophila eye shrank and took on a mirror-like appearance.

“This phenomenon is reminiscent of the classic glazed phenotype that Thomas Morgan, the father of Drosophila genetics, discovered back in the 1920s. Only in the 1990s was it understood that this mutation affects the Wingless gene. This Drosophila gene corresponds to the WNT genes that trigger the signaling pathway of the same name in humans. The activity of the WNT signaling pathway is vital for the development of the human body at the embryonic stage, but it is switched off at later stages. Mutations or epigenetic changes can reboot the signaling pathway in adults. After that, initially healthy cells start to proliferate massively. This is one of the reasons for the development of triple-negative breast cancer and some other forms of cancer, such as cancers of the colon, liver and ovaries,” explains Vladimir Katanaev, who conceived the project and heads the laboratory of pharmacology of natural compounds at the FEFU School of Biomedicine.

The scientists revealed that the glazed-eye phenotype arises because the expression of human RPS12 in the Drosophila eye overactivates the WNT/Wingless signaling pathway. An overabundance of RPS12 protein stimulates the production of active forms of Wingless capable of diffusing over long distances in the tissue and reaching distant cells. Conversely, a reduced amount of RPS12 decreases the production of such Wingless forms.

“The proteins of the Wingless/WNT family are very ‘sticky’. Their natural distribution in body tissues is limited, and the number of active forms migrating over long distances is under strict regulation. WNT is an example of a morphogen, i.e. a substance that is produced in specific places during embryogenesis and spreads through the tissue, generating a concentration gradient. If we consider a human hand as an example, the palm, elbow and shoulder are formed due to the reaction of cells to different concentrations of the WNT morphogen,” says Vladimir Katanaev.
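For readers who want the gradient idea in quantitative form, the standard synthesis-diffusion-degradation model of morphogens (a textbook result, not taken from this paper) captures what Katanaev describes:

```latex
% Production at x = 0, diffusion with coefficient D, degradation at rate k.
% At steady state, D * d^2C/dx^2 - k*C = 0, giving an exponentially decaying profile:
\[
  C(x) = C_0 \, e^{-x/\lambda}, \qquad \lambda = \sqrt{D/k},
\]
% so cells at different distances x read different WNT concentrations C(x)
% and adopt different fates, as in the hand example above.
```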

Special mechanisms are responsible for the production of WNT forms capable of spreading through tissue over long distances. One such mechanism, which the team studied earlier, is based on the protein reggie-1/flotillin-2.

“It turned out that RPS12 plays a similar role. Thus, we have unveiled a new mechanism for controlling the production of active forms of WNT, and we suggest that the RPS12 protein may become a new potential target for anticancer therapy. Further research will show how suitable this protein really is for therapeutic targeting,” concluded Vladimir Katanaev.

About 70-80 percent of the genes responsible for human disease have orthologous genes in Drosophila. Evolutionarily, these are practically the same genes, though their sequences differ somewhat between humans and fruit flies.

Drosophila eye development is complex and multistage. At various phases of its development, signaling pathways and cellular mechanisms that are also known in humans become active. Based on this, the scientists assumed that any human oncogene, if “transplanted” into the eye of a Drosophila, would disrupt the development of this organ. A fly with an affected eye survives to maturity, so eye developmental disorders are easy to observe simply by examining the insect under a microscope.

Scientists started the HumanaFly project more than 10 years ago at the Institute of Protein Research (Pushchino, Russia) in order to find new human oncogenes via the fruit fly eye screening platform. The final experiments of the first phase were carried out in 2020. An extensive genetic library has been formed for the subsequent search for components and mechanisms implicated in the development of cancer.

Featured image: FEFU lab for DNA diagnostics © FEFU press office


Reference: Vladimir L. Katanaev, Mikhail Kryuchkov, Volodymyr Averkov, Mikhail Savitsky, Kseniya Nikolaeva, Nadezhda Klimova, Sergei Khaustov & Gonzalo P. Solis, “HumanaFly: high-throughput transgenesis and expression of breast cancer transcripts in Drosophila eye discovers the RPS12-Wingless signaling axis”, Scientific Reports volume 10, Article number: 21013 (2020). https://www.nature.com/articles/s41598-020-77942-x


Provided by Far Eastern Federal University

How Might Vitamins, Steroids and Potential Antivirals Affect SARS-CoV-2? (Medicine)

Evidence is emerging that vitamin D – and possibly vitamins K and A – might help combat COVID-19. A new study from the University of Bristol, published in Angewandte Chemie, the journal of the German Chemical Society, has shown how they – and other antiviral drugs – might work. The research indicates that these dietary supplements and compounds could bind to the viral spike protein and so might reduce SARS-CoV-2 infectivity. In contrast, cholesterol may increase infectivity, which could explain why having high cholesterol is considered a risk factor for serious disease.

Recently, Bristol researchers showed that linoleic acid binds to a specific site in the viral spike protein, and that by doing so, it locks the spike into a closed, less infective form. Now, a research team has used computational methods to search for other compounds that might have the same effect, as potential treatments. They hope to prevent human cells becoming infected by preventing the viral spike protein from opening enough to interact with a human protein (ACE2). New anti-viral drugs can take years to design, develop and test, so the researchers looked through a library of approved drugs and vitamins to identify those which might bind to this recently discovered ‘druggable pocket’ inside the SARS-CoV-2 spike protein.

The team first studied the effects of linoleic acid on the spike, using computational simulations to show that it stabilizes the closed form. Further simulations showed that dexamethasone – which is an effective treatment for COVID-19 – might also bind to this site and help reduce viral infectivity in addition to its effects on the human immune system.

The team then conducted simulations to see which other compounds bind to the fatty acid site. This identified some drugs that experiments have already found to be active against the virus, suggesting that locking the spike structure in the same way as linoleic acid may be one mechanism by which they prevent viral replication.

The findings suggested several drug candidates among available pharmaceuticals and dietary components, including some that have been found to slow SARS-CoV-2 reproduction in the laboratory. These have the potential to bind to the SARS-CoV-2 spike protein and may help to prevent cell entry.

The simulations also predicted that the fat-soluble vitamins D, K and A bind to the spike in the same way, making the spike less able to infect cells.

Dr Deborah Shoemark, Senior Research Associate (Biomolecular Modelling) in the School of Biochemistry, who modelled the spike, explained: “Our findings help explain how some vitamins may play a more direct role in combatting COVID than their conventional support of the human immune system.

“Obesity is a major risk factor for severe COVID. Vitamin D is fat soluble and tends to accumulate in fatty tissue. This can lower the amount of vitamin D available to obese individuals. Countries in which some of these vitamin deficiencies are more common have also suffered badly during the course of the pandemic. Our research suggests that some essential vitamins and fatty acids including linoleic acid may contribute to impeding the spike/ACE2 interaction. Deficiency in any one of them may make it easier for the virus to infect.”

Pre-existing high cholesterol levels have been associated with increased risk for severe COVID-19. Reports that the SARS-CoV-2 spike protein binds cholesterol led the team to investigate whether it could bind at the fatty acid binding site. Their simulations indicate that it could bind, but that it may have a destabilising effect on the spike’s locked conformation, and favour the open, more infective conformation.

Dr Shoemark continued: “We know that the use of cholesterol lowering statins reduces the risk of developing severe COVID and shortens recovery time in less severe cases. Whether cholesterol de-stabilises the “benign”, closed conformation or not, our results suggest that by directly interacting with the spike, the virus could sequester cholesterol to achieve the local concentrations required to facilitate cell entry and this may also account for the observed loss of circulating cholesterol post infection.”

Professor Adrian Mulholland, of Bristol’s School of Chemistry, added: “Our simulations show how some molecules binding at the linoleic acid site affect the spike’s dynamics and lock it closed. They also show that drugs and vitamins active against the virus may work in the same way. Targeting this site may be a route to new anti-viral drugs. A next step would be to look at effects of dietary supplements and test viral replication in cells.”

Alison Derbenwick Miller, Vice President, Oracle for Research, said: “It’s incredibly exciting that researchers are gaining new insights into how SARS-CoV-2 interacts with human cells, which ultimately will lead to new ways to fight COVID-19. We are delighted that Oracle’s high-performance cloud infrastructure is helping to advance this kind of world-changing research. Growing a globally-connected community of cloud-powered researchers is exactly what Oracle for Research is designed to do.”

The team included experts from Bristol UNCOVER Group, including Bristol’s Schools of Chemistry, Biochemistry, Cellular and Molecular Medicine, and Max Planck Bristol Centre for Minimal Biology, and BrisSynBio, using Bristol’s high performance computers and the UK supercomputer, ARCHER, as well as Oracle cloud computing. The study was supported by grants from the EPSRC and the BBSRC.

Featured image credit: University of Bristol


Reference: ‘Molecular simulations suggest vitamins, retinoids and steroids as ligands binding the free fatty acid pocket of SARS-CoV-2 spike protein’ by Deborah Karen Shoemark, Charlotte K. Colenso, Christine Toelzer, Kapil Gupta, Richard B. Sessions, Andrew D. Davidson, Imre Berger, Christiane Schaffitzel, James Spencer and Adrian J. Mulholland in Angewandte Chemie (2021).


Provided by University of Bristol

Automated AI Algorithm Uses Routine Imaging to Predict Cardiovascular Risk (Medicine)

Artificial intelligence deep learning system can automatically measure coronary artery calcium from routine CT scans and predict cardiovascular events like heart attacks.

Coronary artery calcification — the buildup of calcified plaque in the walls of the heart’s arteries — is an important predictor of adverse cardiovascular events like heart attacks. Coronary calcium can be detected by computed tomography (CT) scans, but quantifying the amount of plaque requires radiological expertise, time and specialized equipment. In practice, even though chest CT scans are fairly common, calcium score CTs are not. Investigators from Brigham and Women’s Hospital’s Artificial Intelligence in Medicine (AIM) Program and the Massachusetts General Hospital’s Cardiovascular Imaging Research Center (CIRC) teamed up to develop and evaluate a deep learning system that may help change this. The system automatically measures coronary artery calcium from CT scans to help physicians and patients make more informed decisions about cardiovascular prevention. The team validated the system using data from more than 20,000 individuals with promising results. Their findings are published in Nature Communications.

“Coronary artery calcium information could be available for almost every patient who gets a chest CT scan, but it isn’t quantified simply because it takes too much time to do this for every patient,” said corresponding author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) Program at the Brigham and Harvard Medical School. “We’ve developed an algorithm that can identify high-risk individuals in an automated manner.”

Working with colleagues, lead author Roman Zeleznik, MSc, a data scientist in AIM, developed the deep learning system described in the paper to automatically and accurately predict cardiovascular events by scoring coronary calcium. While the tool is currently only for research purposes, Zeleznik and co-authors have made it open source and freely available for anyone to use.

“In theory, the deep learning system does a lot of what a human would do to quantify calcium,” said Zeleznik. “Our paper shows that it may be possible to do this in an automated fashion.”
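For context, the quantity the deep learning system automates is the standard Agatston coronary calcium score. The sketch below shows the conventional per-lesion calculation (the long-established manual method and commonly cited thresholds, not the authors’ code, which must additionally detect and segment the lesions automatically).

```python
# Conventional Agatston scoring of a calcified lesion on one CT slice
# (standard method shown for context; the deep learning system automates this).
import numpy as np

def density_weight(peak_hu: float) -> int:
    # Weight factor based on the lesion's peak attenuation in Hounsfield units
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_lesion_score(slice_hu: np.ndarray, lesion_mask: np.ndarray,
                          pixel_area_mm2: float) -> float:
    """Score one lesion on one axial slice: area (mm^2) x density weight."""
    lesion_hu = slice_hu[lesion_mask]
    if lesion_hu.size == 0 or lesion_hu.max() < 130:   # calcium threshold: 130 HU
        return 0.0
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    return area_mm2 * density_weight(float(lesion_hu.max()))

# The total Agatston score is the sum over all lesions and slices; a score of 0
# means no detectable coronary calcium, while scores above roughly 400 indicate
# extensive calcification and substantially elevated cardiovascular risk.
```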

The team began by training the deep learning system on data from the Framingham Heart Study (FHS), a long-term asymptomatic community cohort study. Framingham participants received dedicated calcium scoring CT scans, which were manually scored by expert human readers and used to train the deep learning system. The deep learning system was then applied to three additional study cohorts, which included heavy smokers having lung cancer screening CT (NLST: National Lung Screening Trial), patients with stable chest pain having cardiac CT (PROMISE: the Prospective Multicenter Imaging Study for Evaluation of Chest Pain), and patients with acute chest pain having cardiac CT (ROMICAT-II: the Rule Out Myocardial Infarction using Computer Assisted Tomography trial). All told, the team validated the deep learning system in over 20,000 individuals.

Udo Hoffmann, MD, director of CIRC@MGH who is the principal investigator of CT imaging in the FHS, PROMISE and ROMICAT, emphasized that one of the unique aspects of this study is the inclusion of three National Heart, Lung, and Blood Institute–funded high-quality image and outcome trials that strengthen the generalizability of these results to clinical settings.

The automated calcium scores from the deep learning system highly correlated with the manual calcium scores from human experts. The automated scores also independently predicted who would go on to have a major adverse cardiovascular event like a heart attack.

The coronary artery calcium score plays an important role in current guidelines for who should take a statin to prevent heart attacks. “This is an opportunity for us to get additional value from these chest CTs using AI,” said co-author Michael Lu, MD, MPH, director of artificial intelligence at MGH’s Cardiovascular Imaging Research Center. “The coronary artery calcium score can help patients and physicians make informed, personalized decisions about whether to take a statin. From a clinical perspective, our long-term goal is to implement this deep learning system in electronic health records, to automatically identify the patients at high risk.”

Funding for this work was provided by the National Institutes of Health (NIH-USA U24CA194354, NIH-USA U01CA190234, NIH-USA U01CA209414, NIH-USA R35CA22052, 5R01-HL109711, NIH/NHLBI 5K24HL113128, NIH/NHLBI 5T32HL076136, NIH/NHLBI 5U01HL123339), the European Union – European Research Council (866504), as well as the German Research Foundation (1438/1-1, 6405/2-1), the American Heart Association Institute for Precision Cardiovascular Medicine (18UNPG34030172), a Fulbright Visiting Researcher Grant (E0583118), and a Rosztoczy Foundation Grant.


Reference: Zeleznik et al. “Deep convolutional neural networks to predict cardiovascular risk from computed tomography” Nature Communications DOI: 10.1038/s41467-021-20966-2


Provided by Brigham and Women’s Hospital

Novel Therapy-resistance Mechanism Promoting the Growth of Breast Cancer Brain Metastasis (Medicine)

SORLA is a protein trafficking receptor that has been studied mainly in neurons, but it also plays a role in cancer cells. Professor Johanna Ivaska’s research group at Turku Bioscience observed that SORLA functionally contributes to the most frequently reported therapy-resistance mechanism in HER2-positive cancers, in which the cell-surface receptor HER3 counteracts HER2-targeted therapy. Removing SORLA from cancer cells sensitised anti-HER2-resistant breast cancer brain metastases to targeted therapy.

The HER2 protein is a strong driver of tumor growth. HER2 amplification occurs in about 20% of breast cancers, and overexpression or amplification of HER2 is also commonly found in bladder and gastric cancers. HER2-targeting therapies, such as Herceptin, are widely used in clinical care and play an important role in the treatment of HER2-positive cancers.

However, some patients eventually progress during Herceptin treatment, and this therapy resistance is frequently linked to upregulation of the HER3 receptor. The newly discovered role of SORLA in supporting HER3 expression and drug resistance offers novel possibilities for targeting drug-resistant HER2-positive cancers in the future.

– HER2 tumors can become therapy resistant by upregulating HER3. Currently these tumors are un-druggable as there are no HER3 targeted therapies available. Our study showed that removing SORLA protein from drug-resistant HER2-positive cancer cell lines sensitised breast cancer brain metastasis to anti-HER2 therapy. To date, very little has been known about SORLA in cancer. Our discovery that HER3 receptor-induced drug resistance is dependent on SORLA was surprising, since this cancer type and its resistance mechanisms have already been widely studied, says lead author, Post-doctoral Researcher Hussein Al-Akhrass from Turku Bioscience at the University of Turku.

A new understanding of these mechanisms opens the possibility of controlling the growth of breast cancer cells in their most aggressive setting, when they form tumors in the brain.

SORLA removal sensitises metastatic breast cancer cells to HER2 targeting therapy. Aggressive metastatic breast cancer cells growing in the brains of fish embryos. The tumors are resistant to anti-HER2 therapy alone but sensitive when anti-HER2 therapy is combined with SORLA depletion. Pictures by Dr Ilkka Paatero from Turku Bioscience Zebrafish Core Facility.

In vitro cell culture experiments showed that the SORLA protein promotes the recycling of the HER3 receptor back to the plasma membrane, where the receptor is active and drives the proliferation of cancer cells. When SORLA was removed, the HER3 receptor was degraded in the cells, sensitising them to anti-HER2 therapy.

The research group’s next goal is to find a way to block the function of SORLA in tumor cells and to determine whether a SORLA-targeting treatment could be developed.

The Ivaska lab is located at Turku Bioscience, which is operated by the University of Turku and Åbo Akademi University in Finland. The study was funded by the Finnish Cancer Organisations and the Sigrid Juselius Foundation.

The study was recently published in the high-ranking journal Oncogene.


Reference: Al-Akhrass, H., Conway, J.R.W., Poulsen, A.S.A. et al. A feed-forward loop between SorLA and HER3 determines heregulin response and neratinib resistance. Oncogene (2021). https://www.nature.com/articles/s41388-020-01604-5 https://doi.org/10.1038/s41388-020-01604-5


Provided by University of Turku

Hijacking the Host Defenses Gives Bacteria an Advantage (Biology)

A metabolic switch in microbe-fighting macrophages signals bacteria to convert them to hotels with amenities

Bacteria that cause life-threatening infections sometimes resort to the nastiest ploy of all: Stealing the human body’s defense weapons and exploiting them to their own advantage. Researchers at the Weizmann Institute of Science have now uncovered one such strategy used by Salmonella.

When Salmonella bacteria penetrate the human gut, they can cause diarrhea and other symptoms of food poisoning that often stay mild, but if they get into the bloodstream and from there into the liver, spleen and other body organs, they are liable to cause more severe disease that can be fatal. In the case of such invasion, large protective cells called macrophages try to stop the infection by swallowing the Salmonella whole. The bacteria, however, sometimes manage not only to survive but to thrive inside the macrophages, even converting them into incubators that facilitate their spread.

In a study led by doctoral student Gili Rosenberg in the lab of Dr. Roi Avraham of the Biological Regulation Department, the researchers started out by exposing macrophages to Salmonella and examining the changes that occur in these cells. As the macrophages gear up to fight the bacteria, their metabolism undergoes such a major shift that they switch from producing energy in the cellular organelles called mitochondria to a massive burning of glucose. But when the scientists blocked this metabolic shift in the macrophages, they found – to their surprise – that the bacteria, instead of growing more aggressive, became less virulent.

This finding suggested that the virulence of Salmonella was somehow dependent on the metabolic shift. In other words, the very changes in cellular metabolism that were intended to help the macrophages deal with the infection could be hijacked and abused by Salmonella. The scientists checked all the metabolites that accumulate in macrophages when they fight Salmonella, and they zeroed in on a compound called succinate. This compound is known to act as a signaling molecule that the macrophages use to activate their defenses against invading bacteria: Succinate promotes the recruitment of the immune system and the generation of toxic inflammatory compounds that can kill the bacteria.

(l-r) Gili Rosenberg and Dr. Roi Avraham © WIS

But as the scientists discovered, the bacteria – in the course of evolution – had learned to make use of this very molecule as a signal to become more virulent and to manipulate the contents of the macrophages to their own benefit. Succinate, as they found, activates certain bacterial genes, causing the Salmonella to grow a needle that punctures vacuoles – closed compartments within the macrophage that keep bacteria wrapped in “hazmat” padding. The needle then secretes substances that neutralize the giant cell’s killing mechanism. On top of this, succinate activates a mechanism that protects the Salmonella from antimicrobial peptides secreted within macrophages, so that the bacteria now feel free to treat the macrophage as a hotel, with all the amenities.

To confirm that these manipulations are indeed dependent on succinate, the scientists genetically engineered Salmonella to disable the transporter molecule that enables these bacteria to take up succinate, and compared the mutant bacteria to unaltered ones – that is, ones that can make full use of succinate. The mutant bacteria failed to survive inside macrophages and were much less effective at infecting mice than the unaltered ones.

The bacteria – in the course of evolution – had learned to make use of this very molecule as a signal to become more virulent

In addition to providing insights into infection by Salmonella, the study’s results pave the way to investigating whether other intracellular bacteria hijack the immune metabolites that accumulate in macrophages following bacterial infection. These might include the bacteria responsible for tuberculosis, as well as Listeria, which can cause a form of meningitis and other severe infections, and Shigella, a common cause of children’s diarrhea in Africa and South Asia. 

The study’s findings may serve as a basis for developing antibacterial therapies to block the uptake of succinate by bacteria; such drugs would be more targeted than existing antibiotics.

Salmonella (bright green) inside macrophages (brownish yellow), viewed under a microscope © WIS

“Whereas antibiotics kill all bacteria, including the good ones, a therapy based on blocking succinate can be aimed at killing only those that cause disease,” Rosenberg says.

Study participants included Dror Yehezkel, Dotan Hoffman, Leia Vainman, Noa Nissani, Dr. Shelly Hen-Avivi, Dr. Shirley Brenner, Dr. Noa Bossel Ben-Moshe and Hadar Ben-Arosh of the Biological Regulation Department; and Dr. Maxim Itkin and Dr. Sergey Malitsky of the Life Sciences Core Facilities Department.

Dr. Roi Avraham’s research is supported by the Sagol Weizmann-MIT Bridge Program; the Pasteur-Weizmann Delegation; the estate of Zvia Zeroni; and the European Research Council. Dr. Avraham is the incumbent of the Philip Harris and Gerald Ronson Career Development Chair. 

Featured image: Macrophages, large immune cells, “swallow” bacteria whole. Getty images


Reference: Gili Rosenberg, Dror Yehezkel, Dotan Hoffman et al., “Host succinate is an activation signal for Salmonella virulence during intracellular infection”, Science 22 Jan 2021: Vol. 371, Issue 6527, pp. 400-405
DOI: 10.1126/science.aba8026 https://science.sciencemag.org/content/371/6527/400


Provided by Weizmann Institute of Science