Putty-like Composites of Gallium Metal with Potential for Real-world Applications (Materials Science)

The excellent intrinsic properties of these composites could enable a wide range of uses, from shielding the power grid against coronal mass ejection events to more effective thermal pastes.

Gallium is a highly useful element that has accompanied the advancement of technology throughout the 20th century. It is designated a technologically critical element because it is essential for the fabrication of semiconductors and transistors. Notably, gallium nitride and related compounds enabled the blue LED, the final key in the development of energy-efficient, long-lasting white LED lighting; this work was recognized with the 2014 Nobel Prize in Physics. It is estimated that up to 98% of the demand for gallium originates from the semiconductor and electronics industry.

In addition to its use in electronics, the unique physical properties of gallium have led to its use in other areas. Gallium is a metal with a very low melting point (about 30 °C), making it a liquid at just above room temperature. Gallium also forms several eutectic systems (alloys with a melting point lower than that of any of their constituents, including gallium) with a number of other metals. Both pure gallium and these gallium-based liquid metal alloys have high surface tension and are considered “non-spreadable” on most surfaces. This makes them difficult to handle, shape, or process, which limits their potential for real-world application. However, a recent discovery may have unlocked the possibility of broader use of gallium in the field of functional materials.

A research team at the Center for Multidimensional Carbon Materials (CMCM) within the Institute for Basic Science (IBS) in Ulsan, South Korea, and the Ulsan National Institute of Science and Technology (UNIST) has invented a new method for incorporating filler particles into liquid gallium to create functional liquid metal composites. The incorporation of fillers transforms the material from a liquid into either a paste- or putty-like form (with a consistency and “feel” similar to the commercial product “Plasticine”), depending on the amount of added particles. When graphene oxide (G-O) was used as the filler, a G-O content of 1.6–1.8% resulted in a paste-like form, while 3.6% was optimal for putty formation. A variety of new gallium composites and the mechanism of their formation are described in a recent article published in the journal Science Advances.

Mixing particles into the gallium-based liquid metal alters the physical properties of the material, which allows for much easier handling. First author Chunhui Wang notes: “The ability of liquid gallium composites to form pastes or putties is extremely beneficial. It removes most of the issues of handling gallium for applications. It no longer stains surfaces, it can be coated or ‘painted’ onto almost any surface, and it can be molded into a variety of shapes. This opens up a wide variety of applications for gallium not seen before.” Potential applications include situations where soft and flexible electronics are required, such as wearable devices and medical implants. The study even showed that the composite can be fashioned into a porous, foam-like material with extreme heat resistance, able to withstand a blowtorch for one minute without sustaining any damage.

In this study, the team identified the factors that allow fillers to mix successfully with liquid gallium. Co-corresponding author Benjamin Cunning described the prerequisites: “Liquid gallium develops an oxide ‘skin’ when exposed to air, and this is crucial for mixing. This skin coats the filler particle and stabilizes it inside the gallium, but this skin is resilient. We learned that particles of a large enough size have to be used, otherwise mixing cannot occur and a composite cannot be formed.”

Figure. (a) Liquid gallium being poured into a container. (b) Gallium putty being molded into a ball. (c) Various figures made from gallium putty. (d) Gallium putty being cut by a blade. (e) The mechanism of formation of gallium putty involves filler particles being encapsulated by a gallium oxide layer and incorporated into the gallium. © Benjamin Cunning et al.

The researchers used four different materials as fillers in their study: graphene oxide, silicon carbide, diamond, and graphite. Two of these in particular displayed excellent properties when incorporated into liquid gallium: reduced graphene oxide (rG-O) for electromagnetic interference (EMI) shielding and diamond particles for thermal interface materials. A 13-micron-thick coating of the Ga/rG-O composite on a reduced graphene oxide film improved the film’s shielding efficiency from 20 dB to 75 dB, sufficient for both commercial (>30 dB) and military (>60 dB) applications. The most remarkable property of the composite, however, was its ability to impart EMI shielding to almost any common everyday material: the researchers demonstrated that a similar 20-micron-thick coating of Ga/rG-O applied to a simple sheet of paper yielded a shielding efficiency of over 70 dB.
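
For readers less familiar with decibel figures, shielding effectiveness maps directly onto the fraction of incident power that still gets through. The short sketch below is a generic back-of-the-envelope conversion, not code from the study; the labels simply reuse the figures quoted above.

```python
def transmitted_fraction(se_db: float) -> float:
    """Fraction of incident power that passes through a shield with
    shielding effectiveness SE, where SE (dB) = 10 * log10(P_in / P_out)."""
    return 10 ** (-se_db / 10)

# Shielding figures quoted in the article, used here purely as examples
for label, se_db in [("uncoated rG-O film", 20),
                     ("commercial threshold", 30),
                     ("military threshold", 60),
                     ("Ga/rG-O-coated paper", 70),
                     ("Ga/rG-O-coated rG-O film", 75)]:
    print(f"{label:>25}: {se_db:>2} dB -> {transmitted_fraction(se_db):.0e} of incident power transmitted")
```

A 75 dB shield therefore passes only about three parts in a hundred million of the incident power, compared with one percent for the bare 20 dB film.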

Perhaps most exciting was the thermal performance when diamond particles were incorporated into the material. The CMCM team measured the thermal conductivities in collaboration with UNIST researchers Dr. Shalik Joshi and Prof. KIM Gun-ho, while the “real-world” application experiments were carried out by LEE Seunghwan and Prof. LEE Jaeseon. The thermal conductivity experiments showed that the diamond-containing composite had bulk thermal conductivities of up to ~110 W m⁻¹ K⁻¹, with larger filler particles yielding greater thermal conductivity; this exceeds the thermal conductivity of a commercially available thermal paste (79 W m⁻¹ K⁻¹) by roughly 40%. The application experiment further demonstrated the gallium-diamond mixture’s effectiveness as a thermal interface material (TIM) between a heat source and a heat sink. Interestingly, the composite with smaller diamond particles showed superior real-world cooling capability despite having lower thermal conductivity. The discrepancy arises because the larger diamond particles are more prone to protruding through the bulk gallium, creating air gaps at the interface between the TIM and the heat sink or heat source, which reduces its effectiveness. (Ruoff notes that there are some likely ways to solve this issue in the future.)
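
The trade-off described above, higher bulk conductivity losing to better interface contact, can be illustrated with a simple one-dimensional series-resistance model. This is only a hedged sketch: the contact resistances, contact area, and bond-line thickness are invented for illustration and are not measurements from the paper.

```python
def tim_thermal_resistance(k_bulk_w_mk: float, thickness_m: float,
                           area_m2: float, r_contact_k_w: float) -> float:
    """Total thermal resistance (K/W) of a TIM layer modeled as bulk
    conduction (t / (k*A)) in series with an interfacial contact resistance."""
    return thickness_m / (k_bulk_w_mk * area_m2) + r_contact_k_w

AREA = 1e-4        # 1 cm^2 contact area (illustrative)
BOND_LINE = 50e-6  # 50-micron TIM thickness (illustrative)

# Hypothetical comparison: large particles give higher bulk conductivity but,
# by protruding and trapping air, a worse contact; small particles the reverse.
r_large = tim_thermal_resistance(110, BOND_LINE, AREA, r_contact_k_w=0.15)
r_small = tim_thermal_resistance(70,  BOND_LINE, AREA, r_contact_k_w=0.02)

print(f"large-particle composite: {r_large:.3f} K/W total")
print(f"small-particle composite: {r_small:.3f} K/W total")
```

Under these assumed numbers the lower-conductivity mixture still wins because the interface term dominates the total, which is the qualitative effect the authors report.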

Lastly, the group also created and tested a composite made from a mixture of gallium metal and a commercial silicone putty better known as “Silly Putty” (Crayola LLC). This type of gallium-containing composite forms by an entirely different mechanism, in which small droplets of gallium are dispersed throughout the Silly Putty. While it lacks the impressive EMI shielding ability of the Ga/rG-O composite described above (a 2-mm-thick layer is required to achieve the same 70 dB shielding efficiency), this is offset by superior mechanical properties. Since this composite uses silicone polymer rather than gallium metal as the base material, it is stretchable in addition to being malleable.

Prof. Rod Ruoff, director of the CMCM, who conceived the idea of mixing such carbon fillers with liquid metals, notes: “We first submitted this work in September 2019, and it has undergone a few iterations since then. We have discovered that a wide variety of particles can be incorporated into liquid gallium and have provided a fundamental understanding of how particle size plays a role in successful mixing. We found this behavior extends to gallium alloys that are liquid at temperatures below room temperature, such as indium-gallium, tin-gallium, and indium-tin-gallium. The capabilities of our UNIST collaborators have demonstrated outstanding applications for these composites, and we hope our work inspires others to discover new functional fillers with exciting applications.”

Reference: Chunhui Wang, Yan Gong, Benjamin V. Cunning, Seunghwan Lee, Quan Le, Shalik R. Joshi, Onur Buyukcakir, Hanyang Zhang, Won Kyung Seong, Ming Huang, Meihui Wang, Jaeseon Lee, Gun-Ho Kim, Rodney S. Ruoff, “A General Approach to Composites Containing Non-Metallic Fillers and Liquid Gallium,” Science Advances, Vol. 7, no. 1, eabe3767 (2021). https://advances.sciencemag.org/content/7/1/eabe3767
DOI: 10.1126/sciadv.abe3767

Provided by Institute for Basic Science

Comb of a Lifetime: A New Method for Fluorescence Microscopy

Scientists develop a fluorescence “lifetime” microscopy technique that uses frequency combs and no mechanical parts to observe dynamic biological phenomena.

Conventional fluorescence microscopy provides poor quantitative information about the sample because it only captures fluorescence intensity, which changes frequently and depends on external factors. Now, scientists from Japan have developed a new fluorescence microscopy technique that measures both fluorescence intensity and lifetime. Their method does not require mechanical scanning of a focal point; instead, it produces images from all points in the sample simultaneously, enabling a more quantitative study of dynamic biological and chemical processes.

Fluorescence microscopy is widely used in biochemistry and the life sciences because it allows scientists to directly observe cells and certain compounds in and around them. Fluorescent molecules absorb light within a specific wavelength range and then re-emit it at longer wavelengths. However, the major limitation of conventional fluorescence microscopy techniques is that the results are very difficult to evaluate quantitatively: fluorescence intensity is significantly affected by both the experimental conditions and the concentration of the fluorescent substance. Now, a new study by scientists from Japan is set to revolutionize the field of fluorescence lifetime microscopy. Read on to understand how!

A way around this problem is to focus on fluorescence lifetime instead of intensity. When a fluorescent substance is irradiated with a short burst of light, the resulting fluorescence does not disappear immediately but “decays” over time in a way that is specific to that substance. The “fluorescence lifetime microscopy” technique leverages this phenomenon, which is independent of experimental conditions, to accurately quantify fluorescent molecules and changes in their environment. However, fluorescence decay is extremely fast, and ordinary cameras cannot capture it. While a single-point photodetector can be used instead, it has to be scanned across the sample to reconstruct a complete 2D picture point by point. This process involves the movement of mechanical parts, which greatly limits the speed of image capture.
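
In the simplest case, the decay the technique exploits is modeled as a single exponential (a textbook relation, not an equation taken from the paper):

$$ I(t) = I_0 \, e^{-t/\tau}, $$

where $\tau$ is the fluorescence lifetime characteristic of the fluorophore and its local environment. Equivalently, in the frequency domain, sinusoidally modulated excitation at frequency $f$ produces emission delayed by a phase $\varphi$ with $\tan\varphi = 2\pi f \tau$, which is the relation that phase-based lifetime measurements rely on.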

Fortunately, in this recent study published in Science Advances, the aforementioned team of scientists developed a novel approach to acquire fluorescence lifetime images without mechanical scanning. Professor Takeshi Yasui of the Institute of Post-LED Photonics (pLED), Tokushima University, Japan, who led the study, explains, “Our method can be interpreted as simultaneously mapping 44,400 ‘light stopwatches’ over a 2D space to measure fluorescence lifetimes, all in a single shot and without scanning.” So, how was this achieved?

2D arrangement of 44,400 light stopwatches enables scan-less fluorescence lifetime imaging © Takeshi Yasui

One of the main pillars of their method is the use of an optical frequency comb as the excitation light for the sample. An optical frequency comb is essentially a light signal composed of the sum of many discrete optical frequencies with a constant spacing between them. The word “comb” in this context refers to how the signal looks when plotted against optical frequency: a dense cluster of equidistant “spikes” rising from the optical frequency axis and resembling a hair comb. Using special optical equipment, a pair of excitation frequency combs is decomposed into individual optical beat signals (dual-comb optical beats), each carrying a distinct intensity-modulation frequency, and irradiated onto the target sample. The key here is that each light beam hits the sample at a spatially distinct location, creating a one-to-one correspondence between each point on the 2D surface of the sample (each pixel) and each modulation frequency of the dual-comb optical beats.

Because of its fluorescence properties, the sample re-emits part of the captured radiation while preserving this frequency-position correspondence. The fluorescence emitted from the sample is then simply focused by a lens onto a high-speed single-point photodetector. Finally, the measured signal is mathematically transformed into the frequency domain, and the fluorescence lifetime at each “pixel” is calculated from the relative phase delay between the excitation and the measured fluorescence at that pixel’s modulation frequency.
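
The numerical sketch below illustrates the frequency-multiplexing idea with just two “pixels.” It is a toy model, not the authors’ processing code: the modulation frequencies, lifetimes, sampling rate, and the single-exponential response (phase lag arctan(2πfτ), inverted as τ = tan φ / (2πf)) are all illustrative assumptions.

```python
import numpy as np

FS = 50e6                      # detector sampling rate (Hz), illustrative
N = 500_000                    # 10 ms record
t = np.arange(N) / FS

# Hypothetical pixel map: each pixel is tagged by its own modulation frequency (Hz)
# and carries a different "true" fluorescence lifetime (s).
pixels = {4e6: 40e-9, 6e6: 25e-9}

detector = np.zeros_like(t)
for f_mod, tau in pixels.items():
    # Single-exponential fluorophore driven by an intensity-modulated beat:
    # the emission is attenuated and phase-delayed relative to the excitation.
    phase_lag = np.arctan(2 * np.pi * f_mod * tau)
    depth = 1 / np.sqrt(1 + (2 * np.pi * f_mod * tau) ** 2)
    detector += 1 + depth * np.cos(2 * np.pi * f_mod * t - phase_lag)

# All pixels land on one single-point photodetector; demultiplex by Fourier transform
spectrum = np.fft.rfft(detector)
freqs = np.fft.rfftfreq(N, 1 / FS)
for f_mod, tau_true in pixels.items():
    k = np.argmin(np.abs(freqs - f_mod))
    phase = -np.angle(spectrum[k])                 # phase delay at that pixel's frequency
    tau_est = np.tan(phase) / (2 * np.pi * f_mod)  # frequency-domain lifetime estimate
    print(f"{f_mod/1e6:.0f} MHz pixel: true {tau_true*1e9:.0f} ns, recovered {tau_est*1e9:.1f} ns")
```

Each tagged frequency can be read out independently from the same detector record, which is what allows every pixel's lifetime to be recovered in a single shot.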

Principle of operation. Manga courtesy: Suana Science YMY

Thanks to its superior speed and high spatial resolution, the microscopy method developed in this study will make it easier to exploit the advantages of fluorescence lifetime measurements. “Because our technique does not require scanning, a simultaneous measurement over the entire sample is guaranteed in each shot,” remarks Prof. Yasui. “This will be helpful in the life sciences, where dynamic observations of living cells are needed.” In addition to providing deeper insight into biological processes, this new approach could be used for simultaneous imaging of multiple samples for antigen testing, which is already being used for the diagnosis of COVID-19.

Perhaps most importantly, this study showcases how optical frequency combs, which had so far been used mainly as “frequency rulers,” can find a place in microscopy techniques and push the envelope in the life sciences. It holds promise for the development of novel therapeutic options to treat intractable diseases and enhance life expectancy, thereby benefitting the whole of humanity.

Reference: T. Mizuno, E. Hase, T. Minamikawa, Y. Tokizane, R. Oe, H. Koresawa, H. Yamamoto, T. Yasui, “Full-field fluorescence-lifetime dual-comb microscopy using spectral mapping and frequency multiplexing of dual-optical-comb beats,” Science Advances, Vol. 7, no. 1, eabd2102 (2021). DOI: 10.1126/sciadv.abd2102 https://advances.sciencemag.org/content/7/1/eabd2102

Provided by Tokushima University

Nanoparticle Drug-delivery System Developed to Treat Brain Disorders (Neuroscience)

Use of the delivery system in mouse models results in unprecedented siRNA penetration across the intact blood-brain barrier.

  • Technology could offer potential for a variety of human neurological disorders.

In the past few decades, researchers have identified biological pathways leading to neurodegenerative diseases and developed promising molecular agents to target them. However, the translation of these findings into clinically approved treatments has progressed at a much slower rate, in part because of the challenges scientists face in delivering therapeutics across the blood-brain barrier (BBB) and into the brain. To facilitate successful delivery of therapeutic agents to the brain, a team of bioengineers, physicians, and collaborators at Brigham and Women’s Hospital and Boston Children’s Hospital created a nanoparticle platform that can deliver therapeutically effective doses of encapsulated agents in mice with either a physically breached or an intact BBB. In a mouse model of traumatic brain injury (TBI), the delivery system achieved three times more accumulation in the brain than conventional delivery methods and was also therapeutically effective, which could open possibilities for the treatment of numerous neurological disorders. The findings were published in Science Advances.

Fig. 1 NPs with different surface coatings were prepared to achieve BBB pathophysiology-independent delivery of siRNA in TBI. (A) Schematic illustrating the overall study design. siRNA-loaded NPs with different surface coating chemistries and coating densities were compared for their in vitro uptake and gene silencing efficiency in neural cells as well as their ability to cross intact BBB in healthy mice. NPs with maximum gene silencing efficiency and BBB permeability were then evaluated in TBI mice to determine brain accumulation and gene silencing efficiency when administered during early injury or late injury periods, corresponding to physically breached BBB and intact BBB, respectively. Upon neuronal uptake of NPs, siRNA is released and silences the harmful proteins involved in TBI pathophysiology. (B) Schematic for the preparation of siRNA-loaded PLGA NPs by a modified nanoprecipitation method. DSPE-PEG was used to impart stealth character. In addition, polysorbate 80 (PS 80), poloxamer 188 (F-68), DSPE-PEG-glutathione (GSH), or DSPE-PEG-transferrin (Tf) was used to augment BBB penetration. PEG, polyethylene glycol; DSPE, 1,2-distearoyl-sn-glycero-3-phosphoethanolamine. (C) Transmission electron microscopy images of siRNA-loaded NPs having different surface coatings. Scale bars, 200 nm. The illustrations (A and B) were created with the help of BioRender.com. © Li et al.

Previously developed approaches for delivering therapeutics into the brain after TBI rely on the short window of time after a physical injury to the head, when the BBB is temporarily breached. However, after the BBB is repaired within a few weeks, physicians lack tools for effective drug delivery.

“It’s very difficult to get both small and large molecule therapeutic agents delivered across the BBB,” said corresponding author Nitin Joshi, PhD, an associate bioengineer at the Center for Nanomedicine in the Brigham’s Department of Anesthesiology, Perioperative and Pain Medicine. “Our solution was to encapsulate therapeutic agents into biocompatible nanoparticles with precisely engineered surface properties that would enable their therapeutically effective transport into the brain, independent of the state of the BBB.”

The technology could enable physicians to treat secondary injuries associated with TBI that can lead to Alzheimer’s, Parkinson’s, and other neurodegenerative diseases, which can develop in the ensuing months and years once the BBB has healed.

“To be able to deliver agents across the BBB in the absence of inflammation has been somewhat of a holy grail in the field,” said co-senior author Jeff Karp, PhD, of the Brigham’s Department of Anesthesiology, Perioperative and Pain Medicine. “Our radically simple approach is applicable to many neurological disorders where delivery of therapeutic agents to the brain is desired.”

Rebekah Mannix, MD, MPH, of the Division of Emergency Medicine at Boston Children’s Hospital and a co-senior author on the study, further emphasized that the BBB inhibits delivery of therapeutic agents to the central nervous system (CNS) for a wide range of acute and chronic diseases. “The technology developed for this publication could allow for the delivery of a large number of diverse drugs, including antibiotics, antineoplastic agents, and neuropeptides,” she said. “This could be a game changer for many diseases that manifest in the CNS.”

The therapeutic used in this study was a small interfering RNA (siRNA) molecule designed to inhibit the expression of the tau protein, which is believed to play a key role in neurodegeneration. Poly(lactic-co-glycolic acid), or PLGA, a biodegradable and biocompatible polymer used in several existing products approved by the U.S. Food and Drug Administration, was used as the base material for nanoparticles. The researchers systematically engineered and studied the surface properties of the nanoparticles to maximize their penetration across the intact, undamaged BBB in healthy mice. This led to the identification of a unique nanoparticle design that maximized the transport of the encapsulated siRNA across the intact BBB and significantly improved the uptake by brain cells.

A 50 percent reduction in the expression of tau was observed in TBI mice who received anti-tau siRNA through the novel delivery system, irrespective of the formulation being infused within or outside the temporary window of breached BBB. In contrast, tau was not affected in mice that received the siRNA through a conventional delivery system.

“In addition to demonstrating the utility of this novel platform for drug delivery into the brain, this report establishes for the first time that systematic modulation of surface chemistry and coating density can be leveraged to tune the penetration of nanoparticles across biological barriers with tight junctions,” said first author Wen Li, PhD, of the Department of Anesthesiology, Perioperative and Pain Medicine.

In addition to targeting tau, the researchers have studies underway to attack alternative targets using the novel delivery platform.

“For clinical translation, we want to look beyond tau to validate that our system is amenable to other targets,” Karp said. “We used the TBI model to explore and develop this technology, but essentially anyone studying a neurological disorder might find this work of benefit. We certainly have our work cut out, but I think this provides significant momentum for us to advance toward multiple therapeutic targets and be in the position to move ahead to human testing.”

This work was supported by the National Institutes of Health (HL095722), Fundação para a Ciência e a Tecnologia through MIT-Portugal (TB/ECE/0013/2013), and the Football Players Health Study at Harvard, funded by a grant from the National Football League Players Association. Karp has been a paid consultant and/or equity holder for multiple biotechnology companies (listed here). Joshi, Karp, Mannix, Li, Qiu and Langer have one unpublished patent based on the nanoparticle work presented in this manuscript.

Reference: Li, W. et al., “BBB pathophysiology-independent delivery of siRNA in traumatic brain injury,” Science Advances, Vol. 7, no. 1, eabd6889 (2021). DOI: 10.1126/sciadv.abd6889 https://advances.sciencemag.org/content/7/1/eabd6889

Provided by Brigham and Women’s Hospital

Charging Ahead For Electric Vehicles (Engineering)

Roads installed with wireless charging technology could become an integral feature of our cities in an electric vehicle future.

Modeling how the distribution of EV charging roads in cities will influence driver behavior will be a major element of smart city planning in the future. © Scharfsinn / Alamy Stock Photo

By applying statistical geometry to analyzing urban road networks, KAUST researchers have advanced understanding of how wireless charging roads might influence driver behavior and city planning in a future where electric vehicles (EVs) dominate the car market.

“Our work is motivated by the global trend of moving towards green transportation and EVs,” says postdoc Mustafa Kishk. “Efficient dynamic charging systems, such as wireless power transfer systems installed under roads, are being developed by researchers and technology companies around the world as a way to charge EVs while driving without the need to stop. In this context, there is a need to mathematically analyze the large-scale deployment of charging roads in metropolitan cities.”

Many factors come into play when charging roads are added to the urban road network. Drivers may seek out charging roads on their commute, which has implications for urban planning and traffic control. Meanwhile, the density of charging road installations in a city, and the likely time spent on and between the charging roads by commuters, could influence the size of batteries installed in EVs by car manufacturers.

Deriving the metrics that can be used to analyze a charging road network is itself a significant challenge, as Kishk’s lab colleague, Duc Minh Nguyen, explains.

“Our main challenge is that the metrics used to evaluate the performance of dynamic charging deployment, such as the distance to the nearest charging road on a random trip, depend on the starting and ending points of each trip,” says Nguyen. “To correctly capture those metrics, we had to explicitly list all possible situations, compute the metrics in each case and evaluate how likely it is for each situation to happen in reality. For this, we used an approach called stochastic geometry to model and analyze how these metrics are affected by factors such as the density of roads and the frequency of dynamic charging deployment.”

Applying this analysis to the Manhattan area of New York, which has a road density of roughly one road every 63 meters, Kishk and Nguyen, together with research leader Mohamed-Slim Alouini, determined that a driver would have an 80 percent chance of encountering a charging road after driving 500 meters when wireless charging is installed on 20 percent of roads.
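
As a sanity check on that figure, the toy simulation below treats each block crossing as an independent coin flip. This is a deliberate oversimplification of the stochastic geometry model analyzed in the paper, but it shows that the quoted numbers are mutually consistent; the spacing, charging fraction, and trip length are the values cited above.

```python
import random

ROAD_SPACING_M = 63    # average road spacing quoted for Manhattan
P_CHARGING = 0.20      # fraction of roads fitted with wireless charging
TRIP_M = 500           # trip length of interest

def trip_meets_charging_road(rng: random.Random) -> bool:
    """Toy model: one perpendicular road is crossed every ROAD_SPACING_M metres,
    and each road is independently a charging road with probability P_CHARGING."""
    crossings = TRIP_M // ROAD_SPACING_M
    return any(rng.random() < P_CHARGING for _ in range(crossings))

rng = random.Random(0)
trials = 100_000
hits = sum(trip_meets_charging_road(rng) for _ in range(trials))
print(f"P(charging road within {TRIP_M} m) ~ {hits / trials:.2f}")  # roughly 0.8 under these assumptions
```

The full analysis in the paper accounts for the geometry of trips and road placement explicitly rather than assuming independence, which is why stochastic geometry is needed.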

“This is the first study to incorporate stochastic geometry into the performance analysis of charging road deployment in metropolitan cities,” Kishk says. “It is an important step towards a better understanding of charging road deployment in metropolitan cities.”

Reference: Nguyen, D.M., Kishk, M.A. & Alouini, M.-S., “Modeling and analysis of dynamic charging for EVs: A stochastic geometry approach,” IEEE Open Journal of Vehicular Technology 1, 1 (2020). http://dx.doi.org/10.1109/OJVT.2020.3032588

Provided by KAUST

Which Mediator Do Dark Matter And Visible Matter Use To Communicate? (Cosmology / Astronomy)

Heurtier and Huang have explored the possibility that the inflaton may serve as the mediator through which the dark and visible sectors communicate.

FIG. 1: The inflaton portal naturally suppresses the decay to the standard model while specifying the reheating processes leading to the hidden and visible sector temperatures. © Heurtier and Huang

Although the Standard Model (SM) appears to be one of the most complete and accurate theories of particle physics of the last decades, a few major caveats remain to be addressed, especially when particle physics is discussed in the context of cosmology. On the one hand, the rotation curves of galaxies, the observation of the Bullet Cluster, and the study of the cosmic microwave background (CMB) suggest that our Universe contains a significant fraction of dark matter (DM). On the other hand, the remarkable homogeneity and flatness of our observable Universe revealed by analysis of the CMB spectrum render the vanilla big bang theory inadequate at early times and suggest that the Universe underwent a rapid phase of expansion called “cosmic inflation.” The dark matter problem requires the existence of a particle stable on timescales longer than the age of the Universe, while a scalar field slowly rolling in a sufficiently flat potential (the “inflaton”) at primordial stages of the Universe’s evolution can produce the desired expansion, diluting inhomogeneities and residual curvature.

During the inflationary phase, our spacetime is effectively de Sitter. When inflation ends, the inflaton oscillates in a cold, matter-dominated universe. The transition between this post-inflationary phase and the thermal history of the universe that follows, referred to as “reheating,” is understood as an out-of-equilibrium decay of the inflaton field, converting its potential energy into a relativistic thermal bath. The temperature at which reheating happens is almost unconstrained by theory: the only requirements are that the universe is not reheated below the big bang nucleosynthesis (BBN) temperature (T_RH ≳ 10 MeV) and, by definition, that the reheating temperature cannot exceed the energy scale of inflation. In a supersymmetric context, gravitino production at early times may overclose the universe, imposing an upper bound on the reheating temperature, usually around T_RH ≲ 10^10–10^12 GeV.
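
For orientation, the reheating temperature is commonly estimated by equating the inflaton decay width $\Gamma_\phi$ with the Hubble rate of a radiation-dominated universe (a standard estimate, not a result specific to this paper):

$$ T_{\rm RH} \simeq \left(\frac{90}{\pi^{2} g_{*}}\right)^{1/4} \sqrt{\Gamma_{\phi}\, \overline{M}_{\rm Pl}}\,, $$

where $g_*$ counts the relativistic degrees of freedom and $\overline{M}_{\rm Pl}$ is the reduced Planck mass; a feebler inflaton coupling therefore directly means a lower reheating temperature.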

Most of the time, the discussion of dark matter production is disconnected from any explicit formulation of the reheating transition. In the context of the thermal “freeze-out” scenario, such an approach is not problematic: dark matter is in this case in thermal equilibrium with the visible sector before it dynamically decouples, and thermalization of the inflaton decay products erases any specifics of how reheating takes place. However, in alternative scenarios where dark matter may be produced nonthermally or out of equilibrium, such as the so-called “freeze-in” scenario, whether the inflaton decays preferentially into dark matter or into visible particles may have dramatic consequences for the subsequent dark matter production. In particular, in the freeze-in scenario the coupling of the inflaton to SM states, which thereafter produce DM particles, necessarily provides a direct decay channel for the inflaton into DM particles at the loop level. It was shown by Olive and colleagues that such direct decay, which is by construction present in many models of gravitino DM, might contribute significantly to the overall DM abundance and possibly overclose the Universe in certain situations.

Among the thermal scenarios, although a large class of models remains phenomenologically viable, the popular weakly-interacting-massive-particle paradigm for dark matter production is increasingly constrained by direct detection experiments. Investigating the possible consequences of an explicit reheating model on alternative ways of producing dark matter is therefore becoming particularly relevant. Heurtier and Huang, in their paper, studied the possibility that the inflaton field, which is implicitly present in most models of beyond-the-standard-model (BSM) cosmology, is the only mediator between the visible and the hidden sectors. Since the mass of the inflaton is predicted by a large class of models to be of order 10¹³ GeV or larger, interactions that take place through the inflaton portal are expected to be extremely suppressed, suggesting that the dark sector is highly decoupled from the visible sector at very early times.

FIG. 2: Annihilation channel of dark matter into dark scalars, which ensures thermal equilibrium in the dark sector before DM freezes out. © Huang and Heurtier

Indeed, Bhupal Dev and Heurtier in their respective papers showed that introducing the inflaton as a mediator in the case of thermal and nonthermal scenarios of dark matter production is not fully satisfactory. In the case of thermal production, the annihilation cross section would be far too suppressed, resulting in an overclosure of the Universe. In the case of nonthermal production, O(1) couplings between the inflaton and the dark sector lead to a situation in which the dominant contribution to the dark matter abundance occurs during reheating and not through the freeze-in mechanism, rendering such a scenario highly fine-tuned. An attempt was proposed by L. Heurtier to motivate the hierarchy of couplings necessary to make a freeze-in scenario viable. However, even in a scenario of this sort, obtaining an appropriate annihilation cross section requires the use of a very small parameter, rendering the scenario as unnatural as many of the usual dark matter constructions.

In their papers, Berlin and colleagues showed that a dark matter candidate that decouples thermally within a highly decoupled dark sector can be produced dynamically if an appropriate entropy-dilution mechanism readjusts its relic abundance. Such a mechanism generically requires a late, out-of-equilibrium decay of some dark-sector particle(s) into SM particles. This type of scenario was proposed in their papers, in which the dark matter mass is required to be as large as a few PeV. However, the late-time decay required by the entropy-dilution process is achieved through significant fine-tuning of the parameters. Moreover, the amount of energy density contained in the dark and visible sectors once inflation ends is chosen arbitrarily.

For these two reasons, the inflaton portal turns out to be a perfect candidate to solve both issues in one stroke. Heurtier and Huang have now shown that using the inflaton as a mediator between the hidden sector and the standard model (SM) bath removes any arbitrary choice of initial conditions and naturally relates the inflationary sector to the physics of dark matter production.

FIG. 3: Decay channel of the hidden scalar S —through a loop of dark matter particles and the exchange of an inflaton φ — into N fermions. © Huang and Heurtier

They proposed that, due to the relatively large mass of the inflaton field, such a portal leads to an extremely feeble interaction between the dark and visible sectors, suggesting that the dark sector cannot reach thermal equilibrium with the visible sector. After the two sectors are populated by the decay of the inflaton, a heavy dark matter particle thermally decouples within the dark sector. Later, a lighter dark particle, whose decay width is naturally suppressed by the inflaton propagator, comes to dominate the energy density of the Universe and then decays into the visible sector. This process dilutes the dark matter relic density by injecting entropy into the visible sector.
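
The bookkeeping behind such an entropy-dilution step can be summarized schematically as follows; this is the generic textbook relation, with an order-one prefactor omitted, not the specific expressions derived by Heurtier and Huang:

$$ Y_{\rm DM} \equiv \frac{n_{\rm DM}}{s}, \qquad Y_{\rm DM}^{\rm final} = \frac{Y_{\rm DM}^{\rm freeze\text{-}out}}{D}, \qquad D \equiv \frac{S_{\rm after}}{S_{\rm before}} \sim g_{*}^{1/4}\, \frac{m_{X} Y_{X}}{\sqrt{\Gamma_{X} M_{\rm Pl}}}\,, $$

where $X$ is the long-lived lighter dark particle whose late decay (width $\Gamma_X$) reheats the visible bath, and the relic density scales as $\Omega_{\rm DM} h^{2} \propto m_{\rm DM}\, Y_{\rm DM}^{\rm final}$. A longer-lived $X$, such as one whose decay proceeds through the heavy inflaton propagator, thus gives a larger dilution factor.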

“In the context of large-field inflation models, the fact that the mass scale required during inflation to match observations is about 10¹³ GeV suggests that no thermal equilibrium can ever take place between the dark and the visible sectors. Although ‘inflaton-portal’ scenarios have been proposed in the literature to account for a freeze-in production mechanism of dark matter, we showed that the freeze-in production of dark matter through the inflaton portal has to be extremely fine-tuned in order to make it dominate over a direct decay of the inflaton into dark matter,” said Heurtier.

They showed that an inflaton mass of O(10¹³) GeV, together with couplings of order one, is fully compatible with a dark matter relic abundance Ωh² ∼ 0.1. As a general feature of the model, the entropy-dilution mechanism is accompanied by a period of early matter domination, which modifies the number of e-folds of inflation necessary to accommodate the Planck data.

Moreover, the coupling of the inflaton to the dark and visible sectors introduces loop contributions to the inflationary potential that can destabilize the inflationary trajectory. Considering all these complementary constraints, they showed that, in the context of a plateau-inflation scenario such as the α-attractor model, the inflaton can constitute a viable mediator between the visible sector and a ∼10 EeV dark matter candidate.

“In this paper, we chose to consider a class of α-attractors, allowing us to consider both the case of a chaotic-inflation scenario (α ≫ 1) and the case of a plateau-inflation scenario (α ≲ 1). This has led us to rule out the case of chaotic inflation and favor plateau-inflation models, for which we have shown that our dark matter candidate must be heavier than 10 EeV,” said Huang.

Furthermore, they showed that improved constraints on the tensor-to-scalar ratio and spectral index could potentially rule out dark matter scenarios of this sort in the future.

The researchers concluded by emphasizing that their paper opens up the possibility of a large class of models in which the inflaton portal mediates interactions between a heavy dark matter candidate and the visible sector. While such large dark matter masses may be difficult to probe experimentally through direct or indirect detection in the near future, they note that observational bounds on the inflationary sector derived by Planck from the primordial-perturbation power spectrum are already able to constrain their scenario. The possibility that dark matter particles decay directly into SM final states also means that searches for ultra-high-energy cosmic rays (UHECRs) might finally provide a window to test the presence of such heavy dark matter in the Universe.

Reference: Lucien Heurtier and Fei Huang, “Inflaton portal to a highly decoupled EeV dark matter particle,” Phys. Rev. D 100, 043507 (published 5 August 2019). https://doi.org/10.1103/PhysRevD.100.043507

Copyright of this article belongs to our author S. Aman. It may be reused only with proper credit to the author or to this site.


What Are The Effects of Dark Matter Pressure On The Ellipticity of Cosmic Voids? (Cosmology)

Zeinab Rezaei and colleagues explored the shapes of cosmic voids with non-zero-pressure dark matter in different cosmological models, applying a dark matter equation of state derived from the pseudo-isothermal density profile of galaxies.

Cosmic Void © NASA

In the cosmic web, large empty regions with a very low number density of galaxies are called cosmic voids. The voids, the bubbles of the Universe, are separated by sheets and filaments in which clusters and superclusters are observed. To investigate the large-scale structure of the Universe, it is important to study the properties of cosmic voids. In the standard gravitational instability theory, voids form from the local minima of the initial density field and expand faster than the rest of the Universe. Because dark energy (DE) and dark matter (DM) can affect the properties of cosmic voids, voids can serve as cosmological probes. In this regard, the shape of a cosmic void is considered an observable. The DM around cosmic voids has tidal effects, which lead to distortion of the void shape. The susceptibility of void shapes to tidal distortions can therefore be an indicator of the large-scale tidal and density fields. Moreover, the background cosmology determines the shape evolution of the voids. The ellipticity of voids, a result of tidal field effects, is an important observable related to void shapes.

Now, Zeinab Rezaei and colleagues, applying a dark matter equation of state derived from the pseudo-isothermal density profile of galaxies, have explored the shapes of cosmic voids with non-zero-pressure dark matter in different cosmological models.

For this purpose, they calculated the linear growth of density perturbations in the presence of dark matter pressure. In addition, they presented the matter transfer function and the linear matter power spectrum in the presence of dark matter pressure.
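
To give a sense of where the pressure enters, the linear (Newtonian) growth of a dark matter density contrast δ_k with a non-zero effective sound speed obeys the standard equation below; the paper's own derivation, based on the pseudo-isothermal equation of state, may differ in detail:

$$ \ddot{\delta}_{k} + 2H\dot{\delta}_{k} + \left(\frac{c_{s}^{2} k^{2}}{a^{2}} - 4\pi G\,\bar{\rho}_{\rm DM}\right)\delta_{k} = 0\,, \qquad c_{s}^{2} = \frac{\partial P}{\partial \rho}\,, $$

so a non-zero dark matter pressure suppresses the growth of perturbations on scales below the corresponding Jeans length, which in turn feeds into the transfer function and power spectrum mentioned above.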

FIG. 1: Probability density distribution of the void ellipticity at different values of the redshift, z, in the cases of zero pressure DM (ZPDM) and non zero pressure DM (NZPDM) considering different values of the Lagrangian void scale, RL, in different cosmological models. © Rezaei et al.

Employing these results, they calculated the probability density distribution for the ellipticity of cosmic voids with non-zero-pressure dark matter. Their results confirm that, for cosmic voids at higher values of the redshift, the dark matter pressure alters the probability density distribution of the void ellipticity.

Considering the dark matter pressure, they found that voids with smaller ellipticity are more probable. In other words, voids with dark matter pressure are expected to have more spherical shapes.

“Our calculations verify that the dark matter pressure leads to more spherical shapes for the cosmic voids,” said Rezaei.

They also showed that the mean ellipticity of cosmic voids is smaller when the pressure of dark matter is considered. In addition, the rate at which the mean ellipticity decreases with redshift is affected by the dark matter pressure.

FIG. 2: Redshift dependency of εmax for cosmic voids with zero pressure DM (ZPDM) and non zero pressure DM (NZPDM) at different values of the Lagrangian void scale, RL, applying different cosmological models. © Rezaei et al.

Reference: Zeinab Rezaei, Effects of dark matter pressure on the ellipticity of cosmic voids, Monthly Notices of the Royal Astronomical Society, Volume 487, Issue 2, August 2019, Pages 2614–2623, https://doi.org/10.1093/mnras/stz1436

Copyright of this article belongs to our author S. Aman. It may be reused only with proper credit to the author or to this site.
