
Scientists Develop Multi-frequency Ultrasonic Endoscope (Medicine)

Endoscopic ultrasound (EUS) plays a vital role in the development of novel diagnostic and treatment methods for gastrointestinal diseases. For the EUS imaging system, the ultrasonic transducer is an essential component, whose working frequency determines the quality of ultrasonic images. In general, the higher the working frequency, the easier it is to obtain high-quality images. Nevertheless, increasing the frequency often leads to a decrease in detection depth, which limits the imaging range and is unfavorable in clinical use.

Conventional EUS transducers generally work at a single frequency, ranging from 5 to 20 MHz. If an operator wants to alter the working frequency to obtain a suitable depth, another probe must be employed, causing inconvenience in clinical use and additional discomfort for patients. However, multi-frequency imaging and the matched image fusion technique have the potential to overcome this limitation of the EUS imaging system.

Recently, the medical ultrasound research team from the Suzhou Institute of Biomedical Engineering and Technology of the Chinese Academy of Sciences has developed a multi-frequency and large-bandwidth ultrasonic probe based on piezoelectric composite materials, which balances imaging resolution and imaging depth.

In this research, a triangle-structure triple-frequency transducer that can work at three different frequencies was presented, whose outer diameter is less than 1.5 mm. Static and transient models were built to assist its design.

Figure 1. The fused ultrasound image of multi-layer mimicking phantom. (Image by SIBET)

Specifically, simulation results and sound field measurement results suggest that the three different frequency elements have excellent directivity, and the acoustic field interference among the elements is small. Pulse-echo results show that the center frequencies of the elements are 13.17 MHz, 20.3 MHz, and 30.85 MHz, respectively. Ultrasound imaging results of the phantom indicate that triple-frequency fusion imaging outperforms single-frequency imaging, and the fused image can achieve an excellent balance between image resolution and imaging depth.
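The article does not describe the team's fusion algorithm in detail, but the underlying idea (high-frequency data is sharpest near the probe, low-frequency data penetrates deepest) can be illustrated with a minimal depth-weighted blending sketch. The function name, the linear weighting scheme, and the three example images below are assumptions for illustration, not the published method.

```python
import numpy as np

def fuse_multifrequency(img_lo, img_mid, img_hi, depth_axis=0):
    """Depth-weighted fusion of three single-frequency B-mode images.

    The high-frequency image dominates at shallow depths (best
    resolution), the low-frequency image dominates at large depths
    (best penetration), and the mid-frequency image fills the
    transition band. All images must share the same shape.
    """
    n = img_lo.shape[depth_axis]
    d = np.linspace(0.0, 1.0, n)               # normalized depth, 0 = shallow
    w_hi = np.clip(1.0 - 2.0 * d, 0.0, 1.0)    # strongest near the surface
    w_lo = np.clip(2.0 * d - 1.0, 0.0, 1.0)    # strongest at depth
    w_mid = 1.0 - w_hi - w_lo                  # weights always sum to 1
    shape = [1] * img_lo.ndim
    shape[depth_axis] = n                      # broadcast along depth axis
    w_hi, w_mid, w_lo = (w.reshape(shape) for w in (w_hi, w_mid, w_lo))
    return w_hi * img_hi + w_mid * img_mid + w_lo * img_lo
```

In practice, a clinical fusion pipeline would also need to co-register the three images and compensate for frequency-dependent attenuation, but the weighting idea is the same.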

In short, this research proves the feasibility of a triple-frequency transducer in endoscopic imaging and shows its great potential in the diagnosis of gastrointestinal diseases.

The research article “A triple-frequency transducer for endoscopic imaging: Simulation design and experimental verification” has been published in Sensors and Actuators A: Physical.

This research was supported by the National Natural Science Foundation of China, the Foundation of the Jiangsu Social Development – Clinical Frontier Technology, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences.

Featured image: The triple-frequency transducer. (Image by SIBET)


Reference: Zhangjian Li, Jie Xu, Xinle Zhu, Zhile Han, Jiabing Lv, Xiaohua Jian, Weiwei Shao, Yang Jiao, Yaoyao Cui, A triple-frequency transducer for endoscopic imaging: Simulation design and experimental verification, Sensors and Actuators A: Physical, Volume 321, 2021, 112589, ISSN 0924-4247, https://doi.org/10.1016/j.sna.2021.112589. (https://www.sciencedirect.com/science/article/pii/S0924424721000509)


Provided by Chinese Academy of Sciences

Best of Both Worlds: A Hybrid Method For Tracking Laparoscopic Ultrasound Transducers (Medicine)

Combined hardware- and computer vision-based strategy will help improve laparoscopic ultrasound imaging

Laparoscopic surgery, a less invasive alternative to conventional open surgery, involves inserting thin tubes with a tiny camera and surgical instruments into the abdomen. To visualize specific surgical targets, ultrasound imaging is used in conjunction with the surgery. However, ultrasound images are viewed on a separate screen, requiring the surgeon to mentally combine the camera and ultrasound data.

Modern augmented reality (AR)-based methods have overcome this issue by embedding ultrasound images into the video taken by the laparoscopic camera. These AR methods precisely map the ultrasound data coordinates to the coordinates of the images seen through the camera. Although the process is mathematically straightforward, it can only be done if the pose (position and orientation) of the ultrasound probe (transducer) is known in the camera coordinate system. This has proven to be challenging, despite many strategies for tracking the laparoscopic transducer. Hardware-based tracking by attaching electromagnetic (EM) sensors to the probe is a feasible approach, but it is prone to errors due to calibration and hardware limitations. Computer vision (CV) systems can also be used to process the images acquired by the camera and determine the probe's pose. However, because they rely entirely on camera data, such methods fail if the probe is defocused or if the camera's view is occluded. Thus, such CV systems are not yet ready for clinical settings.
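The "mathematically straightforward" mapping mentioned above is a rigid transform followed by a pinhole projection: once the probe pose is known in the camera frame, each ultrasound pixel can be projected into the video image. The sketch below illustrates this for a single point; the function name and the example calibration numbers are hypothetical, not values from the study.

```python
import numpy as np

def overlay_point(p_us_mm, T_probe_cam, K):
    """Map a point from the ultrasound/probe frame (in mm) to a pixel
    in the laparoscopic camera image.

    T_probe_cam -- 4x4 pose of the probe in the camera frame (tracked)
    K           -- 3x3 camera intrinsic matrix (from calibration)
    """
    p_h = np.append(p_us_mm, 1.0)        # homogeneous coordinates
    p_cam = (T_probe_cam @ p_h)[:3]      # point expressed in camera frame
    u, v, w = K @ p_cam                  # pinhole projection
    return np.array([u / w, v / w])      # pixel coordinates
```

Every term here is routine except T_probe_cam, the tracked pose; estimating it reliably is exactly the problem the hybrid method addresses.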

To this end, in a recent study published in the Journal of Medical Imaging, a team of scientists from the US have come up with a creative solution. Instead of relying entirely on either hardware- or CV-based tracking, they propose a hybrid approach that combines both methods. Michael Miga, Associate Editor of the journal, explains, “In the context of interventional imaging with laparoscopic ultrasound, tracking the flexible ultrasound probe for correlation with preoperative images is a challenging task. The team led by Dr. Shekhar has demonstrated an impressive tracking ability with the proposed hybrid approach; these types of capabilities will be needed to advance the field of image-guided surgery.”

To begin with, the team designed and 3D-printed a custom tracking mount to be placed on the tip of the transducer. This mount contained a sensor for EM-based tracking and several flat surfaces on which black-and-white markers could be attached for CV-based tracking. These markers, which resemble QR codes, were detected in the images recorded by the camera using an open-source AR library called ArUco. Once two or more markers were detected in a frame, the scientists could immediately calculate the pose of the transducer.

Because CV-based tracking is more accurate than EM-based, the system defaults to using the former to track the transducer. And whenever markers are undetectable in a frame, the system adaptively switches to EM-based tracking. Moreover, to enhance their approach beyond the simple combination of both techniques, the scientists developed an algorithm that can perform corrections to the EM-based tracking results based on previous camera frames. This greatly reduces the errors associated with the EM sensor, especially those due to rotations of the laparoscopic probe.
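The per-frame decision logic described above can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: the function name is hypothetical, poses are represented as 4x4 homogeneous matrices, and the "correction" is modeled as a single transform that maps the EM estimate onto the most recent CV estimate.

```python
import numpy as np

def select_pose(cv_pose, em_pose, em_offset):
    """Pick the tracking source for one video frame.

    cv_pose   -- 4x4 pose from ArUco markers, or None if no markers
                 were detected in this frame
    em_pose   -- 4x4 pose from the electromagnetic sensor
    em_offset -- running correction (4x4) estimated from frames in
                 which both sources were available (start with np.eye(4))

    Returns (pose, source, updated_offset).
    """
    if cv_pose is not None:
        # CV tracking available: use it, and refresh the correction
        # that maps the EM estimate onto the CV estimate.
        new_offset = cv_pose @ np.linalg.inv(em_pose)
        return cv_pose, "cv", new_offset
    # Markers lost (occlusion, defocus): fall back to corrected EM.
    return em_offset @ em_pose, "em", em_offset
```

The published algorithm refines the correction over multiple previous frames rather than a single one, but the fallback structure is the same: prefer CV when markers are visible, otherwise apply the latest camera-derived correction to the EM data.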

The team demonstrated the effectiveness of their strategy through experiments on both a realistic tissue phantom and live animals. Excited about the results, Raj Shekhar, who led the study, concludes, “Our hybrid method is more reliable than using CV-based tracking alone and more accurate and practical than using EM-based tracking alone. It has the potential to significantly improve tracking performance for AR applications based on laparoscopic ultrasound.”

As this hybrid strategy undergoes further improvements, it can pave the way for laparoscopic surgery to be more effective and safer, leading to faster recoveries and better patient outcomes overall.

Read the original research article by Xinyang Liu, William Plishker, and Raj Shekhar: “Hybrid electromagnetic-ArUco tracking of laparoscopic ultrasound transducer in laparoscopic video,” J. of Medical Imaging 8(1), 015001 (2021), doi 10.1117/1.JMI.8.1.015001.

Featured image: Multimedia still images showing the results of the ArUco tracking (green), the corrected EM tracking using Algorithm 2 (yellow) and the original EM tracking (red), from Liu, Plishker, and Shekhar, doi 10.1117/1.JMI.8.1.015001


Provided by SPIE

This Bracelet Can Jam Microphones (Amazing Products)

Yuxin Chen and colleagues engineered a wearable microphone jammer that is capable of disabling microphones in its user's surroundings, including hidden microphones. Their device is based on a recent exploit that leverages the fact that, when exposed to ultrasonic noise, commodity microphones will leak the noise into the audible range. Moreover, their device exploits a synergy between ultrasonic jamming and the naturally occurring movements that users induce on their wearable devices (e.g., bracelets) as they gesture or walk. They demonstrated that these movements can blur jamming blind spots and increase jamming coverage. Lastly, their wearable bracelet is built in a ring layout that allows it to jam in multiple directions. This is beneficial in that it allows their jammer to protect against microphones hidden out of sight.

Their prototype is a self-contained wearable comprising ultrasonic transducers, a signal generator, a microcontroller, a battery, a voltage regulator and a 3 W amplifier. © University of Chicago

“Despite the initial excitement around voice-based smart devices,” the authors wrote, “consumers are becoming increasingly nervous with the fact that these interactive devices are, by default, always listening, recording, and possibly saving sensitive personal information. Therefore, it is critical to build tools that protect users against the potential compromise or misuse of microphones in the age of voice-based smart devices.”

Recently, researchers have shown that ultrasonic transducers can prevent commodity microphones from recording human speech. While these ultrasonic signals are imperceptible to human ears, they leak into the audible spectrum after being captured by the microphones, producing a jamming signal inside the microphone circuit that disrupts voice recordings. The leakage is caused by an inherent nonlinearity in the microphone's hardware.
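The leakage mechanism can be demonstrated numerically. A nonlinear front end mixes frequency components: modeling the microphone with a small quadratic term (a toy stand-in for the real, more complex transfer curve), two ultrasonic tones at f1 and f2 produce an audible intermodulation product at f2 − f1. The sample rate, tone frequencies, and nonlinearity coefficient below are illustrative choices, not figures from the paper.

```python
import numpy as np

fs = 192_000                       # sample rate high enough for ultrasound
t = np.arange(fs) / fs             # one second of signal
f1, f2 = 25_000.0, 26_000.0        # two tones above human hearing
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Toy microphone front end: the quadratic term models the hardware
# nonlinearity that causes the leakage.
y = x + 0.1 * x**2

# The squared term contains cos(2*pi*(f2 - f1)*t): a 1 kHz audible tone.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
band = (freqs > 100) & (freqs < 20_000)        # audible band, DC excluded
peak_hz = freqs[band][np.argmax(spectrum[band])]
```

With a jammer, the ultrasonic "tones" are a noise signal, so the demodulated audible component is noise as well, which is what masks the speech in the recording.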

However, all these devices exhibit two key limitations: (1) They are heavily directional, thus requiring users to point the jammer precisely at the location of the microphones. This is not only impractical, as it interferes with the users' primary task, but is also often impossible when microphones are hidden. (2) They rely on multiple transducers that enlarge their jamming coverage but introduce blind spots: locations where the signals from two or more transducers cancel each other out. If a microphone is placed at any of these locations, it will not be jammed, rendering the whole jammer ineffective.

To tackle these shortcomings, Yuxin Chen and colleagues engineered a wearable jammer that is worn as a bracelet, as depicted in the figure below. By turning an ultrasonic jammer into a bracelet, their device leverages the natural hand movements that occur while speaking, gesturing or moving around to blur out the aforementioned blind spots. Furthermore, by arranging the transducers in a ring layout, their wearable jams in multiple directions and protects the privacy of its user's voice, anywhere and anytime, without requiring its user to manually point the jammer at the eavesdropping microphones.

(a) They engineered a wearable ultrasound jammer that can prevent surrounding microphones from eavesdropping on a conversation. (b) This is the actual speech that their conversation partner hears, since their jammer does not disrupt human hearing. However, (c) is the transcript of what a state-of-the-art speech recognizer makes out of the jammed conversation.

They confirmed that their ultrasonic microphone jammer is superior to state-of-the-art and commercial stationary jammers by conducting a series of technical evaluations and a user study. These demonstrated that: (1) their wearable jammer outperformed static jammers in jamming coverage; (2) its jamming is effective even if the microphones are hidden and covered by various materials, such as cloth or paper sheets; and (3) in a life-like situation, study participants felt that the wearable protected the privacy of their voice.

Reference: Yuxin Chen, Huiying Li, Shan-Yuan Teng, Steven Nagels, Zhijing Li, Pedro Lopes, Ben Y. Zhao, and Haitao Zheng. 2020. Wearable Microphone Jamming. In Proceedings of the CHI Conference on Human Factors in Computing Systems 2020 (CHI '20). https://doi.org/10.1145/3313831.3376304.