SciCombinator

Discover the most talked about and latest scientific content & concepts.

Journal: Zeitschrift für medizinische Physik

The earliest studies on ‘disability glare’ date from the early 20th century. The condition was defined as the negative effect on visual function of a bright light located at some distance in the visual field. It was found that for larger angles (>1 degree) the functional effect corresponded precisely to the effect of a light with a luminosity equal to that of the light perceived spreading around such a bright source. This perceived spreading of light was called straylight, and by international standard disability glare was defined as identical to straylight. The phenomenon was recognized in the ophthalmological community as an important aspect of the quality of vision, and attempts were made to design instruments to measure it. These must not be confused with instruments that assess light spreading over small distances (<1 degree), as originating from (higher-order) aberrations and defocus. In recent years a new instrument, the C-Quant, has gained acceptance for objective and controllable assessment of straylight in the clinical setting. This overview sketches the historical development of straylight measurement, the results of studies on the origins of straylight (or disability glare) in the normal eye, and findings on cataract (surgery) and corneal conditions.

For dosimetry in radioligand therapy, the time-integrated activity coefficients (TIACs) for organs at risk and for tumour lesions have to be determined. The sampling scheme used affects the TIACs and therefore the calculated absorbed doses. The aim of this work was to develop a general and flexible method that analyses numerous clinically applicable sampling schedules using true time-activity curves (TACs) of virtual patients.
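
The core computation the sampling schedules feed into can be sketched as follows: a minimal TIAC estimator that integrates the sampled TAC with the trapezoidal rule and adds a mono-exponential tail fitted to the last two samples. This is one common, simple tail model; the function and the synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def tiac(t_hours, activity_mbq, injected_mbq):
    """TIAC from a sampled time-activity curve: trapezoidal integration over
    the samples plus a mono-exponential tail fitted to the last two points."""
    t = np.asarray(t_hours, dtype=float)
    a = np.asarray(activity_mbq, dtype=float)
    area = float(((a[1:] + a[:-1]) * np.diff(t)).sum() / 2.0)  # MBq*h, sampled part
    lam = np.log(a[-2] / a[-1]) / (t[-1] - t[-2])   # effective decay constant, 1/h
    tail = a[-1] / lam                              # analytic integral of the tail
    return (area + tail) / injected_mbq             # TIAC in hours

# Synthetic "true" TAC of a virtual patient: A(t) = 100 * exp(-0.1 t) MBq
t = np.array([1.0, 4.0, 24.0, 48.0, 72.0])   # sampling schedule, h
a = 100.0 * np.exp(-0.1 * t)
print(round(tiac(t, a, 1000.0), 4))
```

Note how the wide gaps in the schedule make the trapezoidal rule overestimate the convex exponential; comparing such estimates against the analytically known integral is exactly the kind of schedule analysis the work describes.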

Convolutional neural networks have begun to surpass classical statistics- and atlas-based machine learning techniques in medical image segmentation in recent years, proving superior in both performance and speed. However, a major challenge the community faces is the mismatch in variability between training and evaluation datasets, and therefore a dependency on proper data pre-processing. Intensity normalization is a widely applied technique for reducing the variance of the data, for which several methods are available, ranging from uniformity transformations to histogram equalization. The current study analyses the influence of intensity normalization on the cerebellum segmentation performance of a convolutional neural network (CNN).
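
Two of the normalization families mentioned can be sketched as follows; the function names and the toy volume are illustrative, not the study's actual pre-processing code.

```python
import numpy as np

def zscore_normalize(img, mask=None):
    """Zero-mean, unit-variance intensity normalization. If a foreground
    (e.g. brain) mask is given, the statistics are computed inside it only."""
    vals = img[mask] if mask is not None else img
    return (img - vals.mean()) / (vals.std() + 1e-8)

def minmax_normalize(img):
    """Rescale intensities to [0, 1] (a simple uniformity transformation)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

rng = np.random.default_rng(0)
vol = rng.normal(300.0, 50.0, size=(8, 8, 8))  # toy MR-like intensity volume
z = zscore_normalize(vol)
print(round(float(abs(z.mean())), 6), round(float(z.std()), 6))
```

Because MR intensities have no absolute scale, applying the same normalization at training and inference time is what reduces the train/test variability mismatch described above.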

What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has attracted a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments hold huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we do not survey the entire landscape of applications but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with, and perhaps contributing to, deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

The aim of the present work is the dosimetric characterization of a novel vented PinPoint ionization chamber (PTW 31023, PTW-Freiburg, Germany). This chamber replaces the previous model (PTW 31014); the diameter of the central electrode has been increased from 0.3 mm to 0.6 mm and the guard ring has been redesigned. Correction factors for reference and non-reference measurement conditions were examined.
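
For context, correction factors of this kind typically enter the absorbed-dose-to-water formalism of IAEA TRS-398. A minimal sketch with hypothetical numerical values (not measured data for the PTW 31023):

```python
def dose_to_water(m_raw, n_dw, k_q, k_tp=1.0, k_pol=1.0, k_s=1.0):
    """Absorbed dose to water per the TRS-398 formalism:
    D_w = M_corr * N_D,w * k_Q, with M_corr = M_raw * k_TP * k_pol * k_s."""
    m_corr = m_raw * k_tp * k_pol * k_s
    return m_corr * n_dw * k_q

# All numbers below are illustrative placeholders, not chamber data:
d = dose_to_water(m_raw=1.0e-9,   # chamber reading, C
                  n_dw=2.9e9,     # calibration coefficient, Gy/C (hypothetical)
                  k_q=0.99,       # beam quality correction (hypothetical)
                  k_tp=1.003, k_pol=1.001, k_s=1.002)
print(f"{d:.4f} Gy")
```

The chamber-specific part of such a characterization is precisely the determination of k_Q and the non-reference correction factors for the new electrode and guard-ring geometry.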

Diffusion anisotropy in diffusion tensor imaging (DTI) is commonly quantified with normalized diffusion anisotropy indices (DAIs). Most often the fractional anisotropy (FA) is used, but several alternative DAIs have been introduced in attempts to maximize the contrast-to-noise ratio (CNR) of diffusion anisotropy maps. Examples include the scaled relative anisotropy (sRA), the gamma variate anisotropy index (GV), the surface anisotropy (UAsurf), and the lattice index (LI). With the advent of multidimensional diffusion encoding it became possible to determine the presence of microscopic diffusion anisotropy in a voxel, which is theoretically independent of orientation coherence. In analogy to DTI, microscopic anisotropy is typically quantified by the microscopic fractional anisotropy (μFA). In this work, in addition to the μFA, four microscopic diffusion anisotropy indices (μDAIs: μsRA, μGV, μUAsurf, and μLI) are defined in analogy to the respective DAIs by means of the average diffusion tensor and the covariance tensor. Simulations with three representative distributions of microscopic diffusion tensors revealed distinct CNR differences when differentiating between isotropic and microscopically anisotropic diffusion. Q-space trajectory imaging (QTI) was employed to acquire in-vivo brain maps of all indices, using a 15-min protocol featuring linear, planar, and spherical tensor encoding. The resulting maps were of good quality and exhibited different contrasts, e.g. between gray and white matter. This indicates that it may be beneficial to use more than one μDAI in future investigational studies.
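
The conventional FA underlying these indices can be computed directly from the tensor eigenvalues. A minimal sketch (the μDAIs instead use moments of the average and covariance tensors, which is not shown here):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    l = np.asarray(evals, dtype=float)
    num = np.sqrt(((l - l.mean()) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    return float(np.sqrt(1.5) * num / den)

# Prolate, white-matter-like tensor vs. an isotropic one (units: mm^2/s)
print(round(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]), 3))
print(round(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]), 3))  # isotropic -> ~0
```

The normalization by ||lambda|| is what makes FA (and the alternative DAIs) independent of the overall diffusivity scale.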

Quantitative susceptibility mapping (QSM) reveals pathological changes in widespread diseases such as Parkinson’s disease, multiple sclerosis, or hepatic iron overload. QSM requires multiple processing steps after the acquisition of magnetic resonance imaging (MRI) phase measurements, such as unwrapping, background field removal, and the solution of an ill-posed field-to-source inversion. Current techniques use iterative optimization procedures to solve the inversion and the background field correction; these are computationally expensive and lead to suboptimal or over-regularized solutions, requiring a careful choice of parameters that makes clinical application of QSM challenging. We have previously demonstrated that a deep convolutional neural network can invert the magnetic dipole kernel with a very efficient feed-forward multiplication, requiring neither iterative optimization nor the choice of regularization parameters. In this work, we extended this approach to remove background fields in QSM. The prototype method, called SHARQnet, was trained on simulated background fields and tested on 3T and 7T brain datasets. We show that SHARQnet outperforms current background field removal procedures and generalizes to a wide range of input data without requiring any parameter adjustments. In summary, we demonstrate that ill-posed problems in QSM can be solved by learning the underlying physics causing the artifacts and removing them efficiently and reliably, which will help to bring QSM towards clinical application.
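
The ill-posedness of the field-to-source inversion stems from the zeros of the unit dipole kernel on the magic-angle cone. A minimal k-space sketch, assuming B0 along z (the function name is illustrative):

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel in k-space, D(k) = 1/3 - kz^2/|k|^2 (B0 along z).
    Its zeros on the magic-angle cone make a direct inversion 1/D ill-posed."""
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0   # undefined at k=0; conventionally set to 0
    return d

d = dipole_kernel((8, 8, 8))
# Values lie in [-2/3, 1/3]; dividing by d blows up wherever d is near 0.
print(float(d.min()), float(d.max()))
```

Iterative QSM methods regularize this division; the network approach described above instead learns the inversion (and here the background field removal) as a feed-forward mapping.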

Non-conventional scan trajectories for interventional three-dimensional imaging promise low-dose interventions and better radiation protection for the personnel. Circular tomosynthesis (cTS) scan trajectories yield an anisotropic image quality distribution. In contrast to conventional computed tomography (CT), the reconstructions have a preferred focus plane; in the two perpendicular planes, limited-angle artifacts are introduced. Reducing these artifacts enhances image quality while maintaining the low dose. We apply Deep Artifact Correction (DAC) to this task. cTS simulations of a digital phantom are used to generate training data. Three U-Net-based networks and a 3D-ResNet are trained to estimate the correction map between the cTS and the phantom. We show that limited-angle artifacts can be mitigated using simulation-based DAC. The U-Net-corrected cTS achieved a root mean squared error (RMSE) of 124.24 Hounsfield units (HU) on 60 simulated test scans in comparison to the digital phantoms, an error reduction of 59.35% relative to the uncorrected cTS. The achieved image quality is similar to that of a simulated cone beam CT (CBCT). Our networks were also able to mitigate artifacts in scans of objects that differ strongly from the training data. Application to real cTS test scans showed an error reduction of 45.18%, and of 26.4% with the 3D-ResNet, in reference to a high-dose CBCT.
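
The reported figures of merit can be reproduced from image pairs as follows; a minimal sketch with a toy artifact, not the study's evaluation code:

```python
import numpy as np

def rmse_hu(recon, reference):
    """Root mean squared error between a reconstruction and its ground-truth
    phantom, both in Hounsfield units."""
    diff = np.asarray(recon, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def error_reduction(rmse_before, rmse_after):
    """Relative error reduction in percent, as reported for the corrected scans."""
    return 100.0 * (1.0 - rmse_after / rmse_before)

# Toy example: a constant +100 HU artifact halved by a correction step
ref = np.zeros((16, 16))
before = ref + 100.0
after = ref + 50.0
print(rmse_hu(before, ref), error_reduction(rmse_hu(before, ref), rmse_hu(after, ref)))
```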

Sodium magnetic resonance imaging (MRI) of the human abdomen is of increasing clinical interest, e.g. for examinations of the kidneys, intervertebral disks, and prostate, and for tumor monitoring. To overcome the low MR sensitivity of sodium, optimal radio-frequency (RF) structures should be used. A common approach is to combine a volumetric transmit coil for homogeneous excitation with an array of sensitive receive coils adapted to the human shape. Additionally, proton imaging is required to match the physiological sodium images to the morphological proton images. In this work, we demonstrated the feasibility of a double-resonant proton/sodium RF setup for abdominal MRI at 3T that provides high sodium sensitivity. After extensive simulations, a 16-channel sodium receive array was built and used in combination with a volumetric sodium transmit coil. Additionally, a local proton coil was included in the setup for anatomical localization. The setup was investigated using electromagnetic field simulations, phantom measurements, and finally in-vivo measurements of a healthy volunteer. A 3- to 6-fold sensitivity improvement of the sodium receive array over the volumetric sodium coil was demonstrated in phantom simulations and measurements. Safety assessment of the local proton transmit/receive coil was performed using specific absorption rate simulations. Finally, the feasibility of the setup was proven by in-vivo measurements.
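
A standard way to combine such a multi-channel receive array into one image is a root-sum-of-squares combination; a minimal sketch of that common method (not necessarily the authors' reconstruction):

```python
import numpy as np

def sos_combine(coil_images):
    """Root-sum-of-squares combination of multi-channel receive data
    (channels on axis 0), a widely used array-combination method."""
    c = np.asarray(coil_images)
    return np.sqrt((np.abs(c) ** 2).sum(axis=0))

# Toy 16-channel example: unit-magnitude signals with random coil phases
rng = np.random.default_rng(1)
coils = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(16, 4, 4)))
img = sos_combine(coils)
print(float(img.min()), float(img.max()))  # every voxel combines to ~sqrt(16) = 4
```

The combination discards phase but is insensitive to per-channel phase offsets, which is convenient for a body-shaped array with strongly varying coil sensitivities.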

The characteristic depth-dose profile of protons traveling through material is the main advantage of proton therapy over conventional radiotherapy with photons or electrons. However, uncertainties regarding the range of the protons in human tissue prevent the full potential of proton therapy from being exploited. Therefore, non-invasive in-vivo dose monitoring is desirable. At the ion beam center MedAustron in Wiener Neustadt, Austria, patient treatment with proton beams started in December 2016. A PET/CT is available in the close vicinity of the treatment rooms, exclusively dedicated to offline PET monitoring directly after the therapeutic irradiation. Preparations for a patient study include workflow tests under realistic clinical conditions using two different phantoms, irradiated with protons prior to the scan in the PET/CT. GATE simulations of the C-11 production are used as the basis for the prediction of the PET measurement. We present results from the workflow tests in comparison with simulation results and thereby demonstrate the applicability of PET monitoring at the MedAustron facility.
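
Because the PET scan is acquired offline, any prediction of the measured signal must account for C-11 decay during the delay between irradiation and scan start; a minimal sketch, assuming a C-11 half-life of about 20.4 minutes:

```python
import math

C11_HALF_LIFE_S = 20.364 * 60.0   # carbon-11 half-life, ~20.4 min

def remaining_fraction(delay_s, half_life_s=C11_HALF_LIFE_S):
    """Fraction of C-11 activity left after a transport/positioning delay
    between the end of irradiation and the start of the offline PET scan."""
    return math.exp(-math.log(2.0) * delay_s / half_life_s)

# e.g. a 10-minute delay between irradiation and scan start
f = remaining_fraction(10 * 60)
print(round(f, 3))  # roughly 70% of the activity survives the delay
```

This sensitivity to the delay is one reason a PET/CT in the close vicinity of the treatment rooms, as at MedAustron, is advantageous for offline monitoring.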