Journal: Zeitschrift für medizinische Physik
The earliest studies on ‘disability glare’ date from the early 20th century. The condition was defined as the negative effect on visual function of a bright light located at some distance in the visual field. It was found that for larger angles (>1 degree) the functional effect corresponded precisely to the effect of a light with a luminance equal to that of the light perceived to spread around such a bright source. This perceived spreading of light was called straylight, and by international standard disability glare was defined as identical to straylight. The phenomenon was recognized in the ophthalmological community as an important aspect of the quality of vision, and attempts were made to design instruments to measure it. These must not be confused with instruments that assess light spreading over small distances (<1 degree), as originating from (higher-order) aberrations and defocus. In recent years a new instrument (C-Quant) has gained acceptance for the objective and controllable assessment of straylight in the clinical setting. This overview sketches the historical development of straylight measurement, as well as the results of studies on the origins of straylight (or disability glare) in the normal eye and on findings in cataract (surgery) and corneal conditions.
For dosimetry in radioligand therapy, the time-integrated activity coefficients (TIACs) for organs at risk and for tumour lesions have to be determined. The sampling scheme used affects the TIACs and therefore the calculated absorbed doses. The aim of this work was to develop a general and flexible method that analyses numerous clinically applicable sampling schedules using true time-activity curves (TACs) of virtual patients.
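To make the quantity concrete: a TIAC is the integral of the organ or lesion TAC over time, normalized to the injected activity. The sketch below (not from the study; the mono-exponential model, the sampling times and all numbers are illustrative assumptions) shows how a TIAC could be estimated from a sampled TAC by curve fitting:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    # Mono-exponential TAC model: A(t) = A0 * exp(-lambda * t)
    return a0 * np.exp(-lam * t)

def tiac_from_samples(t_h, activity_mbq, injected_activity_mbq):
    """Fit a mono-exponential to sampled activities and return the
    time-integrated activity coefficient (TIAC), in hours."""
    (a0, lam), _ = curve_fit(mono_exp, t_h, activity_mbq,
                             p0=(activity_mbq[0], 0.01))
    # Analytic integral of a0*exp(-lam*t) from 0 to infinity is a0/lam
    return (a0 / lam) / injected_activity_mbq

# Hypothetical sampling schedule (hours p.i.) on a noiseless true TAC
t = np.array([4.0, 24.0, 48.0, 72.0])
a = 100.0 * np.exp(-0.02 * t)  # true TAC: 100 MBq, lambda = 0.02/h
print(round(tiac_from_samples(t, a, injected_activity_mbq=7000.0), 3))
```

Varying the sampling times in `t` while keeping the true TAC fixed mirrors the paper's idea of comparing sampling schedules against a known ground truth.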
Convolutional neural networks have begun to surpass classical statistical and atlas-based machine learning techniques in medical image segmentation in recent years, proving superior in performance and speed. However, a major challenge the community faces is the mismatch in variability between training and evaluation datasets, and therefore a dependency on proper data pre-processing. Intensity normalization is a widely applied technique for reducing the variance of the data, for which several methods are available, ranging from uniformity transformation to histogram equalization. The current study analyses the influence of intensity normalization on the cerebellum segmentation performance of a convolutional neural network (CNN).
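As a minimal sketch of what such normalization looks like in practice (these are two generic, commonly used variants, not necessarily the ones compared in the study):

```python
import numpy as np

def zscore_norm(img, mask=None):
    """Z-score intensity normalization: zero mean, unit variance,
    optionally computed over a mask (e.g. a brain mask) only."""
    vals = img[mask] if mask is not None else img
    return (img - vals.mean()) / vals.std()

def minmax_norm(img):
    """Rescale intensities linearly into [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

rng = np.random.default_rng(0)
img = rng.normal(100.0, 15.0, size=(8, 8, 8))  # toy MR-like volume
z = zscore_norm(img)
m = minmax_norm(img)
print(abs(z.mean()) < 1e-9, abs(z.std() - 1.0) < 1e-9, m.min(), m.max())
```

Applying the same normalization to training and evaluation data is what reduces the dataset mismatch the abstract refers to.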
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and rapidly expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging, by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Multiple quantum (MQ) NMR signals that appear in the presence of weak quadrupole interactions were formulated using statistical tensors (Fano, 1957). The approach aims to provide a concise, computer-based tool for the detailed analysis and modification of MQ pulse sequences. The calculation avoids the lengthy procedure of manipulating exponential operators and, moreover, the same formulae are applicable to any interval in the triple quantum (TQ) pulse sequence, as well as to any spin value. The quantum operator algebra was implemented in “Mathematica” software (Wolfram Inc.). The evolution of the tensors in the TQ pulse sequence was illustrated graphically using the corresponding spherical harmonics. The visualization takes into account the parity properties of the irreducible tensors and the corresponding spherical harmonics.
Helical TomoTherapy allows a highly conformal dose distribution to complex target geometries with good protection of organs at risk. However, the small field sizes associated with this method are a possible source of dosimetric uncertainties. In TRS-483, the IAEA has published detector-specific field output correction factors for static TomoTherapy fields. This work investigates the average subfield size of helical TomoTherapy plans.
The aim of the present work is the dosimetric characterization of a novel vented PinPoint ionization chamber (PTW 31023, PTW-Freiburg, Germany). This chamber replaces the previous model (PTW 31014); the diameter of the central electrode has been increased from 0.3 to 0.6 mm and the guard ring has been redesigned. Correction factors for reference and non-reference measurement conditions were examined.
Diffusion anisotropy in diffusion tensor imaging (DTI) is commonly quantified with normalized diffusion anisotropy indices (DAIs). Most often, the fractional anisotropy (FA) is used, but several alternative DAIs have been introduced in attempts to maximize the contrast-to-noise ratio (CNR) in diffusion anisotropy maps. Examples include the scaled relative anisotropy (sRA), the gamma variate anisotropy index (GV), the surface anisotropy (UAsurf), and the lattice index (LI). With the advent of multidimensional diffusion encoding, it became possible to determine the presence of microscopic diffusion anisotropy in a voxel, which is theoretically independent of orientation coherence. In analogy to DTI, the microscopic anisotropy is typically quantified by the microscopic fractional anisotropy (μFA). In this work, in addition to the μFA, the four microscopic diffusion anisotropy indices (μDAIs) μsRA, μGV, μUAsurf, and μLI are defined in analogy to the respective DAIs by means of the average diffusion tensor and the covariance tensor. Simulations with three representative distributions of microscopic diffusion tensors revealed distinct CNR differences when differentiating between isotropic and microscopically anisotropic diffusion. q-Space trajectory imaging (QTI) was employed to acquire in vivo brain maps of all indices. For this purpose, a 15-min protocol featuring linear, planar, and spherical tensor encoding was used. The resulting maps were of good quality and exhibited different contrasts, e.g. between gray and white matter. This indicates that it may be beneficial to use more than one μDAI in future investigational studies.
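For reference, the standard FA, the best-known of the DAIs mentioned above, is computed from the eigenvalues of the diffusion tensor; the sketch below (plain NumPy, with an illustrative white-matter-like tensor, not data from the study) shows this computation:

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a symmetric 3x3 diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()  # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Isotropic tensor -> FA = 0; elongated ("stick-like") tensor -> FA near 1
iso = np.eye(3) * 1e-3
stick = np.diag([1.7e-3, 0.2e-3, 0.2e-3])  # diffusivities in mm^2/s
print(fractional_anisotropy(iso), round(fractional_anisotropy(stick), 3))
```

The μDAIs of the paper replace the single-voxel tensor by the average diffusion tensor plus a covariance tensor, but the normalization idea is the same.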
Quantitative susceptibility mapping (QSM) reveals pathological changes in widespread diseases such as Parkinson’s disease, multiple sclerosis, or hepatic iron overload. QSM requires multiple processing steps after the acquisition of magnetic resonance imaging (MRI) phase measurements, such as phase unwrapping, background field removal and the solution of an ill-posed field-to-source inversion. Current techniques use iterative optimization procedures to solve the inversion and the background field correction; these are computationally expensive and can lead to suboptimal or over-regularized solutions requiring a careful choice of parameters, which makes clinical application of QSM challenging. We have previously demonstrated that a deep convolutional neural network can invert the magnetic dipole kernel with a very efficient feed-forward multiplication, requiring neither iterative optimization nor the choice of regularization parameters. In this work, we extended this approach to remove background fields in QSM. The prototype method, called SHARQnet, was trained on simulated background fields and tested on 3T and 7T brain datasets. We show that SHARQnet outperforms current background field removal procedures and generalizes to a wide range of input data without requiring any parameter adjustments. In summary, we demonstrate that the solution of ill-posed problems in QSM can be achieved by learning the underlying physics causing the artifacts and removing them in an efficient and reliable manner, thereby helping to bring QSM towards clinical application.
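The dipole kernel whose inversion makes QSM ill-posed has the well-known k-space form D(k) = 1/3 − k_z²/|k|² (for B0 along z). The sketch below (generic, not the SHARQnet code) constructs it and shows why the inversion is ill-posed: D vanishes on the magic-angle cone, so dividing the field by D there is undefined:

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Magnetic dipole kernel in k-space, D(k) = 1/3 - kz^2/|k|^2,
    assuming B0 along the z axis (a standard QSM convention)."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0  # the k = 0 term is undefined; set it to zero
    return D

D = dipole_kernel((32, 32, 32))
# D is bounded in [-2/3, 1/3] and crosses zero on the magic-angle cone
print(D.min(), D.max())
```

A network such as the one described above learns to perform this inversion (or, here, the background field removal) as a feed-forward map instead of regularizing the division by D.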
Non-conventional scan trajectories for interventional three-dimensional imaging promise low-dose interventions and better radiation protection for personnel. Circular tomosynthesis (cTS) scan trajectories yield an anisotropic image quality distribution. In contrast to conventional computed tomography (CT), the reconstructions have a preferred focus plane; in the two perpendicular planes, limited-angle artifacts are introduced. A reduction of these artifacts enhances image quality while maintaining the low dose. We apply Deep Artifact Correction (DAC) to this task. cTS simulations of a digital phantom are used to generate training data. Three U-Net-based networks and a 3D-ResNet are trained to estimate the correction map between the cTS and the phantom. We show that limited-angle artifacts can be mitigated using simulation-based DAC. The U-Net-corrected cTS achieved a root mean squared error (RMSE) of 124.24 Hounsfield units (HU) on 60 simulated test scans in comparison to the digital phantoms, an error reduction of 59.35% relative to the uncorrected cTS. The achieved image quality is similar to that of a simulated cone beam CT (CBCT). Our network was also able to mitigate artifacts in scans of objects which differ strongly from the training data. Application to real cTS test scans showed error reductions of 45.18% and 26.4% with the 3D-ResNet, in reference to a high-dose CBCT.
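The two figures of merit quoted above, RMSE in HU and relative error reduction, are straightforward to define; the sketch below (toy volumes with an invented constant offset, not the paper's data) makes them explicit:

```python
import numpy as np

def rmse_hu(recon, reference):
    """Root mean squared error between a reconstruction and its
    reference, both given in Hounsfield units."""
    return np.sqrt(np.mean((recon - reference) ** 2))

def error_reduction_pct(rmse_before, rmse_after):
    """Relative error reduction in percent after artifact correction."""
    return 100.0 * (rmse_before - rmse_after) / rmse_before

# Toy volumes standing in for the cTS reconstruction and the phantom
ref = np.zeros((16, 16, 16))
uncorrected = ref + 50.0   # constant 50 HU error before correction
corrected = ref + 20.0     # constant 20 HU error after correction
print(error_reduction_pct(rmse_hu(uncorrected, ref),
                          rmse_hu(corrected, ref)))
```

In the paper, `reference` corresponds to the digital phantom for the simulated scans and to a high-dose CBCT for the real scans.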