Concept: Median absolute deviation
BACKGROUND: To evaluate institutional nursing care performance against national comparative statistics (benchmarks), approximately one in every three major healthcare institutions (over 1,800 hospitals) across the United States has joined the National Database of Nursing Quality Indicators® (NDNQI®). With over 18,000 hospital units contributing data for nearly 200 quantitative measures at present, reliable and efficient input-data screening of all quantitative measures is critical to the integrity, validity, and on-time delivery of NDNQI reports. METHODS: Using Monte Carlo simulation and quantitative NDNQI indicator examples, we compared two ad hoc methods based on robust scale estimators, the interquartile range (IQR) and the median absolute deviation from the median (MAD), with the classic, theoretically based Fast Minimum Covariance Determinant (FAST-MCD) approach for initial univariate outlier detection. RESULTS: While FAST-MCD applied in one dimension can be sensitive and is better suited to identifying groups of outliers because of its high breakdown point, the ad hoc IQR and MAD approaches are fast, easy to implement, and can be more robust and efficient, depending on the distributional properties of the underlying measure of interest. CONCLUSION: Given the highly skewed distributions of most NDNQI indicators within a short data-screening window, the FAST-MCD approach, when applied to one-dimensional raw data, can produce higher false-alarm rates for potential outliers than the IQR and MAD at the same pre-set critical value, thereby overburdening data quality control at both the data-entry and administrative ends in our setting.
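The two ad hoc screens compared above are easy to implement directly. A minimal sketch in Python (the 3.0 and 1.5 cutoffs are conventional illustrative choices, not values from the study; the 1.4826 factor scales the MAD so it estimates σ under normality):

```python
import numpy as np

def mad_outliers(x, cutoff=3.0):
    """Flag points whose robust z-score exceeds `cutoff`.

    Uses the median absolute deviation from the median (MAD),
    scaled by 1.4826 so it estimates sigma for normal data.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > cutoff * mad

def iqr_outliers(x, k=1.5):
    """Flag points outside the Tukey fences Q1 - k*IQR and Q3 + k*IQR."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

# made-up indicator values with one gross entry error
data = np.array([4.1, 4.3, 3.9, 4.0, 4.2, 9.8])
print(mad_outliers(data))  # only the 9.8 entry is flagged
print(iqr_outliers(data))  # likewise
```

Both screens are O(n) beyond a sort and, unlike mean/SD rules, their fences are not dragged toward the outliers themselves, which is what the abstract's breakdown-point discussion is about.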
- IEEE transactions on image processing : a publication of the IEEE Signal Processing Society
Recently, there has been significant interest in robust fractal image coding for robustness against outliers. However, the known robust fractal coding methods (e.g., HFIC and LAD-FIC) are not optimal: besides their high computational cost, they use the corrupted domain block as the independent variable in the robust regression model, which may adversely affect the robust estimation of the fractal parameters (depending on the noise level). This paper presents a Huber fitting plane-based fractal image coding (HFPFIC) method, which builds Huber fitting planes (HFPs) for the domain and range blocks, respectively, ensuring an uncorrupted independent variable in the robust model. On this basis, a new matching error function is introduced to robustly evaluate the best scaling factor. Meanwhile, a median absolute deviation (MAD) about the median decomposition criterion is proposed to achieve fast adaptive quadtree partitioning for images corrupted by salt-and-pepper noise. To reduce computational cost, a no-search method is applied to speed up the encoding process. Experimental results show that the proposed HFPFIC outperforms conventional robust fractal image coding methods in both encoding speed and the quality of the restored image. Furthermore, the no-search method significantly reduces encoding time, achieving less than 2.0 s for HFPFIC with acceptable image quality degradation. In addition, we show that, combined with the MAD decomposition scheme, the HFP technique used as a robust method can further reduce encoding time while maintaining image quality.
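The appeal of a MAD-based split criterion for quadtree partitioning under salt-and-pepper noise can be illustrated as below. This is a simplified sketch, not the paper's exact rule, and the threshold `tol` is a made-up parameter:

```python
import numpy as np

def should_split(block, tol):
    """Split a block when the MAD about the median of its pixel
    intensities exceeds `tol`. Unlike variance, the MAD is largely
    insensitive to a few isolated salt-and-pepper pixels."""
    med = np.median(block)
    return float(np.median(np.abs(block - med))) > tol

# a flat block with two salt-and-pepper pixels is NOT split...
flat = np.full((8, 8), 100.0)
flat[0, 0], flat[7, 7] = 0.0, 255.0
print(should_split(flat, tol=5.0))    # False: the MAD ignores the outliers

# ...while a genuinely detailed block is
detail = np.arange(64, dtype=float).reshape(8, 8)
print(should_split(detail, tol=5.0))  # True
```

A variance-based criterion would split the flat block too, because the two corrupted pixels inflate the variance; this is why a robust scale estimate makes the partitioning noise-adaptive.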
Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources, and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals.
In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific homeland should not be attempted.
We analysed the peer review of grant proposals under Marie Curie Actions, a major EU research funding instrument, which involves two steps: an independent assessment (Individual Evaluation Report, IER) performed remotely by 3 raters, and a consensus opinion reached during a meeting by the same raters (Consensus Report, CR). For 24,897 proposals evaluated from 2007 to 2013, the association between average IER and CR scores was very high across different panels, grant calls and years. Median average deviation (AD) index, used as a measure of inter-rater agreement, was 5.4 points on a 0-100 scale (interquartile range 3.4-8.3), overall, demonstrating a good general agreement among raters. For proposals where one rater disagreed with the other two raters (n=1424; 5.7%), or where all 3 raters disagreed (n=2075; 8.3%), the average IER and CR scores were still highly associated. Disagreement was more frequent for proposals from Economics/Social Sciences and Humanities panels. Greater disagreement was observed for proposals with lower average IER scores. CR scores for proposals with initial disagreement were also significantly lower. Proposals with a large absolute difference between the average IER and CR scores (≥10 points; n=368, 1.5%) generally had lower CR scores. An inter-correlation matrix of individual raters' scores of evaluation criteria of proposals indicated that these scores were, in general, a reflection of raters' overall scores. Our analysis demonstrated a good internal consistency and general high agreement among raters. Consensus meetings appear to be relevant for particular panels and subsets of proposals with large differences among raters' scores.
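The AD index used above is straightforward to compute per proposal: the average absolute deviation of the raters' scores from their mean, on the same 0-100 scale as the scores themselves (the example scores are made up):

```python
import statistics

def ad_index(scores):
    """Average absolute deviation of raters' scores from their mean,
    expressed on the same 0-100 scale as the scores."""
    m = statistics.mean(scores)
    return statistics.mean(abs(s - m) for s in scores)

# three raters scoring one proposal on a 0-100 scale (made-up numbers)
print(ad_index([82, 88, 79]))  # ~3.33: the raters agree fairly closely
```

The study's overall median of 5.4 points (IQR 3.4-8.3) is then the median of this per-proposal statistic across all 24,897 proposals; lower values indicate tighter agreement.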
Fallout from the Fukushima Dai-ichi nuclear power plant accident resulted in a 3,000 km² radioactive contamination plume. Here, we model the progressive dilution of the radiocesium contamination in 327 sediment samples from two neighboring catchments with different timing of soil decontamination. Overall, we demonstrate that there has been a ~90% decrease in the contribution of upstream contaminated soils to sediment transiting the coastal plains between 2012 (median (M) contribution of 73%; mean absolute deviation (MAD) of 27%) and 2015 (M 9%, MAD 6%). The occurrence of typhoons and the progress of decontamination in different tributaries of the Niida River resulted in temporary increases in local contamination. However, the much lower contribution of upstream contaminated soils to coastal plain sediment in November 2015 demonstrates that the source of the easily erodible, contaminated material has potentially been removed by decontamination, diluted by subsoils, or eroded and transported to the Pacific Ocean.
Sperm must mature functionally in the process of capacitation to become able to fertilize. Capacitation depends on membrane lipid changes, and can be quantitatively assessed by redistribution of the ganglioside GM1, the basis of the Cap-Score™ sperm function test. Here, differences in Cap-Score were compared among and within men at two time points. Ejaculates were liquefied, washed, and incubated for 3 hours under capacitating (Cap) conditions, then fixed and analyzed immediately (Day0); after being incubated 3 hours under Cap conditions then maintained 22-24 hours in fix (Day1-fix); or after 22-24 hours incubation under Cap conditions prior to fixation (Day1). In all cases, a light fixative previously shown to allow membrane lipid movements was used. Day1-fix and Day1 Cap-Scores were greater than Day0 (p < 0.001; n = 25), whereas Day1-fix and Day1 Cap-Scores were equivalent (p = 0.43; n = 25). In 123 samples from 52 fertile men, Cap-Score increased more than 1SD (7.7; calculated previously from a fertile cohort) from Day0 to Day1-fix in 44% (54/123) of the samples. To test whether timing of capacitation was consistent within an individual, 52 samples from 11 fertile men were classified into either "early" or "late" capacitation groups. The average capacitation group concordance within a donor was 81%. Median absolute deviation (MAD; in Cap-Score units) was used to assess the tightness of clustering of the difference from Day0 to Day1-fix within individuals. The average (2.21) and median (1.98) MAD confirmed consistency within individuals. Together, these data show that the timing of capacitation differed among men and was consistent within men.
Using four different benchmark sets of molecular crystals, we establish the level of confidence for lattice energies estimated using CE-B3LYP model energies and experimental crystal structures [see IUCrJ, 2017, 4, 575-587]. We conclude that they compare very well with available benchmark estimates derived from sublimation enthalpies, and in many cases they are comparable with, and sometimes better than, more computationally demanding approaches, such as those based on periodic DFT plus dispersion methodologies. The performance over the complete set of 110 crystals indicates a mean absolute deviation from benchmark energies of only 6.4 kJ mol⁻¹. Applications to polymorphic crystals and larger molecules are also presented and critically discussed. The results highlight the importance of recognizing the consequences of different sets of crystal/molecule geometries when comparing different methodologies, as well as the need for more extensive benchmark sets of crystal structures and associated lattice energies.
Quantum dots (QDs)-based white light-emitting diodes (QDs-WLEDs) have attracted considerable attention in lighting and flat-panel display applications, by virtue of their high luminous efficacy and excellent color rendering ability. However, QDs' key optical parameters for optical modeling, including the scattering, absorption and anisotropy coefficients, remain unclear, which severely hinders the design and optimization of QDs-WLEDs. In this work, we propose a new, precise optical modeling approach for QDs. Optical properties of a QDs-polymer film were obtained for the first time by combining double integrating sphere (DIS) system measurements with inverse adding-doubling (IAD) algorithm calculations. The measured results show that the typical scattering, absorption and anisotropy coefficients of red-emissive QDs are 2.9382 mm⁻¹, 3.7000 mm⁻¹ and 0.4918 for blue light, respectively, and 1.2490 mm⁻¹, 0.6062 mm⁻¹ and 0.5038 for red light, respectively. A Monte Carlo ray-tracing model was set up for validation. With a maximum deviation of 1.16%, the simulated values quantitatively agree with the experimental results. Our approach therefore provides an effective way to measure the optical properties of QDs and to model them precisely for QDs-WLEDs.
The aim of this study was to analyse gait variability and symmetry in race walkers. Eighteen senior and 17 junior athletes race walked on an instrumented treadmill (for 10 km and 5 km, respectively) at speeds equivalent to 103% of season's best time for 20 km and 10 km, respectively. Spatio-temporal and ground reaction force (GRF) data were recorded at 2.5 km, and at 4.5, 6.5 and 8.5 km for a subsection of athletes. Gait variability was measured using the median absolute deviation (MAD), whereas inter-leg symmetry was measured using the symmetry angle. Both groups showed low variability for step length (<0.9%), step frequency (<1.1%), contact time (≤1.2%) and vertical peak force values (<5%), and neither variability nor symmetry changed with distance walked. Junior athletes were more variable for both step length (P = 0.004) and loading force (P = 0.003); no differences for gait symmetry were found. Whereas there was little mean asymmetry overall, individual analyses identified asymmetry in several athletes (symmetry angle ≥ 1.2%). Importantly, asymmetrical step lengths were found in 12 athletes and could result from underlying imbalances. Coaches are advised to observe athletes on an individual basis to monitor for both variability and asymmetry.
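The symmetry angle referred to here (due to Zifchock and colleagues) can be sketched in its simple form, assuming both gait variables are positive so that no quadrant correction is needed; the step-length values below are made up:

```python
import math

def symmetry_angle(x_left, x_right):
    """Symmetry angle in percent: 0% means perfect left/right symmetry.
    Simple form, valid for positive-valued gait variables."""
    theta = math.degrees(math.atan2(x_left, x_right))  # 45 deg when equal
    return (45.0 - theta) / 90.0 * 100.0

# e.g. mean step lengths (m) for the left and right legs (made-up values)
print(symmetry_angle(1.10, 1.10))  # ~0: perfectly symmetrical
print(symmetry_angle(1.14, 1.06))  # negative: left-dominant asymmetry
```

Because it is built from the ratio of the two sides, the symmetry angle needs no arbitrary reference leg, which is one reason it is preferred over simple percentage-difference indices in gait work.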
Methods are needed for rapid screening of environmental compounds for neurotoxicity, particularly ones that assess function. To demonstrate the utility of microelectrode array (MEA)-based approaches as a rapid neurotoxicity screening tool, 1055 chemicals from EPA’s phase II ToxCast library were evaluated for effects on neural function and cell health. Primary cortical networks were grown on multi-well microelectrode array (mwMEA) plates. On day in vitro 13, baseline activity (40 min) was recorded prior to exposure to each compound (40 µM). Changes in spontaneous network activity [mean firing rate (MFR)] and cell viability (lactate dehydrogenase and CellTiter Blue) were assessed within the same well following compound exposure. Following exposure, 326 compounds altered (increased or decreased) normalized MFR beyond hit thresholds based on 2× the median absolute deviation of DMSO-treated wells. Pharmaceuticals, pesticides, fungicides, chemical intermediates, and herbicides accounted for 86% of the hits. Further, changes in MFR occurred in the absence of cytotoxicity, as only eight compounds decreased cell viability. ToxPrint chemotype analysis identified several structural domains (e.g., biphenyls and alkyl phenols) significantly enriched with MEA actives relative to the total test set. The top 5 enriched ToxPrint chemotypes were represented in 26% of the MEA hits, whereas the top 11 ToxPrints were represented in 34% of MEA hits. These results demonstrate that large-scale functional screening using neural networks on MEAs can fill a critical gap in assessment of neurotoxicity potential in ToxCast assay results. Further, a data-mining approach identified ToxPrint chemotypes enriched in the MEA-hit subset, which define initial structure-activity relationship inferences, establish potential mechanistic associations to other ToxCast assay endpoints, and provide working hypotheses for future studies.
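A 2×-MAD hit window of the kind described can be sketched as follows. The control values are illustrative, and since the abstract does not say whether any normality scaling was applied to the MAD, the raw (unscaled) MAD is assumed:

```python
import numpy as np

def hit_thresholds(control_values, k=2.0):
    """Hit window: median(control) +/- k * MAD(control), with MAD the
    raw median absolute deviation about the median (no 1.4826 scaling)."""
    c = np.asarray(control_values, dtype=float)
    med = np.median(c)
    mad = np.median(np.abs(c - med))
    return med - k * mad, med + k * mad

# made-up normalized mean firing rates from DMSO control wells
dmso = [1.00, 0.96, 1.05, 0.99, 1.02, 0.95, 1.03]
lo, hi = hit_thresholds(dmso)
print(lo, hi)  # a compound whose normalized MFR falls outside this window is a hit
```

Anchoring the window to the MAD of the vehicle controls keeps the hit threshold stable even if a few control wells misbehave, which matters when the same rule is applied across more than a thousand compounds.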