Concept: Mean absolute percentage error
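Every study collected below evaluates its estimates with this metric. As a reference, MAPE averages the absolute errors expressed as a fraction of the true values, MAPE = (100/n) · Σ |(yᵢ − ŷᵢ)/yᵢ|. A minimal sketch (not taken from any of the studies below):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.

    Assumes no actual value is zero (MAPE is undefined there).
    """
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

# Forecasts off by 10% and 20% of the true values average to a MAPE of ~15%.
error = mape([100, 200], [110, 160])
```

Note that MAPE penalizes errors relative to the magnitude of the true value, which is why several abstracts below report it alongside scale-dependent metrics such as MAE and RMSE.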
Accurate estimation of chlorophyll-a concentration (Chl-a) from remotely sensed data is challenging for inland waters because of their optical complexity. In this study, a framework for Chl-a estimation in optically complex inland waters is established based on a combination of optical water classification and two semi-empirical algorithms. Three spectrally distinct water types (Type I to Type III) are first identified using a clustering method applied to remote sensing reflectance (R(rs)) spectra from datasets containing 231 samples from Lake Taihu, Lake Chaohu, Lake Dianchi, and the Three Gorges Reservoir. Classification criteria for each optical water type are then defined for MERIS images based on the spectral characteristics of the three water types. The criteria assign every R(rs) spectrum to one of the three water types by comparing the values from band 7 (central wavelength: 665 nm), band 8 (681.25 nm), and band 9 (708.75 nm) of MERIS images. Based on this classification, type-specific three-band algorithms (TBA) and type-specific advanced three-band algorithms (ATBA) are developed for each water type using the same datasets. Pre-classification decreases the errors of both algorithms: the mean absolute percentage error (MAPE) of TBA drops from 36.5% to 23% for the calibration datasets, and that of ATBA from 40% to 28%. The accuracy of the two algorithms on validation data indicates that optical classification eliminates the need to adjust the optimal positions of the three bands, or to re-parameterize, when estimating Chl-a for other waters. The classification criteria and the type-specific ATBA are additionally validated on two MERIS images. The framework of first classifying optical water types from reflectance characteristics and then developing type-specific algorithms for each water type is a valid scheme for reducing errors in Chl-a estimation for optically complex inland waters.
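The abstract does not give the fitted band positions or coefficients, but three-band semi-empirical algorithms of this family generally combine three red/NIR reflectances as (1/R(rs)(λ1) − 1/R(rs)(λ2)) · R(rs)(λ3), followed by an empirical calibration. A schematic sketch, with illustrative MERIS-like bands and hypothetical calibration coefficients `a` and `b` (none of these values are from the study):

```python
def three_band_index(rrs_665, rrs_709, rrs_753):
    """Generic three-band index: (1/Rrs(l1) - 1/Rrs(l2)) * Rrs(l3).

    The band choices (near 665, 709 and 753 nm) are illustrative of the
    red/NIR MERIS bands such algorithms use, not the paper's tuned positions.
    """
    return (1.0 / rrs_665 - 1.0 / rrs_709) * rrs_753

def chla_estimate(index, a=100.0, b=20.0):
    # Hypothetical linear calibration Chl-a = a * index + b (mg/m^3);
    # a type-specific algorithm would fit a and b per water type.
    return a * index + b
```

The paper's key point is that fitting such coefficients separately per optical water type, rather than once for all waters, is what reduces the MAPE.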
We investigated intermodality agreement of strains from two-dimensional echocardiography (2DE) and cardiac magnetic resonance (CMR) feature tracking (FT) in the assessment of right (RV) and left ventricular (LV) mechanics in tetralogy of Fallot (TOF). Patients were prospectively studied with 2DE and CMR performed contiguously. LV and RV strains were computed separately using 2DE and CMR-FT. Segmental and global longitudinal strains (GLS) for the LV and RV were measured from four-chamber views; LV radial (global radial strain [GRS]) and circumferential strains (GCS) were measured from short-axis views. Intermodality and interobserver agreements were examined. In 40 patients (20 with TOF, mean age 23 years, and 20 adult controls), LV GCS showed the narrowest intermodality limits of agreement (mean percentage error 9.5%), followed by GLS (16.4%). RV GLS had a mean intermodality difference of 25.7%. GLS and GCS had acceptable interobserver agreement for the LV and RV with both 2DE and CMR-FT, whereas GRS had high interobserver and intermodality variability. In conclusion, myocardial strains for the RV and LV derived using currently available 2DE and CMR-FT software are subject to considerable intermodality variability. For both modalities, LV GCS, LV GLS, and RV GLS are reproducible enough to warrant further investigation of incremental clinical merit.
This study tested the validity of revolutions per minute (RPM) measurements from the Pennington Pedal Desk™. Forty-four participants (73% female; 39 ± 11.4 years old; BMI 25.8 ± 5.5 kg/m(2) [mean ± SD]) completed a standardized trial consisting of guided computer tasks while using a pedal desk for approximately 20 min. Measures of RPM were concurrently collected by the pedal desk and the Garmin Vector power meter. After establishing the validity of RPM measurements with the Garmin Vector, we performed equivalence tests, quantified mean absolute percent error (MAPE), and constructed Bland-Altman plots to assess agreement between RPM measures from the pedal desk and the Garmin Vector (criterion) at the minute-by-minute and trial level (i.e., over the approximate 20 min trial period).
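The Bland-Altman analysis mentioned here, and in the strain-imaging abstract above, rests on the bias and 95% limits of agreement of paired differences. A standard-textbook sketch of that computation (generic, not the study's code; the sample values in the test are invented):

```python
import statistics

def bland_altman_limits(criterion, device):
    """Bias and 95% limits of agreement for two paired measurement series.

    Limits are bias +/- 1.96 * SD of the pairwise differences, the usual
    Bland-Altman convention; a plot would chart each difference against
    the pair mean with these three horizontal lines.
    """
    diffs = [d - c for c, d in zip(criterion, device)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrow limits around a near-zero bias, as reported here for trial-level RPM, indicate the device can substitute for the criterion measure.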
The aim of this study was to compare the following seven commercially available activity monitors in terms of step count detection accuracy: Movemonitor (McRoberts), Up (Jawbone), One (Fitbit), ActivPAL (PAL Technologies Ltd.), Nike+ Fuelband (Nike Inc.), Tractivity (Kineteks Corp.) and Sensewear Armband Mini (Bodymedia). Sixteen healthy adults consented to take part in the study. The experimental protocol included walking along an indoor straight walkway, descending and ascending 24 steps, free outdoor walking and free indoor walking. These tasks were repeated at three self-selected walking speeds. Angular velocity signals collected at both shanks using two wireless inertial measurement units (OPAL, APDM Inc.) were used as a reference for the step count, computed using previously validated algorithms. Step detection accuracy was assessed using the mean absolute percentage error computed for each sensor. The Movemonitor and the ActivPAL were also tested within a nine-minute activity recognition protocol, during which the participants performed a set of complex tasks. Posture classifications were obtained from the two monitors and expressed as a percentage of the total task duration. The Movemonitor, One, ActivPAL, Nike+ Fuelband and Sensewear Armband Mini underestimated the number of steps at all the observed walking speeds, whereas the Tractivity significantly overestimated step count. The Movemonitor was the best performing sensor, with an error lower than 2% at all speeds and the smallest error obtained in outdoor walking. The activity recognition protocol showed that the Movemonitor performed best in walking recognition, but had difficulty discriminating between standing and sitting. Results of this study can be used to inform the choice of a monitor for specific applications.
The rapid growth of very elderly populations requires accurate population estimates up to the highest ages. However, it is recognised that estimates derived from census counts are often unreliable. Methods that make use of death data have not previously been evaluated for Australia and New Zealand. The aim was to evaluate a number of nearly-extinct cohort methods for producing very elderly population estimates by age and sex for Australia and New Zealand. The accuracy of official estimates was also assessed. Variants of three nearly-extinct cohort methods, the Survivor Ratio method, the Das Gupta method and a new method explicitly allowing for falling mortality over time, were evaluated by retrospective application over the period 1976-1996. Estimates by sex and single years of age were compared against numbers derived from the extinct cohort method. Errors were measured by the Weighted Mean Absolute Percentage Error. It is confirmed that for Australian females the Survivor Ratio method constrained to official estimates for ages 90+ performed well. However, for Australian males and both sexes in New Zealand, more accurate estimates were obtained by constraining the Survivor Ratio method to official estimates for ages 85+. Official estimates in Australia proved reasonably accurate for ages 90+ but at 100+ they varied significantly in accuracy from year to year. Estimates produced by Statistics New Zealand in aggregate for ages 90+ proved very accurate. We recommend the use of the Survivor Ratio method constrained to official estimates for ages 85+ to create very elderly population estimates for Australia and New Zealand.
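The extinct cohort identity underlying both the reference numbers and the Survivor Ratio method is simple: once a cohort has died out, its population at any earlier date equals the sum of all deaths it experienced after that date; for a not-yet-extinct cohort, the Survivor Ratio method scales the deaths observed so far by a ratio estimated from older, already-extinct cohorts. A schematic sketch (the ratio value in the test is hypothetical, and real implementations constrain the results to official 85+ or 90+ totals as the abstract describes):

```python
def extinct_cohort_population(deaths_by_year):
    # Extinct cohort identity: a cohort's population at the start of the
    # period equals all deaths that cohort subsequently experiences.
    return sum(deaths_by_year)

def survivor_ratio_estimate(deaths_next_k_years, ratio):
    # Survivor Ratio method (schematic): for a nearly-extinct cohort, scale
    # the deaths observed over the next k years by a survivor ratio taken
    # from older extinct cohorts. `ratio` here is an illustrative input,
    # not a value fitted in the study.
    return ratio * sum(deaths_next_k_years)
```

The evaluation in the abstract compares such estimates, by single year of age and sex, against the extinct-cohort numbers, scoring them with the weighted MAPE.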
BACKGROUND: Tuberculosis (TB) is a serious public health issue in developing countries. Early prediction of TB epidemics is very important for their control and intervention. We aimed to develop an appropriate model for predicting TB epidemics and to analyze their seasonality in China. METHODS: Data on monthly TB incidence cases from January 2005 to December 2011 were obtained from the Ministry of Health, China. A seasonal autoregressive integrated moving average (SARIMA) model and a hybrid model combining the SARIMA model with a generalized regression neural network model were used to fit the data from 2005 to 2010. The simulation performance parameters mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to compare the goodness-of-fit of the two models. TB incidence data from 2011 were used to validate the chosen model. RESULTS: Although both models could reasonably forecast the incidence of TB, the hybrid model demonstrated better goodness-of-fit than the SARIMA model. For the hybrid model, the MSE, MAE and MAPE were 38969150, 3406.593 and 0.030, respectively. For the SARIMA model, the corresponding figures were 161835310, 8781.971 and 0.076, respectively. The seasonal trend of TB incidence is predicted to have lower monthly incidence in January and February and higher incidence from March to June. CONCLUSIONS: The hybrid model showed better TB incidence forecasting than the SARIMA model. There is an obvious seasonal trend of TB incidence in China that differs from other countries.
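The three goodness-of-fit measures reported here are standard and easy to state side by side; note the abstract reports MAPE as a fraction (0.030 and 0.076) rather than a percentage. A generic sketch of the computation (the data in the test are invented, not the TB series):

```python
def forecast_errors(actual, predicted):
    """MSE, MAE and MAPE (as a fraction, matching the abstract's 0.030/0.076).

    MSE and MAE are scale-dependent, which is why the TB counts yield MSE
    values in the tens of millions; MAPE is scale-free.
    """
    n = len(actual)
    errs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    mape = sum(abs(e / a) for e, a in zip(errs, actual)) / n
    return mse, mae, mape
```

Because all three metrics agree on the ranking here (hybrid below SARIMA on each), the model comparison in the abstract is unambiguous.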
Proton radiography, which images patients with the same type of particles used for their treatment, is a promising approach to image guidance and water equivalent path length (WEPL) verification in proton radiation therapy. We have shown recently that proton radiographs can be obtained by measuring time-resolved dose rate functions (DRF) using an x-ray amorphous silicon flat panel. The WEPL values were derived solely from the root-mean-square (RMS) of the DRFs, while the intensity information in the DRFs was filtered out. In this work, we explored the use of such intensity information for potential improvement in WEPL accuracy and imaging quality. Three WEPL derivation methods, based respectively on the RMS only, the intensity only, and the intensity-weighted RMS, were tested and compared in terms of the quality of the obtained radiograph images and the accuracy of the WEPL values. A Gammex CT calibration phantom containing inserts made of various tissue substitute materials with independently measured relative stopping powers (RSP) was used to assess imaging performance. Improved image quality with enhanced interfaces was achieved, while preserving accuracy, by utilizing intensity information in the calibration. Other objects, including an anthropomorphic head phantom, a proton therapy range compensator, a frozen lamb head and an “image quality phantom”, were also imaged. Both the RMS-only and intensity-weighted RMS methods derived the RSPs within ± 1% for most of the Gammex phantom inserts, with a mean absolute percentage error of 0.66% over all inserts. For the insert with a titanium rod, the RMS-only method completely failed, whereas the intensity-weighted RMS method remained qualitatively valid. The use of intensity greatly enhanced the interfaces between different materials in the obtained WEPL images, suggesting the potential for image guidance, such as patient positioning and tumor tracking, by proton radiography.
A software sensor model based on hybrid fuzzy neural network for rapid estimation water quality in Guangzhou section of Pearl River, China
- Journal of environmental science and health. Part A, Toxic/hazardous substances & environmental engineering
In order to manage water resources, a software sensor model was designed to estimate water quality using a hybrid fuzzy neural network (FNN) in the Guangzhou section of the Pearl River, China. The software sensor system was composed of a data storage module, a fuzzy decision-making module, a neural network module and a fuzzy reasoning generator module. Fuzzy subtractive clustering was employed to capture the characteristics of the model and to optimize the network architecture for enhanced performance. The results indicate that, on the basis of available on-line measured variables, the software sensor model can accurately predict water quality from the relationship between chemical oxygen demand (COD) and dissolved oxygen (DO), pH and NH4(+)-N. Owing to its ability to recognize time-series patterns and non-linear characteristics, the FNN-based software sensor is clearly superior to the traditional neural network model, with an R (correlation coefficient), MAPE (mean absolute percentage error) and RMSE (root mean square error) of 0.8931, 10.9051 and 0.4634, respectively.
Long length of stay and overcrowding in emergency departments (EDs) are two common problems in the healthcare industry. To decrease the average length of stay (ALOS) and tackle overcrowding, numerous resources, including the numbers of doctors, nurses and receptionists, need to be adjusted, while a number of constraints must be considered at the same time. In this study, an efficient method based on agent-based simulation, machine learning and the genetic algorithm (GA) is presented to determine optimum resource allocation in emergency departments. The GA can effectively explore the entire domain of all 19 variables and identify the optimum resource allocation through evolution, mimicking the survival-of-the-fittest concept. A chaotic mutation operator is used in this study to boost GA performance. A model of the system needs to be run several thousand times during the GA evolution process to evaluate candidate solutions, so the process is computationally expensive. To overcome this drawback, a robust metamodel is first constructed from an agent-based system simulation. The simulation reproduces ED performance under various resource allocations and trains the metamodel. The metamodel is created as an ensemble of an adaptive neuro-fuzzy inference system (ANFIS), a feedforward neural network (FFNN) and a recurrent neural network (RNN), combined using the adaptive boosting (AdaBoost) ensemble algorithm. The proposed GA-based optimization approach is tested in a public ED, and it is shown to decrease the ALOS in this ED case study by 14%. Additionally, the proposed metamodel shows a 26.6% improvement over the average results of ANFIS, FFNN and RNN in terms of mean absolute percentage error (MAPE).
Worldwide, influenza is estimated to result in approximately 3 to 5 million annual cases of severe illness and approximately 250,000 to 500,000 deaths. We need an accurate time-series model to predict the number of influenza patients. Although time-series models with different time lags as feature spaces can lead to varied accuracy, past studies simply adopted a time lag in their models without comparing or selecting an appropriate number of time lags. We investigated the performance of 6 different time lags in 6 different models: Auto-Regressive Integrated Moving Average (ARIMA), Support Vector Regression (SVR), Random Forest (RF), Gradient Boosting (GB), Artificial Neural Network (ANN), and Long Short-Term Memory (LSTM), with hyperparameter adjustment. To the best of our knowledge, this is the first time that LSTM has been used to predict influenza outbreaks. We found that a time lag of 52 weeks led to the lowest Mean Absolute Percentage Error (MAPE) for ARIMA, ANN and LSTM, while the machine learning models (SVR, RF, GB) achieved their lowest MAPEs with a time lag of 4 weeks. We also found that the MAPEs of the machine learning models were lower than that of ARIMA, and the MAPEs of the deep learning models (ANN, LSTM) were lower than those of the machine learning models. Among all the models, the 4-layer LSTM model reached the lowest MAPE of 5.4%, and the 5-layer LSTM model with regularization reached the lowest root mean squared error (RMSE) of 0.00210.
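The time-lag comparison in this last abstract rests on a generic construction: turning a univariate series into supervised-learning pairs where each target is predicted from its previous k observations. A minimal sketch of that construction (not the paper's pipeline; the series in the test is invented):

```python
def make_lagged_features(series, n_lags):
    """Turn a univariate series into (X, y) pairs where each row of X holds
    the previous `n_lags` observations and y holds the next value.

    Choosing n_lags (e.g. 4 vs. 52 weeks in the abstract) fixes the feature
    space each model is trained on, which is what the study compares.
    """
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # the lag window ending just before i
        y.append(series[i])             # the value to predict
    return X, y
```

With weekly incidence data, `n_lags=52` lets a model see a full seasonal cycle per sample, which plausibly explains why the seasonality-sensitive models (ARIMA, ANN, LSTM) preferred it.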