A paper-based, multiplexed microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, compared with the Abaxis and Roche/Hitachi platforms. To evaluate the feasibility of remote result interpretation, we also compared reading cell phone camera images of completed tests with reading the device in real time.
The study aimed to evaluate the test-retest reliability of the newly developed 356 Soccer Shooting Test (356-SST), and the ability of this test to discriminate between soccer players by proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups by proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run-up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98 ± 0.65 vs. 2.00 ± 0.63 m; BV: 24.6 ± 2.3 vs. 24.5 ± 1.9 m s-1; SQ: 2.92 ± 1.0 vs. 2.93 ± 1.0 m s-1; all p > 0.05). The intra-class correlation coefficients were high (ICC = 0.70-0.88), and the coefficients of variation were low (CV = 5.3-5.4%). Finally, all three 356-SST variables identified, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men's soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations.
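The shooting-quality variable is a simple ratio of the other two measurements. A minimal sketch of that definition follows; the function and variable names are illustrative, not taken from the study:

```python
def shooting_quality(accuracy_m, flight_time_s):
    """SQ as defined in the 356-SST: distance from the ball-entry point
    to the goal centre (SA, in m) divided by the time from ball contact
    to the point of entry (s)."""
    return accuracy_m / flight_time_s

# Example: a kick landing 2.0 m from the goal centre after a 0.68 s flight
sq = shooting_quality(2.0, 0.68)  # close to the reported group mean of ~2.9 m/s
```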
Virulence is a microbial property that is realized only in susceptible hosts. There is no absolute measurement for virulence, and consequently it is always measured relative to a standard, usually another microbe or host. This article introduces the concept of pathogenic potential, which provides a new approach to measuring the capacity of microbes for virulence. The pathogenic potential is proportional to the fraction of individuals who become symptomatic after infection with a defined inoculum and can include such attributes as mortality, communicability, and the time from infection to disease. The calculation of the pathogenic potential has significant advantages over the use of the lethal dose that kills 50% of infected individuals (LD50) and allows direct comparisons between individual microbes. An analysis of the pathogenic potential of several microbes for mice reveals a continuum, which in turn supports the view that there is no dividing line between pathogenic and nonpathogenic microbes.
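As a back-of-the-envelope illustration of the idea (the exact formula, and any weighting for mortality, communicability, or time to disease, is specified in the article, not here):

```python
def pathogenic_potential(n_symptomatic, n_infected, inoculum):
    """Illustrative form only: fraction of infected hosts that become
    symptomatic, normalized by the defined inoculum. The article also
    allows weighting by attributes such as mortality and communicability."""
    return (n_symptomatic / n_infected) / inoculum

# Two microbes compared at the same defined inoculum of 1e6 organisms:
pp_a = pathogenic_potential(8, 10, 1e6)  # 8 of 10 hosts symptomatic
pp_b = pathogenic_potential(1, 10, 1e6)  # 1 of 10 hosts symptomatic
```

Because both values are computed at a defined inoculum, they can be compared directly between microbes, which is the advantage over LD50 noted in the abstract.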
To investigate whether the daily workload per nurse (Oulu Patient Classification points per nurse, OPCq/nurse), as measured by the RAFAELA system, correlates with different types of patient safety incidents and with patient mortality, and to compare the results with regressions based on the standard patients/nurse measure.
Although drug development typically focuses on binding thermodynamics, recent studies suggest that kinetic properties can strongly impact a drug candidate's efficacy. Robust techniques for measuring inhibitor association and dissociation rates are therefore essential. To address this need, we have developed a pair of complementary isothermal titration calorimetry (ITC) techniques for measuring the kinetics of enzyme inhibition. The advantages of ITC over standard techniques include speed, generality, and versatility; ITC also measures the rate of catalysis directly, making it ideal for quantifying rapid, inhibitor-dependent changes in enzyme activity. We used our methods to study reversible covalent and non-covalent inhibitors of prolyl oligopeptidase (POP). We extracted kinetics spanning three orders of magnitude, including rates too rapid for standard methods, and measured sub-nanomolar binding affinities below the typical ITC limit. These results shed light on the inhibition of POP and demonstrate the general utility of ITC-based enzyme inhibition kinetic measurements.
A simplified method for measuring the fluidic resistance (R_fluidic) of microfluidic channels is presented, in which the electrical resistance (R_elec) of a channel filled with a conductivity standard solution is measured and directly correlated to R_fluidic using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine R_fluidic to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied for these particular channels. Compared with standard flow-rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ∼40 to 600 kPa s mm-3) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic R_fluidic measurements were possible in more complex microfluidic designs. Microchannel R_elec was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems.
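The correlation rests on the hydraulic-electric analogy: both resistances scale with the same channel geometry. A minimal sketch for a wide, shallow rectangular channel follows; the parallel-plate approximation and the numbers used are illustrative assumptions, not the paper's exact equation or correction factor:

```python
def fluidic_resistance(r_elec, conductivity, viscosity, height):
    """Estimate R_fluidic (Pa s m^-3) from R_elec (ohm) for a wide, shallow
    rectangular channel of height h (m), measured with a conductivity
    standard (S/m) and operated with a liquid of viscosity mu (Pa s).

    Both resistances share the geometric factor L/w:
        R_elec    =  L / (conductivity * w * h)
        R_fluidic ~  12 * viscosity * L / (w * h**3)
    so  R_fluidic ~ (12 * viscosity * conductivity / h**2) * R_elec.
    """
    return 12.0 * viscosity * conductivity / height**2 * r_elec

# Hypothetical 50-um-deep channel, 1 S/m standard, water (1e-3 Pa s),
# and a measured electrical resistance of 1 Mohm:
r_f = fluidic_resistance(1e6, 1.0, 1e-3, 50e-6)
```

The conversion factor is a constant for a given channel depth and fluid, which is why a single electrical measurement suffices once the geometry is known.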
The performance of three different types of ion mobility spectrometer (IMS) devices was studied: the GDA2 with a radioactive ion source (Airsense, Germany), a UV-IMS with a photo-ionization source (G.A.S., Germany), and the VG-Test with a corona discharge source (3QBD, Israel). The gas-phase ion chemistry in the IMS devices affected the species formed and their measured reduced mobility values. The sensitivity and limit of detection (LOD) for trimethylamine (TMA), putrescine, and cadaverine were compared by continuous monitoring of an air stream containing a given concentration of the analyte and by measurement of headspace vapors of TMA in a sealed vial. Preprocessing of the mobility spectra and multivariate curve resolution (MCR-LASSO) improved the accuracy of the measurements by correcting baseline effects, adjusting for variations in drift time, enhancing the signal-to-noise ratio, and deconvolving the complex data matrix into its pure components. The LOD for measurement of the biogenic amines by the three IMS devices was between 0.1 and 1.2 ppm (for TMA with the VG-Test and GDA2, respectively) and between 0.2 and 0.7 ppm for putrescine and cadaverine with all three devices. Considering the uncertainty in the LOD determination, there is almost no statistically significant difference between the three devices, although they differ in operating temperature, ionization method, drift tube design, and dopant chemistry. This finding may have general implications for the achievable performance of classic IMS devices.
BACKGROUND: To assess the presence and extent of photophobia in children with intermittent exotropia (X(T)) using the contrast sensitivity test. METHODS: Fifty-eight children with X(T) and 34 normal controls were studied with the functional acuity contrast test. Each participant viewed the contrast stimuli monocularly and binocularly under photopic and mesopic conditions, with and without glare. Photophobia was defined as a reduction of contrast sensitivity caused by glare light. We compared the photophobia of children with X(T) to that of normal controls, and to the photophobia measured 3 months after strabismus surgery. RESULTS: With glare stimuli, the contrast sensitivity of children with X(T) was suppressed at intermediate spatial frequencies under mesopic conditions (p = 0.006 at 6 cycles per degree [cpd]; p = 0.027 at 12 cpd), whereas that of normal controls showed no difference. This suppression occurred when X(T) patients viewed targets binocularly, and it improved significantly after strabismus surgery (p = 0.003 at 6 cpd). The measured photophobia in X(T) was strongly correlated with the photophobia symptoms reported by parents (p = 0.002). CONCLUSIONS: Mesopic contrast sensitivity with glare can represent the photophobia of children with X(T). Contrast sensitivity may be a useful measure for monitoring symptoms related to X(T).
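Photophobia here is operationalized as the glare-induced drop in contrast sensitivity. A minimal sketch of that definition follows; the abstract does not state whether an absolute or relative drop was used, so the relative form below is an assumption for illustration:

```python
def photophobia_index(cs_without_glare, cs_with_glare):
    """Relative loss of contrast sensitivity attributable to glare
    (illustrative operationalization; names are hypothetical)."""
    return (cs_without_glare - cs_with_glare) / cs_without_glare

# e.g. contrast sensitivity at 6 cpd falling from 80 to 60 under glare:
loss = photophobia_index(80.0, 60.0)  # 0.25, i.e. a 25% reduction
```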
- Alzheimer's & Dementia: The Journal of the Alzheimer's Association
BACKGROUND: The Alzheimer's Disease Assessment Scale-Cognitive Behavior section (ADAS-Cog) is the most widely used measure of cognitive performance in AD clinical trials. This key role has rightly brought its performance under increased scrutiny, with recent research using traditional psychometric methods to question the ADAS-Cog's ability to adequately measure early-stage disease. However, given the limitations of traditional psychometric approaches, herein we use the more sophisticated Rasch Measurement Theory (RMT) methods to fully examine the strengths and weaknesses of the ADAS-Cog and to identify potential paths toward its improvement. METHODS: We analyzed Alzheimer's Disease Neuroimaging Initiative (ADNI) ADAS-Cog data (675 measurements across four time-points over 2 years) from AD participants. RMT analysis was undertaken to examine three broad areas: the adequacy of scale-to-sample targeting; the degree to which, taken together, the ADAS-Cog items adequately perform as a measuring instrument; and how well the scale measured the subjects in the current sample. RESULTS: The 11 ADAS-Cog components mapped out a measurement continuum, worked together adequately, and were stable across different time-points and samples. However, the scale did not prove a good match to the patient sample, supporting previous research. RMT analysis also identified problematic "gaps" and "bunching" of the components across the continuum. CONCLUSION: Although the ADAS-Cog has the building blocks of a good measurement instrument, this analysis confirms limitations with potentially serious implications for clinical trials. Importantly, and unlike traditional psychometric methods, our RMT analysis provides important clues toward solving the measurement problems of the ADAS-Cog.
- Reports on Progress in Physics
The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10^-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant explaining various effects in quantum theory, h has become especially important in defining standards for electrical measurements and, soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that combined h with other physical constants: the elementary charge, e, and the Avogadro constant, N_A. As experimental techniques improved, so did the precision of the value of h. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was transformed into a watt balance, h not only became vital in the definitions of the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties from the watt balance experiments and Avogadro determinations now approach a few parts in 10^8, the importance of h has been linked to a proposed redefinition of the kilogram unit of mass.

The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined the effort. Recalling these bumps will hopefully help avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant.
The historical path continues in the section describing how improved techniques and discoveries in quantum mechanics steadily reduced the uncertainty of h. The central part of this review describes the technical details of the watt balance technique, a combination of mechanical and electronic measurements that now determines h directly, i.e., without requiring measured values of additional fundamental constants. The first technical section describes the basics and some of the common details of many watt balance designs. Next is a review of the ongoing advances at the (currently) seven national metrology institutes where these experiments are pursued. A final summary of the h determinations of the last two decades shows how history keeps repeating itself: there is again a question of whether there is a shift in the newest results, albeit at uncertainties many orders of magnitude smaller than in the original experiments. The conclusion is that there is room for further development to resolve these differences and to find new ideas for a watt balance system with more universal application. Since the next generation of watt balance experiments is expected to become kilogram realization standards, the historical record suggests a continuing need for proof that Planck constant results are finally reproducible at an acceptable uncertainty.
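The watt-balance principle referred to above can be summarized in its standard textbook form (a sketch of the well-known relations, not equations taken from this review):

```latex
\begin{align*}
\text{weighing mode:} \quad m g &= B l \, I \\
\text{moving mode:}   \quad U   &= B l \, v \\
\text{eliminating } B l: \quad m g v &= U I
\end{align*}
% Realizing U via the Josephson effect ($K_J = 2e/h$) and I via a resistor
% calibrated against the quantum Hall effect ($R_K = h/e^2$) makes the
% electrical power $UI$ proportional to $h$, so the mechanical quantities
% $m$, $g$, and $v$ determine $h$ directly, with no other measured
% fundamental constant required.
```

Eliminating the hard-to-characterize geometric factor Bl between the two modes is what makes the virtual comparison of mechanical and electrical power exact.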