BACKGROUND: The second-order, infinite impulse response (IIR) notch filter is widely used to remove electrical power-line noise from electrocardiograms (ECGs). However, this filtering process often introduces spurious ringing artifacts in the vicinity of sharp transitions in the raw signal. It is challenging to remove both types of noise simultaneously without losing vital information about cardiac activity. OBJECTIVE: Our objective is to devise a method that removes the power-line interference without introducing artifacts or losing vital information. To this end we have developed the “hybrid approach”, which involves two-sided filtration and multi-iterative approximation techniques. The two-sided filtration technique suppresses the interference, but some cardiac components are lost; the lost information can then be restored using the multi-iterative approximation technique. RESULTS: For evaluation, four artificial data sets, each including 91 ECGs of different heart rates, were generated by a dynamical model. Four publicly accessible sets of clinical data (the MIT-BIH Arrhythmia, QT, PTB Diagnostic ECG, and T-Wave Alternans Challenge Databases) were also selected. Our new hybrid approach and the existing method were tested on these two types of signal under various pre-determined conditions. Compared with the existing method, the hybrid approach provides more than 27.40 dB and 37.77 dB reduction in signal distortion for 95% and 60% of the artificial ECGs, respectively, and more than 11.78 dB and 17.48 dB reduction in distortion for 95% and 60% of the clinical records, respectively. CONCLUSIONS: Overall, a significant reduction in signal distortion is demonstrated. These results indicate that the newly proposed approach outperforms the traditional method on both artificial and clinical ECGs, and suggest that it could be of practical use to clinicians in the future.
Graphene and hexagonal boron nitride (h-BN) have similar crystal structures, with a lattice constant difference of only 2%. However, graphene is a zero-bandgap semiconductor with remarkably high carrier mobility at room temperature, whereas an atomically thin layer of h-BN is a dielectric with a wide bandgap of ∼5.9 eV. Accordingly, if precise two-dimensional domains of graphene and h-BN can be seamlessly stitched together, hybrid atomic layers enabling interesting electronic applications could be created. Here, we show that planar graphene/h-BN heterostructures can be formed by growing graphene in lithographically patterned h-BN atomic layers. Our approach can create periodic arrangements of domains with sizes ranging from tens of nanometres to millimetres. The resulting graphene/h-BN atomic layers can be peeled off the growth substrate and transferred to various platforms, including flexible substrates. We also show that the technique can be used to fabricate two-dimensional devices, such as a split closed-loop resonator that works as a bandpass filter.
Component coding is the method NeuroInterventionalists have used for the past 20 years to bill procedural care. The term refers to separate billing for each discrete aspect of a surgical or interventional procedure, and has typically allowed billing the procedural activity, such as catheterization of vessels, separately from the diagnostic evaluation of radiographic images. This work is captured by supervision and interpretation codes. The benefits of component coding will be reviewed in this article. The American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) has been filtering for codes that are frequently reported together. NeuroInterventional procedures are going to be caught in this filter, as our codes are often reported simultaneously, as routinely occurs, for example, when procedural codes are coupled to those for supervision and interpretation. Unfortunately, history has shown that when bundled codes have been reviewed at the RUC, there has been a trend toward a lower overall RVU value for the combined service compared with the sum of the values of the separate services.
Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal relative to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g. visual or tactile), ERP component (e.g. the P300 oddball response or the error potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wide range of ERP BCIs, we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which made detection performance independent of the choice of the spectral filter's low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a “best-practice” method for ERP detection problems.
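The three-stage pipeline can be sketched on synthetic data as follows; the low-pass cut-off, ridge penalty, and toy ERP template are illustrative assumptions, not values taken from the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)

# Toy data: trials x channels x samples, with a class-dependent deflection
n_trials, n_ch, n_samp, fs = 200, 8, 128, 128
y = rng.integers(0, 2, n_trials) * 2 - 1           # labels in {-1, +1}
X = rng.standard_normal((n_trials, n_ch, n_samp))
erp = np.outer(np.ones(n_ch), np.hanning(n_samp))  # toy "ERP" template
X += 0.5 * y[:, None, None] * erp

# 1) Spectral filtering: low-pass at 12 Hz (illustrative choice)
b, a = butter(4, 12 / (fs / 2), btype="low")
Xf = filtfilt(b, a, X, axis=-1)

# 2) Spatial whitening: decorrelate channels via the average covariance
C = np.mean([xi @ xi.T / n_samp for xi in Xf], axis=0)
d, V = np.linalg.eigh(C)
W = V @ np.diag(1.0 / np.sqrt(d)) @ V.T
Xw = np.einsum("ij,tjs->tis", W, Xf)

# 3) Regularised (ridge) linear classifier on flattened trials
F = Xw.reshape(n_trials, -1)
lam = 1e2                                          # ridge penalty (assumed)
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
acc = np.mean(np.sign(F @ w) == y)                 # training accuracy
```

The whitening step is what decouples performance from the spectral filter choice in the paper's analysis: once channels are decorrelated, the regularised classifier down-weights noise directions regardless of how much of the band the filter passed.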
Objective. Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as the common average reference (CAR), the Laplacian (LAP) filter or the common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of the EEG. Here, we test the hypothesis that a new filter design, called an ‘adaptive Laplacian (ALAP) filter’, can provide better performance for SMR-based BCIs. Approach. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights, and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Main results. Using a variety of kinds of BCI data from a total of 22 individuals, we compare the performance of the ALAP filter to that of the CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP, both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, and is equally matched with CSP. Significance. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.
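A minimal sketch of the Gaussian-kernel weighting idea behind the ALAP filter; the joint optimisation of kernel radius and ridge parameter by leave-one-out gradient descent is omitted, and the electrode layout and radius below are invented for illustration:

```python
import numpy as np

def alap_weights(positions, center, radius):
    """Gaussian-kernel Laplacian-style weights (illustrative sketch).

    Returns +1 for the centre channel and negative Gaussian-kernel
    weights, normalised to sum to -1, for the surrounding channels,
    so the full weight vector sums to zero."""
    d2 = np.sum((positions - positions[center]) ** 2, axis=1)
    g = np.exp(-d2 / (2 * radius ** 2))   # smooth spatial gradient of weights
    g[center] = 0.0
    w = -g / g.sum()
    w[center] = 1.0
    return w

# Toy 1-D electrode row (unit spacing); channel 2 is the centre
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
w = alap_weights(pos, center=2, radius=1.0)
```

Varying `radius` interpolates between the fixed designs the paper compares against: a small radius approaches a small Laplacian (only nearest neighbours matter), while a very large radius approaches a CAR-like filter; the ALAP method tunes this radius from the data rather than fixing it.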
In this paper, we present a method for detecting the R-peak of an ECG signal by using a singular value decomposition (SVD) filter and a search back system. The ECG signal was processed in two phases: the pre-processing phase and the decision phase. The pre-processing phase consisted of the stages for the SVD filter, a Butterworth high-pass filter (HPF), a moving average (MA), and squaring, whereas the decision phase consisted of a single stage that detected the R-peak. In the pre-processing phase, the SVD filter removed noise while the Butterworth HPF eliminated baseline wander. The MA removed the noise remaining in the signal that had gone through the SVD filter to make the signal smooth, and squaring played a role in strengthening the signal. In the decision phase, a threshold was used to set the interval before detecting the R-peak. In the method suggested by Hamilton et al., a search back is triggered when the latest R-R interval (RRI) is greater than 150% of the previous RRI; we modified this criterion so that a search back is triggered when the interval is 150% or more of the smaller of the two most recent RRIs. When the modified search back system was used, the error rate of the peak detection decreased to 0.29%, compared to 1.34% when it was not used. Consequently, the sensitivity was 99.47%, the positive predictivity was 99.47%, and the detection error was 1.05%. Furthermore, the quality of the signal in data with a substantial amount of noise was improved, and thus the R-peak was detected effectively.
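A simplified sketch of the two phases on a toy signal; the SVD filter stage is omitted, and the sampling rate, cut-offs, window length, threshold and refractory period are all assumptions for illustration, not parameters from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Toy ECG: narrow Gaussian "R-peaks" every 0.8 s plus baseline wander
beat_times = np.arange(0.5, 10, 0.8)
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
ecg = ecg + 0.3 * np.sin(2 * np.pi * 0.2 * t)

# Pre-processing: high-pass for baseline wander, moving average, squaring
b, a = butter(2, 0.5 / (fs / 2), btype="high")
x = filtfilt(b, a, ecg)
x = np.convolve(x, np.ones(15) / 15, mode="same")   # moving average
x = x ** 2                                          # squaring

# Decision phase: threshold plus a refractory period
thr = 0.3 * x.max()
peaks, refractory = [], int(0.2 * fs)
for i in range(1, len(x) - 1):
    if x[i] > thr and x[i] >= x[i - 1] and x[i] > x[i + 1]:
        if not peaks or i - peaks[-1] > refractory:
            peaks.append(i)

rri = np.diff(peaks)
# Modified search-back criterion (sketch): a gap is suspicious when it
# exceeds 150% of the smaller of the two most recent RR intervals
missed = [k for k in range(2, len(rri))
          if rri[k] > 1.5 * min(rri[k - 1], rri[k - 2])]
```

On this regular toy rhythm every beat is found directly and `missed` stays empty; in real recordings, the flagged gaps would be re-scanned with a lowered threshold to recover beats the first pass dropped.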
Terrestrial laser scanning is of increasing importance for surveying and hazard assessments. Digital terrain models are generated from the resultant data to analyze surface processes. In order to determine the terrain surface as precisely as possible, it is often necessary to filter out points that do not represent the terrain surface, for example vegetation, vehicles, and animals. Filtering in mountainous terrain is more difficult than in other topography types. Here, existing automatic filtering solutions are not acceptable, because they are usually designed for airborne scan data. The present article describes a method specifically suitable for filtering terrestrial laser scanning data. This method is based on the direct line of sight between the scanner and the measured point and the assumption that no other surface point can be located in the area above this connection line. This assumption holds only for terrestrial laser data, not for airborne data. We present a comparison of wedge filtering with a modified inverse distance filtering method (IDWMO) applied to the same point cloud data, using manually filtered surfaces as reference for both methods. The comparison shows that the mean error and root-mean-square error (RMSE) between the results of the two methods and the manually filtered surface are similar. However, a significantly higher number of points on the terrain surface is preserved by the wedge-filtering approach. We therefore suggest that wedge filtering be integrated as a further parameter into existing filtering processes, although it is not yet suited as a standalone solution.
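The line-of-sight assumption can be sketched in a 2-D profile: a point is rejected as off-terrain if it lies above the sight line from the scanner to any farther point, since the area above each such line must be free of terrain. The scanner position and toy points below are assumptions for illustration, and the real method works on 3-D scans:

```python
import numpy as np

def wedge_filter(points, scanner=(0.0, 0.0)):
    """Classify 2-D profile points: keep a point only if no sight line
    from the scanner to a farther point passes below it (i.e. the area
    above each scanner-to-point line must contain no terrain)."""
    sx, sy = scanner
    pts = np.asarray(points, dtype=float)
    r = np.hypot(pts[:, 0] - sx, pts[:, 1] - sy)      # range from scanner
    slope = (pts[:, 1] - sy) / (pts[:, 0] - sx)       # assumes x > sx
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        farther = r > r[i]
        line_y = slope[farther] * (pts[i, 0] - sx) + sy
        if np.any(line_y < pts[i, 1] - 1e-9):         # point sits above a sight line
            keep[i] = False
    return keep

terrain = [(1.0, -0.5), (2.0, -0.4), (3.0, -0.3)]     # ground profile
shrub = [(2.0, 0.5)]                                  # vegetation above it
keep = wedge_filter(terrain + shrub)
```

The geometry explains why the approach cannot transfer to airborne data: from above, terrain routinely sits higher than the sight line to farther ground points, so the emptiness assumption fails.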
One of the main reasons why nonlinear-optical signal processing (regeneration, logic, etc.) has not yet become a practical alternative to electronic processing is that the all-optical elements with nonlinear input-output relationship have remained inherently single-channel devices (just like their electronic counterparts) and, hence, cannot fully utilise the parallel processing potential of optical fibres and amplifiers. The nonlinear input-output transfer function requires strong optical nonlinearity, e.g. self-phase modulation, which, for fundamental reasons, is always accompanied by cross-phase modulation and four-wave mixing. In processing multiple wavelength-division-multiplexing channels, large cross-phase modulation and four-wave mixing crosstalks among the channels destroy signal quality. Here we describe a solution to this problem: an optical signal processor employing a group-delay-managed nonlinear medium where strong self-phase modulation is achieved without such nonlinear crosstalk. We demonstrate, for the first time to our knowledge, simultaneous all-optical regeneration of up to 16 wavelength-division-multiplexing channels by one device. This multi-channel concept can be extended to other nonlinear-optical processing schemes.
Because of the rapidly increasing use of digital composite images, recent studies have sought to identify digital forgery and filtered regions. This research has shown that interpolation, which is used to edit digital images, is an effective signature for analyzing digital images for composite regions. Interpolation is widely used to adjust the size of the image of a composite target, making the composite image seem natural when it is rotated or deformed. As a result, many algorithms have been developed to identify composite regions by detecting traces of interpolation. However, many limitations have been found in the detection maps developed to identify composite regions. In this study, we analyze the pixel patterns of noninterpolated and interpolated regions, and we propose a detection map algorithm to separate the two regions. To identify composite regions, we have developed an improved algorithm using a minimum filter, a Laplacian operation, and a maximum filter. Finally, filtered regions produced by the interpolation operation are analyzed using the proposed algorithm.
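A sketch of such a minimum filter / Laplacian / maximum filter detection map using SciPy's `ndimage` module; the filter sizes and the synthetic composite (a bilinearly upscaled patch pasted next to camera-native noise) are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.ndimage import laplace, maximum_filter, minimum_filter, zoom

rng = np.random.default_rng(1)
left = rng.integers(0, 256, (64, 32)).astype(float)   # camera-native pixels
small = rng.integers(0, 256, (32, 16)).astype(float)
right = zoom(small, 2, order=1)                       # bilinearly upscaled patch

img = np.hstack([left, right])                        # toy composite image

# Detection map: minimum filter -> Laplacian magnitude -> maximum filter
dmap = maximum_filter(np.abs(laplace(minimum_filter(img, size=3))), size=3)

# Interpolated pixels are locally smoother, so their map response is weaker
score_native = dmap[:, :32].mean()
score_interp = dmap[:, 32:].mean()
```

The Laplacian responds to high-frequency texture that interpolation suppresses, while the surrounding minimum and maximum filters consolidate the per-pixel response into contiguous regions that can then be thresholded into a composite-region mask.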
We propose a heterogeneous trench-assisted graded-index few-mode multi-core fiber with a square-lattice layout. For each core in the fiber, the effective areas (Aeff) of the LP01 and LP11 modes reach about 110 μm² and 220 μm², respectively. The absolute value of the differential mode delay (|DMD|) is smaller than 100 ps/km over the C + L bands, which can decrease the complexity of digital signal processing at the receiver end. Considering the upper limit of the cladding diameter (Dcl) and the cable cutoff wavelength of the LP21 mode in the cores located in the inner layer, we set the core pitch (Λ) to 43 μm. In this case, Dcl is about 220.4 μm, the inter-core crosstalk (XT) is lower than −40 dB/500 km, and the relative core multiplicity factor (RCMF) reaches 15.93.