Discover the most talked about and latest scientific content & concepts.

Concept: Image compression


The exponential growth of high-throughput DNA sequence data has posed great challenges to genomic data storage, retrieval and transmission. Compression is a critical tool for addressing these challenges, and many methods have been developed to reduce the storage size of genomes and sequencing data (reads, quality scores and metadata). However, genomic data are being generated faster than they can be meaningfully analyzed, leaving a large scope for novel compression algorithms that directly facilitate data analysis beyond transfer and storage. In this article, we categorize and comprehensively review the existing compression methods specialized for genomic data, and present experimental results on compression ratio, memory usage, and compression and decompression time. We further present the remaining challenges and potential directions for future research.
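The 2-bit-per-base packing below is a minimal sketch of the simplest kind of specialized genome encoding such reviews cover. It assumes a clean A/C/G/T alphabet (real tools must also handle N bases, quality scores and metadata), and the `pack`/`unpack` helpers are illustrative names, not taken from any surveyed tool.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte, 2 bits each (big-endian within a byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for b in chunk:
            byte = (byte << 2) | CODE[b]
        byte <<= 2 * (4 - len(chunk))   # left-align a final partial chunk
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed data."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASES[(byte >> shift) & 3])
    return "".join(bases[:n])

seq = "ACGTACGTTG"
packed = pack(seq)
print(len(seq), "bases ->", len(packed), "bytes")  # 4x reduction vs 1 byte/base
```

Dedicated genomic compressors go far beyond this (context modeling, reference-based coding), but the 4x reduction from bit-packing alone shows why generic text compressors are a weak baseline for DNA.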

Concepts: DNA, Gene, Genome, Computer storage, Data compression, Media technology, Computer data storage, Image compression


We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, decoding here can be performed without phase recovery. We present a rate-distortion analysis and show improved PSNR compared to compression via uniform downsampling.
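The dispersion physics cannot be reproduced in a few lines, but the coding idea, dilating information-rich regions so a fixed sample budget concentrates where the signal varies fastest, can be imitated numerically. The sketch below is a loose analogue and not the authors' method: it uses gradient magnitude as a stand-in for information content and compares the PSNR of warped versus uniform downsampling at the same sample budget.

```python
import numpy as np

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB (peak taken from the reference)."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(np.max(np.abs(ref)) ** 2 / mse)

t = np.linspace(0.0, 1.0, 2000)
# Slow background plus one sharp, information-rich feature.
x = np.exp(-((t - 0.5) / 0.01) ** 2) + 0.1 * np.sin(2 * np.pi * t)

budget = 40   # retained samples for either scheme

# Baseline: uniform downsampling with linear-interpolation recovery.
tu = np.linspace(0.0, 1.0, budget)
x_uni = np.interp(t, tu, np.interp(tu, t, x))

# "Warped" sampling: density proportional to local activity, so the sharp
# feature is dilated in sample space before downsampling.
g = np.abs(np.gradient(x, t))
density = g + 0.05 * g.max()               # floor keeps flat regions sampled
cdf = np.cumsum(density)
cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalized warp map
tw = np.interp(np.linspace(0.0, 1.0, budget), cdf, t)  # invert the warp
x_warp = np.interp(t, tw, np.interp(tw, t, x))

print(psnr(x, x_uni), psnr(x, x_warp))
```

With the same 40 samples, the warped scheme spends most of them on the narrow pulse and recovers it far more accurately, which is the rate-distortion advantage the abstract describes.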

Concepts: Time, Optics, Dispersion, Data compression, Group velocity, Photonics, Phase velocity, Image compression


Post-Sanger sequencing methods produce tons of data, and there is general agreement that the challenge of storing and processing them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we answer the questions “what” and “how” by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we return to the question “why compression” and give other, perhaps surprising, answers demonstrating the pervasiveness of data compression techniques in computational biology.

Concepts: Question, Computer program, Answer, Computational biology, Computational genomics, Information theory, Data compression, Image compression


This paper reviews recent work in the field of hyperspectral (HS) image compression. Image compression techniques have recently achieved significant advances across diverse coding standards and approaches, but HS image compression requires unconventional coding techniques because of the data's unique, multiple-dimensional structure. Redundancy exists both within individual bands (intra-band) and across bands (inter-band). The survey summarizes the current literature on inter- and intra-band compression methods, and further discusses the challenges, opportunities, and future research possibilities for HS image compression. Experimental results are also provided to assess the validity and applicability of existing HS image compression techniques.
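As a toy illustration of inter-band redundancy, the sketch below (not from the surveyed literature; the synthetic cube and the use of zlib as a generic back-end coder are assumptions) compresses an 8-band cube before and after simple band-to-band differencing:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 8-band cube: each band is the same scene plus small sensor
# noise, mimicking the strong inter-band correlation of HS imagery.
scene = rng.integers(0, 200, size=(64, 64))
cube = np.stack([scene + rng.integers(-1, 2, size=scene.shape)
                 for _ in range(8)]).astype(np.int16)

raw = zlib.compress(cube.tobytes(), 9)

# Inter-band decorrelation: keep band 0 as-is, then store only
# band-to-band differences, whose values cluster near zero.
diff = cube.copy()
diff[1:] -= cube[:-1]
dec = zlib.compress(diff.tobytes(), 9)

print(len(raw), "bytes raw vs", len(dec), "bytes after decorrelation")
```

The differenced cube compresses markedly better because the generic coder never sees the cross-band correlation in the raw layout; dedicated HS codecs exploit the same redundancy with far more sophisticated spectral transforms and predictors.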

Concepts: Research, Creativity techniques, Existence, Information theory, Data compression, Redundancy, Image compression, Data redundancy


Recent technological advances in capsule endoscopy systems have revolutionized healthcare by introducing new techniques and functionalities for diagnosing the entire gastrointestinal tract, resulting in better diagnostic accuracy, reduced hospitalization, and improved clinical outcomes. Although the benefits of present capsule endoscopy are well known, significant drawbacks remain with respect to conventional endoscope systems, including size, battery life, bandwidth, image quality and frame rate, which have restricted its wide use. To solve these problems, a low-cost, low-power compression algorithm that delivers a higher frame rate and better image quality at lower bandwidth and transmission power is paramount. While several review papers have described the capabilities of capsule endoscopes in terms of functionality and emerging features, an extensive review of compression algorithms, past and future, is still required. Hence, in this review we address this gap by exploring the specific characteristics of endoscopic images, summarizing useful compression techniques with in-depth analysis, and making suggestions for possible future adaptation.

Concepts: Time, Future, Endoscopic retrograde cholangiopancreatography, Endoscopy, Data compression, Image compression


There is an immediate need to validate the authenticity of digital images because powerful image processing tools can easily manipulate digital image content without leaving any traces. Digital image forensics most often employs tampering detectors based on JPEG compression; therefore, to evaluate the competency of JPEG forensic detectors, an anti-forensic technique is required. In this paper, two improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains. In the proposed framework, the grainy noise left by perceptual histogram smoothing in the DCT domain is significantly reduced by the proposed de-noising operation. Two denoising algorithms are proposed: one based on a constrained minimization of the total-variation energy, the other on a normalized weighting function. Subsequently, an improved TV-based deblocking operation is proposed to eliminate the blocking artifacts in the spatial domain. A decalibration operation is then applied to bring the processed image's statistics back to their expected values. Experimental results show that the proposed anti-forensic approaches outperform existing state-of-the-art techniques, achieving a better tradeoff between visual quality and forensic undetectability, although at a higher computational cost.
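The paper's deblocking relies on total-variation minimization; the sketch below shows only the generic TV-denoising building block via gradient descent on a smoothed TV energy, not the authors' constrained formulation, weighting function, or decalibration step. The parameters (`lam`, `step`, the smoothing `eps`) are illustrative choices.

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.2, iters=200, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam * TV(u) (smoothed TV)."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # Divergence of the normalized gradient field (periodic wrap at
        # the border for brevity).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                       # a sharp vertical edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tv_denoise(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

TV regularization suppresses small oscillations (noise, grain, blocking steps) while preserving strong edges, which is why it is a natural tool both for deblocking and for hiding JPEG traces.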

Concepts: Function, Signal processing, Computer graphics, Digital photography, JPEG, Image processing, Digital image processing, Image compression


Compression algorithms are an essential part of telemedicine systems for storing and transmitting large amounts of medical signal data. Most existing compression methods use fixed transforms such as the discrete cosine transform (DCT) and wavelets, and usually cannot efficiently extract signal redundancy, especially from non-stationary signals such as the electroencephalogram (EEG). In this paper, we first propose a learning-based adaptive transform that combines the DCT with an artificial neural network (ANN) reconstruction technique. This adaptive ANN-based transform is applied to the DCT coefficients of EEG data to reduce their dimensionality and to estimate the original DCT coefficients in the reconstruction phase. To obtain a near-lossless compression method, the difference between the original and estimated DCT coefficients is also quantized. The quantized error is coded with arithmetic coding and sent along with the estimated DCT coefficients as the compressed data. The proposed method was applied to various datasets, and the results show a higher compression rate than state-of-the-art methods.
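A rough sketch of the near-lossless pipeline, with the ANN estimator replaced by simple keep-the-largest-coefficients truncation (an assumption for brevity; the paper's transform is learned) and the arithmetic coder omitted. The key idea survives the simplification: quantizing the residual between the original and estimated DCT coefficients with step q bounds the reconstruction error.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix (rows are the basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

n = 256
rng = np.random.default_rng(0)
t = np.arange(n)
# Synthetic "EEG-like" test signal: two tones plus noise.
eeg = (np.sin(2 * np.pi * 10 * t / n) + 0.5 * np.sin(2 * np.pi * 27 * t / n)
       + 0.05 * rng.standard_normal(n))

D = dct_matrix(n)
coeff = D @ eeg

# Stand-in for the ANN estimate: keep the K largest-magnitude coefficients.
K = 32
approx = np.zeros(n)
keep = np.argsort(np.abs(coeff))[-K:]
approx[keep] = coeff[keep]

# Near-lossless stage: quantize the coefficient residual with step q, so
# each coefficient error is at most q/2 after dequantization.
q = 0.01
resid_q = np.round((coeff - approx) / q)

recon = D.T @ (approx + resid_q * q)   # D is orthonormal: D.T inverts it
print(np.max(np.abs(recon - eeg)))     # small, bounded reconstruction error
```

In the real method the integer residuals `resid_q` are entropy-coded with arithmetic coding; here they are left as-is since the point is the bounded-error reconstruction.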

Concepts: Discrete cosine transform, Information theory, JPEG, Data compression, Huffman coding, Arithmetic coding, Lossless data compression, Image compression


Color image compression represents image data with as few bits as possible, removing redundancy while maintaining an appropriate quality level for the user. Quaternion-based color image compression algorithms have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a real rectangular matrix C from the red, green and blue components of the original color image and compute the real SVD of C. We then select the several largest singular values and the corresponding vectors of the left and right unitary matrices to compress the color image. We compare the real compression scheme with a quaternion compression scheme that performs the quaternion SVD using a real structure-preserving algorithm, evaluating both in terms of operation count, assignment count, speed, PSNR and compression ratio (CR). The experimental results show that, for the same number of selected singular values, the real compression scheme offers a higher CR and much lower operation time, at a slightly lower PSNR, than the quaternion scheme. When the two schemes have the same CR, the real compression scheme has clear advantages in both operation time and PSNR.
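The real-SVD scheme is concrete enough to sketch directly; the synthetic image, the rank choice k, the vertical stacking of channels, and the CR accounting (storing U[:, :k], the k singular values, and Vt[:k]) are assumptions of this illustration rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, k = 64, 64, 10

# Synthetic smooth color image standing in for a real photo.
y, x = np.mgrid[0:h, 0:w].astype(float)
img = np.stack([np.sin(x / 10.0), np.cos(y / 12.0), (x + y) / (h + w)], axis=-1)
img += 0.001 * rng.standard_normal(img.shape)
img = (img - img.min()) / (img.max() - img.min())   # normalize to [0, 1]

# Stack R, G, B vertically into one real matrix C and take its SVD.
C = np.vstack([img[..., c] for c in range(3)])      # shape (3h, w)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Keep only the k largest singular values and their vectors.
rec = (U[:, :k] * s[:k]) @ Vt[:k]
img_rec = np.stack([rec[c * h:(c + 1) * h] for c in range(3)], axis=-1)

mse = np.mean((img - img_rec) ** 2)
psnr = 10 * np.log10(1.0 / mse)                     # peak value is 1.0
cr = (3 * h * w) / (k * (3 * h + w + 1))            # stored-number ratio
print(f"PSNR {psnr:.1f} dB, CR {cr:.2f}")
```

Because C is an ordinary real matrix, one SVD handles all three channels at once; the quaternion alternative treats each pixel as a single quaternion and needs the structure-preserving machinery the abstract mentions, at a higher operation count.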

Concepts: Information theory, Data compression, Matrix, Image processing, Unitary matrix, Orthogonal matrix, Spectral theorem, Image compression


Digital images have been extensively used in education, research, and entertainment. Many of these images, taken by consumer cameras, are compressed with the JPEG algorithm for efficient storage and transmission. Blocking artifacts are a well-known problem caused by this algorithm, and their effective measurement plays an important role in the design, optimization, and evaluation of image compression algorithms. In this paper, we propose a no-reference objective blockiness measure that adapts to the high-frequency content of an image. Differences of entropies across blocks and the variation of block-boundary pixel values in edge images are used to calculate the blockiness level in areas of low- and high-frequency content, respectively. Extensive experiments show that the proposed measure is effective and stable across a wide variety of images; it is robust to image noise and can be used for real-world image quality monitoring and control.
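The paper's measure combines block-wise entropy differences with boundary-pixel variation in edge images; the sketch below implements only a much simpler boundary-jump ratio, to convey what a no-reference blockiness score looks like. The function name and parameters are illustrative.

```python
import numpy as np

def blockiness(img, B=8):
    """Mean absolute jump at block boundaries over mean jump elsewhere."""
    d = np.abs(np.diff(img, axis=1))        # horizontal pixel differences
    at_boundary = d[:, B - 1::B].mean()     # columns crossing an 8x8 edge
    mask = np.ones(d.shape[1], dtype=bool)
    mask[B - 1::B] = False
    inside = d[:, mask].mean()
    return at_boundary / (inside + 1e-12)

rng = np.random.default_rng(0)
smooth = (np.tile(np.linspace(0, 1, 64), (64, 1))
          + 0.01 * rng.standard_normal((64, 64)))

# Simulate heavy JPEG-style blocking: replace each 8x8 block by its mean.
blocky = smooth.copy()
for by in range(0, 64, 8):
    for bx in range(0, 64, 8):
        blocky[by:by + 8, bx:bx + 8] = smooth[by:by + 8, bx:bx + 8].mean()

print(blockiness(smooth), blockiness(blocky))
```

A score near 1 means boundary columns behave like any other column (no visible blocking); heavily blocked images score far higher because within-block variation collapses while 8-pixel-period jumps remain. No reference image is needed, which is the defining property of a no-reference measure.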

Concepts: Algorithm, Computer graphics, Digital photography, Data compression, Photography, Block, Blocking, Image compression


Next-generation sequencing (NGS) techniques produce millions to billions of short reads. The procedure is not only very cost-effective but can also be done in a laboratory environment. State-of-the-art sequence assemblers then construct the whole genomic sequence from these reads, and current cutting-edge computing technology makes it possible to build genomic sequences from billions of reads at minimal cost and time. As a consequence, we have seen an explosion of biological sequences in recent times. In turn, the cost of storing these sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context: we need compression algorithms that can exploit the inherent structure of biological sequences, since standard data compression algorithms, although prevalent, do not compress biological sequencing data effectively. In this article we propose a novel referential genome compression algorithm (NRGC) to compress genomic sequences effectively and efficiently.
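A minimal sketch of the referential idea behind algorithms like NRGC, under the simplifying assumption of substitutions only (no indels or rearrangements, which a real referential compressor must handle): the target genome is encoded as its differences from a shared reference, so a nearly identical genome costs almost nothing to store. The helper names are illustrative.

```python
def ref_encode(reference: str, target: str):
    """Store only the positions/bases where target differs from reference.
    Assumes equal lengths (substitutions only)."""
    subs = [(i, b) for i, (a, b) in enumerate(zip(reference, target)) if a != b]
    return len(target), subs

def ref_decode(reference: str, code):
    """Rebuild the target from the reference plus the stored differences."""
    n, subs = code
    out = list(reference[:n])
    for i, b in subs:
        out[i] = b
    return "".join(out)

ref = "ACGTACGTACGTACGT"
tgt = "ACGTACCTACGTACGA"   # differs from ref at two positions
code = ref_encode(ref, tgt)
print(len(code[1]), "substitutions stored instead of", len(tgt), "bases")
```

Since two human genomes differ in only a small fraction of positions, the difference list is tiny compared to the sequence itself, which is why referential schemes dominate non-referential ones when a good reference is available.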

Concepts: Gene, Genetics, Computer, Data compression, Gzip, Lossless data compression, Image compression, Context mixing