Source camera identification refers to the task of matching digital images with the cameras responsible for producing them. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that high identification accuracy can be achieved by exploiting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. For each image, we extract parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of source camera identification with five cameras. The results show that this is a viable approach with high accuracy. We also present results on how the error rates change with images captured at various optical zoom levels, as zooming is commonly available in digital cameras.
©2006 Optical Society of America
1. Introduction

The purpose of this research is to demonstrate that it is possible to exploit lens aberrations for source camera identification. It is generally accepted that the optics of most consumer-level cameras deviate from the ideal pinhole camera model [1, 2, 3]. Among the different kinds of aberrations, lens radial distortion is the most severe. This inherent radial distortion causes non-linear geometric distortion in the images. In this paper, we propose to estimate the lens radial distortion from an image and use it to identify the source camera of the image.
Source camera identification is useful in image forensics. With the availability of powerful software, digital images can be manipulated easily even by amateurs, and the alterations may leave no observable traces. This undermines the credibility of digital images presented as news items or as evidence in court cases. As a result, in image forensics, one would like to ascertain the authenticity of a digital image by identifying its source camera. In this paper, we focus on distinguishing between images captured by a limited number of camera models.
The problem of source camera identification can be approached from several directions. An obvious approach is to examine an image file’s header. For example, an Exchangeable Image File Format (EXIF) header is attached to JPEG images by most consumer-level cameras. Information such as the camera model, exposure, and the date and time of an image is included in the header. One can determine the source camera of an image from this information. However, this information is fragile: it may be maliciously altered or discarded after image editing.
Another approach is to make use of the differences in image processing methods among camera models. A number of steps are involved in image processing in digital cameras [4]. These include demosaicing, gamma correction, color processing, white balancing, compression, and storage. The algorithms and details of image processing may vary from one manufacturer to another. Therefore, the output image may exhibit certain traits and patterns regardless of the original image content. Kharrazi et al. [5] tried to use these traits to identify the source camera of an image. They proposed to obtain a vector of thirty-four features from pixel intensities to represent an image. The features include the average pixel value, RGB pair correlations, center of mass of the neighboring distribution, RGB pair energy ratios, wavelet domain statistics, and a set of Image Quality Metrics (IQM) [6]. A variation of this approach is to specifically make use of the differences in demosaicing methods among camera models. Most digital cameras employ a color filter array (CFA), in which each element can only detect a red, green, or blue intensity value. The missing color samples are obtained by interpolation of neighboring pixel values. Bayram et al. [7] and Long and Huang [8] proposed that the different interpolation methods used by camera models may introduce specific periodic correlations among pixel values. These periodic correlations can be used to identify the source camera of an image.
A third approach is to use the noise pattern in digital cameras. Due to unavoidable defects in the manufacturing process, pattern noise such as pixel non-uniformity, dust on the lens, and dark currents is introduced into the CCD sensor. Lukáš et al. [9] proposed that each CCD sensor has a unique pattern noise which can be used for source camera identification. A Gaussian denoising filter was used to extract the pattern noise from an image. The reference noise pattern of a camera is prepared by averaging the noise patterns of a number of its images. The source camera of an image is then determined by the correlation between the image noise pattern and the camera reference pattern.
In this paper, we propose an alternative approach to digital image classification that uses the lens radial distortion. As all lens elements inevitably produce some aberrations, they leave unique imprints on the images being captured. The degree of radial distortion in each image can be quantitatively measured as lens radial distortion parameters by the straight line camera calibration algorithm described in [2]. The radial distortion parameters, together with the features proposed by Kharrazi et al. [5], form a feature vector for the classification. A support vector machine (SVM) classifier [10] is trained and used to evaluate the success rate of the classification. In this paper, we show that lens radial distortion is an effective feature for improving the accuracy of source camera identification. Related work that uses radial distortion in source camera identification has been reported in [11, 12].
The rest of this paper is organized as follows. In Section 2, we explain the origin of lens radial distortion and describe the method used to measure radial distortion in our experiments. In Section 3, we propose an approach to incorporate our lens radial distortion measurements into Kharrazi’s feature-based method in order to increase the classification accuracy. The performance of the identification method is evaluated in Section 4, where the influence of the zoom lens settings on the error rates is also evaluated. Limitations and future work are discussed in Section 5. Finally, the paper is concluded in Section 6.
2. Lens radial distortion
2.1. The imaging system
In a digital camera, the light from the scene passes through the camera’s lens system, an anti-aliasing filter, and a color filter array, and finally reaches the camera’s sensor [4, 5]. Each light-sensing element of the sensor array integrates the incident light over the whole spectrum and produces an electric signal representation of the scenery. The electric signal is digitized by an analog-to-digital converter. The digital signal is then processed by color processing algorithms built into the camera chips. These algorithms include demosaicing, color correction, white balancing, and gamma correction [4]. Finally, the raw image is compressed and stored in the camera memory device. A block diagram of the image processing pipeline is shown in Fig. 1. The imperfect lens system in digital cameras distorts the light from the scenery. Due to limitations in computational complexity, this distortion is unlikely to be removed by the later color processing stages in consumer-level digital cameras. Therefore, we propose to use the lens radial distortion for the source camera identification problem. In the following subsections, we introduce the background and the mathematical model of lens radial distortion.
2.2. Background of lens radial distortion
Due to their design and manufacturing process, lenses produce aberrations in images. The six major types of aberrations are spherical aberration, coma, astigmatism, field curvature, lens radial distortion, and chromatic distortion. Among these aberrations, lens radial distortion is the most severe, especially in inexpensive wide-angle lenses [2]. In this paper, we focus on lens radial distortion.
Radial distortion causes straight lines in the object space to be rendered as curved lines on the film or camera sensor. It originates from the fact that the transverse magnification, MT, is a function of the off-axis image distance r, rather than the constant predicted by paraxial theory [14]. In other words, the lens has varying focal lengths and magnifications in different areas. Radial distortion deforms the whole image even though every point is in focus. The optical system suffers from pincushion distortion when MT increases with r. Similarly, barrel distortion occurs when MT decreases with r. Examples of barrel distortion and pincushion distortion are shown in Fig. 2 and Fig. 3.
To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having spherical surfaces [14]. These spherical lenses have inherent radial distortion, which must be corrected by manipulating the system variables (indices, shapes, spacings, stops, etc.). The degree and order of compensation vary from one manufacturer to another, or even between camera models from the same manufacturer. Therefore, lenses from different camera models may have different degrees of radial distortion. Apart from the lens design, the degree of radial distortion is related to the focal length [15]. Usually, lenses with a short focal length exhibit a larger degree of barrel distortion, whereas lenses with a long focal length suffer more from pincushion distortion. As a result, lenses from different cameras leave unique imprints on the pictures being captured.
2.3. Measuring lens radial distortion
2.3.1. Mathematical model
The lens distortion model can be written as an infinite series. Previous work has shown that the first-order radially symmetric distortion parameter, k1, achieves sufficient accuracy [2]. To achieve higher accuracy, we use both the first-order and second-order distortion parameters, k1 and k2, to measure the degree of distortion in an image. The lens radial distortion can be written as [2]:

ru = rd (1 + k1 rd^2 + k2 rd^4),    (1)

where ru and rd are the undistorted radius and the distorted radius respectively. The radius is the radial distance of a point (x, y) from the center of distortion. Since the image center is a good approximation of the center of distortion, we simply take the center of an image as the center of distortion.
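To make the model concrete, here is a minimal sketch in Python of the mapping from distorted to undistorted radius (the polynomial form ru = rd(1 + k1 rd^2 + k2 rd^4) is the standard two-parameter radial model; normalising radii by the image half-diagonal is our own illustrative convention):

```python
def undistorted_radius(rd, k1, k2):
    """Map a distorted radius rd to the undistorted radius ru using the
    two-parameter radial model ru = rd * (1 + k1*rd**2 + k2*rd**4).
    Radii are distances from the centre of distortion (taken here as the
    image centre), normalised so the image half-diagonal equals 1."""
    return rd * (1.0 + k1 * rd ** 2 + k2 * rd ** 4)
```

With k1 = k2 = 0 the mapping is the identity; nonzero parameters make ru deviate from rd by an amount that grows towards the image corners, which is why lines far from the centre carry the most information about the distortion.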
2.3.2. Overview of Devernay’s straight line method
To find the distortion parameters of a camera, we use Devernay’s straight line method [2]. The method uses the fundamental property that the projection of every straight line in space onto a pinhole camera is a straight line. As a result, if we can find a transformation of a radially distorted image such that every straight line in space appears as a straight line in the transformed image, we can estimate the distortion parameters of the image. Using this property, an iterative process is employed to estimate the distortion parameter k1. The process first performs subpixel edge detection on a distorted image, and then applies polygonal approximation to extract possible distorted line segments from the image. The distortion error is measured between the distorted line segments and their corresponding straight lines. The distortion parameter k1 is then optimized to minimize the distortion error. The optimization is repeated until the relative change of the distortion error is less than a threshold. A Matlab implementation of Devernay’s algorithm is publicly available [16]. We modified that program to estimate the radial distortion parameters k1 and k2 for every image under consideration. The details of each step are discussed in the following subsections.
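The estimation loop can be illustrated on synthetic data. The sketch below replaces Devernay's iterative optimisation with a plain grid search over k1 (one parameter only, coordinates already centred on the centre of distortion — both simplifications for illustration): bend a straight line with a known radial factor, then recover the compensating k1 that makes it straightest.

```python
import numpy as np

def straightness_error(pts):
    # Sum of squared perpendicular distances from the 2-D points to their
    # best-fitting line (squared smallest singular value of the centred set).
    c = pts - pts.mean(axis=0)
    return float(np.linalg.svd(c, full_matrices=False)[1][-1] ** 2)

def estimate_k1(distorted_pts, k1_grid):
    # Choose the k1 whose undistortion ru = rd*(1 + k1*rd**2) makes the
    # segment straightest -- a grid search standing in for Devernay's
    # iterative minimisation of the distortion error.
    best_k1, best_err = None, np.inf
    rd = np.hypot(distorted_pts[:, 0], distorted_pts[:, 1])
    for k1 in k1_grid:
        undistorted = distorted_pts * (1.0 + k1 * rd ** 2)[:, None]
        err = straightness_error(undistorted)
        if err < best_err:
            best_k1, best_err = k1, err
    return best_k1, best_err

# Synthetic check: radially bend a vertical line, then recover k1.
y = np.linspace(-0.8, 0.8, 33)
line = np.column_stack([np.full_like(y, 0.6), y])
r = np.hypot(line[:, 0], line[:, 1])
bent = line * (1.0 + 0.05 * r ** 2)[:, None]   # simulated distortion
k1_hat, err = estimate_k1(bent, np.linspace(-0.1, 0.1, 201))
```

The recovered k1_hat lands near -0.05 (the value compensating the simulated bend, up to higher-order terms), and the residual straightness error drops well below that of the bent segment.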
2.3.3. Edge detection method
The first step of Devernay’s method is to extract edges from an image. A Canny edge detector is used to obtain the edge magnitude and orientation. Then, non-maxima suppression and hysteresis thresholding are used for edge localization. Non-maxima suppression thins wide contours by selecting the maxima of the edge magnitude perpendicular to the edge orientation, and hysteresis thresholding removes noisy maxima while preserving the continuity of contours. Since the image distortion is sometimes less than a pixel, Devernay suggested extending the accuracy of the edge detection to the sub-pixel level.
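A common way to push an edge location below one pixel, given three gradient-magnitude samples across the edge, is parabolic interpolation. Devernay's own refinement scheme differs in detail; this is just the standard trick, shown as a sketch:

```python
def subpixel_peak(m_minus, m0, m_plus):
    """Offset (in pixels, within [-0.5, 0.5] for a true local maximum) of
    the vertex of the parabola through the gradient magnitudes sampled at
    -1, 0 and +1 pixels along the gradient direction, where m0 is the
    local maximum kept by non-maxima suppression."""
    denom = m_minus - 2.0 * m0 + m_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (m_minus - m_plus) / denom
```

Adding the returned offset along the gradient direction to the integer edge position gives the sub-pixel edge location.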
2.3.4. Extracting distorted line segments
After edge detection, we need to extract distorted line segments which are most probably straight lines in 3D space. Devernay proposed a number of thresholds to select useful distorted segments. Since some segments may be broken by the edge detector, he proposed joining broken segments together when the distance between their end points is less than a threshold T1. He also suggested imposing a threshold on the segment length, because short segments are usually noisy.
2.3.5. Measuring distortion error
To measure the curvature of a distorted line segment, Devernay fitted a straight line to the points on the segment by least-squares approximation. The distortion error is the sum of squared distances from the points to the straight line. As a result, the distortion error is zero when a distorted segment is a straight line, and larger when the segment has greater curvature.
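As a sketch, the distortion error of one segment might be computed as follows. We fit the line by total least squares (perpendicular distances), which matches the "distances from the points to the line" description above; whether Devernay used ordinary or total least squares is a detail we gloss over:

```python
import numpy as np

def distortion_error(points):
    """Devernay-style distortion error of one detected segment: fit a
    straight line through the points (the principal axis through the
    centroid) and return the sum of squared perpendicular distances from
    the points to that line. A straight segment gives zero; a more
    curved segment gives a larger error."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The smallest singular value, squared, equals the residual sum of
    # squared perpendicular distances to the best-fitting line.
    return float(np.linalg.svd(centred, full_matrices=False)[1][-1] ** 2)
```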
3. Using lens radial distortion as features
Kharrazi et al. [5] proposed a number of features to capture the photometric effects left behind by the color processing algorithms on the images. Our lens distortion parameters capture the geometric footprints left behind by the camera’s lens system, and can therefore serve as complementary features to those proposed by Kharrazi et al. For each image under consideration, a vector of thirty-six features is extracted: the thirty-four features proposed by Kharrazi et al. plus our lens radial distortion parameters, k1 and k2. Assuming that a collection of images is available for each possible camera, these images are then used to train a classifier for distinguishing between images originating from specific cameras.
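The pipeline shape is: 36-dimensional feature vector in, camera label out. The paper trains an SVM (LIBSVM); purely to keep the sketch below dependency-free, a nearest-centroid classifier on standardised features stands in for it — the feature assembly, not the particular classifier, is the point here.

```python
import numpy as np

def train(features, labels):
    # Standardise the feature vectors, then keep one centroid per camera.
    X = np.asarray(features, dtype=float)
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    Z = (X - mu) / sd
    cams = np.unique(labels)
    centroids = {c: Z[np.asarray(labels) == c].mean(axis=0) for c in cams}
    return mu, sd, centroids

def classify(feature, mu, sd, centroids):
    # Assign the camera whose centroid is nearest in standardised space.
    z = (np.asarray(feature, dtype=float) - mu) / sd
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))
```

An SVM would replace the centroid distance with a learned maximum-margin boundary, but the training/testing interface is the same.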
4. Experimental results
Four sets of experiments were performed. The first set is a feasibility test of using lens radial distortion in image classification. The second set shows that our approach achieves a statistically significant improvement in accuracy over procedures using only image intensities. The third set studies how the proposed features perform with more testing images and more cameras. The fourth set studies how the error rates are influenced by changing the focal length of zoom lenses.
4.1. Camera and test images
In our first and second set of experiments, three different cameras were used. They are recent models from three different manufacturers. The Canon PowerShot A80 and Casio EX-Z4 were used to produce 1600×1200 images, while Ricoh R2 was used to take 2560×1920 images. The images were taken with no flash, auto-focus, no manual zooming, best JPEG compression quality and other default settings. The configurations of the cameras are shown in Table 1. Each camera was used to take 100 images randomly around a university campus. Some of the samples from our image data set are shown in Fig. 4. After collecting the dataset, the proposed features were measured from each image.
4.2. Classification by lens radial distortion only
This experiment is a feasibility test of using radial distortion to classify images originating from three camera models. We obtained the lens distortion parameters, k1 and k2, by the method described in Section 2. In the experiments, we used the SVM classifier available in the LIBSVM package [17]. The steps for training and testing the SVM classifier are as follows:
1. 40 images from each camera were randomly selected to train a classifier.
2. The remaining images were used to test the classifier.
3. Steps 1 and 2 were repeated 50 times and the average classification accuracy was obtained.
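The protocol above (random per-camera split, repeated, accuracies averaged) can be sketched generically; `fit` and `predict` here are caller-supplied stand-ins for LIBSVM's training and prediction routines:

```python
import numpy as np

def repeated_split_accuracy(X, y, fit, predict, n_train=40, repeats=50, seed=0):
    """Average test accuracy over `repeats` random per-camera splits:
    n_train images per camera train the classifier, the rest test it.
    `fit(X_tr, y_tr) -> model` and `predict(model, X_te) -> labels`
    are supplied by the caller."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        tr, te = [], []
        for c in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == c))
            tr.extend(idx[:n_train])
            te.extend(idx[n_train:])
        model = fit(X[tr], y[tr])
        accs.append(float(np.mean(predict(model, X[te]) == y[te])))
    return float(np.mean(accs))
```

Splitting per camera keeps the training set balanced, so no camera dominates the classifier.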
The success rates of the classification for Canon (A80), Casio and Ricoh are 97.8%, 92% and 84.8% respectively. The average accuracy obtained is 91.5% and the corresponding confusion matrix is shown in Table 2.
We obtained the lens distortion parameters, k1 and k2, from our database and plotted them in Fig. 5. Canon (A80), Casio, and Ricoh are represented by ‘×’, ‘o’ and ‘+’ respectively. It can be seen that the lens radial distortion parameters are clearly separable into three groups. The outliers in the plot are images with very short straight lines or very few straight lines. The classification results in Table 2 and the scatter plot in Fig. 5 show that it is feasible to identify the source camera of a digital image by lens radial distortion.
4.3. Classification by lens radial distortion and Kharrazi’s proposed features
In this section, we investigate the improvement in accuracy obtained by adding radial distortion to Kharrazi’s proposed statistics. Based on Kharrazi’s features, we evaluated the classification accuracies with and without radial distortion. The procedures for training and testing the classifier are the same as in Section 4.2. The average accuracy of the system without and with the lens radial distortion parameters is 87.4% and 91.4% respectively, a 4% improvement in accuracy with our proposed lens radial distortion features. The corresponding confusion matrices of these two experiments are in Table 3 and Table 4 respectively.
The classification accuracy using only k1 and k2 and using all 36 features is 91.5% and 91.4% respectively. The 0.1% difference is not significant evidence that the classifier using only k1 and k2 is more successful than the classifier using 36 features; rather, we may conclude that both experiments have comparable accuracies. The results also indicate the possibility of overfitting: the high-dimensional feature set may suffer from the “curse of dimensionality”. To reduce possible overfitting of the model, a feature selection procedure is needed to reduce the dimension of the feature set. We have studied the feature selection problem in another publication; related work can be found in [12].
In the third set of experiments, we investigated the performance of our proposed features when considering more testing images. We obtained 140 images from each of our three cameras; 40 and 100 images were randomly chosen for training and testing the classifier respectively. The process was repeated 50 times. The average accuracies obtained were 91.6%, 87.1% and 91.4% for the experiments using lens radial distortion only, Kharrazi’s features only, and both combined, respectively. These accuracies are very close to those of the previous experiments, which used 60 testing images. The corresponding confusion matrices of these experiments are in Table 5, Table 6 and Table 7 respectively.
In addition, we conducted experiments with more than three cameras. We obtained 140 images from each of two additional cameras, a Canon IXUS 55 and an Olympus C-50Z. The images from the Canon (I55) are 1600 × 1200 and those from the Olympus are 2048 × 1536. The other settings were the same as for the three cameras in the previous studies. The average accuracies obtained were 89.1%, 82% and 89.2% for the experiments using lens radial distortion only, Kharrazi’s features only, and both combined, respectively. Compared with the three-camera case, the accuracies of the experiment using lens radial distortion and of the experiment using both lens distortion and Kharrazi’s features drop by about 2% in the five-camera case. However, the accuracies in the five-camera case are still reasonably high. The confusion matrices of these experiments are in Table 8, Table 9 and Table 10. The scatter plot of the lens distortion parameters from the 5 cameras is shown in Fig. 6. It can be seen that the lens radial distortion parameters are clearly separable into five groups.
4.4. Effect of optical zoom on classification accuracy
Most consumer digital cameras are equipped with an optical zoom lens, so the images in a dataset may be captured at various optical zoom levels. As discussed in Section 2, the lens radial distortion parameters change with focal length, making the classification of images by radial distortion more difficult.
Radial distortion is difficult to calibrate in zoom lenses, which usually go from barrel at the wide end to pincushion at the tele end. In this section, we investigate the impact of optical zoom on the reliability of camera identification.
One way of determining the relationship between focal length and radial distortion is to treat each configuration of lens settings as a fixed focal-length lens and to perform the calibration for each configuration [18, 19]. However, this is inefficient because a zoom lens may have many configurations. Therefore, we only calibrate the zoom lens at intervals of the focal length. It has been suggested that the radial distortion may not vary significantly between intervals [18].
A new image dataset was prepared for this experiment. We used the three cameras to take images around the university campus. The images were taken with no flash, auto-focus, the best JPEG compression quality, and other default settings. We divided each camera’s zoom range into five intervals and took 20 images for each interval; consequently, 100 images were taken by each camera. After collecting the images, the radial distortion parameters were measured by Devernay’s straight line method described in Section 2. The distortion parameters measured at the various zoom intervals are plotted in Fig. 7.
The scatter plot shows the distortion parameters from the Canon (A80), Casio, and Ricoh. The first zoom interval represents maximum zoom out (wide), whereas the fifth zoom interval represents maximum zoom in (tele). The magnitude of the radial distortion parameters decreases towards zero and then increases again as the zoom moves from the first interval to the fifth interval. This agrees with the earlier discussion that a zoom lens has barrel distortion at the wide end and pincushion distortion at the tele end. It is also noted that the distortion parameters within a zoom interval are not constant: since the focal length is affected by both the zoom and the focus of a camera, variation in focus changes the distortion parameters within a zoom interval. From the plot, we can see that the distortion parameters of one camera are clearly separable from those of another camera in some cases. However, the distortion parameters from the 5th zoom interval of the Canon (A80) overlap considerably with the parameters from the Casio.
An SVM classifier was used to evaluate the impact of optical zoom on camera identification accuracy. We trained the classifier by randomly selecting 8 images from each zoom interval of each camera, and then tested it on the rest of the images. In other words, 40 and 60 images from each camera were used to train and to test the classifier respectively. The training and testing process was repeated 10 times and the average accuracy was obtained. The corresponding confusion matrix is given in Table 11. The success rates of the classification for the Canon (A80), Casio, and Ricoh are 75.9%, 80%, and 86.6% respectively. The average accuracy obtained is 80.8%.
Compared with the classification results using only the lens distortion parameters in Section 4.2, the accuracy in this experiment has dropped by 10.7%. The relatively low accuracy is due to the considerable overlap between the distortion parameters from the 5th zoom interval of the Canon (A80) and the parameters from the Casio. In the overlapping region, the distortion parameters k1 and k2 are very close to zero, which means that the radial distortion in the images is very small. The precision of the radial distortion algorithm may not be sufficient to resolve the difference in distortion between the Canon (A80) and the Casio.
5. Discussion and future work
In Fig. 5 (Section 4.2), we showed a scatter plot of the radial distortion parameters k1 and k2. The outliers in the plot are images with very short lines and an inadequate number of long straight lines. Short straight lines usually contain more noise than useful information about the distortion. An example Casio image with short segments, together with its edge map, is shown in Fig. 8 (top); the estimated k1 and k2 are 0.049 and -0.014 respectively. The lack of long edges in this example leads to a wrong estimate of the lens distortion. Apart from the length of the straight lines, the position of a line also affects the lens distortion estimation. Since the radial distortion is a function of the radius from the center of distortion, the further a line is from the center, the more severe the distortion. If a line is close to the center, it will be less severely distorted and will provide less useful information for distortion estimation. An example Casio image of this kind is shown in Fig. 8 (bottom). Using only the straight line near the center for distortion estimation, the estimated k1 and k2 are 0.037 and 0.023 respectively. The k1 and k2 values from both examples are far from those of the majority of Casio images.
From the above discussion, we have two criteria for the straight lines. Firstly, a straight line should be long enough; in our experiments, we only used lines longer than half of the image height. Secondly, the position of the lines matters: the further a line is from the center, the better. Usually, more straight lines provide more useful information for distortion estimation. When there is at least one straight line that fulfills both criteria, we can use our proposed classifier. However, more research and analysis is needed to determine the influence of line length and position on the stability and accuracy of the classifier.
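The two criteria can be expressed as a simple filter over the detected segments. The length threshold (half the image height) is taken from the text; the minimum distance from the centre — a quarter of the half-diagonal below — is our own illustrative cutoff, since the text only says "the further, the better":

```python
import numpy as np

def usable_segments(segments, image_shape):
    # segments: list of point lists [(x, y), ...]; image_shape: (h, w).
    h, w = image_shape
    centre = np.array([w / 2.0, h / 2.0])
    half_diag = np.hypot(w, h) / 2.0
    keep = []
    for seg in segments:
        pts = np.asarray(seg, dtype=float)
        length = np.hypot(*(pts[-1] - pts[0]))      # end-to-end length
        mid_dist = np.linalg.norm(pts.mean(axis=0) - centre)
        # Criterion 1: longer than half the image height.
        # Criterion 2: not too close to the centre of distortion
        # (illustrative cutoff: a quarter of the half-diagonal).
        if length > h / 2.0 and mid_dist > 0.25 * half_diag:
            keep.append(seg)
    return keep
```

Segments failing either test contribute noise rather than information to the k1, k2 estimate, as the two Casio examples above illustrate.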
In Section 4.4, we proposed to calibrate the relationship between the focal length and the radial distortion parameters by sampling over the zoom intervals of a camera. However, it may not be possible to obtain samples at designated focal-length intervals, because the focal length is affected by the zoom and focus together. Moreover, in most consumer-level digital cameras the focus is controlled automatically, and we cannot retrieve the focus information; this may affect the classification accuracy. One solution is to estimate the effective focal length from the image and sample the distortion parameters over intervals of the focal length.
From our experimental results in Section 4.4, the images taken by the Canon (A80) in the 5th zoom interval cannot be clearly distinguished from the images taken by the Casio. The results could be improved by using a more sophisticated method to estimate the lens distortion from an image. Another solution is to integrate our approach with other methods, such as feature-based identification [5] and sensor pattern noise identification [9], which are less likely to be influenced by the focal length of the lens.
Since camera identification techniques may be used in court, it is interesting to consider malicious attacks intended to interfere with the identification algorithm, such as adding or removing radial distortion from an image. Using (1), it is possible to manipulate the distortion parameters of an image by substituting different k1 and k2. However, these alterations bend the image inward or outward, so the final distorted image will no longer be rectangular. Attackers would need to add or discard scenery information in the image if they wanted to keep the image size the same as the original. This increases the difficulty of such malicious attacks.
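Such an attack amounts to resampling the image under the radial model with new parameters. A rough nearest-neighbour sketch follows (grayscale only; nearest-neighbour sampling and the border clamping are simplifications — the clamping is precisely where the non-rectangular boundary problem shows up, since pixels near the edge would have to be invented or discarded):

```python
import numpy as np

def radial_warp(img, k1, k2):
    """Resample a grayscale image so that an output pixel at distorted
    radius rd is filled from the input position at radius
    ru = rd*(1 + k1*rd**2 + k2*rd**4), with radii normalised by the
    image half-diagonal."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    norm = np.hypot(cx, cy)
    y, x = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = (x - cx) / norm, (y - cy) / norm
    rd2 = dx ** 2 + dy ** 2
    scale = 1.0 + k1 * rd2 + k2 * rd2 ** 2
    # Nearest-neighbour lookup, clamped at the borders.
    sx = np.clip(np.rint(cx + dx * scale * norm), 0, w - 1).astype(int)
    sy = np.clip(np.rint(cy + dy * scale * norm), 0, h - 1).astype(int)
    return img[sy, sx]
```

With k1 = k2 = 0 the image is returned unchanged; nonzero parameters bend it, and the clamping at the borders is where an attacker would have to fabricate or crop scenery to keep a rectangular frame.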
6. Conclusion

In this paper, we examined the use of the lens footprints left on images to identify the source camera of a digital image. We proposed to use the lens radial distortion in the images for this problem. A classifier based on lens radial distortion was built and used to evaluate the effectiveness of this feature. We showed that it is feasible to use the lens radial distortion to classify images originating from five camera models. We also proposed to combine our lens radial distortion parameters with the statistics obtained from image intensities for image classification, and demonstrated that, compared with procedures using only statistics from image intensities, our approach shows a statistically significant improvement in accuracy. Since the lens distortion parameters vary with focal length, we also investigated the effectiveness of our lens distortion parameters, k1 and k2, on an image dataset with various optical zoom levels.
Acknowledgments

The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Number HKU 7143/05E).
References and links
1. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation 3(4), 323–344 (1987). [CrossRef]
2. F. Devernay and O. Faugeras, “Automatic calibration and removal of distortion from scenes of structured environments,” in Investigative and Trial Image Processing, vol. 2567 of Proc. SPIE, pp. 62–67 (1995). [CrossRef]
3. J. Perš and S. Kovačič, “Nonparametric, model-based radial lens distortion correction using tilted camera assumption,” in Proceedings of the Computer Vision Winter Workshop 2002, pp. 286–295 (2002).
4. J. Adams, K. Parulski, and K. Spaulding, “Color processing in digital cameras,” IEEE Micro 18(6), 20–30 (1998). [CrossRef]
5. M. Kharrazi, H. T. Sencar, and N. Memon, “Blind source camera identification,” in IEEE International Conference on Image Processing, pp. 709–712 (2004).
6. I. Avcibas, N. Memon, and B. Sankur, “Steganalysis using image quality metrics,” IEEE Transactions on Image Processing 12(2), 221–229 (2003). [CrossRef]
7. S. Bayram, H. T. Sencar, N. Memon, and I. Avcibas, “Source camera identification based on CFA interpolation,” in IEEE International Conference on Image Processing, vol. 3, pp. 69–72 (2005).
8. Y. Long and Y. Huang, “Image based source camera identification using demosaicking,” in IEEE International Workshop on Multimedia Signal Processing, vol. 3 (2006).
9. J. Lukáš, J. Fridrich, and M. Goljan, “Determining digital image origin using sensor imperfections,” in Image and Video Communications and Processing, vol. 5685 of Proc. SPIE, pp. 16–20 (2005).
10. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (Wiley, New York, 2001).
11. K. S. Choi, E. Y. Lam, and K. K. Y. Wong, “Source camera identification using footprints from lens aberration,” in Digital Photography II, vol. 6069 of Proc. SPIE, pp. 155–162 (2006).
12. K. S. Choi, E. Y. Lam, and K. K. Y. Wong, “Feature selection in source camera identification,” in IEEE International Conference on Systems, Man and Cybernetics, pp. 3176–3180 (2006).
13. E. Y. Lam, “Image restoration in digital photography,” IEEE Transactions on Consumer Electronics 49(2), 269–274 (2003). [CrossRef]
14. E. Hecht, Optics (Addison-Wesley, San Francisco, California, 2002).
15. B. Tordoff and D. W. Murray, “Violating rotating camera geometry: the effect of radial distortion on self-calibration,” in Proc. 15th International Conference on Pattern Recognition, vol. 1, pp. 423–427 (2000).
16. P. D. Kovesi, MATLAB and Octave functions for computer vision and image processing. Software available at http://www.csse.uwa.edu.au/pk/research/matlabfns/.
17. C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines (2001). Software available at http://www.csie.ntu.edu.tw/cjlin/libsvm.
18. M. Li and J.-M. Lavest, “Some aspects of zoom lens camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence 18(11), 1105–1110 (1996). [CrossRef]
19. Y.-S. Chen, S.-W. Shih, Y.-P. Hung, and C.-S. Fuh, “Camera calibration with a motorized zoom lens,” in Proc. 15th International Conference on Pattern Recognition, vol. 4, pp. 495–498 (2000).