CN111951254A - Source camera identification method and system based on edge-guided weighted average - Google Patents

Source camera identification method and system based on edge-guided weighted average

Info

Publication number
CN111951254A
Authority
CN
China
Prior art keywords
camera
image
edge
fingerprint
weighted
Prior art date
Legal status
Granted
Application number
CN202010832394.2A
Other languages
Chinese (zh)
Other versions
CN111951254B (en)
Inventor
刘云霞 (Liu Yunxia)
张文娜 (Zhang Wenna)
Current Assignee
University of Jinan
Original Assignee
University of Jinan
Priority date
Filing date
Publication date
Application filed by University of Jinan
Priority to CN202010832394.2A
Publication of CN111951254A
Application granted
Publication of CN111951254B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention provides a source camera identification method and system based on edge-guided weighted averaging, belonging to the technical field of source camera identification. The method comprises the following steps: acquiring image data shot by a camera; cutting the acquired image data into image blocks of a preset size; acquiring the residual image of each image block and constructing an edge-weighted weight map of the residual image; fusing the obtained residual images with their corresponding edge-weighted weight maps and estimating the camera fingerprint; and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value. By assigning different weights to edge regions and non-edge regions, the method effectively reduces the influence of image edge regions on the camera fingerprint; the residual images are further fused at the statistical level through maximum likelihood estimation, which greatly improves the source camera identification performance.

Description

Source camera identification method and system based on edge-guided weighted average
Technical Field
The disclosure relates to the technical field of source camera identification, and in particular to a source camera identification method and system based on edge-guided weighted averaging.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Digital images serve as a kind of information carrier and can be used as valid evidence in court. However, as digital images are maliciously tampered with, people's confidence in images decreases. Therefore, the problem of source camera identification (SCI) in digital image forensics has received wide attention. Sensor pattern noise (SPN) has proven an effective means of solving the SCI problem because it is a unique fingerprint that can identify a particular device even among cameras of the same brand and model. The current approach to acquiring the fingerprint is as follows: given a set of images from the same camera device, the residual of each image is obtained by subtracting its denoised version from the original image, and the residuals are then aggregated using different strategies to estimate the fingerprint of the camera device.
The inventors of the present disclosure have found that, because current image denoising algorithms are imperfect, a large number of structures related to the image content remain in the residual image. By comparing the original image with the residual image, it can be seen that the residual image is highly correlated with the edge/texture regions of the original image: smooth regions are favorable for camera fingerprint estimation, whereas texture/edge regions interfere with it, reducing the accuracy of source camera identification.
Disclosure of Invention
In order to solve the defects of the prior art, the present disclosure provides a method and system for source camera identification based on edge-guided weighted averaging. Different weights are assigned to edge regions and non-edge regions, which effectively reduces the influence of image edge regions on the camera fingerprint; the residuals are further fused at the statistical level through maximum likelihood estimation to obtain the camera fingerprint, greatly improving the source camera identification performance.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
The first aspect of the disclosure provides a source camera identification method based on edge-guided weighted averaging.
A source camera identification method based on edge-guided weighted averaging comprises the following steps:
acquiring image data shot by a camera;
cutting the acquired image data into image blocks with preset sizes;
acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
As some possible implementations, a Laplacian edge detection operator is used to detect the edge regions and non-edge regions of the residual image, and weights are assigned to the edge regions and the non-edge regions.
As some possible implementations, the method for acquiring the camera fingerprint includes:
cutting an original image of a database image into image blocks with preset sizes, and dividing the image blocks into a fingerprint set and a test set;
acquiring a group of residual images of a camera by using a fingerprint set, and constructing an edge weighted weight graph of each residual image;
and fusing the acquired residual image and the corresponding edge weighted weight graph by using a camera fingerprint fusion method, and estimating to obtain the camera fingerprint.
As a further limitation, the residual images are fused pixel by pixel by maximum likelihood estimation to obtain the final camera fingerprint.
By way of further limitation, the recognition accuracy of a camera is calculated as the ratio of the number of correctly classified test images in the camera's test set to the total number of test images in that test set.
As a further limitation, setting an experiment database in two ways, wherein one way is that for all camera models, one camera of each camera model is randomly selected to form a first experiment database; another is to select multiple cameras from the same camera model as the second experimental database.
As a further limitation, for the images of all cameras in the two experimental databases, the data sets are divided in two ways;
one is to randomly select a first number of images of all cameras as a fingerprint set and the remaining second number of images as a test set; the other is to randomly select a third number of images of all cameras as a fingerprint set and the remaining fourth number of images as a test set.
As some possible implementations, the camera with the largest weighted correlation value between the residual image and the camera fingerprint is the source camera corresponding to the image to be identified.
As some possible implementations, the original image to be identified is denoised to obtain a denoised version thereof, and the difference between the original image and the denoised version is used as a residual image.
As some possible implementations, the image to be recognized is cropped from the central area into 64 × 64 or 128 × 128 image blocks.
A second aspect of the present disclosure provides a source camera identification system based on edge-guided weighted averaging.
A source camera identification system based on edge-guided weighted averaging, comprising:
a data acquisition module configured to: acquiring image data shot by a camera;
an image cropping module configured to: cutting the acquired image data into image blocks with preset sizes;
a weight assignment module configured to: acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
a fingerprint acquisition module configured to: fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
an identification module configured to: and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
A third aspect of the present disclosure provides a medium having a program stored thereon which, when executed by a processor, performs the steps in the source camera identification method based on edge-guided weighted averaging according to the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device including a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor implements the steps in the source camera identification method based on edge-guided weighted averaging according to the first aspect of the present disclosure when executing the program.
Compared with the prior art, the beneficial effect of this disclosure is:
1. In the method, system, medium and electronic device of the present disclosure, different weights are assigned to edge regions and non-edge regions, which effectively reduces the influence of image edge regions on the camera fingerprint; the residuals are further fused at the statistical level through maximum likelihood estimation to obtain the camera fingerprint, ultimately improving the source camera identification performance.
2. In the method, system, medium and electronic device of the present disclosure, different edge weights are assigned to the residual image pixel by pixel according to reliability in the fingerprint acquisition stage, which reduces the contribution of edge-region residuals to the camera fingerprint; meanwhile, a more accurate camera fingerprint is estimated by combining this with maximum likelihood estimation.
3. In the method, system, medium and electronic device of the present disclosure, weighted correlation is applied in the test stage, which greatly reduces the influence that image content left by the denoising algorithm in the edge regions of a single test image has on source camera identification; artifacts introduced by the denoising algorithm are suppressed, yielding better results.
4. The method, system, medium and electronic device of the present disclosure can be combined with different denoising algorithms and SPN enhancement methods to further improve estimation precision; meanwhile, the experimental databases and data set splits designed in this method allow a fairer comparison of algorithm effectiveness.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure; they do not limit the disclosure.
Fig. 1 is a schematic flowchart of the source camera identification method based on edge-guided weighted averaging according to Embodiment 1 of the present disclosure.
Fig. 2 is an illustration of the influence of the edge region on the camera fingerprint estimation provided in embodiment 1 of the present disclosure.
Fig. 3 is a schematic view of a fingerprint extraction process provided in embodiment 1 of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
Example 1:
As shown in Fig. 1, Embodiment 1 of the present disclosure provides a source camera identification method based on edge-guided weighted averaging, including the following steps:
acquiring image data to be identified;
acquiring image data shot by a camera;
cutting the acquired image data into image blocks with preset sizes;
acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
In detail, the method comprises edge-guided weighted averaging, maximum-likelihood residual fusion, weighted correlation, and related parts, so that better identification performance can be obtained.
In the fingerprint extraction stage, the edge/non-edge regions of the image are first obtained and different weight coefficients are assigned, so that reliable regions contribute more to fingerprint estimation; second, residual fusion is performed using the statistical information of the residuals and maximum likelihood estimation to obtain a more accurate camera fingerprint; finally, a weighted correlation is applied to the candidate camera fingerprint and the test image residual to calculate a correlation value. Compared with the aggregation of multiple images when estimating the fingerprint, a single test image in the test stage is more susceptible to the influence of image content, so the edge-guided weight map of the test image is used for weighted correlation in the correlation-value calculation stage.
The specific process comprises the following steps:
s1: setting up an experimental database and a data set
The experimental database is set up in two ways in the Dresden database, the largest digital image forensics database. The first way is to randomly select one camera device for each camera model in the database; since the Dresden database contains 26 camera models in total, this yields an experimental database of 26 camera devices, referred to as experimental database A. The second way is to select the camera models for which the Dresden database contains 5 camera devices, giving 5 camera models and 25 camera devices in total, referred to as experimental database B.
For the images of all camera devices in experimental databases A and B, the data set is divided in two ways. One is to randomly select 25 images of each camera device as the fingerprint set and the remaining 130 images as the test set; the other is to randomly select 50 images of each camera device as the fingerprint set and the remaining 100 images as the test set. Because the total number of images per camera device in the Dresden database varies, this arrangement of the data sets is fairer.
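As an illustration only, the per-device data split can be sketched in Python as follows; the function name split_dataset and the fixed random seed are assumptions of the sketch and are not part of the disclosure.

import random

def split_dataset(image_paths, n_fingerprint=25, seed=0):
    # Randomly split one camera device's images into a fingerprint set and a
    # test set. n_fingerprint=25 (the rest used for testing) or n_fingerprint=50
    # corresponds to the two data set arrangements described above.
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    return paths[:n_fingerprint], paths[n_fingerprint:]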
S2: Fingerprint estimation
Each image of the data set is cropped from the middle area into an image block of size 64 × 64 or 128 × 128. Small image blocks contain less fingerprint information than the original image, so source camera identification based on image blocks is more difficult. Hereinafter, 'image block' denotes the portion cut out from an original image for the experiments.
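For illustration, the central cropping can be sketched in Python as follows; the helper name center_crop is an assumption of the sketch.

import numpy as np

def center_crop(image: np.ndarray, size: int = 64) -> np.ndarray:
    # Crop a size x size block from the center of a grayscale image
    # (size = 64 or 128 in the experiments described above).
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]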
A BM3D denoising filter is applied to the image blocks of the fingerprint set, and the difference between an image block and its denoised version is taken as the residual:
R = I - F_BM3D(I)    (1)
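A minimal Python sketch of Equation (1) is given below, assuming the third-party bm3d package as the denoising filter; the noise level sigma is an assumed parameter, and any comparable denoiser could be substituted.

import numpy as np
import bm3d  # third-party BM3D implementation, assumed installed via pip

def block_residual(block: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    # Eq. (1): R = I - F_BM3D(I). `block` is a float image scaled to [0, 1];
    # `sigma` is the assumed noise standard deviation passed to BM3D.
    denoised = bm3d.bm3d(block, sigma_psd=sigma)
    return block - denoised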
Edge regions and non-edge regions of the image block are detected using a Laplacian edge detection operator:
(Equation (2): Laplacian edge detection applied to the image block)
the edge weight map W is set, and the edge weight of the assigned edge region is 0.475 and the edge weight of the non-edge region is 1. Fig. 2 illustrates the effect of edge regions on camera fingerprint estimation.
Fig. 2(a) shows the camera fingerprint estimated by averaging the residual images of multiple image blocks. Fig. 2(b) shows an image block cropped from an original image. Fig. 2(c) is the residual image extracted from Fig. 2(b) by the GBWA method. Fig. 2(d) is the edge map of Fig. 2(b).
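A minimal sketch of the construction of the edge-weighted weight map is given below; since the exact edge decision is defined by Equation (2), the Laplacian magnitude threshold used here is an assumption of the sketch.

import numpy as np
from scipy import ndimage

def edge_weight_map(block: np.ndarray, threshold: float = 0.02,
                    edge_weight: float = 0.475) -> np.ndarray:
    # Pixels whose Laplacian magnitude exceeds `threshold` are treated as edge
    # pixels and receive weight 0.475; all other pixels receive weight 1.
    lap = ndimage.laplace(block.astype(np.float64))
    edges = np.abs(lap) > threshold
    return np.where(edges, edge_weight, 1.0)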
The residuals are fused using the edge-weighted averaging algorithm and maximum likelihood estimation to obtain the final camera fingerprint:
(Equation (3): edge-weighted maximum-likelihood fusion of the residuals into the final camera fingerprint)
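Because Equation (3) is not reproduced here, the sketch below combines the standard maximum-likelihood PRNU estimator with a per-pixel edge weight to illustrate the idea; it is not the disclosure's exact fusion formula.

import numpy as np

def fuse_fingerprint(blocks, residuals, weight_maps):
    # Edge-weighted maximum-likelihood fusion of residuals into a fingerprint,
    # computed pixel by pixel: K = sum(w * R * I) / sum(w * I^2), where I is the
    # image block, R its residual and w its edge weight map.
    num = np.zeros_like(residuals[0], dtype=np.float64)
    den = np.zeros_like(residuals[0], dtype=np.float64)
    for img, res, w in zip(blocks, residuals, weight_maps):
        num += w * res * img
        den += w * img * img
    return num / (den + 1e-12)  # small constant avoids division by zero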
the fingerprint extraction flowchart of the present embodiment is shown in fig. 3.
S3: Weighted correlation
Each image of the test set is likewise cropped from the middle area into image blocks of size 64 × 64 or 128 × 128. The image block is passed through the BM3D denoising filter and its residual is obtained. The Laplacian edge detection operator is used to detect edge regions and non-edge regions of the image block, and the edge weight map W_t is set: edge-region pixels are assigned a weight of 0.475 and non-edge-region pixels a weight of 1. A weighted correlation operation is performed on the candidate camera fingerprint and the test image residual to obtain a normalized correlation coefficient:
(Equation (4): normalized weighted correlation coefficient between the candidate camera fingerprint and the test image residual)
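The sketch below illustrates a weighted normalized correlation of the kind described here; because Equation (4) is not reproduced above, the exact weighting is an assumption of the sketch.

import numpy as np

def weighted_correlation(test_residual, weight_map, fingerprint, test_block):
    # The candidate fingerprint is modulated by the test block (PRNU model
    # R ~ I * K + noise), both signals are weighted by the test block's edge
    # weight map, and their normalized correlation coefficient is returned.
    a = (weight_map * test_residual).ravel()
    b = (weight_map * fingerprint * test_block).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))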
S4: Source camera identification performance evaluation
Using the normalized correlation coefficient, a 26-class classification is performed on experimental database A, and a 5-class classification (among the 5 devices of each camera model) is performed on experimental database B. Because the cameras in experimental database B come from the same camera model, their fingerprints are more easily confused than those in experimental database A, making the task more difficult.
At test time, each test image is assigned to the candidate camera that produces the largest correlation value. The camera identification accuracy is the ratio of the number of correctly classified test images in a camera device's test set to the total number of test images in that test set:
(Equation (5): recognition accuracy = number of correctly classified test images / total number of test images in the test set)
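For illustration, the decision rule and the accuracy measure can be sketched as follows, reusing the weighted_correlation helper from the sketch above; the function names are assumptions.

import numpy as np

def identify_source(test_residual, weight_map, test_block, fingerprints):
    # Assign the test image to the candidate camera whose fingerprint yields
    # the largest weighted correlation value.
    scores = [weighted_correlation(test_residual, weight_map, f, test_block)
              for f in fingerprints]
    return int(np.argmax(scores))

def accuracy(predictions, labels):
    # Eq. (5): correctly classified test images / total test images in the set.
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    return float(np.mean(predictions == labels))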
since the number of camera devices in the two experimental databases is large, the average source camera recognition accuracy of all the camera devices is adopted as an evaluation criterion.
The present embodiment will be further described with reference to specific examples.
First, the digital images in the Dresden database are downloaded, and then the images in the database are divided into a fingerprint set and a test set according to the data set division criterion. Second, the entire image is cropped from the image center into 64 × 64 or 128 × 128 image blocks, and the residuals are fused using edge weighted averaging and maximum likelihood estimation to get the camera fingerprint. And finally, carrying out weighted correlation on the residual error of the test image and the fingerprints of the candidate cameras to obtain a normalized correlation coefficient for source camera identification.
For experimental database A, the method of this embodiment is compared with several other successful source camera identification methods under the same experimental design.
The comparison results are given in Table 1, and the accuracy of each individual camera device is calculated by Equation (5).
From the experimental results in Table 1, the source camera identification method of this embodiment gives the highest identification accuracy in every case, thanks to edge-guided weighting, maximum-likelihood fusion of the residuals, and weighted correlation. Based on this experimental setup, the number of fingerprint images and the size of the identification image block are varied separately. It is well known that the more images are used to estimate the camera fingerprint, the more accurate the estimated fingerprint becomes, and that the identification accuracy drops sharply as the image block becomes smaller. The method of this embodiment achieves the highest recognition accuracy under all four combinations of these two variables, which broadly demonstrates the effectiveness of the algorithm.
In general, BM3D-based methods exhibit the best performance due to their strong denoising capability. This embodiment further improves on them, with average gains of 1.54%, 1.1%, 1.69% and 0.38% for the four cases respectively. Compared with the most popular MLE method, the recognition accuracy in the four cases improves by 26.06%, 18.97%, 24.65% and 17.27% respectively.
Table 1: comparison of average identification accuracy rates of different source machine identification methods
In the 26-class experiment, the method of this embodiment improves greatly on the MLE method and effectively improves identification precision.
For experimental database B, the average recognition accuracy of the 5 camera models can be obtained with the same procedure; to better present the experimental results, the overall average recognition accuracy over all camera models is also calculated below.
The method of this embodiment is compared with several other successful source camera identification methods. The comparison results are given in Table 2, and the accuracy of each camera device is calculated by the following formula:
(Equation: per-device recognition accuracy, averaged over all camera models)
for four cases, the identification accuracy of the embodiment is improved by 3.36%, 1.51%, 3.99%, 1.27% compared with the method [3] based on BM 3D. Compared with the MLE method with the highest popularity, the recognition accuracy of the four conditions is respectively improved by 14.06%, 11.28%, 20.76% and 15.14%.
Table 2: comparison of average identification accuracy rates of different source machine identification methods
Compared with the BM3D-based method, the method of this embodiment gives a larger improvement in distinguishing camera devices of the same camera model, which is very important for practical forensics.
Example 2:
Embodiment 2 of the present disclosure provides a source camera identification system based on edge-guided weighted averaging, including:
a data acquisition module configured to: acquiring image data shot by a camera;
an image cropping module configured to: cutting the acquired image data into image blocks with preset sizes;
a weight assignment module configured to: acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
a fingerprint acquisition module configured to: fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
an identification module configured to: and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
The working method of the system is the same as the source camera identification method based on edge-guided weighted averaging provided in Embodiment 1, and details are not repeated here.
Example 3:
Embodiment 3 of the present disclosure provides a medium on which a program is stored; when the program is executed by a processor, it implements the steps in the source camera identification method based on edge-guided weighted averaging according to Embodiment 1 of the present disclosure, the steps being:
acquiring image data shot by a camera;
cutting the acquired image data into image blocks with preset sizes;
acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
The detailed steps are the same as those of the source camera identification method based on edge-guided weighted averaging provided in Embodiment 1, and are not described here again.
Example 4:
Embodiment 4 of the present disclosure provides an electronic device including a memory, a processor, and a program stored on the memory and executable on the processor; when executing the program, the processor implements the steps in the source camera identification method based on edge-guided weighted averaging according to Embodiment 1 of the present disclosure, the steps being:
acquiring image data shot by a camera;
cutting the acquired image data into image blocks with preset sizes;
acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
The detailed steps are the same as those of the source camera identification method based on edge-guided weighted averaging provided in Embodiment 1, and are not described here again.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (10)

1. A source camera identification method based on edge-guided weighted averaging, characterized by comprising the following steps:
acquiring image data shot by a camera;
cutting the acquired image data into image blocks with preset sizes;
acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
2. The method of claim 1, wherein a Laplacian edge detection operator is used to detect edge regions and non-edge regions of the residual image, and weights for the edge regions and the non-edge regions are assigned.
3. The method for source camera identification based on edge-guided weighted averaging as claimed in claim 1, wherein the method for obtaining the camera fingerprint is:
cutting an original image of a database image into image blocks with preset sizes, and dividing the image blocks into a fingerprint set and a test set;
acquiring a group of residual images of a camera by using a fingerprint set, and constructing an edge weighted weight graph of each residual image;
and fusing the acquired residual image and the corresponding edge weighted weight graph by using a camera fingerprint fusion method, and estimating to obtain the camera fingerprint.
4. The source camera identification method based on edge-guided weighted averaging as claimed in claim 3, wherein the residual image is fused pixel by pixel using maximum likelihood estimation to obtain a final camera fingerprint;
alternatively,
the recognition accuracy of a camera is calculated by comparing the number of correctly classified test images in a test set of the camera with the total number of all test images in the test set.
5. The method of claim 3, wherein the experimental database is configured in two ways, one being that for all camera models, one camera of each camera model is randomly selected to constitute a first experimental database; another is to select multiple cameras from the same camera model as the second experimental database.
6. The method of claim 5, wherein the partitioning of the data set is performed in two ways for the images of all cameras in the two experimental databases;
one is to randomly select a first number of images of all cameras as a fingerprint set and the remaining second number of images as a test set; the other is to randomly select a third number of images of all cameras as a fingerprint set and the remaining fourth number of images as a test set.
7. The method as claimed in claim 1, wherein the camera with the largest weighted correlation value between the residual image and the camera fingerprint is the source camera corresponding to the image to be identified;
alternatively,
denoising an original image to be identified to obtain a denoised version of the original image, and using a difference value between the original image and the denoised version as a residual image;
alternatively,
the image to be recognized is cropped from the central area to 64 × 64 or 128 × 128 image blocks.
8. A source camera identification system based on edge-guided weighted averaging, comprising:
a data acquisition module configured to: acquiring image data shot by a camera;
an image cropping module configured to: cutting the acquired image data into image blocks with preset sizes;
a weight assignment module configured to: acquiring a residual image of an image block, and constructing an edge weighted weight graph of the residual image;
a fingerprint acquisition module configured to: fusing the obtained residual image and the corresponding edge weighted weight graph and estimating to obtain a camera fingerprint;
an identification module configured to: and calculating a weighted correlation value between the residual image of the image data to be identified and the camera fingerprint, and identifying the source camera according to the weighted correlation value.
9. A medium having a program stored thereon, wherein the program, when executed by a processor, performs the steps of the source camera identification method based on edge-guided weighted averaging as claimed in any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the source camera identification method based on edge-guided weighted averaging as claimed in any one of claims 1 to 7 when executing the program.
CN202010832394.2A 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system Active CN111951254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010832394.2A CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010832394.2A CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Publications (2)

Publication Number Publication Date
CN111951254A true CN111951254A (en) 2020-11-17
CN111951254B CN111951254B (en) 2024-05-10

Family

ID=73343161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010832394.2A Active CN111951254B (en) 2020-08-18 2020-08-18 Edge-guided weighted-average-based source camera identification method and system

Country Status (1)

Country Link
CN (1) CN111951254B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025073A (en) * 2021-11-18 2022-02-08 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for extracting hardware fingerprint of camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000175046A (en) * 1998-09-30 2000-06-23 Fuji Photo Film Co Ltd Image processing method and image processor
US20100183240A1 (en) * 2008-12-31 2010-07-22 Masaki Hiraga Image processing method and imaging apparatus
CN102819831A (en) * 2012-08-16 2012-12-12 江南大学 Camera source evidence obtaining method based on mode noise big component
CN107451990A (en) * 2017-06-13 2017-12-08 宁波大学 A kind of photograph image altering detecting method using non-linear guiding filtering
CN111178166A (en) * 2019-12-12 2020-05-19 中国科学院深圳先进技术研究院 Camera source identification method based on image content self-adaption

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000175046A (en) * 1998-09-30 2000-06-23 Fuji Photo Film Co Ltd Image processing method and image processor
US20100183240A1 (en) * 2008-12-31 2010-07-22 Masaki Hiraga Image processing method and imaging apparatus
CN102819831A (en) * 2012-08-16 2012-12-12 江南大学 Camera source evidence obtaining method based on mode noise big component
CN107451990A (en) * 2017-06-13 2017-12-08 宁波大学 A kind of photograph image altering detecting method using non-linear guiding filtering
CN111178166A (en) * 2019-12-12 2020-05-19 中国科学院深圳先进技术研究院 Camera source identification method based on image content self-adaption

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
RICCARDO SATTA et al.: "Sensor Pattern Noise Matching Based on Reliability Map for Source Camera Identification", Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP 2015), pages 222-226 *
WEN-NA ZHANG et al.: "An Improved Sensor Pattern Noise Estimation Method Based on Edge Guided Weighted Averaging", Proceedings of the International Conference on Machine Learning for Cyber Security, pages 405-415 *
WEN-NA ZHANG: "Effective Source Camera Identification based on MSEPLL Denoising Applied to Small Image Patches", Proceedings of APSIPA Annual Summit and Conference 2019, pages 18-21 *
XIN WANG et al.: "Laplacian Operator-Based Edge Detectors", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 5, pages 886-890, XP011175350, DOI: 10.1109/TPAMI.2007.1027 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025073A (en) * 2021-11-18 2022-02-08 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for extracting hardware fingerprint of camera
CN114025073B (en) * 2021-11-18 2023-09-29 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for extracting hardware fingerprint of camera

Also Published As

Publication number Publication date
CN111951254B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
US8385630B2 (en) System and method of processing stereo images
CN108416789A (en) Method for detecting image edge and system
CN111027546B (en) Character segmentation method, device and computer readable storage medium
JP2002288658A (en) Object extracting device and method on the basis of matching of regional feature value of segmented image regions
US20170344846A1 (en) Image processing apparatus, image processing method and program
CN113109368A (en) Glass crack detection method, device, equipment and medium
CN109858438B (en) Lane line detection method based on model fitting
JP2012058787A (en) Information processor and processing method thereof
CN109255792B (en) Video image segmentation method and device, terminal equipment and storage medium
CN111179295A (en) Improved two-dimensional Otsu threshold image segmentation method and system
CN110889817B (en) Image fusion quality evaluation method and device
CN113538263A (en) Motion blur removing method, medium, and device based on improved DeblurgAN model
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN112926695B (en) Image recognition method and system based on template matching
KR100691855B1 (en) Apparatus for extracting features from image information
CN112204957A (en) White balance processing method and device, movable platform and camera
CN111951254B (en) Edge-guided weighted-average-based source camera identification method and system
CN117115117B (en) Pathological image recognition method based on small sample, electronic equipment and storage medium
Cozzolino et al. PRNU-based forgery localization in a blind scenario
CN106778822B (en) Image straight line detection method based on funnel transformation
CN117853510A (en) Canny edge detection method based on bilateral filtering and self-adaptive threshold
CN108269264B (en) Denoising and fractal method of bean kernel image
CN106446832B (en) Video-based pedestrian real-time detection method
CN107680083B (en) Parallax determination method and parallax determination device
CN115984178A (en) Counterfeit image detection method, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant