CN115526892B - Image defect duplicate removal detection method and device based on three-dimensional reconstruction

Info

Publication number: CN115526892B
Application number: CN202211503164.7A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: detected, dimensional, image, matching, defect
Other versions: CN115526892A (application publication, in Chinese)
Inventors: 黄文琦, 曾群生, 吴洋, 李轩昂, 梁凌宇
Original and current assignee: Southern Power Grid Digital Grid Research Institute Co Ltd
Events: application CN202211503164.7A filed by Southern Power Grid Digital Grid Research Institute Co Ltd; publication of CN115526892A; application granted; publication of CN115526892B.

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06V 10/40: Extraction of image or video features
    • G06V 10/757: Image or video pattern matching; matching configurations of points or features
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06T 2207/10004: Image acquisition modality: still image; photographic image
    • G06T 2207/10028: Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/30244: Subject of image: camera pose

Abstract

The application relates to an image defect deduplication detection method and device based on three-dimensional reconstruction. The method comprises the following steps: acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area; extracting two-dimensional feature points of each image to be detected and matching the feature points between every two images, to obtain matching result data comprising the matching relationship of the two-dimensional feature points; performing three-dimensional reconstruction on the images to be detected based on the matching relationship, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images to be detected and the point cloud; performing, according to the association relationship, common-view region detection on the defect labeling areas in every two images to be detected, to obtain common-view detection results; and fusing the common-view detection results to obtain the defect repetition areas. By adopting the method, repeated defect areas in images can be accurately identified.

Description

Image defect duplicate removal detection method and device based on three-dimensional reconstruction
Technical Field
The present application relates to the field of computer vision technologies, and in particular to an image defect deduplication detection method and device based on three-dimensional reconstruction, a computer device, a storage medium, and a computer program product.
Background
Image retrieval techniques fall mainly into two types: text-based image retrieval and content-based image retrieval. Because image retrieval can be applied to defect deduplication, defect deduplication techniques based on image retrieval have emerged. Such a technique removes repeated images using only the two-dimensional feature information of the images: each defect image is represented as a compact feature vector, a feature similarity function is designed, and repeated defect images are retrieved globally.
However, in the existing image-retrieval-based schemes, the two-dimensional feature information of an image becomes highly ambiguous under large changes in viewing angle, illumination, and environment. This seriously degrades deduplication accuracy in real power grid inspection scenes, so the same defect is easily detected multiple times during two-dimensional defect detection on power transmission scene images.
The existing defect deduplication schemes therefore suffer from low defect detection accuracy.
Disclosure of Invention
In view of the foregoing, it is necessary to provide an image defect deduplication detection method, device, computer device, computer-readable storage medium and computer program product based on three-dimensional reconstruction that can improve the accuracy of defect detection.
In a first aspect, the application provides an image defect deduplication detection method based on three-dimensional reconstruction. The method comprises the following steps:
acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area;
extracting two-dimensional feature points of each image to be detected, and performing two-dimensional feature point matching between every two images to be detected to obtain matching result data of the two-dimensional feature points, the matching result data comprising the matching relationship of the two-dimensional feature points;
performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images to be detected and the three-dimensional scene point cloud;
performing, according to the association relationship, common-view region detection on the defect labeling areas in every two images to be detected, to obtain common-view detection results;
and fusing the common-view detection results to obtain defect repetition areas.
In one embodiment, the matching result data comprises the match count (number of matched pairs) of the two-dimensional feature points;
before performing common-view region detection on the defect labeling areas according to the association relationship, the method further comprises:
screening out, according to the match counts, the target images to be detected whose match count is greater than or equal to a preset match count threshold;
performing, according to the association relationship, common-view region detection on the defect labeling areas in the images to be detected then comprises:
performing common-view region detection on the defect labeling areas in every two target images to be detected according to the association relationship, to obtain common-view detection results.
In one embodiment, performing common-view region detection on the defect labeling areas in every two target images to be detected according to the association relationship further comprises:
acquiring the number of feature points in the defect labeling area of each target image to be detected;
if the number of feature points is less than a preset feature point number threshold, iteratively enlarging the defect labeling area by a preset step until the number of feature points in the area is greater than or equal to the threshold;
and performing common-view region detection on the adjusted defect labeling areas in every two target images to be detected.
In one embodiment, performing common-view region detection on the defect labeling areas in every two images to be detected according to the association relationship comprises:
acquiring the association relationship between every two images to be detected and the three-dimensional scene point cloud, the association relationship comprising the position data, in the three-dimensional scene point cloud, of the two-dimensional feature points in the two images;
comparing the position data of the two-dimensional feature points of the two images in the point cloud, to obtain the matching proportion of three-dimensional feature points in the three-dimensional scene point cloud;
and performing common-view region detection on the defect labeling areas in the two images according to the matching proportion of the three-dimensional feature points.
In one embodiment, performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain the three-dimensional scene point cloud corresponding to the target scene and the association relationship between the images and the point cloud, comprises:
calling a structure-from-motion algorithm based on the matching relationship to perform three-dimensional reconstruction on the images to be detected, obtaining the three-dimensional scene point cloud corresponding to the target scene;
acquiring, based on the matching relationship, the pose data of the camera corresponding to each image to be detected and the projection relationship of the three-dimensional feature points;
and obtaining the association relationship between the images to be detected and the three-dimensional scene point cloud from the projection relationship of the three-dimensional feature points and the camera pose data.
In one embodiment, acquiring, based on the matching relationship, the pose data of the camera corresponding to an image to be detected comprises:
acquiring parameter information of the camera corresponding to the image to be detected based on the matching relationship;
eliminating noise data from the three-dimensional scene point cloud;
acquiring the three-dimensional-to-two-dimensional motion relationship of the three-dimensional feature points in the denoised point cloud;
and obtaining the camera pose data from this motion relationship and the camera parameter information.
In one embodiment, after fusing the common-view detection results to obtain the defect repetition areas, the method further comprises: performing defect deduplication on the defect repetition areas of the images to be detected.
In a second aspect, the application further provides an image defect deduplication detection device based on three-dimensional reconstruction. The device comprises:
an image acquisition module, configured to acquire a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area;
a feature matching module, configured to extract two-dimensional feature points of each image to be detected and perform two-dimensional feature point matching between every two images, obtaining matching result data that comprises the matching relationship of the two-dimensional feature points;
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction based on the matching relationship and the images to be detected, obtaining a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud;
a common-view detection module, configured to perform common-view region detection on the defect labeling areas in every two images to be detected according to the association relationship, obtaining common-view detection results;
and a defect repetition detection module, configured to fuse the common-view detection results to obtain defect repetition areas.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that, when executing the computer program, implements the following steps:
acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area;
extracting two-dimensional feature points of each image to be detected, and performing two-dimensional feature point matching between every two images to obtain matching result data comprising the matching relationship of the two-dimensional feature points;
performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud;
performing, according to the association relationship, common-view region detection on the defect labeling areas in every two images to be detected, to obtain common-view detection results;
and fusing the common-view detection results to obtain defect repetition areas.
In a fourth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the steps of:
acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area;
extracting two-dimensional feature points of each image to be detected, and performing two-dimensional feature point matching between every two images to obtain matching result data comprising the matching relationship of the two-dimensional feature points;
performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud;
performing, according to the association relationship, common-view region detection on the defect labeling areas in every two images to be detected, to obtain common-view detection results;
and fusing the common-view detection results to obtain defect repetition areas.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling area;
extracting two-dimensional feature points of each image to be detected, and performing two-dimensional feature point matching between every two images to obtain matching result data comprising the matching relationship of the two-dimensional feature points;
performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud;
performing, according to the association relationship, common-view region detection on the defect labeling areas in every two images to be detected, to obtain common-view detection results;
and fusing the common-view detection results to obtain defect repetition areas.
According to the image defect deduplication detection method, device, computer device, storage medium and computer program product based on three-dimensional reconstruction, two-dimensional images and a three-dimensional scene are combined for defect detection. The matching relationship of the two-dimensional feature points is obtained by matching the feature points between images. Three-dimensional reconstruction is then performed based on the matching relationship and the images to be detected, yielding a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud. Common-view region detection is then performed on the defect labeling areas in every two images based on the association relationship, so that regions lying at different positions in different images but actually pointing to the same defect can be identified more accurately from the positions of the image feature points in three-dimensional space. Finally, the common-view detection results are fused, so that the multiple images pointing to the same defect are consolidated into defect repetition areas, which improves the efficiency of repeated-defect detection. In conclusion, with this scheme repeated defect areas in images can be accurately identified.
Drawings
FIG. 1 is a diagram of an application environment of the image defect deduplication detection method based on three-dimensional reconstruction in one embodiment;
FIG. 2 is a schematic flowchart of the image defect deduplication detection method based on three-dimensional reconstruction in one embodiment;
FIG. 3 is a detailed flowchart of the image defect deduplication detection method based on three-dimensional reconstruction in one embodiment;
FIG. 4 is a flowchart of the common-view region detection step in one embodiment;
FIG. 5 is a detailed flowchart of the image defect deduplication detection method based on three-dimensional reconstruction in another embodiment;
FIG. 6 is a block diagram of the image defect deduplication detection device based on three-dimensional reconstruction in one embodiment;
FIG. 7 is a block diagram of the image defect deduplication detection device based on three-dimensional reconstruction in another embodiment;
FIG. 8 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image defect deduplication detection method based on three-dimensional reconstruction provided by the embodiments of the application can be applied in the environment shown in fig. 1, where a terminal 102 communicates with a server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. Specifically, maintenance personnel upload the images to be detected to the server 104 through the terminal 102 and send an image defect detection message carrying the defect labeling areas. In response, the server 104 acquires the set of images to be detected in the target scene, each image carrying a defect labeling area; extracts the two-dimensional feature points of each image and matches them between every two images, obtaining matching result data that comprises the matching relationship of the two-dimensional feature points; performs three-dimensional reconstruction based on the matching relationship and the images, obtaining a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud; performs common-view region detection on the defect labeling areas in every two images according to the association relationship, obtaining common-view detection results; and finally fuses the common-view detection results to obtain the defect repetition areas. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device or portable wearable device; the Internet-of-Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle-mounted device, and the like, and the portable wearable device may be a smart watch, smart bracelet, head-mounted device, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, an image defect deduplication detection method based on three-dimensional reconstruction is provided. Taking its application to the server 104 in fig. 1 as an example, the method includes the following steps:
step S202, acquiring an image set to be detected in a target scene, wherein the image set to be detected in the image set to be detected carries a defect labeling area.
Each image to be detected in the set carries a defect labeling area, i.e. an area indicating the position of a defect. In a specific implementation, the images to be detected can be acquired by four types of image acquisition equipment: a 360-degree panoramic camera, a high-definition digital camera, an RGBD (RGB plus depth map) camera, and an unmanned aerial vehicle. The unmanned aerial vehicle captures the global structure of the scene, and the panoramic, digital and RGBD cameras then capture fine local structures from the ground. In practice, taking defect deduplication of power transmission scene images as an example, the images to be detected may be a series of images of the same tower captured by this equipment. Operation and maintenance personnel manually mark the defect positions in the image set in the form of defect detection target frames (hereinafter simply "target frames"): four pixel coordinates are given per image, and the enclosed rectangle is the region where the defect is located.
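For illustration, the following minimal sketch shows one way the annotated input could be represented in code; the record layout, field names and file names are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the per-image record implied by the text;
# all names and values are placeholders, not from the patent.
from dataclasses import dataclass


@dataclass
class AnnotatedImage:
    path: str
    # Defect detection target frame as pixel coordinates (x_min, y_min,
    # x_max, y_max); the enclosed rectangle is the defect labeling area.
    defect_box: tuple[float, float, float, float]


images_to_detect = [
    AnnotatedImage("tower_001.jpg", (412.0, 233.0, 518.0, 310.0)),
    AnnotatedImage("tower_002.jpg", (98.0, 455.0, 180.0, 540.0)),
]
```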
Step S204: extracting two-dimensional feature points of each image to be detected, and performing two-dimensional feature point matching between every two images to obtain matching result data of the two-dimensional feature points, the matching result data comprising the matching relationship of the two-dimensional feature points.
After the series of images to be detected is obtained, the two-dimensional feature points of each image can be extracted; the images are then iterated over in pairs, each pair is matched by its two-dimensional feature points, and matching keypoints are selected, yielding the matching result data. Specifically, the matching result data includes the match count and the matching correspondence of the two-dimensional feature points, i.e. the positions of groups of identical feature points in different images. In this embodiment, the SuperPoint algorithm may be used to extract the two-dimensional feature points and descriptors of each image; trained by deep learning, it offers high robustness, accurate localization, and real-time performance. In other embodiments, the feature points and a global descriptor of each image may be extracted with the NetVLAD image retrieval technique, or the two-dimensional feature points may be extracted with SIFT (Scale-Invariant Feature Transform) or ORB (Oriented FAST and Rotated BRIEF).
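As a hedged illustration of this step, the sketch below performs pairwise two-dimensional feature matching with OpenCV's SIFT, one of the alternatives the text names (SuperPoint would require external model weights); the function names and image paths are placeholders.

```python
# Pairwise two-dimensional feature matching, sketched with OpenCV SIFT.
import itertools

import cv2


def extract_features(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors


def match_pair(desc_a, desc_b, ratio=0.75):
    # Lowe's ratio test keeps only unambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]


# Matching result data: for every image pair, the matched feature pairs;
# len(matches) is the "match count" used later for coarse screening.
features = {p: extract_features(p) for p in ["a.jpg", "b.jpg", "c.jpg"]}
match_results = {
    (pa, pb): match_pair(features[pa][1], features[pb][1])
    for pa, pb in itertools.combinations(features, 2)
}
```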
Step S206: performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images to be detected and the point cloud.
Following the previous step, once the matching relationship of the two-dimensional feature points is obtained, the series of images to be detected is analyzed with a three-dimensional reconstruction technique based on the matching relationship: the 3D coordinates of the scene feature points in the images are recovered, and three-dimensional reconstruction is performed to obtain the three-dimensional scene point cloud corresponding to the target scene. In this embodiment, the point cloud may be a sparse scene point cloud. Together with the point cloud, the association relationship between each image to be detected and the point cloud is obtained, including the positions in the point cloud of the feature points of each image. The reconstruction may use a SLAM (Simultaneous Localization and Mapping) algorithm, an SfM (Structure from Motion) algorithm, or another three-dimensional reconstruction technique, as determined by the actual situation; no limitation is imposed here.
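As one concrete possibility, the sparse reconstruction could be driven by the COLMAP structure-from-motion toolchain. The patent only requires some SfM or SLAM technique, so the following sketch, which assumes a local COLMAP installation available on the PATH, is an illustration rather than the patented implementation.

```python
# Sketch: sparse SfM reconstruction via the COLMAP command-line tools
# (assumed installed); the workspace layout is illustrative.
import pathlib
import subprocess


def reconstruct(image_dir: str, workspace: str) -> None:
    pathlib.Path(workspace).mkdir(parents=True, exist_ok=True)
    db = f"{workspace}/database.db"
    sparse = f"{workspace}/sparse"
    pathlib.Path(sparse).mkdir(exist_ok=True)
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db, "--image_path", image_dir],
                   check=True)
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", db], check=True)
    # The mapper recovers the camera poses and a sparse three-dimensional
    # scene point cloud; the per-image 2D-3D associations are stored in
    # the output model.
    subprocess.run(["colmap", "mapper", "--database_path", db,
                    "--image_path", image_dir, "--output_path", sparse],
                   check=True)
```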
Step S208: performing common-view region detection on the defect labeling areas in every two images to be detected according to the association relationship, to obtain common-view detection results.
"Co-view" means that some feature in both images actually points to the same object in three-dimensional space. The common visual area refers to an area where there is an overlap of the visual fields. In the present embodiment, the common view region refers to a defect region having overlapping views. The common view area detection means detecting whether an overlapping view area exists in an image shot by a camera. In this embodiment, after obtaining the association relationship between each image to be detected and the three-dimensional scene point cloud, the method may perform, according to the association relationship, common-view region detection on the defect labeling regions in two images to be detected, that is, detect whether the defect labeling regions in the images to be detected have a common-view region, and obtain a common-view region detection result, where the common-view region detection result indicates which image has the common-view region and a specific common-view region. When the two-dimensional feature points are repeated in a large number, that is, the repetition degree of the three-dimensional feature points is greater than a preset repetition degree threshold, the two-dimensional feature points are regarded as the common-view areas, that is, the actual contents corresponding to the target frames of the two images are the same. That is, the defects captured by the two images are repetitive.
Step S210: fusing the common-view detection results to obtain defect repetition areas.
Following the above, after common-view detection is completed, the number of images to be detected may be very large, and the acquisition equipment views the same object from different angles, so the contents of the captured images differ. The common-view detection results may therefore show feature regions in many images pointing to the same object in the three-dimensional scene point cloud, and these results need to be fused. Specifically, starting from the associated image in which the defect area is most repeated, the region where the defect lies can be located precisely through the two-dimensional and three-dimensional associations; the association relationships are then sorted out to determine which associated images correspond to the same defect area, i.e. which images contain the defect repetition area, achieving accurate localization of repeated defects.
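A minimal sketch of one way such fusion could be implemented: pairwise common-view verdicts are merged with union-find, so that every resulting group collects all images pointing at the same physical defect. The data layout is an assumption for illustration.

```python
# Sketch: fuse pairwise common-view results into defect groups via union-find.
def fuse_common_view(image_ids, coview_pairs):
    parent = {i: i for i in image_ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in coview_pairs:
        parent[find(a)] = find(b)

    groups = {}
    for i in image_ids:
        groups.setdefault(find(i), []).append(i)
    # Groups with more than one image are defect repetition areas.
    return [g for g in groups.values() if len(g) > 1]


print(fuse_common_view([1, 2, 3, 4], [(1, 2), (2, 3)]))  # -> [[1, 2, 3]]
```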
According to this image defect deduplication detection method based on three-dimensional reconstruction, two-dimensional images and a three-dimensional scene are combined for defect detection. The matching relationship of the two-dimensional feature points is obtained by matching them between images; three-dimensional reconstruction is performed based on the matching relationship and the images to be detected, yielding the three-dimensional scene point cloud of the target scene and the association relationship between the images and the point cloud; common-view region detection is performed on the defect labeling areas in every two images based on the association relationship, so that regions at different positions in different images that actually point to the same defect are identified more accurately from the positions of the feature points in three-dimensional space; and finally the common-view detection results are fused, consolidating the multiple images that point to the same defect. In conclusion, with this scheme repeated defects in images can be accurately identified and the efficiency of repeated-defect detection improved.
As shown in fig. 3, in one embodiment, step S206 includes:
Step S226: calling a structure-from-motion algorithm based on the matching relationship to perform three-dimensional reconstruction on the images to be detected, obtaining the three-dimensional scene point cloud corresponding to the target scene; acquiring, based on the matching relationship, the pose data of the camera corresponding to each image and the projection relationship of the three-dimensional feature points; and obtaining the association relationship between the images and the point cloud from the projection relationship of the three-dimensional feature points and the camera pose data.
Camera pose data refers to the position and orientation of the camera relative to the real object (the world coordinate system). The projection relationship of the three-dimensional feature points is the correspondence between the three-dimensional feature points and the two-dimensional feature points they project onto the two-dimensional images. In this embodiment, the reconstruction calls the structure-from-motion algorithm, i.e. the SfM algorithm, based on the matching relationship: the images to be detected are analyzed, the three-dimensional coordinates of the image feature points are obtained, and the images are reconstructed into the three-dimensional scene point cloud of the target scene. The camera pose data and the projection relationship of the three-dimensional feature points are then acquired based on the matching relationship, and the association relationship between the images and the point cloud is obtained from them. Specifically, the camera pose may be solved with a PnP (Perspective-n-Point) algorithm, and the keypoint-level association between each image and the point cloud follows from the projection relationship and the pose. Because SfM recovers the three-dimensional positions of part of the two-dimensional feature points in the course of reconstruction, the method does not depend on dense image acquisition, which reduces cost and improves efficiency.
In one embodiment, acquiring the pose data of the camera corresponding to an image to be detected based on the matching relationship includes: acquiring the parameter information of the camera based on the matching relationship; eliminating noise data from the three-dimensional scene point cloud; acquiring the three-dimensional-to-two-dimensional motion relationship of the three-dimensional feature points in the denoised point cloud; and obtaining the camera pose data from this motion relationship and the camera parameter information.
The external parameters of a camera are its parameters in the world coordinate system, including its position and rotation. In practice, the three-dimensional scene point cloud usually contains noise data, so it may first be denoised. Specifically, in this embodiment noise points may be eliminated with RANSAC (Random Sample Consensus); the three-dimensional-to-two-dimensional motion relationship of the remaining three-dimensional feature points is then obtained with the PnP algorithm, and the pose data of the camera corresponding to the image to be detected is computed from this motion relationship and the camera parameter information.
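The following sketch illustrates this step with OpenCV's combined RANSAC-plus-PnP solver on synthetic data; the intrinsic matrix, the synthetic pose and the injected outliers are illustrative assumptions.

```python
# Sketch: camera pose from 2D-3D correspondences via RANSAC + PnP.
import numpy as np
import cv2

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # camera parameter information (intrinsics)

# Synthetic scene: 3D feature points seen by a camera translated by t_true.
points_3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))
t_true = np.array([0.1, -0.2, 0.3])
proj = (points_3d + t_true) @ K.T
points_2d = proj[:, :2] / proj[:, 2:3]
points_2d[:5] += 50.0  # inject a few noisy correspondences for RANSAC to reject

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d.astype(np.float64), points_2d.astype(np.float64), K, None,
    iterationsCount=200, reprojectionError=4.0)
R, _ = cv2.Rodrigues(rvec)   # pose data: rotation matrix R and translation tvec
print(ok, tvec.ravel())      # tvec should be close to t_true
```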
As shown in fig. 4, in one embodiment, step S208 includes:
and step S268, acquiring the incidence relation between every two images to be detected and the three-dimensional scene point cloud, wherein the incidence relation comprises the position data of the two-dimensional feature points in the two images to be detected in the three-dimensional scene point cloud.
Step S278, comparing the position data of the two-dimensional feature points in the two images to be detected in the three-dimensional scene point cloud to obtain the matching proportion of the three-dimensional feature points in the three-dimensional scene point cloud.
Step S288, according to the matching proportion of the three-dimensional characteristic points, carrying out common visual area detection on the defect labeling areas in the images to be detected.
In a specific implementation, taking two images to be detected as a unit, the association relationship between each pair of images and the three-dimensional scene point cloud is acquired; it comprises the position data of the images' two-dimensional feature points in the point cloud and the correspondence between those feature points and the point cloud. It is then checked whether the position data of the two-dimensional feature points in the point cloud coincide; if so, two feature points located in different images actually point to the same three-dimensional feature point. From these coincidences, the matching proportion of three-dimensional feature points in the point cloud is obtained; it characterizes how similarly the two images are associated with the point cloud. Common-view detection is then performed on the defect labeling areas according to this proportion: if the matching proportion exceeds a preset threshold, e.g. 95%, the target frames of the two images are determined to share a common-view region. Detecting common-view regions by the matching proportion of three-dimensional feature points identifies them accurately and efficiently, enabling precise localization of the defect region.
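A minimal sketch of such a common-view test, assuming each defect labeling area has already been mapped, via the association relationship, to the set of three-dimensional point identifiers it observes; the choice of denominator is an illustrative assumption (the patent only specifies a matching proportion threshold such as 95%).

```python
# Sketch: decide co-visibility from the overlap of observed 3D point IDs.
def coview_by_3d_ratio(points_a: set[int], points_b: set[int],
                       threshold: float = 0.95) -> bool:
    if not points_a or not points_b:
        return False
    shared = len(points_a & points_b)
    # Normalizing by the smaller observation set is one plausible choice.
    ratio = shared / min(len(points_a), len(points_b))
    return ratio >= threshold


print(coview_by_3d_ratio({1, 2, 3, 4, 5}, {1, 2, 3, 4, 5, 9}))  # True
print(coview_by_3d_ratio({1, 2, 3}, {7, 8, 9}))                 # False
```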
As shown in fig. 3, in one embodiment, the matching result data includes the match count of the two-dimensional feature points;
before step S208, the method further includes: Step S207: screening out, according to the match counts, the target images to be detected whose match count is greater than or equal to a preset match count threshold.
Step S208 then includes: Step S228: performing common-view region detection on the defect labeling areas in every two target images to be detected according to the association relationship, to obtain common-view detection results.
As described in the above embodiment, the matching result data of the two-dimensional feature points includes their match count. This embodiment introduces a coarse-to-fine strategy for detecting repeated defect regions. Images that may contain repeated defect areas are first screened out coarsely by the match count: taking two images to be detected as a unit, the match count of the two-dimensional feature points inside their target frames is compared with a preset match count threshold, and if it is greater than or equal to the threshold, the two images may contain a repeated defect region. Iterating this comparison over all pairs screens out the target images to be detected. Fine defect localization is then performed on the images that pass the coarse screening: common-view detection is applied to the defect labeling areas in every two target images according to the association relationship, as described in the foregoing embodiment. Screening candidates from the two-dimensional perspective and then localizing repeated defects from the three-dimensional perspective achieves accurate localization of repeated defects.
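A sketch of the coarse screening, assuming the pairwise match counts from step S204 are available as a dictionary; the threshold value is illustrative.

```python
# Sketch: coarse screening of image pairs by two-dimensional match count.
def coarse_screen(pair_match_counts: dict, min_matches: int = 30):
    # pair_match_counts maps (image_a, image_b) -> number of matched
    # two-dimensional feature pairs inside the target frames.
    return [pair for pair, count in pair_match_counts.items()
            if count >= min_matches]


candidates = coarse_screen({("a.jpg", "b.jpg"): 57, ("a.jpg", "c.jpg"): 4})
print(candidates)  # -> [('a.jpg', 'b.jpg')]
```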
As shown in fig. 5, in one embodiment, step S228 further includes:
step S248, the number of the characteristic points in the defect labeling area in each target image to be detected is obtained, if the number of the characteristic points is less than the preset threshold value of the number of the characteristic points, the size of the defect labeling area is iteratively adjusted according to the preset step length until the number of the characteristic points in the defect labeling area is more than or equal to the preset threshold value of the number of the characteristic points, and the adjusted defect labeling areas in the images to be detected of every two targets are detected in the common visual area.
In practice, because the target scene may be very large, the defect region to be detected may occupy only a small local part of it, so a captured image may contain much content in which no defect detection is needed. This embodiment therefore introduces a loss constraint on the defect labeling area, i.e. the defect detection target frame: when the SuperPoint algorithm captures too few features inside the target frame, the loss is regarded as too large, and the frame size is adaptively adjusted. Specifically, the number of feature points in each image's defect labeling area is obtained and compared with a preset feature point number threshold, e.g. 3; if it is smaller, the area is iteratively enlarged by a preset step until the count reaches the threshold, so that more feature points fall within the detection range and the deduplication rate of the algorithm improves without loss of accuracy.
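A sketch of the adaptive enlargement loop, assuming the keypoints are available as pixel coordinates; the threshold, step, and clamping behavior at the image border are illustrative choices.

```python
# Sketch: grow a target frame until it contains enough feature points.
def enlarge_box(box, keypoints, image_size, min_points=3, step=10):
    x0, y0, x1, y1 = box
    w, h = image_size

    def points_inside(b):
        bx0, by0, bx1, by1 = b
        return sum(bx0 <= x <= bx1 and by0 <= y <= by1 for x, y in keypoints)

    # Enlarge by the preset step on every side, clamped to the image,
    # until the feature count reaches the threshold or the box covers
    # the whole image.
    while points_inside((x0, y0, x1, y1)) < min_points and \
            (x0 > 0 or y0 > 0 or x1 < w or y1 < h):
        x0, y0 = max(0, x0 - step), max(0, y0 - step)
        x1, y1 = min(w, x1 + step), min(h, y1 + step)
    return (x0, y0, x1, y1)


print(enlarge_box((100, 100, 120, 120), [(80, 90), (95, 95), (130, 135)],
                  (640, 480)))
```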
As shown in fig. 5, in another embodiment, after step S210 the method further includes: Step S212: performing defect deduplication on the defect repetition areas of the images to be detected.
After the defect repetition areas are detected, defect deduplication can be performed on them. Specifically, the deduplication may start from the defect with the highest repetition degree or confidence, so that the same defect is not detected multiple times in subsequent defect detection on the images.
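A minimal sketch of the final deduplication over the fused groups, keeping the highest-confidence detection in each group; the record layout and the scores are illustrative.

```python
# Sketch: within each group of images showing the same defect, keep the
# highest-confidence detection and drop the rest.
def deduplicate(groups, confidence):
    kept, removed = [], []
    for group in groups:
        best = max(group, key=lambda img: confidence[img])
        kept.append(best)
        removed.extend(img for img in group if img != best)
    return kept, removed


kept, removed = deduplicate([["a.jpg", "b.jpg", "c.jpg"]],
                            {"a.jpg": 0.91, "b.jpg": 0.78, "c.jpg": 0.88})
print(kept, removed)  # -> ['a.jpg'] ['b.jpg', 'c.jpg']
```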
To describe the image defect deduplication detection method based on three-dimensional reconstruction more clearly, a specific embodiment is described below with reference to fig. 5. The embodiment includes the following:
Step S202: acquiring a set of images to be detected in a target scene, each image carrying a defect labeling area.
Step S204: extracting two-dimensional feature points of each image to be detected, and matching the feature points between every two images to obtain matching result data comprising the matching relationship of the two-dimensional feature points.
Step S206: performing three-dimensional reconstruction based on the matching relationship and the images to be detected, to obtain the three-dimensional scene point cloud corresponding to the target scene and the association relationship between the images and the point cloud.
Specifically: a structure-from-motion algorithm is called based on the matching relationship to reconstruct the images to be detected, obtaining the three-dimensional scene point cloud corresponding to the target scene; the camera pose data and the projection relationship of the three-dimensional feature points are acquired based on the matching relationship; and the association relationship between the images and the point cloud is obtained from the projection relationship and the pose data.
Step S207: screening out, according to the match counts, the target images to be detected whose match count is greater than or equal to a preset match count threshold.
Step S228: performing common-view region detection on the defect labeling areas in every two target images according to the association relationship, to obtain common-view detection results.
Specifically, the number of feature points in each target image's defect labeling area may be obtained; if it is less than the preset feature point number threshold, the defect labeling area is iteratively enlarged by the preset step until the count reaches the threshold, and common-view detection is then performed on the adjusted defect labeling areas in every two target images.
Step S210: fusing the common-view detection results to obtain the defect repetition areas.
Step S212: performing defect deduplication on the defect repetition areas of the images to be detected.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, no strict order is imposed, and the steps may be performed in other orders. Moreover, at least some of the steps may comprise multiple sub-steps or stages that need not be performed at the same time but may be performed at different times; their execution order need not be sequential, and they may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application further provides an image defect deduplication detection device based on three-dimensional reconstruction for implementing the above method. The solution it provides is similar to that of the method, so for the specific limitations in the device embodiments below, reference may be made to the limitations of the method above; they are not repeated here.
In one embodiment, as shown in fig. 6, an image defect deduplication detection device based on three-dimensional reconstruction is provided, including an image acquisition module 610, a feature matching module 620, a three-dimensional reconstruction module 630, a common-view detection module 640, and a defect repetition detection module 650, wherein:
the image acquisition module 610 is configured to acquire a set of images to be detected in a target scene, each image to be detected in the set carrying a defect labeling area;
the feature matching module 620 is configured to extract two-dimensional feature points of each image to be detected and match them between every two images, obtaining matching result data comprising the matching relationship of the two-dimensional feature points;
the three-dimensional reconstruction module 630 is configured to perform three-dimensional reconstruction based on the matching relationship and the images to be detected, obtaining a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images and the point cloud;
the common-view detection module 640 is configured to perform common-view region detection on the defect labeling areas in every two images to be detected according to the association relationship, obtaining common-view detection results;
and the defect repetition detection module 650 is configured to fuse the common-view detection results to obtain defect repetition areas.
According to this image defect deduplication detection device based on three-dimensional reconstruction, two-dimensional images and a three-dimensional scene are combined for defect detection. Specifically, the matching relationship of the two-dimensional feature points is obtained by matching them between images; three-dimensional reconstruction is performed based on the matching relationship and the images to be detected, yielding the three-dimensional scene point cloud of the target scene and the association relationship between the images and the point cloud; common-view region detection is performed on the defect labeling areas in every two images based on the association relationship, so that regions at different positions in different images that actually point to the same defect are identified more accurately from the positions of the feature points in three-dimensional space; and finally the common-view detection results are fused, consolidating the multiple images that point to the same defect. In conclusion, the device can accurately identify repeated defects in images and improve the efficiency of repeated-defect detection.
As shown in fig. 7, in one embodiment, the matching result data includes the match count of the two-dimensional feature points, and the device further comprises an image coarse-screening module 635 configured to screen out, according to the match counts, the target images to be detected whose match count is greater than or equal to a preset match count threshold;
the common-view detection module 640 is further configured to perform common-view region detection on the defect labeling areas in every two target images to be detected according to the association relationship, obtaining common-view detection results.
In one embodiment, the common-view area detection module 640 is further configured to obtain the number of feature points in the defect labeling area in each image to be detected, and if the number of feature points is less than a preset feature point number threshold, iteratively adjust the size of the defect labeling area according to a preset step length until the number of feature points in the defect labeling area is greater than or equal to the preset feature point number threshold, and perform common-view area detection on the adjusted defect labeling area in every two images to be detected.
In one embodiment, the common visual area detection module 640 is further configured to obtain an association relationship between each two images to be detected and the three-dimensional scene point cloud, where the association relationship includes position data of two-dimensional feature points in the two images to be detected in the three-dimensional scene point cloud, compare the position data of the two-dimensional feature points in the two images to be detected in the three-dimensional scene point cloud to obtain a matching ratio of the three-dimensional feature points in the three-dimensional scene point cloud, and perform common visual area detection on the defect labeling areas in the two images to be detected according to the matching ratio of the three-dimensional feature points.
In one embodiment, the three-dimensional reconstruction module 630 is further configured to invoke a structure-from-motion (SfM) algorithm based on the matching relationship to three-dimensionally reconstruct the images to be detected, obtaining the three-dimensional scene point cloud corresponding to the target scene; to obtain, based on the matching relationship, the pose data of the camera corresponding to each image to be detected and the projection relationship of the three-dimensional feature points; and to derive the association relationship between the images to be detected and the point cloud from the projection relationship and the camera pose data.
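The reconstruction is typically delegated to an SfM library; as a hedged illustration, a minimal two-view building block can be written with OpenCV as follows. This is a generic SfM step under assumed inputs, not the embodiment's full incremental pipeline:

    import cv2
    import numpy as np

    def two_view_sfm(pts1, pts2, K):
        """Recover relative camera pose from matched two-dimensional
        points and triangulate a sparse three-dimensional point cloud.
        pts1, pts2: Nx2 float32 arrays of matched pixel coordinates.
        K: 3x3 camera intrinsic matrix."""
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # reference camera
        P2 = K @ np.hstack([R, t])                         # second camera
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        pts3d = (pts4d[:3] / pts4d[3]).T                   # Nx3 scene points
        return R, t, pts3d

Registering each image's two-dimensional features against the triangulated points is what yields the association relationship between the images to be detected and the scene point cloud.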
In one embodiment, the three-dimensional reconstruction module 630 is further configured to obtain, based on the matching relationship, the parameter information of the camera corresponding to each image to be detected; to reject noise data in the three-dimensional scene point cloud; to obtain the three-dimensional-to-two-dimensional motion relationship of the remaining three-dimensional feature points; and to compute the camera pose data for each image to be detected from that motion relationship and the camera parameter information.
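Given the camera parameter information and the cleaned three-dimensional-to-two-dimensional correspondences, the pose computation is a standard perspective-n-point (PnP) problem; a sketch using OpenCV's RANSAC-based solver, which also rejects residual noisy correspondences (array shapes and the function name are assumptions for this sketch):

    import cv2
    import numpy as np

    def camera_pose_pnp(pts3d, pts2d, K, dist_coeffs=None):
        """Estimate the pose of the camera for one image to be detected
        from Nx3 scene points and their Nx2 image projections."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float32), pts2d.astype(np.float32), K, dist_coeffs)
        if not ok:
            raise RuntimeError("PnP failed: too few reliable correspondences")
        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
        return R, tvec, inliers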
The modules in the above image defect deduplication detection device based on three-dimensional reconstruction may be implemented wholly or partly in software, hardware, or a combination of the two. Each module may be embedded in hardware form in, or independent of, a processor of a computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface (I/O for short), and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides a running environment for the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device stores data such as the images to be detected, the common-view region detection results, and the duplicate defect regions. The input/output interface of the computer device is used for exchanging information between the processor and external devices. The communication interface of the computer device is used for connecting and communicating with external terminals through a network. The computer program, when executed by the processor, implements an image defect deduplication detection method based on three-dimensional reconstruction.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the above image defect deduplication detection method based on three-dimensional reconstruction are implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the above image defect deduplication detection method based on three-dimensional reconstruction are implemented.
In one embodiment, a computer program product is provided, comprising a computer program; when the computer program is executed by a processor, the steps of the above image defect deduplication detection method based on three-dimensional reconstruction are implemented.
It should be noted that the user information (including but not limited to user device information and user personal information) and data (including but not limited to data for analysis, stored data, and displayed data) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that those skilled in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image defect deduplication detection method based on three-dimensional reconstruction is characterized by comprising the following steps:
acquiring a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling region;
extracting two-dimensional feature points from each image to be detected, and performing two-dimensional feature point matching and comparison on each pair of images to be detected to obtain matching result data of the two-dimensional feature points, wherein the matching result data includes the matching relationship of the two-dimensional feature points and the number of matched pairs of two-dimensional feature points;
performing three-dimensional reconstruction based on the matching relationship and the images to be detected to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images to be detected and the three-dimensional scene point cloud;
screening out, according to the number of matched pairs, target images to be detected whose number of matched pairs is greater than or equal to a preset matched-pair threshold;
performing, according to the association relationship, common-view region detection on the defect labeling regions in each pair of target images to be detected to obtain a common-view region detection result; and
fusing the common-view region detection results to obtain a duplicate defect region;
wherein performing, according to the association relationship, common-view region detection on the defect labeling regions in each pair of target images to be detected to obtain a common-view region detection result comprises: acquiring the number of feature points in the defect labeling region of each target image to be detected; if the number of feature points is smaller than a preset feature point count threshold, iteratively adjusting the size of the defect labeling region by a preset step size until the number of feature points in the defect labeling region is greater than or equal to the preset feature point count threshold; and performing common-view region detection on the adjusted defect labeling regions in each pair of target images to be detected to obtain the common-view region detection result.
2. The method according to claim 1, wherein performing common-view region detection on the defect labeling regions in each pair of images to be detected according to the association relationship comprises:
acquiring the association relationship between each pair of images to be detected and the three-dimensional scene point cloud, wherein the association relationship includes position data, in the three-dimensional scene point cloud, of the two-dimensional feature points in the pair of images to be detected;
comparing the position data of the two-dimensional feature points of the pair of images to be detected in the three-dimensional scene point cloud to obtain a matching proportion of three-dimensional feature points in the three-dimensional scene point cloud; and
performing common-view region detection on the defect labeling regions in the pair of images to be detected according to the matching proportion of the three-dimensional feature points.
3. The method according to any one of claims 1 to 2, wherein performing three-dimensional reconstruction based on the matching relationship and the images to be detected to obtain the three-dimensional scene point cloud corresponding to the target scene and the association relationship between the images to be detected and the three-dimensional scene point cloud comprises:
invoking a structure-from-motion algorithm based on the matching relationship and performing three-dimensional reconstruction on the images to be detected to obtain the three-dimensional scene point cloud corresponding to the target scene;
acquiring, based on the matching relationship, pose data of the camera corresponding to each image to be detected and a projection relationship of the three-dimensional feature points; and
obtaining the association relationship between the images to be detected and the three-dimensional scene point cloud according to the projection relationship of the three-dimensional feature points and the pose data of the camera.
4. The method according to claim 3, wherein acquiring the pose data of the camera corresponding to each image to be detected based on the matching relationship comprises:
acquiring, based on the matching relationship, parameter information of the camera corresponding to each image to be detected;
rejecting noise data in the three-dimensional scene point cloud;
acquiring a three-dimensional-to-two-dimensional motion relationship of the three-dimensional feature points in the three-dimensional scene point cloud after the noise data are rejected; and
obtaining the pose data of the camera corresponding to each image to be detected based on the motion relationship and the parameter information of the camera.
5. The method according to claim 4, wherein, after fusing the common-view region detection results to obtain the duplicate defect region, the method further comprises: performing defect deduplication on the duplicate defect region of the images to be detected.
6. An image defect deduplication detection device based on three-dimensional reconstruction, characterized in that the device comprises:
an image acquisition module, configured to acquire a set of images to be detected in a target scene, wherein each image to be detected in the set carries a defect labeling region;
a feature matching module, configured to extract two-dimensional feature points from each image to be detected and perform two-dimensional feature point matching and comparison on each pair of images to be detected to obtain matching result data of the two-dimensional feature points, wherein the matching result data includes the matching relationship of the two-dimensional feature points and the number of matched pairs of two-dimensional feature points;
a three-dimensional reconstruction module, configured to perform three-dimensional reconstruction based on the matching relationship and the images to be detected to obtain a three-dimensional scene point cloud corresponding to the target scene and an association relationship between the images to be detected and the three-dimensional scene point cloud;
a coarse image screening module, configured to screen out, according to the number of matched pairs, target images to be detected whose number of matched pairs is greater than or equal to a preset matched-pair threshold;
a common-view region detection module, configured to acquire the number of feature points in the defect labeling region of each target image to be detected, iteratively adjust, if the number of feature points is smaller than a preset feature point count threshold, the size of the defect labeling region by a preset step size until the number of feature points in the defect labeling region is greater than or equal to the threshold, and perform common-view region detection on the adjusted defect labeling regions in each pair of target images to be detected to obtain a common-view region detection result; and
a duplicate defect detection module, configured to fuse the common-view region detection results to obtain a duplicate defect region.
7. The device according to claim 6, wherein the common-view region detection module is further configured to acquire the association relationship between each pair of images to be detected and the three-dimensional scene point cloud, the association relationship including position data, in the three-dimensional scene point cloud, of the two-dimensional feature points in the pair of images to be detected; to compare the position data of the two-dimensional feature points of the pair of images to be detected in the three-dimensional scene point cloud to obtain a matching proportion of three-dimensional feature points; and to perform common-view region detection on the defect labeling regions in the pair of images to be detected according to the matching proportion of the three-dimensional feature points.
8. The device according to claim 6, wherein the three-dimensional reconstruction module is further configured to invoke a structure-from-motion algorithm based on the matching relationship and perform three-dimensional reconstruction on the images to be detected to obtain the three-dimensional scene point cloud corresponding to the target scene; to acquire, based on the matching relationship, pose data of the camera corresponding to each image to be detected and a projection relationship of the three-dimensional feature points; and to obtain the association relationship between the images to be detected and the three-dimensional scene point cloud according to the projection relationship of the three-dimensional feature points and the pose data of the camera.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 5.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 5.
CN202211503164.7A 2022-11-29 2022-11-29 Image defect duplicate removal detection method and device based on three-dimensional reconstruction Active CN115526892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211503164.7A CN115526892B (en) 2022-11-29 2022-11-29 Image defect duplicate removal detection method and device based on three-dimensional reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211503164.7A CN115526892B (en) 2022-11-29 2022-11-29 Image defect duplicate removal detection method and device based on three-dimensional reconstruction

Publications (2)

Publication Number Publication Date
CN115526892A CN115526892A (en) 2022-12-27
CN115526892B (en) 2023-03-03

Family

ID=84705371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211503164.7A Active CN115526892B (en) 2022-11-29 2022-11-29 Image defect duplicate removal detection method and device based on three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN115526892B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880507A (en) * 2023-02-07 2023-03-31 南方电网数字电网研究院有限公司 Method, device, equipment and storage medium for de-duplication of defect detection of power transmission image
CN116862898A (en) * 2023-07-27 2023-10-10 小米汽车科技有限公司 Defect detection method and device for parts, storage medium and electronic equipment
CN117078677B (en) * 2023-10-16 2024-01-30 江西天鑫冶金装备技术有限公司 Defect detection method and system for starting sheet


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001902A (en) * 2020-08-19 2020-11-27 上海商汤智能科技有限公司 Defect detection method and related device, equipment and storage medium
CN115294117B (en) * 2022-10-08 2022-12-06 深圳市天成照明有限公司 Defect detection method and related device for LED lamp beads

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242828A (en) * 2018-08-13 2019-01-18 浙江大学 3D printing product 3 D defects detection method based on optical grating projection multistep phase shift method
CN114332415A (en) * 2022-03-09 2022-04-12 南方电网数字电网研究院有限公司 Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology
CN114419028A (en) * 2022-03-10 2022-04-29 南方电网数字电网研究院有限公司 Transmission line insulator defect duplication removing method and device integrating space multiple visual angles
CN114764802A (en) * 2022-05-23 2022-07-19 国网安徽省电力有限公司电力科学研究院 Equipment defect detection repeated image eliminating method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vaibhav Bagaria et al., "Use of rapid prototyping and three-dimensional reconstruction modeling in the management of complex fractures", European Journal of Radiology, vol. 80, 2011, pp. 814-820. *
Li Bin et al., "Research on a defect detection inspection report generation method based on object deduplication" (in Chinese), Instrumentation and Detection Technology, vol. 41, no. 7, July 2022, pp. 118-120, 176. *

Also Published As

Publication number Publication date
CN115526892A (en) 2022-12-27


Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant